Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the Act itself. The EU published it with the implication that there would be less scrutiny if you voluntarily sign up for this code of practice. Meta would face scrutiny on all fronts anyway, so signing something voluntary does not seem a plausible move.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright-protected works without a license. It's quite an elegant balance they've struck, despite the challenges it fails to avoid.
Regulators often barely grasp how current markets function, and they are supposed to be futurists now too? Government regulatory interests almost always end up aligning with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega-companies. Which is very much what Europe's market looks like today: stasis, and a shift toward a stagnating middle.
So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?
The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.
For reference, see every highly-regulated industry everywhere.
You think Sam Altman was testifying to the US Congress begging for AI regulation because he's just a super nice guy?
That's a bit oversimplified. Humans have been creating authority systems to control others' lives and businesses since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counterproductive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.
As a member of 'humanity', do you find yourself creating authority systems for AI though? No.
If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.
The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
Depends what those assumptions are. If the goal is protecting humans from AI gross negligence, then the assumptions are predetermined to side with ordinary humans (just one example). Let's hope logic and an understanding of the long-term situation precede the arguments in the rulesets.
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given that policy is extremely slow to change (see: copyright).
I'd urge you to read a book like Black Swan, or study up on statistics.
Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're doing it wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.
(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.
The experience with other industries like cars (especially EVs) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.
Not really. China made a big policy bet a decade early and won that battle: the whole government committed to buying this new tech before everyone else, forcing buses to be electric if you wanted the federal-level thumbs-up, or the license-plate lottery system, for example.
So I disagree; Europe would probably be even further behind in EVs if the EU hadn't pushed European manufacturers to invest so heavily in the industry.
You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean. And in Europe, Volkswagen already overtook Tesla in Q1 sales, and Audi isn't far behind either.
I literally lived this with GDPR. In the beginning every one ran around pretending to understand what it meant. There were a ton of consultants and lawyers that basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.
In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.
Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.
> In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.
It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google.
Meta shouldn't have been allowed to swallow Instagram and WhatsApp, and Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.
It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.
The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.
Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"
This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it ok?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributibg it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing it can predict a number of tokens that follow. It’s a big stretch to say that they’re reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
According to the law in some jurisdictions it is. (notably most EU Member States, and several others worldwide).
In those places, fees ("reprographic levies") are actually included in the price of the appliance and the needed supplies, and public operators may need to pay additionally based on usage. That money goes into funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
Helpfully the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. Xerox machine (and every other printer sold today) literally leaves a paper trail to trace it back to them.
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.
In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.
The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!
I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet whether that can be called fair use?).
Even if it gets challenged successfully (and tbh I hope it does), the damage is already done. Blocking it at this stage just pulls up the ladder behind the behemoths.
Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.
We have exceptions, which are similar, but the important difference is that under fair use, courts decide what is fair and what is not, whereas exceptions are written into law. It is a more rigid system that tends to favor copyright owners, because if what is seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).
I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.
You really went all out with showing your contempt, huh? I'm glad that you're enjoying the tech companies utterly dominating US citizens in the process
These regulations may end up creating a trap for European companies.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
The problem is this severely harms the ability to release open-weights models, and only leaves the average person with options that aren't good for privacy.
Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.
That it is very likely not going to work as advertised, and might even backfire.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
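For a sense of scale, the 10^25 FLOP threshold can be related to model size and training data via the common ~6·N·D rule of thumb (total training FLOPs ≈ 6 × parameters × tokens). A minimal sketch, where the heuristic and the example model sizes are illustrative assumptions, not figures from the Act:

```python
# Rough training-compute estimate using the common heuristic
#   FLOPs ~= 6 * parameters * training tokens
# The example model/token counts below are hypothetical illustrations.

THRESHOLD = 1e25  # the AI Act's compute threshold, in FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * params * tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),     # ~8.4e22 FLOPs
    "70B params, 15T tokens": training_flops(70e9, 15e12),  # ~6.3e24 FLOPs
}

for name, flops in examples.items():
    side = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```

Under this rough estimate, even a large open-weights-scale training run can land below the line, which is why the threshold mainly touches frontier-scale providers like Mistral.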
This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.
A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie consent popovers on every website that does absolutely nothing to prevent tracking and has only amounted to making lives more frustrating in the EU and abroad.
It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still track and profile them largely unhindered.
Cookie consent popovers were the deliberate decision of companies to create the worst possible form of compliance. A much simpler one would have been to stop tracking users, especially when it is not their primary business.
Newer regulations also mandate that "reject all cookies" should be a one click action but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the eu regulation.
I don’t believe that every website has colluded to give itself a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their own interests, which is exactly what I expect them to do. If the EU is not capable of figuring out what that will look like, then that is a valid criticism of its ability to make regulations.
Well, pragmatically, I'd say no. We must judge regulations not by the well wishes and intentions behind them but the actual outcomes they have. These regulations affect people, jobs and lives.
The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, are just astronomically low.
I think OP is criticising blindly trusting the regulation hits the mark because Meta is mad about it. Zuckerberg can be a bastard and correctly call out a burdensome law.
"Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."
Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.
How about not assuming by default? How about testing something about this? How about forming your own opinion, and not the opinion of the trillion- dollar supranational corporations?
Well, Europe hasn't enacted policies actually breaking American monopolies until now.
Europeans still essentially rely on Google, Meta, and Amazon for most of their browsing experience. So I'm assuming Europe's goal is not to compete with or break the American moat, but to force those companies to be polite and to preserve national sovereignty on important national-security matters.
A position which is essentially reasonable if not too polite.
> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
When push comes to shove, a US company will always prioritize US interests. If you want to stay under the US umbrella, by all means do. But honestly it looks very short-sighted to me.
You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.
Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.
Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.
Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.
Yet at the same time, they also have a long history of very successful policy, such as the USB-C issue, but also the GDPR, which has raised the issue of our right to privacy all over the world.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices out of customers. Next up are insurance companies. How long do you want to watch? Until all accountability is gone for good?
Who said money? Time and human effort are the most valuable commodities.
That time and effort wasted on consultants and lawyers could have been spent on more important problems or used to more efficiently solve the current one.
Which... has the consequences of stifling innovation. Regulations/policy is two-way street.
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few years' time?
The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market. Fast forward a decade when USB-C is dead, EU will keep it limping along - stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
No, the law that ensures private individuals have the power to know what is stored about them, change incorrect data, and have it deleted unless it is legally necessary to hold it, all in a timely manner, and that financially penalizes companies that do not comply.
We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.
Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology
I’d side with Europe blindly over any corporation.
The European government has at least a passing interest in the well being of human beings while that is not valued by the incentives that corporations live by
That's the issue with people from a certain side of politics: they don't vote for something, they always side with or vote against something or someone, blindly. It's pure hate winning out over reason. But it's ok, they are the "good" ones, so they are always right and don't really need to think.
Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.
No... making teenagers feel depressed sometimes is not in fact worse than facilitating the Holocaust, using human limbs as currency, enslaving half the world and dousing the earth with poisons combined.
I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.
Go take a tour of Dachau and look at the ovens and realize what you're claiming. That that pales in comparison to targeted ads.
Companies did that, along with thoughtless website owners, small and large, who decided it was better to collect arbitrary data even when they have no capacity to convert it into information.
The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
It was and is a blatant misuse. Website owners all have a choice: shift the responsibility from themselves to the users and bother them with endless pop-ups, collect the data, and not give a shit about user experience; or just don't use cookies for a change.
And look which decision they all made.
A few notable examples do exist: https://fabiensanglard.net/
No popups, no banner, nothing. He just doesn't collect anything, thus no need for a cookie banner.
The mistake the EU made was not foreseeing the madness that would drive these decisions.
I’ll give you that it was an ugly, ugly outcome. :(
> The mistake the EU made was to not foresee the madness used to make these decisions.
It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.
Interesting framing. If you continue this line of thought, it will end up in a philosophical argument about what kind of image of humanity one has. So your solution would be to always expect everybody to be the worst version of themselves? In that case, that will make for some quite restrictive laws, I guess.
People are generally responsive to incentives. In this case, the GDPR required:
1. Consent to be freely given, specific, informed and unambiguous and as easy to withdraw as to give
2. High penalties for failure to comply (€20 million or 4 % of worldwide annual turnover, whichever is higher)
Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.
If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.
> and most site operators care about focusing on their actual business rather than compliance,
And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”
The other thing is a lack of differentiation. The high penalties you are talking about apply to all but the lowest-traffic websites in practice, but they only really bite at the top. I agree it would be insane to gamble on removing the banners in that league. But tell me: why does every single-page website of a restaurant, fishing club, or retro-gamer blog have a cookie banner? For what reason? They won't make the turnover you dream about in your example even if they won the lottery, twice.
Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone…it’s stupid policy on their part imo.
Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. if your policy just ends up inconveniencing 99% of users, what are we even doing lol?
I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.
I’m sure there are other creative solutions as well. Or just implementing larger fines.
If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.
Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).
Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.
The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.
No it's because I'll get fined by some bureaucrat who has never run a business in his life if I don't put a pointless popup on my stupid-simple shopify store.
Edit: from the linked in post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Sure, but Meta saying "We share concerns raised by these businesses" translates to: It is in our and only our benefit for PR reasons to agree with someone, we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.
Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.
Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
I hope this isn't coming down to an argument of "AI can't advance if there are rules". Things like copyright, protection of the sources of information, etc.
Nit (possibly CNBC's fault): there should be a hyphen to clarify that Meta opposes overreach, not growth: "growth-stunting overreach" vs. "growth (stunting overreach)".
“Heft of EU endorsement.” It’s amazing how Europeans have simply acquiesced to an illegitimate EU imitation government simply saying, “We dictate your life now!”.
European aristocrats just decided that you shall now be subjects again, and Europeans said ok. It's kind of astonishing how easy it was, and most Europeans I've met almost violently reject that notion, despite the fact that it's exactly what happened, as they still haven't really grasped just how much Brussels is stiffing them.
In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
> in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.
> European aristocrats just decided
I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.
If you still need help with any other basic fact about the European Union don’t hesitate to ask.
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like, they are nervous about unintended consequences, and they do not want to drive good-faith organizations away, they are trying to do this correctly.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
The entire ad industry moved to fingerprinting, mobile ad kits, and third-party authentication login systems, so it made zero difference even when they did comply. Google and Meta aren't worried about cookies when they have JS on every single website, but it burdens every website user.
This is not correct, the regulation has nothing to do with cookies as the storage method, and everything to do with what kind of data is being collected and used to track people.
Meta is hardly to blame here; it is the site owners who choose to add Meta tracking code to their sites and therefore have to disclose it and opt the user in via "cookie banners".
that's deflecting responsibility. it's important to care about the actual effects of decisions, not hide behind the best case scenario. especially for governments.
in this case, it is clear that the EU policy resulted in cookie banners
LMAO. Facebook is not big? Its founder is literally the sleaziest CEO out there. Cambridge Analytica, Myanmar, restrictions on Palestine, etc. Let us not fool ourselves. There are those online who seek to defend a master that couldn't care less about them. Fascinating.
My opinion on this: Europe lags behind in this field, and thus can enact regulations that profit the consumer. We need more of those in the US.
Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space, they even release the weights for their models.
Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for quote "national security missions." And all that is after the military enlisted OpenAI's "Chief Product Officer" and sent him straight to Lt. Colonel to work in a collaborative role directly with the military.
And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
As a citizen I’m perfectly happy with the AI Act. As a “person in tech”, the kind of growth being “stunted” here shouldn’t be happening in the first place. It’s not overreach to put some guardrails and protect humans from the overreaching ideas of the techbro elite.
As a techbro elite. I find it incredibly annoying when people regulate shit that ‘could’ be used for something bad (and many good things), instead of regulating someone actually using it for something bad.
You’re too focused on the “regulate” part. It’s a lot easier to see it as a framework. It spells out what you need to anticipate the spirit of the law and what’s considered good or bad practice.
If you actually read it, you will also realise it’s entirely comprised of “common sense”. Like, you wouldn’t want to do the stuff it says are not to be done anyway. Remember, corps can’t be trusted because they have a business to run. So that’s why when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they’re capable of operating a model safely.
> It aims to improve transparency and safety surrounding the technology
Really it does, especially with some technology run by so few which is changing things so fast..
> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth
God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"
I'd really prefer NOT to break at least what semblance of society social media hasn't already broken.
Meta on the warpath, Europe falls further behind. Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
Yeah. He just settled the Cambridge Analytica suit a couple days ago, he basically won the Canadian online news thing, he's blown billions of dollars on his AI angle. He's jacked up and wants to fight someone.
The US, China, and others are sprinting, and thus spiraling toward the destitution of the majority of society, unless we force these billionaires' hands: figure out how we will eat and sustain our economies when one person is now doing a white- or blue-collar (Amazon warehouse robots) job that ten used to do.
The more I read of the existing rule sets within the eurozone, the less surprised I am that they make additional shit-tier acts like this.
What does surprise me is that anything at all works under the existing rule sets. Effectively no one has technical competence, and the main purpose of legislation seems to be adding mostly meaningless but paternalistically formulated complexities in order to justify hiring more bureaucrats.
>How to live in Europe
>1. Have a job that does not need state approval or licensing.
>2. Ignore all laws, they are too verbose and too technically complex to enforce properly anyway.
Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which is a lot of Chinese startups).
The economy does not exist in a vacuum. Making number go up isn't the end goal; it is to improve citizens' lives and society as a whole. Everything is a tradeoff.
I charge my phone wirelessly. The presence of a port isn't a positive for me. It's just a hole I could do without. The shape of the hole isn't important.
Europe is the world's second-largest economy and has the world's highest standard of living. I'm far from a fan of regulation, but they're doing a lot of things right by most measures. Irrelevancy is unlikely in their near future.
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the Act itself. The EU published it in a way that says there would be less scrutiny if you voluntarily sign up for this code of practice. Meta would face scrutiny on all ends anyway, so there does not seem to be a plausible case for signing something voluntary.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
[1] https://www.lw.com/en/insights/2024/11/european-commission-r...
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright protected works without a license. It's quite an elegant balance they've suggested despite the challenges it fails to avoid.
Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple years.
The whole point of regulating it is to shape what it will look like in a couple of years.
Regulators often barely grasp how current markets function, and they are supposed to be futurists now too? Government regulatory interests almost always end up lining up with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega-companies. Which is very much what Europe's market looks like today: stasis, and shifting toward a stagnating middle.
The EU is founded on the idea of markets and regulation.
So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?
The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.
For reference, see every highly-regulated industry everywhere.
You think Sam Altman was in testifying to the US Congress begging for AI regulation because he's just a super nice guy?
Regulation exists because of monopolistic practices and abuses in the early 20th century.
That's a bit oversimplified. Humans have been creating authority systems trying to control others lives and business since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counter productive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.
As a member of 'humanity', do you find yourself creating authority systems for AI though? No.
If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.
The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.
That can be, however regulation has just changed monopolistic practices to even more profitable oligarchaistic practices. Just look at Standard Oil.
Won't somebody please think of the children?
They’re demanding collective conversation. You don’t have to be involved if you prefer to be asocial except to post impotent rage online.
Same way the pols aren’t futurists and perfect neither is anyone else. Everyone should sit at the table and discuss this like adults.
You want to go live in the hills alone, go for it, Dick Proenneke. Society is people working collectively.
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
Depends what those assumptions are. If by protecting humans from AI gross negligence, then the assumptions are predetermined to be siding towards human normals (just one example). Lets hope logic and understanding of the long term situation proceeds the arguments in the rulesets.
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which would sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given the nature of policy being extremely slow to change (see: copyright).
I'd urge you to read a book like Black Swan, or study up on statistics.
Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.
(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.
The experience with other industries like cars (specially EV) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.
Not really. China made a big policy bet a decade early and won the battle: the whole government was put behind buying this new tech before everyone else, forcing buses to be electric if cities wanted the federal-level thumbs-up, or the license-plate lottery system, for example.
So I disagree; Europe would probably be even further behind in EVs if it hadn't pushed EU manufacturers to invest so heavily in the industry.
You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean; and in Europe, Volkswagen has already overtaken Tesla in Q1 sales, with Audi not far behind either.
What will happen, like every time a market is regulated in the EU, is that the market will move on without the EU.
The point is to stop and deter market failure, not anticipate hypothetical market failure
If the regulators were qualified to work in the industry, then guess what: they'd be working in the industry.
[dead]
We know what the market will look like. Quasi monopoly and basic user rights violated.
I literally lived this with GDPR. In the beginning every one ran around pretending to understand what it meant. There were a ton of consultants and lawyers that basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.
In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.
Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.
> In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.
It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google. Meta shouldn't have been allowed to usurp instagram and whatsapp, Google shouldn't have been allowed to bring Youtube into the fold. Now it's too late to regulate a way out of this.
It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.
The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.
Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.
Exactly. No anonymity, no thought crime, lots of filters to screen out bad misinformation, etc. Regulate it.
They don't want a market. They want total control, as usual for control freaks.
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"
This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, it is OK?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributibg it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing it can predict a number of tokens that follow. It’s a big stretch to say that they’re reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
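For what it's worth, claims like this are usually tested empirically: prompt the model with a prefix of a work and measure how much of its continuation matches the original verbatim. A minimal sketch of such an overlap check (the example strings here are made up; a real memorization evaluation would compare actual model output against the actual work):

```python
from difflib import SequenceMatcher

def longest_verbatim_overlap(generated: str, reference: str) -> str:
    """Return the longest contiguous span of `generated` that appears
    verbatim in `reference` -- a crude proxy for memorized reproduction."""
    m = SequenceMatcher(None, generated, reference, autojunk=False)
    match = m.find_longest_match(0, len(generated), 0, len(reference))
    return generated[match.a : match.a + match.size]

# Toy example: the "model output" echoes part of the "reference" text.
ref = "It was the best of times, it was the worst of times, it was the age of wisdom"
out = "the model said: it was the worst of times, it was the age of foolishness"
span = longest_verbatim_overlap(out, ref)
print(len(span.split()), "verbatim words reproduced")
```

Whether a given overlap length counts as "reliably reproducing" a work is exactly the kind of threshold the regulation leaves to interpretation.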
> LLMs are hardly reliable ways to reproduce copyrighted works
Only because the companies are intentionally making it so. If they weren't trained to not reproduce copyrighted works they would be able to.
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
it's like these people never tried asking for song lyrics
I have a Xerox machine that can reliably reproduce copyrighted works. Is that a problem, too?
Blaming tools for the actions of their users is stupid.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
LLMs do not have all copyrighted works in them.
In some cases they can be prompted to guess a number of tokens that follow an excerpt from another work.
They do not contain all copyrighted works, though. That’s an incorrect understanding.
Are there any LLMs available with a, "give me copyrighted material" button? I don't think that is how they work.
Commercial use of someone's image also already has laws concerning that as far as I know, don't they?
You'd think wrong.
According to the law in some jurisdictions it is. (notably most EU Member States, and several others worldwide).
In those places, fees ("reprographic levies") are actually included in the price of the appliance and the needed supplies, and public operators may need to pay additionally based on usage. That money goes toward funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
Helpfully the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. Xerox machine (and every other printer sold today) literally leaves a paper trail to trace it back to them.
https://en.wikipedia.org/wiki/Printer_tracking_dots
I believe only color printers are known to have this functionality, and it's typically used for detecting counterfeiting, not for enforcing copyright.
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
> Still, it's a decent example of blaming the tool for the actions of its users.
They're not really "blaming" the tool though. They're using a supply chain attack against the subset of users they're interested in.
EU regulations are sometimes able to bully the world into compliance (eg. cookies).
Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.
This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.
But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.
In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.
The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!
I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.
There seem to be 3 chapters of this "AI Code of Practice" https://digital-strategy.ec.europa.eu/en/policies/contents-c... and it's drafting history https://digital-strategy.ec.europa.eu/en/policies/ai-code-pr...
I did not read it yet, only familiar with the previous AI Act https://artificialintelligenceact.eu/ .
If I'd were to guess Meta is going to have a problem with chapter 2 of "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others approach) of ripping text out of copyrighted material (is it clear yet if it can be called fair use?)
> is it clear yet if it can be called fair use?
Yes.
https://www.publishersweekly.com/pw/by-topic/digital/copyrig...
Though the EU has its own courts and laws.
District judge pretrial ruling on June 25th, I'd be surprised this doesn't get challenged soon in higher courts.
And acquiring the copyrighted materials is still illegal - this is not a blanket protection for all AI training on copyrighted materials
Even if it gets challenged successfully (and tbh I hope it does), the damage is already done. Blocking it at this stage just pulls up the ladder behind the behemoths.
Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.
In France, fair use doesn't even exist!
We have exceptions, which are similar, but the important difference is that courts decide what is fair and what is not, whereas exceptions are written into law. It is a more rigid system that tends to favor copyright owners, because if something seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).
I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.
Being evil doesn't make them necessarily wrong.
Agreed, that's why I'm calling out the stupidity of my own bias.
[flagged]
[flagged]
It seems EU governments should be preventing US companies from dominating their countries.
Who hurt you?
You really went all out with showing your contempt, huh? I'm glad that you're enjoying the tech companies utterly dominating US citizens in the process
There’s a summary of the guidelines here for anyone who is wondering:
https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
These regulations may end up creating a trap for European companies.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
And then they'll get fined a few billion anyway to cover the gap, since there's no European tech to tax.
> It’s certainly onerous.
What exactly is onerous about it?
[flagged]
This all seems fine.
Most of these items should be implemented by major providers…
The problem is this severely harms the ability to release opens weights models, and only leaves the average person with options that aren't good for privacy.
I don't care about your overly verbose, blandly written slop. If I wanted a llm summary, I would ask an llm myself.
This really is the 2025 equivalent to posting links to a google result page, imo.
More verbose than the source text? And who cares about bland writing when you're summarizing a legal text?
It is... helpful though. More so than your reply
Touché, I'll grant you that.
Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.
Unless, ya know, it gets marked as Flagged/Dead.
I'm surprised that most of the comments here are siding with Europe blindly?
Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?
What is bad about heavy handed regulation to protect citizens?
That it is very likely not going to work as advertised, and might even backfire.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
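For a sense of scale, that 10^25 FLOP threshold is commonly estimated with the rule of thumb that dense-transformer training costs roughly 6 × parameters × tokens FLOPs. This back-of-the-envelope sketch (the model sizes are illustrative assumptions, not figures from the Act) shows which runs would cross the line:

```python
# Back-of-the-envelope check against the EU AI Act's 10^25 FLOP threshold,
# using the common approximation:
#   training FLOPs ~= 6 * N_params * N_tokens
# (a heuristic for dense transformers, not an official formula).
THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# Illustrative runs: a 70B-parameter model on 15T tokens stays under the
# line (~6.3e24 FLOPs), while a 400B model on the same data crosses it.
print(f"{training_flops(70e9, 15e12):.1e}")
print(crosses_threshold(70e9, 15e12))
print(crosses_threshold(400e9, 15e12))
```

The point being made in the thread is that a startup watching its training budget approach this line faces a regulatory cliff, not a gradual ramp.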
This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.
Will they resort to turning off the Internet to protect citizens?
Is this AI agreement about "turning off the Internet"?
Or maybe just exclude Meta from the EU? :)
You end up with anemic industry and heavy dependability on foreign players.
A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie consent popovers on every website that does absolutely nothing to prevent tracking and has only amounted to making lives more frustrating in the EU and abroad.
It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still track and profile them largely unhindered.
Cookie consent popovers were the deliberate decision of companies to create the worst possible compliance. A much simpler one would have been to stop tracking users, especially when it is not their primary business.
Newer regulations also mandate that "reject all cookies" should be a one click action but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the eu regulation.
I don’t believe that every website has colluded to give themselves a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their interests, which is exactly what I expect them to do and if the EU is not capable of figuring out what that will look like then it is a valid criticism of their ability to make regulations
Perfect example of regulation shaping a market. And succeeding at only ill results.
So because sometimes a regulation misses the mark, governments should not try to regulate?
Well, pragmatically, I'd say no. We must judge regulations not by the well wishes and intentions behind them but the actual outcomes they have. These regulations affect people, jobs and lives.
The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, it's is just astronomically unlikely.
I think OP is criticising blindly trusting the regulation hits the mark because Meta is mad about it. Zuckerberg can be a bastard and correctly call out a burdensome law.
[dead]
He also said “ill conceived”
It does not protect citizens? The EU shoves a lot down member states' throats.
"Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."
Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.
"blindly"? Only if you assume you are right in your opinion can you arrive at the conclusion that your detractors didn't learn about it.
Since you then admit to "assume by default", are you sure you are not what you complain about?
> Am I the only one who assumes by default
And that's the problem: assuming by default.
How about not assuming by default? How about testing something about this? How about forming your own opinion, and not the opinion of the trillion- dollar supranational corporations?
Well, Europe hasn't enacted policies actually breaking American monopolies until now.
Europeans are still essentially on Google, Meta and Amazon for most of their browsing experiences. So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
A position which is essentially reasonable if not too polite.
> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
When push comes to shove the US company will always prioritize US interest. If you want to stay under the US umbrella by all means. But honestly it looks very short sighted to me.
After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?
You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.
The EU sucks at venture capital.
Or you know, some actually read it and agree?
So you're surprised that people are siding with Europe blindly, but you're "assuming by default" that you should side with Meta blindly.
Perhaps it's easier to actually look at the points in contention to form your opinion.
I don't remember saying anything about blindly deciding things being a good thing.
Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.
Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.
Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.
Yet at the same time, they also have a long history of very successful policy, such as the USB-C issue, but also the GDPR, which has raised the issue of our right to privacy all over the world.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?
Hard disagree on both GDPR and USBC.
If I had to pick a connector that the world was forced to use forever due to some European technocrat, I would not have picked usb-c.
Hell, the ports on my MacBook are nearly shot just a few years in.
Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.
The USB-C charging ports on my phones have always collected lint to the point they totally stop working and have to be cleaned out vigorously.
I don't know how this problem is so much worse with USB-C or the physics behind it, but it's a very common issue.
This port could be improved for sure.
> Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.
Monetary value, certainly, but that’s considering money as the only desirable value to measure against.
Who said money. Time and human effort are the most valuable commodities.
That time and effort wasted on consultants and lawyers could have been spent on more important problems or used to more efficiently solve the current one.
I mean, getting USB-C to be usable on everything is like a nice-to-have, I wouldn't call it "very successful policy".
It’s just an example. The EU has often, and often successfully, pushed for standardisation to the benefit of end users.
Which... has the consequences of stifling innovation. Regulations/policy is two-way street.
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few year's time?
The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market. Fast forward a decade when USB-C is dead, EU will keep it limping along - stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.
Well intentioned, but with negative side-effects.
I'm specifically referring to several comments that say they have not read the regulation at all, but think it must be good if Meta opposes it.
> GDPR
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
That is malicious compliance with the law, and more or less indicative of a failure of enforcement against offenders.
No, the law that ensures private individuals have the power to know what is stored about them, change incorrect data, and have it deleted unless it is legally necessary to hold it, all in a timely manner, and that financially penalizes companies that do not comply.
That's not the GDPR.
Everything in this thread even remotely anti-EU-regulation is being extremely downvoted.
Yeah it's kinda weird.
Feels like I need to go find a tech site full of people who actually like tech instead of hating it.
Don't know if I'm biased but it seems there has been a slow but consistent and accelerating redditification of hacker news.
Your opinions aren't the problem, and tech isn't the problem. It's entirely your bad-faith strawman arguments and trolling.
https://news.ycombinator.com/item?id=44609135
That feeling is correct: this site is better without you. Please put your money where your mouth is and leave.
I like tech
I don't like meta or anything it has done, or stands for
As others have pointed out, we like tech.
We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.
Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology
No we like tech that works for the people/public, not against them. I know its a crazy idea.
Tech and techies don't like to be monopolized
If you don't hate big tech, you haven't been paying attention. Enshittification became a popular word for a reason.
I like tech, but I despise cults
Are you suggesting something here?
The regulations are pretty reasonable though.
It is fascinating. I assume that the tech world is further to the left, and that interpretation of "left" is very pro-AI regulation.
I’d side with Europe blindly over any corporation.
The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.
"All corporations that exist everywhere make worse decisions than Europe" is a weirdly broad statement to make.
[dead]
Are you aware of the irony in your post?
If I've got to side blindly with any entity it is definitely not going to be Meta. That's all there is.
I feel the same, but about the EU. After all, I have a choice whether to use Meta or not. There is no escaping the EU short of leaving my current life.
Meta famously tracks people extensively even if they don't have an account there, through a technique called shadow profiles.
I mean, ideally no one would side blindly at all :D
That's the issue with people from a certain side of politics: they don't vote for something, they always side with / vote against something or someone... blindly. It's like pure hate overriding reason. But it's okay, they are the 'good' ones, so they are always right and don't really need to think.
Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.
Perhaps… if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind…
>if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind.
You need some perspective - Meta wouldn't even crack the top 100 in terms of evil:
https://en.m.wikipedia.org/wiki/East_India_Company
https://en.wikipedia.org/wiki/Abir_Congo_Company
https://en.wikipedia.org/wiki/List_of_companies_involved_in_...
https://en.wikipedia.org/wiki/DuPont#Controversies_and_crime...
https://en.m.wikipedia.org/wiki/Chiquita
this alone is worse than all of what you listed combined
https://www.business-humanrights.org/en/latest-news/meta-all...
No... making teenagers feel depressed sometimes is not in fact worse than facilitating the Holocaust, using human limbs as currency, enslaving half the world and dousing the earth with poisons combined.
it is when you consider number of people affected
No, it isn't.
I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.
Go take a tour of Dachau and look at the ovens and realize what you're claiming. That that pales in comparison to targeted ads.
Just... no.
all of those combined pale in comparison to what Meta did and is doing to society, at the scale at which they are doing it
Depends on the visibility of the weapon used and the time scale it starts to show the debilitating effects.
EU is going to add popups to all the LLMs like they did all the websites. :(
No, the EU did not do that.
Companies did that and thoughtless website owners, small and large, who decided that it is better to collect arbitrary data, even if they have no capacity to convert it into information.
The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
It was and is a blatant misuse. The website owners all have a choice: shift the responsibility from themselves to the users and bugger them with endless pop ups, collect the data and don’t give a shit about user experience. Or, just don’t use cookies for a change.
And look which decision they all made.
A few notable examples do exist: https://fabiensanglard.net/ No popups, no banner, nothing. He just doesn't collect anything; thus, no need for a cookie banner.
The mistake the EU made was to not foresee the madness used to make these decisions.
I’ll give you that it was an ugly, ugly outcome. :(
> The mistake the EU made was to not foresee the madness used to make these decisions.
It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.
Interesting framing. If you continue this line of thought, it will end up in a philosophical argument about what kind of image of humanity one has. So your solution would be to always expect everybody to be the worst version of themselves? In that case, that will make for some quite restrictive laws, I guess.
People are generally responsive to incentives. In this case, the GDPR required:
1. Consent to be freely given, specific, informed, and unambiguous, and as easy to withdraw as to give
2. High penalties for failure to comply (€20 million or 4% of worldwide annual turnover, whichever is higher)
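That penalty rule is simple enough to sketch. A minimal illustration (the helper name `max_gdpr_fine` is hypothetical, not anything from the regulation itself):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the headline GDPR fine: EUR 20 million or 4% of
    worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A small shop with EUR 1M turnover: the flat EUR 20M ceiling applies.
print(max_gdpr_fine(1_000_000))        # 20000000
# A large platform with EUR 100B turnover: the 4% figure dominates.
print(max_gdpr_fine(100_000_000_000))  # 4000000000.0
```

The `max(...)` is the whole point: for a small business the potential exposure is wildly disproportionate to its revenue, which helps explain why even tiny sites reach for the off-the-shelf banner.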
Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.
If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.
> and most site operators care about focusing on their actual business rather than compliance,
And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”
The other thing is a lack of differentiation. The high penalties you are talking about only really matter for the top-traffic websites. I agree, it would be insane to play the gamble of removing the banners in that league. But tell me: why does every single-site website of a restaurant, fishing club, or retro-gamer blog have a cookie banner? For what reason? They won't make the kind of turnover you dream about in your example even if they won the lottery, twice.
> Compliance is tricky
How is "not selling user data to 2000+ 'partners'" tricky?
> most site operators care about focusing on their actual business
How is their business "send user's precise geolocation data to a third party that will keep that data for 10 years"?
Compliance with GDPR is trivial in 99% of cases
Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone…it’s stupid policy on their part imo.
Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. If your policy just ends up inconveniencing 99% of users, what are we even doing lol?
I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.
I’m sure there are other creative solutions as well. Or just implementing larger fines.
If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.
"If you have a dumb incentive system, you get dumb outcomes" - Charlie Munger
> The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
You are absolutely right... Here is the site on europa.eu (the EU version of .gov) that goes into how the GDPR works. https://commission.europa.eu/law/law-topic/data-protection/r...
Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).
Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
No popup is required, just every lobotomized idiot copies what the big players do....
Oh ma dey have popups. We need dem too! Haha, we happy!
Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.
The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.
No it's because I'll get fined by some bureaucrat who has never run a business in his life if I don't put a pointless popup on my stupid-simple shopify store.
Is it an option for your simple store to not collect data about subjects without their consent? Seems like an easy win.
Your choice to use frameworks subsidized by surveillance capitalism doesn't need to preclude my ability to agree to participate does it?
Maybe a handy notification when I visit your store asking if I agree to participate would be a happy compromise?
I hate these popups so much. The fact that they haven't corrected any of this bs shows how slow these people are to move.
Who are 'they', and 'these people'? Nb I haven't had a pop-up for years. Perhaps it could be you that is slow. Do you use ad-blocking?
https://www.joindns4.eu/for-public
Presumably it is Meta's growth they have in mind.
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Sure, but Meta saying "We share concerns raised by these businesses" translates to: It is in our and only our benefit for PR reasons to agree with someone, we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.
Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.
Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
It's always the go to excuse against any regulation.
I hope this isn't coming down to an argument of "AI can't advance if there are rules". Things like copyright, protection of the sources of information, etc.
Meta knows all there is about overreach and of course they don’t want that stunted.
Nit: (possibly cnbc's fault) there should be a hyphen to clarify meta opposes overreach, not growth. "growth-stunting overreach" vs "growth (stunting overreach)"
Why does meta need to sign anything? I thought the EU made laws that anyone operating in the EU including meta had to comply to.
It's not a law, it's a voluntary code of conduct given heft by EU endorsement.
“Heft of EU endorsement.” It’s amazing how Europeans have simply acquiesced to an illegitimate EU imitation government declaring, “We dictate your life now!”
European aristocrats just decided that you shall now be subjects again, and Europeans said ok. It’s kind of astonishing how easy it was, and most Europeans I met almost violently reject that notion, in spite of the fact that it’s exactly what happened, as they still haven’t really understood just how much Brussels is stuffing them.
In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
You don’t understand the fundamental structure of the EU
> in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.
> European aristocrats just decided
I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.
If you still need help with any other basic fact about the European Union don’t hesitate to ask.
> it's a voluntary code of conduct
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like, they are nervous about unintended consequences, and they do not want to drive good-faith organizations away, they are trying to do this correctly.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
Good. As Elon says, the only thing the EU does export is regulation. Same geniuses that make us click 5 cookie pop-ups on every webpage.
Elon is an idiot.
If he disagrees with EU values so much, he should just stay out of the EU market. It's a free world, nobody forced him to sell cars in the EU.
They didn't give us that. Mostly non-compliant websites gave us that.
Then the entire ad industry moved to fingerprinting, mobile ad kits, and 3rd-party authentication login systems, so it made zero difference even if they did comply. Google and Meta aren't worried about cookies when they have JS on every single website, but it burdens every website user.
This is not correct, the regulation has nothing to do with cookies as the storage method, and everything to do with what kind of data is being collected and used to track people.
Meta is hardly to blame here; it is the site owners that choose to add Meta tracking code to their site and therefore have to disclose it and opt in the user via "cookie banners".
that's deflecting responsibility. it's important to care about the actual effects of decisions, not hide behind the best case scenario. especially for governments.
in this case, it is clear that the EU policy resulted in cookie banners
Trump literally started a trade war because the EU exports more to the US than vice versa.
Interesting because OpenAI committed to signing
https://openai.com/global-affairs/eu-code-of-practice/
The biggest player in the industry welcomes regulation, in hopes it’ll pull the ladder up behind them that much further. A tale as old as red tape.
LMAO. Facebook is not big? Its founder is literally the sleaziest CEO out there. Cambridge Analytica, Myanmar, restrictions on Palestine, etc. Let us not fool ourselves. There are those online who seek to defend a master that could care less about them. Fascinating. My opinion on this: Europe lags behind in this field, and thus can enact regulations that profit the consumer. We need more of those in the US.
> Let us not fool ourselves. There are those online who seek to defend a master that could care less about them. Fascinating.
How could you possibly infer what I said as a defense of Meta rather than an indictment of OpenAI?
Fascinating.
Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space, they even release the weights for their models.
Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.
[dead]
Yeah well OpenAI also committed to being open.
Why does anybody believe ANYthing OpenAI states?!
Sam has been very pro-regulation for a while now. Remember his “please regulate me” world tour?
OpenAI does direct business with government bodies. Not sure about Meta.
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for quote "national security missions." And all that is after the military enlisted OpenAI's "Chief Product Officer" and sent him straight to Lt. Colonel to work in a collaborative role directly with the military.
And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
As a citizen, I’m perfectly happy with the AI Act. As a “person in tech”, the kind of growth being “stunted” here shouldn’t be happening in the first place. It’s not overreach to put up some guardrails and protect humans from the overreaching ideas of the techbro elite.
As a techbro elite, I find it incredibly annoying when people regulate shit that ‘could’ be used for something bad (and many good things), instead of regulating someone actually using it for something bad.
You’re too focused on the “regulate” part. It’s a lot easier to see it as a framework. It spells out what you need to anticipate the spirit of the law and what’s considered good or bad practice.
If you actually read it, you will also realise it’s entirely comprised of “common sense”. Like, you wouldn’t want to do the stuff it says are not to be done anyway. Remember, corps can’t be trusted because they have a business to run. So that’s why when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they’re capable of operating a model safely.
> It aims to improve transparency and safety surrounding the technology
Really, it does, especially with technology run by so few that is changing things so fast.
> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth
God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"
I'd really prefer NOT to break what little semblance of society social media hasn't already broken.
Meta on the warpath, Europe falls further behind. Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
> Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
You talking about Zuckerberg?
Yeah. He just settled the Cambridge Analytica suit a couple days ago, he basically won the Canadian online news thing, he's blown billions of dollars on his AI angle. He's jacked up and wants to fight someone.
The US, China, and others are sprinting, and thus spiraling toward the destitution of the majority of society, unless we force these billionaires' hands and figure out how we will eat and sustain our economies when one person is now doing a white-collar or blue-collar (Amazon warehouse robots) job that ten used to do.
I have a strong aversion to Meta and Zuck but EU is pretty tone-deaf. Everything they do reeks of political and anti-American tech undertone.
They're career regulators
The more I read of the existing rule sets within the eurozone the less surprised I am that they make additional shit tier acts like this.
What does surprise me is that anything at all works with the existing rulesets. Effectively no one has technical competence, and the main purpose of legislation seems to be adding mostly meaningless but paternalistically formulated complexities in order to justify hiring more bureaucrats.
> How to live in Europe
> 1. Have a job that does not need state approval or licensing.
> 2. Ignore all laws; they are too verbose and too technically complex to enforce properly anyway.
Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which is a lot of Chinese startups).
And consumers will bear the brunt.
[flagged]
Sent from an iPhone probably having USB-C because of the EU.
Just because they occasionally (and even frequently) do good things does not mean that, overall, their policies don't harm their own economies.
The economy does not exist in a vacuum. Making number go up isn't the end goal, it is to improve citizens lives and society as a whole. Everything is a tradeoff.
I charge my phone wirelessly. The presence of a port isn't a positive for me. It's just a hole I could do without. The shape of the hole isn't important.
Besides, I posted from my laptop.
[flagged]
Please don't use ableist language.
[flagged]
Europe is the world’s second largest economy and has the world’s highest standard of living. I’m far from a fan of regulation, but they’re doing a lot of things right by most measures. Irrelevancy is unlikely in their near future.
The Meta that uses advertising tooling for propaganda and elected Trump?