drawfloat a day ago

Skeptics think A.I. is an overhyped tool, not an existential risk. Why would they be shouting about it from the rooftops?

There are plenty of comment sections with reasonable skeptical takes; they're just not good headline fodder.

cmdrk 2 days ago

I feel that the article draws a false equivalence between skepticism and doomsaying. If anything, thinking AI is as dangerous as a nuclear weapon signals a true believer.

  • quuxplusone a day ago

    TFA doesn't even draw an "equivalence" between those two positions; it merely misuses the word "skeptic" to mean "true believer in the Singularity."

    TFA mourns the disappearance of true believers — those pundits saying LLMs would quickly achieve AGI and then go on to basically destroy the world. As that prediction became more obviously false, the pundits quietly stopped repeating it.

    "Skeptics" is not, and never was, the label for those unbridled believers/evangelists; the label was "AI doomers." But an essay titled "Where have all the AI doomers gone?" wouldn't get clicks because the title question pretty much answers itself.

  • swivelmaster 2 days ago

    Exactly. “AI will take over the world because it’s dangerously smart” is the exact opposite of skepticism!

    There are different arguments as to why AI is bad, and they’re not all coming from the same people! There’s the resource argument (it’s expensive and bad for the environment), the quality argument (hallucinations, etc.), the ethical argument (stealing copyrighted material), the moral argument (displacing millions of jobs is bad), and probably more I’m forgetting.

    Sam Altman talking about the dangers of AI in front of Congress accomplishes two things: It’s great publicity for AI’s capabilities (what CEO doesn’t want to possess the technology that could take over the world?), and it sets the stage for regulatory capture, protecting the big players from upstarts by making it too difficult/expensive to compete.

    That’s not skepticism, that’s capitalism.

    • nerfellicat 2 days ago

      I am also tired of this whole "hallucination" nonsense.

      These LLMs are buggy as hell. They say they can do certain things - reasoning, coding, summarizing, research, etc. - but they can't. They mangle those jobs. They are full of bugs, and the teams behind them have proved they can't debug them. They thought they could ride "scaling laws" out of it, but that proved as unfruitful as it was illogical.

      What class of software can work this badly and still have people convinced that the only solution is to double the amount of compute and data it needs, again?

      • phito a day ago

        > What class of software can work this badly and still have people convinced that the only solution is to double the amount of compute and data it needs, again?

        Cloud providers :)))

        • joebe89 a day ago

          And chip producers

    • MandieD a day ago

      My biggest worry (and I still have some of those other concerns) is for school-age children using it instead of having to learn how to read for information and to write in their own words.

      For everyone who argues, "naysayers said that letting schoolchildren use calculators would ruin their minds, but it didn't," how many people do you know who can make a good estimate of their share of a restaurant bill without pulling out their phones? Think about how that translates to how well they grasp at a glance what they're getting themselves into with Klarna, car loans, etc.

  • pydry a day ago

    It also seems interested only in what tech CEOs have to say - people who were as disingenuous about their doom-mongering as they were about their gold-rush mentality.

ultimafan a day ago

I think a large number of AI skeptics are just tired. I'm in what you would call the AI skeptic camp, and the most I'll contribute to in-person conversations anymore is feigned indifference toward the topic or something along the lines of "Oh, I haven't really been keeping up with AI news lately."

For me personally (and maybe for others as well?) there are two parts to this. The first is that it's exhausting to constantly be pulled into "debates" with staunch pro-AI supporters who can't accept that you have some reason to be against it, or agree to disagree and move on from the conversation. The second is that I've noticed that even mild anti-AI sentiment lately seems to make people (especially tech people) see and treat you as an anti-science luddite or conspiracy theorist.

It's easier to just pretend in public that I don't care or that I'm not interested than to be a skeptic.

  • HumblyTossed a day ago

    This is always how it is, though. People would get straight up in your face if you told them that the Apple Vision Pro wasn't going to take over the world. People would get straight up in your face if you told them 3D TV was a fad. People would get straight up in your face if you told them there are alternatives to React. People once thought Visual Basic was going to end programming as we knew it. People once thought UML-to-code (Rational Rose) was going to end programming as we knew it.

    There are certain types of people who will always do this - and think they're right.

    • ultimafan 13 hours ago

      Product/tech fanboyism has always existed, sure, not going to argue that point.

      But I don't remember it ever getting anywhere near as heated or being pushed as hard in real-life conversations. That kind of borderline religious fanaticism mostly lived in online spaces.

medhir 2 days ago

I distinctly remember multiple big companies quietly letting go of their AI ethics teams in 2023 around the same time the LLM craze started to pick up real steam.

I don’t think the skeptics disappeared, they just got drowned out with all the added noise that came with the new LLM hype cycle.

  • jonbiggums22 a day ago

    I think it was just that the strategy of using ethics teams to build a legal moat had failed, and the teams no longer served any purpose to company leadership.

  • philipallstar 2 days ago

    I think this was more about scepticism around ethics teams.

JackSlateur a day ago

"Hélas ! comme je suis fatigué de tout ce qui est insuffisant et qui veut à toute force être événement !"

Nietzsche, probably speaking about IA

croes 2 days ago

>What happened? Did we solve AI’s dangers, get bored of talking about them, or just decide that existential risk was bad for business?

We realized that AI isn't as smart as we feared, and that the real danger lies in management believing the AI companies' ads.

fxtentacle 2 days ago

Yes. I’m an AI skeptic. And I’m quiet. Because AI is the single best thing that ever happened to my consulting business.

Vibe coding generates an infinite stream of "companies" with a proven business model and market fit, and obviously unmaintainable, buggy, impossible-to-scale software. But once there are profits and their life depends on that pile of vibe-coded AI vomit continuing to work, they are happy to pay through the nose for a rewrite by experienced human coders. AI might ruin the job market for beginners, but the market for experts is exploding.

So why would I loudly complain about AI? It’s buggy as hell, but from my point of view, that’s a major upside.

  • fulafel a day ago

    Isn't that the standard startup recipe - the hardest part is supposed to be finding a working concept and customers, after which you can get funding to implement it properly?

  • phyalow a day ago

    Sure, but only until Claude Opus 6

    • johnecheck a day ago

      Yes! Just pile on the tech debt now, none of it matters once AGI comes! And that's any day now, right? ...right?

      • phyalow a day ago

        Maybe. I'm just having fun reducing entropy locally in my codebase while it gets ejected as heat in some data center on the other side of the planet.

  • metalman a day ago

    The Chinese word for "crisis" is made by combining the characters for danger and opportunity.

himeexcelanta 2 days ago

The worst externalities of AI (mass social disinformation/manipulation) were already realized years ago with the Facebook algorithmic feed. Producing content wasn’t the limiting factor; AI-enabled algorithmic targeting to maximize ad revenue without any consideration for negative externalities has already eroded civil society.

PaulHoule 2 days ago

Three words: Sam Bankman-Fried.

ChrisArchitect a day ago

Article is from February. There's been plenty of time for skepticism since, as developments have continued.