thw_9a83c 3 hours ago

"Going forward, it will use technology to detect underage users based on conversations and interactions on the platform, as well as information from any connected social media accounts"

Something tells me that this ain't gonna work. Kids and teens are more inventive than they probably realize.

  • gs17 3 hours ago

    I don't think the "use technology to detect underage users" approach has ever worked well for its stated intent (it works okay for being able to say you're "doing something" about the problem while not losing as many users), but it's slightly better than mandatory ID for everyone.

    • mothballed 3 hours ago

      It's just a way to mitigate liability when the lawsuits inevitably pour in claiming that the harmful AI-assisted decisions totally weren't the shared fault of the parents, the rest of the environment, and/or unlucky genetics.

      • SoftTalker an hour ago

        And certainly not the fault of the always conscientious tech companies who are actually the ones running the services.

      • wat10000 2 hours ago

        Right, the stated intent may be to block underage users, but the actual intent is to Do Something so that you can't be accused of Not Doing Something.

  • zzo38computer an hour ago

    I would expect that "based on conversations and interactions on the platform" would produce both false positives and false negatives, and that "information from any connected social media accounts" might cause additional problems (possibly including some people being unable to get in at all, or getting in once and then losing access for unexplained reasons). It won't affect me because I do not use it, but it might affect someone who does try to use it.

  • alyxya 3 hours ago

    Deterring kids is still valuable. A lot of them won’t bother with figuring out how to get around the automated detection.

    • dylan604 2 hours ago

      Deterring kids just makes the kids that much more determined. What could possibly be so bad that the adults don't want the kids to use it? Let's find out...

      • ambicapter an hour ago

        It makes _some_ kids more determined, most kids probably don't care that much.

      • gilleain 2 hours ago

        As the Simpsons quote goes: "Oh Ralphy, why are you always so curious about Daddy's Forbidden Cupboard of Mysteries? ..."

  • BoorishBears 3 hours ago

    They don't care: if they put in robust-sounding guards and openly state "chatbots are not for kids", they can point to them in court if someone gets around it.

    • thw_9a83c 3 hours ago

      ...could be the whole point. They want to make the user the party that indemnifies them, basically getting rid of any responsibility.

      • vacuity 2 hours ago

        To be fair, what do you suppose they should do instead? ID isn't popular at all. I don't think the profit motive here is the same as Instagram's, with the latter putting up a front about child safety but making big money from children.

        • thw_9a83c 29 minutes ago

          I agree that having addicted kids on Character.AI is probably less lucrative than having them on Instagram or TikTok.

          In any case, there is a general problem on the internet regarding how to allow people to remain reasonably anonymous while keeping children out of certain services.

  • add-sub-mul-div 3 hours ago

    A model would also have to account for the emotionally stunted adults using an AI bf/gf service.

    • gs17 3 hours ago

      I doubt they care that much, since the cost of a false positive is that user having to verify their age and the alternative is all users having to verify their age.

    • BoorishBears 3 hours ago

      The number one reason they're doing this is so they can loosen the clamps on NSFW/romance; they've previously released models targeted at adults, intended to improve performance there.

      • gs17 3 hours ago

        Especially now that OpenAI is entering that space officially by the end of the year.

  • LiquidSky 3 hours ago

    Nah, you just have to have some prompt in which the user unknowingly reveals that they have ever engaged with Roblox in any way, which leads to an instant block.

    • fgonzag 2 hours ago

      Someone who started using Roblox on its release date in 2006 at let's say 6 years old is now 25 years old.

    • zahlman 2 hours ago

      ... But those people will turn 18 eventually... ?

      • dylan604 2 hours ago

        that's the price you pay for playing Roblox

harrisonjackson an hour ago

Lots of threads in here saying this is just a "legal" protection move...

I'd like to believe that most actual people want to protect kids.

It's easy to write off corporations and forget that they are founded by real people and employ real people... some with kids of their own or with nieces or nephews etc, and some of them probably do really care.

Not saying character.ai is driven by that but I imagine the times they've been in the news were genuinely hard times to be working there...

  • b00ty4breakfast 35 minutes ago

    I'm sure some of them care genuinely, but I've interacted with corporations enough to know that the individual proclivities of the parts don't necessarily appear in the whole.

  • SoftTalker 33 minutes ago

    Yeah they care so much they strictly limit what their own kids do with the technology. But push it on everyone else.

  • DuperPower 44 minutes ago

    lol, the business idea itself is very easy to degenerate for money, so if whoever created it didn't think of putting up barriers to minors from the beginning, that person is stupid or greedy

  • ml-anon 6 minutes ago

    They (the leadership and the folks who left to Google) absolutely don’t give a shit about protecting kids. In fact most of America doesn’t give a shit about protecting kids (guns, SNAP, vaccines, the goddamn Epstein files).

BatteryMountain 2 hours ago

How about we just enact a law that states children aren't allowed online and not allowed to interact with interactive software (like AI or user-generated content)? Make the parents responsible, not companies.

  • ottah 2 hours ago

    Children do have rights to free expression. They're not property, and honestly some of us would not have survived to adulthood without the ability to explore our identity independent of parental supervision.

    Plus you're just setting kids up for failure by keeping them from understanding the adult world. You can't keep them ignorant and then drop them in the deep end when they turn 18, and expect good outcomes.

  • GaryBluto 6 minutes ago

    It's incredible how quickly people will call for authoritarianism if it means supporting their Reddit-like aversion to children.

  • kelvinjps10 an hour ago

    What about YouTube or Wikipedia? Although I can say the Internet was damaging for me in some ways, it brought more benefits: math (my math professor was really bad and I learned from YouTube), just exploring random things on Wikipedia, and I learned English that way. Maybe a restricted internet could help here, restricting bad sites and content.

  • Fourier864 an hour ago

    Do you have kids? How would school even work? 20 years ago in high school we were expected to use the internet.

    And these days internet integration in school is far stronger; my 6-year-old's daily homework is entirely online.

    • causal 12 minutes ago

      Yep. People love to make blanket statements like this without remembering how hard it was for parents to supervise Internet use even when there was only one desktop computer in the family area. Nowadays you can use almost any device to look up porn.

  • r0fl 2 hours ago

    How will you enforce this?

    What if a child is at school where there are Chromebooks and teachers aren’t as tech savvy as the majority of hacker news?

    What if a child is at a library that has Chromebooks you can take out and use for homework?

    What if a child is at an older cousin's place?

    What if a child is at a park with their friends and uses a device?

    Should parents be next to their child, helicopter parenting 24/7?

    Is that how you remember your childhood? Is that how you currently parent, giving zero autonomy to children?

    Blaming parents is ridiculous. Lots of parents aren't tech savvy, and are too dumb to become tech savvy and stay on top of the latest tech thing.

    • fluoridation 2 hours ago

      You don't enforce it. The point of such laws is not to actually be enforced, but to serve as a legal tool when problems come up. For example, if you as an adult say something inappropriate to a child online thinking they're an adult, you'd be protected if it's ever brought up, because the child shouldn't have been online to begin with and it was not your responsibility to check the other person's age. It's not unlike being in a bar and assuming everyone is 18 or older, because it's the bar's responsibility to forbid entry to anyone younger.

      • molave 4 minutes ago

        Such a mindset is ripe for selective enforcement and discrimination if society or someone powerful does not like you.

  • SoftTalker an hour ago

    Right. Companies get to externalize all the social costs of what they do, and simply reap profits.

  • tonyhart7 2 hours ago

    "How about we just enact a law that state children aren't allowed online"

    I literally circumvented website blocking using a VPN as a kid; no one can stop anyone from going "online" in 2025.

causal 3 hours ago

Finally. CAI is notoriously shady with harvesting, selling, and making it difficult to remove data. How many kids have CAI's chatbots already seduced and had their intimate conversations sold to Google?

  • BoorishBears 3 hours ago

    I suspect Google is the one pushing for this. C.ai has dramatically folded on two of its longest-standing struggles in the last two months: underage users and intellectual property.

    In both cases they went nuclear in a way that implies they actually don't care whether the current product survives, as long as C.ai (read: Google) isn't exposed to the ongoing risk.

Sparkyte 2 hours ago

I think chatbots are not very helpful if you don't treat them with scrutiny. There should be a mandatory disclaimer before interacting with any AI.

empath75 an hour ago

There is nothing about character.ai that would be appealing for more than 30 minutes to anybody who isn't suffering from some kind of acute mental health crisis. They should shut the whole thing down. It's a cool demo that should never have been made into a product.

ChrisArchitect 6 hours ago

Related:

Teen in love with chatbot killed himself – can the chatbot be held responsible?

https://news.ycombinator.com/item?id=45726556

  • nopurpose an hour ago

    How is it possible to develop an emotional connection with anyone who has a goldfish memory? I'd be surprised if the context size there is more than 50K tokens.

    • sosodev an hour ago

      Most AI companions have memory systems. It's still quite simplistic but it's not "goldfish memory" or just limited to the context window.

      From what I understand, some inputs from the user will trigger a tool call that searches the memory database and other times it will search for a memory randomly.
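
      A minimal sketch of what that retrieval step might look like (every name here is hypothetical; this assumes an embedding-based store, not any vendor's actual implementation):

          import random

          def respond(user_message, history, memory_store, llm):
              # Hypothetical heuristic: does the message reference past events?
              if mentions_past_context(user_message):
                  # Search long-term memory by semantic similarity
                  memories = memory_store.search(embed(user_message), top_k=3)
              elif random.random() < 0.1:
                  # Occasionally surface a random memory unprompted
                  memories = [memory_store.random_entry()]
              else:
                  memories = []
              # Stuff retrieved memories into the prompt alongside recent history
              prompt = build_prompt(history, memories, user_message)
              return llm.complete(prompt)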

      With that said, I think people started falling in love with LLMs before memory systems and they probably even fell in love with chatbots before LLMs.

      I believe that the simple, unfortunate truth is that love is not at all a rational process, and your body doesn't need much input to produce those feelings.

    • ambicapter an hour ago

      People got attached to a bot that only asked questions.

ivape 3 hours ago

This is the only way. Tech companies cannot become like the police in many countries, where the police are used as a "catch-all" function that has to deal with the failures of other institutions (parenting, community) exactly at the moment of crisis (a just-in-time catch-all).

Your kid is on the fucking computer all day building an unhealthy relationship with essentially a computer game character. Step the fuck in. These companies absolutely have to make liability clear here. It's an 18+ product, watch your kids.

  • wagwang 2 hours ago

    Step in as parents or step in as the unabomber?

  • throwaway314155 2 hours ago

    > This is the only way.

    You're more optimistic than I am. Their announcement is virtue signaling at best. Nothing will come from this. Kids will figure out a way around their so-called detection mechanisms, because if there were any false positives for adults they would lose adult customers _and_ kid customers.

    • ivape 2 hours ago

      Nothing can be stopped. All that can be done is to make clear where everyone stands. We're very early in all of this, and it's important to make clear where liability lies. You want to sail out to the new world with everyone else? Accept the terms, which include death. You cannot just let your kid touch AI without both eyes open.

      The universe of "I didn't know the kids would make deep fakes of their classmates" is yet to come. Some parents are going straight to fucking jail. Talk to your kids; things are moving at a dangerous pace.

TZubiri 3 hours ago

Very nice. Just yesterday I wrote about the 13-18 age group using ChatGPT and how I think it should be disallowed (without guardian consent); this was in the context of suicide cases.

https://news.ycombinator.com/item?id=45733618

On a similar note, I was completing my application for the YC Startup School / Co-Founder matching program, and when listing possible ideas for startups I explicitly mentioned that I'm not interested in pursuing AI ideas at the moment; AI features are fine, but not as the main pitch.

It feels like, at least for me, the bubble has popped. I have also talked recently about how the bubble might pop due to legal liability collapsing in the courts. https://news.ycombinator.com/item?id=45727060

Add to this the fact that AI was always a vague folk category of software (it's being used for robotics, NLP, and fake images), and I just don't think it's a real taxon.

Similar to the crypto buzz from the last season, the reputable parties will exit and stop associating, while the grifters and free-associating mercenaries will remain.

Even if you are completely selfish, it's not even hugely beneficial to be in the "AI" space, at least in my experience: customers come in with huge expectations and non-huge budgets. Even if you sell your soul to implement a chatbot that will replace 911 operators, at this point the major actors have already done so (or not), and you are left with small companies that want to be able to fire 5 employees and will pay you 3 months of employee salary if you can get it done by vibe-code-completing their vibe-coded prototype within a 2-3 deadline.

hatefulheart 3 hours ago

It makes no difference to their bottom line. After all, appealing to children over the age of 18 is where LLMs find their market.

  • red2awn 25 minutes ago

    They are not a profitable company. They only started monetizing this year, and the customer base is not a fan of it at all.

  • hackernewds 3 hours ago

    If they are banning a large part of their current and future market, with competitors serving the space, how does it not affect their bottom line?

    • BoorishBears 3 hours ago

      Most competitors can't successfully serve kids yet.

      These kids hammer H100s for 30+ hours a week but will revolt at ads or the idea of paying money.

      C.ai probably only exists at its current size because Noam had cheap access to TPUs and to people who could scale inference on them at the earliest stages of its growth (and obviously because he could raise with his pedigree, but compare that to what others have to deal with).

      Eventually, if the unit economics start to work, they can always roll this back, but I think people are underestimating how much of a positive this is for them.

alyxya 3 hours ago

I worry that all this does is reduce future liability issues and make those users use another chatbot instead. I trust character.ai more than other chatbots for safety.

> [Dr. Nina Vasan] said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users.

I hope there’s a more gradual transition here for those users. AI companions are often far more available than other people, so it’s easy to talk more and get more attached to them. This restriction may end up being a net negative to affected users.

Edmond 3 hours ago

There is a correct way to do age verification (and information verification in general) that supports strong privacy and makes it difficult to evade:

https://news.ycombinator.com/item?id=44723418

It is also highly compatible with the internet both in terms of technical/performance scalability and utility scalability (you can use it for just about any information verification need in any kind of application).
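
For flavor, the core check is tiny once a trusted issuer signs a claim and the key material lives on the user's device (a minimal sketch, not necessarily the linked design; the claim fields are hypothetical):

    import json
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_age_attestation(issuer_key: Ed25519PublicKey,
                               payload: bytes, signature: bytes) -> bool:
        # The service never sees an ID document, only a signed claim
        try:
            issuer_key.verify(signature, payload)
        except InvalidSignature:
            return False
        claim = json.loads(payload)
        # Hypothetical fields; the expiry bound limits replay of old attestations
        return claim.get("over_18") is True and claim.get("expires", 0) > time.time()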

  • Philpax 3 hours ago

    Undisclosed self-promotion.

    • Edmond 3 hours ago

      My motivation is less about self-promotion at this point and more about frustration with the face-palm quality of the failure to properly implement information verification on the internet.

      Every time I hear about some dumb approach to age verification (conversation analysis... really?) or a romance scam story because of a fraudster somewhere in Malaysia... I have the need to scream: THERE IS A CORRECT SOLUTION.

      • Philpax 3 hours ago

        That's great, but you should still disclose that you're the one providing the "correct solution."

        • vntok 2 hours ago

          No, that's fine.

  • triceratops 3 hours ago

    Age verification doesn't have to be perfect or even cryptographically secure. We don't demand it for alcohol or tobacco: carcinogenic, addictive substances that cause (in the case of alcohol) impaired judgment leading to deadly accidents. There's no justification for online age verification to be more invasive or stringent than what's done today for buying alcohol or tobacco IRL.

    My proposal is here: https://news.ycombinator.com/item?id=45141744

  • wanderingbit 2 hours ago

    There are a couple of big problems with this type of digital, decentralized authentication (I say this as a long-time cryptocurrency professional who wants this to succeed):

    1. Backups and account recovery: we're working with humans here. They will lose their keys in great numbers, sometimes into the hands of malicious actors. How do users then recover their credentials in a quick and reliable manner?

    2. Fragmentation: let's be optimistic and say digital credentials for driver's licenses are given out by _only_ 50 entities (one per state). Assuming we don't have a single federal format for them (read: a politically infeasible national ID), how does Facebook, let alone some rando startup, handle parsing and authenticating all these different credential formats? Oh, and they can change at any time, due to some rando political issue in the given state.

    OP, you clearly know all this, so I’m just reminding you as someone down in the identity trenches.

    • Edmond 2 hours ago

      1. Backup and recovery with this solution is no different from backup and recovery of your phone. It is a potential issue, but not a unique one. Cryptographic certificates and associated keys reside on your device.

      2. The data format issue is (or was) indeed a concern, though it was never insurmountable. A data dictionary would have been the most straightforward approach to address it: https://cipheredtrust.com/doc/#data-processing

      I say data format discernment was a concern because, as fate would have it, we now have the perfect tech to address it: LLMs. You can shove any data format into an LLM and it will spit out a transformation into what you are looking for, without the need to know the source format.

      Browsers are integrating LLM features as APIs, so this type of use would be feasible for both front-end and back-end tasks.
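
      As a toy illustration of that transformation step (the schema, prompt, and model name are all made up, and the OpenAI client is used purely as an example):

          from openai import OpenAI

          client = OpenAI()

          def normalize_credential(raw_credential: str) -> str:
              # Ask the model to map an arbitrary credential format onto one fixed schema
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[
                      {"role": "system",
                       "content": "Rewrite the credential below as JSON with keys "
                                  "issuer, subject, over_18, expires. Output JSON only."},
                      {"role": "user", "content": raw_credential},
                  ],
              )
              return resp.choices[0].message.content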