The one and only method I will participate in is server operators setting a RTA header [1] for URL's that may contain adult or user-generated or user-contributed content and the clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements as they are already doing. RTA headers are not perfect, nothing is nor ever will be but there is absolutely no tracking or leaking data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine sites not participating into oblivion.
I a small server operator and a client of the internet will not participate in any other methods period, full-stop. Make simple logical and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed but one day it may come to that at which point it's back into semi-private Tinc open source VPN meshes for my friends and I.
This is exactly the way it should be done. Device with parental controls enabled disables content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.
Well why haven't all the big tech companies done it then?
They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"
And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content. If it was then this kind of solution would be being legislated for. It’s just about making everyone identifiable.
If it is about making everyone identifiable how come California's version doesn't require providing any identifying information when setting it up on a child's device?
Because "making everyone identifiable" isn't an explicit design goal. Rather it is merely an implicit imperative (of Facebook et al, who are pushing these laws) that casts its shadow over the design. That shadow is what results in a design based around sending identifying information from the client to the server. Once this dynamic is normalized, servers will demand ever-more identifying information and evidence that it is correct.
Note that this design does the exact opposite of giving parents control to protect their own children - rather it puts the ultimate decision making ability into the hands of corporate attorneys! For example, we can easily imagine a "Facebook4Kidz" site that does the bare legal minimum to avoid liability for addicting kids to dopamine drips, and no more. Client side software based around RTA headers would allow parents to choose to filter things like that out, whereas when the server is making the decision its anything goes as long as the corporate attorneys have given it the green light.
> The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content.
The reason that mainstream politicians are pushing is because the public wants something done to protect their kids.
Are there likely to be bad actors pushing for it for nefarious reasons as well? Sure.
Are the 'solutions' inadequate and often tech- and privacy-illiterate? Absolutely.
Is the entire impulse to demand that government 'fix' this issue wrong? Maybe.
But the idea that this is all a smoke-screen from top to bottom needs to die. Not just because it's wrong, but because it's also unhelpful. If you wade into the debate saying "It's all a lie, this was never about the kids!" you're easily dismissed as a nut and an absolutist who doesn't appreciate that real people want their real kids to be protected.
Yep, and the tech companies had years to address these concerns and did not, so now the creaky gears of government regulation are turning. They (meaning YOU, a lot of tech company employees who are now outraged about this) could have headed this off years ago and provided a solution on their own terms.
So, why are those "real people" actually not willing to do their job? I am so pissed with parents who think the government is supposed to solve their own inability to raise a child.
We expect every other consumer product/toy that kids are intended to use to be safe by default. This is like asking why parents shouldn't be responsible for testing all their kids toys for lead paint.
Yet when it comes to internet/social media technology, it's suddenly a parenting failure if they don't pre-vet every platform and website and device before allowing their kids to use it.
As a society, we collectively protect kids from stuff they aren't ready to handle. We don't let them gamble, or buy alcohol, cigarettes, or porn. For the most part, everyone buys in to this and parents can pretty much count on it. Are there exceptions, sure but they create scandals and consequences when they are discovered.
But social media and content platforms didn't feel that they had any social obligations. They did not honor this societal convention to keep inappropriate content away from kids. And the top people at these companies actually don't let their own kids use the platforms, they know how harmful they are and they know about all the addictive hooks and dark patterns of engagement that are baked into them.
Well for a start not all of them are very tech savvy, and we've built a world in which tech is essential to their day to day lives, including for their kids.
If school demands the kids have a variety of devices to do their work, and they have no idea how to lock those down to exclude (for example) social media services that we know have been designed to be as addictive as possible, can you not see why they might want someone to intervene?
(edit: Beyond that there are also tons of bad reasons, I'm not going to try and justify them. There are a lot of bad parents and just in general people who are not firing on all cylinders out there. And many of them absolutely love a government regulation to be brought in for just about anything.
We can and should argue with these people and point out why they're wrong. But saying it's "nothing to do with actually stopping kids seeing the content" fails here too.)
If public school is supposed to be free, the school should supply the required devices and take on the burden of securing those devices.
For private schools, the parents are more involved in the first place, but I would expect them to also have guidance for parents to help the less tech savvy among them.
Right. I submit we are solving the wrong problem. Just establishing age vertification doesn't magically make these vast amounts of bad parents good parents. There is a ton of other things they can and will fail at, which their kids have to absorb. If we really cared about those kids, we'd have to reconsider a lot of things. And I know what I am talking about, had to grow up with an undiagnosed ADHS+anciety mother. It was hell. And even 30 years after i moved out, she still can't see what she failed at and continues to fail at. Age verification wouldn't have helped me. MAKING her seek treatment might have helped.
No argument here, I'm not saying they're right to demand that age verification is brought in to protect kids, or that we should give up privacy etc etc.
But coming at it from the angle that "It was never about protecting kids!" is itself incorrect and unhelpful to the debate.
Your lack of understanding why age verification does not constitute it being a conspiracy for another reason. There is a antiregulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be accountable and we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay for journalists to tell you that it's all about violating privacy or something. (The same folks will tell you opening up Android for third party AI tools would be a privacy and security risk, and not ask you to notice it would just cost Google a lot of money.)
We've been running essentially a social experiment on our kids for the past two decades and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and most "privacy services" like disposable email and VPNs are the primary source. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc. which are all the direct result of our failure to regulate social media companies. Section 230 being the primary problem.
OS-level age verification is likely the best route, as private information can remain on a device in your control, and a browser then just needs to attest to websites whether or not the user should be allowed access, without conveying more detail. Obviously anyone with a Linux box will have ways around it, anything based in your own device will be exploitable in some way, but generally effective for the average child.
Any "verification" means unacceptable privacy violations.
The best route is better parental controls, that are not enabled by default. Locking down the OS like ransomware until the user submits to age verification is the wrong approach, and what Apple did in the UK needs to be highly illegal.
> Any "verification" means unacceptable privacy violations.
So I'm not necessarily arguing for age controls here, but purely on a technical level what do you think of schemes like Verifiable Credentials, which delegate verification to third parties that have already established your identity?
In theory you can set up a system that works like this:
1. User goes to restricted site and sets up an account
2. Site forwards them on to a verification service with a request "IsOver18?"
3. User selects their bank from a dropdown on the broker site
4. Broker forwards them to the bank, with a request "IsOver18?"
5. User logs in and selects "Sure, prove I am over 18 to this request"
6. Bank sends a signed response to the broker "Yep"
7. Broker verifies and sends its own signed response to the site "Yep"
8. The site tags the account as "Over 18 Status verified"
In this situation, the restricted site doesn't get anything other than a boolean answer from the broker. The broker can link a request to a given bank but doesn't get anything that gives away your identity. The bank knows your identity and that it has approved a request, but not necessarily where the request came from.
Verification broker tracks sites which make requests and records it attached to personal data. Site either sells or leaks personal data along with history of all sites visited which require age verification.
Also your solution requires a bank account, not something everyone has. Many do, but not all. Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
> Verification broker tracks sites which make requests and records it attached to personal data.
How? What personal data?
The broker doesn't get anything other than "Site X wants to verify over 18, the user selected forward to Bank Y" and "Bank Y responds with TRUE"
> Also your solution requires a bank account, not something everyone has
True. Banks are only one example of an already trusted identity provider in this situation. But I get that there are gaps.
> Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
Verification need only happen once per site, when setting up an account. This does introduce the possibility of a secondary market for approved accounts though, sure.
User installs a browser extension which forwards the request to everyoneisover18.com, owner of that site has a script set up to log into their bank and pass the verification challenge
Restricted-site.com gets the signed response from the broker, not the bank. In your situation there's not any need for "everyoneisover18.com" to defer to a real bank for a faked response as it signs things itself.
But restricted-site.com doesn't trust everyoneisover18.com's key, it only trusts realbroker.com's key, so the response isn't accepted. If it is found to trust fake brokers like that it gets in trouble with the law.
That's why everyoneisover18.com forwards the request to my bank or my broker and gets my signature on the behalf of literally anyone. I may charge them $5 for this service.
> That's why everyoneisover18.com forwards the request to my bank or my broker
Doesn't work. The response won't be signed by real-broker.com.
The permission request/response itself goes direct from the server at restricted-site.com to the server at real-broker.com over TLS, so you can't MITM it, it's not controlled by the client and you won't be able to just pass out a cached response.
Your malicious client plugin could potentially forward the client session details to you, so you could operate the broker page, then log in to your bank's portal and approve that request, but I don't think that's going to scale very well and I imagine your bank is likely going to rate limit you.
real-broker opens a web page allowing them to verify somehow. The browser extension sends me their URL and cookies so I can load the same page and verify myself. All automated of course.
You could, you could also go to their house and go through the process for them, but in either case I don't think it's going to scale very well (rate-limiting would seem to be called for, maybe with 2FA as well, to mitigate this sort of thing and remove the possibilities for automation).
But sure, you could subvert it on a small scale, just as you can borrow someone else's driving license to register in 'normal' systems already. You could also register an account, validate it and then sell the login details, regardless of what proof of age scheme you use.
The point is the scheme is no worse at validation than asking for ID and it protects user privacy by keeping all ID details away from individual websites, which is the more important part IMHO.
My cellphone provider will be pleased be paid to deliver all those 2FA text messages. Who's sending them? How are they getting paid? Maybe I'm actually my own phone company, so I get paid for delivering them to myself.
Your bank, like they have 2FA for every other access to your account. 2FA also doesn't need to be via SMS, and even when it is that's dirt cheap. Rate limits can be a couple of approvals per hour with daily limits of a small handful. Or a leaky-bucket style algortihm where you can do a few at a time, but you only get one more per hour. Whatever way it's done it precludes your large-scale automation attempt.
I tire of this now. We've entirely wandered off from "Here's a way to prove age without the privacy implications, that works just as well as handing over scans of ID"
[Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.
Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/
"Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"
> Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms
Indeed. For example, the strongest evidence for harm shows that negative mental health is correlated with increasing social media use, but it's an important question of whether using social media more causes mental health problems, or mental health problems mean more social media use (or both, which would suggest a spiraling effect is important to look out for and prevent).
> Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
This is entirely false scare tactic nonsense, and you really need to look at where you sourced that idea and no longer use them as a reference point. There isn't even a concept of a method of doing this that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if it gives you some idea where the goal in decisionmaking is supposed to be.
We have not just woefully bad parental controls, but in the name of privacy, modern platforms make it exceptionally hard to implement parental controls. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach needs to exist and Internet companies need to support it.
(Steam is, FWIW, probably one of the best actors in this regard already, Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam but this is not a place they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney).
> There isn't even a concept of a method of doing this that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if it gives you some idea where the goal in decisionmaking is supposed to be.
The Parents Decide Act (PDA) goes considerably farther than superficially similar sounding laws like California's.
The California law requires that an OS allow the parent or guarding to associate an age or birthdate with the account when setting up a child's account on a device that will primarily be used for the child. It does not require any verification of the age information that the parent provides.
The PDA requires that the birthdate be provided for anyone who has an account on the device, and leaves much of the details up to the Federal Trade Commission to work out in the first 180 days after is passed. The wording of the list of things the Commission is to do suggests that the OS is supposed to actually verify age information, rather than just accept whatever a parent enters when setting up the child's device and account, and that it also has to verify that it will require the birthdate of the parent and verify that.
A Steam Deck is just an Arch Linux box. There is, intentionally by by design, no method of securing it against its user. Anyone with root access can change anything on it. There is no method of enforcing an age verification scheme on it in a way that cannot be removed or altered by a sufficiently bright and motivated teenager.
the conclusion from that discussion was that kids should not be allowed Steam Decks because that would provide them a way of getting around age certification.
The California bill, which is not called the Parents Decide Act, lets parents decide. The federal Parents Decide Act doesn't say whether parents can decide or not - it says a commission shall decide whether parents shall be able to decide, and we can predict what that commission will decide.
> any possible excuse to suggest tech companies shouldn't be accountable
The entire impetus for these bills is for Facebook (the sponsor of these bills) to escape liability for how they're currently harming kids. Facebook's only goal here is to be receiving headers that say the user is over 18, so they can continue business as usual under the assertion that any users must be adults.
Then you recognize that the solution definitely does not require privacy invasion, since presumably Facebook does not want actual proof because they hope teenagers will get around it.
That being said, the antiregulatory wonks are not all working for Facebook, and some are indeed manifestly just always opposed to any regulation at all no matter what harm is occurring.
Bear in mind the alternative: Things like Discord collecting personal data to do verification at the website level. A push for a simple "user is over 18" header is incredibly preferable from a privacy standpoint and parents being able to control and monitor it themselves.
This legislation does not require it out of the gate, but it sets up the precedent and the incentives such that it will eventually be required down the line. That's the problem with anything that gives more power, and the expectation of even more power, to the server (ie to big tech).
FWIW I personally would be supportive of legislation where the data flow went the proper way of server->client, for the user-agent to decide. Consider: Any website over a certain size must publish an appropriate set of well known tags asserting whether its content is suitable for kids of certain ages, has social aspects, the type of content, etc. Any device preloaded with an operating system over a certain marketshare must include parental control software that uses tags, as an option in the set up flow. The parental control software "fails closed" and doesn't display websites without tags. The long tails of the open web, bespoke devices, new OSes, etc remain completely unaffected.
>If it was then this kind of solution would be being legislated for.
What's more likely a global conspiracy to get age verification passed to allow these unnamed groups to identify everyone for some unknown purpose or politicians just not understanding tech?
The way people try to pretend that there can't be any organic desire for these proposals is so bizarre and is a major cause for all these proposed solutions being so technically dubious. Refusal to recognize the problem means you won't be part of solving the problem.
You do realize that for whatever reason more and more people in government positions are on the path of authoritarian agendas? Its a pretty important topic right now. All of this privacy related stuff is happening in quick succession.
I mean I cannot believe I have to post these, but here we go:
Your argument has two main flaws. First, it relies on an inherent connection between age verification and authoritarianism that is just taken for granted as true. Meta could easily be in favor of age verification because it reduces their liability and raises the barriers to entry for potential competitors. It doesn't inherently have to be authoritarianism.
But more importantly even if that connection is true, your argument relies on the current proposals of age verification being the only way to satisfy the organic desire for protecting kids from the unfettered internet. OP gave an example that could be a compromise position that addresses the need and isn't authoritarian. Why can't you support that effort?
I can support any effort that puts the responsibility into the hands of the parent without a mechanism that advances identity verification to protect their children.
The way it stands now. this issue is being used by people in power to advanced an authoritarian agenda. Its really clear to see, if you only have the will to look.
>I can support any effort that puts the responsibility into the hands of the parent without a mechanism that advances identity verification to protect their children.
Which brings us right back to what I said here[1]. We don't have to agree on the motivations behind this push. Even if you believe this is all an authoritarian conspiracy, that conspiracy could be undermined by proposals like OP's, but instead people make enemies out of these potential allies which just further empowers the people who you consider to be authoritarians. It's a failure of basic political coalition building.
Im happy to have dialog with anyone that wants to protect children under the circumstances I already described. But if these initiatives push forward IDing people to have protection, then Im sorry you are on the wrong side of life and are involved in making the future of our society worse. I don't see you as an enemy, more misguided then anything. Im sure people are going to turn this into friends and enemies, but I don't look at it that way. I have to defend freedom under all circumstances. In most cases I support deontology over utilitarianism because I have seen how far we have slid in terms of being free as a people because we want to make everyone safe..
Taking away freedoms, for any reason, is not the answer. They make us less secure [0] and promote bad actors to make things worse.
>Im happy to have dialog with anyone that wants to protect children under the circumstances I already described.
But you're ignoring my point that your dialog is actively counterproductive when you don't engage with the root of the problem.
Nowhere in here did I advocate for "taking away freedoms" or for the age verification policies as discussed in this article. The only aspect of this issue that I have argued is that there is a real organic demand from people who want help in preventing children from having unfettered access to the internet.
The reason you see me as "misguided" is because you are refusing to actually listen to what I'm saying. And then you magnify the divide with your rhetoric implying I'm out to take away your freedom. Maybe you don't look at me as an enemy, but your rhetoric and behavior is actively repellent when it could instead be welcoming as you claim to sympathetic to the only issue I have actually advocated for here.
How am I not engaging with the root of the problem? I just see it differently than you. And thats ok. I dont think the problem is solved by id verification. This is the position I have been arguing all along and Im not seeing how my position is getting in the way of what you are talking about.
The politicians that want to identify everyone capitalize on organic desire for these proposals in the form of fear-mongering and "Think of the children!"
Citizens that want these laws are unthinking drones who don't want to raise their children, and instead want legislators to do it for them.
Politicians that want these laws are the people who, ideally, want to track your every move online for a multitude of reasons, not least of which are censoring speech and controlling narratives.
Even if everything you said was true and there was a global conspiracy among the politicians, the tech crowd consistently denies and demeans these organic desires. We could cut the legs out from under these politicians if we listened to these people's concerns, considered actual solutions like OP did at the top of this thread, and turned these people into allies against those politicians. But instead we deny the actual desire to protect children and accuse them of either having ulterior motives or being sheep, turning them into permanent enemies thereby empowering those (hypothetically) conspiratorial politicians.
The public, and consumers in general often state a want or need for something that they don't actually want or that would harm their quality of life, it is correct to demean or deride these wants when they're identified, some aspects of human nature are amusing.
But there is a global conspiracy, a synchronised effort among western leaders to implement near identical solutions to this engineered "problem", the responsibility remains squarely on the shoulders of parents, I say this as a parent.
>The public, and consumers in general often state a want or need for something that they don't actually want or that would harm their quality of life, it is correct to demean or deride these wants when they're identified, some aspects of human nature are amusing.
Thank you for proving my point by doing the exact thing I said tech people do. Do you think that if you demean and deride enough people, the problem will go away?
Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonimization, spam reduction, etc.
The tech companies are the ones lobbing for age verification.
The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal, it is a pretext. Just like every other "think of the children" argument.
That's a good idea. There could be two headers, the existing RTA header that adult sites use today [1] and another static header that explicitly states there shall be no adult content.
What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who will give their kids cigarettes. Most parents I know will disagree with their kids doing one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.
That's between parents and their local governments. Yes when I was a kid my mom let me watch whatever and go wherever. The parent in my example ultimately decides what a kid may or may not do which is in alignment with existing laws. If the parent is endangering their kid that is up to them and their government to sort out.
Point being, put the controls entirely into the hands of the device owner. Options can be to default to:
- Block everything by default unless header states otherwise.
- Block only sites that state they are adult.
- Do nothing. Obey the operator. (Controls disabled on child accounts or make them an adult or otherwise unrestricted account on the device).
I think the options are just limited to our imagination.
This is the problem. What is an "adult" web site? Websites that show porn? Websites that show gore? Websites that show violence? Websites that show non-porn naked people? Websites that have curse words? Websites that promote cults and alternate religions?
Why is it the site's responsibility to "state" that they are adult, given whatever parameters they dream up? Why is it the government's responsibility to say "This is adult content, but that isn't adult content?" Shouldn't the parent get to decide which categories of content count as "adult"?
Let’s not pretend like this is a brand new problem. Even pre-Internet, there have always (well, let’s just say definitely the whole lifetime of anyone GenX or younger) been tons of first-amendment-protected content falling under all 3 of these categories: “obviously fine for children” (e.g. Sesame Street, Paw Patrol), “obviously not appropriate for children” (Hustler magazine, Pornhub), and “Controversial / maybe ok for teens / still probably not okay for 6-year-olds” (e.g. sex ed, depictions of rape, graphic violence). This last category is obviously one where Opinions May Vary, but the way we have handled it in the past has been laws. Nearly every state has statutes prohibiting sale, display, rental, or distribution to minors of material deemed “harmful to minors” - the distinction between the second and third categories is determined by a court if it really has to be. This has worked fine in the offline sphere, and it’s why I couldn’t walk into a video store when I was 8 and rent a stack of porn tapes.
At minimum, it would be a reasonable legislation topic to at minimum mandate that websites publishing obviously “Harmful to minors” content tag it as such[1]. And also it would be ideal to create some kind of campaign to tag the first category as safe (honestly Apple and Google ought to be working together on that one). If you in good faith operate a site in the controversial category, that would be no different than selling books on sex ed in a Barnes & Noble - protected.
Parents could then choose, with simple device controls:
- Allow only “tagged safe” pages (parents with very young kids, or who have a hard time monitoring use)
- Allow safe + no-tag (open-minded parents who choose to err on the side of allow, and monitor the controversial stuff themselves)
- Allow all (parents who want to be solely responsible to regulate it)
I find it frustrating that people are talking like we have to either have a completely “no rules” Internet where obviously any kindergartener is going to stumble upon super disgusting stuff, or this gross surveillance state Internet, where people have to show ID to use any site. Neither of those are how things were before the Internet and it doesn’t have to be how things are now.
[1] you might ask, what do we do when say, a Russian porn site doesn’t want to comply with this tagging. In my opinion, it seems reasonable that someone could put obviously bad faith sites like that into a block list database. In a place like the UK I would expect that to be a government regulator, but there’s no reason why that couldn’t just be something private companies do in the US. As a parent, I would pay two bucks a month to subscribe to a service like that if it were integrated into the operating systems my kids use.
That was our struggle with implementing "blocking" tech at a school I worked at. Is a kid looking up how to do a breast self exam porn? What about a self testicular exam.. What about actual Sex Ed kinds of sites?
Then those parents can turn off their browser/client’s age protections. I think that’s actually a decent argument for the solution posed by this thread.
> I know parents who have no problem with their kids seeing porn.
Surely you mean at least teenagers, and not literally children, right? Consider the prevalence of violence, racial stereotyping, and escalation of fetishism into degeneracy that clearly exists within this medium; what's the line that these parents draw? Are they making sure it's only something vanilla? Or is there no line whatsoever?
The US. If they want to serve users in other countries, or if certain states make their own rules, it's business as usual whether to serve different content there or serve a different header or take the legal risk.
It's the exact same problem that age verification faces. There are different laws in different jurisdictions and operators have to figure out how to comply with the ones that matter to them.
Think of the (current) header as meaning "we would have blocked you if we saw you were under 18" or whatever equivalent and it should make sense.
> I know parents who have no problem with their kids seeing porn.
I don't agree with showing actual children porn, but I also totally expect teenagers to find some way to get access to it in the age of the Internet.
Part of the challenge with this is cultural. Different places in the world think about sex, sexuality, and even the concept of what is a child differently. In the US, showing a woman's bare breasts to a person under 18 is generally considered wrong, and in many cases is illegal. In most of Europe it wouldn't even raise an eyebrow, because bare breasts are on television, sometimes in commercials even.
Set aside for a moment the question of age verification and age limits, we cannot even agree in any sort of universal sense what even qualifies as porn or adult content, and at what age someone should be able to see it. There's a difference between a 7 year old and a 17 year old seeing the same type of content, and there's also a difference between a photographic nude and a video of people engaged in coitus.
The story is basically the same for everything else you listed.
These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do. What the GP suggested using RTA headers though puts the control into the parent's hands, which is as it should be.
We don't need to care what France or China thinks when we make our laws that are about our own citizens. They do the same over there.
> These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do.
Yes there's a chance our rules spill over there naturally, and I don't consider that wrong either.
I considered many of the same points you mentioned.
Though, one area I am still struggling to grasp is the harm that governments are trying to mitigate. If a child were to see inappropriate material, then what harm can truly arise? Also, why do governments need to enact such laws when the onus of protecting children should be on their parents?
I am not trying to start any kind of flame war, but I really cannot see any other basis for all this prohibition that is not somehow traceable back to Western religious beliefs and the societies born and molded from such beliefs.
It seems like you might be a big believer in cultural relativism and that nothing can be right or wrong, so this may be unsuccessful, but many of us do believe that it’s harmful to the normal development of children to be exposed to certain types of content. It is mostly about maturity. A five-year-old who sees explicit sexual acts performed on a screen is going to be curious about it and be interested in trying it. He or she will likely have no sense of what would be problematic (e.g. trying to initiate such an act with a peer or an adult. Consider how they probably don’t understand ideas of consent). It’s why it’s generally considered grooming for people to exhibit that type of thing to children. Children who have been groomed frequently abuse other children (including by force), and can be taken advantage of by pedophile adults.
I think it’s important, as tough as it can be to identify where exactly the line is, distinguish the concept of a 16-year-old cranking his hog to some Internet porn (which yes, probably pretty harmless and inevitable), with little kids being exposed to explicit types of content. And little kids are curious, so just the fact that they make an attempt to find the content doesn’t mean they’re ready for it.
I appreciate your well thought out response, and I apologize for the length of my response:
As to whether I believe in cultural relativism depends on the level of abstraction we are discussing. I believe there is no way to logically prove that something is morally right or wrong in a similar manner to how a mathematical concept can be proven from pure logic alone. But this fact does not often influence my beliefs in terms of morality in the context of social contracts, diplomacy, legal frameworks, etc.. To draw a parallel, I do not believe in complete free will, but I live my life as though it does exist (I believe in more of a 'sandbox' like an RPG video game with clear constraints and limitations).
> many of us believe that it’s harmful to the normal development of children to be exposed to certain types of content.
Are these beliefs supported by evidence or are they merely conjecture? Do not get me wrong, I am not saying I completely disagree. A child exposed to various types of abuse and neglect can have detrimental effects to his or her development, and there is plenty of evidence to support a statistical relationship.
> A five-year-old who sees explicit sexual acts performed on a screen is going to be curious about it and be interested in trying it.
I believe that is quite presumptuous. By that logic, if a child is exposed to comedic content, will that child become funnier? Such conclusions remind me of the debate as to whether violent video games and other media increase aggression and acts of violence in children. The data clearly does not support this conclusion. Now, I would not say there never has been/will be a case of a child trying to replicate a sexual act due to exposure -- much like violent content -- but outliers do not define the norm.
> He or she will likely have no sense of what would be problematic (e.g. trying to initiate such an act with a peer or an adult. Consider how they probably don’t understand ideas of consent).
Understanding consent is irrelevant. Children legally and morally (as determined by my culture) cannot consent to any sexual activity under any circumstances. Consent is de facto impossible. This is a social contract that I also strongly agree with.
> It’s why it’s generally considered grooming for people to exhibit that type of thing to children.
I was under the impression the intention behind the action was more important than the action itself. There is a difference in intentions between a child stumbling upon an adult getting undressed compared to an adult undressing and exposing themselves in front of a child. One action is happenstance and the other is predatory and abusive. It's why family pictures that might have a naked baby in a bathtub is not often considered CSAM.
> Children who have been groomed frequently abuse other children (including by force), and can be taken advantage of by pedophile adults.
I believe this myth is perpetuated too often. The vast majority adults that of sexually abuse children have no history of childhood sexual abuse. Certainly, some do perpetuate the abuse, but it's not as common as some might think. It is just another attempt for abusers to garner sympathy and decrease their punishment. It's very similar to the myth that public urination can result in a registered sex offender. To my knowledge, there are no instances of this type of case in the United States. However, it is a clever little lie to tell comfort folks into living next to a registered sex offender convict of a more heinous crime.
As for children-on-children abuse, I am not certain your claim holds up, but I admit I am less knowledgeable in this area.
Fundamentally, the laws around requiring ID to view adult content do not really prevent any of the harm we are discussing. Sure, I child might not accidentally stumble upon explicit content on Pornhub. However, the laws do not stop Chester Child Molester from sending their dick pics to a kid on Discord or Roblox or whatever.
Why is it the if a child stumbles upon a parent's firearm and hurts themselves or another, the parent can be held liable in both civil and criminal court. However, if a child stumbles upon sexually explicit content via a parent's computer, the onus is placed upon everyone but the parent(s)? If the harm of exposure of sexual material to youth is so damaging, then should parents not also be held to such civil and criminal punishments?
i can make arguments as to potential merits of kids having a beer/cigarette, listening to swear words, or witnessing casual violence. i cant make an argument for letting kids see hardcore pornography in any capacity.
Swear words and violence don't cause addiction, alcohol can but it's way less likely and also easier to restrict... idk why a kid should have cigs even once though
there may be valid use cases in certain demographics eg the disabled. to me it is evidently advantageous teaching a teenager how to have a smoke or have a drink properly , so that they don't go overboard with self directed learning for a valid activity (loosening social inhibition). we could totally teach teenagers the generation and consumption of dispassionate violent relationship simulacra. may I ask what would be advantageous about this ?
it is literally always the same thing - who gets to make these decisions? if you come from a family of alcholics (there are many) you will view alcohol for what it is, one of the most dangerous drugs that someone decide should be "legal." if you come from family that lost loved ones to smoking - same thing with smokes. hardcode porn, eh, they will eventually start putting this into practice ("hard" part is personal preference) so while probably not the greatest thing to have kids exposed to who makes these decisions? Personally, if you gave me a choice between smokes and porn and I had to choose one for my kid - I would choose hardcode porn. the core issue as always - who is making decisions on what kids should or shouldn't be exposed to?! and what do you do when whenever someone else gets that power then decides that reading or math or fishing or camping or ... is not allowed?
why 90%? and who decide is it 90%? or 87%? or 94%? are we going to have a referendum to decide on this? we need 100% people to vote on this referendum or small fraction will work? ...?
Practically it's hard to ban something new across the entire country without overwhelming support like that. There are enough people who strongly think kids shouldn't be able to buy alcohol or cigarettes that it ended up getting banned in every form, in all US states (even before federal law). Wouldn't be possible with a slight majority opinion, even if an individual proposition only needs 50% of votes.
this is 1,000,000% not accurate. there are things that vast majority of people support that are never going to happen (e.g. universal background check for gun purchases) and there are things that ruling party easily gets through that are wildly unpopular.
I said it's hard to ban something without support, not that it's easy to ban with support. Not to mention, gun background checks are more controversial than you're making them out to be, in fact this is an example I would use. Even if more than 50% like the idea of a background check, not so many will trust the implementation, and not everyone will vote.
Just for completeness sake and just for fun about 40 or so states allow private sales of firearms without a background check. Of course it is on the seller to know they are not selling to a felon and they may be on the hook if the buyer does something bad though I am straying a bit off topic from age/ID verification and tracking.
Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable standardized signals to filter on which has led to the current nonfunctional mess.
So ideally you want a standardized header that can be used to self classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated with specific categories required to be marked as either present or absent.
So for example HN might be "user generated T, social media T, porn F" or similar with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).
While this would be required by law, I imagine browser vendors might also drop support to load sites that don't send the header in order to coerce global compliance.
Just an opinion which I know is not super valuable but categories won't help with most sites. Anything that permits user contributed content can become any rating at any minute unless all content would require approval by a moderator before anyone could see it. A few forums support that concept but it requires a proportionate number of moderators or I suppose a very accurate and reliable AI moderator if that is even a thing. I think it's easier and probably legally safer to just tag anything that is not guaranteed to be 100% child safe at all times as adult and let parents decide if they with to approve-list the site in parental controls.
Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.
An outstanding idea. Those lobbying for age verification hate it though, because they want to be the arbiters of age, and all that juicy PII that they can analyze and resell.
Think about how they validate how old you are. Meta and Google, who are lobbying in support of this legislation,will force you to sign up with your real ID, and be the arbiter for questions like “are you old enough for this website”. For every request that you make through some third-party website that needs to know your age, Meta and Google will know where you tried to login, and for which content. They will then resell this data to the highest bidder. Additionally, through all their ad networks and tracking, they will follow your session and have verified ID to match your entire browsing history. This is the end of anonymity and privacy on the Internet.
None of this is true. The fact that there are many, many companies out here today that are doing exactly what you are claiming for the non-CA age verification laws (like in TN and TX), yet you went down the conspiracy route for Meta and Google shows how much you are being played like a fiddle.
They can feed you an conspiracy and you'll eat it up because you were primed to have a cognitive bias, and will ignore the actual, real-world harms going on.
If technically competent people specify and build this system, sure. But it’ll be specified by complete idiot politicians, influenced by Google and Meta, who 100% DO want to know your government name, DOB, etc., so we’ll end up flashing our IDs at the camera, turning around to be scanned, etc. The platform owners will tell us they “deeply care about our privacy.”
Age verification companies literally require your personal information to function. They don't want you to be able to send them a simple boolean over Tor in exchange for whatever trackable token you need to access something.
I'm not so sure. I think the push is from the government actually. But companies are not exactly opposed to it. Quite the contrary. Big corporations see compliance as a moat. Tobacco companies supported stricter regulations on tobacco advertisements, because they had the deep pockets required to follow the changing laws. Mr. Altman is all-in on AI regulation, because it will mire down competitors while OpnAI has already "slipped past the wire" and done all their training pre-crackdown. When given a choice between regulating their industry (platforms and operating systems) vs regulating someone else's (porn sites and the like) they'll always helpfully "volunteer" to be the first to be regulated. It's just good business.
"The government" is the same as those lobbying the government. The people in the government get paid to push it, so they push it, and get paid more when it goes through, by the people who want that PII to analyze.
Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't run into a lot of documentation about this, including on the very badly designed RTA web site https://www.rtalabel.org/
For anyone curious about the value, the numbering on the value is just a fixed number everybody decided to use for some reason that isn't clear to me.
I would deeply prefer to do it this way, but my goodness the RTA org needs a serious brush up of their web site and information on how to use this.
But does this actually still get used by parental controls?
Some parental control applications will look for it but it is not yet legislated to be mandatory on a majority of user-agents.
All I am suggesting is we legislate the header to be added to URL's that may contain material not appropriate for small children and mandate the majority of user-agents the ones that are default installed on tablets and operating systems look for said header to trigger optional parental controls. Child accounts created by parents on the device should not be able to install alternate user-agents or bypass the controls (at least not easily). Parents should be guided through this on device setup.
Indeed their site is old and rarely touched. The ideas and concepts have not changed. It really could just be a static text site formatted in ways that law makers are used to or someone could modernize it.
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, so parents/employers/etc could use to block the site if they wish. People said it would never work, because adult sites had a financial incentive not to opt in to reduce their own traffic.
What, in the same way movie studios wouldn't comply with the Hayes Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
Quite true. The US corporations act like a giant global rabid dog. Fake legislation appears in the USA - lo and behold, it is copy/pasted into the EU. At the least lobbyists are getting rich right now.
At least the EU has GDPR. In the US, our personal data is collected by every app and website and company and packaged, sold and sifted through by a vast collection of private data brokers which the government already ingests.
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
An age header is not the answer. Why should a site have to decide what content is appropriate for a 18 year old and what content is not? Who is qualified to make that decision for every 17 year old in the world? Do they know my 17 year old? Do they know the rules in our home? What if I'm OK with my kid seeing sex-education stuff, but some lawyer at Wikipedia just decides to tag sex ed articles as 18+? Now I have a shitty choice: Open up the floodgates of "18+" to my kid, do it temporarily while the kid browses the sex ed sites, or not let the kid browse them.
Letting a company or government decide what's appropriate for what exact specific age is fraught with problems.
Then this leads to a very unwelcome view that most of the problems we face are actually rooted in parents' unwillingness to invest too much time in education :)
Yes, that's how parental filters already work. They use a combination of rta tags and external data to block pages. Even works with Google safe search, firewall devices, etc. The rta ecosystem is already built out and viable.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children. Sanctions and embargoes are always an option.
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.
An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win, win.
But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If government tries to regulate them, they'll engage in malicious compliance - do the minimum to not be sued, in a way that they can still reach customers.
For example Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.
If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.
But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
PICS was very complicated and attenpted to cover all possible "categories" of adult content. It was confusing, incomplete and only a handful of sites voluntarily labelled their sites with it. RTA is one simple static header that any site operator could add in seconds unless they get more complicated with it by dynamically adding it to individual videos say, on Youtube which means in that case the server application would need to send that header for any video tagged as adult.
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything as I could not predict what people may upload which made for a very long header.
Agreed though in my example the point would be to set the header in the case the child is logged in but for whatever reason the site does not know their age. Instead of a third party site, a header is sent with the video tagged as adult that triggers parental controls if they are enabled by the device owner.
Yeah this seems like the best tradeoff. You avoid the central control infrastructure and you provide information to clients. It's also a great match with free computing devices, which can then utilize the new information, empowering users (eg parents -> parental control on device, or individuals who want to skip some kinds of content).
There are issues today with this approach such as lacking granular information for sites that have many kinds of context, but if you stop investing in the central control infra and invest in this instead that could be remedied.
I agree with the general idea, but I would like this header to be more fine grained than just a binary "adult" or not. For example, so that you can distinguish between content that is age appropriate for teenagers and older from content that is suitable for all ages.
One possible method [1] though I am sure the network and security engineers here on HN could come up with simpler methods. Just blocking domains on the popular CDN's would kill access for most people as by default most browsers are using them for DoH DNS.
The question was about fining entities outside of the original jurisdiction, so I am not sure what you have in mind that could be done by network/security engineers here.
In terms of fines if they do not pay the fine their country is at risk of sanctions or embargoes which is probably a bit heavy handed but may incentivize their government to also enforce the rules, collect fines keeping some for themselves and passing the original fine back to the countries implementing child safety controls.
This is extremely naive and short-sighted. There is a literal example of this happening rn, and hopefully you will see why your approach isn't that good.
UK's OFCOM is currenly issuing legal threats to 4chan, for allegedly serving adult content and not willing to implement age verification. 4chan's lawyer tells them to pound sand[0], on the basis that 4chan is hosted in the US and has zero business presence in the UK, and UK is more than welcome to ban the website on their end through UK ISPs. The saga has been ongoing for a while, and the lawyer has been pretty prolific online talking about the case.
Anyway, following your approach, UK should embargo US over 4chan not willing to implement age verification as required by UK law? I plainly don't see this happening, or even being considered, ever.
4chan servers are in the US and the owner is in Japan. If the US wanted to they could seize all the servers but they will not because they have real time monitoring of all activity on the boards and have ever since Christopher testified before congress and the site was sold. If anything 5-eyes want that site to be unrestricted. 4chan has been a goldmine of people self reporting for wanting to shoot up or bomb places, as has Reddit leading to many body-cam videos of the site users and in some cases the moderators being busted.
The IP addresses are all captured by Cloudflare. It is literally next to impossible to post on 4chan without enabling javascript on Cloudflare or buying a 4chan-pass which leaves a money trail not perfect, nothing is but most mentally unstable people do not think these things through.
Should legislation be added to require the RTA header 4chan could and likely would add it in a heart-beat. They already have some decent security headers in place.
> If the US wanted to they could seize all the servers
Are you sure you didn't misread what I said? Asking because I am not sure how what you are saying has anything to do with my point.
Why would the US even consider seizing the servers? 4chan isn't breaking any US laws, and US indicated zero interest in pursuing 4chan.
The case I am describing is about 4chan breaking UK laws (by refusing to implement age verification), and UK OFCOM is threatening 4chan with fines and more. 4chan, as you said, is located in the US, so they claim they don't care about what UK wants, and that 4chan won't implement age verification due to 4chan not having such a requirement under their operating jurisdiction (US).
The only thing UK can do is block 4chan within their country, and that's pretty much it.
They break US laws every single day. Every loli in /b/ and /gif/ thread violate several laws and yes people do debate this endlessly which I will not, discuss that with lawyers that deal with CSAM. On that alone they could easily seize all the servers if they wanted to but that will never happen because like I said it's an goldmine for people self reporting they are going to shoot up a place or show intent for a myriad of other crimes. The feds would never throw away such an easy mode treasure trove nor would I expect them to. The site started glowing hard in 2008 and glowed even harder after 2012. I even showed people how to extract IP addresses using the hashes in the thread and post ID prior to their moving to Cloudflare and the users still went into full cope.
All of this aside it would be trivial to add the RTA header to the entire site. They could add it in the Cloudflare interface in a few seconds. It would cost them nothing. Only groomers would have their jimmies rustled even despite most of the groomers having moved to Roblox.
The header should be the other way around. It should inform your site will not contain adult material. The local government should scan sites participating.
Anyway, yes, that would just solve the problem and not destroy anything. What is the reason why nobody is talking about it.
Servers can then infer user’s ages by whether or not the client renders pages given those headers or not no? See if secondary page requests (e.g images, scripts) are made or not from a client? A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary
I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.
Today devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that affect but this can evolve over time.
The point and overall goal should be to not signal anything to the server operator unless a credit card is being used. Everyone is whomever they claim to be as far as anyone is concerned, until payments are required which today means sharing identity and age (via the credit card information on file with the financial institution and is shared today).
In the case of RTA the only signalling taking place is a server header being transmitted to the client. The client could be anyone at any age. Nothing to explicitly leak or disclose. Server operators can guess all they desire as some do using AI based on user behavior of which they sometimes get wrong.
I disagree. The legal requirement to apply a warning label is a well known, understood and accepted process that is applied to a myriad of hazards to children and adults. As just one example businesses in some states, most notably California are compelled to add warning labels to foods and other products that could cause cancer.
That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.
Surely we both understand the point I was making in that labels are already compelled by laws today.
Fine, cigarettes must be labelled as being a risk of causing cancer. The punishment for failing to do this is both civil and federal penalties including massive fines and federal prison time.
Now that I think about it, perhaps that example did a good job of demonstrating how ill-conceived requirements can wind up having zero effect except for just making everything a little bit more inconvenient.
I never implied an internet license. Rather if a server operator a business has content that may be adult in nature they must label their site. Businesses require a license already but that is unrelated to this.
Clients could refuse to show content that does not have headers set.
On other hand servers might choose to lie. After all that is their free speech right.
So maybe you need some third party vetting list. Ofc, that one should be fully liable for any damages misclassification can cause... But someone would step up.
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
I could be misunderstanding the context but to me that sounds like a moderation issue assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.
Moderation is like law enforcement, it doesn't prevent crimes from happening it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed and it doesn't undo it.
This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who do terrorize people for fun [3].
It is a constant game of whackamole by moderators/admins to keep this behavior out of online spaces where kids hang out.
I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.
Now I disagree very strongly with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed). But the technical challenge needs to address the real problems that legislators are trying to go after.
I am only interesting in protected the majority of children which I believe my proposal more than covers. There will always be exceptions. Today teens share porn, warez, pirated movies and music with small children in rated-G video games. I am not proposing anything for that. It is up to businesses to detect and block such things.
Point being, there will be a myriad of exceptions. I am not looking to address the exceptions. Those can be a game of whack-a-mole as they are today. I am proposing something that would prevent the vast majority of children from being exposed to the trash we today call social media and of course also porn sites.
Look, please don't sideline/marginalize people by using the "whataboutism" term. Thats being used more and more to silence dialog from people that see problems outside the focus of a specific area. Its important that we see ALL sides of the problem.
Thank you for understanding. I know sometimes topics can get out of hand with comments about related things, but I this case. We might be better off looking at all the extremities.
These aren't exceptions or whataboutism. It's the debate being had on the floors of state legislatures.
> It is up to businesses to detect and block such things.
Which is exactly why age verification legislation is hitting the books. No one (serious) cares about whether kids can download porn and R rated movies. Parental controls already exist if the threat model is preventing access to specific content that is able to report itself as _being_ that content.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content. They define specifically what classifies as an addictive stream and put the onus on service providers to assert that they're not delivering addictive streams of media to kids. An HTTP header isn't enough, because it's not about the content being shown to kids but the design patterns of how it's accessed.
Essentially: age verification isn't about porn. 18+ content stirs the pot a bit with the evangelical crowd but it's really not what people are worried about when it comes to controlling digital media access with age gates.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content.
That sounds simple to me. If a type of content is addictive then require the RTA header.
- Adult content, or possible adult content.
- User contributed or generated content (this covers most of social media)
- Site psychological profiles that are deemed addictive (TikTok and their ilk)
Overall we are describing things that are harmful to the development of the minds of small children. If adults wish to avoid such content they can create a child account on their device for themselves to be excluded from this behavior as well. I use a child account in a couple of popular video games to avoid most of the trash talking and spam. I'm not hiding my age as the games have my debit card information but rather I opt-in to parental controls.
How would this work with sites like YouTube which allow sharing of content, potentially not appropriate for children, but the content is generated by the site's users? Who will be fined for "violations"? And how would such a fine be levied, especially internationally?
I think that initially the onus would be on Youtube to figure this out. They have some very intelligent engineers. For example, if the Youtube client is receiving affiliate funds then they are easy to ID and fine. If they are random people then Youtube would have to share the violation data with the other countries and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force label a video to be adult as they do today when enough people report it which is admittedly not uniformly applied.
This already has been solved. Youtube disables viewing via embeds for any content that has been age restricted. Either you view it on Youtube which requires logging in to see age restricted content in the first place, or you get the ! icon and the warning about needing to log in.
I have probably never met anyone that is not committing at least three (3) felonies per day. That is at least how legal theory is applied. It's a fun topic to research. As a side note it would be interesting to see how far down the totem pole they venture in terms of verification of what sites are using age/ID verification and tracking.
An anecdote: I am 40 years old and I have an Onlyfans account. I enjoy some hippie chick that makes pottery and takes pics of herself without clothes on.
I went on vacation to Tennessee and tried to log in and it said I needed to verify with their identity verification provider. Of course I refused.
Now I am home in a different state and still cannot log in. I contacted support and because I was detected in TN once irrespective of my name and address and credit card info in their system they refuse to let me back in.
Support said they canceled my subscriptions for me because you can't even access that part of your account.
It's ridiculous this is where things have landed. And it's not even stopping porn in the slightest it's just making it harder for honest people to pay for what they like. And so the government can track us more easily. Wish I could do something other than vote with my wallet.
> it's just making it harder for honest people to pay for what they like.
I have a female friend who creates that kind of content. Her take is that this is very much intentional. There is a general crackdown on porn in the US. They're not just trying to make it difficult for the clients, but also difficult for people to make this kind of content, distribute it and get paid for it.
Of course none of this makes sense. There are VPNs and there is bittorrent. All of this is just making this kind of stuff more underground. In China porn is fully illegal, but people still share bootleg porn on thumb drives.
New man-in-the-middle attack: proxy the request through an IP address tagged as a prohibited location, and you can permanently deny access without ever needing to modify or even decided SSL/TLS.
I moved to asia about 1.5 years ago. But b/c my credit card's billing address is still in the state of WA, Apple and other subscriptions think they should still charge me a sales tax. To remove the sales tax, I have to cancel the subscription and re-create it (losing my grandfather'd rates).
THe government shouldn't be raising anyone's children, that's what parents are for. If you're a bad parent, your kids will get access to bad things and could become an adult failure.
The future of your family and your legacy is up to you, not the government. We don't need age verification to restrict the social darwinism of raising children.
I wish I could upvote this comment harder. I started having unsupervised internet access (with the family computer in the living room) when I was 8. I'm a functional and successful adult because I trusted my parents. When my mother forbade me from registering on online forums I complied. When I read "fellation" in some minecraft chat (albeit somewhat later) I asked my mom what it was and understood that "sex" was something for the grown-ups and that I shouldn't worry about it. All because I would never even conceive that my parents wouldn't do what's best for me, and was unconditionally loved (even though I didn't know about this concept).
I would rather have parenting licenses than online age verification
I'm a functional and successful adult despite doing plenty online behind my parents' back as a kid. I don't think that part of our upbringings had as much of an effect on us as you suspect.
And I also suspect you did not grow up with kids whose parents clearly would like them to go away and stop bothering them. I also did lots of dumb stuff in my parents' back. The nuance here is that when you know that your parents love you, you'll tell them once you do something that's actually harmful/a big mistake, because you trust they'll help you instead of punishing you. I've seen people make "questionable" life choices, in my opinion, because they've learned, consciously or not, to not seek help from others and always hide/blame on others every problem them encounter.
Yeah I'm not sure why the govt or any other 3rd party needs to get involved. If I don't want my kids to look at porno online I will educate them on porn. If I don't trust my kids to listen to me then I will install an open source monitoring software and educate them on trust.
Letting the govt dictate what is age restricted is an easy way for the govt to control speech and narrative. For example, children's books that feature LGBT characters are being reclassified as adult [1], thus requiring additional verification. If I do/don't want my kids to read LGBT books, it's my decision. The govt should not dictate that. What else will the govt reclassify? Anything involving people of color?
Education isn't based on the premise that they'll never disobey. It's to help them recognize when things become dangerous or are getting to be a problem. Of course kids will do things they're told not to do - this is just helping them tap the brakes and understand how to recover. The attitude that the only solution is perfect enforcement is (in my opinion at least) partially to blame for the lack of self-awareness that makes the more vulnerable to later addiction problems in the first place.
Not sure if this is sarcastic but that's exactly how drug education works in the US. Sure it's optimistic but almost everything about raising kids is optimistic.
I was curious about drugs after DARE because I learned about stuff I'd never heard about before. But it didn't make me want to _try_ drugs. And if DARE weren't enough, watching Euphoria was definitely enough to make me not ever want to touch drugs.
This points get brought up in every thread about this topic, and although I agree with it completely, I feel it's the wrong point to make. They don't want to raise our children. Caring about the children is just pretense. The goal is surveillance. So this is a moot point, really.
I do agree fundamentally, but you are making a lot of assumptions about the parents here. Many do not have parents able to do this. Do they not deserve some protection against such content?
Blaming the parents for their failures is not going to help the kids.
That being said the current approach really has nothing to do with protecting kids and everything with tracking us.
I keep thinking we can't fight age verification by just saying "no" to it, and have to offer an alterative.
Maybe we need to turn it on its head, point out that if we want legislation to help out with this, we could choose legislation that gives power to parents. Age verification laws put the power directly into the law itself, they're a blanket solution that gives all the power to legislators and that prevents parents from making decisions about what's appropriate for their kids and what isn't.
If the market isn't delivering the level of parental controls people want, then sure, maybe legislation is needed. But it should be legislation that improves parental such that parents can make decisions about what's appropriate for their children.
Yeah I agree. Let me decide what's appropriate for my kids. Like for video games or movies... A game rated M for foul language and nothing else might be OK for my adolescent kid. A game rated M for excessive nudity and sex probably not.
As much as this is true, no disagreement, there is the issue that we are all fighting against systems that have billion of dollars of studies and A/B testing designed to completely subvert said parenting abilities.
It was difficult enough back 20 years ago when you have TV advertising that just shot gun out the messaging in the hope of landing a target, now it is algorithmically targeted. Even if you can keep this stuff under control in the home, outside of the home these influences can still bleed in from others.
But having the government use mandatory age restrictions, that is a wild over correction. They shouldn't be parenting kids in the same way corporations shouldn't be doing it either.
Alas we are walking into the wild contradictions of libertarian thinking and authoritarianism. Liberal companies have no checks and balances, authoritarian governments take peoples freedoms in the name of "safety".
The deeper questing is to all of these technologies that have have imbalanced positive and negative outcomes. If you cannot balance it, you either have the worst outcomes happen or you end up with an authoritarian reflex to control the technology and those that use it. Rarely do we take the middle path, that being government control of the businesses.
That is seen as touching the political third rail, but that instinct is now by design.
You can see the thinking that goes, the best solution was to never invent it to begin with, but that is just wishful stuff that doesn't really contribute.
I have no solutions and barely any responses other than, this is some predicament we find ourselves in.
Western society, for better or worse, is set up such that parents need to resume work as soon as possible. Saying the government has no responsibility in child rearing ignores the economic reality of parents.
"Because I have a job, it is now impossible for me to raise my children. I have to outsource this to a council of legislators because I'm simply too busy!"
Bad argument, bad outcomes. These are exactly the "bad parents" I was referring to in my original comment. The government HAS no responsibility in raising your child, but they would LOVE to change that. It's absolutely imperative for the human race that that does not happen.
Besides the bad reinterpretation of my point, how to solve the problem? It is simply insufficient to say "yeah both parents work full time with the sword of damocles hanging over their head but too bad so sad". Without changing the economic situation there is no changing the child rearing situation. One caused the other. It's all well and good to say this is imperative for the species but I see no solution offered. The economic situation must change and the government is responsible for this.
Absolutely true in Australia. The parents I know are either rich enough to outsource it or basically fighting for their life managing work and childrearing.
And to add salt to the wound, it's the people on the positive side of the economic bell curve that have strong familial support networks where grandparents and uncles and aunts can contribute to childrearing, while those on the other side of the curve can't always rely on having those support networks. A generalisation of course, but a relevant one.
From the people I know, the financial pressure seems to build around 6 months as their employer's maternity pay is fading into the distance, but they struggle on a bit longer.
I admit there may be different definitions of "as soon as possible" between the USA and other countries. Most people here would love to be able to afford at least 1 year if not more.
Yes we can compare, and your original comment was wildly incorrect. You aren't going to get proven correct by digging into this further
Just because the the US provides zero paid leave by law doesn't mean women don't take maternity leave - it's often self funded of course. How about you look into that and compare, instead of trying to ask specific questions to arrive at a gotcha
Also, different kids mature at different rates. I wouldn't give a shit about my kid watching, say, an R rated movie if I understand they'll be able to handle it and understand it's fiction. If I had a 14 or 15 year old and they had a healthy understanding of sex and the dangers of porn, I wouldn't give a shit if they managed to see some poorly drawn tits online. Why? Because if you didn't intentionally seek out lewd content as a teenager you're either very very religious or a liar
California and a slew of other states deemed it necessary to step in and take over for parents with transgender kids didn’t they? even threatening to take a child from their parents should they refuse gender dysphoria treatments.
it seems to me the left already opened this can of worms.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
A tech company doing scans for validation could actually connect to a state database to verify the ID is legit and is not already being used for a different account. It would then be saved. I don't think real world vs tech world usage of fake IDs are the same at all.
>Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
Not necessarily true. There's a local stripclub that scans and saves the scan to fight chargebacks and the like. It is definitely logging stuff. They've told me that they were going through the logs once and the bartender ended up googling my fullname. We're cool and I didn't care, but this what you said is not a blanket true statement. I trust a physical business that I can visit far more than some ID verification company that is going to get hacked at some point.
I've seen this before in London too in some venues. They have full-on computers that scan your passport and take your photo, for the express purpose of storing this info.
why would you trust a physical location who typically wouldn’t have a robust architecture or any opsec but not trust an online first business that likely has opsec and monitoring?
tech companies care even less? how do you arrive to that conclusion? tech companies log/store EVERYTHING. this would be an absolute boon for them to be able unequivocally assign to you all of the data they track about you. suddenly, anonymous analytics become identified data and not just deanonymized data.
Logs of location data on people are already worth real money. The FBI has admitted to buying it. The companies that do age verification will absolutely be selling that data unless there are severe penalties for doing so, and what are the odds that the U.S. government passes a law making it illegal for the FBI to buy data?
That's bad enough if you're a U.S. citizen. If you're a non-U.S. citizen, now you're in the situation where all these U.S. social media sites are collecting personal information from you and reselling it, but you have no legal protection unless your government risks tariffs and invasion threats to pass legislation against it, which the U.S. will probably ignore anyways.
This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
> This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
I guess, but like, who? During the time TikTok was not available on an app store (even though the service wasn't stopped), people were trying some of the other Chinese apps, and they were not very compelling as the exodus never happened.
It's a chicken and egg problem. Without users, a new social platform lacks content, so it can't attract users. Unless something decidedly new and compelling comes along, users will probably stick with what they know... unless something happens that really pisses them off.
If I'm being honest though, I don't think privacy concerns will be what does it. The TikTok generation doesn't give a fig about privacy. You can build a panopticon around them and they won't even notice.
They also use them to flag people who've been previously banned and the systems work across venues. The idea that verification in the real world is cursory is not accurate.
The vast majority of places I frequent do not even have a person at the door checking IDs. If the bar tender/server thinks you look young, they ask for ID. I clearly do not look to be too young, so there's that. The last place I went to with an actual scanner was more of a nightclub that had a cover charge.
There's a fine line between night clubs and bars (and a venue can operate as both, depending on the night).
Functioning as a bar where people come in, drink and eat - generally not checking ID's at the door.
Functioning as a night club, generally checking ID's at the door. Almost no places I've been to scan ID's. I'm also middle aged and not going to night clubs hardly ever. Pretty much just a couple concerts a year in the big city. Those venues scan ID's.
sure, but it is what it is. the places with scanners may be more sophisticated than i give them credit, but you cannot deny there are places that do not card every person every time you visit. online places will never not know it was you. if you cannot see the differences, then you're just deliberately being obstinate about it
> Handing an ID to a bouncer at a bar or similar is not logging anything.
Some of the bars in the party areas of my college town have a digital scanner they hold the ID up against, and they even had a screen showing a scrolling Wall of Shame of fake IDs. And they had this like 20 years ago. So I would not necessarily agree with you here
Like prohibition and the overtaxing of cigarettes in Australia, ID fraud will just become criminalised and the government will lose all control. There are pros and cons to this.
I think we should go a step further and log every activity a person takes the blockchains. There will be no ID theft because your DNA will be used to cryptographically authenticate your user.
It would be a good market, I would like to pay for an ID in my or compatible countries. As far as the systems work that I have seen, this is more or less a permanent pass.
But the real problem is that governments again try to censor online content, nothing else.
My country doesn't even run children's homes without many incidents and nobody cares for that. But it tries to track citizens through things like corona apps. It cannot propose any trusted entity that could verify and ID information about me.
ID system should be based on commercial bank. If you need to prove your identity or whatever about yourself just tell them to ask your bank and bank will ask you which information about yourself you are willing to share with whoever requested to confirm something about you.
When ID is tied to your bank account you guard it like you guard your bank account. Because it is the same thing. This will drastically lower the incentives to "share" your identity with anyone.
What's more this system is already operational in many countries.
I wonder how many months until this suggestion becomes slightly embarrassing. I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online. Especially when "my" bank goes under and all of my data gets sold off to whoever takes it over.
> I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online.
Bank ID systems, at least the ones I’m familiar with, don't work like that. Your bank confirms your identity to the authentication provider, and the authentication provider sends you on to the site you are logging into. The bank does not see the site you are visiting.
And what about debanked people? Are they now also deporned - and deyoutubed? Many of those people are debanked because they were politically risky, for example whistleblowers - are we now saying whistleblowers can't upload whistleblowing?
The proposed system moves sources of identity from the nation to private banks under it. So banks own people. Propose a financial regulation to the national congress/parliament and you stop existing, digitally or potentially physically as well. That's feudalism. Or Chinese struggles-of-nations warlord era situation which is often grouped up into that concept as close enough things.
Banks can be state-owned as well as private. Moreover, some countries have a particular bank that serves all citizens, even if they would not be able to bank elsewhere.
You use eID when explicitly interacting with a govt entity or bank or otherwise similar institution because you have to and want to prove who you are. Yes, I do want to prove who I am when I file taxes, vote or want to start a business...
You don't use it when just browsing randomly on the internet. You don't use it to buy games on steam. Your computer isn't forced to store it because a law arbitrarily says so.
Why not, seems to be made exactly for this purpose if you look at the "‘Age over 18’: true" flag. What's bad about that solution?
> The technical solution for an EU age verification app is privacy-preserving, open source and user-friendly.
> First, the user downloads the app onto their phone and sets it up by certifying their age.
This can be done with a biometric passport/ID card, a national eID (e.g. national ID Card or other electronic identification mean), a pre-installed third-party app (e.g. a banking app), or in person (e.g. at the post office). Only the information confirming that the user is over the age will be saved in the app. No name, no birthday, or any other data is saved.
> After completing this step, the communication between the app and the provider certifying the user’s age (e.g. eID, third-party app) ends. No further data is exchanged.
> The app is then ready to be used online. When an online platform asks to verify the user’s age, the user can use the app to communicate they are over a certain age (e.g. ‘Age over 18’: true) to the platform.
The EU app still requires that you let them violate your privacy in exchange for a batch of about 30 easily trackable tokens that expire after 3 months. It also bans rooting/jailbreaking, bans third party operating systems like GrapheneOS, and requires that you install Google Play Services/IOS equivalent for "anti-tampering".
if it's done by the government, what prevents the goverment to not allowing opposition members to access social media? I think social media and porn are harmful for children but still
I don't disagree with random browsing. I do use it to buy games on steam as any online purchase on my card uses it. And my computer doesn't store it, my phone does.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
The two methods that seem feasible are making it hard to copy (putting it in the secure element in your phone, for example, which I don't love) or doing tokens that can only be used a limited number of times per day, like in : https://eprint.iacr.org/2006/454
If it's a rolling cert with rate limits I think that solves the problem, particularly if access to the client cert allows the client to make a financial transaction, e.g. of $100. So you wouldn't share the client cert with randoms because they would just take your $100 and you'd be blocked.
Continuous age verification isn't possible, so you'll have to store some sort of proof of age somewhere, and that proof will always be sharable.
Let's say Facebook has verified my age somehow. I could share my Facebook login credentials, or the token that their authorization server sends back in response. You can create some hurdles to doing that, like requiring a second factor, but I can just share that too.
You might as well go down the route of accepting that possibility. These systems are never going to hold up in the face of a determined enough teenager.
That really depends. A zero knowledge system would show to the verifier that the person is authorized for access _right now_, but thats just the answer to a particular challenge. Outside of the verifier who knows they came up with a random challenge without bias or influence, the response would mean nothing.
I think a lot of age verification systems are the solution to the real core of legislation - to make companies liable for underage viewing of content. To put such legislation in place without providing a feasible way to accomplish age verification would be argued as discriminatory.
In that sense, a zero knowledge system which doesn't give a company non-repudiation so that they can defend themselves in court may very well be insufficient. And that will require tracking identity long-term, although it could be done with a third-party auditor under break-the-glass situations with proper transparency.
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
Further, "porn tokens" are the pointy end of the wedge, because it's easy to misconstrue any opposition as advocating for "kids should have access to porn, actually". The broad end that is being hammered towards is "kids aren't allowed on social media because it's harmful to them" AKA "free speech tokens".
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
Would be great, if we could all agree on that and simply everyone who is tasked implementing it in code refusing, and then letting non-engineers themselves try to do it and fail, and then have a good laugh about the figurative middle finger we gave them for their bs.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
There's already authorities that know your age, so verifying age with them to get the credential isn't the part that needs to be anonymous. The issue is them knowing what you do with your credential, which anonymous credentials solves by making it impossible to track tokens back to the credential holder. As far as sharing, there are some possible mitigations.
Right. And the possible sharing mitigations generally amount to tracking.
This isn’t even getting to the issue that mandating government-issued credentials is the “foot in the door”. If you mandate the use of government creds for accessing websites, it’s an obvious step to turn around and demand that sites report credential use to “fight credential fraud”.
Yep look who is backing these regulations. It's absolutely for no other purpose than to further enable surveillance capitalism and the surveillance state.
> Age verification can be achieved without destroying anonymity and privacy online
Yeah its extremely simple. You provide a simple message asking the user if they are an adult, and they either click "yes" or "no". That method requires exchanging zero personal information with anyone, other than a simple boolean value to the site/service you are using.
Any other system requires letting a third party violate your privacy.
Yes, but this is not popular among technologists (see the average sentiment towards age verification here). Legislators aren't going to build technology. This will happen if age verification actually becomes a widespread requirement. But until that point the prospective builders will be fighting the entire premise of such systems.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
I did. Restricting children’s access to certain things is not ageism.
We can argue the merits of restricting children’s access to the internet, or certain books, or alcohol, or pornography, or whatever else. We can debate the merits of those various restrictions based on the benefits and costs to both the children and society at large.
But it is not ageism to attempt to protect children. It is not ageism even of the restriction is a bad idea. To claim it is ageism is an emotional appeal (“ageism bad!”), not a logical one.
It depends on what you're restricting and why. Restricting access to things based on age can absolutely be ageism if the thing does not need to be restricted.
I don’t think it’s ever “ageism” in the normal sense to restrict children’s activities for their safety. But even if that’s the right term in some cases, it hinges on “if the thing does not need to be restricted”.
The burden is still to demonstrate that a restriction is wrong. If that can’t be demonstrated, then labeling it ageism is a purely emotional appeal.
I used a rhetorical device to demonstrate why restricting children’s activities is not simply ageism.
I don’t know how you can seriously come here and accuse me of engaging in bad faith when I’ve taken the time to make my viewpoint explicit multiple times in this thread now, including directly to you.
Hyperbole is a rhetorical device, if that’s what you mean.
Just because I had a hard time following your logic doesn’t mean I didn’t engage in good faith. You also seem to be arguing in a heated way with every person who responds to you.
Ageism is a legally defined form of discrimination as well as the subject of ethical discussions. It's a real, defined thing. Just because we disagree on what qualifies as ageism doesn't mean you get to call foul and say it's irrational/emotional.
This is literally a “think of the children[‘s freedom]” appeal. You’re not arguing for or against the restriction on its merits.
In the US at least there’s also no such thing legally as age discrimination against minors so far as I’m aware.
Edit:
Let me frame this differently. “Ageism” is basically by definition bad, so applying the term “ageism” to a restriction is a an attempt to label the restriction bad without establishing that on its own merits.
If you try to provide a consistent definition of “ageism” that applies to restricting access to the internet but not restricting access to alcohol, you will most certainly have to resort to phrases like “reasonable restrictions” (if not, I’m very interested in your definition), which means that there’s still a need to establish what is reasonable. Applying the label “ageism” without establishing reasonableness is then a circular argument.
You* are using “ageism” as a synonym for “bad”. You are also labeling restrictions as “ageism” without establishing that they are actually bad.
In effect you are saying “that’s bad!” without accepting the burden of establishing why it’s bad, but hiding this behind a different term that carries more emotional weight. It’s a very politically effective strategy but it’s not logically sound.
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and theres definitely a few of them which are more technically defensible than others.
The anti-tampering measures require banning rooting/jailbreaking, banning the installation of non-Google/Apple approved operating systems (ex: GrapheneOS), and require that you install Google Play Services/IOS equivalent.
The app also requires that you first send your personal information to a closed source backend, in exchange for easily trackable tokens used as "proof".
> banning the installation of non-Google/Apple approved operating systems (ex: GrapheneOS)
Do you have a link where I could read more about this? GrapheneOS is known for being the alternative Android where many bank apps in the EU still work, and this is the first I have heard that the age verification app definitely wouldn’t run on GrapheneOS.
the EU is. but their verification age process shows the design flaw that preserving privacy means the system can be easily circumvented with a mitm allowing to circumvent the age verification process.
Young people setting up a MITM and getting deeper into tech rather than consuming short-form-content is something I'd appreciate as a nice bonus effect.
Of course the EU solution isn't perfect and there are bypasses (there will always be and have always been), but let's appreciate it that way rather than too many PII, if it must come. I'd prefer the Age/RTA header and parental responsibility too.
Isn't MITM always a possibility? One person has access, then they could be the man in the middle and stream or access, store and send to others or they get "hacked" with reasonable deniability, etc. and suddenly others have access without age verification again. For example someone could install a device grabbing all frames sent through a display port cable.
I’m in the UK and we recently got the Online Safety Act. We failed, this legislation is very popular with voters and not getting rolled back. Those that dislike it use a VPN and aren’t interested in fighting. I’d say most of the public here is exhausted with cost of living and internet freedom just isn’t relevant to their voting habits.
I grew up around a lot of the hacker ethos, open internet, Information Wants To Be Free etc… feels like a part of my identity is being striped away by my government.
The hacker ethos and open internet happened when the government was worse. It was illegal to send encryption outside of the US. Hackers used civil disobedience, some risked jail time, some actually went to jail, some are still in jail today or dead, and the world got a bit better as a result of their courage to break laws.
Push back to legislation must be on going and any time it is defeated, the success is only temporary. The government can just line up and try again shortly afterwards and they only needs to successful once.
You can win the battle but lose the war. By the time the average folk realize the extent of these issues it will be way too late.
I think back to how the technology space was 25 years ago. When the biggest privacy fear was that a Pentium 3 had a processor ID and Windows XP would send your system specs to get a security update. Look at how far we have fallen since then and the pace is only speeding up.
This is why we need verification technology that protects identity. Implemented as anonymous verification, without distinguishing between adult age, or permissioned by parent.
That solution doesn't negate parental freedom of choice, it facilitates it.
I am baffled at how often the "they don't want it, because of their ulterior surveillance motivations, therefore it isn't a solution" argument is made. "They" don't want it because it is a solution to the nominal problem, that they cannot abuse, and would negate their ability to use it as a cover with a large well-meaning voting constituency.
Two problems, nominal and ulterior, resolved in the right way by one solution.
When a nominally sensible problem is used as a cover for overreach, solving the nominal problem in a healthy way is the best offense. The alternative is an endless war of attrition, and the "hope" that politicians resist the efforts of well-paid lobbyists and tens of millions of well-meaning voting parents forever. That is a ridiculous strategy, doomed to fail, delivering irreversible damage. As is already evident by the abusable laws that are accumulating.
I worry at the lack of political acumen and foot-gun reflexes in the ethically-motivated technical community.
Stop endlessly fighting to lose less. Just play the winning move already. Stop the irreversible damage.
I think part of the issue people are missing is what the late Randy Pausch would call a “head fake”. My specific autism is not privacy, digital security, none of that. So I will be honest about my gaps. But from my little corner what this is about is geopolitics - specifically a potential war with China. If you zoom out to the macro level first understand the reason China setup the Great Firewall. Why countries like Iran cut the internet whenever there are protests. These are, first and foremost, defensive measures against foreign influence. America is subject to these same outside forces. The difference is that our free and open society makes things like "a Great Firewall" simply unpalatable to the American people. And rightly so. But it is also becoming increasingly evident that these malign actors are using our own values against us.
Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
There is hardly a nation on the earth that is not involved in some way in the American discourse - each pushing and pulling to their own aims and individual agenda. Historically there was a sort of Nash equilibrium with Americans caught somewhere in the center. But as the loudest voices, or rather the most well funded, begin to dominate the discussion via social media and covert funding, we are seeing it become increasingly problematic for American democracy. That is why you are starting to see this consensus over 'verification' and 'identification' begin to coalesce. The government, both left and right of center, has begun to realize the long term ramifications of these actors.
So how do you solve that inherent tension between our intrinsic right to free-speech and those who would abuse it to cause us actual harm? An independent, 3rd party verifier with limited scope makes sense - but would that solve the greater geopolitical implications? In truth I've long expected social media like Reddit, Facebook, et al. to formulate a body of their own like the MPAA. But likewise I don't think there is a clear answer here. Do you trust the Tech Oligarchs with this power over the Government itself? This is core to the problem. How do you 'censor' the internet without really 'censoring' Americans? I think this is part of what the last administration was trying to do with the failed "Disinformation Governance Board". And that failure is what has led us to where we are now.
The original twitter thread is right to say this isn't a left-versus-right issue. This is undeniably a censorship mechanism designed to exclude a set of voices from the internet as we know it today. As with the patriot act, they choose to wrap the bitter pill in a bacon-flavored rhetoric of safety and protecting the youth from perverts and degenerates. But what has failed to be acknowledged is the intrinsic cost of having an open society in a world where that openness has become an attack surface. Make no mistake: the goal is censorship. But the solution space to what you call 'the nominal problem' is less trivial than I think you believe.
Agree with all of this. It's fascinating how social media is this soup of the most virulent propaganda imaginable for every possible interest. It's a FFA between all these different powers and you are just trying to keep up with friends and watch cat videos. That they are targeting the current largest empire makes a lot of sense.
I think at an individual level the best thing to do is to opt out of this stuff and not use these corporate systems with algorithmic feeds. Only those will have the intrusive age verification anyway.
> Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
> China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
The only reason these approaches work is because there is generally a lot of truth in the things they push and a complete lack of transparency on that reality from powerful Americans, both government and oligarchy. If it wasn't "a lot of truth with some bullshit mixed in" but "only bullshit", it wouldn't work. If the state of the US hadn't made the bullshit realistic and plausible, it wouldn't work.
Those are the issues to fix. You name the PATRIOT Act, yet another thing that has caused much more harm than benefit.
> China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
They mostly bring light to the worst things that happen in the US, which would otherwise go underreported because the people suffering them have no power and the media is already entirely controlled by Bezos et al.
It's laughable to defend this on the basis of foreign influence. The bad actor influencers are inside the house. They're called Jeff Bezos, Rupert Murdoch, and so on. And the information they spread isn't any more truthful or beneficial than that spread by the likes of China.
Rupert Murdoch has done more for misinformation, polarization and extremism over the last 2 decades than China and Russia combined. He's foreign, by the way.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Dumb- BUT immediate links to sites of the right legislators!
Adam B. Schiff
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
Alex Padilla
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
>Thank you for contacting me regarding the Kids Online Safety Act (KOSA). I appreciate hearing from you and welcome the opportunity to respond.
>Keeping children safe and holding accountable bad actors online is an important priority for the 119th Congress, and I am grateful for your input. My staff and I keep track of every message we receive from constituents like you, and your feedback is invaluable in guiding my priorities.
>As you may know, KOSA seeks to establish new guardrails to protect children online by requiring that social media platforms give parents the option to enable the strongest privacy settings possible on their children’s accounts. It also would require audits of how online platforms affect the health and well-being of children. Further, it would create a “duty of care” instructing online platforms to mitigate content seen by children promoting eating disorders, suicide, sexual exploitation, and other dangers. KOSA has been introduced and referred to the Committee on Commerce, Science, and Transportation, of which I am not a member.
>As a parent, I believe that we must do everything we can in Congress to safeguard children online and will continue to support strong solutions to combat child exploitation. That is why I voted in the Judiciary Committee to advance the Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act to crack down on the proliferation of child sex abuse material online, support victims, and increase accountability and transparency for online platforms.
>Please be assured that I will keep your concerns in mind should this bill be considered by the Senate.
>Transparency has been a goal of mine throughout my time in Congress. You can find detailed information on every bill introduced in the Senate on Congress.gov, including the summary and full text of the legislation, which Senators have co-sponsored it, and the most recent action taken by Congress.
>An ongoing job of a Senator is to help constituents solve problems with federal agencies, access services, and get their questions answered promptly. On my website, I offer a guide to the services my office can provide, as well as a contact form where you can share your priorities with me. You can also connect with me online via Facebook or Twitter, and you can always reach my office by phone at (202) 224-3841.
>Thank you again for your thoughts. I hope you will continue to share your views and ideas with me.
I've contacted my congressmen and I would also advocate for telling/explaining this to non technical people you know. They either won't have heard of this or won't know whats bad about it.
Let them pry ID from our cold dead hands. If a site requires ID, it doesn't get my business.
Example, Discord wanted my ID to enable certain features, I declined, I now can't use those features, fine by me. If they started asking for ID anyway, I'd say no and see what happens, even if that means they lock me out entirely. There's no universe where they get my ID.
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the, call it one in 1 million… the speech of the (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
Its not about intelligence. Else a whole lot of over-age-of-majority wouldn't pass either.
Theres also no old-age cutoff, when their mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
Maybe it should be about intelligence. All kinds of people destroy ecosystem after ecosystem, simply by acting in stupid ways and thereby creating tons of bad incentives for businesses, who will stop at nothing to maximize their profits, zero ethics. The whole system is rigged up to trend towards supporting stupid behavior and attracting more of that, simply because there are so many people doing stupid things. Engagement and attention economy, no matter how stupid or rotten.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be thought police. So maybe-
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight up bans. (educated ourselves, or politicians legislate based how bad headlines are, or it was so egregious it genuinely ticked them off… …)
I'm curious how much of that will keep occurring though? These underage influencers I assume had a following that existed that they want to manage. But if you can't start one without an agency or an adult running things won't that dampen the amounts of them?
That's the legal loophole that I'm sure a tiny number of people are using. In the real world, reportedly around 3/4 of kids under 16 that were using social media still are by either having changed their age during the window and using a sibling or older friend to do face scans for age recognition, or by creating new accounts and again using an older friend/sibling/relative etc. for the age verification. I heard about the ways children of some of my cousins got around it at Christmas, and their parent's didn't care!
The most embarrassing thing is that our Government thought the idiotic idea was workable in the first place... But of course now they've gone and made things worse, because now kids' profiles pretend to be older, so more inappropriate stuff (like gambling ads for those who put an over-18 birthdate) can get targeted at them - great job, eSafety Commissioner!
The number of times I've had to lie to websites on my kid's behalf is horrendous. I resent governments and companies for putting me in that situation.
But it's a good lesson, I suppose. It changed the lesson to my kids about lying from "lying is bad", to a more sophisticated "lying is bad for these reasons, and so these lies are bad, but those lies are not."
Yeah, I think it's overall bad for society, but on an individual I'd definitely do it too (within reason) for certain services if I had kids that age.
But it feels like by making silly laws like this that aren't likely to be respected by much of the population is bad for the rule of law, which is bad for society. But fair enough, the rule of law is only a good thing as long as the laws are (on the whole) good.
We have a lot of this problem in Australia because as much as we pretend not to be, we're pretty authoritarian in regulating personal behaviour. For example in my state they're currently criminalising riding EN15194 compliant e-bikes above 10 km/h (literally 6 mph, slower than a jog) in my state on 90% of the bicycle path network (90% of the network are 'shared paths' so you'd only be allowed to ride at the bike's full 25 km/h (15 mph) motor limit on the small amount of dedicated bicycle paths). That and requiring anyone with any e-bike to have a drivers license - which cuts out people who can't have a license due to disability, medical issues but who could still ride a bike, or anyone under-16.
It's very silly, almost completely unenforceable and again just going to create huge non-compliance and further teach people that laws are silly things to be ignored... I really don't think that is good for society, and I've observed that the more Government has tried to regulate our behaviour, the less responsibility people seem to take, and the more the Government tries to further regulate.
So I think a big criteria of evaluating any new bill is "are most people actually going to respect this law", but all my experience with politicians is that they prefer magical thinking of believing that anything you make a law will immediately be fixed, even if it's impossible to adequately enforce or even technologically impossible to implement. Every time I've been involved in public consultation processes I'm constantly arguing practicality of the actual bill and they're arguing about the ideals that drove the poorly thought out laws...
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
In the weekends they can stock shelves, deliver pizza, deliver newspapers, wash dishes, babysitting, feed animals or other typical jobs for children in the age range of 12 to 16.
Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years? The direction we give kids coming up always seems to lag behind reality by 10 or 20 years. Perhaps we shouldn't stand in the way of the new generation figuring things out for themselves in this brave new world. The old playbooks to a solid middle class life are increasingly outdated.
> Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years?
Also so they don't end up stupid and useless like a potted plant. People with too little education are easy to manipulate and dim. They're perfect fodder for the propaganda machines.
It would be nice if we could just let kids loose like wild animals and they'd, somehow, figure everything out. But no, we actually have to try. Otherwise they end up illiterate and eating so much candy they throw up. Because they're kids.
None of your concerns are relevant. We're not talking about 6 year olds here but presumably 12-16 year olds. And the issue isn't whether they drop out of school, but whether school must be their sole focus.
I don't think it truly is, but I do think that the younger generations think it is.
My nieces and nephews really don't know what they are going to do in their futures because so much is uncertain right now.
If it feels like a longshot to expect normal 9-5 office jobs to be around in 5 years, and it's also a longshot being an influencer, then why not go for the influencer thing?
I have long thought that all content (local and remote) should be properly labeled with metadata. Just like the cans of soup in the supermarket, you don't have to open it to find out if it has peanuts, lactose, or MSG in it; you should be able to filter data before accessing it.
You could define a set of 5 or six categories (nudity, sex, drugs, violence, etc.) and have a scale from 1 to 10 for each. Each content producer would rate each category according to defined criteria.
Then each user, or their parent, can set what their own acceptable level is. If you set your violence level at 4 then nothing level 5 or higher will load.
We need a truly distributed point-to-point internet asap. Politicians going to do everything to limit free speech and free ideas in the name of protecting children while they already got all the powers to investigate and stop child abuse.
Did you intend to link to Meshtastic as an example of how not to achieve your goals? Because it definitely isn't capable of scaling up to anything like the whole internet, and the project struggles to agree on any goals they want to reliably achieve.
There are so many caveats and limitations that bringing it up in this context is downright dishonest. The most you could fairly say is that some of the philosophy driving some of the meshtastic developers is what you want to see applied to the development of an internet-scale network (which in reality would have less technology in common with meshtastic than with the current internet).
Alas it is the great contradiction. Federated technologies are brilliant for peer-to-peer but many struggle to scale because the designed redundancy tends to crush their efficiency.
Really depends on the context. Email works because of its limits. Remove those limits and weaknesses start to appear.
So a mesh isn't made up of point to point connections? I'm pretty sure if you have several they start to look like a mesh (and every security site's banner)
Sure but I cant communicate with you in a point to point fashion, in a mesh network I am hoping that I have possibly hundreds of disinterested nodes between us. But like, are those nodes coordinating on censorship? Are some of the nodes recording your metadata? Are the nodes incentivized to carry the quantity of traffic you require?
Really the "fix" the ultimate goal has to be direct point to point.
You would think so but some future authoritarian or paternalistic government might disagree. Maybe the government will say that a newspaper should not report on the poorly built bridge that collapsed and killed some people. News of disasters (or death in any form) might be considered two sensitive for children to accidentally be exposed to through a newspaper.
A great argument for why the bloated executive powers should be clawed back by Congress. But not a strong argument for why Congress should stay hands-off.
Also, if something like that happens, that's a blatant 1st Amendment violation and will be enjoined as fast as the case can run up the judiciary. Today's SCOTUS is very 1st-Amendment-friendly (to the chagrin and delight of various flavors of both left and right).
HN has 'user-submitted content' which tends to be one of the categories that these laws target. Newspapers can also run stories on disturbing events that can also fall under these laws. They are often incredibly broadly defined such that it's easier to describe what they don't cover.
I want that. I'm tired of bots being half the internet traffic or more. It's driving the general public insane and anonymity on the internet has zero utility. If journalists need to send sensitive information, they'll always be able to use Tor.
I think there's plenty of utility. People can express opinions that they hold honestly but would fear social retribution for if it could be tied back to them publicly. For example, any political opinion that I hold that's modestly center or right of center I would not appreciate being attached to my name online since people are completely incapable of nuance or compartmentalization.
If you wouldn’t make a political statement in a town hall setting where you’re going to show ID, then you probably shouldn’t say it on the internet.
But keep in mind that these laws don’t result in your identity being public. They will ultimately result in the sites you’re posting on know that you’re an enumerated individual. The ultimate benefit as I see it is removing outsized leverage over public opinion by botting likes on your statement or otherwise operating tons of accounts. It should also eliminate threats of violence from the digital public square, since building a prosecution pipeline against those would be easy to do. Same with child grooming, but I’ll acknowledge there’s a way to make that argument in a glib way, as an excuse to realize some of the other goals. It is a real problem though.
As with many detractors of anonymity, it seems that you're assuming that the authorities and neighbors you'll deal with will always be virtuous, and not corrupt nor vindictive toward opponents. Maybe you'd like to expose the town's government's corruption or mismanagement at the town hall, but the town is run by a family with a lot of influence and power over everything that happens within the town. You live in the town, fear for your safety, and have no good way of anonymously opposing their corruption, so you stay silent and they get to keep their power.
I don't see how these laws wouldn't make your identity public to someone, even if it's not the public at large. But it'd be enough for that someone to be an individual or entity who turns out to be interested in silencing your voice. Their knowledge of your identity would probably give them power to silence you not only on their platform but also on other platforms, if access to those other platforms is also tied to one single identity.
Bots are a problem but I suspect there are other ways of dealing with them, ways that don't involve making anonymity or pseudonymity impossible.
I want to read your comment. But first, so I know I'm not dealing with a bot, what is your full name and address? Please upload a photo of your ID as well. Thanks.
In the age of AI I think it’s only necessary and inevitable to implement some of kind of internet ID system to stop the massive onslaught of AI generated fraud, malicious hacking, and spam. If age verification is a Trojan horse to erase online anonymity, so be it, I see that as a worthy goal.
Humans are inherently social, and social networks are based on trust. Trust is primarily a function of reputation, peer pressure, and legal consequences. Reputation requires tying behavior to a stable identity. Peer pressure only works when you’re not anonymous. For there to be legal consequences for bad behavior, we must identify bad actors. I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
Also I don’t think that the “pro privacy” activists really understand the scale and severity of harm being done to children through the internet. I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
My first question to you is whether you are a pro-privacy advocate yourself, znnajdla. I don't see any biographical information listed in your profile so I'd initially assume that you value privacy on some degree. I am curious as to whether there are contexts where you want to be able to post an opinion through a pseudonym, without your ideas being easily tied to and subjected to judgments based on your legal name, your ethnic background, national origin, etc. Would you be willing to give up pseudonymity forever?
You speak positively about peer pressure, but on a basic level, peer pressure is power excercised against non-conformists. Robbers and abusers are non-conformists, but activists and reformers are also non-conformists. Peer pressure is often used in certain highly oppressive societies to enforce values I'd consider downright evil. Such societies take great care in limiting independent, anonymous access to digital tools and networks. Personally, I'd really like to keep living in a free society where there are ways to communicate and express non-conformist ideas without having to worry about who can easily stamp out such ideas. I think digital ID opens the way to oppressive societies which can wholesale block specific individuals' access to any effective communication tools. Digital ID us an overcorrection to a problem that DOES need to be corrected, but not in a way that destroys various essential aspects of free societies.
> I don't see any biographical information listed in your profile so I'd initially assume that you value privacy on some degree.
Extremely powerful entities like the CIA or NSA could easily personally identify me from my HackerNews profile if they wanted to, as could a dedicated attacker. The problem with "privacy" on the internet right now is that it's a lie - you only have privacy from your peers and ordinary citizens, but not from powerful entities. It would be better if we had a level playing field and everyone could be identified by everyone. Then the normal evolved human behaviours of trust-based social networks could function properly, and we could also fight AI-bot-based social media control, scam, and fraud.
It's not "privacy" it's "information asymmetry" which I'm attacking.
We will see how your opinion changes when someone steals your ID and voice and you end up being defrauded due to the government chosing the cheapest Indian shop to mishandle your data
There is a vast difference between being scammed and being defrauded. The latter can happen without any interaction with you by criminals using your leaked personal data. An AI empowered scam is just that. A scam can be avoided. Leaked IDs, voice and identity not so much
My point is that the data that you don't provide cannot be leaked
> would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
We should sedate everyone and lock them in secure concrete cells, with food and water provided through tubes. My proposal will save far more than a single child from being killed. I really think the "pro existence" activists don't understand the scale of the harms, and how we can prevent them all by having everyone be permanently unconscious!
> Trust is primarily a function of reputation, peer pressure, and legal consequences.
The trust is somewhat of a one-way street. We are supposed to trust the entities in power. If we break their trust, there are consequences. If said entities break our trust, we can do little about it.
> I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
For some, perhaps. However, I also would rather protect people from a potentially grim future. What is permissible and acceptable now may not always be the case in the future. The Holocaust, for example, only ended 81 years ago. The notion of another one, even against different groups, seems completely infeasible -- the same as the first one.
> I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
Tone is hard to read in text, but are you be facetious? If not, you are essentially saying that you would support shutting down the Internet to protect even just one child. Yet, despite these real and active harms that already exists, you will continue to still use and profit off the Internet in the meantime?
> you will continue to still use and profit off the Internet in the meantime
If I stop my internet use that won't save anybody, so there's no point in doing it. If shutting down the whole internet is necessary to save a life, I would support it. The only reason I don't is because that's not possible and even if it were possible it would not actually save more people than it would harm right now.
There are several holocausts going on around the world right now. It's not completely infeasible for there to be another one. I doubt there's been a time in human history where there wasn't at least one holocaust going on.
The German one stands out only because they fought us.
I concur, but only if we sub genocide for holocaust. The two terms are similar, but not interchangeable. I think the distinction is important, but that should not detract from your main point.
You do have a point that Holocaust stands out because we fought Germany, but I would also argue that what makes the Holocaust unique was the speed and efficacy in which it was systematically carried out.
I've heard that we could use zero-knowledge ID proofs to show someone is of age without revealing any more but I don't think that's the plan and the demand for age restrictions doesn't feel like a grassroots effort of concerned parents. It feels like an NGO/bureaucrat driven law and I assume its purpose is to de-anonymize people on the internet.
In some cases, it also seems like a lobbyist-driven effort that would benefit certain companies likely to be hired by the government to provide identification services.
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
That's the cynical view, yes, but we can see educational standards and performance going down in the United States, we have seen plenty of scientific and medical studies showing problems with children and more specifically teenagers using social media. I'm not one to want to want to limit someone's rights, but it seems like the trade-off here is in favor of requiring age verification at least for social media companies.
Separately I still don't fully agree with concerns raised regarding social media and identification for everyone. Bots, people who are online just stirring up trouble, &c. are causing pretty significant challenges and problems for society. If you spew a bunch of racist stuff for example I think people deserve to know who you are.
And you know we do this all the time. Folks want gun registries and things like that (and I agree, as a matter of practice, but not principal) so I'm not sure why we're ok with that form of requiring identification to exercise your rights and against this one other than political priorities.
Maybe requiring identification to speak online is not the intent but it would likely be the practical effect of the laws that were originally intended just to help children. It's not enough to think about laws' intent, but also their practical effects.
We haven't even mentioned the censoriousness that already takes place in various online forums not because a user said something racist or was stirring up trouble, but because moderators were vindictive, petty, or lazy, or because the automated moderation tools in place were heavy-handed and unintelligent. I don't look forward to that kind of moderation spreading everywhere and made more efficient by reducing everyone to a single identity. (Maybe Joe Contrarian has some opinions worth listening to, but it's just easier for the moderator of a forum to see that he was already publicly blacklisted by another unrelated forum, and just blacklist him on this one, too.)
At the end of the day they are private websites and the owners get to decide all of that stuff. Start your own, or just stop posting and let such folks have their echo chambers. One of our problems in society is that folks seem to think there is a need to post on the Internet on some forum - stop giving others power over you. You’re just posting to a bunch of anonymous people. They may be bots for all you know. Who cares?
> Maybe requiring identification to speak online is not the intent but it would likely be the practical effect of the laws that were originally intended just to help children. It's not enough to think about laws' intent, but also their practical effects.
Right we should analyze trade-offs. But you are quite focused on censorship which I am also generally concerned with. But are you really being censored by being identified and associated with what you say online? In public you aren’t anonymous - why must that extend to this digital public square?
It will spread to everywhere else if we allow it for social media. In Australia for example, mandatory age verification has already spread to video games.
Big social media companies are likely overjoyed to be able to get discrete, government issued info of a person's full legal name, date of birth, residential address (as is printed on US drivers licenses) for advertising and demographic profile targeting purposes. And then be able to correlate it with their existing social media history/clicks/profile, browser fingerprinting, IP address, daily usage patterns, geolocation. It's a massive gift to them.
I doubt they need that to identify you. There are also lots of other problems like algorithmic manipulation. But also just stop using these junky websites. Everyone always complains about Meta doing this, TikTok doing that, and it's like if all they do is make you mad, stop being their user/customers?
It's very hard to stop being their users/customers when they're the only platform where people are gathering for that particular purpose. The nature of walled gardens and network effects often mean that there isn't a viable alternative.
It's bad when the choice one has is between 1) using a platform that's significantly problematic or 2) being disconnected from everyone you'd like to connect with because they're only using that platform.
It’s pretty easy. I haven’t had social media besides LinkedIn since, I think 2013? I participate in all sorts of events, I know about things going on in my neighborhood and city, and I have quite a few friends. You don’t need this stuff and it’s just going to suck up more and more of your time and attention misleading you in to believing you need it.
You’re not connected with anyone. It’s a surrogate activity.
Be careful saying you don’t use social media or soon you’ll have a wholly off-topic sub-thread about whether or not HN is social media too, even though we’ve all read the same tired arguments from both sides about a billion times in other threads.
You're right, and if someone wants to say I have social media because of this forum that's totally fine. I just mean I don't use any of the major social media platforms, well, except LinkedIn. And I just haven't gotten over the hump yet on deleting that one too.
Good: some commenters here realize it's an attack on privacy
Bad: some still entertain the idea that we should do age verification using some sort of crypto primitives
There is no reason for age verification at all.
I am from the goatse generation. Rotten.com. steakandcheese. Horrific stuff tbh, I mostly stayed away from it, and I didn't need a helicopter government to protect me from it.
The moment you accept the narrative that kids need to be protected from the Internet you have already lost.
You've already condemned those kids to a life of slavery. So much for protecting them.
What we need is not online verification, but a competent government that does its existing job well.
Who's been arrested over the Epstein files? Who is protecting those kids?
No one.
That same government wants to "protect" your kids by KYCing everyone.
Over a decade ago, on the website of a cable news network named after vermin, you could watch an uncensored video of terrorists setting someone on fire.
Right? I especially don't understand where some of the "think of the children" attitude on porn sites, as they for the most part already ask for your age and if you didn't get some kind of amusement out of seeing tits as a teenager you're a liar
It's a function of our society becoming more puritan and conservative in the past 10-15 years. This has been a slow burn.
We are back to perceiving viewing boobies as an existential threat to people. Currently, sexuality is being demonized all around, and sexual morality is once again becoming a currency in society.
I encourage people to talk to some Gen Z kids. They're much more puritan than millennials. They're focused on virginity and the moral superiority of monogamy. It's bizarre.
I spend most of my social media time on tumblr, and it's really funny to see the whiplash of attitudes between the older and younger gen z. The younger ones tend to be the puritans and the older ones are all polyamorous bisexual furries who want to have sex with robots (obviously exaggerating but not by much)
Nah, that already didn't work because corps are very good at creating network effects in children and will set up multi-billion-dollar businesses around them. And then the kids with protective parents become the weird ones in school. I'll die on the hill of curtailing this stuff in a privacy-preserving way.
> I'll die on the hill of curtailing this stuff in a privacy-preserving way.
At some point you'll realize the contradiction in not trusting these "multi-billion-dollar businesses" to the point that you are risking enslaving humanity and "dying on this hill" and yet at the same time trusting those same businesses to implement this dystopian system in a privacy-preserving way.
When that realization hits, it will be a loud sound, possibly heard by nearby telepaths.
Unfortunately, their most prominent call to action doesn't seem to address the various state-specific and non-US legislation (focusing on KOSA instead). Here it is:
There are lots of ways to implement identity verification while preserving privacy. It's actually a super interesting engineering problem. Estonia has an excellent model to build on. The government can maintain a "traditional" ID system based on documents and in-person verification, and provide you with a device similar to a yubi-key or Bitcoin hardware wallet that could be used to share specific, cryptographically verifiable claims with third parties, like your age, or even just a boolean "over 18", but also your name or other information if you choose, with a way to control the access and audit which parties have verified which claims with the govt.
In Poland online banks to that. You can verify your identity for government purposes with the use of your online bank. No need for government to set up a scheme to confirm millions of people in person.
govt, banks, whatever. I don't care who administers the system as much as I care about the fact that they're highly regulated, and you have the control needed to expose as little information as necessary to confirm things about yourself to third parties (like just "over 18" for many sites, or your full identity for other things if necessary).
We simply don't need online age verification. It's not the state or private business' job to parent children. It's their parents job.
This is not only unnecessary, but will with 100% certainty lead to negative downstream affects, either via leaks, or the state being able to find people for things that aren't crimes once they're adults.
There's simply no good reason for it that outweighs the bad. But what it really boils down to is completely unnecessary.
Really the hill to die on is that the first amendment should preclude any content-based restrictions for anyone. If you believe children shouldn't be exposed to certain materials that's between you and your kids, and should not involve the government whatsoever
Nothing against Twitter, but I just don't feel like logging in, so that site makes it way easier to read this. Also it doesn't take like 900TiB of RAM to render.
While we've been agonizing over Age Verification (real or planned), Greece has apparently introduced a ban on anonymity on social media. I'm not liking where the world is headed, but I have no idea how to push back against it.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there are. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through are impatience and self importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half arsed way. I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
A photo identifies you. This is the digital equivalent of having a photo taken of you upon entering the mag store, stored digitally forever, shared with government, and tied to every magazine you read and purchase.
> I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification. It's not explicitly part of the proposed law but there are only a handful of companies who meet the qualifications to provide this service (id.me, Persona) and this is how they do it.
I believe if you are a "minor" then you can go the post-a-selfy route.
If someone wanted to be a martyr and just uploaded all their personal documents so they could be accessed by everyone, I wonder if an interesting court case might follow.
I could imagine it ending with a court ruling that people are responsible to protect their own personal documents which... yeah, that would muddy the waters in a world where every website expects to see your ID.
The verification apps are starting to require live video selfies to verify that the person doing the verifying is the same face as the person on the scanned ID credential.
> In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification.
That's not just the plan - that's what's already legally required in many US states.
These laws were introduced by the explicitly religious right-wing groups like Exodus Cry and Morality in Media, as ways to de facto outlaw pornography (in their own words). They've since been laundered into the mainstream so the general public is unaware of the root cause.
Whether it can be done this way is besides the point. It is about how regimes like ours in the US that have demonstrated an interest in spying on their subjects choose to regulate this over time.
Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.
Much like brexit or for that matter trump reelection I just don’t have much faith in wisdom of the democratic collective consensus anymore and I don’t think it’ll get any better in an AI misinformation echo chamber world. Onwards into dystopia
Contacting my representatives is about as effective as making a silent wish. Whenever I've done it, I'll either get no response, or a boilerplate reply which basically says "I'm doing this, go fuck yourself". Then I'll be added to their spam list. The truth is that my reps don't represent me and they're going to do what they want regardless. After all, I'm not the one backing the truck of money up to their front door.
Took forever to get a response and likely achieved little, but to their credit the response wasn't entirely canned and did at least give the impression that they understood what I'm saying
I don't know what the correct solution to this problem looks like but in my opinion it's not only the kids that need saving. Social Media is eroding our societies and adults are just as hooked if not more.
Since it is so harmful to let children use social media, why aren't parents being put in prison for abuse and neglect when they let their children use social media? Why should everyone else have to suffer when it's parents that should be punished?
Because this is a golden opportunity erode privacy riots under a complete guise of "protecting the children." Same goes for "preventing terrorism" and various other attempts to appeal to authority.
Kids will always find ways around regulation. Look at cigarettes, vapes, alcohol, weed; they will just get it from their dealers. Pornography? I expect something like: download a Torrent, get it from a classmate, share HardDrives in school, get it through an older brother.
And porn companies should always be held responsible for not doing their due diligence and freely distributing porn to minors. Which is already illegal in teh US and most places.
I'm not suggesting that actually. I look at my nephews and see them buy cigerettes, vapes, etc from small dealers instead of stores. Not saying we should just let them smoke, just expecting that they will be able to circumvent online age restrictions as well.
My question is: are digital age verifications the best way to protect kids from harmfull effects of pornography? And my worry is: what unwanted side-effects will age verification have for our society as a whole.
After reading these comments, I don't want to hear any of you suggest that kids shouldn't be allowed to have unrestricted access to smartphones or social media ever again.
I don't understand. I would assume most people don't think small kids should smoke crack. That doesn't mean that they are automatically in favor of creating a 24/7 survelliance state just to prevent that from happening.
it didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity they could shim between token generation and application. Allowing governments to produce identity tokens and applications to verify them with the shim blocking each side from knowing of the other.
I agree, doxxing yourself to some shady gray-market adjacent data broker is not acceptable as age verification, and age verification was safer using the honor system as before. But for some communities, especially social media communities, some kind of verification is better than none, otherwise what's to stop them from being overwhelmed with alt accounts that are used simply for harassment or other targeted objectives?
People should not be able to misrepresent themselves on the internet, it may have been safe in low volumes but it is scary now and will be outright dangerous as a modality in the hands of AI agents. If you think teen mental health is bad now, wait until social media campaign capabilities previously only available to nation states fall into the hands of ordinary school bullies.
Maybe age verification isn't the way to mitigate this obvious risk, but there has to be something that can be done to stop rampant sockpuppeting.
Age verification requires identity verification once — but it doesn't require revealing your identity ever to a third party. With FHE (fully homomorphic encryption), identity data is encrypted on your device and never leaves it in plaintext. Not to the merchant, not to us as the verification service — nobody. We only compute on encrypted data and return a yes/no. I'm building this at identified.app
Honestly, not even in favor of legislating any kind of increased device-side control or age gating. I understand the "this should be up to the parents" angle but I'd push it further: modern tech already allows parents too much control over their children. Freaky helicopter parents are already perfectly enabled to spy on their kids location, device usage, inspect and monitor their conversations, and it's already normalized to an insane degree. Absolutely no reason to make it an out of the box experience to tempt otherwise sane parents to go mad with that kind of abusive power.
To anyone reading this, please take the extra step beyond striking down age-verification laws, and start taking measures to prove to Congress that it's not needed.
Your nextdoor neighbor whose misbehaving child that's permanently on their phone? Help them out.
Your friend that joked about sending death threats to someone? Scold and report him.
That girl endlessly scrolling Instagram? Get her help.
Please take a step back and examine how insane the internet is and how it's affecting our everyday lives. Political violence and mental illness is increasing, and the internet is solely to blame for this.
"If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary."
Federalist 51
We're all too familiar with the latter part of that quote, but we're completely oblivious to the former. At this point, we've all but proven that the government needs to step in and regulate internet access. And unfortunately for us, they're going to do it in the most dystopian, authoritarian way possible.
I want to be on the side of freedom and strike this bill down. But when it is struck down, everyone is going to cheer, go on their merry way, and continue to let demorilization, radicalization, and mental illness infect the psyche of the everyday human being, and do nothing about it. And then the cycle will repeat itself.
At this point, I actually hope this bill passes. Not because I want it to, but because maybe then everyone will stop using the internet for everything, and some sanity will return.
We can't just place sole blame on "the internet". Who made "the internet" the way it is today? By and large, it's the same people who are pushing age restriction and verification: Meta. They do not have your best interests in mind. They only seek to deliver new ways of controlling you.
I'm not a fan of online age verification, but this is completely absurd:
> Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...
No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into massive first-amendment constitutional hurdles. It would be struck down easily.
There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.
We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
I agree that it's an exaggeration to say that every website, platform, app, and service will require identity verification. I don't think it's inconceivable to think of a future where every website, platform, app, and service that matters will require identity verification (every one that has a significant userbase.) I can easily envision a future where it's impossible to anonymously or pseudonymously post a controversial opinion on any online forum where it's likely to be seen by a significant number of people. Such platforms are likely to be targeted by whoever mandates identify verification and imposes penalties for not implementing it.
To the contrary. America has incredibly strong first amendment speech protection. Any kind of legislation that would prevent people from reading or posting political speech unless they verified their identity, especially where it is popular, is going to immediately be ruled as unconstitutional.
That's the difference. Porn sites aren't places intended for the free exchange of political opinions so age verification can pass muster. If anyone tried to do that with newspaper sites or political blogs or anything like that, the courts would shut down that law instantly.
We've spent the past three decades trying to invent ways to deduce identity and build profiles of what would otherwise be anonymous users. When the government steps in and compels people to formally identify themselves by their government names, what would you expect these companies to do? They're not gonna say "no thanks."
Why the heck would the government compel people to formally identify themselves to read or comment on a newspaper or a blog? That's absurd and unconstitutional in the US.
You're starting from an assumption that is invalid to begin with.
I don't know. Why would the government compel someone to formally identify himself to put cash in a box at the bank? Why would the government compel people to take off their shoes to get on a plane? Or submit biometric data drive a car? KYC for a phone line...
It's not invalid. I have no reason to believe that this isn't going to creep.
We have extremely strong first amendment protections that form part of our constitution. That's why it's not going to creep. It would be a blatant violation of the first amendment.
I think your interpretation, even if correct, is not the current position of the legislature. This post and the thread attached to it is about how it's currently happening. Personally, I don't see a future where you don't have a digital ID. If the government can compel you to provide an ID to, say, travel or operate a vehicle in public, I don't see a compelling 1A argument that it can't do the same to operate computing device on the public internet.
While I don't agree on your characterization of the legislature, it doesn't even matter. That's the whole purpose of having checks and balances, and a Supreme Court that can strike down unconstitutional legislation.
And your analogy between driving a vehicle and posting on a website doesn't work because there is no constitutional protection for driving vehicles or taking commercial air flights. However there is a constitutional protection for speech, above all political speech. That's the difference.
Hopefully this will give yet another push towards decentralized, open source services. Platforms where noone and everyone is responsible and the state does not get to decide the rules.
I don't think most people have been inconvenienced enough yet. ID verification is invasive enough and should cause enough friction to push another bunch over the edge.
This seems hyperbolic as it's actually a long path between age verification to full digital identity tracking. But I agree that pushing the burden of verification to websites is ridiculous. Like the GDPR requirements where every webpage has an annoying consent modal, the verification and preferences should be controlled on the device you use to access these digital services. My browser should know and enforce my cookie preferences in a way that has a uniform user experience. Likewise, if I am a minor, my parent should provide me with a device (or profile on a device) which knows my age and can use that to inform online services of the age of the user rather than needing to go through a separate process for each service.
For a forum that supposedly consists of hackers and tech-savvy people, this number of comments supporting age verification is concerning.
The author has said a lot about what kind of future awaits with mass surveillance and AI, but I believe it’s not enough. Technofascism Is not that far away.
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States have broad power to do what is being feared in the thread and haven’t already and to think that they’re waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think incorporating an identity layer into the internet itself should happen with the right protections for users and should have happened at the beginning of the net and is probably a result of lack of foresight by the creators of ARPANET.
Social Media is not a thing at all. Social media is a website. Websites are not health or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection or go listen to sounds of 120dB if you want to test your hypothesis that light and sound can't be unhealthy.
Or maybe you aren't being litteral and are just saying that what children see and hear has no influence on their developmemt. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing I believe two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide or you dying from spousal violence or your protracted grief and wasting away because your child accidentally killed themselves. Like to say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful, is frankly messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
"Age verification is the Trojan horse. And once it is inside the gates, the surveillance state becomes operational."
Braindead meme. "Age verification" is not a "Trojan Horse". No one, regardless of age, _wants_ to use age verification. They are being effectively _forced_ to ask for it or use it. Age verification (identity verification) is a tradeoff. A "Trojan Horse" is something that people actually want, not an obvious tradeoff, a sacrifice, a compromise. No one is being "fooled" into complying with identity verification in the form of age verification
The surveillance state is already operational. If you use "platforms" then you are already inside the gates with the enemy. The surveillance apparatus is operated by so-called "tech" companies that perform data collection, surveillance and online ad services as a "business model". These companies provide access to and information about internet users to advertisers and law enforcement
If "age verification" dissuades some people from accessing "platforms" (servers) run by so-called "tech" companies, then that is a loss for the companies and a privacy gain for those people. The "hill to die on" is not using "platforms"
These companies are the reason that "age verification" is proceeding. They push the allegedly harmful content because it makes money for them. Further, the companies' "platforms" make "age verification" possible. This is because they intermediate transmissions between internet users through these so-called "platforms". Governments need not comply with laws that protect individuals from government surveillance when they can target "platforms" instead
It is disturbing that anyone would want to "die on a hill" to save "platforms" from "age verification". These third parties are surveillance companies. They built the surveillance state. They already know who you are, they do not need government-issued ID
If the people spreading this "Trojan Horse" meme cared about surveillance, including identity verification, then they would not be defending "platforms" from regulation, they would stop using the "platforms"
Usually Fear is the realm of governments. Modern republics are basically legitimized around the fears of something terrible happening, it can be communism, narcotics, the ozone hole, corona virus, terrorists, immigration, globalization, unrecycled waste or greenhouse effect.
Private entities being frontrunners in AI Fear either means that these companies have too much unchecked power or that they have are covert instruments of governments.
ironically i think we need more social and stronger local social networks that have high identity validation and are "safe" spaces for the plebs. so that the perceived "threat level" from the free internet gets lower. basically hide the real internet a bit behind a small rock.
its a slippery slope but it might be the better strategy unless some democratic societies achieve to put more modern "freedom guarantees" into their consitution.
There is a suddenconcerted international push for online age verification, and we do not know where this push originates from. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
The push for online age verification started gaining momentum globally, with several countries implementing regulations. Here's a brief timeline:
- 2022: The European Union introduced the Digital Services Act (DSA), establishing a framework for digital services accountability and content moderation.
- 2023:
- *France*: Passed a law requiring age verification for social media and porn websites.
- *UK*: Enacted the Online Safety Act, requiring "highly effective" age assurance for platforms accessible to children.
- 2024:
- *Australia*: Announced plans to ban social media for under-16s.
- *Italy*: Implemented mandatory age verification for sensitive content websites.
- 2025:
- *Denmark*: Proposed banning social media for under-15s.
- *Malaysia*: Required social media platforms to ban users under 16.
- 2026:
- *EU*: Rolling out digital age verification across member states.
- *Norway*: Proposed banning social media for under-15s.
- *Spain*: Announced plans to ban social media access for under-16s.
Alternative take: The fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation the US electorate and ultimate impacting the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
Disagreed. I'm against invasive age verification methods, but to allow innacurate expectations to proliferate often becomes a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as it prevent betrayed expectations while still showcasing the unnacceptible downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
It's not necessary to give it to every website. Verification to the website can be a true/false from the OS. In fact that's how it already works now.
I would say it's not really an ID no, which is the point. The post is claiming that a digital ID is necessary for age verification, but clearly it isn't.
Reminder: Age Verification are not being passed to protect anyone but social media companies. But in addition, they will be used for a massive surveillance state. This is the DMCA of the 2020s, but far worse.
“It profits me but little, after all, that a vigilant authority… averts all dangers from my path… if this same authority is the absolute master of my liberty and my life.”
I would say be careful what you choose to believe. Online identity verification is the only way to end the war that’s being waged on the American people by foreign states via social media. If I were a bad actor, I would very much want to convince the public that this is a bad idea.
No, I would say it's not that easy, at least not on the scale that they're currently creating accounts. There are ways to do further verification like credit agencies do or how Google does it for businesses (you have to be able to receive a physical mail item with a card containing an ID code).
Enjoy dying on that hill then because without mandatory ID for potentially harmful services like social media, we will continue to descend further into the brainrot that many of you suffer from today.
Brainrot isn’t wrongthink. Brainrot is brinksmanship and zero sum discourse. As a member of the public it’s virtually impossible to know where the real consensus is on any issue today due to wishful thinking backed up by gigantic botnets. Brainrot will make people certain that they’re part of some majority consensus to the point that they will fight legislation like this because being provably part of a fringe line of thinking would cause them psychological pain. Right now, everyone (including the “moon mission was fake” fringe) thinks they’re part of a majority consensus. Even sovereign citizens and flat earthers believe they’re in a much larger cohort than they really are. A lot of these ideas are harming people offline in addition to degrading their personal mental health.
I’m betting that bot activity plummets once accounts are tied to real identities. That’s a discourse benefit. I’m also betting that discussion will become a lot more rational once people have to put their names on what they are saying. Death threats also become more easily prosecutable.
If it was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse and harm to everyone so that we wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile ons, snuff videos, the spectrum of grift, culture of abusing the "other side")
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, its about large media companies not wanting to take responsibility for their own shit.
meta desperately want kids to sign up. There are no penalties for them pushing shit on them. If an FCC registered corp had done half the shit facebook did, they'd have been kicked off air and restructured.
So frankly its too fucking late. Meta, google and tiktok will still find ways to push low quality rage bate to all of us, and divide us all for advertising revenue.
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take it to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
Because it's very easy for the creeps already thinking of your children to paint these rejecting this type of the laws as those who want to see children hurt.
Regardless how stupid this argument is, rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
> Pretty much every person in every culture cares deeply about their children.
I would substitute "deeply" for "superficially". Like, if my parents found some way to prohibit porn when I was an adolescent, I wouldn't say they cared deeply about me. I would say they were misguided and authoritary. The "care deeply" idea you are putting forward is just trying to distil whatever societal norm currently is into the youngs.
Because it is the moral responsibility of adults to care for not just their children but all our children. Occasional surrendering of rights is appropriate in that endeavor.
Because adults remain children. As in, their parent’s kids and therefore property. [edit: I should mention also property of the state beyond that] It’s less explicit in US I guess but in some places that’s very blunt - if you don’t support your parents enough you can be sued for abuse. And there are situations where an adult in us has been declared too irresponsible and forced into conversion camps by parents in the US. It’s insane, yes, and if you’re lucky enough this might be entirely invisible to you. But if you’re gray or trans or autistic and get a but unlucky this can become a very harsh reality.
Protect the children refers to a type of property, not a type of human.
I agree. I don't call it "age verification" though - it is age sniffing. And it has nothing to do with children - that is the lie.
What is fascinating is to see how governments ALL fall for it. There is zero resistance. This is fascinating to me. It shows how little real effort is necessary once you have the lobbyists in place. Kind of scary to witness too.
It is an apartheid system. All apartheid slavery systems will eventually die, so age sniffing will die too. But it will most likely be a long fight as more and more money will be invested by crazy corporations such as Palantir and others.
The whole "debate" is already not logical by the way. Let's for a moment assume the "but but but the kids!" is a real argument rather than a strawman argument, which it is. Ok so ... I am a "concerned parent", for the sake of discussion. I have three young kids. I am not a tech nerd. The kids see "unfitting content" on the antisocial media such as facebook and what not. So, what do I do? Well ... they have a smartphone? Aha, so ... I am not so concerned? Having no smartphone is no option? Ok so ... I say they can have a smartphone, but they may not use antisocial media. Ok. First - in any free society, is it acceptable that this kind of censorship is done on ALL kids? What if I, as a parent, do not agree with this? Well, tough luck - the laws force you into the age sniffing routine suddenly. But, even those parents who want the state to act as totalitarian: why would I want to hand over control to ANY politician for that matter? That makes no sense to me. I am aware that some parents may think differently, but do all parents think like that, even IF they buy into the "we protect the children" lie? I don't want ANY information from ANY of my computers to go into private hands here. So the whole argument already makes zero sense from the get go.
Of course those who know how things work, they know that this is the build up towards identifying everyone on the world wide web at all times AND to make access to information conditional, e. g. if the state does not know you, you can not access information. Aka a passport system for the www. Built right into the operating system too. Windows already complied. MacOSX too. The battle for Linux will be interesting; it may be some hybrid situation, like systemd. And the systemd distributions will all succumb to age sniffing, courtesy of Poettering "this is really harmless if we store your age in the database, just trust me".
>And it has nothing to do with children - that is the lie.
You're not qualified to say that because you aren't a proponent of age verification. That's just imputing motives.
As a proponent of age verification and can tell you it's absolutely about protecting kid from damaging services like porn. It's a common sense control and that's why it has bi-partisan support in the US during a time where there is nearly 0 bi partisan support.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
but I think the parents counterpoint was that the important parts of the internet (paying bills, buying things, registering for classes) don't require or presuppose anonymity.
You took away the context of my question and thus gave an irrelevant answer.
The subject at hand is age verification and anonymity on the internet.
For effective communication, one usually tries to make their input to a conversation relevant. https://en.wikipedia.org/wiki/Cooperative_principle#Maxims_i...
Even then your answer isn't really an answer, because you're giving examples of things the internet is required for. Certain situations can require having a car, that doesn't mean you need to care about cars more than the minimum necessary to operate one.
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop skip and leap from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
> The best way to not be in a digital cage is to opt out of the current digital products.
Bullshit. These are all-encompassing monopolies and government services. More likely, they'll ban you and you'll end up having to go to court out of desperation to demand that they service you.
This is very limited thinking. If you lacked this sort of imagination 20 years ago, you wouldn't have been able to predict today.
> Frankly I would welcome a world in which kids are not using Instagram or TikTok.
This is the sort of passive reactionary nonsense that causes the danger that we're in. Everything isn't something to give up lightly, even if you think that it will force your neighbor to turn his music down, or get rid of bad reality television. I don't like kids on social media either. I don't like adults on it. I think kids are suffering more from surveillance than from TikTok.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite awhile ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
Why? Kids have had access to the internet for over 30 years. What is the tiktok brainwashing (I don't use it), and how do you qualify the danger of it from say google news brainwashing, or even (gasp) public school brainwashing? I mean, if we're going to group ban information, at least let people in the local communities make those decisions. Otherwise, we're going to get the Epstein class making these decisions.
I see the societal turmoil and strife this will feed as much more dangerous and concrete than some abstract high-minded discussion about slippery slopes. Our society is being torn apart as we speak. We don't have the luxury of philosophizing about what-ifs.
Dogs are on to something! Tone matters in persuasion. A whole lot. If the author were interested in persuading (as I assume he must be, given his strongly held convictions) then he should consider his tone more carefully.
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
It's mind boggling how far Stallman saw into the future. Saddest part is we're losing this war. They're going to destroy freedom of computation, freedom of information, and it turns out that... Nobody cares. Nobody but a bunch of nerds.
For a start, child are parents responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to me said on the matter if that’s as far as it went, but it isn’t.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word ‘online’ from the above paragraph, because on is the real world. Touch grass, but there’s no way online isn’t real. Are these words not real simple because I telegraphed them to you?
And not distributing porn to children is a porn company's responsibility.
You are repeating a very common talking point but its not a good one.
Age verification laws make it possible to hold services providers liable for breaking the law (it's already illegal to distribute porn to minors in many places, like the US).
It's both true and completely irrelevant that parents should do a better job protecting their children from harmful services online.
Yes, my argument, to restate it, is that rhetoric can be misused to counterproductive effect, as is the case here.
Carefully note that I have neither affirmed nor contradicted anything of the substance of his argument. So defending his position to me is a non sequitur.
> For a start, child are parents responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes it's the parents responsibilities. Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
Responsible parties like porn companies that distribute porn to minors? Parents are still accountable with age verification laws.
If parents suck at parenting, they will suffer.
If porn companies distribute porn to minors, which is illegal in many places such as the US, they will not suffer. Unless you start holding them accountable.
Every major adult content site has warnings that you have to be over 18 when you enter the site. Its extremely easy to use parental controls to block these sites for a kid, and parental controls don't require violating user privacy.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand, and towards the perceived faults in the author’s communication style. This is a rhetorical slight of hand and it’s highly disingenuous.
It can't be disingenuous if I actually mean for you to take my argument at face value. There is no hidden motive that I haven't stated. I mean for you to focus on the author's communication style, in case you missed how bad it is, notice what's wrong with it, and seek better sources of information about the issue.
You have accused me of "rhetoric", but that is no accusation at all. Rhetoric is the art of persuasive speech. I have not accused the author of "rhetoric" but of "poor rhetoric". Perhaps that is what you mean to accuse me of.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously and without the content of your speech being sold to Walmart and the cops. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
Who is the "they" that you refer to? Did you know that many people are in favor of age verification? Like, many parents of children who are at a loss for how to protect them from early access to obscene material? Could it be that that is why a large segment of government actors are moving in this direction? Or must it necessarily be a secretive nefarious play by some evil tech companies in cahoots with that one administration?
Or is it actually the opponents of age verification who are the ones targeting my children by encouraging early access to obscene materials for grooming?
That last point sounds like a conspiracy theory. It should. I wrote it that way to be provocative, and I hope that you, as a result, dismiss it out of hand. But I want you to understand that TFA's argument also sounds like a conspiracy theory to me. If you want me to engage, you want to make a serious attempt at persuasion.
To that end, I do appreciate that you have not adopted the tone of the article.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
There is actually extremely little evidence for anything when comes to individuals, sexual content, and their sexual fantasies. There is even less evidence available for anything when it comes to minors.
I've read papers on the topic, and the good papers always point out that there is almost no focus on any potential positives. Many "authors" have already made up their minds before they've even started conducting research, and that's only if they manage to get funding (everything sex related gets very little or no research funding).
That's a discussion that's entirely tangential to age verification. However, I think porn should be illegal entirely as it's just prostitution. As such I think porn companies should not exist, the same as brothels or heroin dealers. If they have to exist for practical reasons along with other objectively harmful things, such as alcohol, marijuana or gambling, then obviously they should be regulated to ensure they're not targeting minors.
That does not detract from the fact that the people arguing for age verification are using "think of the children" in order to push surveillance.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
Im completely OK with verifying someone's age before distributing age-restricted services to them. That's what an age restricted service is, and obviously we shouldnt let porn companies distribute porn to minors (its already illegal most place). Just dont use porn, facebook, online gambling etc. if you dont want to share your identity.
I can see why it's unfortunate but the idea posited that that it's somehow illegal in the US is ridiculous. You have no right to watch porn anonymously at the expense of holding porn companies liable for distributing porn to minors.
Internet 1.0 was largely read only, ephemeral, or decentralized. Chat rooms, IRC, personal webpages, etc. There was anonymity and there were not age restricted services.
Internet 2.0 introduced age restricted services and the enforcement lagged. The enforcement is now catching up. You can still do all the Internet 1.0 things anonymously but you can no longer gamble online as a 14 year old and hopefully soon you wont be able to watch porn either.
Private companies now can link all your online activities to you. Not an advertisement ID, but directly to you and your loans and your health data and whatever they're selling in the black market. Every data breach is a 100 times. It was already almost possible to directly know about you by buying data, now it's easier.
The point of this is not to verify age really. It is to verify identity. There's no way to prove someone is some age without presenting a legal ID.
Also, it's not just porn, facebook, online gambling etc. It is the OS based on some bills. So ALL your activities.
> There's no way to prove someone is some age without presenting a legal ID.
Sure there is.
Verifiable Credentials and other similar standards allow this to be delegated in such a way that there is no need to present ID or even let the site know who you are. The site can issue a request to a third party that simply provides back "Yep, we attest that this request was approved by someone over 18".
Depending on the exact scheme, the request may forward you to a broker, who will then forward the request (and your web session) on to the trusted third party of your choice which has already performed ID verficiation (usually a bank). The bank sends a signed response back to the broker, the broker sends a signed response back to the requesting site.
Is it perfect? Maybe not 100%, the broker knows there was a request from a restricted site forwarded to a given bank. The bank knows you have approved a request. There is likely to be an identifier of some sort sent from the site all the way through to the back-end so you know you're not being MITM'd. But in theory nobody should have the full picture.
No practical way I should say. Realistically, it's pretty clear that lawmakers really just want to shove it through in the simplest way possible. Which is probably private third parties.
And private third parties are very shady. They have effective monopolies and no significant public face to care about. I think we have seen this pattern play out in healthcare, compliance and other industries already.
Also idk about banks being the effective gatekeepers to the internet and eventually all technology. Just feels like its not their place to do that.
This argument as framed doesn’t make any sense. Porn is (and WAS) Internet 1.0.
There was porn before most everything on the web. Porn is also speech / art.
Anonymous access should be available for any website that wants to share their content on the Internet provided they have the rights to that content.
States that seek to limit that could make a legal argument that they have the right to limit access, but in the end it’s infringing speech. Worse, it’s unenforceable.
And yes, I would make the same arguments for people posting hateful shit or misinformation.
The one and only method I will participate in is server operators setting a RTA header [1] for URL's that may contain adult or user-generated or user-contributed content and the clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements as they are already doing. RTA headers are not perfect, nothing is nor ever will be but there is absolutely no tracking or leaking data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine sites not participating into oblivion.
I a small server operator and a client of the internet will not participate in any other methods period, full-stop. Make simple logical and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed but one day it may come to that at which point it's back into semi-private Tinc open source VPN meshes for my friends and I.
[1] - https://news.ycombinator.com/item?id=46152074
This is exactly the way it should be done. Device with parental controls enabled disables content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.
Well why haven't all the big tech companies done it then?
They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"
And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content. If it was then this kind of solution would be being legislated for. It’s just about making everyone identifiable.
If it is about making everyone identifiable how come California's version doesn't require providing any identifying information when setting it up on a child's device?
Because "making everyone identifiable" isn't an explicit design goal. Rather it is merely an implicit imperative (of Facebook et al, who are pushing these laws) that casts its shadow over the design. That shadow is what results in a design based around sending identifying information from the client to the server. Once this dynamic is normalized, servers will demand ever-more identifying information and evidence that it is correct.
Note that this design does the exact opposite of giving parents control to protect their own children - rather it puts the ultimate decision making ability into the hands of corporate attorneys! For example, we can easily imagine a "Facebook4Kidz" site that does the bare legal minimum to avoid liability for addicting kids to dopamine drips, and no more. Client side software based around RTA headers would allow parents to choose to filter things like that out, whereas when the server is making the decision its anything goes as long as the corporate attorneys have given it the green light.
Facts!
> The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content.
The reason that mainstream politicians are pushing is because the public wants something done to protect their kids.
Are there likely to be bad actors pushing for it for nefarious reasons as well? Sure. Are the 'solutions' inadequate and often tech- and privacy-illiterate? Absolutely. Is the entire impulse to demand that government 'fix' this issue wrong? Maybe.
But the idea that this is all a smoke-screen from top to bottom needs to die. Not just because it's wrong, but because it's also unhelpful. If you wade into the debate saying "It's all a lie, this was never about the kids!" you're easily dismissed as a nut and an absolutist who doesn't appreciate that real people want their real kids to be protected.
Yep, and the tech companies had years to address these concerns and did not, so now the creaky gears of government regulation are turning. They (meaning YOU, a lot of tech company employees who are now outraged about this) could have headed this off years ago and provided a solution on their own terms.
So, why are those "real people" actually not willing to do their job? I am so pissed with parents who think the government is supposed to solve their own inability to raise a child.
We expect every other consumer product/toy that kids are intended to use to be safe by default. This is like asking why parents shouldn't be responsible for testing all their kids toys for lead paint.
Yet when it comes to internet/social media technology, it's suddenly a parenting failure if they don't pre-vet every platform and website and device before allowing their kids to use it.
As a society, we collectively protect kids from stuff they aren't ready to handle. We don't let them gamble, or buy alcohol, cigarettes, or porn. For the most part, everyone buys in to this and parents can pretty much count on it. Are there exceptions, sure but they create scandals and consequences when they are discovered.
But social media and content platforms didn't feel that they had any social obligations. They did not honor this societal convention to keep inappropriate content away from kids. And the top people at these companies actually don't let their own kids use the platforms, they know how harmful they are and they know about all the addictive hooks and dark patterns of engagement that are baked into them.
Well for a start not all of them are very tech savvy, and we've built a world in which tech is essential to their day to day lives, including for their kids.
If school demands the kids have a variety of devices to do their work, and they have no idea how to lock those down to exclude (for example) social media services that we know have been designed to be as addictive as possible, can you not see why they might want someone to intervene?
(edit: Beyond that there are also tons of bad reasons, I'm not going to try and justify them. There are a lot of bad parents and just in general people who are not firing on all cylinders out there. And many of them absolutely love a government regulation to be brought in for just about anything.
We can and should argue with these people and point out why they're wrong. But saying it's "nothing to do with actually stopping kids seeing the content" fails here too.)
If public school is supposed to be free, the school should supply the required devices and take on the burden of securing those devices.
For private schools, the parents are more involved in the first place, but I would expect them to also have guidance for parents to help the less tech savvy among them.
"should" is doing a lot of heavy lifting here.
Right. I submit we are solving the wrong problem. Just establishing age vertification doesn't magically make these vast amounts of bad parents good parents. There is a ton of other things they can and will fail at, which their kids have to absorb. If we really cared about those kids, we'd have to reconsider a lot of things. And I know what I am talking about, had to grow up with an undiagnosed ADHS+anciety mother. It was hell. And even 30 years after i moved out, she still can't see what she failed at and continues to fail at. Age verification wouldn't have helped me. MAKING her seek treatment might have helped.
No argument here, I'm not saying they're right to demand that age verification is brought in to protect kids, or that we should give up privacy etc etc.
But coming at it from the angle that "It was never about protecting kids!" is itself incorrect and unhelpful to the debate.
Your lack of understanding why age verification does not constitute it being a conspiracy for another reason. There is a antiregulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be accountable and we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay for journalists to tell you that it's all about violating privacy or something. (The same folks will tell you opening up Android for third party AI tools would be a privacy and security risk, and not ask you to notice it would just cost Google a lot of money.)
We've been running essentially a social experiment on our kids for the past two decades and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and most "privacy services" like disposable email and VPNs are the primary source. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc. which are all the direct result of our failure to regulate social media companies. Section 230 being the primary problem.
OS-level age verification is likely the best route, as private information can remain on a device in your control, and a browser then just needs to attest to websites whether or not the user should be allowed access, without conveying more detail. Obviously anyone with a Linux box will have ways around it, anything based in your own device will be exploitable in some way, but generally effective for the average child.
Any "verification" means unacceptable privacy violations.
The best route is better parental controls, that are not enabled by default. Locking down the OS like ransomware until the user submits to age verification is the wrong approach, and what Apple did in the UK needs to be highly illegal.
> Any "verification" means unacceptable privacy violations.
So I'm not necessarily arguing for age controls here, but purely on a technical level what do you think of schemes like Verifiable Credentials, which delegate verification to third parties that have already established your identity?
In theory you can set up a system that works like this:
1. User goes to restricted site and sets up an account
2. Site forwards them on to a verification service with a request "IsOver18?"
3. User selects their bank from a dropdown on the broker site
4. Broker forwards them to the bank, with a request "IsOver18?"
5. User logs in and selects "Sure, prove I am over 18 to this request"
6. Bank sends a signed response to the broker "Yep"
7. Broker verifies and sends its own signed response to the site "Yep"
8. The site tags the account as "Over 18 Status verified"
In this situation, the restricted site doesn't get anything other than a boolean answer from the broker. The broker can link a request to a given bank but doesn't get anything that gives away your identity. The bank knows your identity and that it has approved a request, but not necessarily where the request came from.
Verification broker tracks sites which make requests and records it attached to personal data. Site either sells or leaks personal data along with history of all sites visited which require age verification.
Also your solution requires a bank account, not something everyone has. Many do, but not all. Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
> Verification broker tracks sites which make requests and records it attached to personal data.
How? What personal data?
The broker doesn't get anything other than "Site X wants to verify over 18, the user selected forward to Bank Y" and "Bank Y responds with TRUE"
> Also your solution requires a bank account, not something everyone has
True. Banks are only one example of an already trusted identity provider in this situation. But I get that there are gaps.
> Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
Verification need only happen once per site, when setting up an account. This does introduce the possibility of a secondary market for approved accounts though, sure.
User installs a browser extension which forwards the request to everyoneisover18.com, owner of that site has a script set up to log into their bank and pass the verification challenge
Restricted-site.com gets the signed response from the broker, not the bank. In your situation there's not any need for "everyoneisover18.com" to defer to a real bank for a faked response as it signs things itself.
But restricted-site.com doesn't trust everyoneisover18.com's key, it only trusts realbroker.com's key, so the response isn't accepted. If it is found to trust fake brokers like that it gets in trouble with the law.
That's why everyoneisover18.com forwards the request to my bank or my broker and gets my signature on the behalf of literally anyone. I may charge them $5 for this service.
> That's why everyoneisover18.com forwards the request to my bank or my broker
Doesn't work. The response won't be signed by real-broker.com.
The permission request/response itself goes direct from the server at restricted-site.com to the server at real-broker.com over TLS, so you can't MITM it, it's not controlled by the client and you won't be able to just pass out a cached response.
Your malicious client plugin could potentially forward the client session details to you, so you could operate the broker page, then log in to your bank's portal and approve that request, but I don't think that's going to scale very well and I imagine your bank is likely going to rate limit you.
real-broker opens a web page allowing them to verify somehow. The browser extension sends me their URL and cookies so I can load the same page and verify myself. All automated of course.
You could, you could also go to their house and go through the process for them, but in either case I don't think it's going to scale very well (rate-limiting would seem to be called for, maybe with 2FA as well, to mitigate this sort of thing and remove the possibilities for automation).
But sure, you could subvert it on a small scale, just as you can borrow someone else's driving license to register in 'normal' systems already. You could also register an account, validate it and then sell the login details, regardless of what proof of age scheme you use.
The point is the scheme is no worse at validation than asking for ID and it protects user privacy by keeping all ID details away from individual websites, which is the more important part IMHO.
What rate limit would you recommend?
My cellphone provider will be pleased be paid to deliver all those 2FA text messages. Who's sending them? How are they getting paid? Maybe I'm actually my own phone company, so I get paid for delivering them to myself.
> Who's sending them?
Your bank, like they have 2FA for every other access to your account. 2FA also doesn't need to be via SMS, and even when it is that's dirt cheap. Rate limits can be a couple of approvals per hour with daily limits of a small handful. Or a leaky-bucket style algortihm where you can do a few at a time, but you only get one more per hour. Whatever way it's done it precludes your large-scale automation attempt.
I tire of this now. We've entirely wandered off from "Here's a way to prove age without the privacy implications, that works just as well as handing over scans of ID"
So if you have an actual point, please make it.
> These are facts, whether you like them or not.
[Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.
Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/ "Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"
> Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms
Indeed. For example, the strongest evidence for harm shows that negative mental health is correlated with increasing social media use, but it's an important question of whether using social media more causes mental health problems, or mental health problems mean more social media use (or both, which would suggest a spiraling effect is important to look out for and prevent).
> Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
This is entirely false scare tactic nonsense, and you really need to look at where you sourced that idea and no longer use them as a reference point. There isn't even a concept of a method of doing this that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if it gives you some idea where the goal in decisionmaking is supposed to be.
We have not just woefully bad parental controls, but in the name of privacy, modern platforms make it exceptionally hard to implement parental controls. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach needs to exist and Internet companies need to support it.
(Steam is, FWIW, probably one of the best actors in this regard already, Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam but this is not a place they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney).
> There isn't even a concept of a method of doing this that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if it gives you some idea where the goal in decisionmaking is supposed to be.
The Parents Decide Act (PDA) goes considerably farther than superficially similar sounding laws like California's.
The California law requires that an OS allow the parent or guarding to associate an age or birthdate with the account when setting up a child's account on a device that will primarily be used for the child. It does not require any verification of the age information that the parent provides.
The PDA requires that the birthdate be provided for anyone who has an account on the device, and leaves much of the details up to the Federal Trade Commission to work out in the first 180 days after is passed. The wording of the list of things the Commission is to do suggests that the OS is supposed to actually verify age information, rather than just accept whatever a parent enters when setting up the child's device and account, and that it also has to verify that it will require the birthdate of the parent and verify that.
A Steam Deck is just an Arch Linux box. There is, intentionally by by design, no method of securing it against its user. Anyone with root access can change anything on it. There is no method of enforcing an age verification scheme on it in a way that cannot be removed or altered by a sufficiently bright and motivated teenager.
previous discussion on the same topic: https://news.ycombinator.com/item?id=47509984#47526575
the conclusion from that discussion was that kids should not be allowed Steam Decks because that would provide them a way of getting around age certification.
The California bill, which is not called the Parents Decide Act, lets parents decide. The federal Parents Decide Act doesn't say whether parents can decide or not - it says a commission shall decide whether parents shall be able to decide, and we can predict what that commission will decide.
> any possible excuse to suggest tech companies shouldn't be accountable
The entire impetus for these bills is for Facebook (the sponsor of these bills) to escape liability for how they're currently harming kids. Facebook's only goal here is to be receiving headers that say the user is over 18, so they can continue business as usual under the assertion that any users must be adults.
Then you recognize that the solution definitely does not require privacy invasion, since presumably Facebook does not want actual proof because they hope teenagers will get around it.
That being said, the antiregulatory wonks are not all working for Facebook, and some are indeed manifestly just always opposed to any regulation at all no matter what harm is occurring.
Bear in mind the alternative: Things like Discord collecting personal data to do verification at the website level. A push for a simple "user is over 18" header is incredibly preferable from a privacy standpoint and parents being able to control and monitor it themselves.
This legislation does not require it out of the gate, but it sets up the precedent and the incentives such that it will eventually be required down the line. That's the problem with anything that gives more power, and the expectation of even more power, to the server (ie to big tech).
FWIW I personally would be supportive of legislation where the data flow went the proper way of server->client, for the user-agent to decide. Consider: Any website over a certain size must publish an appropriate set of well known tags asserting whether its content is suitable for kids of certain ages, has social aspects, the type of content, etc. Any device preloaded with an operating system over a certain marketshare must include parental control software that uses tags, as an option in the set up flow. The parental control software "fails closed" and doesn't display websites without tags. The long tails of the open web, bespoke devices, new OSes, etc remain completely unaffected.
>If it was then this kind of solution would be being legislated for.
What's more likely a global conspiracy to get age verification passed to allow these unnamed groups to identify everyone for some unknown purpose or politicians just not understanding tech?
The way people try to pretend that there can't be any organic desire for these proposals is so bizarre and is a major cause for all these proposed solutions being so technically dubious. Refusal to recognize the problem means you won't be part of solving the problem.
You do realize that for whatever reason more and more people in government positions are on the path of authoritarian agendas? Its a pretty important topic right now. All of this privacy related stuff is happening in quick succession.
I mean I cannot believe I have to post these, but here we go:
https://www.politico.com/news/2025/09/13/california-advances...
https://www.yahoo.com/news/articles/reddit-user-uncovers-beh...
https://www.techdirt.com/tag/age-verification/
Your argument has two main flaws. First, it relies on an inherent connection between age verification and authoritarianism that is just taken for granted as true. Meta could easily be in favor of age verification because it reduces their liability and raises the barriers to entry for potential competitors. It doesn't inherently have to be authoritarianism.
But more importantly even if that connection is true, your argument relies on the current proposals of age verification being the only way to satisfy the organic desire for protecting kids from the unfettered internet. OP gave an example that could be a compromise position that addresses the need and isn't authoritarian. Why can't you support that effort?
I can support any effort that puts the responsibility into the hands of the parent without a mechanism that advances identity verification to protect their children.
The way it stands now. this issue is being used by people in power to advanced an authoritarian agenda. Its really clear to see, if you only have the will to look.
>I can support any effort that puts the responsibility into the hands of the parent without a mechanism that advances identity verification to protect their children.
Which brings us right back to what I said here[1]. We don't have to agree on the motivations behind this push. Even if you believe this is all an authoritarian conspiracy, that conspiracy could be undermined by proposals like OP's, but instead people make enemies out of these potential allies which just further empowers the people who you consider to be authoritarians. It's a failure of basic political coalition building.
[1] - https://news.ycombinator.com/item?id=47957480
Im happy to have dialog with anyone that wants to protect children under the circumstances I already described. But if these initiatives push forward IDing people to have protection, then Im sorry you are on the wrong side of life and are involved in making the future of our society worse. I don't see you as an enemy, more misguided then anything. Im sure people are going to turn this into friends and enemies, but I don't look at it that way. I have to defend freedom under all circumstances. In most cases I support deontology over utilitarianism because I have seen how far we have slid in terms of being free as a people because we want to make everyone safe..
Taking away freedoms, for any reason, is not the answer. They make us less secure [0] and promote bad actors to make things worse.
[0]: https://news.clemson.edu/the-safer-you-feel-the-less-safely-...
>Im happy to have dialog with anyone that wants to protect children under the circumstances I already described.
But you're ignoring my point that your dialog is actively counterproductive when you don't engage with the root of the problem.
Nowhere in here did I advocate for "taking away freedoms" or for the age verification policies as discussed in this article. The only aspect of this issue that I have argued is that there is a real organic demand from people who want help in preventing children from having unfettered access to the internet.
The reason you see me as "misguided" is because you are refusing to actually listen to what I'm saying. And then you magnify the divide with your rhetoric implying I'm out to take away your freedom. Maybe you don't look at me as an enemy, but your rhetoric and behavior is actively repellent when it could instead be welcoming as you claim to sympathetic to the only issue I have actually advocated for here.
How am I not engaging with the root of the problem? I just see it differently than you. And thats ok. I dont think the problem is solved by id verification. This is the position I have been arguing all along and Im not seeing how my position is getting in the way of what you are talking about.
The politicians that want to identify everyone capitalize on organic desire for these proposals in the form of fear-mongering and "Think of the children!"
Citizens that want these laws are unthinking drones who don't want to raise their children, and instead want legislators to do it for them.
Politicians that want these laws are the people who, ideally, want to track your every move online for a multitude of reasons, not least of which are censoring speech and controlling narratives.
>organic desire for these proposals
Even if everything you said was true and there was a global conspiracy among the politicians, the tech crowd consistently denies and demeans these organic desires. We could cut the legs out from under these politicians if we listened to these people's concerns, considered actual solutions like OP did at the top of this thread, and turned these people into allies against those politicians. But instead we deny the actual desire to protect children and accuse them of either having ulterior motives or being sheep, turning them into permanent enemies thereby empowering those (hypothetically) conspiratorial politicians.
The public, and consumers in general often state a want or need for something that they don't actually want or that would harm their quality of life, it is correct to demean or deride these wants when they're identified, some aspects of human nature are amusing.
But there is a global conspiracy, a synchronised effort among western leaders to implement near identical solutions to this engineered "problem", the responsibility remains squarely on the shoulders of parents, I say this as a parent.
>The public, and consumers in general often state a want or need for something that they don't actually want or that would harm their quality of life, it is correct to demean or deride these wants when they're identified, some aspects of human nature are amusing.
Thank you for proving my point by doing the exact thing I said tech people do. Do you think that if you demean and deride enough people, the problem will go away?
Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonimization, spam reduction, etc.
Because you can't have a tech company offering third party identity verification solutions if you just go with something like an RTA header.
The tech companies are the ones lobbing for age verification.
The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal, it is a pretext. Just like every other "think of the children" argument.
> sticking their fingers
I actually think it was giant wads of cash.
Or could have a header saying this is not adult-only content, and a parentally-controlled device will block things that don't participate.
That's a good idea. There could be two headers, the existing RTA header that adult sites use today [1] and another static header that explicitly states there shall be no adult content.
[1] - https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R... [THESE ARE ADULT SITES, NSFW]
What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who will give their kids cigarettes. Most parents I know will disagree with their kids doing one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.
That's between parents and their local governments. Yes when I was a kid my mom let me watch whatever and go wherever. The parent in my example ultimately decides what a kid may or may not do which is in alignment with existing laws. If the parent is endangering their kid that is up to them and their government to sort out.
Point being, put the controls entirely into the hands of the device owner. Options can be to default to:
- Block everything by default unless header states otherwise.
- Block only sites that state they are adult.
- Do nothing. Obey the operator. (Controls disabled on child accounts or make them an adult or otherwise unrestricted account on the device).
I think the options are just limited to our imagination.
> - Block only sites that state they are adult.
This is the problem. What is an "adult" web site? Websites that show porn? Websites that show gore? Websites that show violence? Websites that show non-porn naked people? Websites that have curse words? Websites that promote cults and alternate religions?
Why is it the site's responsibility to "state" that they are adult, given whatever parameters they dream up? Why is it the government's responsibility to say "This is adult content, but that isn't adult content?" Shouldn't the parent get to decide which categories of content count as "adult"?
> Websites that promote cults and alternate religions?
Websites that promote any religions. No way should under-18s be exposed to that.
Let’s not pretend like this is a brand new problem. Even pre-Internet, there have always (well, let’s just say definitely the whole lifetime of anyone GenX or younger) been tons of first-amendment-protected content falling under all 3 of these categories: “obviously fine for children” (e.g. Sesame Street, Paw Patrol), “obviously not appropriate for children” (Hustler magazine, Pornhub), and “Controversial / maybe ok for teens / still probably not okay for 6-year-olds” (e.g. sex ed, depictions of rape, graphic violence). This last category is obviously one where Opinions May Vary, but the way we have handled it in the past has been laws. Nearly every state has statutes prohibiting sale, display, rental, or distribution to minors of material deemed “harmful to minors” - the distinction between the second and third categories is determined by a court if it really has to be. This has worked fine in the offline sphere, and it’s why I couldn’t walk into a video store when I was 8 and rent a stack of porn tapes.
At minimum, it would be a reasonable legislation topic to at minimum mandate that websites publishing obviously “Harmful to minors” content tag it as such[1]. And also it would be ideal to create some kind of campaign to tag the first category as safe (honestly Apple and Google ought to be working together on that one). If you in good faith operate a site in the controversial category, that would be no different than selling books on sex ed in a Barnes & Noble - protected.
Parents could then choose, with simple device controls:
- Allow only “tagged safe” pages (parents with very young kids, or who have a hard time monitoring use)
- Allow safe + no-tag (open-minded parents who choose to err on the side of allow, and monitor the controversial stuff themselves)
- Allow all (parents who want to be solely responsible to regulate it)
I find it frustrating that people are talking like we have to either have a completely “no rules” Internet where obviously any kindergartener is going to stumble upon super disgusting stuff, or this gross surveillance state Internet, where people have to show ID to use any site. Neither of those are how things were before the Internet and it doesn’t have to be how things are now.
[1] you might ask, what do we do when say, a Russian porn site doesn’t want to comply with this tagging. In my opinion, it seems reasonable that someone could put obviously bad faith sites like that into a block list database. In a place like the UK I would expect that to be a government regulator, but there’s no reason why that couldn’t just be something private companies do in the US. As a parent, I would pay two bucks a month to subscribe to a service like that if it were integrated into the operating systems my kids use.
That was our struggle with implementing "blocking" tech at a school I worked at. Is a kid looking up how to do a breast self exam porn? What about a self testicular exam.. What about actual Sex Ed kinds of sites?
Then those parents can turn off their browser/client’s age protections. I think that’s actually a decent argument for the solution posed by this thread.
There is such a thing as making the "kid ok" header so rare or "18+" so eager that nobody takes it seriously, so that'd need to be kept in mind.
> I know parents who have no problem with their kids seeing porn.
Surely you mean at least teenagers, and not literally children, right? Consider the prevalence of violence, racial stereotyping, and escalation of fetishism into degeneracy that clearly exists within this medium; what's the line that these parents draw? Are they making sure it's only something vanilla? Or is there no line whatsoever?
They don't care. The kids won't think to ask until they are teens, and they are not showing it until then, but it is technically available.
There are already laws defining this. Had to draw the line somewhere, and they did.
In which legal jurisdiction and culture? Many or most website are have users from many locations.
Is the header a json encoded map from country code to age rating?
The US. If they want to serve users in other countries, or if certain states make their own rules, it's business as usual whether to serve different content there or serve a different header or take the legal risk.
That seems unworkable and a practical matter
It's the exact same problem that age verification faces. There are different laws in different jurisdictions and operators have to figure out how to comply with the ones that matter to them.
Think of the (current) header as meaning "we would have blocked you if we saw you were under 18" or whatever equivalent and it should make sense.
They already do this, like there's Victoria's Secret's US website vs Qatar.
> I know parents who have no problem with their kids seeing porn.
I don't agree with showing actual children porn, but I also totally expect teenagers to find some way to get access to it in the age of the Internet.
Part of the challenge with this is cultural. Different places in the world think about sex, sexuality, and even the concept of what is a child differently. In the US, showing a woman's bare breasts to a person under 18 is generally considered wrong, and in many cases is illegal. In most of Europe it wouldn't even raise an eyebrow, because bare breasts are on television, sometimes in commercials even.
Set aside for a moment the question of age verification and age limits, we cannot even agree in any sort of universal sense what even qualifies as porn or adult content, and at what age someone should be able to see it. There's a difference between a 7 year old and a 17 year old seeing the same type of content, and there's also a difference between a photographic nude and a video of people engaged in coitus.
The story is basically the same for everything else you listed.
These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do. What the GP suggested using RTA headers though puts the control into the parent's hands, which is as it should be.
We don't need to care what France or China thinks when we make our laws that are about our own citizens. They do the same over there.
> These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do.
Yes there's a chance our rules spill over there naturally, and I don't consider that wrong either.
I considered many of the same points you mentioned.
Though, one area I am still struggling to grasp is the harm that governments are trying to mitigate. If a child were to see inappropriate material, then what harm can truly arise? Also, why do governments need to enact such laws when the onus of protecting children should be on their parents?
I am not trying to start any kind of flame war, but I really cannot see any other basis for all this prohibition that is not somehow traceable back to Western religious beliefs and the societies born and molded from such beliefs.
It seems like you might be a big believer in cultural relativism and that nothing can be right or wrong, so this may be unsuccessful, but many of us do believe that it’s harmful to the normal development of children to be exposed to certain types of content. It is mostly about maturity. A five-year-old who sees explicit sexual acts performed on a screen is going to be curious about it and be interested in trying it. He or she will likely have no sense of what would be problematic (e.g. trying to initiate such an act with a peer or an adult. Consider how they probably don’t understand ideas of consent). It’s why it’s generally considered grooming for people to exhibit that type of thing to children. Children who have been groomed frequently abuse other children (including by force), and can be taken advantage of by pedophile adults.
I think it’s important, as tough as it can be to identify where exactly the line is, distinguish the concept of a 16-year-old cranking his hog to some Internet porn (which yes, probably pretty harmless and inevitable), with little kids being exposed to explicit types of content. And little kids are curious, so just the fact that they make an attempt to find the content doesn’t mean they’re ready for it.
I appreciate your well thought out response, and I apologize for the length of my response:
As to whether I believe in cultural relativism depends on the level of abstraction we are discussing. I believe there is no way to logically prove that something is morally right or wrong in a similar manner to how a mathematical concept can be proven from pure logic alone. But this fact does not often influence my beliefs in terms of morality in the context of social contracts, diplomacy, legal frameworks, etc.. To draw a parallel, I do not believe in complete free will, but I live my life as though it does exist (I believe in more of a 'sandbox' like an RPG video game with clear constraints and limitations).
> many of us believe that it’s harmful to the normal development of children to be exposed to certain types of content.
Are these beliefs supported by evidence or are they merely conjecture? Do not get me wrong, I am not saying I completely disagree. A child exposed to various types of abuse and neglect can have detrimental effects to his or her development, and there is plenty of evidence to support a statistical relationship.
> A five-year-old who sees explicit sexual acts performed on a screen is going to be curious about it and be interested in trying it.
I believe that is quite presumptuous. By that logic, if a child is exposed to comedic content, will that child become funnier? Such conclusions remind me of the debate as to whether violent video games and other media increase aggression and acts of violence in children. The data clearly does not support this conclusion. Now, I would not say there never has been/will be a case of a child trying to replicate a sexual act due to exposure -- much like violent content -- but outliers do not define the norm.
> He or she will likely have no sense of what would be problematic (e.g. trying to initiate such an act with a peer or an adult. Consider how they probably don’t understand ideas of consent).
Understanding consent is irrelevant. Children legally and morally (as determined by my culture) cannot consent to any sexual activity under any circumstances. Consent is de facto impossible. This is a social contract that I also strongly agree with.
> It’s why it’s generally considered grooming for people to exhibit that type of thing to children.
I was under the impression the intention behind the action was more important than the action itself. There is a difference in intentions between a child stumbling upon an adult getting undressed compared to an adult undressing and exposing themselves in front of a child. One action is happenstance and the other is predatory and abusive. It's why family pictures that might have a naked baby in a bathtub is not often considered CSAM.
> Children who have been groomed frequently abuse other children (including by force), and can be taken advantage of by pedophile adults.
I believe this myth is perpetuated too often. The vast majority adults that of sexually abuse children have no history of childhood sexual abuse. Certainly, some do perpetuate the abuse, but it's not as common as some might think. It is just another attempt for abusers to garner sympathy and decrease their punishment. It's very similar to the myth that public urination can result in a registered sex offender. To my knowledge, there are no instances of this type of case in the United States. However, it is a clever little lie to tell comfort folks into living next to a registered sex offender convict of a more heinous crime.
As for children-on-children abuse, I am not certain your claim holds up, but I admit I am less knowledgeable in this area.
Fundamentally, the laws around requiring ID to view adult content do not really prevent any of the harm we are discussing. Sure, I child might not accidentally stumble upon explicit content on Pornhub. However, the laws do not stop Chester Child Molester from sending their dick pics to a kid on Discord or Roblox or whatever.
Why is it the if a child stumbles upon a parent's firearm and hurts themselves or another, the parent can be held liable in both civil and criminal court. However, if a child stumbles upon sexually explicit content via a parent's computer, the onus is placed upon everyone but the parent(s)? If the harm of exposure of sexual material to youth is so damaging, then should parents not also be held to such civil and criminal punishments?
i can make arguments as to potential merits of kids having a beer/cigarette, listening to swear words, or witnessing casual violence. i cant make an argument for letting kids see hardcore pornography in any capacity.
I have hard time imagining what is that argument, that apply to the thing you mention but that doesn't apply to hardcore pornography.
Or do you also think we should forbid hardcore pornography also for adults?
Swear words and violence don't cause addiction, alcohol can but it's way less likely and also easier to restrict... idk why a kid should have cigs even once though
there may be valid use cases in certain demographics eg the disabled. to me it is evidently advantageous teaching a teenager how to have a smoke or have a drink properly , so that they don't go overboard with self directed learning for a valid activity (loosening social inhibition). we could totally teach teenagers the generation and consumption of dispassionate violent relationship simulacra. may I ask what would be advantageous about this ?
it is literally always the same thing - who gets to make these decisions? if you come from a family of alcholics (there are many) you will view alcohol for what it is, one of the most dangerous drugs that someone decide should be "legal." if you come from family that lost loved ones to smoking - same thing with smokes. hardcode porn, eh, they will eventually start putting this into practice ("hard" part is personal preference) so while probably not the greatest thing to have kids exposed to who makes these decisions? Personally, if you gave me a choice between smokes and porn and I had to choose one for my kid - I would choose hardcode porn. the core issue as always - who is making decisions on what kids should or shouldn't be exposed to?! and what do you do when whenever someone else gets that power then decides that reading or math or fishing or camping or ... is not allowed?
There are things where like 90% of people will find common ground
why 90%? and who decide is it 90%? or 87%? or 94%? are we going to have a referendum to decide on this? we need 100% people to vote on this referendum or small fraction will work? ...?
Practically it's hard to ban something new across the entire country without overwhelming support like that. There are enough people who strongly think kids shouldn't be able to buy alcohol or cigarettes that it ended up getting banned in every form, in all US states (even before federal law). Wouldn't be possible with a slight majority opinion, even if an individual proposition only needs 50% of votes.
> without overwhelming support like that.
this is 1,000,000% not accurate. there are things that vast majority of people support that are never going to happen (e.g. universal background check for gun purchases) and there are things that ruling party easily gets through that are wildly unpopular.
I said it's hard to ban something without support, not that it's easy to ban with support. Not to mention, gun background checks are more controversial than you're making them out to be, in fact this is an example I would use. Even if more than 50% like the idea of a background check, not so many will trust the implementation, and not everyone will vote.
Just for completeness sake and just for fun about 40 or so states allow private sales of firearms without a background check. Of course it is on the seller to know they are not selling to a felon and they may be on the hook if the buyer does something bad though I am straying a bit off topic from age/ID verification and tracking.
Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable standardized signals to filter on which has led to the current nonfunctional mess.
So ideally you want a standardized header that can be used to self classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated with specific categories required to be marked as either present or absent.
So for example HN might be "user generated T, social media T, porn F" or similar with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).
While this would be required by law, I imagine browser vendors might also drop support to load sites that don't send the header in order to coerce global compliance.
Just an opinion which I know is not super valuable but categories won't help with most sites. Anything that permits user contributed content can become any rating at any minute unless all content would require approval by a moderator before anyone could see it. A few forums support that concept but it requires a proportionate number of moderators or I suppose a very accurate and reliable AI moderator if that is even a thing. I think it's easier and probably legally safer to just tag anything that is not guaranteed to be 100% child safe at all times as adult and let parents decide if they with to approve-list the site in parental controls.
I always love seeing pros and cons of whitelist vs blacklist sorts of strategies in different scenarios.
Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.
An outstanding idea. Those lobbying for age verification hate it though, because they want to be the arbiters of age, and all that juicy PII that they can analyze and resell.
What PII? They get a boolean "old enough"
Think about how they validate how old you are. Meta and Google, who are lobbying in support of this legislation,will force you to sign up with your real ID, and be the arbiter for questions like “are you old enough for this website”. For every request that you make through some third-party website that needs to know your age, Meta and Google will know where you tried to login, and for which content. They will then resell this data to the highest bidder. Additionally, through all their ad networks and tracking, they will follow your session and have verified ID to match your entire browsing history. This is the end of anonymity and privacy on the Internet.
None of this is true. The fact that there are many, many companies out here today that are doing exactly what you are claiming for the non-CA age verification laws (like in TN and TX), yet you went down the conspiracy route for Meta and Google shows how much you are being played like a fiddle.
They can feed you an conspiracy and you'll eat it up because you were primed to have a cognitive bias, and will ignore the actual, real-world harms going on.
Rupert loves people like you
If technically competent people specify and build this system, sure. But it’ll be specified by complete idiot politicians, influenced by Google and Meta, who 100% DO want to know your government name, DOB, etc., so we’ll end up flashing our IDs at the camera, turning around to be scanned, etc. The platform owners will tell us they “deeply care about our privacy.”
Age verification companies literally require your personal information to function. They don't want you to be able to send them a simple boolean over Tor in exchange for whatever trackable token you need to access something.
Old enough for the 13yr content, the 15yr content, or the 18yr content?
And on what date does that change?
Are you a collaborator, or just stupid?
I'm not so sure. I think the push is from the government actually. But companies are not exactly opposed to it. Quite the contrary. Big corporations see compliance as a moat. Tobacco companies supported stricter regulations on tobacco advertisements, because they had the deep pockets required to follow the changing laws. Mr. Altman is all-in on AI regulation, because it will mire down competitors while OpnAI has already "slipped past the wire" and done all their training pre-crackdown. When given a choice between regulating their industry (platforms and operating systems) vs regulating someone else's (porn sites and the like) they'll always helpfully "volunteer" to be the first to be regulated. It's just good business.
"The government" is the same as those lobbying the government. The people in the government get paid to push it, so they push it, and get paid more when it goes through, by the people who want that PII to analyze.
Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't run into a lot of documentation about this, including on the very badly designed RTA web site https://www.rtalabel.org/
For anyone curious about the value, the numbering on the value is just a fixed number everybody decided to use for some reason that isn't clear to me.
I would deeply prefer to do it this way, but my goodness the RTA org needs a serious brush up of their web site and information on how to use this.
But does this actually still get used by parental controls?
Some parental control applications will look for it but it is not yet legislated to be mandatory on a majority of user-agents.
All I am suggesting is we legislate the header to be added to URL's that may contain material not appropriate for small children and mandate the majority of user-agents the ones that are default installed on tablets and operating systems look for said header to trigger optional parental controls. Child accounts created by parents on the device should not be able to install alternate user-agents or bypass the controls (at least not easily). Parents should be guided through this on device setup.
Indeed their site is old and rarely touched. The ideas and concepts have not changed. It really could just be a static text site formatted in ways that law makers are used to or someone could modernize it.
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, so parents/employers/etc could use to block the site if they wish. People said it would never work, because adult sites had a financial incentive not to opt in to reduce their own traffic.
The porn companies already set the RTA header. It was designed by an organisation funded by the porn companies.
https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...
It seems there is a GitHub repo somewhere mapping Meta money to lobbyists inside other companies Which is at least interesting
That was created by AI. Ai is great at playing six degrees of Kevin Bacon
What, in the same way movie studios wouldn't comply with the Hayes Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
There's a great relevant quip: "If you think that the cost of compliance is high, try noncompliance".
If only it were true today :|
Sure but the government doesn't police corporations in the US anymore. The Hayes code was before neoliberalism.
The Hayes code wasn't policed by the government.
Quite true. The US corporations act like a giant global rabid dog. Fake legislation appears in the USA - lo and behold, it is copy/pasted into the EU. At the least lobbyists are getting rich right now.
At least the EU has GDPR. In the US, our personal data is collected by every app and website and company and packaged, sold and sifted through by a vast collection of private data brokers which the government already ingests.
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
An age header is not the answer. Why should a site have to decide what content is appropriate for a 18 year old and what content is not? Who is qualified to make that decision for every 17 year old in the world? Do they know my 17 year old? Do they know the rules in our home? What if I'm OK with my kid seeing sex-education stuff, but some lawyer at Wikipedia just decides to tag sex ed articles as 18+? Now I have a shitty choice: Open up the floodgates of "18+" to my kid, do it temporarily while the kid browses the sex ed sites, or not let the kid browse them.
Letting a company or government decide what's appropriate for what exact specific age is fraught with problems.
Right. Perhaps now, a parental filter could be an AI whose prompt is dictated by the parents, which can look at the contents before validating it.
Then this leads to a very unwelcome view that most of the problems we face are actually rooted in parents' unwillingness to invest too much time in education :)
Yes, that's how parental filters already work. They use a combination of rta tags and external data to block pages. Even works with Google safe search, firewall devices, etc. The rta ecosystem is already built out and viable.
I think the better tack is to stop acting like these laws are being pushed by honest actors with good faith intentions of protecting children.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children. Sanctions and embargoes are always an option.
>fined
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
I’m here to die on this hill!
People were wrong.
We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.
An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win, win.
But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If government tries to regulate them, they'll engage in malicious compliance - do the minimum to not be sued, in a way that they can still reach customers.
For example Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.
If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
>But if someone is willing and able to pay
If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.
But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
There was a random comment here on HN few days back that adult contents have lower chargeback rates than everything else.
So ig stop spreading hallucinatory misinformations?
link?
> Back in the late 90s or so, there was a proposal
This one: https://www.w3.org/PICS/
PICS was very complicated and attenpted to cover all possible "categories" of adult content. It was confusing, incomplete and only a handful of sites voluntarily labelled their sites with it. RTA is one simple static header that any site operator could add in seconds unless they get more complicated with it by dynamically adding it to individual videos say, on Youtube which means in that case the server application would need to send that header for any video tagged as adult.
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything as I could not predict what people may upload which made for a very long header.
> unless they get more complicated with it by dynamically adding it to individual videos say, on Youtube
YT already does this. I never watch YT signed in, and I often see videos that require you to be logged in as the video is age restricted.
Agreed though in my example the point would be to set the header in the case the child is logged in but for whatever reason the site does not know their age. Instead of a third party site, a header is sent with the video tagged as adult that triggers parental controls if they are enabled by the device owner.
Yeah this seems like the best tradeoff. You avoid the central control infrastructure and you provide information to clients. It's also a great match with free computing devices, which can then utilize the new information, empowering users (eg parents -> parental control on device, or individuals who want to skip some kinds of content).
There are issues today with this approach such as lacking granular information for sites that have many kinds of context, but if you stop investing in the central control infra and invest in this instead that could be remedied.
I agree with the general idea, but I would like this header to be more fine grained than just a binary "adult" or not. For example, so that you can distinguish between content that is age appropriate for teenagers and older from content that is suitable for all ages.
It should indicate which exact HTML elements are classified, so that a social media feed can selectively tag posts on the home feed.
A MIME type for every genre.
How are they supposed to fine sites out of their jurisdiction?
One possible method [1] though I am sure the network and security engineers here on HN could come up with simpler methods. Just blocking domains on the popular CDN's would kill access for most people as by default most browsers are using them for DoH DNS.
[1] - https://news.ycombinator.com/item?id=47950843
The question was about fining entities outside of the original jurisdiction, so I am not sure what you have in mind that could be done by network/security engineers here.
In terms of fines if they do not pay the fine their country is at risk of sanctions or embargoes which is probably a bit heavy handed but may incentivize their government to also enforce the rules, collect fines keeping some for themselves and passing the original fine back to the countries implementing child safety controls.
This is extremely naive and short-sighted. There is a literal example of this happening rn, and hopefully you will see why your approach isn't that good.
UK's OFCOM is currenly issuing legal threats to 4chan, for allegedly serving adult content and not willing to implement age verification. 4chan's lawyer tells them to pound sand[0], on the basis that 4chan is hosted in the US and has zero business presence in the UK, and UK is more than welcome to ban the website on their end through UK ISPs. The saga has been ongoing for a while, and the lawyer has been pretty prolific online talking about the case.
Anyway, following your approach, UK should embargo US over 4chan not willing to implement age verification as required by UK law? I plainly don't see this happening, or even being considered, ever.
0. https://www.bbc.com/news/articles/c624330lg1ko
4chan servers are in the US and the owner is in Japan. If the US wanted to they could seize all the servers but they will not because they have real time monitoring of all activity on the boards and have ever since Christopher testified before congress and the site was sold. If anything 5-eyes want that site to be unrestricted. 4chan has been a goldmine of people self reporting for wanting to shoot up or bomb places, as has Reddit leading to many body-cam videos of the site users and in some cases the moderators being busted.
The IP addresses are all captured by Cloudflare. It is literally next to impossible to post on 4chan without enabling javascript on Cloudflare or buying a 4chan-pass which leaves a money trail not perfect, nothing is but most mentally unstable people do not think these things through.
Should legislation be added to require the RTA header 4chan could and likely would add it in a heart-beat. They already have some decent security headers in place.
> If the US wanted to they could seize all the servers
Are you sure you didn't misread what I said? Asking because I am not sure how what you are saying has anything to do with my point.
Why would the US even consider seizing the servers? 4chan isn't breaking any US laws, and US indicated zero interest in pursuing 4chan.
The case I am describing is about 4chan breaking UK laws (by refusing to implement age verification), and UK OFCOM is threatening 4chan with fines and more. 4chan, as you said, is located in the US, so they claim they don't care about what UK wants, and that 4chan won't implement age verification due to 4chan not having such a requirement under their operating jurisdiction (US).
The only thing UK can do is block 4chan within their country, and that's pretty much it.
4chan isn't breaking any US laws
They break US laws every single day. Every loli in /b/ and /gif/ thread violate several laws and yes people do debate this endlessly which I will not, discuss that with lawyers that deal with CSAM. On that alone they could easily seize all the servers if they wanted to but that will never happen because like I said it's an goldmine for people self reporting they are going to shoot up a place or show intent for a myriad of other crimes. The feds would never throw away such an easy mode treasure trove nor would I expect them to. The site started glowing hard in 2008 and glowed even harder after 2012. I even showed people how to extract IP addresses using the hashes in the thread and post ID prior to their moving to Cloudflare and the users still went into full cope.
All of this aside it would be trivial to add the RTA header to the entire site. They could add it in the Cloudflare interface in a few seconds. It would cost them nothing. Only groomers would have their jimmies rustled even despite most of the groomers having moved to Roblox.
The header should be the other way around. It should inform your site will not contain adult material. The local government should scan sites participating.
Anyway, yes, that would just solve the problem and not destroy anything. What is the reason why nobody is talking about it.
Servers can then infer user’s ages by whether or not the client renders pages given those headers or not no? See if secondary page requests (e.g images, scripts) are made or not from a client? A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary
I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.
Today devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that affect but this can evolve over time.
>I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device.
Not being able to detect all children doesn't mean that being able to detect 80% of them is somehow less disturbing.
The point and overall goal should be to not signal anything to the server operator unless a credit card is being used. Everyone is whomever they claim to be as far as anyone is concerned, until payments are required which today means sharing identity and age (via the credit card information on file with the financial institution and is shared today).
In the case of RTA the only signalling taking place is a server header being transmitted to the client. The client could be anyone at any age. Nothing to explicitly leak or disclose. Server operators can guess all they desire as some do using AI based on user behavior of which they sometimes get wrong.
This is also how age attestation works. The client could be anyone at any age, all the server knows is they've opted to see over-18 content
That's true. But leaking an age threshold is not the same as private companies being able to link all your online activities to a single legal person.
Adults could also use this to filter out unwanted content without needing to rely on outdated filter lists.
“solutions” like this presume that age verification/gating is the goal. it’s not. it’s a cover story.
the goal is eradicating anonymous publishing. the goal is making strong government ID mandatory to use the internet.
any privacy preserving age gating system is useless toward that goal, so it is irrelevant.
> fine sites not participating into oblivion.
That would also amount to compelled speech.
That would also amount to compelled speech.
I disagree. The legal requirement to apply a warning label is a well known, understood and accepted process that is applied to a myriad of hazards to children and adults. As just one example businesses in some states, most notably California are compelled to add warning labels to foods and other products that could cause cancer.
That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.
Surely we both understand the point I was making in that labels are already compelled by laws today.
Fine, cigarettes must be labelled as being a risk of causing cancer. The punishment for failing to do this is both civil and federal penalties including massive fines and federal prison time.
Now that I think about it, perhaps that example did a good job of demonstrating how ill-conceived requirements can wind up having zero effect except for just making everything a little bit more inconvenient.
Nobody was talking about their utility. Are they constitutional?
Do you believe using the Internet should require a license? Isn’t that what covers these product warning labels?
I never implied an internet license. Rather if a server operator a business has content that may be adult in nature they must label their site. Businesses require a license already but that is unrelated to this.
Clients could refuse to show content that does not have headers set.
On other hand servers might choose to lie. After all that is their free speech right.
So maybe you need some third party vetting list. Ofc, that one should be fully liable for any damages misclassification can cause... But someone would step up.
Compelled to disclaim facts is good compelled speech, though.
If they can scrape and fine, they can just make a list and the browser can use that.
RTA = restricted to adults
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
Such censorship shouldn't exist in the first place.
I could be misunderstanding the context but to me that sounds like a moderation issue assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.
Moderation is like law enforcement, it doesn't prevent crimes from happening it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed and it doesn't undo it.
This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who do terrorize people for fun [3].
It is a constant game of whackamole by moderators/admins to keep this behavior out of online spaces where kids hang out.
I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.
Now I disagree very strongly with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed). But the technical challenge needs to address the real problems that legislators are trying to go after.
[0] https://www.justice.gov/usao-wdnc/pr/discord-user-who-catfis...
[1] https://www.nbcnews.com/news/us-news/kidnapping-roblox-rcna2...
[2] https://www.nbcmiami.com/news/local/nebraska-man-charged-wit...
[3] https://www.fbi.gov/contact-us/field-offices/boston/news/ope...
I am only interesting in protected the majority of children which I believe my proposal more than covers. There will always be exceptions. Today teens share porn, warez, pirated movies and music with small children in rated-G video games. I am not proposing anything for that. It is up to businesses to detect and block such things.
Point being, there will be a myriad of exceptions. I am not looking to address the exceptions. Those can be a game of whack-a-mole as they are today. I am proposing something that would prevent the vast majority of children from being exposed to the trash we today call social media and of course also porn sites.
Look, please don't sideline/marginalize people by using the "whataboutism" term. Thats being used more and more to silence dialog from people that see problems outside the focus of a specific area. Its important that we see ALL sides of the problem.
Fair enough. Even though I do not perceive it that way I removed it in the event a majority of others have come to this conclusion.
Thank you for understanding. I know sometimes topics can get out of hand with comments about related things, but I this case. We might be better off looking at all the extremities.
These aren't exceptions or whataboutism. It's the debate being had on the floors of state legislatures.
> It is up to businesses to detect and block such things.
Which is exactly why age verification legislation is hitting the books. No one (serious) cares about whether kids can download porn and R rated movies. Parental controls already exist if the threat model is preventing access to specific content that is able to report itself as _being_ that content.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content. They define specifically what classifies as an addictive stream and put the onus on service providers to assert that they're not delivering addictive streams of media to kids. An HTTP header isn't enough, because it's not about the content being shown to kids but the design patterns of how it's accessed.
Essentially: age verification isn't about porn. 18+ content stirs the pot a bit with the evangelical crowd but it's really not what people are worried about when it comes to controlling digital media access with age gates.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content.
That sounds simple to me. If a type of content is addictive then require the RTA header.
- Adult content, or possible adult content.
- User contributed or generated content (this covers most of social media)
- Site psychological profiles that are deemed addictive (TikTok and their ilk)
Overall we are describing things that are harmful to the development of the minds of small children. If adults wish to avoid such content they can create a child account on their device for themselves to be excluded from this behavior as well. I use a child account in a couple of popular video games to avoid most of the trash talking and spam. I'm not hiding my age as the games have my debit card information but rather I opt-in to parental controls.
This is assuming children should be on social media at all, which I for one would debate.
How would this work with sites like YouTube which allow sharing of content, potentially not appropriate for children, but the content is generated by the site's users? Who will be fined for "violations"? And how would such a fine be levied, especially internationally?
I think that initially the onus would be on Youtube to figure this out. They have some very intelligent engineers. For example, if the Youtube client is receiving affiliate funds then they are easy to ID and fine. If they are random people then Youtube would have to share the violation data with the other countries and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force label a video to be adult as they do today when enough people report it which is admittedly not uniformly applied.
This already has been solved. Youtube disables viewing via embeds for any content that has been age restricted. Either you view it on Youtube which requires logging in to see age restricted content in the first place, or you get the ! icon and the warning about needing to log in.
>I a small server operator and a client of the internet will not participate in any other methods period, full-stop.
You will however follow the law if it mandates you to do else.
Which is we "age verification" should be stopped before it's too late.
I have probably never met anyone that is not committing at least three (3) felonies per day. That is at least how legal theory is applied. It's a fun topic to research. As a side note it would be interesting to see how far down the totem pole they venture in terms of verification of what sites are using age/ID verification and tracking.
Many people don't follow many laws, how do you know that person will follow that law?
An anecdote: I am 40 years old and I have an Onlyfans account. I enjoy some hippie chick that makes pottery and takes pics of herself without clothes on.
I went on vacation to Tennessee and tried to log in and it said I needed to verify with their identity verification provider. Of course I refused.
Now I am home in a different state and still cannot log in. I contacted support and because I was detected in TN once irrespective of my name and address and credit card info in their system they refuse to let me back in.
Support said they canceled my subscriptions for me because you can't even access that part of your account.
It's ridiculous this is where things have landed. And it's not even stopping porn in the slightest it's just making it harder for honest people to pay for what they like. And so the government can track us more easily. Wish I could do something other than vote with my wallet.
> it's just making it harder for honest people to pay for what they like.
I have a female friend who creates that kind of content. Her take is that this is very much intentional. There is a general crackdown on porn in the US. They're not just trying to make it difficult for the clients, but also difficult for people to make this kind of content, distribute it and get paid for it.
Of course none of this makes sense. There are VPNs and there is bittorrent. All of this is just making this kind of stuff more underground. In China porn is fully illegal, but people still share bootleg porn on thumb drives.
In China, people generally share porn through closed social network groups.
Given what we know about China's internet, is there any real privacy in those groups if authorities wanted to crack down on them?
encryption exists. so to an extent, yes there is privacy, but in the end, people are always the weak link.
New man-in-the-middle attack: proxy the request through an IP address tagged as a prohibited location, and you can permanently deny access without ever needing to modify or even decided SSL/TLS.
I have the same issue with sales tax.
I moved to asia about 1.5 years ago. But b/c my credit card's billing address is still in the state of WA, Apple and other subscriptions think they should still charge me a sales tax. To remove the sales tax, I have to cancel the subscription and re-create it (losing my grandfather'd rates).
It's insane.
THe government shouldn't be raising anyone's children, that's what parents are for. If you're a bad parent, your kids will get access to bad things and could become an adult failure.
The future of your family and your legacy is up to you, not the government. We don't need age verification to restrict the social darwinism of raising children.
I wish I could upvote this comment harder. I started having unsupervised internet access (with the family computer in the living room) when I was 8. I'm a functional and successful adult because I trusted my parents. When my mother forbade me from registering on online forums I complied. When I read "fellation" in some minecraft chat (albeit somewhat later) I asked my mom what it was and understood that "sex" was something for the grown-ups and that I shouldn't worry about it. All because I would never even conceive that my parents wouldn't do what's best for me, and was unconditionally loved (even though I didn't know about this concept).
I would rather have parenting licenses than online age verification
I'm a functional and successful adult despite doing plenty online behind my parents' back as a kid. I don't think that part of our upbringings had as much of an effect on us as you suspect.
And I also suspect you did not grow up with kids whose parents clearly would like them to go away and stop bothering them. I also did lots of dumb stuff in my parents' back. The nuance here is that when you know that your parents love you, you'll tell them once you do something that's actually harmful/a big mistake, because you trust they'll help you instead of punishing you. I've seen people make "questionable" life choices, in my opinion, because they've learned, consciously or not, to not seek help from others and always hide/blame on others every problem them encounter.
Yeah I'm not sure why the govt or any other 3rd party needs to get involved. If I don't want my kids to look at porno online I will educate them on porn. If I don't trust my kids to listen to me then I will install an open source monitoring software and educate them on trust.
Letting the govt dictate what is age restricted is an easy way for the govt to control speech and narrative. For example, children's books that feature LGBT characters are being reclassified as adult [1], thus requiring additional verification. If I do/don't want my kids to read LGBT books, it's my decision. The govt should not dictate that. What else will the govt reclassify? Anything involving people of color?
[1] https://www.ala.org/bbooks/book-ban-data
“If I don't want my kids to look at porno online I will educate them on porn“
I can’t tell if this is a joke, is this a joke?
No? I guess I missed a word ("educate them on the dangers of porn" perhaps?) but I don't see how the omission makes a huge difference.
I just love the idea that the solution to kids not doing what you want them to is telling them not to do a thing. It’s so optimistic.
Education isn't based on the premise that they'll never disobey. It's to help them recognize when things become dangerous or are getting to be a problem. Of course kids will do things they're told not to do - this is just helping them tap the brakes and understand how to recover. The attitude that the only solution is perfect enforcement is (in my opinion at least) partially to blame for the lack of self-awareness that makes the more vulnerable to later addiction problems in the first place.
Not sure if this is sarcastic but that's exactly how drug education works in the US. Sure it's optimistic but almost everything about raising kids is optimistic.
DARE made me more curious about doing drugs.
I was curious about drugs after DARE because I learned about stuff I'd never heard about before. But it didn't make me want to _try_ drugs. And if DARE weren't enough, watching Euphoria was definitely enough to make me not ever want to touch drugs.
when i tell my 15 year old kid not to smoke, he obeys. sounds like a skill issue on your part.
> sounds like a skill issue on your part.
I am in fact a terrible parent. I rarely try to get better, and when I do I make it worse.
This points get brought up in every thread about this topic, and although I agree with it completely, I feel it's the wrong point to make. They don't want to raise our children. Caring about the children is just pretense. The goal is surveillance. So this is a moot point, really.
I do agree fundamentally, but you are making a lot of assumptions about the parents here. Many do not have parents able to do this. Do they not deserve some protection against such content?
Blaming the parents for their failures is not going to help the kids.
That being said the current approach really has nothing to do with protecting kids and everything with tracking us.
I keep thinking we can't fight age verification by just saying "no" to it, and have to offer an alterative.
Maybe we need to turn it on its head, point out that if we want legislation to help out with this, we could choose legislation that gives power to parents. Age verification laws put the power directly into the law itself, they're a blanket solution that gives all the power to legislators and that prevents parents from making decisions about what's appropriate for their kids and what isn't.
If the market isn't delivering the level of parental controls people want, then sure, maybe legislation is needed. But it should be legislation that improves parental such that parents can make decisions about what's appropriate for their children.
Yeah I agree. Let me decide what's appropriate for my kids. Like for video games or movies... A game rated M for foul language and nothing else might be OK for my adolescent kid. A game rated M for excessive nudity and sex probably not.
> and have to offer an alterative.
It's called "software." It already just exists. It's sold for the purposes of locking devices down so they're safer for children to use.
> point out that if we want legislation to help out with this
Make this software tax deductible. The end.
As much as this is true, no disagreement, there is the issue that we are all fighting against systems that have billion of dollars of studies and A/B testing designed to completely subvert said parenting abilities.
It was difficult enough back 20 years ago when you have TV advertising that just shot gun out the messaging in the hope of landing a target, now it is algorithmically targeted. Even if you can keep this stuff under control in the home, outside of the home these influences can still bleed in from others.
But having the government use mandatory age restrictions, that is a wild over correction. They shouldn't be parenting kids in the same way corporations shouldn't be doing it either.
Alas we are walking into the wild contradictions of libertarian thinking and authoritarianism. Liberal companies have no checks and balances, authoritarian governments take peoples freedoms in the name of "safety".
The deeper questing is to all of these technologies that have have imbalanced positive and negative outcomes. If you cannot balance it, you either have the worst outcomes happen or you end up with an authoritarian reflex to control the technology and those that use it. Rarely do we take the middle path, that being government control of the businesses.
That is seen as touching the political third rail, but that instinct is now by design.
You can see the thinking that goes, the best solution was to never invent it to begin with, but that is just wishful stuff that doesn't really contribute.
I have no solutions and barely any responses other than, this is some predicament we find ourselves in.
Western society, for better or worse, is set up such that parents need to resume work as soon as possible. Saying the government has no responsibility in child rearing ignores the economic reality of parents.
"Because I have a job, it is now impossible for me to raise my children. I have to outsource this to a council of legislators because I'm simply too busy!"
Bad argument, bad outcomes. These are exactly the "bad parents" I was referring to in my original comment. The government HAS no responsibility in raising your child, but they would LOVE to change that. It's absolutely imperative for the human race that that does not happen.
Besides the bad reinterpretation of my point, how to solve the problem? It is simply insufficient to say "yeah both parents work full time with the sword of damocles hanging over their head but too bad so sad". Without changing the economic situation there is no changing the child rearing situation. One caused the other. It's all well and good to say this is imperative for the species but I see no solution offered. The economic situation must change and the government is responsible for this.
By western you mean America? Cause this is true only in America.
Absolutely true in Australia. The parents I know are either rich enough to outsource it or basically fighting for their life managing work and childrearing.
And to add salt to the wound, it's the people on the positive side of the economic bell curve that have strong familial support networks where grandparents and uncles and aunts can contribute to childrearing, while those on the other side of the curve can't always rely on having those support networks. A generalisation of course, but a relevant one.
what’s the maternity leave situation in Australia?
Better than the US, but that does not make it true that only parents in the US are struggling.
It's also true in the UK. High housing costs, high living costs and low wages means two parents need to work as much as possible.
what’s the maternity leave situation in UK?
Statutory Maternity Pay can be paid for up to 39 weeks.
The first 6 weeks: 90% of average weekly earnings
The remaining 33 weeks: £187.18 or 90% of average weekly earnings (whichever is lower)
So not much after the first 6 weeks
Some data for non-statuary maternity pay https://www.incomesdataresearch.co.uk/resources/insights/mat...
Later
From the people I know, the financial pressure seems to build around 6 months as their employer's maternity pay is fading into the distance, but they struggle on a bit longer.
I admit there may be different definitions of "as soon as possible" between the USA and other countries. Most people here would love to be able to afford at least 1 year if not more.
so we can’t really compare US (zero), with this, yes? not saying going to work after XX weeks is great either :(
Yes we can compare, and your original comment was wildly incorrect. You aren't going to get proven correct by digging into this further
Just because the the US provides zero paid leave by law doesn't mean women don't take maternity leave - it's often self funded of course. How about you look into that and compare, instead of trying to ask specific questions to arrive at a gotcha
heard it here first that extensive maternity leave and zero maternity leave is a “gotcha” :)
Also, different kids mature at different rates. I wouldn't give a shit about my kid watching, say, an R rated movie if I understand they'll be able to handle it and understand it's fiction. If I had a 14 or 15 year old and they had a healthy understanding of sex and the dangers of porn, I wouldn't give a shit if they managed to see some poorly drawn tits online. Why? Because if you didn't intentionally seek out lewd content as a teenager you're either very very religious or a liar
Duke Nukem 3D had bouncy pixels that made it "tickle down there". Also: monochrome women "eating bananas".
> THe government shouldn't be raising anyone's children, that's what parents are for.
The government does raise children. It's called the public school system.
but no parent actually keeps the government out of it. don't you go to the police when your child is harmed?
California and a slew of other states deemed it necessary to step in and take over for parents with transgender kids didn’t they? even threatening to take a child from their parents should they refuse gender dysphoria treatments.
it seems to me the left already opened this can of worms.
There's an angle everyone misses.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
A tech company doing scans for validation could actually connect to a state database to verify the ID is legit and is not already being used for a different account. It would then be saved. I don't think real world vs tech world usage of fake IDs are the same at all.
>Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
Not necessarily true. There's a local stripclub that scans and saves the scan to fight chargebacks and the like. It is definitely logging stuff. They've told me that they were going through the logs once and the bartender ended up googling my fullname. We're cool and I didn't care, but this what you said is not a blanket true statement. I trust a physical business that I can visit far more than some ID verification company that is going to get hacked at some point.
I've seen this before in London too in some venues. They have full-on computers that scan your passport and take your photo, for the express purpose of storing this info.
why would you trust a physical location who typically wouldn’t have a robust architecture or any opsec but not trust an online first business that likely has opsec and monitoring?
The tech companies care even less than the bouncers do.
They just want a plausible defence should it ever end up in court.
tech companies care even less? how do you arrive to that conclusion? tech companies log/store EVERYTHING. this would be an absolute boon for them to be able unequivocally assign to you all of the data they track about you. suddenly, anonymous analytics become identified data and not just deanonymized data.
Logs of location data on people are already worth real money. The FBI has admitted to buying it. The companies that do age verification will absolutely be selling that data unless there are severe penalties for doing so, and what are the odds that the U.S. government passes a law making it illegal for the FBI to buy data?
That's bad enough if you're a U.S. citizen. If you're a non-U.S. citizen, now you're in the situation where all these U.S. social media sites are collecting personal information from you and reselling it, but you have no legal protection unless your government risks tariffs and invasion threats to pass legislation against it, which the U.S. will probably ignore anyways.
This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
> This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
I guess, but like, who? During the time TikTok was not available on an app store (even though the service wasn't stopped), people were trying some of the other Chinese apps, and they were not very compelling as the exodus never happened.
It's a chicken and egg problem. Without users, a new social platform lacks content, so it can't attract users. Unless something decidedly new and compelling comes along, users will probably stick with what they know... unless something happens that really pisses them off.
If I'm being honest though, I don't think privacy concerns will be what does it. The TikTok generation doesn't give a fig about privacy. You can build a panopticon around them and they won't even notice.
How does a tech company calling into a government database to verify your identity maintain your anonymity?
It does the opposite: allowing the government to track your online activity as a side-effect of site owners' validating your ID every time you visit.
That's the point, and it's a big part of why opposing online age verification is a hill to die on.
My mistake. My question was rhetorical but I thought this whole thread was rooted in the parallel conversation about anonymous credentials systems.
> Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything.
> Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo.
Many/most bars do scan IDs now. Ostensibly it's to verify that it's real, but they do use those systems to keep a log of everyone who enters.
They also use them to flag people who've been previously banned and the systems work across venues. The idea that verification in the real world is cursory is not accurate.
The vast majority of places I frequent do not even have a person at the door checking IDs. If the bar tender/server thinks you look young, they ask for ID. I clearly do not look to be too young, so there's that. The last place I went to with an actual scanner was more of a nightclub that had a cover charge.
There's a fine line between night clubs and bars (and a venue can operate as both, depending on the night).
Functioning as a bar where people come in, drink and eat - generally not checking ID's at the door.
Functioning as a night club, generally checking ID's at the door. Almost no places I've been to scan ID's. I'm also middle aged and not going to night clubs hardly ever. Pretty much just a couple concerts a year in the big city. Those venues scan ID's.
Anecdotal evidence is weak (not) evidence.
This is true but your orignal reply was also anecdotal.
sure, but it is what it is. the places with scanners may be more sophisticated than i give them credit, but you cannot deny there are places that do not card every person every time you visit. online places will never not know it was you. if you cannot see the differences, then you're just deliberately being obstinate about it
Well then it’s a good thing my fake id is from a state or foreign government without a checkable database
> Handing an ID to a bouncer at a bar or similar is not logging anything.
Some of the bars in the party areas of my college town have a digital scanner they hold the ID up against, and they even had a screen showing a scrolling Wall of Shame of fake IDs. And they had this like 20 years ago. So I would not necessarily agree with you here
Like prohibition and the overtaxing of cigarettes in Australia, ID fraud will just become criminalised and the government will lose all control. There are pros and cons to this.
I think we should go a step further and log every activity a person takes the blockchains. There will be no ID theft because your DNA will be used to cryptographically authenticate your user.
Sam, we're not going to use your weird eye scanning orb.
It would be a good market, I would like to pay for an ID in my or compatible countries. As far as the systems work that I have seen, this is more or less a permanent pass.
But the real problem is that governments again try to censor online content, nothing else.
My country doesn't even run children's homes without many incidents and nobody cares for that. But it tries to track citizens through things like corona apps. It cannot propose any trusted entity that could verify and ID information about me.
ID system should be based on commercial bank. If you need to prove your identity or whatever about yourself just tell them to ask your bank and bank will ask you which information about yourself you are willing to share with whoever requested to confirm something about you.
When ID is tied to your bank account you guard it like you guard your bank account. Because it is the same thing. This will drastically lower the incentives to "share" your identity with anyone.
What's more this system is already operational in many countries.
>Banks
I wonder how many months until this suggestion becomes slightly embarrassing. I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online. Especially when "my" bank goes under and all of my data gets sold off to whoever takes it over.
> I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online.
Bank ID systems, at least the ones I’m familiar with, don't work like that. Your bank confirms your identity to the authentication provider, and the authentication provider sends you on to the site you are logging into. The bank does not see the site you are visiting.
And what about debanked people? Are they now also deporned - and deyoutubed? Many of those people are debanked because they were politically risky, for example whistleblowers - are we now saying whistleblowers can't upload whistleblowing?
That's just feudalism with less extra steps
From what I know about feudalism, this is a non-obvious statement. Care to develop ?
The proposed system moves sources of identity from the nation to private banks under it. So banks own people. Propose a financial regulation to the national congress/parliament and you stop existing, digitally or potentially physically as well. That's feudalism. Or Chinese struggles-of-nations warlord era situation which is often grouped up into that concept as close enough things.
Banks can be state-owned as well as private. Moreover, some countries have a particular bank that serves all citizens, even if they would not be able to bank elsewhere.
are they state-owned? Which American bank is required to serve everyone, regardless of nationality or other factors?
Plenty European countries have eID without these issues.
You use eID when explicitly interacting with a govt entity or bank or otherwise similar institution because you have to and want to prove who you are. Yes, I do want to prove who I am when I file taxes, vote or want to start a business...
You don't use it when just browsing randomly on the internet. You don't use it to buy games on steam. Your computer isn't forced to store it because a law arbitrarily says so.
Why not, seems to be made exactly for this purpose if you look at the "‘Age over 18’: true" flag. What's bad about that solution?
> The technical solution for an EU age verification app is privacy-preserving, open source and user-friendly.
> First, the user downloads the app onto their phone and sets it up by certifying their age. This can be done with a biometric passport/ID card, a national eID (e.g. national ID Card or other electronic identification mean), a pre-installed third-party app (e.g. a banking app), or in person (e.g. at the post office). Only the information confirming that the user is over the age will be saved in the app. No name, no birthday, or any other data is saved.
> After completing this step, the communication between the app and the provider certifying the user’s age (e.g. eID, third-party app) ends. No further data is exchanged.
> The app is then ready to be used online. When an online platform asks to verify the user’s age, the user can use the app to communicate they are over a certain age (e.g. ‘Age over 18’: true) to the platform.
https://digital-strategy.ec.europa.eu/en/faqs/eu-age-verific...
The EU app still requires that you let them violate your privacy in exchange for a batch of about 30 easily trackable tokens that expire after 3 months. It also bans rooting/jailbreaking, bans third party operating systems like GrapheneOS, and requires that you install Google Play Services/IOS equivalent for "anti-tampering".
I usually buy games on steam using a process that does involve my bank, do they actually take bitcoin or cash posted in an envelope?
if it's done by the government, what prevents the goverment to not allowing opposition members to access social media? I think social media and porn are harmful for children but still
I don't disagree with random browsing. I do use it to buy games on steam as any online purchase on my card uses it. And my computer doesn't store it, my phone does.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
Matthew Green talks about this in his blog on the subject: https://blog.cryptographyengineering.com/2026/03/02/anonymou...
The two methods that seem feasible are making it hard to copy (putting it in the secure element in your phone, for example, which I don't love) or doing tokens that can only be used a limited number of times per day, like in : https://eprint.iacr.org/2006/454
If it's a rolling cert with rate limits I think that solves the problem, particularly if access to the client cert allows the client to make a financial transaction, e.g. of $100. So you wouldn't share the client cert with randoms because they would just take your $100 and you'd be blocked.
Finally, a way to use blockchain for good.
This scheme does coincidentally introduce the ability to pay for things anonymously using porno tokens, part of a government mandated crypto currency.
So your bank says sorry, only 3 porns a day for you?
Where did you read that? Not in my post.
What rate limit would you use?
Maybe 256 authentications a day.
So only 256 porns a day for you. If you access a 257th porn site, your bank will know about it!
Why would my bank know about it?
Whoever is verifying my identity.
Continuous age verification isn't possible, so you'll have to store some sort of proof of age somewhere, and that proof will always be sharable.
Let's say Facebook has verified my age somehow. I could share my Facebook login credentials, or the token that their authorization server sends back in response. You can create some hurdles to doing that, like requiring a second factor, but I can just share that too.
You might as well go down the route of accepting that possibility. These systems are never going to hold up in the face of a determined enough teenager.
That really depends. A zero knowledge system would show to the verifier that the person is authorized for access _right now_, but thats just the answer to a particular challenge. Outside of the verifier who knows they came up with a random challenge without bias or influence, the response would mean nothing.
I think a lot of age verification systems are the solution to the real core of legislation - to make companies liable for underage viewing of content. To put such legislation in place without providing a feasible way to accomplish age verification would be argued as discriminatory.
In that sense, a zero knowledge system which doesn't give a company non-repudiation so that they can defend themselves in court may very well be insufficient. And that will require tracking identity long-term, although it could be done with a third-party auditor under break-the-glass situations with proper transparency.
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
Further, "porn tokens" are the pointy end of the wedge, because it's easy to misconstrue any opposition as advocating for "kids should have access to porn, actually". The broad end that is being hammered towards is "kids aren't allowed on social media because it's harmful to them" AKA "free speech tokens".
If you want an easy way to take something down, turn it into a vice. Then most of the people will do the rest of the work.
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
Would be great, if we could all agree on that and simply everyone who is tasked implementing it in code refusing, and then letting non-engineers themselves try to do it and fail, and then have a good laugh about the figurative middle finger we gave them for their bs.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
There's already authorities that know your age, so verifying age with them to get the credential isn't the part that needs to be anonymous. The issue is them knowing what you do with your credential, which anonymous credentials solves by making it impossible to track tokens back to the credential holder. As far as sharing, there are some possible mitigations.
Right. And the possible sharing mitigations generally amount to tracking.
This isn’t even getting to the issue that mandating government-issued credentials is the “foot in the door”. If you mandate the use of government creds for accessing websites, it’s an obvious step to turn around and demand that sites report credential use to “fight credential fraud”.
But likewise, someone can share (or have stolen) their ID
https://news.ycombinator.com/item?id=47951372
The destruction of privacy is the whole point.
Only for a subset of people. Many would accept solutions that preserve privacy. Divide and conquer. Remove supporters from the anti-privacy group.
Yep look who is backing these regulations. It's absolutely for no other purpose than to further enable surveillance capitalism and the surveillance state.
This is something that's technologically feasible, but will never happen in practice.
> Age verification can be achieved without destroying anonymity and privacy online
Yeah its extremely simple. You provide a simple message asking the user if they are an adult, and they either click "yes" or "no". That method requires exchanging zero personal information with anyone, other than a simple boolean value to the site/service you are using.
Any other system requires letting a third party violate your privacy.
Yes, but this is not popular among technologists (see the average sentiment towards age verification here). Legislators aren't going to build technology. This will happen if age verification actually becomes a widespread requirement. But until that point the prospective builders will be fighting the entire premise of such systems.
Apple and Google have already implemented private age verification.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
It's addressing a real problem in a bad way.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
So do you want 5 year olds driving on the highway and 8 year olds doing shots of tequila or are you ageist?
Or perhaps protecting kids isn’t really ageism at all.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.[1]
[1] https://news.ycombinator.com/newsguidelines.html
I did. Restricting children’s access to certain things is not ageism.
We can argue the merits of restricting children’s access to the internet, or certain books, or alcohol, or pornography, or whatever else. We can debate the merits of those various restrictions based on the benefits and costs to both the children and society at large.
But it is not ageism to attempt to protect children. It is not ageism even of the restriction is a bad idea. To claim it is ageism is an emotional appeal (“ageism bad!”), not a logical one.
It depends on what you're restricting and why. Restricting access to things based on age can absolutely be ageism if the thing does not need to be restricted.
I don’t think it’s ever “ageism” in the normal sense to restrict children’s activities for their safety. But even if that’s the right term in some cases, it hinges on “if the thing does not need to be restricted”.
The burden is still to demonstrate that a restriction is wrong. If that can’t be demonstrated, then labeling it ageism is a purely emotional appeal.
You jumped to children behind the wheel of vehicles and doing tequila shots. There is no way that was a serious effort at good faith discourse.
I used a rhetorical device to demonstrate why restricting children’s activities is not simply ageism.
I don’t know how you can seriously come here and accuse me of engaging in bad faith when I’ve taken the time to make my viewpoint explicit multiple times in this thread now, including directly to you.
Hyperbole is a rhetorical device, if that’s what you mean.
Just because I had a hard time following your logic doesn’t mean I didn’t engage in good faith. You also seem to be arguing in a heated way with every person who responds to you.
Either way it’s probably best if we both move on
I did not accuse you of not engaging in good faith. You accused me of that.
I don’t think I responded to anyone in a heated manner, though I will readily admit to being annoyed when you accused me of bad faith.
Agree we should move on.
If the 5-year-old has passed a proper driving test, why not?
Quit arguing as though the topic is binary. It's not.
I’m not saying anything is binary. I’m saying it’s not ageism to restrict child access. It could be a bad idea but that doesn’t make it ageism.
It depends on what you're depriving them of too. Those are very extreme examples with little to no upside.
Disagree. We can discuss what restrictions are appropriate or reasonable without calling it ageism.
Calling it ageism is an emotional appeal, not a principled stance.
Ageism is a legally defined form of discrimination as well as the subject of ethical discussions. It's a real, defined thing. Just because we disagree on what qualifies as ageism doesn't mean you get to call foul and say it's irrational/emotional.
This is literally a “think of the children[‘s freedom]” appeal. You’re not arguing for or against the restriction on its merits.
In the US at least there’s also no such thing legally as age discrimination against minors so far as I’m aware.
Edit:
Let me frame this differently. “Ageism” is basically by definition bad, so applying the term “ageism” to a restriction is a an attempt to label the restriction bad without establishing that on its own merits.
If you try to provide a consistent definition of “ageism” that applies to restricting access to the internet but not restricting access to alcohol, you will most certainly have to resort to phrases like “reasonable restrictions” (if not, I’m very interested in your definition), which means that there’s still a need to establish what is reasonable. Applying the label “ageism” without establishing reasonableness is then a circular argument.
You’ve lost me.
You* are using “ageism” as a synonym for “bad”. You are also labeling restrictions as “ageism” without establishing that they are actually bad.
In effect you are saying “that’s bad!” without accepting the burden of establishing why it’s bad, but hiding this behind a different term that carries more emotional weight. It’s a very politically effective strategy but it’s not logically sound.
* actually jMyles
Fair point
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and theres definitely a few of them which are more technically defensible than others.
The EU's proposed design still has a ton of issues.
Elaborate please
The anti-tampering measures require banning rooting/jailbreaking, banning the installation of non-Google/Apple approved operating systems (ex: GrapheneOS), and require that you install Google Play Services/IOS equivalent.
The app also requires that you first send your personal information to a closed source backend, in exchange for easily trackable tokens used as "proof".
> banning the installation of non-Google/Apple approved operating systems (ex: GrapheneOS)
Do you have a link where I could read more about this? GrapheneOS is known for being the alternative Android where many bank apps in the EU still work, and this is the first I have heard that the age verification app definitely wouldn’t run on GrapheneOS.
They are interested - interested specifically in opposing it. These groups don't care about age verification - it is a trojan horse for censorship.
the EU is. but their verification age process shows the design flaw that preserving privacy means the system can be easily circumvented with a mitm allowing to circumvent the age verification process.
Young people setting up a MITM and getting deeper into tech rather than consuming short-form-content is something I'd appreciate as a nice bonus effect.
Of course the EU solution isn't perfect and there are bypasses (there will always be and have always been), but let's appreciate it that way rather than too many PII, if it must come. I'd prefer the Age/RTA header and parental responsibility too.
Isn't MITM always a possibility? One person has access, then they could be the man in the middle and stream or access, store and send to others or they get "hacked" with reasonable deniability, etc. and suddenly others have access without age verification again. For example someone could install a device grabbing all frames sent through a display port cable.
I’m in the UK and we recently got the Online Safety Act. We failed, this legislation is very popular with voters and not getting rolled back. Those that dislike it use a VPN and aren’t interested in fighting. I’d say most of the public here is exhausted with cost of living and internet freedom just isn’t relevant to their voting habits.
I grew up around a lot of the hacker ethos, open internet, Information Wants To Be Free etc… feels like a part of my identity is being striped away by my government.
The hacker ethos and open internet happened when the government was worse. It was illegal to send encryption outside of the US. Hackers used civil disobedience, some risked jail time, some actually went to jail, some are still in jail today or dead, and the world got a bit better as a result of their courage to break laws.
Push back to legislation must be on going and any time it is defeated, the success is only temporary. The government can just line up and try again shortly afterwards and they only needs to successful once.
You can win the battle but lose the war. By the time the average folk realize the extent of these issues it will be way too late.
I think back to how the technology space was 25 years ago. When the biggest privacy fear was that a Pentium 3 had a processor ID and Windows XP would send your system specs to get a security update. Look at how far we have fallen since then and the pace is only speeding up.
Do you just use a VPN? If not what website have you seen age/identity verification on that you find most ridiculous?
This is why we need verification technology that protects identity. Implemented as anonymous verification, without distinguishing between adult age, or permissioned by parent.
That solution doesn't negate parental freedom of choice, it facilitates it.
I am baffled at how often the "they don't want it, because of their ulterior surveillance motivations, therefore it isn't a solution" argument is made. "They" don't want it because it is a solution to the nominal problem, that they cannot abuse, and would negate their ability to use it as a cover with a large well-meaning voting constituency.
Two problems, nominal and ulterior, resolved in the right way by one solution.
When a nominally sensible problem is used as a cover for overreach, solving the nominal problem in a healthy way is the best offense. The alternative is an endless war of attrition, and the "hope" that politicians resist the efforts of well-paid lobbyists and tens of millions of well-meaning voting parents forever. That is a ridiculous strategy, doomed to fail, delivering irreversible damage. As is already evident by the abusable laws that are accumulating.
I worry at the lack of political acumen and foot-gun reflexes in the ethically-motivated technical community.
Stop endlessly fighting to lose less. Just play the winning move already. Stop the irreversible damage.
I think part of the issue people are missing is what the late Randy Pausch would call a “head fake”. My specific autism is not privacy, digital security, none of that. So I will be honest about my gaps. But from my little corner what this is about is geopolitics - specifically a potential war with China. If you zoom out to the macro level first understand the reason China setup the Great Firewall. Why countries like Iran cut the internet whenever there are protests. These are, first and foremost, defensive measures against foreign influence. America is subject to these same outside forces. The difference is that our free and open society makes things like "a Great Firewall" simply unpalatable to the American people. And rightly so. But it is also becoming increasingly evident that these malign actors are using our own values against us.
Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
There is hardly a nation on the earth that is not involved in some way in the American discourse - each pushing and pulling to their own aims and individual agenda. Historically there was a sort of Nash equilibrium with Americans caught somewhere in the center. But as the loudest voices, or rather the most well funded, begin to dominate the discussion via social media and covert funding, we are seeing it become increasingly problematic for American democracy. That is why you are starting to see this consensus over 'verification' and 'identification' begin to coalesce. The government, both left and right of center, has begun to realize the long term ramifications of these actors.
So how do you solve that inherent tension between our intrinsic right to free-speech and those who would abuse it to cause us actual harm? An independent, 3rd party verifier with limited scope makes sense - but would that solve the greater geopolitical implications? In truth I've long expected social media like Reddit, Facebook, et al. to formulate a body of their own like the MPAA. But likewise I don't think there is a clear answer here. Do you trust the Tech Oligarchs with this power over the Government itself? This is core to the problem. How do you 'censor' the internet without really 'censoring' Americans? I think this is part of what the last administration was trying to do with the failed "Disinformation Governance Board". And that failure is what has led us to where we are now.
The original twitter thread is right to say this isn't a left-versus-right issue. This is undeniably a censorship mechanism designed to exclude a set of voices from the internet as we know it today. As with the patriot act, they choose to wrap the bitter pill in a bacon-flavored rhetoric of safety and protecting the youth from perverts and degenerates. But what has failed to be acknowledged is the intrinsic cost of having an open society in a world where that openness has become an attack surface. Make no mistake: the goal is censorship. But the solution space to what you call 'the nominal problem' is less trivial than I think you believe.
Agree with all of this. It's fascinating how social media is this soup of the most virulent propaganda imaginable for every possible interest. It's a FFA between all these different powers and you are just trying to keep up with friends and watch cat videos. That they are targeting the current largest empire makes a lot of sense.
I think at an individual level the best thing to do is to opt out of this stuff and not use these corporate systems with algorithmic feeds. Only those will have the intrusive age verification anyway.
> Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
> China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
The only reason these approaches work is because there is generally a lot of truth in the things they push and a complete lack of transparency on that reality from powerful Americans, both government and oligarchy. If it wasn't "a lot of truth with some bullshit mixed in" but "only bullshit", it wouldn't work. If the state of the US hadn't made the bullshit realistic and plausible, it wouldn't work.
Those are the issues to fix. You name the PATRIOT Act, yet another thing that has caused much more harm than benefit.
> China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
They mostly bring light to the worst things that happen in the US, which would otherwise go underreported because the people suffering them have no power and the media is already entirely controlled by Bezos et al.
It's laughable to defend this on the basis of foreign influence. The bad actor influencers are inside the house. They're called Jeff Bezos, Rupert Murdoch, and so on. And the information they spread isn't any more truthful or beneficial than that spread by the likes of China.
Rupert Murdoch has done more for misinformation, polarization and extremism over the last 2 decades than China and Russia combined. He's foreign, by the way.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Heroes @ EFF have our guide (USA residents):
https://www.eff.org/pages/help-us-fight-back#main-content
Of course Chuck Schumer won't let me contact him using this helpful tool.
Perhaps we NYers should organize a rally outside his office in Manhattan like we did for PIPA/SOPA?
Dumb- BUT immediate links to sites of the right legislators!
Here is the response from Adam Schiff
>Thank you for contacting me regarding the Kids Online Safety Act (KOSA). I appreciate hearing from you and welcome the opportunity to respond.
>Keeping children safe and holding accountable bad actors online is an important priority for the 119th Congress, and I am grateful for your input. My staff and I keep track of every message we receive from constituents like you, and your feedback is invaluable in guiding my priorities.
>As you may know, KOSA seeks to establish new guardrails to protect children online by requiring that social media platforms give parents the option to enable the strongest privacy settings possible on their children’s accounts. It also would require audits of how online platforms affect the health and well-being of children. Further, it would create a “duty of care” instructing online platforms to mitigate content seen by children promoting eating disorders, suicide, sexual exploitation, and other dangers. KOSA has been introduced and referred to the Committee on Commerce, Science, and Transportation, of which I am not a member.
>As a parent, I believe that we must do everything we can in Congress to safeguard children online and will continue to support strong solutions to combat child exploitation. That is why I voted in the Judiciary Committee to advance the Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act to crack down on the proliferation of child sex abuse material online, support victims, and increase accountability and transparency for online platforms.
>Please be assured that I will keep your concerns in mind should this bill be considered by the Senate.
>Transparency has been a goal of mine throughout my time in Congress. You can find detailed information on every bill introduced in the Senate on Congress.gov, including the summary and full text of the legislation, which Senators have co-sponsored it, and the most recent action taken by Congress.
>An ongoing job of a Senator is to help constituents solve problems with federal agencies, access services, and get their questions answered promptly. On my website, I offer a guide to the services my office can provide, as well as a contact form where you can share your priorities with me. You can also connect with me online via Facebook or Twitter, and you can always reach my office by phone at (202) 224-3841.
>Thank you again for your thoughts. I hope you will continue to share your views and ideas with me.
Yeah I have the same senators. Emailed them directly from their website. There should be links right above those messages.
They do have a physical address, and stamps aren't that expensive.
TIL it's not free to mail your rep. Mailing your MP is free in Canada.
Use every means necessary. If that can be organized, do it.
man the EFF owns
I've contacted my congressmen and I would also advocate for telling/explaining this to non technical people you know. They either won't have heard of this or won't know whats bad about it.
Any tips for writing the letter, maybe even a starting point?
Let them pry ID from our cold dead hands. If a site requires ID, it doesn't get my business.
Example, Discord wanted my ID to enable certain features, I declined, I now can't use those features, fine by me. If they started asking for ID anyway, I'd say no and see what happens, even if that means they lock me out entirely. There's no universe where they get my ID.
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the, call it one in 1 million… the speech of the (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
Its not about intelligence. Else a whole lot of over-age-of-majority wouldn't pass either.
Theres also no old-age cutoff, when their mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
Maybe it should be about intelligence. All kinds of people destroy ecosystem after ecosystem, simply by acting in stupid ways and thereby creating tons of bad incentives for businesses, who will stop at nothing to maximize their profits, zero ethics. The whole system is rigged up to trend towards supporting stupid behavior and attracting more of that, simply because there are so many people doing stupid things. Engagement and attention economy, no matter how stupid or rotten.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be thought police. So maybe-
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight up bans. (educated ourselves, or politicians legislate based how bad headlines are, or it was so egregious it genuinely ticked them off… …)
I'm curious how much of that will keep occurring though? These underage influencers I assume had a following that existed that they want to manage. But if you can't start one without an agency or an adult running things won't that dampen the amounts of them?
>Underage influencers
Anyone who has hone so far as to become an influencer is already a lost cause. No law could save them.
That’s not really a loophole though. We have child actors in Harry Potter.
Perhaps we should stop that too.
That's the legal loophole that I'm sure a tiny number of people are using. In the real world, reportedly around 3/4 of kids under 16 that were using social media still are by either having changed their age during the window and using a sibling or older friend to do face scans for age recognition, or by creating new accounts and again using an older friend/sibling/relative etc. for the age verification. I heard about the ways children of some of my cousins got around it at Christmas, and their parent's didn't care!
The most embarrassing thing is that our Government thought the idiotic idea was workable in the first place... But of course now they've gone and made things worse, because now kids' profiles pretend to be older, so more inappropriate stuff (like gambling ads for those who put an over-18 birthdate) can get targeted at them - great job, eSafety Commissioner!
The number of times I've had to lie to websites on my kid's behalf is horrendous. I resent governments and companies for putting me in that situation.
But it's a good lesson, I suppose. It changed the lesson to my kids about lying from "lying is bad", to a more sophisticated "lying is bad for these reasons, and so these lies are bad, but those lies are not."
Yeah, I think it's overall bad for society, but on an individual I'd definitely do it too (within reason) for certain services if I had kids that age.
But it feels like by making silly laws like this that aren't likely to be respected by much of the population is bad for the rule of law, which is bad for society. But fair enough, the rule of law is only a good thing as long as the laws are (on the whole) good.
We have a lot of this problem in Australia because as much as we pretend not to be, we're pretty authoritarian in regulating personal behaviour. For example in my state they're currently criminalising riding EN15194 compliant e-bikes above 10 km/h (literally 6 mph, slower than a jog) in my state on 90% of the bicycle path network (90% of the network are 'shared paths' so you'd only be allowed to ride at the bike's full 25 km/h (15 mph) motor limit on the small amount of dedicated bicycle paths). That and requiring anyone with any e-bike to have a drivers license - which cuts out people who can't have a license due to disability, medical issues but who could still ride a bike, or anyone under-16.
It's very silly, almost completely unenforceable and again just going to create huge non-compliance and further teach people that laws are silly things to be ignored... I really don't think that is good for society, and I've observed that the more Government has tried to regulate our behaviour, the less responsibility people seem to take, and the more the Government tries to further regulate.
So I think a big criteria of evaluating any new bill is "are most people actually going to respect this law", but all my experience with politicians is that they prefer magical thinking of believing that anything you make a law will immediately be fixed, even if it's impossible to adequately enforce or even technologically impossible to implement. Every time I've been involved in public consultation processes I'm constantly arguing practicality of the actual bill and they're arguing about the ideals that drove the poorly thought out laws...
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
This is almost verbatim the same argument that people make in support of allowing child labor in factories.
Children do not need, nor are they entitled to, any kind of "freedom" to work for a living.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
In the weekends they can stock shelves, deliver pizza, deliver newspapers, wash dishes, babysitting, feed animals or other typical jobs for children in the age range of 12 to 16.
>They should focus on school.
Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years? The direction we give kids coming up always seems to lag behind reality by 10 or 20 years. Perhaps we shouldn't stand in the way of the new generation figuring things out for themselves in this brave new world. The old playbooks to a solid middle class life are increasingly outdated.
> Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years?
Also so they don't end up stupid and useless like a potted plant. People with too little education are easy to manipulate and dim. They're perfect fodder for the propaganda machines.
It would be nice if we could just let kids loose like wild animals and they'd, somehow, figure everything out. But no, we actually have to try. Otherwise they end up illiterate and eating so much candy they throw up. Because they're kids.
None of your concerns are relevant. We're not talking about 6 year olds here but presumably 12-16 year olds. And the issue isn't whether they drop out of school, but whether school must be their sole focus.
Since when did being an influencer become 'one of the few remaining methods' to make a decent income?
I don't think it truly is, but I do think that the younger generations think it is.
My nieces and nephews really don't know what they are going to do in their futures because so much is uncertain right now.
If it feels like a longshot to expect normal 9-5 office jobs to be around in 5 years, and it's also a longshot being an influencer, then why not go for the influencer thing?
Less education, more peddling products on Instagram is... certainly an opinion that exists.
I have long thought that all content (local and remote) should be properly labeled with metadata. Just like the cans of soup in the supermarket, you don't have to open it to find out if it has peanuts, lactose, or MSG in it; you should be able to filter data before accessing it.
You could define a set of 5 or six categories (nudity, sex, drugs, violence, etc.) and have a scale from 1 to 10 for each. Each content producer would rate each category according to defined criteria.
Then each user, or their parent, can set what their own acceptable level is. If you set your violence level at 4 then nothing level 5 or higher will load.
There are some showstoppers here, though. You have to either:
A) Change the laws in all countries (a non-starter), or B) Restrict access to only countries that obey those laws
And Option B is a non-starter to the freedom crowd.
Not to mention all the other issues with labeling, such as:
A) How to label in an internationally-agreeable way B) How to prevent abusive mislabeling
It's fraught, this path.
V-Chip all over again. Now with mandatory browser extensions which hook into the OS' parental controls.
It's no better.
We need a truly distributed point-to-point internet asap. Politicians going to do everything to limit free speech and free ideas in the name of protecting children while they already got all the powers to investigate and stop child abuse.
https://meshtastic.org/
Did you intend to link to Meshtastic as an example of how not to achieve your goals? Because it definitely isn't capable of scaling up to anything like the whole internet, and the project struggles to agree on any goals they want to reliably achieve.
It is something, at least you can chat with your friends freely.
There are so many caveats and limitations that bringing it up in this context is downright dishonest. The most you could fairly say is that some of the philosophy driving some of the meshtastic developers is what you want to see applied to the development of an internet-scale network (which in reality would have less technology in common with meshtastic than with the current internet).
Alas it is the great contradiction. Federated technologies are brilliant for peer-to-peer but many struggle to scale because the designed redundancy tends to crush their efficiency.
Really depends on the context. Email works because of its limits. Remove those limits and weaknesses start to appear.
>We need a truly distributed point-to-point internet asap.
Yes.
A mesh network isnt point to point. Its a mesh.
So a mesh isn't made up of point to point connections? I'm pretty sure if you have several they start to look like a mesh (and every security site's banner)
Sure but I cant communicate with you in a point to point fashion, in a mesh network I am hoping that I have possibly hundreds of disinterested nodes between us. But like, are those nodes coordinating on censorship? Are some of the nodes recording your metadata? Are the nodes incentivized to carry the quantity of traffic you require?
Really the "fix" the ultimate goal has to be direct point to point.
The real internet is made of point to point connections. Doesn't mean anything.
It’s not online age verification. It’s online identity verification.
Would you vote for that? Prove who you are to visit this website? Would you do it to access Hacker News? Your newspaper?
Didn’t think so.
It's turning using a computer into a privilege that can be revoked by the government at any time, for any reason.
Neither HN nor my newspaper run content that needs age-gating.
You would think so but some future authoritarian or paternalistic government might disagree. Maybe the government will say that a newspaper should not report on the poorly built bridge that collapsed and killed some people. News of disasters (or death in any form) might be considered two sensitive for children to accidentally be exposed to through a newspaper.
A great argument for why the bloated executive powers should be clawed back by Congress. But not a strong argument for why Congress should stay hands-off.
Also, if something like that happens, that's a blatant 1st Amendment violation and will be enjoined as fast as the case can run up the judiciary. Today's SCOTUS is very 1st-Amendment-friendly (to the chagrin and delight of various flavors of both left and right).
HN has 'user-submitted content' which tends to be one of the categories that these laws target. Newspapers can also run stories on disturbing events that can also fall under these laws. They are often incredibly broadly defined such that it's easier to describe what they don't cover.
I want that. I'm tired of bots being half the internet traffic or more. It's driving the general public insane and anonymity on the internet has zero utility. If journalists need to send sensitive information, they'll always be able to use Tor.
I think there's plenty of utility. People can express opinions that they hold honestly but would fear social retribution for if it could be tied back to them publicly. For example, any political opinion that I hold that's modestly center or right of center I would not appreciate being attached to my name online since people are completely incapable of nuance or compartmentalization.
If you wouldn’t make a political statement in a town hall setting where you’re going to show ID, then you probably shouldn’t say it on the internet.
But keep in mind that these laws don’t result in your identity being public. They will ultimately result in the sites you’re posting on know that you’re an enumerated individual. The ultimate benefit as I see it is removing outsized leverage over public opinion by botting likes on your statement or otherwise operating tons of accounts. It should also eliminate threats of violence from the digital public square, since building a prosecution pipeline against those would be easy to do. Same with child grooming, but I’ll acknowledge there’s a way to make that argument in a glib way, as an excuse to realize some of the other goals. It is a real problem though.
As with many detractors of anonymity, it seems that you're assuming that the authorities and neighbors you'll deal with will always be virtuous, and not corrupt nor vindictive toward opponents. Maybe you'd like to expose the town's government's corruption or mismanagement at the town hall, but the town is run by a family with a lot of influence and power over everything that happens within the town. You live in the town, fear for your safety, and have no good way of anonymously opposing their corruption, so you stay silent and they get to keep their power.
I don't see how these laws wouldn't make your identity public to someone, even if it's not the public at large. But it'd be enough for that someone to be an individual or entity who turns out to be interested in silencing your voice. Their knowledge of your identity would probably give them power to silence you not only on their platform but also on other platforms, if access to those other platforms is also tied to one single identity.
Bots are a problem but I suspect there are other ways of dealing with them, ways that don't involve making anonymity or pseudonymity impossible.
I disagree. I won't repeat my comment, but it carries all the information you need to know.
Thank you for not repeating your comment. Have an excellent day!
I want to read your comment. But first, so I know I'm not dealing with a bot, what is your full name and address? Please upload a photo of your ID as well. Thanks.
In the age of AI I think it’s only necessary and inevitable to implement some of kind of internet ID system to stop the massive onslaught of AI generated fraud, malicious hacking, and spam. If age verification is a Trojan horse to erase online anonymity, so be it, I see that as a worthy goal.
Humans are inherently social, and social networks are based on trust. Trust is primarily a function of reputation, peer pressure, and legal consequences. Reputation requires tying behavior to a stable identity. Peer pressure only works when you’re not anonymous. For there to be legal consequences for bad behavior, we must identify bad actors. I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
Also I don’t think that the “pro privacy” activists really understand the scale and severity of harm being done to children through the internet. I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
My first question to you is whether you are a pro-privacy advocate yourself, znnajdla. I don't see any biographical information listed in your profile so I'd initially assume that you value privacy on some degree. I am curious as to whether there are contexts where you want to be able to post an opinion through a pseudonym, without your ideas being easily tied to and subjected to judgments based on your legal name, your ethnic background, national origin, etc. Would you be willing to give up pseudonymity forever?
You speak positively about peer pressure, but on a basic level, peer pressure is power excercised against non-conformists. Robbers and abusers are non-conformists, but activists and reformers are also non-conformists. Peer pressure is often used in certain highly oppressive societies to enforce values I'd consider downright evil. Such societies take great care in limiting independent, anonymous access to digital tools and networks. Personally, I'd really like to keep living in a free society where there are ways to communicate and express non-conformist ideas without having to worry about who can easily stamp out such ideas. I think digital ID opens the way to oppressive societies which can wholesale block specific individuals' access to any effective communication tools. Digital ID us an overcorrection to a problem that DOES need to be corrected, but not in a way that destroys various essential aspects of free societies.
> I don't see any biographical information listed in your profile so I'd initially assume that you value privacy on some degree.
Extremely powerful entities like the CIA or NSA could easily personally identify me from my HackerNews profile if they wanted to, as could a dedicated attacker. The problem with "privacy" on the internet right now is that it's a lie - you only have privacy from your peers and ordinary citizens, but not from powerful entities. It would be better if we had a level playing field and everyone could be identified by everyone. Then the normal evolved human behaviours of trust-based social networks could function properly, and we could also fight AI-bot-based social media control, scam, and fraud.
It's not "privacy" it's "information asymmetry" which I'm attacking.
We will see how your opinion changes when someone steals your ID and voice and you end up being defrauded due to the government chosing the cheapest Indian shop to mishandle your data
Not going to be any worse than the oncoming onslaught of AI-powered scam, fraud, and hacks that are enabled by a lack of legal consequences.
There is a vast difference between being scammed and being defrauded. The latter can happen without any interaction with you by criminals using your leaked personal data. An AI empowered scam is just that. A scam can be avoided. Leaked IDs, voice and identity not so much
My point is that the data that you don't provide cannot be leaked
> would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
We should sedate everyone and lock them in secure concrete cells, with food and water provided through tubes. My proposal will save far more than a single child from being killed. I really think the "pro existence" activists don't understand the scale of the harms, and how we can prevent them all by having everyone be permanently unconscious!
> Trust is primarily a function of reputation, peer pressure, and legal consequences.
The trust is somewhat of a one-way street. We are supposed to trust the entities in power. If we break their trust, there are consequences. If said entities break our trust, we can do little about it.
> I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
For some, perhaps. However, I also would rather protect people from a potentially grim future. What is permissible and acceptable now may not always be the case in the future. The Holocaust, for example, only ended 81 years ago. The notion of another one, even against different groups, seems completely infeasible -- the same as the first one.
> I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
Tone is hard to read in text, but are you be facetious? If not, you are essentially saying that you would support shutting down the Internet to protect even just one child. Yet, despite these real and active harms that already exists, you will continue to still use and profit off the Internet in the meantime?
> you will continue to still use and profit off the Internet in the meantime
If I stop my internet use that won't save anybody, so there's no point in doing it. If shutting down the whole internet is necessary to save a life, I would support it. The only reason I don't is because that's not possible and even if it were possible it would not actually save more people than it would harm right now.
There are several holocausts going on around the world right now. It's not completely infeasible for there to be another one. I doubt there's been a time in human history where there wasn't at least one holocaust going on.
The German one stands out only because they fought us.
I concur, but only if we sub genocide for holocaust. The two terms are similar, but not interchangeable. I think the distinction is important, but that should not detract from your main point.
You do have a point that Holocaust stands out because we fought Germany, but I would also argue that what makes the Holocaust unique was the speed and efficacy in which it was systematically carried out.
I've heard that we could use zero-knowledge ID proofs to show someone is of age without revealing any more but I don't think that's the plan and the demand for age restrictions doesn't feel like a grassroots effort of concerned parents. It feels like an NGO/bureaucrat driven law and I assume its purpose is to de-anonymize people on the internet.
In some cases, it also seems like a lobbyist-driven effort that would benefit certain companies likely to be hired by the government to provide identification services.
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
Because it's not about children but requiring identification to speak online.
That's the cynical view, yes, but we can see educational standards and performance going down in the United States, we have seen plenty of scientific and medical studies showing problems with children and more specifically teenagers using social media. I'm not one to want to want to limit someone's rights, but it seems like the trade-off here is in favor of requiring age verification at least for social media companies.
Separately I still don't fully agree with concerns raised regarding social media and identification for everyone. Bots, people who are online just stirring up trouble, &c. are causing pretty significant challenges and problems for society. If you spew a bunch of racist stuff for example I think people deserve to know who you are.
And you know we do this all the time. Folks want gun registries and things like that (and I agree, as a matter of practice, but not principal) so I'm not sure why we're ok with that form of requiring identification to exercise your rights and against this one other than political priorities.
Maybe requiring identification to speak online is not the intent but it would likely be the practical effect of the laws that were originally intended just to help children. It's not enough to think about laws' intent, but also their practical effects.
We haven't even mentioned the censoriousness that already takes place in various online forums not because a user said something racist or was stirring up trouble, but because moderators were vindictive, petty, or lazy, or because the automated moderation tools in place were heavy-handed and unintelligent. I don't look forward to that kind of moderation spreading everywhere and made more efficient by reducing everyone to a single identity. (Maybe Joe Contrarian has some opinions worth listening to, but it's just easier for the moderator of a forum to see that he was already publicly blacklisted by another unrelated forum, and just blacklist him on this one, too.)
At the end of the day they are private websites and the owners get to decide all of that stuff. Start your own, or just stop posting and let such folks have their echo chambers. One of our problems in society is that folks seem to think there is a need to post on the Internet on some forum - stop giving others power over you. You’re just posting to a bunch of anonymous people. They may be bots for all you know. Who cares?
> Maybe requiring identification to speak online is not the intent but it would likely be the practical effect of the laws that were originally intended just to help children. It's not enough to think about laws' intent, but also their practical effects.
Right we should analyze trade-offs. But you are quite focused on censorship which I am also generally concerned with. But are you really being censored by being identified and associated with what you say online? In public you aren’t anonymous - why must that extend to this digital public square?
It will spread to everywhere else if we allow it for social media. In Australia for example, mandatory age verification has already spread to video games.
I'm with you on the slippery slope argument. I do mean that I think we would solve most problems with just an implementation on social media.
In the US for buying games online we've had age verification for a long time. For in-store purchases you see that too. Same with movies.
Shows what my gaming preferences are when I have never come across these restrictions here. Sonic Mania is not exactly risque stuff.
Indeed, social media companies seem to big proponents of the US legislation.
https://www.politico.com/news/2025/09/13/california-advances...
Big social media companies are likely overjoyed to be able to get discrete, government issued info of a person's full legal name, date of birth, residential address (as is printed on US drivers licenses) for advertising and demographic profile targeting purposes. And then be able to correlate it with their existing social media history/clicks/profile, browser fingerprinting, IP address, daily usage patterns, geolocation. It's a massive gift to them.
I doubt they need that to identify you. There are also lots of other problems like algorithmic manipulation. But also just stop using these junky websites. Everyone always complains about Meta doing this, TikTok doing that, and it's like if all they do is make you mad, stop being their user/customers?
It's very hard to stop being their users/customers when they're the only platform where people are gathering for that particular purpose. The nature of walled gardens and network effects often mean that there isn't a viable alternative.
It's bad when the choice one has is between 1) using a platform that's significantly problematic or 2) being disconnected from everyone you'd like to connect with because they're only using that platform.
It’s pretty easy. I haven’t had social media besides LinkedIn since, I think 2013? I participate in all sorts of events, I know about things going on in my neighborhood and city, and I have quite a few friends. You don’t need this stuff and it’s just going to suck up more and more of your time and attention misleading you in to believing you need it.
You’re not connected with anyone. It’s a surrogate activity.
Be careful saying you don’t use social media or soon you’ll have a wholly off-topic sub-thread about whether or not HN is social media too, even though we’ve all read the same tired arguments from both sides about a billion times in other threads.
You're right, and if someone wants to say I have social media because of this forum that's totally fine. I just mean I don't use any of the major social media platforms, well, except LinkedIn. And I just haven't gotten over the hump yet on deleting that one too.
Good: some commenters here realize it's an attack on privacy
Bad: some still entertain the idea that we should do age verification using some sort of crypto primitives
There is no reason for age verification at all.
I am from the goatse generation. Rotten.com. steakandcheese. Horrific stuff tbh, I mostly stayed away from it, and I didn't need a helicopter government to protect me from it.
The moment you accept the narrative that kids need to be protected from the Internet you have already lost.
You've already condemned those kids to a life of slavery. So much for protecting them.
What we need is not online verification, but a competent government that does its existing job well.
Who's been arrested over the Epstein files? Who is protecting those kids?
No one.
That same government wants to "protect" your kids by KYCing everyone.
Give me a break.
Over a decade ago, on the website of a cable news network named after vermin, you could watch an uncensored video of terrorists setting someone on fire.
Right? I especially don't understand where some of the "think of the children" attitude on porn sites, as they for the most part already ask for your age and if you didn't get some kind of amusement out of seeing tits as a teenager you're a liar
It's a function of our society becoming more puritan and conservative in the past 10-15 years. This has been a slow burn.
We are back to perceiving viewing boobies as an existential threat to people. Currently, sexuality is being demonized all around, and sexual morality is once again becoming a currency in society.
I encourage people to talk to some Gen Z kids. They're much more puritan than millennials. They're focused on virginity and the moral superiority of monogamy. It's bizarre.
I spend most of my social media time on tumblr, and it's really funny to see the whiplash of attitudes between the older and younger gen z. The younger ones tend to be the puritans and the older ones are all polyamorous bisexual furries who want to have sex with robots (obviously exaggerating but not by much)
Nah, that already didn't work because corps are very good at creating network effects in children and will set up multi-billion-dollar businesses around them. And then the kids with protective parents become the weird ones in school. I'll die on the hill of curtailing this stuff in a privacy-preserving way.
> I'll die on the hill of curtailing this stuff in a privacy-preserving way.
At some point you'll realize the contradiction in not trusting these "multi-billion-dollar businesses" to the point that you are risking enslaving humanity and "dying on this hill" and yet at the same time trusting those same businesses to implement this dystopian system in a privacy-preserving way.
When that realization hits, it will be a loud sound, possibly heard by nearby telepaths.
That's fine, say hi to the telepaths in advance for me
The Electronic Frontier Foundation set up a resource page for this:
https://eff.org/age
Their guide:
https://www.eff.org/files/2026/04/09/condensed-age_verificat...
Unfortunately, their most prominent call to action doesn't seem to address the various state-specific and non-US legislation (focusing on KOSA instead). Here it is:
https://www.eff.org/pages/help-us-fight-back
The EFF has long been a skin suit my dude, they're just here to dissipate and confuse opposition.
>they're just here to dissipate and confuse opposition
What do you mean by this?
There are lots of ways to implement identity verification while preserving privacy. It's actually a super interesting engineering problem. Estonia has an excellent model to build on. The government can maintain a "traditional" ID system based on documents and in-person verification, and provide you with a device similar to a yubi-key or Bitcoin hardware wallet that could be used to share specific, cryptographically verifiable claims with third parties, like your age, or even just a boolean "over 18", but also your name or other information if you choose, with a way to control the access and audit which parties have verified which claims with the govt.
In Poland online banks to that. You can verify your identity for government purposes with the use of your online bank. No need for government to set up a scheme to confirm millions of people in person.
govt, banks, whatever. I don't care who administers the system as much as I care about the fact that they're highly regulated, and you have the control needed to expose as little information as necessary to confirm things about yourself to third parties (like just "over 18" for many sites, or your full identity for other things if necessary).
The irony of posting ethical social reflection on X though...
https://xcancel.com/GlennMeder/status/2049088498163216560
sadly not having a twitter account in order to read the fucking internet was my hill to die on.
We simply don't need online age verification. It's not the state or private business' job to parent children. It's their parents job.
This is not only unnecessary, but will with 100% certainty lead to negative downstream affects, either via leaks, or the state being able to find people for things that aren't crimes once they're adults.
There's simply no good reason for it that outweighs the bad. But what it really boils down to is completely unnecessary.
And the piece nobody is even considering...
Responsible parents don't have separate OS accounts for their children.
I’d wager most people want more censorship of the internet.
Really the hill to die on is that the first amendment should preclude any content-based restrictions for anyone. If you believe children shouldn't be exposed to certain materials that's between you and your kids, and should not involve the government whatsoever
It is not like a digital control for id verification could be used anyway to control a narrative in war times right?
If you don't use X/Twitter anymore, XCancel makes it possible to read threads when not logged in: https://xcancel.com/GlennMeder/status/2049088498163216560
Nothing against Twitter, but I just don't feel like logging in, so that site makes it way easier to read this. Also it doesn't take like 900TiB of RAM to render.
If this does not work, use nitter instead
While we've been agonizing over Age Verification (real or planned), Greece has apparently introduced a ban on anonymity on social media. I'm not liking where the world is headed, but I have no idea how to push back against it.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there are. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through are impatience and self importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
Yeah, yeah, but the children...
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half arsed way. I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
A photo identifies you. This is the digital equivalent of having a photo taken of you upon entering the mag store, stored digitally forever, shared with government, and tied to every magazine you read and purchase.
> I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
a convenient record of your face is all we need
doing a selfie with the webcam
First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification. It's not explicitly part of the proposed law but there are only a handful of companies who meet the qualifications to provide this service (id.me, Persona) and this is how they do it.
I believe if you are a "minor" then you can go the post-a-selfy route.
If someone wanted to be a martyr and just uploaded all their personal documents so they could be accessed by everyone, I wonder if an interesting court case might follow.
I could imagine it ending with a court ruling that people are responsible to protect their own personal documents which... yeah, that would muddy the waters in a world where every website expects to see your ID.
The verification apps are starting to require live video selfies to verify that the person doing the verifying is the same face as the person on the scanned ID credential.
Imagine so if that was a pltr right Or like someone who uses pltr What could possibly go wrong? People are being paranoid for no reason!
> In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification.
That's not just the plan - that's what's already legally required in many US states.
These laws were introduced by the explicitly religious right-wing groups like Exodus Cry and Morality in Media, as ways to de facto outlaw pornography (in their own words). They've since been laundered into the mainstream so the general public is unaware of the root cause.
Whether it can be done this way is besides the point. It is about how regimes like ours in the US that have demonstrated an interest in spying on their subjects choose to regulate this over time.
Now Persona has your picture and PII. Pray they never have a breach.
Reddit is one thing but would you do the same for a porn site?
Does it not sound insane to you that you need to expose your biometrics to a corporation just to make anonymous posts on a forum?
It really is insane that some people don't realize how biometrics are just as bad as every other option.
I have a fair bit of fatalism on this one.
Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.
Much like brexit or for that matter trump reelection I just don’t have much faith in wisdom of the democratic collective consensus anymore and I don’t think it’ll get any better in an AI misinformation echo chamber world. Onwards into dystopia
Exceeding gloomy take I know
Contacting my representatives is about as effective as making a silent wish. Whenever I've done it, I'll either get no response, or a boilerplate reply which basically says "I'm doing this, go fuck yourself". Then I'll be added to their spam list. The truth is that my reps don't represent me and they're going to do what they want regardless. After all, I'm not the one backing the truck of money up to their front door.
Yeah I emailed a representative in the UK too.
Took forever to get a response and likely achieved little, but to their credit the response wasn't entirely canned and did at least give the impression that they understood what I'm saying
I don't know what the correct solution to this problem looks like but in my opinion it's not only the kids that need saving. Social Media is eroding our societies and adults are just as hooked if not more.
Since it is so harmful to let children use social media, why aren't parents being put in prison for abuse and neglect when they let their children use social media? Why should everyone else have to suffer when it's parents that should be punished?
(it's because it's not about protecting children)
Because this is a golden opportunity erode privacy riots under a complete guise of "protecting the children." Same goes for "preventing terrorism" and various other attempts to appeal to authority.
Just a reminder that the YC funds many of the companies pushing these laws and building the surveillance state.
Kids will always find ways around regulation. Look at cigarettes, vapes, alcohol, weed; they will just get it from their dealers. Pornography? I expect something like: download a Torrent, get it from a classmate, share HardDrives in school, get it through an older brother.
>Kids will always find ways around regulation.
And porn companies should always be held responsible for not doing their due diligence and freely distributing porn to minors. Which is already illegal in teh US and most places.
It’s just defence in depth and wholly appropriate for it to be imperfect
And bootleggers will always bootleg, and smugglers will always smuggle. For that matter, murderers will find ways to murder.
Shall we just abolish all laws? None of them have any effect whatsoever, if they are even slightly imperfect... by your rule.
Yeah his point suggests we should stop ID'ing at liquor stores, physical porn stores, etc.
I'm not suggesting that actually. I look at my nephews and see them buy cigerettes, vapes, etc from small dealers instead of stores. Not saying we should just let them smoke, just expecting that they will be able to circumvent online age restrictions as well.
My question is: are digital age verifications the best way to protect kids from harmfull effects of pornography? And my worry is: what unwanted side-effects will age verification have for our society as a whole.
After reading these comments, I don't want to hear any of you suggest that kids shouldn't be allowed to have unrestricted access to smartphones or social media ever again.
How did you think this was going to be enforced?
I don't understand. I would assume most people don't think small kids should smoke crack. That doesn't mean that they are automatically in favor of creating a 24/7 survelliance state just to prevent that from happening.
it didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity they could shim between token generation and application. Allowing governments to produce identity tokens and applications to verify them with the shim blocking each side from knowing of the other.
But we don't have that, so he's probably right.
The question used to be: should we have online censorship?
Now, the question is: what should the implementation details of online censorship be?
Yes, we should have online censorship, and you agree with me. Proof: child porn
I agree, doxxing yourself to some shady gray-market adjacent data broker is not acceptable as age verification, and age verification was safer using the honor system as before. But for some communities, especially social media communities, some kind of verification is better than none, otherwise what's to stop them from being overwhelmed with alt accounts that are used simply for harassment or other targeted objectives?
People should not be able to misrepresent themselves on the internet, it may have been safe in low volumes but it is scary now and will be outright dangerous as a modality in the hands of AI agents. If you think teen mental health is bad now, wait until social media campaign capabilities previously only available to nation states fall into the hands of ordinary school bullies.
Maybe age verification isn't the way to mitigate this obvious risk, but there has to be something that can be done to stop rampant sockpuppeting.
I can't agree with this enough and yet I think the long term danger is masked by the current problems for the majority of voters. I'm not hopeful.
Age verification requires identity verification once — but it doesn't require revealing your identity ever to a third party. With FHE (fully homomorphic encryption), identity data is encrypted on your device and never leaves it in plaintext. Not to the merchant, not to us as the verification service — nobody. We only compute on encrypted data and return a yes/no. I'm building this at identified.app
Id rather we focused on human Vs bot verification given the state of social media influence right now
Honestly, not even in favor of legislating any kind of increased device-side control or age gating. I understand the "this should be up to the parents" angle but I'd push it further: modern tech already allows parents too much control over their children. Freaky helicopter parents are already perfectly enabled to spy on their kids location, device usage, inspect and monitor their conversations, and it's already normalized to an insane degree. Absolutely no reason to make it an out of the box experience to tempt otherwise sane parents to go mad with that kind of abusive power.
To anyone reading this, please take the extra step beyond striking down age-verification laws, and start taking measures to prove to Congress that it's not needed.
Your nextdoor neighbor whose misbehaving child that's permanently on their phone? Help them out.
Your friend that joked about sending death threats to someone? Scold and report him.
That girl endlessly scrolling Instagram? Get her help.
Please take a step back and examine how insane the internet is and how it's affecting our everyday lives. Political violence and mental illness is increasing, and the internet is solely to blame for this.
"If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary." Federalist 51
We're all too familiar with the latter part of that quote, but we're completely oblivious to the former. At this point, we've all but proven that the government needs to step in and regulate internet access. And unfortunately for us, they're going to do it in the most dystopian, authoritarian way possible.
I want to be on the side of freedom and strike this bill down. But when it is struck down, everyone is going to cheer, go on their merry way, and continue to let demorilization, radicalization, and mental illness infect the psyche of the everyday human being, and do nothing about it. And then the cycle will repeat itself.
At this point, I actually hope this bill passes. Not because I want it to, but because maybe then everyone will stop using the internet for everything, and some sanity will return.
We can't just place sole blame on "the internet". Who made "the internet" the way it is today? By and large, it's the same people who are pushing age restriction and verification: Meta. They do not have your best interests in mind. They only seek to deliver new ways of controlling you.
https://tboteproject.com/
I'm not a fan of online age verification, but this is completely absurd:
> Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...
No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into massive first-amendment constitutional hurdles. It would be struck down easily.
There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.
We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
I agree that it's an exaggeration to say that every website, platform, app, and service will require identity verification. I don't think it's inconceivable to think of a future where every website, platform, app, and service that matters will require identity verification (every one that has a significant userbase.) I can easily envision a future where it's impossible to anonymously or pseudonymously post a controversial opinion on any online forum where it's likely to be seen by a significant number of people. Such platforms are likely to be targeted by whoever mandates identify verification and imposes penalties for not implementing it.
To the contrary. America has incredibly strong first amendment speech protection. Any kind of legislation that would prevent people from reading or posting political speech unless they verified their identity, especially where it is popular, is going to immediately be ruled as unconstitutional.
That's the difference. Porn sites aren't places intended for the free exchange of political opinions so age verification can pass muster. If anyone tried to do that with newspaper sites or political blogs or anything like that, the courts would shut down that law instantly.
We've spent the past three decades trying to invent ways to deduce identity and build profiles of what would otherwise be anonymous users. When the government steps in and compels people to formally identify themselves by their government names, what would you expect these companies to do? They're not gonna say "no thanks."
Why the heck would the government compel people to formally identify themselves to read or comment on a newspaper or a blog? That's absurd and unconstitutional in the US.
You're starting from an assumption that is invalid to begin with.
I don't know. Why would the government compel someone to formally identify himself to put cash in a box at the bank? Why would the government compel people to take off their shoes to get on a plane? Or submit biometric data drive a car? KYC for a phone line...
It's not invalid. I have no reason to believe that this isn't going to creep.
We have extremely strong first amendment protections that form part of our constitution. That's why it's not going to creep. It would be a blatant violation of the first amendment.
I think your interpretation, even if correct, is not the current position of the legislature. This post and the thread attached to it is about how it's currently happening. Personally, I don't see a future where you don't have a digital ID. If the government can compel you to provide an ID to, say, travel or operate a vehicle in public, I don't see a compelling 1A argument that it can't do the same to operate computing device on the public internet.
While I don't agree on your characterization of the legislature, it doesn't even matter. That's the whole purpose of having checks and balances, and a Supreme Court that can strike down unconstitutional legislation.
And your analogy between driving a vehicle and posting on a website doesn't work because there is no constitutional protection for driving vehicles or taking commercial air flights. However there is a constitutional protection for speech, above all political speech. That's the difference.
This whole problem is basically parents admitting they cant parent.
Hopefully this will give yet another push towards decentralized, open source services. Platforms where noone and everyone is responsible and the state does not get to decide the rules.
I dont think most people actually want that in practice. That's why we dont have it right now.
I don't think most people have been inconvenienced enough yet. ID verification is invasive enough and should cause enough friction to push another bunch over the edge.
This seems hyperbolic as it's actually a long path between age verification to full digital identity tracking. But I agree that pushing the burden of verification to websites is ridiculous. Like the GDPR requirements where every webpage has an annoying consent modal, the verification and preferences should be controlled on the device you use to access these digital services. My browser should know and enforce my cookie preferences in a way that has a uniform user experience. Likewise, if I am a minor, my parent should provide me with a device (or profile on a device) which knows my age and can use that to inform online services of the age of the user rather than needing to go through a separate process for each service.
For a forum that supposedly consists of hackers and tech-savvy people, this number of comments supporting age verification is concerning.
The author has said a lot about what kind of future awaits with mass surveillance and AI, but I believe it’s not enough. Technofascism Is not that far away.
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States have broad power to do what is being feared in the thread and haven’t already and to think that they’re waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think incorporating an identity layer into the internet itself should happen with the right protections for users and should have happened at the beginning of the net and is probably a result of lack of foresight by the creators of ARPANET.
What I'm hearing you say:
> Our freedom is already being eroded, saying that it is being eroded more is just fear mongering.
> They want to hurt you, instead of fear mongering, find a middle ground where they're hurting you differently.
Social Media is not a thing at all. Social media is a website. Websites are not health or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection or go listen to sounds of 120dB if you want to test your hypothesis that light and sound can't be unhealthy.
Or maybe you aren't being litteral and are just saying that what children see and hear has no influence on their developmemt. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing I believe two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide or you dying from spousal violence or your protracted grief and wasting away because your child accidentally killed themselves. Like to say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful, is frankly messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
You posted a giant, AI generated block of junk science.
Online age verification is an example of the Motte-and-bailey fallacy (https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy, https://slatestarcodex.com/2014/11/03/all-in-all-another-bri...).
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
Ok, maybe that’s a silly thought, but… couldn’t this be provided by Apple/Google anonymously?
When you set up kids devices in your family they ask you to provide the birthday anyway.
I’m keen to see the arguments against this.
Further empowering and depending on either of those companies as a middleman in our lives should make us nauseous.
"Age verification is the Trojan horse. And once it is inside the gates, the surveillance state becomes operational."
Braindead meme. "Age verification" is not a "Trojan Horse". No one, regardless of age, _wants_ to use age verification. They are being effectively _forced_ to ask for it or use it. Age verification (identity verification) is a tradeoff. A "Trojan Horse" is something that people actually want, not an obvious tradeoff, a sacrifice, a compromise. No one is being "fooled" into complying with identity verification in the form of age verification
The surveillance state is already operational. If you use "platforms" then you are already inside the gates with the enemy. The surveillance apparatus is operated by so-called "tech" companies that perform data collection, surveillance and online ad services as a "business model". These companies provide access to and information about internet users to advertisers and law enforcement
If "age verification" dissuades some people from accessing "platforms" (servers) run by so-called "tech" companies, then that is a loss for the companies and a privacy gain for those people. The "hill to die on" is not using "platforms"
These companies are the reason that "age verification" is proceeding. They push the allegedly harmful content because it makes money for them. Further, the companies' "platforms" make "age verification" possible. This is because they intermediate transmissions between internet users through these so-called "platforms". Governments need not comply with laws that protect individuals from government surveillance when they can target "platforms" instead
It is disturbing that anyone would want to "die on a hill" to save "platforms" from "age verification". These third parties are surveillance companies. They built the surveillance state. They already know who you are, they do not need government-issued ID
If the people spreading this "Trojan Horse" meme cared about surveillance, including identity verification, then they would not be defending "platforms" from regulation, they would stop using the "platforms"
Usually Fear is the realm of governments. Modern republics are basically legitimized around the fears of something terrible happening, it can be communism, narcotics, the ozone hole, corona virus, terrorists, immigration, globalization, unrecycled waste or greenhouse effect.
Private entities being frontrunners in AI Fear either means that these companies have too much unchecked power or that they have are covert instruments of governments.
ironically i think we need more social and stronger local social networks that have high identity validation and are "safe" spaces for the plebs. so that the perceived "threat level" from the free internet gets lower. basically hide the real internet a bit behind a small rock. its a slippery slope but it might be the better strategy unless some democratic societies achieve to put more modern "freedom guarantees" into their consitution.
There is a sudden concerted international push for online age verification, and we do not know where this push originates from. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
I asked META itself:
The push for online age verification started gaining momentum globally, with several countries implementing regulations. Here's a brief timeline:
- 2022: The European Union introduced the Digital Services Act (DSA), establishing a framework for digital services accountability and content moderation.
- 2023:
- 2024: - 2025: - 2026:It's true for a lot of things in Western countries.
Evident when the fight against "hate" was suddenly everywhere, and also during covid.
Politicians looks to each other. There’s nothing new in that.
Alternative take: The fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation the US electorate and ultimate impacting the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
You'll need to explain how age verification fixes that.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
That's not what they said.
Playing devil's advocate outside of debate club only serves to promote the devil's point of view.
State your well reasoned opinion where you have considered the facts. Or just say you are in support of this openly.
Disagreed. I'm against invasive age verification methods, but to allow innacurate expectations to proliferate often becomes a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as it prevent betrayed expectations while still showcasing the unnacceptible downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
In other news Greece is banning online anonymity. The final form of age verification is here.
https://www.euractiv.com/news/greece-to-ban-anonymity-on-soc...
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
Are you going to give your cc number to every website in the world? Also, is that really an ID?
It's not necessary to give it to every website. Verification to the website can be a true/false from the OS. In fact that's how it already works now.
I would say it's not really an ID no, which is the point. The post is claiming that a digital ID is necessary for age verification, but clearly it isn't.
Reminder: Age Verification are not being passed to protect anyone but social media companies. But in addition, they will be used for a massive surveillance state. This is the DMCA of the 2020s, but far worse.
“It profits me but little, after all, that a vigilant authority… averts all dangers from my path… if this same authority is the absolute master of my liberty and my life.”
I would say be careful what you choose to believe. Online identity verification is the only way to end the war that’s being waged on the American people by foreign states via social media. If I were a bad actor, I would very much want to convince the public that this is a bad idea.
It's easy for a corporate level bad actor to pay homeless people $20 to scan their ID, and easy for a state level bad actor to fabricate IDs at will
No, I would say it's not that easy, at least not on the scale that they're currently creating accounts. There are ways to do further verification like credit agencies do or how Google does it for businesses (you have to be able to receive a physical mail item with a card containing an ID code).
There’s age verification when you buy a gun. Not on a gun handle.
Kids should not be able/allowed to buy/use devices that are dangerous for them
But the device itself should not care at the fallacious idea “it might be able to”
Enjoy dying on that hill then because without mandatory ID for potentially harmful services like social media, we will continue to descend further into the brainrot that many of you suffer from today.
I'm curious to hear your theory on how it saves us from the brain rot!
Presumably it makes people fearful to post things that differ from the norm, which is what I'm assuming parent means by brainrot (wrongthink).
Brainrot isn’t wrongthink. Brainrot is brinksmanship and zero sum discourse. As a member of the public it’s virtually impossible to know where the real consensus is on any issue today due to wishful thinking backed up by gigantic botnets. Brainrot will make people certain that they’re part of some majority consensus to the point that they will fight legislation like this because being provably part of a fringe line of thinking would cause them psychological pain. Right now, everyone (including the “moon mission was fake” fringe) thinks they’re part of a majority consensus. Even sovereign citizens and flat earthers believe they’re in a much larger cohort than they really are. A lot of these ideas are harming people offline in addition to degrading their personal mental health.
I’m betting that bot activity plummets once accounts are tied to real identities. That’s a discourse benefit. I’m also betting that discussion will become a lot more rational once people have to put their names on what they are saying. Death threats also become more easily prosecutable.
Also becomes a lot easier to target people for speech you don't like, especially if you're the government.
Not really.
Age Verification is very offensive. It assumes guilt and creates risk to no societal benefit. https://en.wikipedia.org/wiki/Right_to_privacy
If it was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse and harm to everyone so that we wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile ons, snuff videos, the spectrum of grift, culture of abusing the "other side")
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, its about large media companies not wanting to take responsibility for their own shit.
meta desperately want kids to sign up. There are no penalties for them pushing shit on them. If an FCC registered corp had done half the shit facebook did, they'd have been kicked off air and restructured.
So frankly its too fucking late. Meta, google and tiktok will still find ways to push low quality rage bate to all of us, and divide us all for advertising revenue.
what can we do?
Agree
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
https://news.ycombinator.com/item?id=47809795
https://claude.ai/public/artifacts/7fe74381-a683-4f49-9c2b-1...
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take it to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
More and more places I go in other states besides Utah, try to scan IDs when purchasing alcohol.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
California grocery stores scan ID too
How does this prevent a second market for one time codes? I as an adult can just get a code and sell it someone else.
Stings that catch adults reselling codes.
It doesn't have to be perfect.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
This type of system is a horrible idea for the following reasons:
1) the cards can just be re-sold which creates a black market and defeats the "cashier physically saw the person buying the card" angle
2) nickle and dimes people for simply browsing the internet (verification can dystopia anyone?)
3) related to #2, it creates winners in the private sector since presumably you need central authorities handing out these codes
I abhor the idea of digital ID verification, but if we're going to do it, let's not create a web of new problems while we're at it.
Is it even theoretically possible to have bearer anonymity and no reselling option at the same time?
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
I set up a porn site that requires your token. Psych! It's not a porn site, it just disables your ID when you enter a token.
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
'consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.'
I don't think it applies, the difference is that codes are digital and can be sold over the internet, anonymously, in a scallable manner.
I still like this solution because all the solutions I've seen have flaws and this one being so easy to explain makes it great to campaign for.
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
Why is it always “think of the children” used to abrogate the rights of adults?
Because it's very easy for the creeps already thinking of your children to paint these rejecting this type of the laws as those who want to see children hurt.
Regardless how stupid this argument is, rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
> Pretty much every person in every culture cares deeply about their children.
I would substitute "deeply" for "superficially". Like, if my parents found some way to prohibit porn when I was an adolescent, I wouldn't say they cared deeply about me. I would say they were misguided and authoritary. The "care deeply" idea you are putting forward is just trying to distil whatever societal norm currently is into the youngs.
Because it is the moral responsibility of adults to care for not just their children but all our children. Occasional surrendering of rights is appropriate in that endeavor.
Because adults remain children. As in, their parent’s kids and therefore property. [edit: I should mention also property of the state beyond that] It’s less explicit in US I guess but in some places that’s very blunt - if you don’t support your parents enough you can be sued for abuse. And there are situations where an adult in us has been declared too irresponsible and forced into conversion camps by parents in the US. It’s insane, yes, and if you’re lucky enough this might be entirely invisible to you. But if you’re gray or trans or autistic and get a but unlucky this can become a very harsh reality.
Protect the children refers to a type of property, not a type of human.
I agree. I don't call it "age verification" though - it is age sniffing. And it has nothing to do with children - that is the lie.
What is fascinating is to see how governments ALL fall for it. There is zero resistance. This is fascinating to me. It shows how little real effort is necessary once you have the lobbyists in place. Kind of scary to witness too.
It is an apartheid system. All apartheid slavery systems will eventually die, so age sniffing will die too. But it will most likely be a long fight as more and more money will be invested by crazy corporations such as Palantir and others.
The whole "debate" is already not logical by the way. Let's for a moment assume the "but but but the kids!" is a real argument rather than a strawman argument, which it is. Ok so ... I am a "concerned parent", for the sake of discussion. I have three young kids. I am not a tech nerd. The kids see "unfitting content" on the antisocial media such as facebook and what not. So, what do I do? Well ... they have a smartphone? Aha, so ... I am not so concerned? Having no smartphone is no option? Ok so ... I say they can have a smartphone, but they may not use antisocial media. Ok. First - in any free society, is it acceptable that this kind of censorship is done on ALL kids? What if I, as a parent, do not agree with this? Well, tough luck - the laws force you into the age sniffing routine suddenly. But, even those parents who want the state to act as totalitarian: why would I want to hand over control to ANY politician for that matter? That makes no sense to me. I am aware that some parents may think differently, but do all parents think like that, even IF they buy into the "we protect the children" lie? I don't want ANY information from ANY of my computers to go into private hands here. So the whole argument already makes zero sense from the get go.
Of course those who know how things work, they know that this is the build up towards identifying everyone on the world wide web at all times AND to make access to information conditional, e. g. if the state does not know you, you can not access information. Aka a passport system for the www. Built right into the operating system too. Windows already complied. MacOSX too. The battle for Linux will be interesting; it may be some hybrid situation, like systemd. And the systemd distributions will all succumb to age sniffing, courtesy of Poettering "this is really harmless if we store your age in the database, just trust me".
>And it has nothing to do with children - that is the lie.
You're not qualified to say that because you aren't a proponent of age verification. That's just imputing motives.
As a proponent of age verification and can tell you it's absolutely about protecting kid from damaging services like porn. It's a common sense control and that's why it has bi-partisan support in the US during a time where there is nearly 0 bi partisan support.
Very unpopular opinion here on HN: one can't stop it without direct physical action against those who push it.
What do you mean by direct physical action? Do you have some examples?
I will be permabanned on HN for these examples.
Use a throwaway
We now know all the arguments. No more need to persuade anyone.
People will show what they are made of.
An attestation-like system to detect humanity at time of post is absolutely for useful online spaces in the era of AI slop.
The writing style of the author is very annoying.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
It could be done with anonymous credentials though. No tracing to who the human is.
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
Until people hit "attest" and then copy the text from ChatGPT.
Those people would be subjected to permanent, identity-bound bans.
Being unpersoned for using copy/paste one time is certainly one of the political proposals of all time
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
> who cares this much about the internet?
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
So you're worried that due to age checks you'll no longer be able to anonymously
> register for classes, pay bills, submit something to the government
is that right?
> is that right?
No.
You asked a specific question, and I answered a specific question (which I even quoted in my response).
but I think the parents counterpoint was that the important parts of the internet (paying bills, buying things, registering for classes) don't require or presuppose anonymity.
You took away the context of my question and thus gave an irrelevant answer. The subject at hand is age verification and anonymity on the internet. For effective communication, one usually tries to make their input to a conversation relevant. https://en.wikipedia.org/wiki/Cooperative_principle#Maxims_i...
Even then your answer isn't really an answer, because you're giving examples of things the internet is required for. Certain situations can require having a car, that doesn't mean you need to care about cars more than the minimum necessary to operate one.
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
What if all services require ID?
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop skip and leap from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
> The best way to not be in a digital cage is to opt out of the current digital products.
Bullshit. These are all-encompassing monopolies and government services. More likely, they'll ban you and you'll end up having to go to court out of desperation to demand that they service you.
This is very limited thinking. If you lacked this sort of imagination 20 years ago, you wouldn't have been able to predict today.
> Frankly I would welcome a world in which kids are not using Instagram or TikTok.
This is the sort of passive reactionary nonsense that causes the danger that we're in. Everything isn't something to give up lightly, even if you think that it will force your neighbor to turn his music down, or get rid of bad reality television. I don't like kids on social media either. I don't like adults on it. I think kids are suffering more from surveillance than from TikTok.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite awhile ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
Chrome, Android, and Gmail are optional to use.
So is social media.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
I'm mostly alarmed by kids parroting Andrew Tate and a whole generation being raised propagandised by Tiktok brainwashing.
Why? Kids have had access to the internet for over 30 years. What is the tiktok brainwashing (I don't use it), and how do you qualify the danger of it from say google news brainwashing, or even (gasp) public school brainwashing? I mean, if we're going to group ban information, at least let people in the local communities make those decisions. Otherwise, we're going to get the Epstein class making these decisions.
If you've been paying attention, you know what GP is talking about and you don't make such silly equivocations.
I guess I haven't been paying attention. Could you post a link so I can see what you are all talking about?
Here's a bunch of interrelated links:
https://www.mediamatters.org/tiktok/tiktok-prompting-users-f...
https://www.dcu.ie/antibullyingcentre/recommending-toxicity#
https://www.theguardian.com/media/2024/feb/06/social-media-a...
https://www.nytimes.com/2021/10/25/technology/facebook-like-...
https://news.sky.com/story/the-x-effect-how-elon-musk-is-boo...
https://www.kcl.ac.uk/news/gen-z-men-and-women-most-divided-...
I see the societal turmoil and strife this will feed as much more dangerous and concrete than some abstract high-minded discussion about slippery slopes. Our society is being torn apart as we speak. We don't have the luxury of philosophizing about what-ifs.
Is Google tracking which teenagers make which posts on 4chan?
Curious about via Google Chrome versus not
Responding to tone but not to content is what a dog does.
Dogs are on to something! Tone matters in persuasion. A whole lot. If the author were interested in persuading (as I assume he must be, given his strongly held convictions) then he should consider his tone more carefully.
looks like you ruffled some feathers with this one
Tone was off
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
Nothing "hyperbolic" about the points made. If anything it's not nearly extreme enough. People have no idea how bad things really are.
A lot of people dismissed RMS's "Right to Read"[1] essay long ago. All the things it was warning about have come to pass, in spades.
1: https://www.gnu.org/philosophy/right-to-read.html
It's mind boggling how far Stallman saw into the future. Saddest part is we're losing this war. They're going to destroy freedom of computation, freedom of information, and it turns out that... Nobody cares. Nobody but a bunch of nerds.
>They are counting on you caring more about sounding reasonable than protecting your kids from a system designed to control them forever.
Do you actually have an argument to make?
He’s 100% correct.
For a start, child are parents responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to me said on the matter if that’s as far as it went, but it isn’t.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word ‘online’ from the above paragraph, because on is the real world. Touch grass, but there’s no way online isn’t real. Are these words not real simple because I telegraphed them to you?
That’s not a world I want to live in.
>For a start, child are parents responsibility
And not distributing porn to children is a porn company's responsibility.
You are repeating a very common talking point but its not a good one.
Age verification laws make it possible to hold services providers liable for breaking the law (it's already illegal to distribute porn to minors in many places, like the US).
It's both true and completely irrelevant that parents should do a better job protecting their children from harmful services online.
Yes, my argument, to restate it, is that rhetoric can be misused to counterproductive effect, as is the case here.
Carefully note that I have neither affirmed nor contradicted anything of the substance of his argument. So defending his position to me is a non sequitur.
Yeah, fair enough.
The goal should probably be convincing people at the margins, and not turning away those in opposition.
Preaching to the echo chamber is probably less productive.
> For a start, child are parents responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes it's the parents responsibilities. Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
So why are there laws that dont let them buy cigarettes and alcohol?
I don’t believe there are, at least not here in Australia.
In Australia, I’m fairly certain it is not an offence for a minor to purchase alcohol or tobacco.
It is an offence to supply alcohol or tobacco to a minor.
Did social media exist when you grew up?
Xanga and MySpace are what my friends had; yes
It's weird that none of your arguments or proposals hold accountable the responsible parties.
You want to force us to compromise when we were minding our own goddamn business.
Responsible parties like porn companies that distribute porn to minors? Parents are still accountable with age verification laws.
If parents suck at parenting, they will suffer.
If porn companies distribute porn to minors, which is illegal in many places such as the US, they will not suffer. Unless you start holding them accountable.
Every major adult content site has warnings that you have to be over 18 when you enter the site. Its extremely easy to use parental controls to block these sites for a kid, and parental controls don't require violating user privacy.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
Invoking the concept of enslavement to describe even a grotesque digital surveillance state is the really off-color part.
Maybe you're not the target, then.
I haven't heard too many people say these extreme-sounding, yet at least arguably true points out loud.
Someone should be saying them, and the fact that it's not your particular cup of tea may not be the biggest issue here.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand, and towards the perceived faults in the author’s communication style. This is a rhetorical slight of hand and it’s highly disingenuous.
It can't be disingenuous if I actually mean for you to take my argument at face value. There is no hidden motive that I haven't stated. I mean for you to focus on the author's communication style, in case you missed how bad it is, notice what's wrong with it, and seek better sources of information about the issue.
You have accused me of "rhetoric", but that is no accusation at all. Rhetoric is the art of persuasive speech. I have not accused the author of "rhetoric" but of "poor rhetoric". Perhaps that is what you mean to accuse me of.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
> how unreasonable he sounds
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously and without the content of your speech being sold to Walmart and the cops. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
> they're targeting your children
Who is the "they" that you refer to? Did you know that many people are in favor of age verification? Like, many parents of children who are at a loss for how to protect them from early access to obscene material? Could it be that that is why a large segment of government actors are moving in this direction? Or must it necessarily be a secretive nefarious play by some evil tech companies in cahoots with that one administration?
Or is it actually the opponents of age verification who are the ones targeting my children by encouraging early access to obscene materials for grooming?
That last point sounds like a conspiracy theory. It should. I wrote it that way to be provocative, and I hope that you, as a result, dismiss it out of hand. But I want you to understand that TFA's argument also sounds like a conspiracy theory to me. If you want me to engage, you want to make a serious attempt at persuasion.
To that end, I do appreciate that you have not adopted the tone of the article.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
Do you think children are harmed by porn? Did you know it's illegal to distribute porn to a minor in the US?
It seems reasonable to me to hold porn companies responsible for distributing porn to minors.
There is actually extremely little evidence for anything when comes to individuals, sexual content, and their sexual fantasies. There is even less evidence available for anything when it comes to minors.
I've read papers on the topic, and the good papers always point out that there is almost no focus on any potential positives. Many "authors" have already made up their minds before they've even started conducting research, and that's only if they manage to get funding (everything sex related gets very little or no research funding).
The difficulty of getting funding for such "research" is probably due to ethical concerns (regarding the methodology itself).
As for focusing on potential positives, I'd be surprised to see any studies focusing on the potential positives of gambling or doomscrolling.
Any arguments of "lack of evidence of harm" sounds a lot like what tobacco or asbestos companies used to claim not long ago.
That's a discussion that's entirely tangential to age verification. However, I think porn should be illegal entirely as it's just prostitution. As such I think porn companies should not exist, the same as brothels or heroin dealers. If they have to exist for practical reasons along with other objectively harmful things, such as alcohol, marijuana or gambling, then obviously they should be regulated to ensure they're not targeting minors.
That does not detract from the fact that the people arguing for age verification are using "think of the children" in order to push surveillance.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
It seems the Epstein class didn't like this comment
Im completely OK with verifying someone's age before distributing age-restricted services to them. That's what an age restricted service is, and obviously we shouldnt let porn companies distribute porn to minors (its already illegal most place). Just dont use porn, facebook, online gambling etc. if you dont want to share your identity.
I can see why it's unfortunate but the idea posited that that it's somehow illegal in the US is ridiculous. You have no right to watch porn anonymously at the expense of holding porn companies liable for distributing porn to minors.
Internet 1.0 was largely read only, ephemeral, or decentralized. Chat rooms, IRC, personal webpages, etc. There was anonymity and there were not age restricted services.
Internet 2.0 introduced age restricted services and the enforcement lagged. The enforcement is now catching up. You can still do all the Internet 1.0 things anonymously but you can no longer gamble online as a 14 year old and hopefully soon you wont be able to watch porn either.
Private companies now can link all your online activities to you. Not an advertisement ID, but directly to you and your loans and your health data and whatever they're selling in the black market. Every data breach is a 100 times. It was already almost possible to directly know about you by buying data, now it's easier.
The point of this is not to verify age really. It is to verify identity. There's no way to prove someone is some age without presenting a legal ID.
Also, it's not just porn, facebook, online gambling etc. It is the OS based on some bills. So ALL your activities.
> There's no way to prove someone is some age without presenting a legal ID.
Sure there is.
Verifiable Credentials and other similar standards allow this to be delegated in such a way that there is no need to present ID or even let the site know who you are. The site can issue a request to a third party that simply provides back "Yep, we attest that this request was approved by someone over 18".
Depending on the exact scheme, the request may forward you to a broker, who will then forward the request (and your web session) on to the trusted third party of your choice which has already performed ID verficiation (usually a bank). The bank sends a signed response back to the broker, the broker sends a signed response back to the requesting site.
Is it perfect? Maybe not 100%, the broker knows there was a request from a restricted site forwarded to a given bank. The bank knows you have approved a request. There is likely to be an identifier of some sort sent from the site all the way through to the back-end so you know you're not being MITM'd. But in theory nobody should have the full picture.
No practical way I should say. Realistically, it's pretty clear that lawmakers really just want to shove it through in the simplest way possible. Which is probably private third parties.
And private third parties are very shady. They have effective monopolies and no significant public face to care about. I think we have seen this pattern play out in healthcare, compliance and other industries already.
Also idk about banks being the effective gatekeepers to the internet and eventually all technology. Just feels like its not their place to do that.
This argument as framed doesn’t make any sense. Porn is (and WAS) Internet 1.0.
There was porn before most everything on the web. Porn is also speech / art.
Anonymous access should be available for any website that wants to share their content on the Internet provided they have the rights to that content.
States that seek to limit that could make a legal argument that they have the right to limit access, but in the end it’s infringing speech. Worse, it’s unenforceable.
And yes, I would make the same arguments for people posting hateful shit or misinformation.