> Google’s hash match may well have established probable cause for a warrant to allow police to conduct a visual examination of the Maher file.
Very reasonable. Google can flag accounts for CP, but then a judge still needs to issue a warrant for the police to actually go and look at the file. Good job, court. Extra points for reasoning about hash values.
> a judge still needs to issue a warrant for the police to actually go and look at the file
Only in the future. Maher's conviction, based on the warrantless search, still stands because the court found that the "good faith exception" applies--the court affirmed the District Court's finding that the police officers who conducted the warrantless search had a good faith belief that no warrant was required for the search.
I wonder what happened to fruit of the poisoned tree? Seems a lot more liberty oriented than "good faith exception" when police don't think they need a warrant (because police never seem to "think" they need a warrant).
This exactly. Bad people have to go free in order to incentivize good behavior by cops.
You and I (as innocent people) are more likely to be affected by bad police behavior than the few bad people themselves and so we support the bad people going free.
>You and I (as innocent people) are more likely to be affected by bad police behavior than the few bad people themselves and so we support the bad people going free.
I know anecdotes aren't data, but my only negative interactions with cops have basically been for traffic tickets. Meanwhile my negative interactions with criminals have been far more numerous, along with several second-order effects caused by their mere existence (like not going to certain neighborhoods at night because of high crime rates). I don't think there's ever been a neighborhood that law-abiding citizens had to avoid out of fear of cops.
Maybe I'm some kind of crazy outlier, but I'm pretty sure that most innocent people are the same.
> I don't think there's ever been a neighborhood that law-abiding citizens had to avoid out of fear of cops.
I think there's a fair number of stories of POC being accosted by police officers because they were in a neighborhood they didn't "belong" in, so your statement is likely inaccurate.
The threat to innocent people posed by incompetent or tyrannical police is arguably much greater than by ordinary criminality.
In small towns across America, corrupt police departments hassle outsiders and issue minor citations as a way to generate revenue. If someone is found to have large amounts of cash for some reason, they often will confiscate it in a process called civil forfeiture. Many US police officers act with impunity because their misconduct will be protected by local prosecutors and judges. There absolutely are towns and neighborhoods good people should avoid because of the police.
Dan White shot the mayor and a supervisor in cold blood and confessed everything to the cops. They managed to stop him from spilling out his premeditation on tape by interrupting him as his confession was getting rolling and the DA failed to win the easiest conviction of his career. The cops then went on a spree of beating people gratuitously in the Castro.
Cops aren't there to enforce the law without fear or favor. They routinely engage in petty corruption and complain when they have to be professional when on duty.
There is a reality distortion field in existence now because almost every police interaction is recorded (body cams are everywhere nowadays) and the ones that go bad are put on full blast across social media and the news, despite them being somewhere on the order of 1 in 1,000,000 encounters.
Seriously, if car accidents were reported the way police incidents are, we probably would have been forced by confused ideologues to ban automobiles 2 years ago.
Personally I’m a cis white male, who’s been a mostly law abiding citizen, and I’ve had dozens of poor interactions with police throughout my life. Additionally I have a probably unusual number of family and friends who work in law enforcement. The stories I’ve heard about co-workers from them are absolutely terrifying. My father’s (retired police officer) advice when I became a teenager was “only call the police when what is about to happen is worse than going to jail.”
I deeply respect the difficulty of the profession and don’t believe that all or even most police are bad people, but there are way too many who have no business being in that profession.
> Bad people have to go free in order to incentivize good behavior by cops.
And they will, next time, and everyone knows it. We don't need an actual example of a bad person going free if the potential is certain enough.
Unless, of course, you're trying to encourage good behaviour in the general case (rather than a codified list of specifics); but that's expecting police officers to be experts in right and wrong. As obvious as such things are to me, I'm aware that a lot of people struggle a lot more with these things. (Or, perhaps, struggle less: I spend a lot of time thinking about morality and ethics, more than is reasonable to expect a salaried worker to spend.)
I think it's okay that we expect cops to be good _after_ the rule exists, rather than set the bad guys free to (checks notes) incentivize cops to take our new rule super seriously.
It would seem that the inverse would need to apply in order for the justice system to have any semblance of impartiality. That is that we now have to let both of them off the hook, since neither had been specifically informed they weren’t allowed to do the thing beforehand.
That is why many people think this should be tossed out. Ignorance that an action was a crime is almost never an acceptable defense, so it should not be an acceptable offense either.
> we now have to let both of them off the hook, since neither had been specifically informed they weren’t allowed to do the thing beforehand.
I'm not trying to be funny, or aggressive, or passive aggressive, seriously: there's two entities in the discussion, the cops, and the person with a photograph with a hash matching child porn. I'm phrasing that as passively as possible because I want to avoid the tarpit of looking like I'm appealing to emotion:
Do you mean the hash-possessor wasn't specifically informed it was illegal to possess said hash?
> It would seem that the inverse would need to apply in order for the justice system to have any semblance of impartiality...That is why many people think this should be tossed out.
Of course, I could be missing something here because I'm making a hash of parsing the first bit. But, no, if the cops in good faith make a mistake, there's centuries of jurisprudence behind not letting people go free for it, not novel with this case.
> Do you mean the hash-possessor wasn't specifically informed it was illegal to possess said hash?
This is literally the doctrine behind the good faith argument and qualified immunity. If they have not been informed that this specific act, done in this specific way, is not allowed, then it is largely permissible.
A stupid but equivalent defense from the possessor would be “it’s in Google’s possession, not mine, so I had a good faith belief that I did not possess the files”. It’s clearly wrong based on case law, but I wouldn’t expect the average person to have a great grasp of how possession works legally (nor would I claim to be an expert on it).
This is effectively what the good faith doctrine establishes for police, even though they really ought to at least have an inkling given that the law is an integral part of their jobs. As long as they can claim to be sufficiently stupid, it is permissible. That is not extended to the defense, for whom stupidity is never a defense.
> But, no, if the cops in good faith make a mistake, there's centuries of jurisprudence behind not letting people go free for it, not novel with this case.
Acting in good faith would be getting a warrant regardless, because the issue is not that time-sensitive and there are clear ambiguities here. They acted brashly under the assumption that if they were wrong, they could claim stupidity. It encourages the police to push the boundaries of legal behavior, because they still get to keep the evidence even if they are wrong and have committed an illegal search.
It is, yet again, rules for thee but not for me. Frankly, with the asymmetry of responsibility and experience with laws, the police should need to clear a MUCH higher bar to come within throwing distance of “good faith”.
>This is literally the doctrine behind the good faith argument and qualified immunity. If they have not been informed that this specific act, done in this specific way, is not allowed, then it is largely permissible.
For criminal actions an entirely different set of standards exists, with longstanding legal precedent. Two in particular: mens rea and strict liability.
Right, and my argument is that the double standard itself is not just. I do know as a matter of practicality that I don't really have a legal leg to stand on here; the law is what judges say it is, and they've said it is the way it currently is.
I do not find "the justice system treats them differently, therefore they are different and the justice system is just in treating them differently" to be a compelling argument that the double standard is just. It's just a circular appeal to authority; any behavior by the justice system is morally permissible under that idea, simply because the justice system declares it to be so.
My question is how is it just that differing standards apply? And furthermore, how is it just that this leniency is granted to the beneficiary of a severe power imbalance? Unconstitutional search and seizure could absolutely be a crime; in this situation, a citizen would likely be charged under the CFAA, which is a crime.
Those are only nominally different, insofar as the justice system chooses to call some acts one and some acts another. It doesn't speak to the nature of the act, only what we choose to classify it as.
I.e. unconstitutional searches could be criminal activity if the judiciary just decides to classify it differently.
There are certainly differences in the nature of the act that we could talk about, but how the judiciary classifies them is only a nominal difference.
Idk why you keep saying way out there stuff, then repeating back mundane stuff.
Yeah, the difference between a bad thing and a good thing is what judges say.
No, there is a difference bigger than "nominal", which means in name only. Go out on the street and try explaining that someone possessing child porn, and cops being handed a subscriber name + child porn image by Google, are exactly the same thing with only a "nominal" difference.
I'm genuinely at a loss for how this doesn't make sense. "Crime" is absolutely a nominal status. Things can be made into a crime or no longer a crime arbitrarily. Abortion was legal across the US, and then it wasn't. Abortion didn't change at all, but how we refer to it did. Ditto for possession/distribution of alcohol, some kinds of firearms, slavery, etc, etc.
I am not arguing that possession of child pornography is good or permissible, my point is that the things police do are only "police actions" rather than "crimes" because we choose to refer to them as such. We could pass a law tomorrow that says unlawful search and seizure is a crime, and then the "crime" label would apply to the police as well. The specific crime would be different, but both would be categorically "crime". It is undesirable to make possession of CP by police a crime because it would interfere with their ability to investigate it, but those justifications do not apply to why unlawful search and seizure should not be a crime or at the very least fruit of the poisoned tree.
> I'm genuinely at a loss for how this doesn't make sense
I'm really not trying to be mean or making charged comments in any of the following, I apologize if it reads that way. I really appreciate your investment in this thread, it wasn't a driveby, you mean what you're saying, you're not trying to score points AFAICT. I think working through my discomfort is the best way to pay that forward. I save the most concise / assuming / judgey version of this for the end of the post.
There's just something very...off....with the whole thing. Like it reads like an intellectual exercise, I get the same vibe as watching someone work really really hard to make a philosophical argument to stir conversation.
You have these absolutes and logical atoms that seem straightforward and correct, but they're handwaving away a whole field and centuries of precedent.
There's this shuttling back and forth between wide scope and narrow scope that's really hard to engage with. Like, yes, we know "crime" is a nominal thing. My mind immediately jumps to "yes, calling things 'bad' is nominal and subjective" ---
Then, my mind transports me back to my sophomore year english class where someone starts free-associating about how nothing can be 100% confirmed to be real. I'm frustrated there, because, yes, that's true but doesn't shed any light, there's nothing to be gained from mining that vein, and doesn't map to how people have to engage with the world day to day.
You also have a very hard time accepting that this isn't reducible down to "unlawful search and seizure via 4th amendment violation" --- I don't mean to be aggressive, here: after a day and a lot of your thoughts, I still genuinely don't know if you understand that these things have ambiguities and that's why there's a whole industry around them.
I think we agree on:
- calling things bad is subjective.
- similarly, calling things "crimes" is subjective, and part of that is contextual (ex. we allow some people to do some things, but not others)
Then from there, I bet you'd agree to:
- therefore, we need some sort of dispute process to sort these things out
- let's say that's called the current legal system
Then from there, it feels like you're asking us to agree to:
- if something is judged to be bad moving forward, it is okay to punish those who did the bad thing in the past, no matter the circumstances
- now lets apply that specifically:
- if cops did a thing that's not allowed moving forward, then it is a moral imperative for the cops to drop every case that involved doing the thing that's not allowed moving forward
That's just way too far for anyone who isn't doing a philosophical exercise.
ex. Miranda v. Arizona established what we call "Miranda rights" -- now a judge has said there's a specific incantation to recite that courts will accept as proof criminals were advised of their rights. Are all cases where the Miranda rights were not read suddenly dropped? No, that'd be laughable; no society would tolerate the legal system dropping every case where someone was arrested in that scenario.
The most concise thing I can say, which unfortunately is judgemental due to the conciseness, is the whole thing reeks of an engineering mind expecting their understanding of the law to be an absolute, somehow overlooking that the whole point of the legal system above entry-level courts is that there are no absolutes. From there, let's say you know that and accept that, because that's very likely. Then what happens with the Miranda rights thing? That's one of countless examples, but it's useful because A) I'm sure you grok Miranda rights if you're in the USA B) the principles you're espousing being applied there would lead to an obviously unacceptable outcome, so if your instinct is to say "yeah, do it, free everyone who talked to the cops!" I know you're just killing time doing a thought exercise --- which I do sometimes too! Not judgement.
I do want to apologize for the hostility or frustration of that comment. It had read as a drive by to me, but it wasn't a productive way to engage regardless. I sincerely appreciate you engaging, and I think your post does bring interesting points and I appreciate you taking the time to write them down.
> There's this shuttling back and forth between wide scope and narrow scope that's really hard to engage with.
I can very much see how it reads that way. My intent was to address the comment one or two up from yours saying that they were different because one is a crime, but that is very much a different conversation than this specific case. It feels a bit like I'm having two separate conversations on my end too, which is somewhat difficult for me to do without either writing a novel or losing track of nuance. I'll make an effort to keep this more constrained so it feels less like arguing with a moving target, that is certainly not my intent.
I'm with you on the parts that we agree on, and the parts that you think I'd agree with.
> Then from there, it feels like you're asking us to agree to:
The part that feels, to me, like it's not asking too much is that we already ask this of every other citizen in their everyday life. E.g. (and I apologize for not having a less contentious example) the ATF has repeatedly refused to set quantifiable standards for when someone is selling enough firearms to need an FFL. It's all about being "engaged in the business" and whether sales are for profit or collecting; there is no hard and fast "you must if you have X sales that meet Y criteria".
That's actually much more clear than it used to be; it used to just be "engaged in the business" and you just had to guess whether liquidating a collection made you in the business or not.
It doesn't feel like a huge step forward to say that the people pursuing crimes need to handle ambiguity at least as carefully as a private citizen. Especially considering that police can get a warrant as a definitive answer, where a judge typically won't answer hypotheticals from a citizen.
Furthermore, that was exactly how it worked until the good-faith exception was made in United States v Leon, in 1984 (not a joke, but I did have a chuckle. It's hyperbole but a cute coincidence). A significant portion of Americans were alive when the good faith doctrine didn't exist, and this evidence would have been fruit of the poisoned tree.
It's a little hard for me to accept that the Overton Window has shifted so dramatically that people are unwilling to accept a system they were born with.
> if cops did a thing that's not allowed moving forward, then it is a moral imperative for the cops to drop every case that involved doing the thing that's not allowed moving forward
I would argue for more nuance than that, but that's close. Briefly, I am arguing that evidence obtained via searches that are found not to be supported by the 4th Amendment is necessarily fruit of the poisoned tree, and should not be admissible as evidence in the case nor as evidence to obtain a warrant for a later search. That may result in the charges being dropped in some cases, and not dropped in others where there is other substantial evidence.
Conjecturing about this case, it seems like they would probably have to drop the charges. I don't know though, maybe they have other evidence obtained via other means they could use.
> ex. Miranda v. Arizona established what we call "Miranda rights"
Aside, but Miranda is an interesting example because he was re-tried without using his confession and the conviction stuck that time. An interesting example that a fruit of the poisoned tree policy does not necessarily require dropping charges.
I am perhaps out of the Overton Window here, but I don't see why that is an insane outcome of Miranda, though I will certainly acknowledge that there would be fallout. My line of thinking is essentially that the text of the 5th did not change, which means that Miranda rights were free for anyone to claim at virtually any point in history (presuming they thought to make the argument). The outcome is necessarily prejudiced: either against defendants who could have argued for rights they didn't know they had, or against the judiciary for failing to establish that those rights exist at an earlier point. It makes sense to me for that to be prejudiced against the judiciary, because they are the arbiters of what rights people have, and had the ability to suggest and establish those rights at any point they wanted. Essentially, if we were going to assign who is responsible for knowing that Miranda rights should exist before they did exist, I would expect that of the arbiters of rights far more than the defense attorney.
I am totally okay with that being unpopular, though. I'm not arguing for the majority of people, just myself.
> the principles you're espousing being applied there would lead to an obviously unacceptable outcome, so if your instinct is to say "yeah, do it, free everyone who talked to the cops!" I know you're just killing time doing a thought exercise --- which I do sometimes too! Not judgement.
Just to reiterate briefly, I do not think they should be immediately set free, but I do think they would be due a retrial without their confession (in the Miranda case specifically) if their confession is material to their conviction. It's not a thought exercise to me, but I may be outside the Overton Window.
I am aware that this would potentially result in some guilty people going free, but I would eat my hat if there wasn't a single person in jail or prison who was innocent and coerced into a confession that could have been avoided if they had known their Miranda rights. I also know that there are no absolutes in the law. It is absolutely a vague mess propped up by piles of precedent that can even be conflicting.
My contention is that given the ambiguity of the law and the power the government wields, defendants should be offered the full protection of the law as we currently understand it. I find the situation frustrating, which makes me look for a source to blame, but I think my real underlying sentiment is a feeling that it is unfair for citizens and defendants to suffer the consequences of the ambiguity of the legal system.
It is hard for me to fathom the despair of someone who was innocent but confessed to a crime after a 12-hour interrogation without knowing that they could remain silent or demand a lawyer. I cannot fathom the despair of watching the Miranda trial and knowing that their lawyer could have argued the same thing, but didn't, and now they're stuck in prison for however many years without any recourse.
That doesn't directly apply to this situation, because I do think this guy is guilty, but these precedents will be used in cases against innocent people. I find it a condemnation of our justice system if we are willing to risk the rights of innocent people to nail a few convictions.
If you have the time, I would really encourage reading the dissenting opinion in United States v Leon (I'll link it below). Justice Brennan has a far more well articulated opinion than me, that is likely less far outside the Overton Window. I'll leave a snippet that I find persuasive here, but the whole thing is worth at least a skim.
" In contrast to the present Court’s restrictive reading, the Court in Weeks recognized that, if the Amendment is to have any meaning, police and the courts cannot be regarded as constitutional strangers to each other; because the evidence-gathering role of the police is directly linked to the evidence-admitting function of the courts, an individual’s Fourth Amendment rights may be undermined as completely by one as by the other."
> I do not find "the justice system treats them differently... circular appeal to authority.
You may not know it, but you are effectively referencing the difference between a "rule of law" and a "rule by law" in the important parts at least.
This goes back to and falls under social contract theory: the "rule of law" in society is meant as the final protection for its members, providing non-violent conflict resolution that is impartial, just, fair, equal under the law, and accessible.
The moment the required components cease to exist is the beginning of a trend towards the failure of society, as it naturally means increasing violence and reversion to the natural order: the rule of violence.
There is a valid argument to be made that despite many people claiming we have the former, we are actually living in the latter.
The latter allows many miscarriages, such as the infamous soviet judiciary example of, "you show me the person, I'll show you the crime".
Possession laws are historically problematic in this regard because evidence can be planted or, in the case of digital systems, created involuntarily (given how systems work, callbacks can be injected by pointing software at a third-party resource to download). Regardless, there are many potential situations where the viewing of such horrifying material is unrelated to any choice made by the person accused.
That of course doesn't appear to be the case here given what's been written, but nonetheless it is important to have firm, objective, and rational requirements to protect citizens. The trade-off is some small number of bad guys may get to go free as a result, and that's a tradeoff anyone should be glad for when it comes to corruption and how it devolves into tyranny unchecked.
The law rarely differentiates mitigating circumstances, often leading to a guilty-until-proven-innocent situation for most when these types of structural flaws are allowed. For example, some locksmith tools are considered burglary tools, and mere possession in some places is grounds for arrest (a felony), even though these tools share their physical shapes with items that have legitimate uses.
Systems without appropriate procedures and processes for punishing abuses almost always lead to totalitarianism when no feedback system is in place to prevent such abuses from getting out of hand, which is why any true American should be up in arms when abuses happen as a result of corruption. Corruption can occur for a number of reasons that do not benefit a person. For a full treatment of corruption, Johnston wrote a book on it ("Syndromes of Corruption").
Unfortunately, many judges today view the constitution as only being binding on government itself (in isolation), and have long taken the literal or constructive ruling instead of going with the spirit of the law, lessening our protections over time gradually but surely. This will eventually lead us to societal collapse.
It is a sad state of affairs, but regardless of the nature of the crime, the ends do not justify the means absent direct survival threats (which cannot be soundly argued in this case). Ends justifying means is only valid against existential threats.
When those means are allowed to change arbitrarily, the very next time it will be you or someone close to you on the sacrificial altar as a matter of some corrupt official's convenience, maybe in retaliation for engaging in your protected right to free speech to limit corrupt behavior or express disagreement; there will only be an indirect link.
These tools are then ready made to be used in retaliation arbitrarily.
That said,
In this case, at least from what I've read, it appears a fairly clear cut case of fruit of the poisoned tree.
Law enforcement could easily have applied for a warrant based on the probable cause of the hash matches, but instead chose not to. There is also the question of the methodology Google uses to manage and enter new hashes into their hash database (which went unanswered).
They would have needed a warrant to justify everything else that came later. That is classic fruit of the poisoned tree. Thus it is a constitutional violation.
Additionally, I'm sure it comes as no surprise to most HN readers who are programmers, but hashes are not unique; they are fingerprints of a file's contents, not the contents themselves, and they cannot guarantee an exact match.
At its core, a hash maps arbitrarily long inputs onto a fixed-size output space, which means (by the pigeonhole principle) there are potentially infinitely many files out there that match any given hash.
Using a hash to match files is therefore not foolproof, and this ambiguity can potentially implicate legitimate activity or obscure a chain of evidence without recourse.
For example, say that initial hash was not correctly identified when it was added to that watch list, because their AI false-positived on it. Or say it was submitted without review as being related to some censorable activity (legitimate under the 1st Amendment). All you have is a hash; you can't verify the content.
This is how censorship or social credit can easily happen under a color of law in private parties' hands; this has been covered extensively in the EU discussions on Client-Side Scanning (and why it's unreasonable given the repercussions of false positives).
When you match only hashes, you don't know what the underlying content is aside from a likelihood/probability that it may be the same as some other file, which is why you need to be able to verify it is the exact same.
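To make that concrete, here's a minimal sketch of the two distinct steps (Python; the SHA-256 algorithm and the watch-list value are assumptions for illustration, since nothing in the thread says what Google's pipeline actually uses). A hash match is just a set-membership test over digests; only a byte-for-byte comparison against the reference file verifies the contents are actually identical:

    import hashlib

    def sha256_of(path: str) -> str:
        # Digest a file by streaming it; every byte passes through the hash.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical watch list of known-bad digests (placeholder value).
    WATCH_LIST = {"d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26"}

    def hash_match(path: str) -> bool:
        # Step 1: a digest match -- probable cause at best, contents unverified.
        return sha256_of(path) in WATCH_LIST

    def contents_identical(path: str, reference: str) -> bool:
        # Step 2: verification means comparing the underlying bytes themselves.
        with open(path, "rb") as a, open(reference, "rb") as b:
            while True:
                ca, cb = a.read(1 << 16), b.read(1 << 16)
                if ca != cb:
                    return False
                if not ca:
                    return True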
When your job is to find such people/things you should be doing what's needed (within the law) to ensure the strongest case possible.
The technical details matter: processes must follow objective measures, be rational, and follow the constitution. Law is procedural, and these are professionals. They should have gotten a warrant at the hash match.
Hash collisions have happened in the past; mathematically, this is known and expected given the structure of cryptographic hashes.
The investigators should have gotten a warrant to confirm.
The failure to get a warrant here was a procedural failure, and it left the door open for challenge. From what I can see in the write-up, it should be dismissed.
Failure to do so effectively sets a precedent that anything not directly addressed by a previous court can be construed as done in good faith to deprive people of their constitutional rights, allowing contradiction and further paralysis in the courts moving forward; it also promotes the interpretation that case law and legislative law override constitutional protections.
Arguably, if the constitutional violation is found and admitted, there is no valid good faith exemption that can be applied to nullify the constitutional cure. The constitution supersedes everything else, including procedure, law, and case law.
The violation must be cured lest the entire constitution lose its power through non-enforcement (and the system based on it degrade towards tyranny), something that has arguably been happening as a result of long-standing corruption which goes unpunished.
Yes, the accused crimes may be heinous, but everyone is equal under the law. The moment this ceases to be true and happen with any regularity, is the day the rule of law has failed, and society then fails back to a natural law of violence. No one wants that.
It may not happen overnight, but it will happen regardless because history is full of examples where these dynamics cause those outcomes.
In many cases today, with very few exceptions, judges who are older come to believe they are above the law and fail to check their power; in the end, they violate their sworn oaths. There is no punishment for them in most cases, and they never go back and correct the mistakes they make (afaik, it is a rare exception if it happens at all).
Any true American should have a solid educational foundation in Social Contract Theory, and the basis for society. You are right to be concerned about the circular reasoning. In the absence of external objective measures, processes, and procedures, such circular reasoning inevitably devolves into delusion.
Your argument is a bit disingenuous because it's not applicable in situations where there is clear law clarifying that something can't be done.
You're pretending that cops are using this in situations where it's known that a warrant is needed, as opposed to it being an exception to "fruit of the poisonous tree" doctrine when new caselaw is being made.
> Acting in good faith would be getting a warrant regardless
That's not what "good faith" means, that's just something entirely made up by you. From a reasonable perspective that could be described as foolish and a waste of time and the public's resources.
> It encourages the police to push the boundaries of legal behavior, because they still get to keep the evidence even if they are wrong and have committed an illegal search.
There's a constant tension between technology, crime and the police that's reflected in the history of 4th amendment jurisprudence and it's not at all like what you describe. The criminals are pushing the boundaries to which the police must catch up, and the law must determine what is fair as society changes over time. I'm not particularly pro cop, but you don't seem to be reasonable about any of this.
> You're pretending that cops are using this in situations where it's known that a warrant is needed, as opposed to it being an exception to "fruit of the poisonous tree" doctrine when new caselaw is being made.
The ACLU has a decent article about it [1].
Beyond that, there is a substantial power imbalance between law enforcement and private citizens implying that private citizens should be favored by the law where possible to even that out (this is well upheld in case law and documents from the founding of the country). As a private citizen, if you want to do something but are not sure about its legality, do you a) yell "YOLO" and go ahead and do it, b) consult a lawyer, or c) just not do it at all? I believe law enforcement should be held to that same bar.
> That's not what "good faith" means, that's just something entirely made up by you. From a reasonable perspective that could be described as foolish and a waste of time and the public's resources.
"Good faith" is at odds with recklessness and negligence; an action cannot be made both recklessly or negligently and in good faith (supported by majority opinion in Leon v United States, which established the good faith exception). I cannot see a way in which taking an action of unknown legality, while possessing both the time and means to take an alternate action of known legality, is not acting with reckless disregard or negligence to the rule of law and thus incompatible with good faith.
From United States v. Leon: "The deference accorded to a magistrate's finding of probable cause for the issuance of a warrant does not preclude inquiry into the knowing or reckless falsity of the affidavit on which that determination was based, and the courts must also insist that the magistrate purport to perform his neutral and detached function and not serve merely as a rubber stamp for the police."
> There's a constant tension between technology, crime and the police that's reflected in the history of 4th amendment jurisprudence and it's not at all like what you describe. The criminals are pushing the boundaries to which the police must catch up, and the law must determine what is fair as society changes over time.
I would genuinely encourage you to review the history of 4th Amendment jurisprudence. It has been continually weakened to the point that only the most flagrant and loudly-announced violations are found unconstitutional, and even then the punishments are virtually non-existent.
Again, the ACLU has a very informative document on it literally called "The Crisis in Fourth Amendment Jurisprudence" [2]. Criminals aren't doing anything particularly new; stashing files somewhere and even encrypting them isn't anything new. Encrypting something in a way that was virtually undecipherable was possible even when the 4th Amendment was written. These are not novel criminal techniques, but the broad liberties given to police with regards to the 4th very much are.
You are welcome to consider me unreasonable. I think there is a fundamental gap in core beliefs causing that. I do not believe criminals are doing anything categorically new, nor that crime is suddenly worse, nor that crime is currently so bad that it demands an exceptional response. Under that set of beliefs, I think opposition to exceptional police powers is reasonable. You seem to believe the opposite, and I can see how my opposition seems unreasonable. I would say that you have fallen victim to unfounded propaganda, and I presume you have a similar accusation to level at me.
Regardless, I do appreciate you engaging in good faith and I wish you weren't ratio-ed on your comment. I do think you have brought interesting points to the discussion.
I'm not OP, but I consider your response quite reasonable.
I found your reply informative and specific, though I took a more fundamental approach in my response to OP, focusing on components required for a "rule of law" compared to a "rule by law" (kafka/soviet style).
I've done a lot of historic reading, and any time those components fail, violence increases proportionally to the lack of agency for non-violent resolution available.
It may just be speculation on my part, but given the repeated cycle and detailed accounts, there seem to be parallels between objective measures of these components and witnessed events, which are what citizens use as a signal to determine whether to take violent action.
The events seem to follow quite accurately along what's been written in social contract theory, and from Thomas Paine's time & writings.
Obviously these types of writings are from times that were dark and violent, and violence benefits few if any, which is why there's good cause in trying to prevent that kind of degradation in existing systems, and the dynamics that cause it.
> the law must determine what is fair as society changes over time
This has been the ideal that has been put forth generally quite a bit, but it also almost always neglects the structural failings that must equally be addressed at the same time.
For example, at what point would you say that case law overrides the constitution?
The law holds the constitution as supreme: no representative has the authority to exceed or violate that which is granted in the constitution, and this pertains to the judiciary as well as the executive and legislative branches. It is up to the courts to enforce this as the last pillar of society (for non-violent conflict resolution).
In a general society with a rule of law, when there is an admitted constitutional violation, it must be immediately cured.
Issuing a decision that prevents a constitutional remedy while recognizing the violation is arbitrary, a direct contradiction, exceeds the authority granted, negates the constitution, shows a violation of a sworn oath to uphold the constitution, and causes the entire "rule of law" and its institutional credibility to be called into question.
If you allow exceptions to the constitution, the fundamental component, "equality under the law", fails, and that means we don't have a "rule of law".
The natural outcome of this is increasing violence, which no one wants because it benefits no one.
Bringing society as a whole from a "rule of law" to a "rule by law", which inevitably (over time) causes society to fail violently towards totalitarianism/tyranny, is stupid but may have short term benefits for the corrupt. The harms of such are systemic and grow exponentially.
It is not a matter of catching up to offenders, it is a matter of competency. This is a professional occupation where corruption is an ongoing structural issue, and actions must be reasonable to protect both society and the individual rights equally.
No true American would accept soviet-style kafka courts without any of the normal protections regardless of the crime, and that is the danger faced with the decision here.
Corruption of the state will always seek to use such types of systems to justify their existence often inducing, planting evidence, or causing such crimes to be committed. Some may be actual offenders, but others may not and no differentiation is made. It may even be done for political purposes such as with The Gulag Archipelago.
It is a slippery slope which cannot be walked back later as the damage will have already been done with the punishment being front-loaded.
If there is a question regarding a boundary of a policy or process, you get a legal opinion, and base your actions on that opinion. This is well established in many sectors, including but not limited to the Business Judgment Rule. This is what is needed for this to be done in "good faith".
Allowing a blanket good-faith exemption and exclusion for government to do anything not directly covered by existing case-law without repercussion is a dangerous precedent towards tyranny, especially when they had the probable cause at the start to do it the right way.
It seems like you neglect many of these important foundational subjects. The lack of accountability control encourages the police and related apparatus to violate the law, and thus violate the public trust when this is unenforced.
The main outcome in a society absent a "rule of law" is overwhelming violence.
This is what most people fail to realize, and many today embrace magical thinking and delusion.
Mass delusion has greatly overtaken this country and will soon destroy it if it is not stopped.
It is of critical importance to base our protective systems in objective measures which are external, and rational thinking and critical reasoning that logically follows without contradiction or circular reasoning.
To fail at rational thinking is to embrace delusion and become schizophrenic, a common malady in the totalitarian state (Joost Meerloo). This is covered well in writings on the banality of evil and radical evil (WW2).
The crime accused is repugnant, but equality under the law, and constitutional protections are sacrosanct, and far more important than any single person.
The 4th amendment is about unreasonable searches and seizures, it is also about "persons, houses, papers, and effects", that is, not files stored in someone else's computer.
The police here considered that a hash match was a reasonable enough condition to conduct a search, and that Google's TOS allowed it. They were wrong, but it is not obvious that they were by just reading the 4th amendment, and the situation is rather new, so it is reasonable to assume that the police acted in good faith.
If I have documents in a locked briefcase in a hotel room, does the police get to read and copy them with the hotel operator's permission while I am in the shower? Assume that the locked briefcase is not particularly tamper proof. Anyone with decent lock picking skills can open one.
Is it your locked briefcase or the hotel's? I believe hotels have the ability to unlock their own safe, so I suspect they're allowed to ask the hotel's permission to look without a warrant.
Also, if a hotel cleaner found illegal material lying in your room, the police don't need a warrant to seize it and prosecute you.
If it's your briefcase then I think they need a warrant.
The type of person who cannot draw a line of semantic equivalence between papers and files on a computer is devoted to a level of obtuseness that is hard to seriously entertain. On par with people who think that "Arms" in the second amendment must only apply to muskets, cannons and such, and nothing after.
Tell me. When you put a bunch of papers in a folder, then put them in a cabinet, arguably under some semblance of organization in order to make later retrieval easier, what are you doing?
Filing.
The entire desktop metaphor (the basis around which most computer UI is built) was chosen in part specifically for its compatibility with non-digital processes at the time of the personal computer's fruition. Files on a disk are, in a literal sense, your papers. They are stored in directories (lists of things and where to find them, a.k.a. folders), arranged under the abstractive auspices of a "file system", and at times "archived" for convenient storage or transport. Gee, same verbiage as what you do with papers... In fact, your papers have nothing to do with dead trees except as an accident of paper being the first prevalent medium for persistent info storage. Your papers cover the set of information through which you conduct your business with the outside world.
Those packets of paper are files. Those collections of 0s and 1s on a disk are files. Files are papers. Papers are protected. The idea that the involvement of a computer in the chain suddenly nullifies the essence of the point being made by the Founders is as worthy of ridicule as thinking they went to war with their colonial parent state only because of a tax spat, or that the civil war was only about slavery. It's evidence of a worldview most tragically impoverished, either by accident (which, while regrettable, is at least amenable to remedy) or by intention to push a state of affairs; to which one can only shake one's head, push on with one's own life, and hope that maybe there are enough like-minded individuals out there to counterbalance the aspirations of the individuals in question.
Already spent more cycles on this than I should have, good day.
And one thing we learn as we've been hanging around in Time long enough to recognize larger cycles is that the world changes, people don't. Even as we change the world.
The rule established in this case is new, hence TFA, and all the time the lawyers and judge wasted on it :)
If I may suggest where wires are getting crossed:
You are sort of assuming it's like a logic gate: if 4th amendment violation, bad evidence, criminal must go free. So when you say "the rule", you mean "the 4th amendment", not the actual ruling.
That's not how it works, because that simple ultimatum also has edge cases. So we built up this whole system around nominating juries and judges, and paying lawyers, over centuries, to argue out complicated things like weighing intentionality.
The court ruled that at the time, when the State Police opened the file, they had no reason to believe that a warrant was required. While the search was later ruled unconstitutional, no court had ruled it was unconstitutional *at the time of the search*. One of the cornerstones of American jurisprudence is that you cannot go back in time and overrule decisions based on contemporary jurisprudence.
From the opinion: 'the exception can also apply where officers “committed a constitutional violation” by acting without a warrant under circumstances that “they did not reasonably know, at the time, [were] unconstitutional.”'
If you're interested, the discussion of a good faith exemption (and why fruit of the poison tree doesn't apply here) begins at page 40 of the doc.
As someone not from the US the fact that "uwu we didn't know" is an adequate defense for the police to do something illegal is really weird. Is there some crucial context I'm missing?
It dates back to the constitutional ban on "ex post facto" laws. Meaning, the government can't retroactively make something illegal. Which is a good thing, IMO.
So, for example, it's illegal at the federal level to manufacture machine guns (and I'm not going to get into a gun debate or nuances as to what defines a machine gun--it's just an example). But a machine gun is legal as long as it was manufactured before the ban went into place. Because the government can't say "hey, destroy that thing that was legal to manufacture, purchase, and own when it was manufactured."
This concept is extrapolated here to say "The cops didn't do anything illegal at the time. We have determined this is illegal behavior now, but we can't use that to overturn police decisions that were made when the behavior wasn't illegal. In the future, cops won't be able to do this."
The government has totally said “destroy the thing that we said was legal to manufacture, purchase, and own when it was manufactured.” That was the entire point of the bump stock ban, which attempted to reclassify an item that they had previously said was not a machine gun into a machine gun, and therefore illegal to own (and always illegal to own, so they weren’t going to compensate people for them either).
More strictly, machine guns aren’t banned by the federal government; rather, you have to have paid a tax to own one, and they’ve banned paying the tax for guns made after X date. If they decide to ban ownership, grandfathering is not guaranteed.
> Because the government can't say "hey, destroy that thing that was legal to manufacture, purchase, and own when it was manufactured."
Actually that's a totally normal way for bans to work.
If a state decides to ban a book from school libraries, the libraries don't get to keep the books on the shelves because they already had it.
The ban on ex post facto laws merely means that, if a ban on a given book is passed today a librarian can't be punished for having it on the shelves yesterday.
Grandfathering in exceptions is just politics - make a bitter pill easier to swallow for the people most impacted; delay the costs of any remediation; deal with historical/museum pieces; and simplify enforcement.
>It dates back to the constitutional ban on "ex post facto" laws.
Not really, that's not how constitutionality works with respect to the government. Ex post facto is about when the government wants to act against you, not when you want the government to behave. They use new decisions regarding constitutionality to undo previous decisions all the time; they just don't want to in this specific case, and are using "well, they would have been able to get a warrant anyway if they had known they needed one" to justify it.
It wasn't illegal (unconstitutional) at the time they did it, which is different from not knowing. They would have had to see the future to know.
Also keep in mind "illegal" and "unconstitutional" are different levels - "illegal" deals with specific laws, "unconstitutional" deals with violating a person's rights. Laws can be declared unconstitutional and repealed.
Laws can also be unconstitutional and remain a law--the law just can't be enforced. For example, in the state of Texas sodomy is still technically illegal, just the law is unenforceable. But if the Supreme Court overrules previous court decisions and says anti-sodomy laws are constitutional, the Texas law immediately becomes enforceable again.
I don't know. I feel that if something is declared "unconstitutional" today, then it was always unconstitutional (from the inception of, or amendment to, the constitution). Unlike "illegal", where laws can come and go, so something that is illegal today can be legal tomorrow. And just like "ignorance is no excuse for breaking a law", I don't think ignorance should be an excuse for doing something unconstitutional.
Just another way cops can be terrible at their job and get away with it. If only citizens could use the Chappelle defense, "I'm sorry officer, I didn't know I couldn't do that".
Let's be clear. This guy had CSAM and was caught using digital forensics. The cops would've been able to secure the search warrant at the time had they been required to do so.
This isn't some innocent person who is spending time in prison because of a legal technicality.
I understand but this is literally how rights are eroded away. It's all good when it's the worst people on the planet, but very quickly it's abused against every one else. Once these rights go away, they don't come back.
The systemic downsides of police overreach happen whether or not a particular person was guilty. In general, throwing out the evidence is an effective way to fight back against overreach. I'm not worried about this guy, I'm worried about everyone else.
The idea that they would have been able to get a warrant limits the damage, but it's still iffy.
The opinion says at the time the warrantless search occurred, one appellate court had already held "that no warrant was required in those circumstances" (p 42). Only a year after the search occurred, did another appellate court rule the other way.
This is the main argument that the search met the good faith exception to the exclusionary rule (i.e. the rule that says you have to exclude evidence improperly obtained). This exception is supported in the opinion (at p41) with several citations including United States v. Ganias, 824 F.3d 199, 221–22 (2d Cir. 2016)
IANAL, but as I understood it, this exception is specifically about cases where precedent is being established. This same trick, or others substantially like it, won't work in the future, but because it was not a "known trick", the conviction still stands.
Not only that, prior to the search another court had ruled that no warrant was required. The new ruling overrides the old one, but the search was in good faith.
I'm trying to imagine a more "real-world" example of this to see how I feel about it. I dislike that there is yet another loophole to gain access to peoples' data for legal reasons, but this does feel like a reasonable approach and a valid goal to pursue.
I guess it's like if someone noticed you had a case shaped exactly like a machine gun, told the police, and they went to check if it was registered or not? I suppose that seems perfectly reasonable, but I'm happy to hear counter-arguments.
The main factual components are as follows: Party A has rented out property to Party B. Party A performs surveillance on or around the property with Party B's knowledge and consent. Party A discovers very high probability evidence that Party B is committing crimes within the property, and then informs the police of their findings. Police obtain a warrant, using Party A's statements as evidence.
The closest "real world" analogy that comes to mind might be a real estate management company uses security cameras or some other method to determine that there is a crime occurring in a space that they are renting out to another party. The real estate management company then sends evidence to the police.
In the case of real property -- rental housing and warehouse/storage space in particular -- this happens all the time. I think that this ruling is eminently reasonable as a piece of case law (ie, the judge got the law as it exists correct). I also think this precedent strikes a healthy policy balance (ie, the law as it exists, interpreted how the judge in this case interprets it, makes for a good policy situation).
Is there any such thing as this surveillance applying to the inside of the renter's bedroom, bathroom, or filing cabinet with medical or financial documents, or political ones for that matter?
I don't think there is, and I don't think you can reduce reality to being as simple as "owner has more right over property than renter". The renter absolutely has at least a few rights, in at least a few defined contexts, over the owner, because the owner "consented" to accept money in trade for use of the property.
> Is there any such thing as this surveillance applying to the inside of the renter's bedroom, bathroom, or filing cabinet with medical or financial documents, or political ones for that matter?
Yes. Entering property for regular maintenance. Any time a landlord or his agent enters a piece of property, there is implicit surveillance. Some places are more formal about this than others, but anyone who has rented, owned rental property, or managed rental property knows that any time maintenance occurs there's an implicit examination of the premises also happening...
But here is a more pertinent example: the regular comings and goings of people or property can be and often are observed from outside of a property. These can contribute to probable cause for a search of those premises even without direct observation. (E.g., large numbers of disheveled children moving through an apartment, or an exterior camera shot of a known fugitive entering the property.)
Here the police could obtain a warrant on the basis of the landlord's testimony without the landlord actually seeing the inside of the unit. This is somewhat similar to the case at hand, since Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
> I don't think you can reduce reality to being as simple as "owner has more right over property than renter"
But I make no such reduction, and neither does the opinion. In fact, quite the opposite -- this is part of why the court determines a warrant is required!
> ...Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
Google cannot have calculated that hash without examining the data in the image. They, or systems under their control, obviously looked at the image.
It should not legally matter whether the eyes are meat or machine... if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
> It should not legally matter whether the eyes are meat or machine
But it does matter, and, perhaps ironically, it matters in a way that gives you STRONGER (not weaker) fourth amendment rights. That's the entire TL;DR of the fine article.
If the court accepted this sentence of yours in isolation, then the court would have determined that no warrant was necessary in any case.
> if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
I don't disagree. In particular: I believe that the "Reasonable Person", to the extent that we remain stuck with the fiction, should be understood as having stronger privacy expectations in their phone or cloud account than they do even in their own bedroom or bathroom.
With respect to Google's actions in this case, this is an issue for your legislator and not the courts. The fourth amendment does not bind Google's hands in any way, and judges are not lawmakers.
The point of the analogy is that the contents of one's files should be considered analogous to the contents of one's mind.
Whatever reasons we had in the past for deciding that financial or health data, or conversations with attorneys, or bathrooms and bedrooms, are private, those reasons should apply to one's documents, which include one's files.
Or at least if not, we should figure out and be able to show exactly how and why not with some argument that actually holds water.
Only after that does it make any sense to either defend or object to this development.
If I import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit, the owner and operator of those properties are allowed to notify police of suspected illegal activities and unlock the storage locker if there is a warrant produced.
Maybe the warrant uses some abstraction of the contents of that storage locker like the shipping manifest or customs declaration. Maybe someone saw a shadow of an elephant tusk or rhino horn as I was closing the locker door.
Pretty much all rental storage, shipping container, 3rd party semi trailer pool, safe deposit box type services and business agreements stipulate that the user of the arbitrary box gets to deny the owner of the arbitrary box access so long as they're holding up their end of the deal. The point is that the user is wholly responsible for the security of the contents of the arbitrary box and the owner bears no liability for the contents. This is why (well run) rental storage places make you use your own lock and if you don't pay they add an additional lock rather than removing yours.
I don't think that argument supports the better analogy of breaking into a computer or filing cabinet owned by someone renting the space. Just because someone is renting space doesn't give you the right to do whatever you want to them. Cameras in bathrooms of a rented space would be another example.
But he wasn’t running a computer in a rented space, he was using storage space on google’s computers.
In an older comment I argued against analogies to rationalize this. I think honestly at face value it is possible to evaluate the goodness or badness of the decision.
> In an older comment I argued against analogies to rationalize this. I think honestly at face value it is possible to evaluate the goodness or badness of the decision.
I generally do agree that analogies became anti-useful in this thread relatively quickly.
However, I am not sure that avoiding analogies is actually possible for the courts. I mean, they can try, but at some point analogies are unavailable because most of the case law -- and, hell, the fourth amendment itself -- is written in terms of the non-digital world. Judges are forced to reason by analogy, because legal arguments will be advanced in terms of precedent that is inherently physical.
So there is value in hashing out the analogies, even if at some point they become tenuous, primarily because demonstrating the breaking points of the analogies is step zero in deviating from case law.
Yes, that is why I presented an alternative to the analogy of "import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit".
Like having the right to avoid being videoed in the bathroom, we have the right to avoid unreasonable search of our files by authorities, whether stored locally or in the cloud.
I have this weird experience where people that get all their legal news from tech websites have really pointed views about fourth amendment jurisprudence and patent law.
I agree. This is a case where the physical analogy leads us to (imo) the correct conclusion: compelling major property management companies to perform regular searches of their tenant's properties, and then to report any findings to the police, is hopefully something that most judges understand to be a clear violation of the fourth amendment.
> The issue of course being the government then pressuring or requiring these companies to look for some sort of content as part of routine operations.
>> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
> This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
1. I don't understand how the text that comes on the right side of the colon substantiates the claim on the left side of the colon... I said "discovers", without mention of how it's discovered.
2. The specificity of the search cuts in exactly the opposite direction than you suggest; specificity makes the search far less invasive -- BUT, at the same time, the "everywhere and always" nature of the search makes it more invasive. The problem is the pervasiveness, not the specificity. See https://news.ycombinator.com/user?id=aiforecastthway
> And of course, what happens when it's a different list?
The fact that the search is targeted, that the search is highly specific, and that the conduct is plainly criminal, are all, in fact, highly material. The decision here is not relevant to most of the "worst case scenarios" or even "bad scenarios" in your head, because prior assumptions would have been violated before this moment in the legal evaluation.
But with respect to your actual argument here... it's really a moot point. If the executive branch starts compelling companies to help them discover political enemies on basis of non-criminal activity, then the court's opinions will have exactly as much force as the army that court proves capable of raising, because such an executive would likely have no respect for the rule of law in any case...
It is reasonable for legislators to draft laws on a certain assumption of good faith, and for courts to interpret law on a certain assumption of good faith, because without that good faith the law is nothing more than a sequence of forceless ink blotches on paper anyways.
I don't think that changes anything. I think it's entirely reasonable for Party A to be actively watching the rented property to see if crimes are being committed, either by the renter (Party B) or by someone else.
The difference I do see, however, is that many places do have laws that restrict this sort of surveillance. If we're talking about an apartment building, a landlord can put cameras in common areas of the building, but cannot put cameras inside individual units. And with the exception of emergencies, many places require that a landlord give tenants some amount of notice before entering their unit.
So if Google is checking user images against known CSAM image hashes, are those user images sitting out in the common areas, or are they in an individual tenant's unit? I think it should be obvious that it's the latter, not the former.
Maybe this is more like a company that rents out storage units. Do storage companies generally have the right to enter their customers' storage units whenever they want, without notice or notification? Many storage companies allow customers to put their own locks on their units, so even if they have the right to enter whenever they want, regularly, in practice they certainly do not.
But like all analogies, this one is going to have flaws. Even if we can't match it up with a real-world example, maybe there's still no inconsistency or problem here. Google's ToS says they can and will do this sort of scanning, users agree to it, and there's no law saying Google can't do that sort of thing. Google itself has no obligation to preserve users' 4th Amendment rights; they passed along evidence to the police. I do think the police should be required to obtain a warrant before gaining access to the underlying data; the judge agrees on this, but the police get away with it in the original case due to the bullshit "good faith exception".
Ok. But that would also be an invasion of privacy. If the property you rented out was being used for trafficking and you don’t want to be involved with trafficking, then the terms would first have to explicitly set out what is not allowed. Then they would also have to explicitly mention what measures are taken to enforce it and what punishments are imposed for violations. They should also mention steps that are taken for compliance.
Without full documentation of compliance measures, enforcement measures, and punishments imposed, violations of the rule cannot involve law enforcement who are restricted to acting on searches with warrants.
> If the property you rented out was being used for trafficking and you don’t want to be involved with trafficking, then the terms would have to first explicitly set what is not allowed.
I don't believe that's the case. You don't need to state that illegal activities are not allowed; that's the default.
> Then it would also have to explicitly mention what measures are taken to enforce it
When Airbnb used to allow cameras indoors, they did -- after some backlash -- require hosts to disclose the presence of the cameras.
> ... and what punishments are imposed for violations.
No, I don't think that is or should be necessary. If you do illegal things, the possible punishments don't need to be enumerated by the person who reports you to the police.
Put another way: if I'm hosting someone on Airbnb in the case where I'm living in the same property, and I walk into the kitchen to see my Airbnb guest dealing drugs, I am well within my rights to call the police, without having ever said anything up-front to my guest about whether or not that's acceptable behavior, or what the consequences might be. Having the drug deal instead caught on camera is no different, though I would agree that the presence of the cameras should have to be disclosed beforehand.
In Google's case, the "camera" (aka CSAM scanning) appears to have been disclosed beforehand.
>Without full documentation of compliance measures, enforcement measures, and punishments imposed, violations of the rule cannot involve law enforcement who are restricted to acting on searches with warrants.
In the case of in-progress child abuse, that wouldn’t require a warrant as entry to prevent harm to a person is an exigent circumstance and falls under the Emergency Aid doctrine. If they found evidence or illegal items within plain view, that evidence would be permitted under the plain view doctrine. However, if they went and searched drawers or opened file cabinets, evidence discovered in that circumstance would not be allowed (opening a file cabinet isn’t required to solve the emergency aid situation typically.)
What’s really fascinating is that Child Protective Services acts as if it never needs a warrant even when there is no exigent circumstance. To my knowledge there hasn’t been a Supreme Court case challenging that, and circuits are split. Interesting reading about that if anyone is interested:
I think the real-world analogy would be to say that the case is shaped exactly like a machine gun and the hotel calls the police, who then open the case without a warrant. The "private search" doctrine allows the police to repeat a search done by a private party, but here (as in the machine gun case), the case was not actually searched by a private party.
But this court decision is a real world example, and not some esoteric edge case.
This is something I don’t think needs analogies to understand. SA/CP image and video distribution is an ongoing moderation, network, and storage issue. The right to not be under constant digital surveillance is somewhat protected in the constitution.
I like speech and privacy and am paranoid of corporate or government overreach, but I arrive at the same conclusion as you taking this court decision at face value.
Wait until Trump is in power and corporations are masterfully using these tools to “mow the grass” (if you want an existing example of this, look at Putin’s Russia, where people get jail time for any pro-Ukraine mentions on social media).
Yeah I’m paranoid like I said, but in this case it seems like the hash of a file on Google’s remote storage was flagged as a potential match and used as justification to request a warrant. That seems like common sense and did not involve employees snooping pre-warrant.
The Apple CSAM hash detection process (whose launch was rolled back) concerned me mainly because it ran on-device with no opt-out. If this is running on cloud storage then it sort of makes sense. You need to ensure you are not aiding or harboring actually harmful illegal material.
I get there are slippery slopes or whatever but the fact is you cannot just store whatever you wish in a rental. I don’t see this as opening mass regex surveillance of our communication channels. We have the patriot act to do that lol.
I think the better option is a system where the cloud provider cannot decrypt the files, and they’re not obligated to lift a finger to help the police because they have no knowledge of the content at all
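To make that concrete, here's a minimal sketch of the idea in Python, assuming the `cryptography` package's Fernet recipe (my choice for illustration; real end-to-end designs have to solve key backup, rotation, and sharing, which are the genuinely hard parts):

    # Minimal sketch of client-side ("provider can't decrypt") storage.
    # Assumes: pip install cryptography. Illustrative only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # generated and kept on the user's device
    f = Fernet(key)

    ciphertext = f.encrypt(b"the file's contents")  # all the cloud ever sees
    # The provider can hash or scan `ciphertext` all it likes; without
    # `key` it learns nothing about the plaintext. Decryption is client-side:
    assert f.decrypt(ciphertext) == b"the file's contents"

The trade-off is that a lost key means lost files, which is part of why mainstream providers don't default to this.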
In my opinion, despite the technical merits of an algorithm, encryption is only as trustworthy as the computer that generates and holds the private key.
I would personally not knowingly use a cloud provider to commit a crime. It is a fairly naive take to assume that because your browser uses HTTPS, data at rest and in processing isn’t somehow observable.
And I see where you’re coming from, but I am afraid that position severely underestimates the willingness of US people to trade freedom/privacy for security, and overestimates the legislature’s will to hold citizens’ privacy in such high regard.
I only worry that, in the case that renting becomes a roundabout way of granting more oversight ability to the government, then as home ownership rates decrease, government surveillance power increases.
Sure, it's facilitated through a third party (the owner), but the extrapolated pattern seems to be: "1. Only people in group B will have fewer rights, so people in group A shouldn't worry" followed closely by "2. Sorry, you've been priced out of group A."
In the case of renting, we end up in the situation where those who have enough wealth to own their own home are afforded extra privileges of privacy.
Now to bring this back to the cloud; the cynical part of me looks towards a future of cheap, cloud-only storage devices. Or an intermediate future of devices where cloud is first party and local storage is just enough of a hassle that people don't use it. And the result is that basically everyone now has the present day equivalent of local storage scanning.
If renting de-facto grants fewer rights, then in the future where "you'll own nothing and be happy", you'll also have no rights, and all the way people will say "as a renter, what did you expect?"
OK, I agree with you about setting a precedent that future storage will be scanned by default. Additionally, who will control the reference hash list, since making one necessitates hashing that illicit material?
I only hope the court systems escalate it and manage to protect free speech, the right against unreasonable search and seizure, the right against self-incrimination, or whatever, if the CSAM hash comparisons are used against political opponents or music piracy or tax evasion or whatever.
I’m unsure; I wrote that from, like, an ethics standpoint. The Silk Road guy was got on conspiracy for attempted murder, not drug or human trafficking charges. So I’m unsure of the legal side.
I think if you knowingly provided a platform to distribute SA/CP/CSAM and the feds become involved you will be righteously fucked.
Reddit clamped down on the creepy *bait subreddits years ago. Maybe it was self-preservation on the business side or maybe it was forward looking about legal issues.
I’m not a lawyer; I was just mentioning things that I would follow for ethics, morals, and my sense of self-preservation.
I'm reasonably certain Reddit's decision to ban /r/jailbait and the like was driven by business/reputation. It was widely discussed for some time before it was banned and, IIRC given a "worst of" award by the admins at one point. Once it got major media coverage, Reddit got its first real content policy.
I don't think the analogy holds for two reasons (which cut in opposite directions from the perspective of fourth amendment jurisprudence, fwiw).
First, the dragnet surveillance that Google performs is very different from the targeted surveillance that can be performed by a drug dog. Drug dogs are not used "everywhere and always"; rather, they are mostly used in situations where people have a less reasonable expectation of privacy than the expectation they have over their cloud storage accounts.
Second, the nature of the evidence is quite different. Drug-sniffing dogs are inscrutable and non-deterministic and transmit handler bias. Hashing algorithms can be interrogated and are deterministic and do not have such bias transferal issues; collisions do occur, but are rare, especially because the "search key" set is so minuscule relative to the space of possible hashes. The narrowness and precision of the hashing method preserves most of the privacy expectations that society is currently willing to recognize as objectively reasonable.
Here we get directly to the heart of the problem with the fictitious "reasonable person" used in tests like the Katz test, especially in cases where societal norms and technology co-evolve at a pace far more rapid than that of the courts.
This analogy can have two opposite meanings. Drug dogs can be anything from a prop used by the police to search your car without a warrant (a cop can always say in court the dog "alerted" them) to a useful drug detection tool.
Don't they? If you tell the cops that your neighbor has a significant quantity of drugs in their house, would they not still need a warrant to actually go into your neighbor's house?
There are a lot of nuances to these situations of third-party involvement and the ruling discusses these at length. If you’re interested in the precise limits of the 4th amendment you should really just read the linked document.
They should, as a matter of course. But I guess "papers" you entrust to someone else are a gray area. I personally think that it goes against the separation of police state and democracy, but I'm a nobody, so it doesn't matter I suppose.
Is it reasonable? Even if the hash were MD5, given valid image files, the chance of an accidental collision is far lower than the chance that any other piece of evidence given to a judge was false or misinterpreted.
This is NOT a secure hash. It is an image-similarity hash, which has many, many matches among unrelated images.
Unfortunately the decision didn't mention this at all, even though it is important. If the hash were even as good as MD5 (which is broken), I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts can safely assume there isn't one (and of course if there is, the police would close the case). However, since this hash is not that good, the police cannot look at the image unless Google does.
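To put rough numbers on that (my arithmetic, not anything from the decision): for a 128-bit hash like MD5, the standard birthday-bound estimate makes accidental collisions essentially impossible at any realistic scale, setting aside deliberate attacks (which do exist for MD5).

    # Back-of-envelope birthday bound for accidental collisions.
    # P(any collision among n random files) ~= n^2 / 2^(b+1) for a b-bit hash.
    n = 10**12               # a trillion files, far more than any hash list
    b = 128                  # MD5 digest size in bits
    p = n**2 / 2**(b + 1)
    print(p)                 # ~1.5e-15: you will not see this by accident

A perceptual hash is a different animal: it is designed to "collide" on visually similar inputs, so this arithmetic says nothing about its false positive rate.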
I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle ground there.
> I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts can safely assume there isn't one
The footnote in the decision bakes this property into the definition of a hash:
A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.
(Importantly, this is NOT an accurate definition of a hash for anyone remotely technical... of course hashing algorithms with significant hash collisions exist, and tolerating or even seeking collisions is a design criterion for some hashing algorithms...)
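A quick way to see why that definition fits cryptographic hashes specifically (a sketch using Python's standard hashlib):

    # Cryptographic hashes have the "avalanche" property: a tiny change
    # to the input yields an unrecognizably different digest.
    import hashlib

    a = hashlib.sha256(b"holiday photo, version 1").hexdigest()
    b = hashlib.sha256(b"holiday photo, version 2").hexdigest()
    print(a)
    print(b)   # shares no meaningful structure with the digest above
    # Perceptual hashes invert this goal: visually similar images are
    # SUPPOSED to land on the same or nearby values, i.e. collisions on
    # similar inputs are a design criterion, not a defect.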
>I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle ground there.
Just go to a law library.
Do you know that judges routinely make decisions based on confidential documents not in the public record? Is that also bad?
The closest law library with a copy of the Federal Appendix is ~2 hrs away from me (or it's on LN if I pay for a subscription). It should be free and online, because it probably can't be copyrighted and because simplifying public access to the law is an unambiguous public good.
> Do you know that judges routinely make decisions based on confidential documents not in the public record? Is that also bad?
Of course not; the particularities of a given case are a very different concern from a document whose content is critical to the interpretation of precedent. Also, the copyright claims on confidential documents might be valid, whereas any copyright claims on cases in the Federal Appendix probably aren't; see how the government edicts doctrine was applied in Georgia v. Public.Resource.Org.
You're assuming an accidental collision. Images can be generated that intentionally trigger the hash algorithm while still appearing as something else (a meme, funny photo, etc.) to a person looking at them. This opens many possibilities for "bad people" to use against people they hate (like an alternative to swatting, etc.)
So you're saying that I craft a file that has the same hash as a CSAM one, I give it to you, you upload it to google, but it also happens to be CSAM, and I've somehow framed you?
My point is that a hash (granted, I'm assuming that we're talking about a cryptographic hash function, which is not clear) is much closer to "This is the file" than someone actually looking at it, and that it's definitely more proof of them having that sort of content than any other type of evidence.
I don't understand. If you contend that it's even better evidence than actually having the file and looking at it, how is not reasonable to then need a judge to issue a warrant to look at it? Are you saying it would be more reasonable to skip that part and go directly to arrest?
It seems like a large part of the ruling hinges on the fact that Google matched the image hash to a hash of a known child pornography image, but didn't require an employee to actually look at that image before reporting it to the police. If they had visually confirmed it was the image they suspected it was based on the hash, then no warrant would have been required, but the judge finds that the image hash match is not equivalent to a visual confirmation of the image. Maybe there's some slight doubt about whether or not the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for a hash collision to occur, depending on the specific hash strategy.
I think it would obviously be less than ideal for Google to require an employee visually inspect child pornography identified by image hash before informing a legal authority like the police. So it seems more likely that the remedy to this situation would be for the police to obtain a warrant after getting the tip but before requesting the raw data from Google.
Would the image hash match qualify as probable cause enough for a warrant? On page 4 the judge stops short of setting a precedent on whether it would have or not. Seems likely that it would be solid probable cause to me, but sometimes judges or courts have a unique interpretation of technology that I don't always share, and leaving it open to individual interpretation can lead to conflicting results.
The hashes involved in stuff like this, as with copyright auto-matching, are perceptual hashes (https://en.wikipedia.org/wiki/Perceptual_hashing), not cryptographic hashes. False matches are common enough that perceptual hashing attacks are already a thing in use to manipulate search engine results (see the example in random paper on the subject https://gangw.cs.illinois.edu/PHashing.pdf).
It seems like that is very relevant information that was not considered by the court. If this were a cryptographic hash I would say with high confidence that this is the same image and so Google examined it - there is a small chance that some unrelated file (which might not even be a picture) matches, but odds are the universe will end before that happens, and so the courts can consider it the same image for search purposes. However, because there are many false positive cases, there are reasonable odds that the image is legal, and so a higher standard for search is needed - a warrant.
>so the courts can consider it the same image for search purposes
An important part of the ruling seems to be that neither Google nor the police had the original image or any information about it, so the police viewing the image gave them more information than Google matching the hash gave Google: for example, consider how the suspect being in the image would have changed the case, or what might happen if the image turned out not to be CSAM, but showed the suspect storing drugs somewhere, or was even, somehow, something entirely legal but embarrassing to the suspect. This isn't changed by the type of hash.
It shouldn't. Google hasn't otherwise seen the image, so the employee couldn't have witnessed a crime. There are reportedly many perfectly legal images that end up in these almost perfectly unaccountable databases.
That makes sense - if they were using a cryptographic hash then people could get around it by making tiny changes to the file. I’ve used some reverse image search tools, which use perceptual hashing under the hood, to find the original source for art that gets shared without attribution (SauceNAO is pretty solid). They’re good, but they definitely have false positives.
Now you’ve got me interested in what’s going on under the hood, lol. It’s probably like any other statistical model: you can decrease your false negatives (images people have cropped or added watermarks/text to), but at the cost of increased false positives.
Rather simple methods are surprisingly effective [1]. There's sure to be more NN fanciness nowadays (like Apple's proposed NeuralHash), but I've used the algorithms described by [1] to great effect in the not-too-distant past. The HN discussion linked in that article is also worth a read.
This submission is the first I've heard of the concept. Are there OSS implementations available? Could I use this, say, to deduplicate resized or re-jpg-compressed images?
The hash functions used for these purposes are usually not cryptographic hashes. They are "perceptual hashes" that allows for approximate matches (e.g. if the image has been scaled or brightness-adjusted). https://en.wikipedia.org/wiki/Perceptual_hashing
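For anyone curious how approximate matching can work at all, here's a minimal sketch of the classic "average hash", one of the simplest perceptual hashes (assumes the Pillow library; production systems like PhotoDNA or Apple's NeuralHash are far more elaborate):

    # Minimal "average hash" sketch. Assumes: pip install Pillow.
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to an 8x8 grayscale thumbnail: rescaling and JPEG
        # artifacts mostly wash out at this resolution.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:                    # one bit per pixel:
            bits = (bits << 1) | (p > avg)  # brighter than average, or not
        return bits                         # a 64-bit fingerprint

    def hamming(h1, h2):
        # Bits that differ; small distance = "perceptually similar".
        return bin(h1 ^ h2).count("1")

Because matching is "Hamming distance below a threshold" rather than exact equality, resized or re-compressed copies still match (which also answers the deduplication question above), and that same looseness is exactly where the false positives come from.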
> Maybe there's some slight doubt about whether or not the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for a hash collision to occur, depending on the specific hash strategy.
If it was a cryptographic hash (apparently not), this mathematical near-certainty is necessary but not sufficient. Like cryptography used for confidentiality or integrity, the math doesn't at all guarantee the outcome; the implementation is the most important factor.
Each entry in the illegal hash database, for example, relies on some person characterizing the original image as illegal - there is no mathematical formula for defining illegal images - and that characterization could be inaccurate. It also relies on the database's integrity, the user's application and its implementation, even the hash calculator. People on HN can imagine lots of things that could go wrong.
If I were a judge, I'd just want to know if someone witnessed CP or not. It might be unpleasant, but we're talking about arresting someone for CP, which even sans conviction can be highly traumatic (including time in jail, waiting for bail or trial, as a ~child molester) and can destroy people's lives and reputations. Do you fancy appearing at a bail hearing on your CP charge, even if you are innocent? 'Kids, I have something to tell you ...'; 'Boss, I can't work for a couple weeks because ...'.
It seems like there just needs to be case law about the qualifications of an image hash in order to be counted as probable cause for a warrant. Of course you could make an image hash be arbitrarily good or bad.
I am not at all opposed to any of this "get a damn warrant" pushback from judges.
I am also not at all opposed to Google searching its cloud storage for this kind of content. There are a lot of things I would mind a cloud provider going on fishing expeditions to find, but this I am fine with.
I do strongly object to companies searching content for illegal activity on devices in my possession absent probable cause and a warrant (that they would have to get in a way other than searching my device). Likewise I object to the pervasive and mostly invisible delivery to the cloud of nearly everything I do on devices I possess.
In other words, I want custody of my stuff and for the physical possession of my stuff to be protected by the 4th amendment and not subject to corporate search either. Things that I willingly give to cloud providers that they have custody of I am fine with the cloud provider doing limited searches and the necessary reporting to authorities. The line is who actually has the bits present on a thing they hold.
I think if the hashes were made available to the public, we should just flood the internet with matching but completely innocuous images so they can no longer be used to justify a search
>please use the original title, unless it is misleading or linkbait; don't editorialize. (@dang)
On topic, I like this quote from the first page of the opinion:
>A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.” United States v. Ackerman, 831 F.3d 1292, 1294 (10th Cir. 2016) (Gorsuch, J.).
It's amusing to me that they use a supreme court case as a reference for what a hash is rather than eg. a textbook. It makes sense when you consider how the court system works but it is amusing nonetheless that the courts have their own body of CS literature.
Maybe someone could publish a "CS for Judges" book that teaches as much CS as possible using only court decisions. That could actually have a real use case when you think of it. (As other commenters pointed out, the hashing definition given here could use a bit more qualification, and should at least differentiate between neural hashes and traditional ones like MD5, especially as it relates to the likeliness that "another set of data will produce the same value." Perhaps that could be an author's note in my "CS for Judges" book.)
> Maybe someone could publish a "CS for Judges" book
At last, a form of civic participation which seems both helpful and exciting to me.
That said, I am worried that a lot of necessary content may not be easy to introduce with hard precedent, and direct advice or dicta might somehow (?) not be permitted in a case since it's not adversarial... A new career as a professional expert witness--even on computer topics--sounds rather dreary.
What's so weird about this? CS literature is not legally binding in any way. Of course a judge would rather quote a previous ruling by fellow judge than a textbook, Wikipedia, or similar sources.
From what I understand, a judge is free to decide matters of fact on his own, which could include consulting a textbook. Also, it is not clear that matters of fact decided by the Supreme Court are binding on lower courts. Additionally, facts and even meanings of words themselves can change, which makes previous findings of fact no longer applicable. That's actually true in this case as well. "Hash" as used in the context of images generally meant something like an MD5 hash (which itself is now more prone to collisions than before). The "hash" in the Google case appears to be a perceptual hash, which I don't think was as commonly used until recently (I could be wrong here). So whatever findings of fact were made by the Supreme Court about how reliable a hash is are not necessarily relevant to begin with. Looking at this specific case, here is the full quote from United States v. Ackerman:
>How does AOL's screening system work? It relies on hash value matching. A hash value is (usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value. Some consider a hash value as a sort of digital fingerprint. See Richard P. Salgado, Fourth Amendment Search and the Power of the Hash, 119 Harv. L. Rev. F. 38, 38-40 (2005). AOL's automated filter works by identifying the hash values of images attached to emails sent through its mail servers.[0]
I don't have access to this issue of Harvard Law Review but looking at the first page, it says:
>Hash algorithms are used to confirm that when a copy of data is made, the original is unaltered and the copy is identical, bit-for-bit.[1]
This is clearly referring to a cryptographic hash like MD5, not a perceptual hash/neural hash as in Google. So the actual source here is not necessarily dealing with the same matters of fact as the source of the quote here (although there could be valid comparisons between them).
All this said, judges feel more confident in citing a Supreme Court case than a textbook because 1. it is easier to understand for them 2. the matter of fact is then already tied to a legal matter, instead of the judge having to make that leap himself and also 3. judges are more likely to read relevant case law to begin with since they will read it to find precedent in matters of law – which are binding to lower courts. This is why a "CS for Judges" could be a useful reference book.
Lastly, I should have looked a bit more closely at the quoted case. This is actually not a supreme court case at all. Gorsuch was nominated in 2017 and this case is from 2016.
> As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
So this means this conviction is upheld but future convictions may be overturned if they similarly don't acquire a warrant?
> the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
This "good faith exception" is so absurd I struggle to believe that it's real.
Ordinary citizens are expected to understand and scrupulously abide by all of the law, but it's enough for law enforcement to believe that what they're doing is legal even if it isn't?
What that is is a punch line from a Chapelle bit[1], not a reasonable part of the justice system.
The courts accept good faith arguments at times. They will give reduced sentences, or even none at all, if they think you acted in good faith. There are enough situations where it is legal for one person to kill another that there are laws spelling out when that is the case (hopefully they never apply to you).
Note that this case is not about ignorance of the law. This is "I knew the law and was trying to follow it - I just honestly thought it didn't apply because of some tricky situation that isn't 100% clear."
The difference between "I don't know" and "I thought it worked like this" is purely a matter of degrees of ignorance. It sounds like the cops were ignorant of the law in the same way as someone who is completely unaware of it, just to a lesser degree. Unless they were misinformed about the origins of what they were looking at, it doesn't seem like it would be a matter of good faith, but purely negligence.
“Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
> “Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
We're talking about orthogonal issues.
Mens rea applies to whether the person performs the act on purpose. Not whether they were aware that the act was illegal.
Let's use fraud as an example since you brought it up.
If I bought an item from someone and used counterfeit money on purpose, that would be fraud. Even if I truly believed that doing so was legal. But it wouldn't be fraud if I didn't know that the money was counterfeit.
At the time, what they did was assumed to be legal because no one had ruled on it.
Now, there is prior case law declaring it illegal.
The ruling is made in such a way to say “we were allowing this, but we shouldn’t have been, so we wont allow it going forward”.
I am not a legal scholar, but that’s the best way I can explain it. The way that the judicial system applies to law is incredibly complex and inconsistent.
This is a deeply problematic way to operate. En masse, it has the right result, but, for the individual that will have their life turned upside down, the negative impact is effectively catastrophic.
This ends up feeling a lot like gambling in a casino. The casino can afford to bet and lose much more than the individual.
Thus, it's not clear that any harm was caused because the right wasn't clearly enshrined and had the police known that it was, they likely would have followed the correct process. There was no intention to violate rights, and no advantage gained from even the inadvertent violation of rights. But the process is updated for the future.
I don't care nearly as much about the 4th amendment when the person is guilty. I care a lot when the person is innocent. Searches of innocent people are costly for the innocent person, and so we require warrants to ensure such searches are minimized (even though most warrants are approved, the act of getting one forces the police to be careful). If a search were completely costless to the innocent I wouldn't be against them, but there are many ways a search that finds nothing is costly to the innocent.
If the average person is illegally searched, but turns out to be innocent, what are the chances they bother to take the police to court? It's not like they're going to be jailed or convicted, so many people would prefer to just try to move on with their life rather than spend thousands of dollars litigating a case in the hopes of a payout that could easily be denied if the judge decides the cops were too stupid to understand the law rather than maliciously breaking it.
Because of that, precedent is largely going to be set with guilty parties, but will apply equally to violations of the rights of the innocent.
I want guilty people to go free if their 4th amendment rights are violated; that's the only way to ensure police are meticulous about protecting people's rights.
This specific conviction upheld, yes. But no, this ruling doesn't speak to whether or not any future convictions may be overturned.
It simply means that at the trial court level, future prosecutions will not be able to rely on the good faith exception to the exclusionary rule if warrantless inculpatory evidence is obtained under similar circumstances. If the government were to try to present such evidence at trial and the trial judge were to admit it over the objection of the defendant, then that would present a specific ground for appeal.
This ruling merely bolsters the 'better to get a warrant' spirit of the Fourth Amendment.
It's crazy that the most dangerous people one regularly encounters can do anything they want as long as they believe they can do it. The good faith exemption has to be one of the most fascist laws on the books today.
> "the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
In no other context or career can you do anything you want and get away with it just as long as you say you thought you could. You'd think police officers would be held to a higher standard, not no standard.
And specifically with respect to the law, breaking a law and claiming you didn't know you did anything wrong as an individual is not considered a valid defense in our justice system. This same type of standard should apply even more to trained law enforcement, not less, otherwise it becomes a double standard.
No, this is breaking the law while believing "this looks like one of the situations where I already know the law doesn't apply." If Google had looked at the actual image and said it was child porn, instead of just saying it was similar to some image that is child porn, this would be 100% legal, as the courts have already said. That difference is subtle enough that I can see how someone would get it wrong (and in fact I would expect other courts to rule differently).
That's not what this means. One can ask whether the belief is reasonable, that is justifiable by a reasoning process. The argument for applying the GFE in this case is that the probability of false positives from a perceptual hash match is low enough that it's OK to assume it's legit and open the image to verify that it was indeed child porn. They then used that finding to get warrants to search the guy's gmail account and later his home.
If I'm not a professional and I hurt someone while trying to save their life by doing something stupid, that's understandable ignorance.
If a doctor stops to help someone and hurts them because the doctor did something stupid, that is malpractice and could get them sued and maybe get their license revoked.
Would you hire a programmer who refused to learn how to code and claimed "good faith" every time they screwed things up? Good faith shouldn't cover willful ignorance. A cop is hired to know, understand, and enforce the law. If they can't do that, they should be fired.
It's not exactly the same imo, since Good Samaritan laws are meant to protect someone who is genuinely trying to do what a reasonable person could consider "something positive".
In this case you're correct. But the good faith exemption is far broader than this and applies to even officer's completely personal false beliefs in their authority.
I think the judge chose to relax a lot on this one due to the circumstances. Releasing into society a man found with 4,000 child porn photos on his computer would be a shame.
But yeah, this opens the gates of precedent too wide for tyranny, unfortunately...
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
That's a bingo. That's exactly what they do, and why so many cops know less about the law than random citizens. A better society would have high standards for the knowledge expected of police officers, including things like requiring 4-year criminal justice or pre-law degree to be eligible to be hired, rather than capping IQ and preferring people who have had prior experience in conducting violent actions.
Yes, this likely explains part of why the Norwegian police behave like professionals who are trying to do their job with high standards of performance and behavior and the police in the US behave like a bunch of drinking buddies that used to be bullies in high school trying to find their next target to harass.
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
It is not so lax as that. It's limited to situations where a reasonable person who knows exactly what the law and previous court rulings say might conclude that a certain action is legal. In this case, other Federal Circuit courts have ruled that similar actions are legal.
The good faith exception requires the belief be reasonable. Ignorance of clearly settled law is not reasonable, it should be a situation where the law was unclear, had conflicting interpretations or could otherwise be interpreted the way the police did by a reasonable person.
The problem with the internet nowadays is that a few big players are making up their own law. Very often it is against local laws, but nobody can fight it. For example, someone created some content but another person uploaded it and got better scores, which got the original poster blocked. Another example: children were playing a violin concert and the audio got removed due to an alleged copyright violation. No possibility to appeal; nobody sane would go to court. It just goes this way...
The harshness of sentence is not for the action of keeping the photos in itself, but the individual suffering and social damage caused by the actions that he incentivizes when he consumes such content.
Consumption per se does not incentivize it, though; procurement does. It's not unreasonable to causally connect one to the other, but I still think that it needs to be done explicitly. Strict liability for possession in particular is nonsense.
There's also an interesting question wrt simulated (drawn, rendered etc) CSAM, especially now that AI image generators can produce it in bulk. There's no individual suffering nor social damage involved in that at any point, yet it's equally illegal in most jurisdictions, and the penalties aren't any lighter. I've yet to see any sensible arguments in favor of this arrangement - it appears to be purely a "crime against nature" kind of moral panic over the extreme ickiness of the act as opposed to any actual harm caused by it.
It can. In several public cases it seems fairly clear that there is a "community" aspect to these productions and many of these sites highlight the number of downloads or views of an image. It creates an environment where creators are incentivized to go out of their way to produce "popular" material.
> Strict liability for possession in particular is nonsense.
I entirely disagree. Offenders tend to increase their level of offense. This is about preventing the problem from becoming worse and new victims being created. It's effectively the same reason we harshly prosecute people who torture animals.
> nor social damage involved in that at any point,
That's a bold claim. Is it based on any facts or study?
> over the extreme ickiness of the act as opposed to any actual harm caused by it.
It's about the potential class of victims and the outrageous life long damage that can be done to them. The appropriate response to recognizing these feelings isn't to hand them AI generated material to sate their desires. It's to get them into therapy immediately.
> > Strict liability for possession in particular is nonsense.
> I entirely disagree. Offenders tend to increase their level of offense.
For an example of unintended consequences of strict liability for possession, look at Germany, where the legal advice for what to do if you come across CSAM is to delete it and say nothing, because reporting it to the police would incriminate you for possession, and if you deleted it, a prosecutor could charge you with evidence tampering on top of that.
Also, as I understand it, in the US there have also been cases of minors deliberately taking "nudes" or sexting with other minors leading to charges of production and distribution of CSAM for their own pictures they took of themselves.
The production and distribution of CSAM should 100% be criminalized and going after possession seems reasonable to me. But clearly the laws are lacking if they also criminalize horny teenagers being stupid or people trying to do the right thing and report CSAM they come across.
> The appropriate response to recognizing these feelings is [..] to get them into therapy immediately.
Also 100% agree with this. In Germany there was a widespread media campaign "Kein Täter werden" (roughly "not becoming a predator") targeting adults who find themselves sexually attracted to children. They anonymized the actors for obvious reasons, but I like that they portrayed the pedophiles with a wide range of characters from different walks of life and different age groups. The message was to seek therapy. They provided a hotline as well as ways of getting additional information anonymously.
Loudly yelling "kill all pedophiles" doesn't help prevent child abuse (in fact, there is a tendency for abusers to join in because it provides cover and they often don't see themselves as the problem) but feeding into pedophilia certainly isn't helpful either. The correct answer is therapy but also moving away from a culture (and it is cultural not innate) that fetishizes youth, especially in women (sorry, "girls"). This also means fighting all child abuse, not just sexual.
> It's effectively the same reason we harshly prosecute people who torture animals.
Alas harsh sentencing isn't therapy. Arguably incarceration merely acts as a pause button at best. You don't get rid of nazis by making them hang out on Stormfront or 8kun.
I'm not trans, so I can't speak for trans people (not that any individual person could speak for an entire demographic group). And to pre-empt the follow-up question "then what's with the pronouns": gender is multi-faceted and complex, pronouns are just one aspect of it. Think of it as gender non-conformance. Like wearing a dress as a bloke.
If I had to hazard a guess for what might be causing the correlation you're seeing, I'd assume that being trans usually comes downstream from reflecting on social phenomena and cultural expectations. Being trans is by definition not the cultural norm (even the word itself implies some form of "misalignment" of identities and cultural expectations) so if you are familiar enough with it to claim it as your own identity, you probably did a lot of research into it, especially if you're from a generation where it was more of a taboo subject and not acknowledged in the broader frame of cultural references (e.g. good luck if your only exposure to the concept is from films like Ace Ventura). This can lead you down a rabbit hole if you try to understand not just that one aspect of your identity and experience.
Your actual question comes off as a bit upset (but again, that may be cultural - I'm not American nor is English my native language although I did pick it up at an early age) so let me rephrase it in a way that makes me more inclined to answer it: "Why do you think that is?"
This still feels somewhat like proving the null hypothesis as "it's in our genes" is not normally the go-to explanation we accept when wondering about any random part of human behavior but let's start by turning it around. Sure, we can make up all kinds of just-so evopsych rationalizations why human males should be sexually attracted to post-pubescent young and healthy human females but the same reasoning would also predict a preference for a prolific pelvis (making it more likely they successfully give birth) and pregnant women (demonstrating actual fertility) or mothers (demonstrating child-rearing abilities as well as fertility) and so on. Ultimately these are all just-so stories to rationalize a pre-existing assumption about human behavior that contradicts actual archeological research (which adherents often explain by claiming archeology has been corrupted by ideology but let's not get into fallacious claims of being "free of ideology" and where all of that ultimately leads).
The answer then is simple: I say fetishization of youth in women is cultural rather than innate because it is not a consistent phenomenon throughout history nor even globally in the modern age.
It's important to distinguish between the two factors at play in child sexual abuse: sexual attraction (i.e. pedophilia) and power dynamics. This isn't unique to child sexual abuse. Regular rape also often is more about power than attraction. Everyone is familiar with the concept of prison rape and historically, sometimes even today, a male rapist of other men in a prison is not by default considered gay or effeminate and the act may be seen as demonstrating dominance, demeaning and emasculating the victim.
The reason I'm talking about "fetishization" is because our culture (and US culture particularly so) first of all very much embraces narratives of dominance as a positive, from competition over cooperation to the ahistorical "great man" narrative of historical events. This shouldn't be surprising as these narratives are useful to those benefitting from the status quo by placating those who don't, much like fear of hellfire and the promise of heaven placated those caught at the wrong end of medieval Europe's "divine right"-based feudal system (up to a point).
Our culture is very much male-centric (patriarchy is often misunderstood - even by some so-called feminists - to mean that all men are given power over all women but that's literally why intersectionality became a thing before being misrepresented in "oppression olympics" memes, so I'll avoid overloaded terms like those here). This goes hand-in-hand with the "traditional" perspective that the man/father is the head of the household and should rule it with determination and "tough love" the same way the state should lead the people (and the president the state), each family representing a scale model of the dynamics of society at large, justifying the authority of the state in the authority of the father and vice versa.
So youth and feminity in this case acts as a stand-in for submissiveness. Under the "loving care" of a controlling father figure, a youthful woman is sexually pure/innocent ("uncorrupted") and meek/submissive. By evoking signifiers of childhood (e.g. the quintessential "cheerleader" costume, braces, pigtails, lispy speech, lollipops, pastels/pink) this is shifted further into an implausibly childlike innocence and paired with the sexual allure of "corrupting" that innocence (the fantasy of "defloration" leaving a "permanent mark") based on the implicit understanding that the sexual act empowers the penetrating man and permanently devalues the penetrated "girl" lest she remains faithful to the man should he want to "keep" her. Note that we don't even need to adopt the "sex-negative" feminist perspective on penetrative sex as inherently humiliating, the idea of penetrating = empowering and penetrated = disempowered is almost omnipresent in our culture as it is (note that this has nothing to do with passivity - receiving oral sex for example is seen as empowering - and arguably the framing around literal "penetration" alone is imprecise as e.g. right-wing attitudes towards cunnilingus as being emasculating for a male "giver" show).
If all this cultural analysis is too wishy-washy for you, historical records still don't align with the idea that fetishization of female youth is innate. Young adults, i.e. women in their 20s or very early 30s, yes, sure, but not "sweet 16" or "barely legal". Arguably US culture has even gotten better about this over my lifetime, given that we went from the early Britney Spears school-uniform sexualization to Megan Thee Stallion, and given the crackdown on public forums like the `r/jailbait` subreddit, but there is still a very strong undercurrent, especially among conservative men.
It's been the case that when I encounter people with non-normative pronouns they're trans, but you're right that isn't necessarily the case. My mistake!
I know I asked the initial question, but I guess I'm confused what exactly this conversation is about. Is the idea that people are only ever attracted to sixteen year olds because they learned to be? That feels like a challenging thing to demonstrate in the same way it being "in the genes" is, but perhaps I'm being overly reductive.
Nature vs nurture is not an either-or. I'm not saying "it" isn't "in the genes". I'm saying it's not just genes.
There's a wide range of possible age brackets, body types, etc. across all genders that can manifest traits most people would find attractive. Post-pubescent girls arguably aren't special in that sense, especially if you don't isolate them from their real-world context, an isolation that allows objectifying and dehumanizing them as "jailbait" (and which is where it stops being Oscar-winning Hollywood cinema and starts being child sexual abuse).
Where culture comes in is meaning. Taken at face value, a kid is just a kid. But culturally a kid represents something - naivety, hope, innocence, inexperience, whatever. This turns female youth into a fetish - something imbued with additional meaning. It's not actually the literal youthfulness that is culturally attractive in women (or else most people wouldn't react so violently against the idea of people sexually abusing minors), it's what that youthfulness represents. It's a male power fantasy.
Again, power fantasies aren't inherently a problem. What I'm arguing is that this one very much is a problem because it's so normalized it informs real-world social dynamics, i.e. where people start to forget it's a fantasy. Also I would argue the need for this specific fantasy is also not inherent (i.e. maleness does not inherently create a desire for absolute dominance over others). But I've rambled enough as it is.
> It can. In several public cases it seems fairly clear that there is a "community" aspect to these productions and many of these sites highlight the number of downloads or views of an image. It creates an environment where creators are incentivized to go out of their way to produce "popular" material.
So long as it's all drawn or generated, I don't see why we should care.
> I entirely disagree. Offenders tend to increase their level of offense.
This claim reminds me of similar ones about how video games are an "on-ramp" to actual violent crime. It needs very strong evidence to back it up, especially when it's used to justify harsh laws. Evidence which we don't really have, because most studies of pedophiles are, by necessity, focused on those known to the system, which disproportionately means ones who have been caught doing some really nasty stuff to real kids.
> I entirely disagree. Offenders tend to increase their level of offense. This is about preventing the problem from becoming worse and new victims being created. It's effectively the same reason we harshly prosecute people who torture animals.
Strict liability for possession means that you can imprison people who don't even know that they have offending material. This is patent nonsense in general, regardless of the nature of what exactly is banned.
> That's a bold claim. Is it based on any facts or study?
It is based on the lack of studies showing a clear causal link. Which is not definitive for the reasons I outlined earlier, but I feel like the onus is on those who want to make it a crime with such harsh penalties to prove said causal link, not the other way around.
Note also that, even if such a clear causal link can be established, surely there is still a difference wrt imputed harm - and thus culpability - between those who seek out recordings of genuine sexual abuse and those who seek out simulated ones? As things stand, in many jurisdictions, this is not reflected in the penalties at all. Justice aside, it creates a perverse incentive for pedophiles to prefer non-simulated CSAM.
> It's about the potential class of victims and the outrageous life long damage that can be done to them. The appropriate response to recognizing these feelings isn't to hand them AI generated material to sate their desires. It's to get them into therapy immediately.
Are you basically saying that simulated CSAM should be illegal because not banning it would be offensive to real victims of actual abuse? Should we extend this principle to fictional representations of other crimes?
As far as getting them into therapy, this is a great idea, but kinda orthogonal to the whole "and also you get 20+ years in the locker" thing. Even if you fully buy into the whole "gateway drug" theory where consumption of simulated CSAM inevitably leads to actual abuse in the long run, that also means that there are pedophiles at any given moment that are still at the "simulated" stage, and such laws are a very potent deterrent for them to self-report and seek therapy.
With respect to "handing them AI-generated material", this is already a fait accompli given local models like SD. In fact, at this point, it doesn't even require any technical expertise, since image generator apps will happily run on consumer hardware like iPhones, with UI that is basically "type what you want and tap Generate". And unless generated CSAM is then distributed, it's pretty much impossible to restrict this without severe limitations on local image generation in general (basically prohibiting any model that knows what naked humans look like).
> Are you basically saying that simulated CSAM should be illegal because not banning it would be offensive to real victims of actual abuse?
No, it's because it will likely lead to those consuming it turning to real life sexual abuse. The behavior says "I'm attracted to children." We have a lot of good data on where this precisely leads to when entirely unsupervised or unchecked.
> but kinda orthogonal to the whole "and also you get 20+ years in the locker" thing.
You've constantly created this strawman but it appears nowhere in my actual argument. To be clear it should be like DUIs, with small penalties on first time offenses increasing to much larger ones upon repetition of the crime.
> it's pretty much impossible to restrict this
Right. It's impossible to stop people from committing murder as well. It's also impossible to catch every perpetrator. Yet we still keep the laws on the books, and it's quite possible the laws, and the penalties themselves, have a "chilling effect" on criminality.
Or, if your sense of diminished rights is so offended, then it can be a trade. If you want to consume AI child pornography you have to voluntarily add your name to a public list. Those on this list will obviously be restricted from certain careers, certain public settings, and will be monitored when entering certain areas.
> No, it's because it will likely lead to those consuming it turning to real life sexual abuse. The behavior says "I'm attracted to children." We have a lot of good data on where this precisely leads to when entirely unsupervised or unchecked.
For one thing, again, we don't have quality studies clearly showing that.
But let's suppose that we do, and they agree. If so, then shouldn't the attraction itself be penalized, since it's inherently problematic? You're essentially saying that it's okay to nab people for doing something that is in and of itself harmless, because it is sufficient evidence that they will inevitably cause harm in the future.
I do have to note that it is, in fact, fairly straightforward to medically diagnose pedophilia in a controlled setting - should we just routinely run everyone through this procedure and compile the "sick pedo list" preemptively this way? If not, why not?
> You've constantly created this strawman but it appears nowhere in my actual argument.
My "strawman" is the actual situation today that you were, at least initially, trying to defend.
> Right. It's impossible to stop people committing murder as well. It's also impossible to catch every perpetrator. Yet we don't strain ourselves to have the laws on the books, and it's quite possible the laws, and the penalties themselves, have a "chilling effect" when it comes to criminality.
That can be measured, and it has been - and yes, they do, but what matters is specifically the likelihood of getting caught, not so much the severity of the punishment (which is one of the reasons why we don't torture people as a form of punishment anymore, at least not officially).
The point, however, was that nobody is "handing" them anything. It's all done with tools that are, at least at present, readily available and legal in our society, and this doesn't change whether you make some ways of using those tools illegal or not. Nor is it possible to detect such private use unless you're willing to go full panopticon or ban the tools.
Laws don't need to be absolutely enforceable to still work. You probably will not go to jail for running that stop sign at the end of your street (but please don't run it).
AI-generated CSAM is real CSAM and should be treated that way legally. The image generators used to generate it are usually trained on pictures of real children.
Assuming the person is a passive consumer with no messages / money exchanged with anyone, it is very hard to prove social harm or damage. Sentences should be proportional to the crime. Treating possession of CP as the equivalent of literally raping a child just seems absurd to me. IMO, just for the legal protection of the average citizen, simple possession should never warrant jail time.
For the record, I'm against any kind of child abuse, and 25 years for an actual abuser would not be a problem.
But...
Should you go to prison for possessing images of an adult being raped? What if you don't even know it's rape? What if the person is underage, but you don't know (they look adult to you)? What about a murder video instead of rape? What if the child porn is digitally created (AI, Photoshop, whatever)? What if a murder scene is digitally created (fake bullets, holes and blood added in video editing software)? What if you go to a mainstream porno store, buy a mainstream professional porno video, and later find out that the actress was a 15-year-old Traci Lords?
You do in some countries. For instance, knowingly possessing video of the Christchurch massacre is illegal in New Zealand, due to a ruling by NZ’s Chief Censor (yes, that’s the actual title), and punishable by up to 14 years in prison.
It's a reasonable argument, but a concerning one because it hinges on a couple of layers of indirection between the person engaging in consuming the content and the person doing the harm / person who is harmed.
That's not outside the purview of US law (especially in the world post-reinterpretation of the Commerce Clause), but it is perhaps worth observing how close to the cliff of "For the good of Society, you must behave optimally, Citizen" such reasoning treads.
For example: AI-generated CP (or hand-drawn illustrations) is viscerally repugnant, but does the same "individual suffering and social damage" reasoning apply to making it illegal? The FBI says yes to both, in spite of the fact that we can name no human who was harmed or was unable to give consent in its fabrication (handwaving the source material for the AI; if one chooses not to handwave that, drop the question on the floor and focus on under what reasoning we make hand-illustrated cartoons illegal to possess that couldn't be applied to pornography in general).
> The FBI says yes to both in spite of the fact that we can name no
They have two arguments for this (that I am aware of). The first argument is a practical one: AI-generated images would be indistinguishable from the "real thing", and since the real thing is still out there, that would complicate their efforts to investigate and prosecute. While everyone might agree that this is pragmatic, it's not necessarily constitutionally valid. We shouldn't prohibit activities based on whether they make it more difficult for authorities to investigate crimes. Besides, this one's technically moot... those producing the images could do so in such a way (from a technical standpoint) that they were instantly, automatically, and indisputably provable as being AI-generated.
All images could be mandated to require embedded metadata which describes the model, seed, and so forth necessary to regenerate it. Anyone who needs to do so could push a button, the computer would attempt to regenerate the image from that seed, and the computer could even indicate that the two images matched (the person wouldn't even need to personally view the image for that to be the case). If the application indicated they did not match, then authorities could investigate it more thoroughly.
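As a sketch of how that "push a button" check could work: the snippet below assumes a hypothetical regenerate() callable, since no real generator API is implied here, and in practice samplers aren't always bit-reproducible across hardware, so a real system would also need a canonical reference runtime.

```python
import hashlib

def verify_generated(image_bytes: bytes, meta: dict, regenerate) -> bool:
    """Re-run the generator with the embedded parameters and compare outputs.

    `regenerate` is a hypothetical callable (model, seed, prompt) -> bytes.
    A byte-identical result proves the image came from those parameters,
    without anyone having to personally view the image.
    """
    candidate = regenerate(meta["model"], meta["seed"], meta["prompt"])
    return hashlib.sha256(candidate).digest() == hashlib.sha256(image_bytes).digest()
```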
The second argument is an economic one. That is, if a person "consumes" such material, they increase economic demand for it to be created. Even in a post-AI world, some "creation" would be criminal. Thus, the consumer of such imagery causes (indirectly) more child abuse, and the government is justified in prohibiting AI-generated material. This is a weak argument on the best of days... when there are two varieties of a behavior, one objectionable and the other not, but both similar enough that they might at a glance be mistaken for one another, disincentivizing one without infringing the other is exactly the kind of thing law enforcement efforts excel at. And being an economic argument, it cuts the other way too: economic actors seek to reduce their risk of doing business, and so would gravitate toward producing the legal variety of material.
While their arguments are dumb, this filth is as reprehensible as anything. The only question worth asking or answering is: were it legal (the AI-generated kind), would it result in fewer children being harmed or not? It's commonly claimed that the easy availability of mainstream pornography has reduced the rate of rape since the mid-20th century.
> the individual suffering and social damage caused by the actions that he incentivizes
That's some convoluted way to say he deserves 25 years because he may (or may not) at some point in his life molest a kid.
Personally I think that the idea of convicting a man for his thoughts is borderline crazy.
Users of child pornography need to be arrested, treated, flagged, and receive psychological follow-up all their lives, but sending them away for 25 years is lazy and dangerous, because when they get out they will be even worse than before and won't have much to lose.
The language is defined by how people actually use it, not by how a handful of activists try to prescribe its use. Ask any random person on the street, and most of them have no idea what CSAM is, but they know full well what "child porn" is. Dictionaries, encyclopedias etc also reflect this common sense usage.
The justification for this attempt to change the definition doesn't make any sense, either. Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad. In fact, I would posit that making this argument in the first place is detrimental to sex-positive outlook on porn.
> Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad.
I think people who want others to stop using the term "child porn" are actually arguing the opposite of this. Porn is good, so calling it "child porn" is making a euphemism or otherwise diminishing the severity of "CSAM" by using the positive term "porn" to describe it.
I don't think the established consensus on the meaning of the word "porn" itself includes some kind of inherent implied positivity, either; not even among people who have a generally positive attitude towards porn.
"Legitimate" is probably a better word. I think you can get the point though. Those I have seen preferring the term CSAM are more concerned about CSAM being perceived less negatively when it is called child porn than they are about consensual porn being perceived more negatively.
Is it still okay to say "pirated movies" or is that not negative enough since movies are okay? Should we call it "intellectual property theft material"?
Stop doing this. You are confusing the perfectly noble aspect of calling it abuse material to make it victim centric with denying the basic purpose of the material. The people who worked hard to get it called CSAM do not deny that it’s pornography for its users.
The distinction you went on to make was necessary specifically for this reason.
> the private search doctrine, which authorizes a government actor to repeat a search already conducted by a private party without securing a warrant.
IANAL, etc. Does that mean that if someone breaks into your house in search of drugs, finds and steals some, and is caught by the police and confesses everything, the police can then search your house without a warrant?
IANAL either, but from what I've read before the courts treat searches of your home with extra care under the 4th Amendment. At least one circuit has pushed back on applying private search cases to residences, and that was for a hotel room[0]:
> Unlike the package in Jacobsen, however, which "contained nothing but contraband," Allen's motel room was a temporary abode containing personal possessions. Allen had a legitimate and significant privacy interest in the contents of his motel room, and this privacy interest was not breached in its entirety merely because the motel manager viewed some of those contents. Jacobsen, which measured the scope of a private search of a mail package, the entire contents of which were obvious, is distinguishable on its facts; this Court is unwilling to extend the holding in Jacobsen to cases involving private searches of residences.
So under your hypothetical, I'd expect the police would be able to test "your drugs" that they confiscated from the thief, and use any findings to apply for a warrant for a search of your house, but any search without a warrant would be illegal.
"That, however, does not mean that Maher is
entitled to relief from conviction. As the district court correctly ruled in the
alternative, the good faith exception to the exclusionary rule supports denial of
Maher’s suppression motion because, at the time authorities opened his uploaded
file, they had a good faith basis to believe that no warrant was required."
"Defendant [..] stands convicted following a guilty plea in
the United States District Court for the Northern District of New York
(Glenn T. Suddaby, Judge) of both receiving and possessing approximately
4,000 images and five videos depicting child pornography"
A win for Google, for the US judicial system, and for constitutional rights.
You forgot your IANAL, but thankfully it's obvious.
That's a ridiculous desire. In that world, if I delete your comment and you kill me in retaliation, should you be let free if you argue that my deleting your comment infringed your right to free speech?
What I mean specifically is that because the police saw illegally obtained evidence, all evidence collected after that point should be considered fruit of the poisoned tree and inadmissible.
I think reasonable people can disagree on this. Fruit of the poisoned tree doctrines eliminate bad incentives for law enforcement to commit crimes in their investigations. But when doctrines such as this cause the guilty to go free, it erodes public confidence in the rule of law. Like many things, it's a tradeoff - and the legal system is a process of discovery and adaptation, not some simplistic set of unchangeable rules. Justice for one must always be upheld. I think this ruling does a good job of threading that needle.
The public are generally closeted authoritarians (see: pandemic, 9/11), and their opinions on fair trials, evidentiary rules, surveillance, and constitutional freedoms should not be regarded
IANAL. But the argument for the right to search was twofold: one ground was considered unconstitutional, the other wasn't.
If I search you based on probable cause and fear of destruction of evidence, and the judge rules one was valid and the other not, is the evidence inadmissible?
To me it's clearly an if OR search not an if AND search. Otherwise you disincentivize multiple justifications to do something.
I was using those MD5 sums to flag images 20 years ago for the government. There were occasional false positives, but the safety team would review those, not operations. My only role was to burn the user's account to a DVD (via a script) and have the police officer pick up the DVD; we never touched the data ourselves, and only burned the disk with a warrant. (We never saw or touched the user's files...)
I figured this was the common industry standard for chain of custody of evidence. Same with police videos: they are uploaded to the court's digital evidence repository, and everyone who looks at the evidence is logged.
Serious question: a Google employee is not law enforcement. So if they are examining child pornography, even if only to identify it, aren't they breaking the law?
18 U.S.C. § 2252
…
c) Affirmative Defense.—It shall be an affirmative defense to a charge of violating paragraph (4) of subsection (a) that the defendant—
(1) possessed less than three matters containing any visual depiction proscribed by that paragraph; and
(2) promptly and in good faith, and without retaining or allowing any person, other than a law enforcement agency, to access any visual depiction or copy thereof—
(A) took reasonable steps to destroy each such visual depiction; or
(B) reported the matter to a law enforcement agency and afforded that agency access to each such visual depiction.
So the Google employee would both have to "promptly" notify law enforcement AND not show it to any person other than law enforcement, and also have fewer than three pieces of content. And additionally, assigning hashes/etc - does that mean the file was preserved (otherwise what's the hash representing? A photo they once found and then destroyed? How could law enforcement prove that the hash represented a prohibited image in order to establish probable cause?) "We say some random string of characters represents an image that we saw is child porn, so now you can issue a warrant." What protections does the user have against a warrant being issued under false information? If the original image no longer exists, then how could the warrant be challenged? If the original image does exist - is Google storing it? And if they are storing it, aren't they themselves breaking the law?
In other words, do Google servers knowingly have child pornography on them? And does a private employee have immunity from prosecution if they don’t immediately notify law enforcement and also don’t show anyone else? I could see an instance where an employee shows his or her supervisor prior to calling law enforcement — which would be a violation of that law since the law explicitly says “any” person other than law enforcement.
I am being somewhat tedious here, but I am genuinely interested in how Google can handle this without breaking the law themselves — more specifically the employee who, at the time of classifying the image, is in possession of illegal content.
And does Google have some de facto exemption that wouldn’t apply to a smaller founder who had to contend with user created content and followed the same “assigning a hash” process as Google?
What are the best (legal) practices around this, specifically around smaller companies that might have to contend with these issues?
I guess it's always a grey area for Google. But most likely, their contractors are overseas, and they contract their work to a third party in a third country. It's entirely possible that the business has gone full circle and the same studios deliver both the CP and the CP hashes.
- Defendant was in fact sending CP through his gmail.
- gmail correctly detects and flags it based on hash value
- Google sends message to NCMEC based on hash value
- NCMEC sends it to police based on hash value
Now police are facing the obvious question, is this actually CP? They open the image, determine it is, then get a warrant to search his gmail account, and (later) another warrant to search his home.
The court here is saying they should have got a warrant to even look at the image in the first place. But warrants only issue on probable cause. What's the PC here? The hash value. What's the probability of hash collisions? Non-zero but very low.
The practical upshot of this is that all reports from NCMEC will now go through an extra step of the police submitting a copy of the report with the hash value and some boilerplate document saying 'based on my law enforcement experience, hash values are pretty reliable indicators of fundamental similarity', and the warrant application will then be rubber stamped by a judge.
An analogous situation would be where I send a sealed envelope with some documents to the police, writing on the outside 'I believe the contents of this envelope are proof that John Doe committed [specific crime]', and the police have to get a warrant to open the envelope. It's arguably more legally consistent, but in practice it just creates an extra stage of legal bureaucracy/delay with no appreciable impact on the eventual outcome.
Recall that the standard for issuance of a warrant is 'probable cause', not 'mathematically proven cause'. Hash collisions are a possibility, but a sufficiently unlikely one that it doesn't matter. Probable cause means 'a fair probability' based on independent evidence of some kind - testimony, observation, forensic results or so. Even a shitty hash function that's only 90% reliable is going to meet that threshold. In the 10% of cases where the opened file turns out to be a random image with no pornographic content it's a 'no harm no foul' situation.
and a more detailed examination of common perceptual hashing algorithms (skip to table 3 for the collision probabilities): https://ceur-ws.org/Vol-2904/81.pdf
I think what a lot of people are implicitly arguing here is that the detection system needs to be perfect before anyone can do anything. Nobody wants the job of examining images to check if they're CP or not, so we've outsourced it to machines that do so with good-but-not-perfect accuracy and then pass the hot potato around until someone has to pollute their visual cortex with it.
Obviously we don't want to arrest or convict people based on computer output alone, but how good does it have to be (in % or odds terms) in order to begin an investigation - not of the alleged criminal, but of the evidence itself? Should companies like Google have to submit an estimate of the probability of hash collisions using their algorithm and based on the number of image hashes that exist on their servers at any given moment? Should they be required to submit source code used to derive that? What about the microcode of the silicon substrate on which the calculation is performed?
All other things being equal, what improvement will result here from adding another layer of administrative processing, whose outcome is predetermined?
> Recall that the standard for issuance of a warrant is 'probable cause', not 'mathematically proven cause'. Hash collisions are a possibility, but a sufficiently unlikely one that it doesn't matter. Probable cause means 'a fair probability' based on independent evidence of some kind - testimony, observation, forensic results or so. Even a shitty hash function that's only 90% reliable is going to meet that threshold. In the 10% of cases where the opened file turns out to be a random image with no pornographic content it's a 'no harm no foul' situation.
But do we actually know that? Do we know what thresholds of "similarity" are in use by Google and others, and how many false positives they trigger? Billions of photos are processed daily by Google's services (Google Photos, chat programs, Gmail, Drive, etc.), and very few people actually send such stuff via Gmail, so what if the reality is that 99.9% of the matches are actually false positives? What about intentional matches, like someone intentionally creating some random SFW meme image that (when hashed) matches some illegal image's hash, and that photo then getting sent around intentionally? Should police really be checking all those emails, photos, etc., without warrants?
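To make the base-rate worry concrete, here's a quick Bayes sketch in Python. All three numbers are made up purely for illustration, since the real prevalence and error rates aren't public:

```python
# All figures are hypothetical; nobody outside these systems knows the real ones.
p_csam = 1e-7         # prior: fraction of scanned images that actually are CSAM
p_match_csam = 0.99   # P(hash match | image is CSAM), the true positive rate
p_match_clean = 1e-9  # P(hash match | image is clean), the false positive rate

p_match = p_match_csam * p_csam + p_match_clean * (1 - p_csam)
p_csam_given_match = p_match_csam * p_csam / p_match
print(f"P(CSAM | match) = {p_csam_given_match:.2f}")  # ~0.99 with these numbers

# With a false positive rate of 1e-6 instead, the posterior collapses to ~0.09:
# the answer depends entirely on numbers that are not public.
```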
Well, that's why I'm asking what threshold of certainty people want to apply. The hypotheticals you cite are certainly possible, but are they likely?
> what if the reality is that 99.9% of the matches are actually false positives
Don't you think that if Google were deluging the cops with false positive reports that turned out to be perfectly innocuous 999 times out of 1000, that police would call them up and say 'why are you wasting our time with this?' Or that defense lawyers wouldn't be raising hell if there were large numbers of clients being investigated over nothing? And how would running it through a judge first improve that process?
> What about intentional matches, like someone intentionally creating some random SFW meme image [...]
OK, but what is the probability of that happening? And if such images are being mailed in bulk, what would be the purpose other than to provide cover for CSAM traders? The tactic would only be viable for as long as it takes a platform operator to change up their hashing algorithm. And again, how would the extra legal step of consulting a judge alleviate this?
> should police really be checking all those emails, photos, etc., without warrants?
But that's not happening. As I pointed out, police examined the submitted image evidence to determine whether it was CP (it was). Then they got a warrant to search the Gmail account, and following that another warrant to search his home. They didn't investigate the suspect first; they investigated an image file submitted to them to determine whether it was evidence of a crime.
And yet again, how would bouncing this off a judge improve the process? The judge will just look at the report submitted to the police and a standard police letter saying 'reports of this kind are reliable in our experience' and then tell the police yes, go ahead and look.
> Don't you think that if Google were deluging the cops with false positive reports that turned out to be perfectly innocuous 999 times out of 1000, that police would call them up and say 'why are you wasting our time with this?' Or that defense lawyers wouldn't be raising hell if there were large numbers of clients being investigated over nothing? And how would running it through a judge first improve that process?
Yes, sure.. they send them a batch of photos, thousands even, and someone from the police skims the photos... a fishing expedition would be the right term for that.
> OK, but what is the probability of that happening? And if such images are being mailed in bulk, what would be the purpose other than to provide cover for CSAM traders? The tactic would only be viable for as long as it takes a platform operator to change up their hashing algorithm. And again, how would the extra legal step of consulting a judge alleviate this?
You never visited 4chan?
> But that's not happening. As I pointed out, police examined the submitted image evidence to determine whether it was CP (it was). Then they got a warrant to search the Gmail account, and following that another warrant to search his home. They didn't investigate the suspect first; they investigated an image file submitted to them to determine whether it was evidence of a crime.
They first entered your home illegally and found a joint on the table, and then got a warrant for the rest of the house. As pointed out in the article and in the title... they should need a warrant for the first image too.
> And yet again, how would bouncing this off a judge improve the process? The judge will just look at the report submitted to the police and a standard police letter saying 'reports of this kind are reliable in our experience' and then tell the police yes, go ahead and look.
Sure, if it brings enough results. But if they issue 200 warrants and get zero results, things will have to change, both for police and for google. This is like saying "that guy has long hair, he's probably a hippy and has drugs, let's get a search warrant for his house". Currently we don't know the numbers, and most people (you excluded) believe that police shouldn't search private data of people just because some algorithm thinks so, without a warrant.
The idea that police are spending time just scanning photos of trains, flowers, kittens and so on in hopes of finding an occasional violation seems ridiculous to me. If nothing else, you would expect NCMEC to wonder why only 0.1% of their reports are ever followed up on.
> a fishing expedition would be the right term for that
No it wouldn't. A fishing expedition is where you get a warrant against someone without any solid evidence and then dig around hoping to find something incriminating.
> You never visited 4chan?
I have been a regular there since 2009. What point are you attempting to make?
> They first entered your home illegally and found a joint on the table, and then got a warrant for the rest of the house. As pointed out in the article and in the title... they should need a warrant for the first image too.
This analogy is flat wrong. I already explained the difference.
> most people (you excluded) believe that police shouldn't search private data of people just because some algorithm thinks so, without a warrant.
That is not what I believe. I think they should get a warrant to search any private data. In this case they're looking at a single image to determine whether it's illegal, as a reasonably reliable statistical test suggests it to be.
You're not explaining what difference it makes if a judge issues a warrant on the exact same criteria.
and a more detailed examination of common perceptual hashing algorithms (skip to table 3 for the collision probabilities): https://ceur-ws.org/Vol-2904/81.pdf
And there was a whole lot of explanation of how probable cause works and how it's different from programmers' aspirations to perfection.
Do you have any evidence that this is happening? You don't think someone would have noticed by now if it were?
And as I pointed out, we're not talking about a search warrant on a person, we're talking about whether it's necessary to get a search warrant to look at a picture to determine if it's an illegal image.
The old example is the email server administrator. If the email administrator has to view the contents of user messages as a part of regular maintenance, and in doing so notices violations of the law in those messages, they can report it to law enforcement. In that case law enforcement can receive the material without a warrant, but only if law enforcement never asked for it before it was handed to them. There are no Fourth Amendment protections for offenders in this scenario of third-party accidental discovery. Typically, in these cases, the email administrator has no affirmative requirement to report violations to law enforcement unless specific laws say otherwise.
If on the other hand law enforcement approaches that email administrator to fish for illegal user content then that email administrator has become an extension of law enforcement and any evidence discovered cannot be used in a criminal proceeding. Likewise, if the email administrator was intentionally looking through email messages for violations of law even not at the request of law enforcement they are still acting as agents of the law. In that case discovery was intentional and not an unintentional product of system maintenance.
There is a third scenario: obscenity. Obscenity is illegal content, whether digital or physical, as defined by criminal code. Possession of obscene materials is a violation of criminal law for all persons, businesses, and systems in possession. In that case an email administrator who accidentally discovers obscene material does have an obligation to report the discovery, typically through their employer's corporate legal process, to law enforcement. Failure to disclose such discoveries potentially aligns the system provider with the illegal conduct of the violating user.
Google's discovery, though, was not accidental as a result of system maintenance. It was due to an intentional discovery mechanism based on stored hashes, which puts Google's conduct in line with law enforcement even if they specified their conduct in their terms of service. That is why the appeals court claims the district court erred by denying the defendant's right to suppression on fourth amendment grounds.
The saving grace for the district court was a good faith exception, such as inevitable discovery. The authenticity and integrity of the hash algorithm was never in question by any party, so no further search of the violating material was needed to establish probable cause, which gave law enforcement reasonable grounds to proceed to trial. No warrant was required because the evidence would likely have been sufficient at trial even if law enforcement had not directly viewed the image in question (though they did verify the image). None of that was challenged by either party. What was challenged was just Google's conduct.
So now an algorithm can interpret the law better than a judge. It’s amazing how technology becomes judge and jury while privacy rights are left to a good faith interpretation. Are we really okay with letting an algorithmic click define the boundaries of privacy?
The judge doesn't really understand a hash well. They say things like "Google assigned a hash" which is not true, Google calculated the hash.
Also I'm surprised the third-party doctrine doesn't apply. The "private search doctrine" is mentioned, but generally you don't have an expectation of privacy for things you share with Google.
"More simply, a hash value is a string of characters obtained by processing the contents of a given computer file and assigning a sequence of numbers and letters that correspond to the file’s contents."
Google assigned the hashing algorithm (maybe - assuming it wasn't chosen in some law somewhere; I know this CSAM hashing is something the big tech companies work on together).
Once the hashing algorithm was assigned, individual values are computed or calculated.
I don't think the judge's wording is all that bad but the word "assigned" is making it sound like Google exercised some agency when really all it did was apply a pre-chosen algorithm.
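For what it's worth, the "calculated" point is easy to demonstrate: hashing involves no discretion, because the same input always yields the same output. A minimal sketch using plain SHA-256 as a stand-in (the production systems use proprietary perceptual hashes, and the filename here is made up):

```python
import hashlib

# The same bytes always produce the same digest; nothing is "assigned".
with open("holiday_photo.jpg", "rb") as f:  # hypothetical file
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)  # identical on any machine, at any time, for identical input
```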
And it should be effectively collision-free under most conditions. (A truly injective hash is obviously impossible in practice, but hashes with common collisions shouldn't be allowed as legal evidence.) Also, neural/visual hashes like those used by big tech make things tricky.
The hash in question has many collisions. It is probably enough to put on a warrant application, but it may not be enough to get a warrant without some other evidence (it can be enough to justify looking for other public signs of evidence, or perhaps a warrant is justified because a number of images match different hashes).
There's a password on my Google account, I totally expect to have privacy for anything I didn't choose to share with other people.
The hash is a kind of metadata recorded by Google, and I feel like Google using it to keep child porn off their systems should be reasonable. Same ballpark as limiting my storage to 1GB based on file sizes. Sharing metadata without a warrant is a different question though.
As should be expected from the lawyer world, it seems like whether you have an expectation of privacy using gmail comes down to very technical word choices in the ToS, which of course neither this guy nor anyone else has ever read. Specifically, it may be legally relevant to your expectation of privacy whether Google says they "may" or "will" scan for this stuff.
Out of curiosity, what is the false positive rate of a hash match?
If the FPR is comparable to asking a human "are these the same image?", then it would seem to be equivalent to a visual search. I wonder if (or why) human verification is actually necessary here.
I doubt SHA-1 hashes are used for this. Those image hashes should match files regardless of orientation, cropping, resizing, re-compression, color correction, etc. Collisions could be far more frequent with these hashes.
The hash should ideally match even if you use photoshop to cut the one person out of the picture and put that person into a different photo. I'm not sure if that is possible, but that is what we want.
The reason human verification is necessary is that the government is relying on something called the "private search" doctrine to conduct the search without a warrant. This doctrine allows them to repeat a search already conducted by a private party (i.e., Google) without getting a warrant. Since Google didn't actually look at the file, the government is not able to look at the file without a warrant, as that search exceeds the scope of the initial search Google performed.
Naively, 1/(2^{hash_size_in_bits}). Which is about 1 in 4 billion odds for a 32 bit hash, and gets astronomically low at higher bit counts.
Of course, that's assuming a perfect, evenly distributed hash algorithm. And that's just the odds that any given pair of images has the same hash, not the odds that a hash conflict exists somewhere on the internet.
Normal hash functions have pseudo-random outputs and they can collide even when the input space is much smaller than the output space.
In fact, I'll go run ten million values, encoded into 24 bits each, through a 40-bit hash and count the collisions. My hash of choice will be a truncated SHA-256.
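Something like the sketch below. The birthday estimate, N(N-1)/(2*2^40), predicts roughly 45 collisions for ten million inputs, even though the 40-bit output space is about 100,000 times larger than the input count:

```python
import hashlib

N = 10_000_000
seen = set()
collisions = 0

for i in range(N):
    value = i.to_bytes(3, "big")            # each input encoded into 24 bits
    h = hashlib.sha256(value).digest()[:5]  # digest truncated to 40 bits
    if h in seen:
        collisions += 1
    else:
        seen.add(h)

print(collisions)  # birthday math predicts ~45 despite the huge output space
```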
> Out of curiosity, what is false positive rate of a hash match?
No way to know without knowledge of the 'proprietary hashing technology'.
Theoretically though, a hash can have infinitely many inputs that produce the same output.
Mismatching hash values from the same hashing algorithm can prove mismatching inputs, but matching hash values don't ensure matching inputs.
> I wonder if (or why) human verification is actually necessary here
It's not about frequency, it's about criticality of getting it right. If you are going to make a negatively life-altering report on someone, you'd better make sure the accusation is legitimate.
I'd say the focus on hashing is a bit of a red herring.
Most anyone would agree that the hash matching should probably form probable cause for a warrant, allowing a judge to sign off on the police searching (i.e., viewing) the image. So, if it's a collision, the cops get a warrant and open up your linux ISO or cat meme, and it's all good. Probably the ideal case is that they get a warrant to search the specific image, and are only able to obtain a warrant to search your home and effects, etc. if the image does appear to be CSAM.
At issue here is the fact that no such warrant was obtained.
> Most anyone would agree that the hash matching should probably form probable cause for a warrant
I disagree with this. Yes, if we were talking MD5, SHA, or some similar true hash algo, then the probability of a natural collision is small enough that I agree in principle.
But if the hash algo is of some other kind then I do not know enough about it to assert that it can justify probable cause. Anyone who agrees without knowing more about it is a fool.
That's fair. I came away from reading the opinion that this was not a perceptual hash, but I don't think it is explicitly stated anywhere. I would have similar misgivings if indeed it is a perceptual hash.
I think it'll prove far more likely that the government creates incentives to lead Google/other providers to fully do the search on their behalf.
The entire appeal seems to hinge on the fact that Google didn't actually view the image before passing it to NCMEC. Had Google policy been that all perceptual hash hits were reviewed by employees first, this would've likely been a one page denial.
If the hash algorithm were CRC8, then obviously it should not be probable cause for anything. If it were SHA-3, then it's basically proof beyond reasonable doubt of what the file is. It seems reasonable to question how collisions behave.
I don't agree that it would be proof beyond reasonable doubt, especially because neither google nor law enforcement can produce the original image that got tagged.
By original do you mean the one in the database or the one on the device?
If the device spit out the same SHA3, then either it had the exact same image, or the SHA3 was planted somehow. The idea that it's actually a different file is not a reasonable doubt. It's too unlikely.
By the original, I mean the image that was used to produce the initial hash, which Google (rightly) claimed to be CSAM. Without some proof that an illicit image that has the same hash exists, I wouldn't accept a claim based on hash alone.
Oh definitely you need someone to examine the image that was put in the database to show it's CSAM, if the legal argument depends on that. But that's an entirely different question from whether the image on the device is that image.
For non-broken cryptographic hashes (e.g., SHA-256), the false-positive rate is negligible. Indeed, cryptographic hashes were designed so that even nation-state adversaries do not have the resources to generate two inputs that hash to the same value.
These are not the kinds of hashes used for CSAM detection, though, because that would only work for the exact pixel-by-pixel copy - any resizing, compression etc would drastically change the hash.
Instead, systems like these use perceptual hashing, in which similar inputs produce similar hashes, so that one can test for likeness. Those have much higher collision rates, and are also much easier to deliberately generate collisions for.
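As a concrete illustration, the textbook "average hash" below shows the basic idea: shrink the image, grayscale it, and threshold each pixel at the mean, so small edits only flip a few bits. This is just a sketch of the general technique (it assumes the Pillow library), not the proprietary algorithms actually used for CSAM detection:

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """64-bit average hash: shrink, grayscale, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Bits that differ; a small distance means 'perceptually similar'."""
    return bin(a ^ b).count("1")

# Re-encoding or resizing a photo usually moves its hash only a few bits,
# which is the point - and also why unrelated images can land close together.
```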
There is a reality distortion field in existence now because almost every police interaction is recorded (body cams are everywhere nowadays) and the ones that go bad are put on full blast across social media and the news, despite them being somewhere on the order of 1 in 1,000,000 encounters.
Seriously, if car accidents were reported like police accidents, we probably would have been forced by confused ideologues to ban automobiles 2 years ago.
Given that there are over 100 deaths a day in the US (as of 2022), we probably should consider car accidents more than we do.
(But they pretty much do report on them consistently on local news... People won't stop driving because the social benefit is so large).
That’s called luck.
Personally I’m a cis white male, who’s been a mostly law abiding citizen, and I’ve had dozens of poor interactions with police throughout my life. Additionally I have a probably unusual number of family and friends who work in law enforcement. The stories I’ve heard about co-workers from them are absolutely terrifying. My father’s (retired police officer) advice when I became a teenager was “only call the police when what is about to happen is worse than going to jail.”
I deeply respect the difficulty of the profession and don’t believe that all or even most police are bad people, but there are way too many who have no business being in that profession.
Honest question: are you white?
> Bad people have to go free in order to incentivize good behavior by cops.
And they will, next time, and everyone knows it. We don't need an actual example of a bad person going free if the potential is certain enough.
Unless, of course, you're trying to encourage good behaviour in the general case (rather than a codified list of specifics); but that's expecting police officers to be experts in right and wrong. As obvious as such things are to me, I'm aware that a lot of people struggle a lot more with these things. (Or, perhaps, struggle less: I spend a lot of time thinking about morality and ethics, more than is reasonable to expect a salaried worker to spend.)
I think it's okay that we expect cops to be good _after_ the rule exists, rather than set the bad guys free to (checks notes) incentivize cops to take our new rule super seriously.
It would seem that the inverse would need to apply in order for the justice system to have any semblance of impartiality. That is that we now have to let both of them off the hook, since neither had been specifically informed they weren’t allowed to do the thing beforehand.
That is why many people think this should be tossed out. Ignorance that an action was a crime is almost never an acceptable defense, so it should not be an acceptable offense either.
> we now have to let both of them off the hook, since neither had been specifically informed they weren’t allowed to do the thing beforehand.
I'm not trying to be funny, or aggressive, or passive aggressive, seriously: there's two entities in the discussion, the cops, and the person with a photograph with a hash matching child porn. I'm phrasing that as passively as possible because I want to avoid the tarpit of looking like I'm appealing to emotion:
Do you mean the hash-possessor weren't specifically informed it was illegal to possess said hash?
> It would seem that the inverse would need to apply in order for the justice system to have any semblance of impartiality...That is why many people think this should be tossed out.
Of course, I could be missing something here because I'm making a hash of parsing the first bit. But, no, if the cops in good faith make a mistake, there's centuries of jurisprudence behind not letting people go free for it, not novel with this case.
> Do you mean the hash-possessor weren't specifically informed it was illegal to possess said hash?
This is literally the doctrine behind the good faith argument and qualified immunity. If they have not been informed that this specific act, done in this specific way is not allowed then it is largely permissible.
A stupid but equivalent defense from the possessor would be “it’s in Googles possession, not mine, so I had a good faith belief that I did not possess the files”. It’s clearly wrong based on case law, but I wouldn’t expect the average person to have a great grasp of how possession works legally (nor would I claim to be an expert on it).
This is effectively what the good faith doctrine establishes for police, even though they really ought to at least have an inkling given that the law is an integral part of their jobs. As long as they can claim to be sufficiently stupid, it is permissible. That is not extended to the defense, for whom stupidity is never a defense.
> But, no, if the cops in good faith make a mistake, there's centuries of jurisprudence behind not letting people go free for it, not novel with this case.
Acting in good faith would be getting a warrant regardless, because the issue is not that time-sensitive and there are clear ambiguities here. They acted brashly under the assumption that if they were wrong, they could claim stupidity. It encourages the police to push the boundaries of legal behavior, because they still get to keep the evidence even if they are wrong and have committed an illegal search.
It is, yet again, rules for thee but not for me. Frankly, with the asymmetry of responsibility and experience with laws, the police should need to clear a MUCH higher bar to come within throwing distance of “good faith”.
>This is literally the doctrine behind the good faith argument and qualified immunity. If they have not been informed that this specific act, done in this specific way is not allowed then it is largely permissible.
For criminal actions an entirely different set of standards exists, with longstanding legal precedent - two in particular: mens rea and strict liability.
Right, and my argument is that the double standard itself is not just. I do know as a matter of practicality that I don't really have a legal leg to stand on here; the law is what judges say it is, and they've said it is the way it currently is.
I do not find "the justice system treats them differently, therefore they are different and the justice system is just in treating them differently" to be a compelling argument that the double standard is just. It's just a circular appeal to authority; any behavior by the justice system is morally permissible under that idea, simply because the justice system declares it to be so.
My question is how is it just that differing standards apply? And furthermore, how is it just that that leniency is granted to the benefactor of a severe power imbalance? Unconstitutional search and seizure could absolutely be a crime; in this situation, a citizen would likely be charged under the CFAA, which is a crime.
It's not a double standard though, it's different standards for fundamentally different kinds of actions (state police action vs criminal activity)
Those are only nominally different, insofar as the justice system chooses to call some acts one and some acts another. It doesn't speak to the nature of the act, only what we choose to classify it as.
I.e. unconstitutional searches could be criminal activity if the judiciary just decides to classify it differently.
There are certainly differences in the nature of the act that we could talk about, but how the judiciary classifies them is only a nominal difference.
Idk why you keep saying way out there stuff, then repeating back mundane stuff.
Yeah, the difference between a bad thing and a good thing is what judges say.
No, there is a difference bigger than "nominal", which means in name only. Go out on the street and try explaining that someone possessing child porn, and cops being handed a subscriber name + child porn image by Google, are exactly the same thing with only a "nominal" difference between them.
I'm genuinely at a loss for how this doesn't make sense. "Crime" is absolutely a nominal status. Things can be made into a crime or no longer a crime arbitrarily. Abortion was legal across the US, and then it wasn't. Abortion didn't change at all, but how we refer to it did. Ditto for possession/distribution of alcohol, some kinds of firearms, slavery, etc, etc.
I am not arguing that possession of child pornography is good or permissible, my point is that the things police do are only "police actions" rather than "crimes" because we choose to refer to them as such. We could pass a law tomorrow that says unlawful search and seizure is a crime, and then the "crime" label would apply to the police as well. The specific crime would be different, but both would be categorically "crime". It is undesirable to make possession of CP by police a crime because it would interfere with their ability to investigate it, but those justifications do not apply to why unlawful search and seizure should not be a crime or at the very least fruit of the poisoned tree.
> I'm genuinely at a loss for how this doesn't make sense
I'm really not trying to be mean or making charged comments in any of the following, I apologize if it reads that way. I really appreciate your investment in this thread, it wasn't a driveby, you mean what you're saying, you're not trying to score points AFAICT. I think working through my discomfort is the best way to pay that forward. I save the most concise / assuming / judgey version of this for the end of the post.
There's just something very...off....with the whole thing. Like it reads like an intellectual exercise, I get the same vibe as watching someone work really really hard to make a philosophical argument to stir conversation.
You have these absolutes and logical atoms that seem straightforward and correct, but they're handwaving away a whole field and centuries of precedent.
There's this shuttling back and forth between wide scope and narrow scope that's really hard to engage with. Like, yes, we know "crime" is a nominal thing. My mind immediately jumps to "yes, calling things 'bad' is nominal and subjective" ---
Then, my mind transports me back to my sophomore year english class where someone starts free-associating about how nothing can be 100% confirmed to be real. I'm frustrated there, because, yes, that's true but doesn't shed any light, there's nothing to be gained from mining that vein, and doesn't map to how people have to engage with the world day to day.
You also have a very hard time accepting that this isn't reducible down to "unlawful search and seizure via 4th amendment violation" --- I don't mean to be aggressive, here: after a day and a lot of your thoughts, I still genuinely don't know if you understand that these things have ambiguities and that's why there's a whole industry around them.
I think we agree on:
- calling things bad is subjective.
- similarly, calling things "crimes" is subjective, and part of that is contextual (ex. we allow some people to do some things, but not others)
Then from there, I bet you'd agree to:
- therefore, we need some sort of dispute process to sort these things out
- lets say that's called the current legal system
Then from there, it feels like you're asking us to agree to:
- if something is judged to be bad moving forward, it is okay to punish those who did the bad thing in the past, no matter the circumstances
- now lets apply that specifically:
- if cops did a thing that's not allowed moving forward, then it is a moral imperative for the cops to drop every case that involved doing the thing that's not allowed moving forward
That's just way too far for anyone who isn't doing a philosophical exercise.
ex. Miranda v. Arizona established what we call "Miranda rights" -- now a judge has said there's a specific incantation to recite that courts will accept as proof suspects were advised of their rights. Are all prior cases where the Miranda rights were not read suddenly dropped? No, that'd be laughable; no society would tolerate the legal system dropping every case where someone was arrested in that scenario.
The most concise thing I can say, which unfortunately is judgemental due to the conciseness, is the whole thing reeks of an engineering mind expecting their understanding of the law to be an absolute, somehow overlooking that the whole point of the legal system above entry-level courts is that there are no absolutes. From there, let's say you know that and accept that, because that's very likely. Then what happens with the Miranda rights thing? That's one of countless examples, but it's useful because A) I'm sure you grok Miranda rights if you're in the USA B) the principles you're espousing being applied there would lead to an obviously unacceptable outcome, so if your instinct is to say "yeah, do it, free everyone who talked to the cops!" I know you're just killing time doing a thought exercise --- which I do sometimes too! Not judgement.
I do want to apologize for the hostility or frustration of that comment. It had read as a drive by to me, but it wasn't a productive way to engage regardless. I sincerely appreciate you engaging, and I think your post does bring interesting points and I appreciate you taking the time to write them down.
> There's this shuttling back and forth between wide scope and narrow scope thats really hard to engage with.
I can very much see how it reads that way. My intent was to address the comment one or two up from yours saying that they were different because one is a crime, but that is very much a different conversation than this specific case. It feels a bit like I'm having two separate conversations on my end too, which is somewhat difficult for me to do without either writing a novel or losing track of nuance. I'll make an effort to keep this more constrained so it feels less like arguing with a moving target, that is certainly not my intent.
I'm with you on the parts that we agree on, and the parts that you think I'd agree with.
> Then from there, it feels like you're asking us to agree to:
The part that feels, to me, like it's not asking too much is that we already ask this of every other citizen in their everyday life. E.g. (and I apologize for not having a less contentious example) the ATF has repeatedly refused to set quantifiable standards for when someone is selling enough firearms to need an FFL. It's all about being "engaged in the business" and whether sales are for profit or collecting; there is no hard and fast "you must if you have X sales that meet Y criteria".
That's actually much more clear than it used to be; it used to just be "engaged in the business" and you just had to guess whether liquidating a collection made you in the business or not.
It doesn't feel like a huge step forward to say that the people pursuing crimes need to handle ambiguity at least as carefully as a private citizen. Especially considering that police can get a warrant as a definitive answer, where a judge typically won't answer hypotheticals from a citizen.
Furthermore, that was exactly how it worked until the good-faith exception was made in United States v Leon, in 1984 (not a joke, but I did have a chuckle. It's hyperbole but a cute coincidence). A significant portion of Americans were alive when the good faith doctrine didn't exist, and this evidence would have been fruit of the poisoned tree.
It's a little hard for me to accept that the Overton Window has shifted so dramatically that people are unwilling to accept a system they were born with.
> if cops did a thing that's not allowed moving forward, then it is a moral imperative for the cops to drop every case that involved doing the thing that's not allowed moving forward
I would argue for more nuance than that, but that's close. Briefly, I am arguing that evidence obtained via searches that are found not to be supported by the 4th Amendment is necessarily fruit of the poisoned tree, and should not be admissible as evidence in the case nor as evidence to obtain a warrant for a later search. That may result in the charges being dropped in some cases, and not dropped in others where there is other substantial evidence.
Conjecturing about this case, it seems like they would probably have to drop the charges. I don't know though, maybe they have other evidence obtained via other means they could use.
> ex. Miranda v. Arizona established what we call "Miranda rights"
Aside, but Miranda is an interesting example because he was re-tried without using his confession and the conviction stuck that time. An interesting example that a fruit of the poisoned tree policy does not necessarily require dropping charges.
I am perhaps outside the Overton Window here, but I don't see why that is an insane outcome of Miranda, though I will certainly acknowledge that there would be fallout. My line of thinking is essentially that the text of the 5th Amendment did not change, which means that Miranda rights were free for anyone to claim at virtually any point in history (presuming they thought to make the argument). The outcome is necessarily prejudiced; either against defendants who could have argued for rights they didn't know they had, or against the judiciary for failing to establish that those rights exist at an earlier point. It makes sense to me for that to be prejudiced against the judiciary, because they are the arbiters of what rights people have, and had the ability to suggest and establish those rights at any point they wanted. Essentially, if we were going to assign who is responsible for knowing that Miranda rights should exist before they did exist, I would expect that of the arbiters of rights far more than the defense attorney.
I am totally okay with that being unpopular, though. I'm not arguing for the majority of people, just myself.
> the principles you're espousing being applied there would lead to an obviously unacceptable outcome, so if your instinct is to say "yeah, do it, free everyone who talked to the cops!" I know you're just killing time doing a thought exercise --- which I do sometimes too! Not judgement.
Just to reiterate briefly, I do not think they should be immediately set free, but I do think they would be due a retrial without their confession (in the Miranda case specifically) if their confession is material to their conviction. It's not a thought exercise to me, but I may be outside the Overton Window.
I am aware that this would potentially result in some guilty people going free, but I would eat my hat if there wasn't a single person in jail or prison who was innocent and coerced into a confession that could have been avoided if they had known their Miranda rights. I also know that there are no absolutes in the law. It is absolutely a vague mess propped up by piles of precedent that can even be conflicting.
My contention is that given the ambiguity of the law and the power the government wields, defendants should be offered the full protection of the law as we currently understand it. I find the situation frustrating, which makes me look for a source to blame, but I think my real underlying sentiment is a feeling that it is unfair for citizens and defendants to suffer the consequences of the ambiguity of the legal system.
It is hard for me to fathom the despair of someone who was innocent but confessed to a crime after a 12-hour interrogation without knowing that they could remain silent or demand a lawyer. I cannot fathom the despair of watching the Miranda trial and knowing that their lawyer could have argued the same thing, but didn't, and now they're stuck in prison for however many years without any recourse.
That doesn't directly apply to this situation, because I do think this guy is guilty, but these precedents will be used in cases against innocent people. I find it a condemnation of our justice system if we are willing to risk the rights of innocent people to nail a few convictions.
If you have the time, I would really encourage reading the dissenting opinion in United States v. Leon (I'll link it below). Justice Brennan's opinion is far better articulated than mine, and likely less far outside the Overton Window. I'll leave a snippet that I find persuasive here, but the whole thing is worth at least a skim.
" In contrast to the present Court’s restrictive reading, the Court in Weeks recognized that, if the Amendment is to have any meaning, police and the courts cannot be regarded as constitutional strangers to each other; because the evidence-gathering role of the police is directly linked to the evidence-admitting function of the courts, an individual’s Fourth Amendment rights may be undermined as completely by one as by the other."
https://www.courtlistener.com/opinion/111262/united-states-v... (you have to click the Dissent tab, I can't link directly to it and dissenting opinions seem to be difficult to find deep links for).
> I do not find "the justice system treats them differently... circular appeal to authority.
You may not know it, but you are effectively referencing the difference between a "rule of law" and a "rule by law", at least in the important parts.
This goes back to social contract theory: the "rule of law" in a society is meant as the final protection for its members, providing non-violent conflict resolution that is impartial, just, fair, equal under the law, and accessible.
The moment those required components cease to exist marks the beginning of a trend towards the failure of society, as it naturally means increasing violence and reversion to the natural order: the rule of violence.
There is a valid argument to be made that despite many people claiming we have the former, we are actually living in the latter.
The latter allows many miscarriages, such as the infamous soviet judiciary example of, "you show me the person, I'll show you the crime".
Possession laws historically are also particularly problematic in this regard because evidence can be planted, or, in the case of digital systems, evidence can be created involuntarily (given how systems work, and how callbacks can be injected by pointing software at a third-party resource to download). Regardless, there are many potential situations where the viewing of such horrifying material is unrelated to any choice made by the person accused.
That of course doesn't appear to be the case here given what's been written, but nonetheless it is important to have firm, objective, and rational requirements to protect citizens. The trade-off is some small number of bad guys may get to go free as a result, and that's a tradeoff anyone should be glad for when it comes to corruption and how it devolves into tyranny unchecked.
The law rarely differentiates mitigating circumstances, which, when these types of structural flaws are allowed, often leads to a guilty-until-proven-innocent situation. For example, there are locksmith tools that are considered burglary tools, and mere possession in some places is grounds for arrest (a felony), even though those tools share their physical shapes with items that have legitimate uses.
Systems without appropriate procedures and processes for punishing abuses almost always lead to totalitarianism when no feedback mechanism is in place to prevent such abuses from getting out of hand, which is why any true American should be up in arms when abuses happen as a result of corruption. Corruption can occur for a number of reasons that do not benefit a person. For a full treatment of corruption, Johnston wrote a book on it ("Syndromes of Corruption").
Unfortunately, many judges today view the constitution as only being binding on government itself (in isolation), and have long taken the literal or constructive ruling instead of going with the spirit of the law, lessening our protections over time gradually but surely. This will eventually lead us to societal collapse.
It is a sad state of affairs, but regardless of the nature of the crime, the ends do not justify the means absent a direct existential threat, which cannot be soundly argued in this case.
When those means are allowed to change arbitrarily, the very next time it will be you or someone close to you on the sacrificial altar as a matter of some corrupt official's convenience, perhaps merely in retaliation for exercising your protected right to free speech to call out corrupt behavior or express disagreement; there will only be an indirect link.
These tools are then ready made to be used in retaliation arbitrarily.
That said,
In this case, at least from what I've read, it appears a fairly clear cut case of fruit of the poisoned tree.
Law enforcement could easily have applied for a warrant based on the probable cause of the hash matches, but instead chose not to. There is also the question of the methodology Google uses to manage and enter new hashes into its hash database (which went unanswered).
They would have needed a warrant to justify everything else that came later. That is classic fruit of the poisoned tree. Thus it is a constitutional violation.
Additionally, I'm sure it comes as no surprise to most HN readers who are programmers, but hashes are not unique: they are fingerprints of a file's contents, not the contents themselves, so a match is not proof of an exact duplicate.
A hash maps arbitrarily large inputs into a finite output space, which means (by pigeonhole) there are potentially infinitely many files out there that produce the same given hash.
Using a hash match as a proxy for file contents is therefore not foolproof, and as a result of this ambiguity it can potentially impinge on legitimate activities, or obscure a chain of evidence without recourse.
For example, say that initial hash was not correctly identified when it was added to that watch list, because their AI false-positived on it. Or it was submitted without review as being related to some censorable activity (legitimate under the 1st Amendment). All you have is a hash; you can't verify the content.
This is how censorship or social credit can easily happen under color of law in private parties' hands; this has been covered extensively in the EU discussions on client-side scanning (and why it's unreasonable given the repercussions of false positives).
When you match only hashes, you don't know what the underlying content is, aside from a likelihood that it may be the same as some other file, which is why you need to be able to verify that it is exactly the same.
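To make the mechanics concrete, here is a minimal sketch of hash-list matching (Python; the watchlist digest is made up, and real providers reportedly use proprietary perceptual fingerprints rather than plain SHA-256, so treat this purely as an illustration):

    import hashlib

    # Hypothetical watchlist of known-bad SHA-256 digests (hex).
    # Real systems reportedly use perceptual fingerprints; this is
    # only an illustration of the matching step itself.
    WATCHLIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def sha256_of_file(path: str) -> str:
        """Hash a file in chunks so large files need not fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def flag_if_match(path: str) -> bool:
        # True means only "this file's digest equals one on the list".
        # It does not, by itself, show anyone what the file depicts.
        return sha256_of_file(path) in WATCHLIST

Note what a match does and doesn't establish: equal digests are extremely strong evidence the bytes are identical, but the list entry itself is just an opaque string, and whether it was added correctly is exactly the provenance question raised above.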
When your job is to find such people/things you should be doing what's needed (within the law) to ensure the strongest case possible.
The technical details matter; processes must follow objective measures, be rational, and follow the Constitution. Law is procedural, and these are professionals. They should have gotten a warrant at the hash match.
Hash collisions have happened in the past (they have been demonstrated for older functions like MD5 and SHA-1), and mathematically, collisions must exist for any fixed-length hash.
The investigators should have gotten a warrant to confirm.
The failure to get a warrant here was a procedural failure, and left the door open for challenge. From what I can see in the write-up, the evidence should be dismissed.
Failure to do so effectively sets a precedent that anything not directly addressed by a previous court can be construed as done in good faith, even when it deprives people of their constitutional rights. That allows contradiction and further paralysis in the courts moving forward, and also promotes the interpretation that case law and legislative law override constitutional protections.
Arguably, if the constitutional violation is found and admitted, there is no valid good faith exemption that can be applied to nullify the constitutional cure. The constitution supersedes everything else, including procedure, law, and case law.
The violation must be cured lest the entire constitution lose its power through non-enforcement (and the system degrade towards tyranny), something that has arguably been happening as a result of long-standing corruption that goes unpunished.
Yes, the accused crimes may be heinous, but everyone is equal under the law. The moment this ceases to be true and happen with any regularity, is the day the rule of law has failed, and society then fails back to a natural law of violence. No one wants that.
It may not happen overnight, but it will happen regardless because history is full of examples where these dynamics cause those outcomes.
With very few exceptions today, judges who are older come to believe they are above the law and fail to check their power, and in the end they violate their sworn oaths. There is no punishment for them in most cases, and they never go back and correct the mistakes they make (afaik, it is a rare exception if it happens at all).
Any true American should have a solid educational foundation in Social Contract Theory, and the basis for society. You are right to be concerned about the circular reasoning. In the absence of external objective measures, processes, and procedures, such circular reasoning inevitably devolves into delusion.
Your argument is a bit disingenuous because it's not applicable in situations where there is clear law clarifying that something can't be done.
You're pretending that cops are using this in situations where it's known that a warrant is needed, as opposed to it being an exception to "fruit of the poisonous tree" doctrine when new caselaw is being made.
> Acting in good faith would be getting a warrant regardless
That's not what "good faith" means, that's just something entirely made up by you. From a reasonable perspective that could be described as foolish and a waste of time and the public's resources.
> It encourages the police to push the boundaries of legal behavior, because they still get to keep the evidence even if they are wrong and have committed an illegal search.
There's a constant tension between technology, crime and the police that's reflected in the history of 4th amendment jurisprudence and it's not at all like what you describe. The criminals are pushing the boundaries to which the police must catch up, and the law must determine what is fair as society changes over time. I'm not particularly pro cop, but you don't seem to be reasonable about any of this.
> You're pretending that cops are using this in situations where it's known that a warrant is needed, as opposed to it being an exception to "fruit of the poisonous tree" doctrine when new caselaw is being made.
The ACLU has a decent article about it [1].
Beyond that, there is a substantial power imbalance between law enforcement and private citizens implying that private citizens should be favored by the law where possible to even that out (this is well upheld in case law and documents from the founding of the country). As a private citizen, if you want to do something but are not sure about its legality, do you a) yell "YOLO" and go ahead and do it, b) consult a lawyer, or c) just not do it at all? I believe law enforcement should be held to that same bar.
> That's not what "good faith" means, that's just something entirely made up by you. From a reasonable perspective that could be described as foolish and a waste of time and the public's resources.
"Good faith" is at odds with recklessness and negligence; an action cannot be taken both recklessly or negligently and in good faith (supported by the majority opinion in United States v. Leon, which established the good faith exception). I cannot see a way in which taking an action of unknown legality, while possessing both the time and means to take an alternate action of known legality, is not acting with reckless disregard or negligence toward the rule of law, and thus incompatible with good faith.
From United States v. Leon: "The deference accorded to a magistrate's finding of probable cause for the issuance of a warrant does not preclude inquiry into the knowing or reckless falsity of the affidavit on which that determination was based, and the courts must also insist that the magistrate purport to perform his neutral and detached function and not serve merely as a rubber stamp for the police."
> There's a constant tension between technology, crime and the police that's reflected in the history of 4th amendment jurisprudence and it's not at all like what you describe. The criminals are pushing the boundaries to which the police must catch up, and the law must determine what is fair as society changes over time.
I would genuinely encourage you to review the history of 4th Amendment jurisprudence. It has been continually weakened to the point that only the most flagrant and loudly-announced violations are found unconstitutional, and even then the punishments are virtually non-existent.
Again, the ACLU has a very informative document on it literally called "The Crisis in Fourth Amendment Jurisprudence" [2]. Criminals aren't doing anything particularly new; stashing files somewhere and even encrypting them isn't anything new. Encrypting something in a way that was virtually undecipherable was possible even when the 4th Amendment was written. These are not novel criminal techniques, but the broad liberties given to police with regards to the 4th very much are.
You are welcome to consider me unreasonable. I think there is a fundamental gap in core beliefs causing that. I do not believe criminals are doing anything categorically new, nor that crime is suddenly worse, nor that crime is currently so bad that it demands an exceptional response. Under that set of beliefs, I think opposition to exceptional police powers is reasonable. You seem to believe the opposite, and I can see how my opposition seems unreasonable. I would say that you have fallen victim to unfounded propaganda, and I presume you have a similar accusation to level at me.
Regardless, I do appreciate you engaging in good faith and I wish you weren't ratio-ed on your comment. I do think you have brought interesting points to the discussion.
1. https://www.aclu.org/news/national-security/polices-get-out-...
2. https://www.aclu.org/publications/crisis-fourth-amendment-ju...
I'm not OP, but I consider your response quite reasonable.
I found your reply informative and specific, though I took a more fundamental approach in my response to OP, focusing on components required for a "rule of law" compared to a "rule by law" (kafka/soviet style).
I've done a lot of historic reading, and any time those components fail, violence increases proportionally to the lack of agency for non-violent resolution available.
It may just be speculation on my part, but given the repeated cycle and the detailed accounts, there seem to be parallels between objective measures of these components and witnessed events, which are what citizens use as a signal to decide whether to take violent action.
The events seem to follow quite accurately along what's been written in social contract theory, and from Thomas Paine's time & writings.
Obviously these types of writings are from times that were dark and violent, and violence benefits few if any, which is why there's good cause in trying to prevent that kind of degradation in existing systems, and the dynamics that cause it.
> the law must determine what is fair as society changes over time
This has been the ideal that has been put forth generally quite a bit, but it also almost always neglects the structural failings that must equally be addressed at the same time.
For example, at what point would you say that case law overrides the constitution?
According to the law, the constitution is supreme, and no representative has the authority to exceed or violate what is granted in the constitution; this pertains to the judiciary as well as the executive and legislative branches. It is up to the courts to enforce this as the last pillar of society (for non-violent conflict resolution).
In a society with a genuine rule of law, when there is an admitted constitutional violation, it must be immediately cured.
Issuing a decision that recognizes the violation while preventing a constitutional remedy is arbitrary and a direct contradiction; it exceeds the authority granted, negates the constitution, violates a sworn oath to uphold it, and calls the entire "rule of law" and its institutional credibility into question.
If you allow exceptions to the constitution, the fundamental component, "equality under the law", fails, and that means we don't have a "rule of law".
The natural outcome of this being increasing violence, which no one wants because it benefits no one.
Bringing society as a whole from a "rule of law" to a "rule by law", which inevitably (over time) causes society to fail violently towards totalitarianism/tyranny, is stupid but may have short term benefits for the corrupt. The harms of such are systemic and grow exponentially.
It is not a matter of catching up to offenders; it is a matter of competency. This is a professional occupation where corruption is an ongoing structural issue, and actions must be reasonable to protect both society and individual rights equally.
No true American would accept soviet-style kafka courts without any of the normal protections regardless of the crime, and that is the danger faced with the decision here.
Corruption of the state will always seek to use these types of systems to justify its existence, often by inducing crimes, planting evidence, or otherwise causing such crimes to be committed. Some of the accused may be actual offenders, but others may not be, and no differentiation is made. It may even be done for political purposes, as with The Gulag Archipelago.
It is a slippery slope which cannot be walked back later as the damage will have already been done with the punishment being front-loaded.
If there is a question regarding a boundary of a policy or process, you get a legal opinion, and base your actions on that opinion. This is well established in many sectors, including but not limited to the Business Judgment Rule. This is what is needed for this to be done in "good faith".
Allowing a blanket good-faith exemption and exclusion for government to do anything not directly covered by existing case-law without repercussion is a dangerous precedent towards tyranny, especially when they had the probable cause at the start to do it the right way.
It seems like you neglect many of these important foundational subjects. The lack of accountability control encourages the police and related apparatus to violate the law, and thus violate the public trust when this is unenforced.
The main outcome in a society absent a "rule of law" is overwhelming violence. This is what most people fail to realize, and many today embrace magical thinking and delusion.
Mass delusion has greatly overtaken this country and will soon destroy it if it is not stopped.
It is of critical importance to base our protective systems in objective measures which are external, and rational thinking and critical reasoning that logically follows without contradiction or circular reasoning.
To fail at rational thinking is to embrace delusion and become schizophrenic, a common malady in the totalitarian state (Joost Meerloo). This is covered well in writings on the banality of evil and on radical evil (WW2).
The crime accused is repugnant, but equality under the law, and constitutional protections are sacrosanct, and far more important than any single person.
The 4th amendment was written in 1791
The 4th amendment is about unreasonable searches and seizures, it is also about "persons, houses, papers, and effects", that is, not files stored in someone else's computer.
The police here considered that a hash match was a reasonable enough condition to conduct a search, and that Google's TOS allowed it. They were wrong, but it is not obvious that they were by just reading the 4th amendment, and the situation is rather new, so it is reasonable to assume that the police acted in good faith.
If I have documents in a locked briefcase in a hotel room, does the police get to read and copy them with the hotel operator's permission while I am in the shower? Assume that the locked briefcase is not particularly tamper proof. Anyone with decent lock picking skills can open one.
Is it your locked briefcase or the hotel's? I believe hotels have the ability to unlock their own safe, so I suspect they're allowed to ask the hotel's permission to look without a warrant.
Also, if a hotel cleaner found illegal material lying in your room, the police don't need a warrant to seize it and prosecute you.
If it's your briefcase then I think they need a warrant.
The type of person who cannot draw a line of semantic equivalence between papers and files on a computer is devoted to a level of obtuseness that is hard to take seriously. On par with people who think that "Arms" in the Second Amendment must only apply to muskets, cannons and such, and nothing after.
Tell me. When you put a bunch of papers in a folder, then put them in a cabinet, arguably under some semblance of organization in order to make later retrieval easier, what are you doing?
Filing.
The entire desktop metaphor (the basis of most computer UI) was chosen in part specifically for its compatibility with non-digital processes at the time software and the personal computer came to fruition. Files on a disk are, in a literal sense, your papers. They are stored in directories (lists of things and where to find them, a.k.a. folders), arranged under the abstractive auspices of a "file system", and at times "archived" for convenient storage or transport. Gee, same verbiage as what you do with papers... In fact, your papers have nothing to do with dead trees except as an accident of paper being the first prevalent medium for persistent info storage. Your papers cover the set of information through which you conduct your business with the outside world.
Those packets of paper are files. Those collections of 0s and 1s on a disk are files. Files are papers. Papers are protected. The idea that the involvement of a computer in the chain suddenly nullifies the essence of the point the Founders were making is as worthy of ridicule as thinking they went to war with their colonial parent state only because of a tax tiff, or that the Civil War was only about slavery. It's evidence of a worldview most tragically impoverished; either by accident (which, while regrettable, is at least amenable to remedy), or by intention to push a state of affairs, to which one can only shake one's head, push on with one's own life, and hope that there are enough like-minded individuals out there to counterbalance the aspirations of the individuals in question.
Already spent more cycles on this than I should have, good day.
And one thing we learn as we've been hanging around in Time long enough to recognize larger cycles: the world changes, people don't. Even as we change the world.
That rule has been around for quite a while, and looks worse for wear now
> That rule has been around for quite a while
The rule established in this case is new, hence TFA, and all the time the lawyers and judge wasted on it :)
If I may suggest where wires are getting crossed:
You are sort of assuming it's like a logic gate: if 4th amendment violation, bad evidence, criminal must go free. So when you say "the rule", you mean "the 4th amendment", not the actual ruling.
That's not how it works, because that simple ultimatum also has edge cases. So we built up this whole system around nominating juries and judges, and paying lawyers, over centuries, to argue out complicated things like weighing intentionality.
The cited ruling answers your question
The court ruled that at the time, when the State Police opened the file, they had no reason to believe that a warrant was required. While the search was later ruled unconstitutional, no court had ruled it was unconstitutional *at the time of the search*. One of the cornerstones of American jurisprudence is that you cannot go back in time and overrule decisions based on contemporary jurisprudence.
From the opinion: 'the exception can also apply where officers “committed a constitutional violation” by acting without a warrant under circumstances that “they did not reasonably know, at the time, [were] unconstitutional.”'
If you're interested, the discussion of a good faith exemption (and why fruit of the poison tree doesn't apply here) begins at page 40 of the doc.
As someone not from the US the fact that "uwu we didn't know" is an adequate defense for the police to do something illegal is really weird. Is there some crucial context I'm missing?
It dates back to the constitutional ban on "ex post facto" laws. Meaning, the government can't retroactively make something illegal. Which is a good thing, IMO.
So, for example, it's illegal at the federal level to manufacture machine guns (and I'm not going to get into a gun debate or nuances as to what defines a machine gun--it's just an example). But a machine gun is legal as long as it was manufactured before the ban went into place. Because the government can't say "hey, destroy that thing that was legal to manufacture, purchase, and own when it was manufactured."
This concept is extrapolated here to say "The cops didn't do anything illegal at the time. We have determined this is illegal behavior now, but we can't use that to overturn police decisions that were made when the behavior wasn't illegal. In the future, cops won't be able to do this."
The government has totally said “destroy the thing that we said was legal to manufacture, purchase, and own when it was manufactured.” That was the entire point of the bump stock ban, which attempted to reclassify an item that they had previously said was not a machine gun into a machine gun, and therefore illegal to own (and, on that theory, had always been illegal to own, so they weren't going to compensate people for them either).
More strictly, machine guns aren't banned by the federal government; rather, you have to have paid a tax to own one, and they've banned paying the tax for guns made after X date. If they decide to ban ownership, grandfathering is not guaranteed.
> Because the government can't say "hey, destroy that thing that was legal to manufacture, purchase, and own when it was manufactured."
Actually that's a totally normal way for bans to work.
If a state decides to ban a book from school libraries, the libraries don't get to keep the books on the shelves because they already had it.
The ban on ex post facto laws merely means that, if a ban on a given book is passed today a librarian can't be punished for having it on the shelves yesterday.
Grandfathering in exceptions is just politics - make a bitter pill easier to swallow for the people most impacted; delay the costs of any remediation; deal with historical/museum pieces; and simplify enforcement.
> If a state decides to ban a book from school libraries, the libraries don't get to keep the books on the shelves because they already had it.
That isn’t comparable.
Comparable is a ban on printing that book. Which would not be a ban on existing already printed copies. It would only be a ban on new copies.
>It dates back to the constitutional ban on "ex post facto" laws.
Not really, that's not how constitutionality works with respect to the government. Ex post facto is when the government wants to act against you, not when you want the government to behave. They use new decisions regarding constitutionality to undo previous decisions all the time; they just don't want to in this specific case, and are using "well, they would have been able to get a warrant anyway if they had known they'd needed one" to justify it.
It wasn't illegal (unconstitutional) at the time they did it, which is different from not knowing. They would have had to see the future to know.
Also keep in mind "illegal" and "unconstitutional" are different levels - "illegal" deals with specific laws, "unconstitutional" deals with violating a person's rights. Laws can be declared unconstitutional and repealed.
Laws can also be unconstitutional and remain a law--the law just can't be enforced. For example, in the state of Texas sodomy is still technically illegal, just the law is unenforceable. But if the Supreme Court overrules previous court decisions and says anti-sodomy laws are constitutional, the Texas law immediately becomes enforceable again.
The law is super complicated.
I don't know. I feel that if something is declared "unconstitutional" today, then it was always unconstitutional (from the inception of, or amendment to, the constitution). Unlike "illegal", where laws can come and go, so something that is illegal today can be legal tomorrow. And just like "ignorance is no excuse for breaking a law", I don't think ignorance should be an excuse for doing something unconstitutional.
Just another way cops can be terrible at their job and get away with it. If only citizens could use the Chappelle defense, "I'm sorry officer, I didn't know I couldn't do that".
Let's be clear. This guy had CSAM and was caught using digital forensics. The cops would've been able to secure the search warrant at the time had they been required to do so.
This isn't some innocent person who is spending time in prison because of a legal technicality.
I understand but this is literally how rights are eroded away. It's all good when it's the worst people on the planet, but very quickly it's abused against every one else. Once these rights go away, they don't come back.
The systemic downsides of police overreach happen whether or not a particular person was guilty. In general, throwing out the evidence is an effective way to fight back against overreach. I'm not worried about this guy, I'm worried about everyone else.
The idea that they would have been able to get a warrant limits the damage, but it's still iffy.
The opinion says at the time the warrantless search occurred, one appellate court had already held "that no warrant was required in those circumstances" (p 42). Only a year after the search occurred, did another appellate court rule the other way.
This is the main argument that the search met the good faith exception to the exclusionary rule (i.e. the rule that says you have to exclude evidence improperly obtained). This exception is supported in the opinion (at p41) with several citations including United States v. Ganias, 824 F.3d 199, 221–22 (2d Cir. 2016)
IANAL, but as I understand it, this exception is specifically about cases where precedent is being established. This same trick, or others substantially like it, won't work in the future, but because it was not a "known trick", the conviction still stands.
Not only that: prior to the search, a lower court had ruled that no warrant was required, so the search was in good faith. The new ruling overturns the earlier one, but at the time of the search, this kind of warrantless search had been ruled legal.
Davis v. U.S. 564 U.S. 229
I'm trying to imagine a more "real-world" example of this to see how I feel about it. I dislike that there is yet another loophole to gain access to peoples' data for legal reasons, but this does feel like a reasonable approach and a valid goal to pursue.
I guess it's like if someone noticed you had a case shaped exactly like a machine gun, told the police, and they went to check if it was registered or not? I suppose that seems perfectly reasonable, but I'm happy to hear counter-arguments.
The main factual components are as follows: Party A has rented out property to Party B. Party A performs surveillance on or around the property with Party B's knowledge and consent. Party A discovers very high probability evidence that Party B is committing crimes within the property, and then informs the police of their findings. Police obtain a warrant, using Party A's statements as evidence.
The closest "real world" analogy that comes to mind might be a real estate management company uses security cameras or some other method to determine that there is a crime occurring in a space that they are renting out to another party. The real estate management company then sends evidence to the police.
In the case of real property -- rental housing and warehouse/storage space in particular -- this happens all the time. I think that this ruling is eminently reasonable as a piece of case law (ie, the judge got the law as it exists correct). I also think this precedent strikes a healthy policy balance (ie, the law as it exists, interpreted how the judge in this case interprets it, produces a good policy outcome).
Is there any such thing as this surveillance applying to the inside of the renter's bedroom, bathroom, or a filing cabinet with medical or financial documents, or political ones for that matter?
I don't think there is, and I don't think you can reduce reality to being as simple as "owner has more rights over property than renter". The renter absolutely has at least a few rights over the owner, in at least a few defined contexts, because the owner "consented" to accept money in trade for use of the property.
> Is there any such thing as this surveillence applying to the inside of the renters bed room, bath room, filing cabinet with medical or financial documents, or political for that matter?
Yes. Entering property for regular maintenance. Any time a landlord or his agent enters a piece of property, there is implicit surveillance. Some places are more formal about this than others, but anyone who has rented, owned rental property, or managed rental property knows that any time maintenance occurs there's an implicit examination of the premises also happening...
But here is a more pertinent example: the regular comings and goings of people or property can be and often are observed from outside of a property. These can contribute to probable cause for a search of those premises even without direct observation. (E.g., large numbers of disheveled children moving through an apartment, or an exterior camera shot of a known fugitive entering the property.)
Here the police could obtain a warrant on the basis of the landlord's testimony without the landlord actually seeing the inside of the unit. This is somewhat similar to the case at hand, since Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
> I don't think you can reduce reality to being as simple as "owner has more right over property than renter"
But I make no such reduction, and neither does the opinion. In fact, quite the opposite -- this is part of why the court determines a warrant is required!
> ...Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
Google cannot have calculated that hash without examining the data in the image. They, or systems under their control, obviously looked at the image.
It should not legally matter whether the eyes are meat or machine... if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
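As an aside, the "computing the hash means examining the data" point is easy to demonstrate: a hash is a function of every byte of its input, so whatever computed it had to process the entire file. A quick sketch (Python; the byte string is a stand-in for real image data):

    import hashlib

    original = b"...stand-in for the image file's bytes..."
    tampered = bytearray(original)
    tampered[0] ^= 0x01  # flip a single bit

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(bytes(tampered)).hexdigest())
    # The digests differ completely: the hash depends on every byte,
    # so computing it necessarily entails reading the full content.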
> It should not legally matter whether the eyes are meat or machine
But it does matter, and, perhaps ironically, it matters in a way that gives you STRONGER (not weaker) fourth amendment rights. That's the entire TL;DR of the fine article.
If the court accepted this sentence of yours in isolation, then the court would have determined that no warrant was necessary in any case.
> if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
I don't disagree. In particular: I believe that the "Reasonable Person", to the extent that we remain stuck with the fiction, should be understood as having stronger privacy expectations in their phone or cloud account than they do even in their own bedroom or bathroom.
With respect to Google's actions in this case, this is an issue for your legislator and not the courts. The fourth amendment does not bind Google's hands in any way, and judges are not lawmakers.
> Yes. Entering property for regular maintenance.
In every state that I've lived in they must give advance notice (except for emergencies). They can't just show up and do a surprise check.
Only in residential properties, typically. There are also states that have no such requirement even on residential rentals.
In any case, I think it's a bit of a red herring and that the "regular comings and goings" case is more analogous.
But also that, at this point in the thread, we have reached a point where analogy stops being helpful and the actual thing has to be analyzed.
The point of the analogy is that the contents of one's files should be considered analogous to the contents of one's mind.
Whatever reasons we had in the past for deciding that financial or health data, or conversations with attorneys, or bathrooms and bedrooms, are private, those reasons should apply to one's documents, which include one's files.
Or at least if not, we should figure out and be able to show exactly how and why not with some argument that actually holds water.
Only after that does it make any sense to either defend or object to this development.
Fair enough.
If I import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit, the owner and operator of those properties are allowed to notify police of suspected illegal activities and unlock the storage locker if there is a warrant produced.
Maybe the warrant uses some abstraction of the contents of that storage locker like the shipping manifest or customs declaration. Maybe someone saw a shadow of an elephant tusk or rhino horn as I was closing the locker door.
Pretty much all rental storage, shipping container, 3rd party semi trailer pool, safe deposit box type services and business agreements stipulate that the user of the arbitrary box gets to deny the owner of the arbitrary box access so long as they're holding up their end of the deal. The point is that the user is wholly responsible for the security of the contents of the arbitrary box and the owner bears no liability for the contents. This is why (well run) rental storage places make you use your own lock and if you don't pay they add an additional lock rather than removing yours.
I don't think that argument supports the better analogy of breaking into a computer or filing cabinet owned by someone renting the space. Just because someone is renting space doesn't give you the right to do whatever you want to them. Cameras in bathrooms of a rented space would be another example.
But he wasn’t running a computer in a rented space, he was using storage space on google’s computers.
In an older comment I argued against analogies to rationalize this. I think honestly at face value it is possible to evaluate the goodness or badness of the decision.
> In an older comment I argued against analogies to rationalize this. I think honestly at face value it is possible to evaluate the goodness or badness of the decision.
I generally do agree that analogies became anti-useful in this thread relatively quickly.
However, I am not sure that avoiding analogies is actually possible for the courts. I mean, they can try, but at some point analogies are unavailable because most of the case law -- and, hell, the fourth amendment itself -- is written in terms of the non-digital world. Judges are forced to reason by analogy, because legal arguments will be advanced in terms of precedent that is inherently physical.
So there is value in hashing out the analogies, even if at some point they become tenuous, primarily because demonstrating the breaking points of the analogies is step zero in deviating from case law.
Yes, that is why I presented an alternative to the analogy of "import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit".
Like having the right to avoid being videoed in the bathroom, we have the right to avoid unreasonable search of our files by the authorities, whether stored locally or in the cloud.
Wait until you hear about third party doctrine.
I have this weird experience where people that get all their legal news from tech websites have really pointed views about fourth amendment jurisprudence and patent law.
The issue of course being the government then pressuring or requiring these companies to look for some sort of content as part of routine operations.
I agree. This is a case where the physical analogy leads us to (imo) the correct conclusion: compelling major property management companies to perform regular searches of their tenants' properties, and then to report any findings to the police, is hopefully something that most judges understand to be a clear violation of the fourth amendment.
> The issue of course being the government then pressuring or requiring these companies to look for some sort of content as part of routine operations.
Was that the case here?
Not requiring, but certainly pressure. See https://www.nytimes.com/2013/12/09/technology/tech-giants-is... for example. Also all of the heat Apple took over rolling back its perceptual hashing.
> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
And of course, what happens when it's a different list?
>> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
> This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
1. I don't understand how the text that comes on the right side of the colon substantiates the claim on the left side of the colon... I said "discovers", without mention of how it's discovered.
2. The specificity of the search cuts in exactly the opposite direction than you suggest; specificity makes the search far less invasive -- BUT, at the same time, the "everywhere and always" nature of the search makes it more invasive. The problem is the pervasiveness, not the specificity. See https://news.ycombinator.com/user?id=aiforecastthway
> And of course, what happens when it's a different list?
The fact that the search is targeted, that the search is highly specific, and that the conduct is plainly criminal are all, in fact, highly material. The decision here is not relevant to most of the "worst case scenarios" or even "bad scenarios" in your head, because prior assumptions would have been violated before this moment in the legal evaluation.
But with respect to your actual argument here... it's really a moot point. If the executive branch starts compelling companies to help them discover political enemies on basis of non-criminal activity, then the court's opinions will have exactly as much force as the army that court proves capable of raising, because such an executive would likely have no respect for the rule of law in any case...
It is reasonable for legislators to draft laws on a certain assumption of good faith, and for courts to interpret law on a certain assumption of good faith, because without that good faith the law is nothing more than a sequence of forceless ink blotches on paper anyways.
I don't think that changes anything. I think it's entirely reasonable for Party A to be actively watching the rented property to see if crimes are being committed, either by the renter (Party B) or by someone else.
The difference I do see, however, is that many places do have laws that restrict this sort of surveillance. If we're talking about an apartment building, a landlord can put cameras in common areas of the building, but cannot put cameras inside individual units. And with the exception of emergencies, many places require that a landlord give tenants some amount of notice before entering their unit.
So if Google is checking user images against known CSAM image hashes, are those user images sitting out in the common areas, or are they in an individual tenant's unit? I think it should be obvious that it's the latter, not the former.
Maybe this is more like a company that rents out storage units. Do storage companies generally have the right to enter their customers' storage units whenever they want, without notice or notification? Many storage companies allow customers to put their own locks on their units, so even if they have the right to enter whenever they want, regularly, in practice they certainly do not.
But like all analogies, this one is going to have flaws. Even if we can't match it up with a real-world example, maybe there's still no inconsistency or problem here. Google's ToS says they can and will do this sort of scanning, users agree to it, and there's no law saying Google can't do that sort of thing. Google itself has no obligation to preserve users' 4th Amendment rights; they passed along evidence to the police. I do think the police should be required to obtain a warrant before gaining access to the underlying data; the judge agrees on this, but the police get away with it in the original case due to the bullshit "good faith exception".
This is an excellent example, I think I get it now and I'm fully on-board. Thanks.
I could easily see an AirBNB owner calling the cops if they saw, for instance, child abuse happening on their property.
Ok. But that would also be an invasion of privacy. If the property you rented out was being used for trafficking and you don't want to be involved with trafficking, then the terms would first have to explicitly set out what is not allowed. Then they would also have to explicitly mention what measures are taken to enforce it and what punishments are imposed for violations. They should also mention steps that are taken for compliance.
Without full documentation of compliance measures, enforcement measures, and punishments imposed, violations of the rule cannot involve law enforcement who are restricted to acting on searches with warrants.
> If the property you rented out was being used for trafficking and you don’t want to be involved with trafficking, then the terms would have to first explicitly set what is not allowed.
I don't believe that's the case. You don't need to state that illegal activities are not allowed; that's the default.
> Then it would also have to explicitly mention what measures are taken to enforce it
When Airbnb used to allow cameras indoors, they did -- after some backlash -- require hosts to disclose the presence of the cameras.
> ... and what punishments are imposed for violations.
No, I don't think that is or should be necessary. If you do illegal things, the possible punishments don't need to be enumerated by the person who reports you to the police.
Put another way: if I'm hosting someone on Airbnb in the case where I'm living in the same property, and I walk into the kitchen to see my Airbnb guest dealing drugs, I am well within my rights to call the police, without having ever said anything up-front to my guest about whether or not that's acceptable behavior, or what the consequences might be. Having the drug deal instead caught on camera is no different, though I would agree that the presence of the cameras should have to be disclosed beforehand.
In Google's case, the "camera" (aka CSAM scanning) appears to have been disclosed beforehand.
> You don't need to state that illegal activities are not allowed; that's the default
Technically you would have to say so to be able to walk away from accusations of complicity.
>Without full documentation of compliance measures, enforcement measures, and punishments imposed, violations of the rule cannot involve law enforcement who are restricted to acting on searches with warrants.
That's not the only way police get information...
In the case of in-progress child abuse, that wouldn’t require a warrant as entry to prevent harm to a person is an exigent circumstance and falls under the Emergency Aid doctrine. If they found evidence or illegal items within plain view, that evidence would be permitted under the plain view doctrine. However, if they went and searched drawers or opened file cabinets, evidence discovered in that circumstance would not be allowed (opening a file cabinet isn’t required to solve the emergency aid situation typically.)
What’s really fascinating is that Child Protective Services acts as if they never need a warrant even if there is not an exigent circumstance. To my knowledge there hasn’t been a Supreme Court case challenging that, and circuits are split. Interesting reading about that if anyone is interested:
https://family.jotwell.com/ending-cps-home-searches-evasion-...
(The 4th Amendment is not limited to actual police BTW.)
With their hidden camera in the bathroom.
I just meant it as an analogy, not that I'm specifically on-board with AirBNB owners putting cameras in bathrooms.
Anyways, that's why I just rent hotel rooms, personally. :)
I think the real-world analogy would be to say that the case is shaped exactly like a machine gun and the hotel calls the police, who then open the case without a warrant. The "private search" doctrine allows the police to repeat a search done by a private party, but here (as in the machine gun case), the case was not actually searched by a private party.
But this court decision is a real world example, and not some esoteric edge case.
This is something I don’t think needs analogies to understand. SA/CP image and video distribution is an ongoing moderation, network, and storage issue. The right to not be under constant digital surveillance is somewhat protected in the constitution.
I like speech and privacy and am paranoid of corporate or government overreach, but I arrive at the same conclusion as you taking this court decision at face value.
Wait until Trump is in power and corporations are masterfully using these tools to “mow the grass” (if you want an existing example of this, look at Putin’s Russia, where people get jail time for any pro-Ukraine mentions on social media).
Yeah, I’m paranoid like I said, but in this case it seems like the hash of a file on Google’s remote storage was flagged as a potential match, and that was used as justification to request a warrant. That seems like common sense and did not involve employees snooping pre-warrant.
Apple's CSAM hash detection process (whose launch was rolled back) concerned me mainly because it ran on-device with no opt-out. If this is running on cloud storage then it sort of makes sense. You need to ensure you are not aiding or harboring actually harmful illegal material.
I get there are slippery slopes or whatever but the fact is you cannot just store whatever you wish in a rental. I don’t see this as opening mass regex surveillance of our communication channels. We have the patriot act to do that lol.
I think the better option is a system where the cloud provider cannot decrypt the files, and they’re not obligated to lift a finger to help the police because they have no knowledge of the content at all
In my opinion, despite the technical merits of an algorithm, encryption is only as trustworthy as the computer that generates and holds the private key.
I would personally not knowingly use a cloud provider to commit a crime. It is a fairly naive take to assume that because your browser uses HTTPS, data at rest and in processing isn't somehow observable.
And I see where you’re coming from but I am afraid that position severely overestimates the will of US people to trade freedom/privacy for security and the legislature to hold citizens’ privacy in such high regard.
I only worry that, in the case that renting becomes a roundabout way of granting more oversight ability to the government, then as home ownership rates decrease, government surveillance power increases.
Sure, it's facilitated through a third party (the owner), but the extrapolated pattern seems to be: "1. Only people in group B will have fewer rights, so people in group A shouldn't worry" followed closely by "2. Sorry, you've been priced out of group A."
In the case of renting, we end up in the situation where those who have enough wealth to own their own home are afforded extra privileges of privacy.
Now to bring this back to the cloud; the cynical part of me looks towards a future of cheap, cloud-only storage devices. Or an intermediate future of devices where cloud is first party and local storage is just enough of a hassle that people don't use it. And the result is that basically everyone now has the present day equivalent of local storage scanning.
If renting de-facto grants fewer rights, then in the future where "you'll own nothing and be happy", you'll also have no rights, and all the way people will say "as a renter, what did you expect?"
OK, I agree with you about setting a precedent that future storage will be scanned by default. Additionally, who will control the reference hash list, since making one necessitates hashing that illicit material?
I only hope the court systems escalate it and manage to protect free speech or unreasonable search and seizure or self incrimination or whatever if the CSAM hash comparisons are used against political opponents or music piracy or tax evasion or whatever.
Good point.
> You need to ensure you are not aiding or harboring actually harmful illegal material.
Is this actually true, legally speaking?
I’m unsure; I wrote that from an ethics standpoint. The Silk Road guy was got on conspiracy for attempted murder and not drug or human trafficking charges. So I’m unsure of the legal side.
I think if you knowingly provided a platform to distribute SA/CP/CSAM and the feds become involved you will be righteously fucked.
Reddit clamped down on the creepy *bait subreddits years ago. Maybe it was self-preservation on the business side or maybe it was forward looking about legal issues.
I’m not a lawyer; I was just mentioning things that I would follow for ethics, morals, and my sense of self-preservation.
I'm reasonably certain Reddit's decision to ban /r/jailbait and the like was driven by business/reputation. It was widely discussed for some time before it was banned and, IIRC given a "worst of" award by the admins at one point. Once it got major media coverage, Reddit got its first real content policy.
> The silk road guy was got on conspiracy for attempting murder and not drug or human trafficking charges
Actually, the murder stuff was not part of his sentencing or what they tried him for.
https://en.m.wikipedia.org/wiki/Ross_Ulbricht
It is worse. Trump will actually put people in concentration camps! Glenn Greenwald explains the issue here:
https://www.youtube.com/watch?v=8EjkstotxpE
It's like a digital 'smell'; Google is a drug sniffing dog.
I don't think the analogy holds for two reasons (which cut in opposite directions from the perspective of fourth amendment jurisprudence, fwiw).
First, the dragnet surveillance that Google performs is very different from the targeted surveillance that can be performed by a drug dog. Drug dogs are not used "everywhere and always"; rather, they are mostly used in situations where people have a less reasonable expectation of privacy than the expectation they have over their cloud storage accounts.
Second, the nature of the evidence is quite different. Drug-sniffing dogs are inscrutable and non-deterministic and transmit handler bias. Hashing algorithms can be interrogated and are deterministic and do not have such bias transferal issues; collisions do occur, but are rare, especially because the "search key" set is so minuscule relative to the space of possible hashes. The narrowness and precision of the hashing method preserves most of the privacy expectations that society is currently willing to recognize as objectively reasonable.
Here we get directly to the heart of the problem with the fictitious "reasonable person" used in tests like the Katz test, especially in cases where societal norms and technology co-evolve at a pace far more rapid than that of the courts.
This analogy can have two opposite meanings. Drug dogs can be anything from a prop used by the police to search your car without a warrant (a cop can always say in court the dog "alerted" them) to a useful drug detection tool.
>yet another loophole
What's the new legal loophole? I believe what's described above is the same as it's been for decades, if not centuries.
Disclosure: I work at Google but not on anything related to this.
If the police “wanted” to look. But what if they were notified of the material? Then the police should not need a warrant, right?
Don't they? If you tell the cops that your neighbor has drugs of significant quantity in their house, would they not still need a warrant to actually go into your neighbor's house?
Correct. A simple tip does not amount to probable cause by itself.
There are a lot of nuances to these situations of third-party involvement and the ruling discusses these at length. If you’re interested in the precise limits of the 4th amendment you should really just read the linked document.
They should as a matter of course. But I guess "papers" you entrust to someone else are a gray area. I personally think that it goes against the separation of police state and democracy, but I'm a nobody, so it doesn't matter I suppose.
No. What I send through my email is between me and God.
Is it reasonable? Even if the hash were MD5, given valid image files, the chances of an accidental collision are way lower than the chance that any other evidence given to a judge was false or misinterpreted.
This is NOT a secure hash. It is an image-similarity hash, which has many, many matches among unrelated images.
Unfortunately the decision didn't mention this at all, even though it is important. If the hash were even as good as MD5 (which is broken), I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts could safely assume there isn't one (and of course, if there is, the police would close the case). However, since this hash is not that good, the police cannot look at the image unless Google does first.
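To put rough numbers on that (a back-of-envelope of mine, with an assumed database size, not figures from the case): for a 128-bit hash like MD5, the chance that a random innocent file accidentally matches any entry in the flag list is negligible by a union bound:

    # Back-of-envelope odds of an *accidental* match against a flag list.
    # The database size is a made-up assumption for illustration.
    n_database = 10**7           # hypothetical flag-list size: 10 million hashes
    hash_space = 2**128          # output space of a 128-bit hash like MD5
    p_accident = n_database / hash_space
    print(f"{p_accident:.2e}")   # ~2.9e-32 -- effectively never by accident

    # Caveat: this says nothing about *deliberately* crafted collisions
    # (which MD5 does not resist), nor about perceptual hashes, which are
    # designed to "collide" on visually similar images.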
I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle there.
> I think the search should be allowed without warrant because even though a accidental collision is possible odds are so strongly against it that the courts can safely assume there isn't
The footnote in the decision bakes this property into the definition of a hash:
A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.
(Importantly, this is NOT an accurate definition of a hash for anyone remotely technical... of course hashing algorithms with significant hash collisions exist; tolerating collisions is even a design criterion for some hashing algorithms...)
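To make the distinction concrete, here's a minimal sketch (standard-library Python, toy byte strings of my own): a cryptographic hash scatters a one-byte change across the entire digest, so a match implies bit-for-bit identical input; a perceptual hash is engineered for the opposite behavior.

    import hashlib

    original = b"...image bytes..."
    tweaked  = b"...image bytes,.."   # one byte changed

    # Cryptographic hash: any change yields an unrelated digest, so a
    # match implies (to astronomical certainty) identical files.
    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tweaked).hexdigest())   # completely different

    # A perceptual hash inverts that design goal: visually similar images
    # are *supposed* to produce equal or nearby values -- exactly the
    # collision behavior the court's definition rules out.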
>I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle there.
Just go to a law library.
Do you know that judges routinely make decisions based on confidential documents not in the public record? Is that also bad?
> Just go to a law library.
The closest law library with a copy of the Federal Appendix is ~2 hrs away from me (or on LN if I pay for a subscription). It should be free and online, because it probably can't be copyrighted and because simplifying public access to the law is an unambiguous public good.
> Do you know that judges routinely make decisions based on confidential documents not in the public record? Is that also bad?
Of course not; the particularities of a given case are a very different concern from a document whose content is critical to interpretation of precedent. Also, the copyright claims on confidential documents might be valid, whereas any copyright claims on cases in the Federal Appendix probably aren't; see how the government edicts doctrine was applied in Georgia v. Public.Resource.Org.
Facts are incredibly relevant to the meaning of a case's holding. The issue with confidential documents isn't their copyrightability.
We can’t access appendix 29? Is that what you are saying?
You're assuming an accidental collision. Images can be generated that intentionally trigger the hash algorithm while they still appear as something else (a meme, funny photo, etc.) to a person looking at them. This opens many possibilities for "bad people" to target people they hate (like an alternative to swatting, etc.)
Yes. How else would you prevent framing someone?
So you're saying that I craft a file that has the same hash as a CSAM one, I give it to you, you upload it to google, but it also happens to be CSAM, and I've somehow framed you?
My point is that a hash (granted, I'm assuming that we're talking about a cryptographic hash function, which is not clear) is much closer to "This is the file" than someone actually looking at it, and that it's definitely more proof of them having that sort of content than any other type of evidence.
These are perceptual hashes designed on purpose to be a little vague and broad so they catch transformed images. Not cryptographic hashes.
I don't understand. If you contend that it's even better evidence than actually having the file and looking at it, how is not reasonable to then need a judge to issue a warrant to look at it? Are you saying it would be more reasonable to skip that part and go directly to arrest?
It seems like a large part of the ruling hinges on the fact that Google matched the image hash to a hash of a known child pornography image, but didn't require an employee to actually look at that image before reporting it to the police. If they had visually confirmed it was the image they suspected it was based on the hash, then no warrant would have been required, but the judge reasons that the image hash match is not equivalent to a visual confirmation of the image. Maybe there's some slight doubt about whether or not the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for any hash collision to occur, depending on the specific hash strategy.
I think it would obviously be less than ideal for Google to require an employee visually inspect child pornography identified by image hash before informing a legal authority like the police. So it seems more likely that the remedy to this situation would be for the police to obtain a warrant after getting the tip but before requesting the raw data from Google.
Would the image hash match qualify as probable cause enough for a warrant? On page 4 the judge stops short of setting precedent on whether it would have or not. Seems likely that it would be solid probable cause to me, but sometimes judges or courts have a unique interpretation of technology that I don't always share, and leaving it open to individual interpretation can lead to conflicting results.
The hashes involved in stuff like this, as with copyright auto-matching, are perceptual hashes (https://en.wikipedia.org/wiki/Perceptual_hashing), not cryptographic hashes. False matches are common enough that perceptual hashing attacks are already a thing, in use to manipulate search engine results (see the example in a random paper on the subject https://gangw.cs.illinois.edu/PHashing.pdf).
It seems like that is very relevant information that was not considered by the court. If this were a cryptographic hash, I would say with high confidence that it is the same image, and so Google effectively examined it: there is a small chance that some unrelated file (which might not even be a picture) matches, but odds are the universe will end before that happens, so the courts could consider it the same image for search purposes. However, because there are many false-positive cases, there are reasonable odds that the image is legal, and so a higher standard for search is needed: a warrant.
>so the courts can consider it the same image for search purposes
An important part of the ruling seems to be that neither Google nor the police had the original image or any information about it, so the police viewing the image gave them more information than Google matching the hash gave Google: for example, consider how the suspect being in the image would have changed the case, or what might happen if the image turned out not to be CSAM, but showed the suspect storing drugs somewhere, or was even, somehow, something entirely legal but embarrassing to the suspect. This isn't changed by the type of hash.
That's the exact conclusion that was reached - the search required a warrant.
The court implied even a hash without collisions would not count, when it should.
It shouldn't. Google hasn't otherwise seen the image, so the employee couldn't have witnessed a crime. There are reportedly many perfectly legal images that end up in these almost perfectly unaccountable databases.
That makes sense - if they were using a cryptographic hash then people could get around it by making tiny changes to the file. I’ve used some reverse image search tools, which use perceptual hashing under the hood, to find the original source for art that gets shared without attribution (SauceNAO is pretty solid). They’re good, but they definitely have false positives.
Now you’ve got me interested in what’s going on under the hood, lol. It’s probably like any other statistical model: you can decrease your false negatives (images people have cropped or added watermarks/text to), but at the cost of increased false positives.
> what's going on under the hood
Rather simple methods are surprisingly effective [1]. There's sure to be more NN fanciness nowadays (like Apple's proposed NeuralHash), but I've used the algorithms described by [1] to great effect in the not-too-distant past. The HN discussion linked in that article is also worth a read.
[1] https://www.hackerfactor.com/blog/index.php?/archives/432-Lo...
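For the curious, the aHash method described in [1] fits in a few lines; here's a minimal sketch assuming Pillow (the 8x8 size follows the article; the exact similarity threshold is something you'd tune):

    from PIL import Image   # pip install Pillow

    def average_hash(path, size=8):
        # Shrink to size x size grayscale, then emit one bit per pixel:
        # 1 if the pixel is brighter than the mean, else 0.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits   # a 64-bit integer for the default size

    def distance(h1, h2):
        # Hamming distance between two hashes; smaller means more similar.
        return bin(h1 ^ h2).count("1")

The cutoff you pick for distance() is exactly the false-positive/false-negative dial discussed above.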
This submission is the first I've heard of the concept. Are there OSS implementations available? Could I use this, say, to deduplicate resized or re-jpg-compressed images?
Probably yeah, though there’s significant overlap between how much distortion to accept vs the number of false positives.
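There are OSS implementations, e.g. the ImageHash package on PyPI. A sketch of dedup with it (the directory, glob, and threshold of 5 are assumptions to tune, not magic numbers):

    # pip install ImageHash Pillow
    from pathlib import Path
    import imagehash
    from PIL import Image

    THRESHOLD = 5   # max Hamming distance to treat two images as duplicates

    seen = []       # (hash, path) pairs for images kept so far
    for path in sorted(Path("photos").glob("*.jpg")):
        h = imagehash.phash(Image.open(path))   # tolerant of resize/re-encode
        match = next((p for prev, p in seen if prev - h <= THRESHOLD), None)
        if match is not None:
            print(f"{path} looks like a duplicate of {match}")
        else:
            seen.append((h, path))

Subtracting two ImageHash values gives the Hamming distance, so raising THRESHOLD accepts more distortion at the cost of more false positives, as noted above.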
The hash functions used for these purposes are usually not cryptographic hashes. They are "perceptual hashes" that allows for approximate matches (e.g. if the image has been scaled or brightness-adjusted). https://en.wikipedia.org/wiki/Perceptual_hashing
These hashes are not collision-resistant.
They should be called embeddings.
> Maybe there's some slight doubt in whether or not the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for any hash collision depending on the specific hash strategy.
If it was a cryptographic hash (apparently not), this mathematical near-certainty is necessary but not sufficient. Like cryptography used for confidentiality or integrity, the math doesn't at all guarantee the outcome; the implementation is the most important factor.
Each entry in the illegal hash database, for example, relies on some person characterizing the original image as illegal - there is no mathematical formula for defining illegal images - and that characterization could be inaccurate. It also relies on the database's integrity, the user's application and its implementation, even the hash calculator. People on HN can imagine lots of things that could go wrong.
If I were a judge, I'd just want to know if someone witnessed CP or not. It might be unpleasant, but we're talking about arresting someone for CP, which even sans conviction can be highly traumatic (including time in jail, waiting for bail or trial, as a ~child molester) and destroy people's lives and reputations. Do you fancy appearing at a bail hearing about your CP charge, even if you are innocent? 'Kids, I have something to tell you ...'; 'Boss, I can't work for a couple weeks because ...'.
It seems like there just needs to be case law about the qualifications of an image hash in order to be counted as probable cause for a warrant. Of course you could make an image hash be arbitrarily good or bad.
I am not at all opposed to any of this "get a damn warrant" pushback from judges.
I am also not at all opposed to Google searching its cloud storage for this kind of content. There are a lot of things for which I would mind a cloud provider going on fishing expeditions to find potentially illegal activity, but this I am fine with.
I do strongly object to companies searching content for illegal activity on devices in my possession absent probable cause and a warrant (that they would have to get in a way other than searching my device). Likewise I object to the pervasive and mostly invisible delivery to the cloud of nearly everything I do on devices I possess.
In other words, I want custody of my stuff and for the physical possession of my stuff to be protected by the 4th amendment and not subject to corporate search either. Things that I willingly give to cloud providers that they have custody of I am fine with the cloud provider doing limited searches and the necessary reporting to authorities. The line is who actually has the bits present on a thing they hold.
I think if the hashes were made available to the public, we should just flood the internet with matching but completely innocuous images so they can no longer be used to justify a search
>please use the original title, unless it is misleading or linkbait; don't editorialize. (@dang)
On topic, I like this quote from the first page of the opinion:
>A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.” United States v. Ackerman, 831 F.3d 1292, 1294 (10th Cir. 2016) (Gorsuch, J.).
It's amusing to me that they use a supreme court case as a reference for what a hash is rather than eg. a textbook. It makes sense when you consider how the court system works but it is amusing nonetheless that the courts have their own body of CS literature.
Maybe someone could publish a "CS for Judges" book that teaches as much CS as possible using only court decisions. That could actually have a real use case when you think of it. (As other commenters pointed out, the hashing definition given here could use a bit more qualification, and should at least differentiate between neural hashes and traditional ones like MD5, especially as it relates to the likeliness that "another set of data will produce the same value." Perhaps that could be an author's note in my "CS for Judges" book.)
> Maybe someone could publish a "CS for Judges" book
At last, a form of civic participation which seems both helpful and exciting to me.
That said, I am worried that a lot of necessary content may not be easy to introduce with hard precedent, and direct advice or dicta might somehow (?) not be permitted in a case since it's not adversarial... A new career as a professional expert witness--even on computer topics--sounds rather dreary.
I bet that book would end up with some very strange content, like attributing the invention of all sorts of obvious things to patent trolls.
What's so weird about this? CS literature is not legally binding in any way. Of course a judge would rather quote a previous ruling by fellow judge than a textbook, Wikipedia, or similar sources.
I think the operative word was "amusing"--which it is--but even then there's a difference between:
1. That's weird and represents an operational error that breaks the rules.
2. That's weird and represents a potential deficiency in how the system or rules have been made.
I don't think anyone is suggesting #1, and #2 is a lot more defensible.
They didn't say it was weird.
From what I understand, a judge is free to decide matters of fact on his own, which could include facts from a textbook. Also, it is not clear that matters of fact decided by the Supreme Court are binding to lower courts. Additionally, facts and even meanings of words themselves can change, which makes previous findings of fact no longer applicable. That's actually true in this case as well. "Hash" as used in the context of images generally meant something like an MD5 hash (which itself is now more prone to collisions than before). The "hash" in the Google case appears to be a perceptual hash, which I don't think was as commonly used until recently (I could be wrong here). So whatever findings of fact were made by the Supreme Court about how reliable a hash is are not necessarily relevant to begin with. Looking at this specific case, here is the full quote from United States v. Ackerman:
>How does AOL's screening system work? It relies on hash value matching. A hash value is (usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value. Some consider a hash value as a sort of digital fingerprint. See Richard P. Salgado, Fourth Amendment Search and the Power of the Hash, 119 Harv. L. Rev. F. 38, 38-40 (2005). AOL's automated filter works by identifying the hash values of images attached to emails sent through its mail servers.[0]
I don't have access to this issue of Harvard Law Review but looking at the first page, it says:
>Hash algorithms are used to confirm that when a copy of data is made, the original is unaltered and the copy is identical, bit-for-bit.[1]
This is clearly referring to a cryptographic hash like MD5, not a perceptual hash/neural hash as in Google. So the actual source here is not necessarily dealing with the same matters of fact as the source of the quote here (although there could be valid comparisons between them).
All this said, judges feel more confident in citing a Supreme Court case than a textbook because 1. it is easier to understand for them 2. the matter of fact is then already tied to a legal matter, instead of the judge having to make that leap himself and also 3. judges are more likely to read relevant case law to begin with since they will read it to find precedent in matters of law – which are binding to lower courts. This is why a "CS for Judges" could be a useful reference book.
Lastly, I should have looked a bit more closely at the quoted case. This is actually not a supreme court case at all. Gorsuch was nominated in 2017 and this case is from 2016.
[0] https://casetext.com/case/united-states-v-ackerman-12
[1] https://heinonline.org/HOL/LandingPage?handle=hein.journals/...
> As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
So this means this conviction is upheld but future convictions may be overturned if they similarly don't acquire a warrant?
> the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
This "good faith exception" is so absurd I struggle to believe that it's real.
Ordinary citizens are expected to understand and scrupulously abide by all of the law, but it's enough for law enforcement to believe that what they're doing is legal even if it isn't?
What that is is a punch line from a Chapelle bit[1], not a reasonable part of the justice system.
---
1. https://www.youtube.com/watch?v=0WlmScgbdws
The courts accept good-faith arguments at times. They will give reduced sentences, or even none at all, if they think you acted in good faith. There are enough situations where it is legal to kill someone that there are laws to make clear when one person can legally kill another (hopefully they never apply to you).
Note that this case is not about ignorance of the law. This is "I knew the law and was trying to follow it; I just honestly thought it didn't apply because of some tricky situation that isn't 100% clear."
The difference between "I don't know" and "I thought it worked like this" is purely a matter of degrees of ignorance. It sounds like the cops were ignorant of the law in the same way as someone who is completely unaware of it, just to a lesser degree. Unless they were misinformed about the origins of what they were looking at, it doesn't seem like it would be a matter of good faith, but purely negligence.
There was a circuit split and a matter of first impression in this circuit.
“Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
> “Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
We're talking about orthogonal issues.
Mens rea applies to whether the person performs the act on purpose. Not whether they were aware that the act was illegal.
Let's use fraud as an example since you brought it up.
If I bought an item from someone and used counterfeit money on purpose, that would be fraud. Even if I truly believed that doing so was legal. But it wouldn't be fraud if I didn't know that the money was counterfeit.
At the time, what they did was assumed to be legal because no one had ruled on it.
Now, there is prior case law declaring it illegal.
The ruling is made in such a way to say “we were allowing this, but we shouldn’t have been, so we wont allow it going forward”.
I am not a legal scholar, but that’s the best way I can explain it. The way that the judicial system applies to law is incredibly complex and inconsistent.
This is a deeply problematic way to operate. En masse, it has the right result, but, for the individual that will have their life turned upside down, the negative impact is effectively catastrophic.
This ends up feeling a lot like gambling in a casino. The casino can afford to bet and lose much more than the individual.
I think the full reasoning here is something like
1. It was unclear if a warrant was necessary
2. Any judge would have given a warrant
3. You didn't get a warrant
4. A warrant was actually required.
Thus, it's not clear that any harm was caused because the right wasn't clearly enshrined and had the police known that it was, they likely would have followed the correct process. There was no intention to violate rights, and no advantage gained from even the inadvertent violation of rights. But the process is updated for the future.
Yeah that is about my understanding as well.
I don't care nearly as much about the 4th amendment when the person is guilty. I care a lot when the person is innocent. Searches of innocent people are costly for the innocent person, and so we require warrants to ensure such searches are minimized (even though most warrants are approved, the act of getting one forces the police to be careful). If a search were completely costless to the innocent I wouldn't be against searches, but there are many ways a search that finds nothing is costly to the innocent.
If the average person is illegally searched, but turns out to be innocent, what are the chances they bother to take the police to court? It's not like they're going to be jailed or convicted, so many people would prefer to just try to move on with their life rather than spend thousands of dollars litigating a case in the hopes of a payout that could easily be denied if the judge decides the cops were too stupid to understand the law rather than maliciously breaking it.
Because of that, precedent is largely going to be set with guilty parties, but will apply equally to violations of the rights of the innocent.
There is the important question.
I want guilty people to go free if their 4th amendment rights are violated; that's the only way to ensure police are meticulous about protecting people's rights.
It doesn’t seem like it was wrong in this specific case however.
This specific conviction upheld, yes. But no, this ruling doesn't speak to whether or not any future convictions may be overturned.
It simply means that at the trial court level, future prosecutions will not be able to rely on the good faith exception to the exclusionary rule if warrantless inculpatory evidence is obtained under similar circumstances. If the government were to try to present such evidence at trial and the trial judge were to admit it over the objection of the defendant, then that would present a specific ground for appeal.
This ruling merely bolsters the 'better to get a warrant' spirit of the Fourth Amendment.
Yep, that's basically it.
It's crazy that the most dangerous people one regularly encounters can do anything they want as long as they believe they can do it. The good faith exemption has to be one of the most fascist laws on the books today.
> "the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
In no other context or career can you do anything you want and get away with it just as long as you say you thought you could. You'd think police officers would be held to a higher standard, not no standard.
And specifically with respect to the law, breaking a law and claiming you didn't know you did anything wrong as an individual is not considered a valid defense in our justice system. This same type of standard should apply even more to trained law enforcement, not less, otherwise it becomes a double standard.
No, this is breaking the law while saying "this looked like one of the situations where I already know the law doesn't apply." If Google had looked at the actual image and said it was child porn, instead of just saying it was similar to some image that is child porn, this would be 100% legal, as the courts have already said. That difference is subtle enough that I can see how someone would get it wrong (and in fact I would expect other courts to rule differently).
Doesn’t address the point. Does everyone get a good faith exception from laws they don’t know or misunderstand, or just the police?
When the law isn't clear, and reasonable people would therefore understand it as you did, you should get a pass.
> you do anything you want and get away with it just as long as you say you thought you could.
Isn't that the motto of VC? Uber, AirBnB, WeWork, etc...
Sorry, I should have been more explicit. I thought the context provided it.
> you do any illegal action you want and get away with it just as long as you say you thought you could.
And as for corporations: that's the point of incorporating. Reducing liability.
That's not what this means. One can ask whether the belief is reasonable, that is justifiable by a reasoning process. The argument for applying the GFE in this case is that the probability of false positives from a perceptual hash match is low enough that it's OK to assume it's legit and open the image to verify that it was indeed child porn. They then used that finding to get warrants to search the guy's gmail account and later his home.
Good Samaritan laws tend to function similarly
If I'm not a professional and I hurt someone while trying to save their life by doing something stupid, that's understandable ignorance.
If a doctor stops to help someone and hurts them because the doctor did something stupid, that is malpractice and could get them sued and maybe get their license revoked.
Would you hire a programmer who refused to learn how to code and claimed "good faith" every time they screwed things up? Good faith shouldn't cover willful ignorance. A cop is hired to know, understand, and enforce the law. If they can't do that, they should be fired.
It's not exactly the same imo, since GS laws are meant to protect someone who is genuinely trying to do what a reasonable person could consider "something positive"
It is not "you say you thought you could", it is "you have reasonable evidence a crime is happening".
The reasonable evidence here is "Google said it," and it was true.
If the police arrive at a house on a domestic abuse call, and hears screams for help, is breaking down the door done in good faith?
In this case you're correct. But the good faith exemption is far broader than this and applies to even officer's completely personal false beliefs in their authority.
> In no other context or career can you do anything you want and get away with it just as long as you say you thought you could
Many white collar crimes, financial and securities fraud/violations can be thwarted this way
Basically, ignorance of the law is no excuse except when you specifically write the law to say it is an excuse
Something that contributes to the DOJ not really trying to bring convictions against individuals at bigger financial institutions
And yeah, a lot of people make sure to write their industry’s laws that way
I think the judge chose to relax a lot on this one due to the circumstances. Releasing into society a man found with 4,000 child porn photos on his computer would be a shame.
But yeah, this opens the gates of precedent too wide for tyranny, unfortunately...
Wow, do I ever not know how I feel about the "good faith exception."
It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
That's a bingo. That's exactly what they do, and why so many cops know less about the law than random citizens. A better society would have high standards for the knowledge expected of police officers, including things like requiring 4-year criminal justice or pre-law degree to be eligible to be hired, rather than capping IQ and preferring people who have had prior experience in conducting violent actions.
In some countries you are required to study the law in order to become a police officer. It's part of the curriculum in the three year bachelor level course you must pass to become a police officer in Norway for instance. See https://en.wikipedia.org/wiki/Norwegian_Police_University_Co... and https://en.wikipedia.org/wiki/Norwegian_Police_Service
Yes, this likely explains part of why the Norwegian police behave like professionals who are trying to do their job with high standards of performance and behavior and the police in the US behave like a bunch of drinking buddies that used to be bullies in high school trying to find their next target to harass.
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
It is not so lax as that. It's limited to situations where a reasonable person who knows exactly what the law and previous court rulings say might conclude that a certain action is legal. In this case, other Federal Circuit courts have ruled that similar actions are legal.
The good faith exception requires the belief be reasonable. Ignorance of clearly settled law is not reasonable, it should be a situation where the law was unclear, had conflicting interpretations or could otherwise be interpreted the way the police did by a reasonable person.
The problem with the internet nowadays is that a few big players are making up their own law. Very often it is against local laws, but nobody can fight it. For example, someone created some content, but another person uploaded it and got better scores, which got the original poster blocked. Another example: children were playing a violin concert and the audio got removed due to an alleged copyright violation. No possibility to appeal; nobody sane would go to court. It just goes this way...
The Fourth Amendment didn't help here, unfortunately. Or, perhaps fortunately.
Still, 25 years for possessing kiddie porn, damn.
The harshness of the sentence is not for the act of keeping the photos in itself, but for the individual suffering and social damage caused by the actions he incentivizes when he consumes such content.
Consumption per se does not incentivize it, though; procurement does. It's not unreasonable to causally connect one to the other, but I still think that it needs to be done explicitly. Strict liability for possession in particular is nonsense.
There's also an interesting question wrt simulated (drawn, rendered etc) CSAM, especially now that AI image generators can produce it in bulk. There's no individual suffering nor social damage involved in that at any point, yet it's equally illegal in most jurisdictions, and the penalties aren't any lighter. I've yet to see any sensible arguments in favor of this arrangement - it appears to be purely a "crime against nature" kind of moral panic over the extreme ickiness of the act as opposed to any actual harm caused by it.
> Consumption per se does not incentivize it,
It can. In several public cases it seems fairly clear that there is a "community" aspect to these productions and many of these sites highlight the number of downloads or views of an image. It creates an environment where creators are incentivized to go out of their way to produce "popular" material.
> Strict liability for possession in particular is nonsense.
I entirely disagree. Offenders tend to increase their level of offense. This is about preventing the problem from becoming worse and new victims being created. It's effectively the same reason we harshly prosecute people who torture animals.
> nor social damage involved in that at any point,
That's a bold claim. Is it based on any facts or study?
> over the extreme ickiness of the act as opposed to any actual harm caused by it.
It's about the potential class of victims and the outrageous life long damage that can be done to them. The appropriate response to recognizing these feelings isn't to hand them AI generated material to sate their desires. It's to get them into therapy immediately.
> > Strict liability for possession in particular is nonsense.
> I entirely disagree. Offenders tend to increase their level of offense.
For an example of the unintended consequences of strict liability for possession, look at Germany, where the legal advice for what to do if you come across CSAM is to delete it and say nothing, because reporting it to the police would incriminate you for possession, and if you deleted it, a prosecutor could charge you with evidence tampering on top of that.
Also, as I understand it, in the US there have also been cases of minors deliberately taking "nudes" or sexting with other minors leading to charges of production and distribution of CSAM for their own pictures they took of themselves.
The production and distribution of CSAM should 100% be criminalized and going after possession seems reasonable to me. But clearly the laws are lacking if they also criminalize horny teenagers being stupid or people trying to do the right thing and report CSAM they come across.
> The appropriate response to recognizing these feelings is [..] to get them into therapy immediately.
Also 100% agree with this. In Germany there was a widespread media campaign "Kein Täter werden" (roughly "not becoming a predator") targeting adults who find themselves sexually attracted to children. They anonymized the actors for obvious reasons, but I like that they portrayed the pedophiles with a wide range of characters from different walks of life and different age groups. The message was to seek therapy. They provided a hotline as well as ways of getting additional information anonymously.
Loudly yelling "kill all pedophiles" doesn't help prevent child abuse (in fact, there is a tendency for abusers to join in because it provides cover and they often don't see themselves as the problem) but feeding into pedophilia certainly isn't helpful either. The correct answer is therapy but also moving away from a culture (and it is cultural not innate) that fetishizes youth, especially in women (sorry, "girls"). This also means fighting all child abuse, not just sexual.
> It's effectively the same reason we harshly prosecute people who torture animals.
Alas harsh sentencing isn't therapy. Arguably incarceration merely acts as a pause button at best. You don't get rid of nazis by making them hang out on Stormfront or 8kun.
> and it is cultural not innate
> he/they
I sometimes see people make this assertion, and interestingly enough it's usually trans people. What exactly makes you say this?
I'm not trans, so I can't speak for trans people (not that any individual person could speak for an entire demographic group). And to pre-empt the follow-up question "then what's with the pronouns": gender is multi-faceted and complex, pronouns are just one aspect of it. Think of it as gender non-conformance. Like wearing a dress as a bloke.
If I had to hazard a guess for what might be causing the correlation you're seeing, I'd assume that being trans usually comes downstream from reflecting on social phenomena and cultural expectations. Being trans is by definition not the cultural norm (even the word itself implies some form of "misalignment" of identities and cultural expectations) so if you are familiar enough with it to claim it as your own identity, you probably did a lot of research into it, especially if you're from a generation where it was more of a taboo subject and not acknowledged in the broader frame of cultural references (e.g. good luck if your only exposure to the concept is from films like Ace Ventura). This can lead you down a rabbit hole if you try to understand not just that one aspect of your identity and experience.
Your actual question comes off as a bit upset (but again, that may be cultural - I'm not American nor is English my native language although I did pick it up at an early age) so let me rephrase it in a way that makes me more inclined to answer it: "Why do you think that is?"
This still feels somewhat like proving the null hypothesis as "it's in our genes" is not normally the go-to explanation we accept when wondering about any random part of human behavior but let's start by turning it around. Sure, we can make up all kinds of just-so evopsych rationalizations why human males should be sexually attracted to post-pubescent young and healthy human females but the same reasoning would also predict a preference for a prolific pelvis (making it more likely they successfully give birth) and pregnant women (demonstrating actual fertility) or mothers (demonstrating child-rearing abilities as well as fertility) and so on. Ultimately these are all just-so stories to rationalize a pre-existing assumption about human behavior that contradicts actual archeological research (which adherents often explain by claiming archeology has been corrupted by ideology but let's not get into fallacious claims of being "free of ideology" and where all of that ultimately leads).
The answer then is simple: I say fetishization of youth in women is cultural rather than innate because it is not a consistent phenomenon throughout history nor even globally in the modern age.
It's important to distinguish between the two factors at play in child sexual abuse: sexual attraction (i.e. pedophilia) and power dynamics. This isn't unique to child sexual abuse. Regular rape also often is more about power than attraction. Everyone is familiar with the concept of prison rape and historically, sometimes even today, a male rapist of other men in a prison is not by default considered gay or effeminate and the act may be seen as demonstrating dominance, demeaning and emasculating the victim.
The reason I'm talking about "fetishization" is because our culture (and US culture particularly so) first of all very much embraces narratives of dominance as a positive, from competition over cooperation to the ahistorical "great man" narrative of historical events. This shouldn't be surprising as these narratives are useful to those benefitting from the status quo by placating those who don't, much like fear of hellfire and the promise of heaven placated those caught at the wrong end of medieval Europe's "divine right"-based feudal system (up to a point).
Our culture is very much male-centric (patriarchy is often misunderstood - even by some so-called feminists - to mean that all men are given power over all women but that's literally why intersectionality became a thing before being misrepresented in "oppression olympics" memes, so I'll avoid overloaded terms like those here). This goes hand-in-hand with the "traditional" perspective that the man/father is the head of the household and should rule it with determination and "tough love" the same way the state should lead the people (and the president the state), each family representing a scale model of the dynamics of society at large, justifying the authority of the state in the authority of the father and vice versa.
So youth and feminity in this case acts as a stand-in for submissiveness. Under the "loving care" of a controlling father figure, a youthful woman is sexually pure/innocent ("uncorrupted") and meek/submissive. By evoking signifiers of childhood (e.g. the quintessential "cheerleader" costume, braces, pigtails, lispy speech, lollipops, pastels/pink) this is shifted further into an implausibly childlike innocence and paired with the sexual allure of "corrupting" that innocence (the fantasy of "defloration" leaving a "permanent mark") based on the implicit understanding that the sexual act empowers the penetrating man and permanently devalues the penetrated "girl" lest she remains faithful to the man should he want to "keep" her. Note that we don't even need to adopt the "sex-negative" feminist perspective on penetrative sex as inherently humiliating, the idea of penetrating = empowering and penetrated = disempowered is almost omnipresent in our culture as it is (note that this has nothing to do with passivity - receiving oral sex for example is seen as empowering - and arguably the framing around literal "penetration" alone is imprecise as e.g. right-wing attitudes towards cunnilingus as being emasculating for a male "giver" show).
If all this cultural analysis is too wishy-washy for you, historical records still don't align with the idea that fetishization of female youth is innate. Young adults, i.e. women in their 20s or very early 30s, yes, sure, but not "sweet 16" or "barely legal". Arguably US culture has even gotten better in this over my lifetime given that we went from the early Britney Spears school uniform sexualization to Megan Thee Stallion and with the crackdown on public forums like the `r/jailbait` Subreddit, but there is still a very strong undercurrent, especially among conservative men.
> then what's with the pronouns
It's been the case that when I encounter people with non-normative pronouns they're trans, but you're right that isn't necessarily the case. My mistake!
I know I asked the initial question, but I guess I'm confused what exactly this conversation is about. Is the idea that people are only ever attracted to sixteen year olds because they learned to be? That feels like a challenging thing to demonstrate in the same way it being "in the genes" is, but perhaps I'm being overly reductive.
Nature vs nurture is not an either-or. I'm not saying "it" isn't "in the genes". I'm saying it's not just genes.
There's a wide range of possible age brackets, body types etc across all genders that can manifest traits most people would find attractive. Post-pubescent girls arguably aren't special in that sense. Especially if you don't isolate them out of their real-world context (which is where it stops being Oscar-winning Hollywood cinema and starts being child sexual abuse) that allows objectifying and dehumanizing them as "jailbait".
Where culture comes in is meaning. Taken at face value, a kid is just a kid. But culturally a kid represents something - naivety, hope, innocence, inexperience, whatever. This turns female youth into a fetish - something imbued with additional meaning. It's not actually the literal youthfulness that is culturally attractive in women (or else most people wouldn't react so violently against the idea of people sexually abusing minors), it's what that youthfulness represents. It's a male power fantasy.
Again, power fantasies aren't inherently a problem. What I'm arguing is that this one very much is a problem because it's so normalized it informs real-world social dynamics, i.e. where people start to forget it's a fantasy. Also I would argue the need for this specific fantasy is also not inherent (i.e. maleness does not inherently create a desire for absolute dominance over others). But I've rambled enough as it is.
> It can. In several public cases it seems fairly clear that there is a "community" aspect to these productions and many of these sites highlight the number of downloads or views of an image. It creates an environment where creators are incentivized to go out of their way to produce "popular" material.
So long as it's all drawn or generated, I don't see why we should care.
> I entirely disagree. Offenders tend to increase their level of offense.
This claim reminds me of similar ones about how video games are an "on-ramp" to actual violent crime. It needs very strong evidence to back it, especially when it's used to justify harsh laws. Evidence which we don't really have, because most studies of pedophiles are, by necessity, focused on the ones known to the system, which disproportionately means ones that have been caught doing some really nasty stuff to real kids.
> I entirely disagree. Offenders tend to increase their level of offense. This is about preventing the problem from becoming worse and new victims being created. It's effectively the same reason we harshly prosecute people who torture animals.
Strict liability for possession means that you can imprison people who don't even know that they have offending material. This is patent nonsense in general, regardless of the nature of what exactly is banned.
> That's a bold claim. Is it based on any facts or study?
It is based on the lack of studies showing a clear causal link. Which is not definitive for the reasons I outlined earlier, but I feel like the onus is on those who want to make it a crime with such harsh penalties to prove said causal link, not the other way around.
Note also that, even if such a clear causal link can be established, surely there is still a difference wrt imputed harm - and thus, culpability - for those who seek out recordings of genuine sexual abuse vs simulated? As things stand, in many jurisdictions, this is not reflected in the penalties at all. Justice aside, it creates a perverse incentive for pedophiles to prefer non-simulated CSAM.
> It's about the potential class of victims and the outrageous life long damage that can be done to them. The appropriate response to recognizing these feelings isn't to hand them AI generated material to sate their desires. It's to get them into therapy immediately.
Are you basically saying that simulated CSAM should be illegal because not banning it would be offensive to real victims of actual abuse? Should we extend this principle to fictional representations of other crimes?
As far as getting them into therapy, this is a great idea, but kinda orthogonal to the whole "and also you get 20+ years in the locker" thing. Even if you fully buy into the whole "gateway drug" theory where consumption of simulated CSAM inevitably leads to actual abuse in the long run, that also means that there are pedophiles at any given moment that are still at the "simulated" stage, and such laws are a very potent deterrent for them to self-report and seek therapy.
With respect to "handing them AI-generated material", this is already a fait accompli given local models like SD. In fact, at this point, it doesn't even require any technical expertise, since image generator apps will happily run on consumer hardware like iPhones, with UI that is basically "type what you want and tap Generate". And unless generated CSAM is then distributed, it's pretty much impossible to restrict this without severe limitations on local image generation in general (basically prohibiting any model that knows what naked humans look like).
> Are you basically saying that simulated CSAM should be illegal because not banning it would be offensive to real victims of actual abuse?
No, it's because it will likely lead to those consuming it turning to real life sexual abuse. The behavior says "I'm attracted to children." We have a lot of good data on where this precisely leads to when entirely unsupervised or unchecked.
> but kinda orthogonal to the whole "and also you get 20+ years in the locker" thing.
You've constantly created this strawman but it appears nowhere in my actual argument. To be clear it should be like DUIs, with small penalties on first time offenses increasing to much larger ones upon repetition of the crime.
> it's pretty much impossible to restrict this
Right. It's impossible to stop people committing murder as well. It's also impossible to catch every perpetrator. Yet we don't agonize over having those laws on the books, and it's quite possible the laws, and the penalties themselves, have a "chilling effect" when it comes to criminality.
Or, if your sensibility of diminished rights is so offended, then it can be a trade. If you want to consume AI child pornography you have to voluntarily add your name to a public list. Those on this list will obviously be restricted from certain careers, certain public settings, and will be monitored when entering certain areas.
Which sounds more appropriate to you?
> No, it's because it will likely lead to those consuming it turning to real life sexual abuse. The behavior says "I'm attracted to children." We have a lot of good data on where this precisely leads to when entirely unsupervised or unchecked.
For one thing, again, we don't have quality studies clearly showing that.
But let's suppose that we do, and they agree. If so, then shouldn't the attraction itself be penalized, since it's inherently problematic? You're essentially saying that it's okay to nab people for doing something that is in and of itself harmless, because it is sufficient evidence that they will inevitably cause harm in the future.
I do have to note that it is, in fact, fairly straightforward to medically diagnose pedophilia in a controlled setting - should we just routinely run everyone through this procedure and compile the "sick pedo list" preemptively this way? If not, why not?
> You've constantly created this strawman but it appears nowhere in my actual argument.
My "strawman" is the actual situation today that you were, at least initially, trying to defend.
> Right. It's impossible to stop people committing murder as well. It's also impossible to catch every perpetrator. Yet we don't agonize over having those laws on the books, and it's quite possible the laws, and the penalties themselves, have a "chilling effect" when it comes to criminality.
That can be measured, and it has been - and yes, they do, but it's specifically the likelihood of getting caught, not so much the severity of the punishment (which is one of the reasons why we don't torture people as a form of punishment anymore, at least not officially).
The point, however, was that nobody is "handing" them anything. It's all done with tools that are, at least at present, readily available and legal in our society, and this doesn't change whether you make some ways of using those tools illegal or not, nor is it impossible to detect such private use unless you're willing to go full panopticon or ban the tools.
Laws don't need to be absolutely enforceable to still work. You probably will not go to jail for running that stop sign at the end of your street (but please don't run it).
AI-generated CSAM is real CSAM and should be treated that way legally. The image generators used to generate it are usually trained on pictures of real children.
Assuming the person is a passive consumer, with no messages or money exchanged with anyone, it is very hard to prove social harm or damage. Sentences should be proportional to the crime. Treating possession of CP as equivalent to literally raping a child just seems absurd to me. IMO, just for the legal protection of the average citizen, simple possession should never warrant jail time.
CP is better described as "images of child abuse", and the argument is that the viewing is revictimising the child.
You appear to be suggesting that you shouldn't go to prison for possessing images of babies being raped?
For the record, i'm against any kind of child abuse, and 25 years for an actual abuser would not be a problem.
But...
Should you go to prison for possessing images of an adult being raped? What if you don't even know it's rape? What if the person is underage, but you don't know (they look adult to you)? What about a murder video instead of rape? What if the child porn is digitally created (AI, Photoshop, whatever)? What if a murder scene is digitally created (fake bullets, holes+blood made in video editing software)? What if you go to a mainstream porno store, buy a mainstream professional porno video, and later find out that the actress was a 15-year-old Traci Lords?
You don't go to prison for possessing images of adults being raped, last I checked. Or adults being murdered. Or children being murdered.
I don't think making the images illegal is a good way to handle things.
You do in some countries. For instance, knowingly possessing video of the Christchurch massacre is illegal in New Zealand, due to a ruling by NZ’s Chief Censor (yes, that’s the actual title), and punishable by up to 14 years in prison.
Personally, I prefer the American way.
It's a reasonable argument, but a concerning one because it hinges on a couple of layers of indirection between the person engaging in consuming the content and the person doing the harm / person who is harmed.
That's not outside the purview of US law (especially in the world post-reinterpretation of the Commerce Clause), but it is perhaps worth observing how close to the cliff of "For the good of Society, you must behave optimally, Citizen" such reasoning treads.
For example: AI-generated CP (or hand-drawn illustrations) is viscerally repugnant, but does the same "individual suffering and social damage" reasoning apply to making it illegal? The FBI says yes to both, in spite of the fact that we can name no human who was harmed or was unable to give consent in its fabrication (handwaving the source material for the AI; if one chooses not to handwave that, drop the AI question on the floor and focus on under what reasoning we make hand-illustrated cartoons illegal to possess that couldn't equally be applied to pornography in general).
> The FBI says yes to both in spite of the fact that we can name no
They have two arguments for this (that I am aware of). The first argument is a practical one, that AI-generated images would be indistinguishable from the "real thing", but that the real thing still being out there would complicate their efforts to investigate and prosecute. While everyone might agree that this is pragmatic, it's not necessarily constitutionally valid. We shouldn't prohibit activities based on whether these activities make it more difficult for authorities to investigate crimes. Besides, this one's technically moot... those producing the images could do so in such a way (from a technical standpoint) that they were instantly, automatically, and indisputably provable as being AI-generated.
All images could be mandated to require embedded metadata which describes the model, seed, and so forth necessary to regenerate it. Anyone who needs to do so could push a button, the computer would attempt to regenerate the image from that seed, and the computer could even indicate that the two images matched (the person wouldn't even need to personally view the image for that to be the case). If the application indicated they did not match, then authorities could investigate it more thoroughly.
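As a rough sketch of what that verification flow could look like (everything here is hypothetical: the manifest layout and the `generate_image` callable are stand-ins, and bit-for-bit deterministic generation is a big assumption that real GPU inference often doesn't satisfy):

```python
import hashlib

# Hypothetical provenance check: regenerate the image from the embedded
# manifest and compare digests, so nobody has to visually inspect either
# image. "generate_image" is a stand-in for whatever deterministic
# generator produced the original.
def verify_provenance(image_bytes, manifest, generate_image):
    regenerated = generate_image(
        model=manifest["model"],
        seed=manifest["seed"],
        prompt=manifest["prompt"],
    )
    return hashlib.sha256(regenerated).digest() == hashlib.sha256(image_bytes).digest()
```

A match means "this file is exactly what that model/seed/prompt produces"; a mismatch is the signal for authorities to investigate more thoroughly.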
The second argument is an economic one. That is, if a person "consumes" such material, they increase economic demand for it to be created. Even in a post-AI world, some "creation" would be criminal. Thus, the consumer of such imagery does cause (indirectly) more child abuse, and the government is justified in prohibiting AI-generated material. This is a weak argument on the best of days... one of the things that law enforcement efforts excel at is exactly this: when there are two varieties of a behavior, one objectionable and the other not, but similar enough that they might at a glance be mistaken for one another, enforcement can greatly disincentivize one without infringing on the other. And being an economic argument, one might note that economic actors seek to reduce their risk of doing business, and so would gravitate toward creating the legal variety of material.
While their arguments are dumb, this filth is as reprehensible as anything. The only question worth asking or answering is: were it (the AI-generated variety) legal, would it result in fewer children being harmed or not? It's commonly claimed that the easy availability of mainstream pornography has reduced the rate of rape since the mid-20th century.
> the individual suffering and social damage caused by the actions that he incentivizes
That's some convoluted way to say he deserves 25 years because he may (or may not) at some point in his life molest a kid.
Personally I think that the idea of convicting a man for his thoughts is borderline crazy.
Users of child pornography need to be arrested, treated, flagged, and receive psychological follow-up all their lives, but sending them away for 25 years is lazy and dangerous, because when they get out they will be even worse than before and won't have much to lose.
Respectfully, it's not pornography, it's child sexual abuse material.
Porn of/between consenting adults is fine. CSAM and sexual abuse of minors is not pornography.
EDIT: I intended to reply to the grandparent comment
Pornography is any multimedia content intended for (someone's) sexual arousal. CSAM is obviously a subset of that.
That is out of date.
The language has changed as we (in civilised countries) stop punishing sex work: "porn" is different from CSAM.
In the bad old days pornographers were treated the same as sadists.
The language is defined by how people actually use it, not by how a handful of activists try to prescribe its use. Ask any random person on the street, and most of them have no idea what CSAM is, but they know full well what "child porn" is. Dictionaries, encyclopedias etc also reflect this common sense usage.
The justification for this attempt to change the definition doesn't make any sense, either. Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad. In fact, I would posit that making this argument in the first place is detrimental to sex-positive outlook on porn.
> Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad.
I think people who want others to stop using the term "child porn" are actually arguing the opposite of this. Porn is good, so calling it "child porn" is making a euphemism or otherwise diminishing the severity of "CSAM" by using the positive term "porn" to describe it.
I don't think the established consensus on the meaning of the word "porn" itself includes some kind of inherent implied positivity, either; not even among people who have a generally positive attitude towards porn.
"Legitimate" is probably a better word. I think you can get the point though. Those I have seen preferring the term CSAM are more concerned about CSAM being perceived less negatively when it is called child porn than they are about consensual porn being perceived more negatively.
Is it still okay to say "pirated movies" or is that not negative enough since movies are okay? Should we call it "intellectual property theft material"?
> The language is defined by how people actually use it,
Precisely
Which is how it is used today
A few die hard conservatives cannot change that
Stop doing this. You are confusing the perfectly noble aspect of calling it abuse material to make it victim centric with denying the basic purpose of the material. The people who worked hard to get it called CSAM do not deny that it’s pornography for its users.
The distinction you went on to make was necessary specifically for this reason.
In that case, we should all get 25 years for buying products made with slave labour.
Who do such harsh punishments benefit?
> the private search doctrine, which authorizes a government actor to repeat a search already conducted by a private party without securing a warrant.
IANAL, etc. Does that mean that if someone breaks into your house in search of drugs, finds and steals some, and is caught by the police and confesses all, the police can then search your house without a warrant?
IANAL either, but from what I've read before the courts treat searches of your home with extra care under the 4th Amendment. At least one circuit has pushed back on applying private search cases to residences, and that was for a hotel room[0]:
> Unlike the package in Jacobsen, however, which "contained nothing but contraband," Allen's motel room was a temporary abode containing personal possessions. Allen had a legitimate and significant privacy interest in the contents of his motel room, and this privacy interest was not breached in its entirety merely because the motel manager viewed some of those contents. Jacobsen, which measured the scope of a private search of a mail package, the entire contents of which were obvious, is distinguishable on its facts; this Court is unwilling to extend the holding in Jacobsen to cases involving private searches of residences.
So under your hypothetical, I'd expect the police would be able to test "your drugs" that they confiscated from the thief, and use any findings to apply for a warrant for a search of your house, but any search without a warrant would be illegal.
[0] https://casetext.com/case/us-v-allen-167
I think the private search would have to be legal.
"That, however, does not mean that Maher is entitled to relief from conviction. As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
"Defendant [..] stands convicted following a guilty plea in the United States District Court for the Northern District of New York (Glenn T. Suddaby, Judge) of both receiving and possessing approximately 4,000 images and five videos depicting child pornography"
A win for google, for the us judicial system, and for constitutional rights.
A loss for child abusers.
Constitutional rights did not win enough; it should be that violating constitutional rights means the accused goes free, period, end of story.
You forgot your IANAL, but thankfully it's obvious.
That's a ridiculous desire. In that world, if I delete your comment, and you kill me in retaliation, you should be let free if you argue that my deleting your comment infringed your right to free speech?
What I mean specifically is that because the police saw illegally obtained evidence, all evidence collected after that point should be considered fruit of the poisoned tree and inadmissible.
I think reasonable people can disagree on this. Fruit of the poisoned tree doctrines eliminate bad incentives for law enforcement to commit crimes in their investigations. But when doctrines such as this cause the guilty to go free, it erodes public confidence in the rule of law. Like many things, it's a tradeoff - and the legal system is a process of discovery and adaptation, not some simplistic set of unchangeable rules. Justice for one must always be upheld. I think this ruling does a pretty good job of threading the needle here.
The public are generally closeted authoritarians (see: pandemic, 9/11), and their opinions on fair trials, evidentiary rules, surveillance, and constitutional freedoms should not be regarded
Ah I see.
IANAL. But the argument for the right to search was twofold, one was considered unconstitutional the other wasn't.
If I search you based on probable cause and fear of destruction of evidence, and the judge rules one was valid and the other not, is the evidence inadmissible?
To me it's clearly an OR condition, not an AND. Otherwise you disincentivize offering multiple justifications for doing something.
The 9th circuit ruled the same way a few years ago: https://www.insideprivacy.com/data-privacy/ninth-circuits-in...
I was using those MD5 sums for flagging images 20 years ago for the government. There were occasional false positives, but the safety team would review those, not operations. My only role was to burn the user's account to a DVD (via a script) and have the police officer pick up the DVD; we never touched the disk, and only burned it with a warrant. (We never saw/touched the user's data...)
I figured this was the common industry standard for chain of custody of evidence. Same with police videos: they are uploaded to the court's digital evidence repository, and everyone who looks at the evidence is logged.
Seems like a standard legal process was followed.
So, Google is examining billions of files from their customers, and reporting said customers to the police automatically, on the basis of a hash match?
Are you familiar with the story of father who lost 10 years of photos of his child because Google flagged a few photos as child porn?
Their servers their data, at least that’s what it seems like.
Serious question: a Google employee is not law enforcement. So if they are examining child pornography, even if only to identify it, aren't they breaking the law?
18 U.S.C. § 2252
… c) Affirmative Defense.—It shall be an affirmative defense to a charge of violating paragraph (4) of subsection (a) that the defendant— (1) possessed less than three matters containing any visual depiction proscribed by that paragraph; and (2) promptly and in good faith, and without retaining or allowing any person, other than a law enforcement agency, to access any visual depiction or copy thereof— (A) took reasonable steps to destroy each such visual depiction; or (B) reported the matter to a law enforcement agency and afforded that agency access to each such visual depiction.
So the Google employee would both have to “promptly” notify law enforcement AND not show it to any other person unless law enforcement and also have fewer than three pieces of content. And additionally, assigning hashes/etc — does that mean the file was preserved (otherwise what’s the hash representing? A photo they once found and then destroyed? How could law enforcement prove that the hash represented a prohibited image in order to establish probable cause?) “we say some random string of characters represents an image that we saw is child porn, so now you can issue a warrant.” What protections does the user have against a warrant being issued under false information? If the original image no longer exists, then how could the warrant be challenged? If the original image does exist — is Google storing it? And if they are storing it, aren’t they themselves breaking the law?
In other words, do Google servers knowingly have child pornography on them? And does a private employee have immunity from prosecution if they don’t immediately notify law enforcement and also don’t show anyone else? I could see an instance where an employee shows his or her supervisor prior to calling law enforcement — which would be a violation of that law since the law explicitly says “any” person other than law enforcement.
I am being somewhat tedious here, but I am genuinely interested in how Google can handle this without breaking the law themselves — more specifically the employee who, at the time of classifying the image, is in possession of illegal content.
And does Google have some de facto exemption that wouldn’t apply to a smaller founder who had to contend with user created content and followed the same “assigning a hash” process as Google?
What are the best (legal) practices around this, specifically around smaller companies that might have to contend with these issues?
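For what it's worth, the narrower hash question has a straightforward technical answer: a hash is a fixed-size fingerprint, so retaining the hash retains nothing of the image itself and cannot be reversed into it. A minimal sketch (the filename is made up):

```python
import hashlib

# Fingerprint the file; the file itself can then be destroyed.
# The 64-hex-character digest reveals nothing about the image's contents.
with open("reported_upload.jpg", "rb") as f:  # hypothetical filename
    digest = hashlib.sha256(f.read()).hexdigest()

# Later, anyone holding a candidate file can test for an exact match
# without anyone re-viewing or re-storing the original:
def matches(candidate_bytes):
    return hashlib.sha256(candidate_bytes).hexdigest() == digest
```

So hash lists of the NCMEC variety can be stored and shared without anyone storing the underlying contraband - though note this only holds for exact-match hashes, not the perceptual kind discussed elsewhere in the thread.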
I guess it's always a grey area for Google. But most likely, their contractors are overseas, and they contract the work to a third party in a third country. It's entirely possible that the business has gone full circle and the same studios deliver both the CP and the CP hashes.
Is this because Google is compelled to report this to the authorities?
I would have thought any company could voluntarily submit anything to the police.
Well let's look at how this actually played out.
Now police are facing the obvious question: is this actually CP? They open the image, determine it is, then get a warrant to search his gmail account, and (later) another warrant to search his home. The court here is saying they should have got a warrant to even look at the image in the first place. But warrants only issue on probable cause. What's the PC here? The hash value. What's the probability of hash collisions? Non-zero but very low.
The practical upshot of this is that all reports from NCMEC will now go through an extra step of the police submitting a copy of the report with the hash value and some boilerplate document saying 'based on my law enforcement experience, hash values are pretty reliable indicators of fundamental similarity', and the warrant application will then be rubber stamped by a judge.
An analogous situation would be where I send a sealed envelope with some documents to the police, writing on the outside 'I believe the contents of this envelope are proof that John Doe committed [specific crime]', and the police have to get a warrant to open the envelope. It's arguably more legally consistent, but in practice it just creates an extra stage of legal bureaucracy/delay with no appreciable impact on the eventual outcome.
Recall that the standard for issuance of a warrant is 'probable cause', not 'mathematically proven cause'. Hash collisions are a possibility, but a sufficiently unlikely one that it doesn't matter. Probable cause means 'a fair probability' based on independent evidence of some kind - testimony, observation, forensic results or so. Even a shitty hash function that's only 90% reliable is going to meet that threshold. In the 10% of cases where the opened file turns out to be a random image with no pornographic content it's a 'no harm no foul' situation.
For reference, a primer on hash collision probabilities: https://preshing.com/20110504/hash-collision-probabilities/
and a more detailed examination of common perceptual hashing algorithms (skip to table 3 for the collision probabilities): https://ceur-ws.org/Vol-2904/81.pdf
I think what a lot of people are implicitly arguing here is that the detection system needs to be perfect before anyone can do anything. Nobody wants the job of examining images to check if they're CP or not, so we've outsourced it to machines that do so with good-but-not-perfect accuracy and then pass the hot potato around until someone has to pollute their visual cortex with it.
Obviously we don't want to arrest or convict people based on computer output alone, but how good does it have to be (in % or odds terms) in order to begin an investigation - not of the alleged criminal, but of the evidence itself? Should companies like Google have to submit an estimate of the probability of hash collisions using their algorithm and based on the number of image hashes that exist on their servers at any given moment? Should they be required to submit source code used to derive that? What about the microcode of the silicon substrate on which the calculation is performed?
All other things being equal, what improvement will result here from adding another layer of administrative processing, whose outcome is predetermined?
> Recall that the standard for issuance of a warrant is 'probable cause', not 'mathematically proven cause'. Hash collisions are a possibility, but a sufficiently unlikely one that it doesn't matter. Probable cause means 'a fair probability' based on independent evidence of some kind - testimony, observation, forensic results or so. Even a shitty hash function that's only 90% reliable is going to meet that threshold. In the 10% of cases where the opened file turns out to be a random image with no pornographic content it's a 'no harm no foul' situation.
But do we actually know that? Do we know what thresholds of "similarity" are in use by Google and others, and how many false positives they trigger? Billions of photos are processed daily by Google's services (Google Photos, chat programs, Gmail, Drive, etc.), and very few people actually send such stuff via Gmail, so what if the reality is that 99.9% of the matches are actually false positives? What about intentional matches, like someone deliberately creating some random SFW meme image that (when hashed) matches some illegal image hash, and that photo then being sent around intentionally? Should police really be checking all those emails, photos, etc., without warrants?
Well, that's why I'm asking what threshold of certainty people want to apply. The hypotheticals you cite are certainly possible, but are they likely?
> what if the reality is that 99.9% of the matches are actually false positives
Don't you think that if Google were deluging the cops with false positive reports that turned out to be perfectly innocuous 999 times out of 1000, that police would call them up and say 'why are you wasting our time with this?' Or that defense lawyers wouldn't be raising hell if there were large numbers of clients being investigated over nothing? And how would running it through a judge first improve that process?
> What about intentional matches, like someone deliberately creating some random SFW meme image [...]
OK, but what is the probability of that happening? And if such images are being mailed in bulk, what would be the purpose other than to provide cover for CSAM traders? The tactic would only be viable for as long as it takes a platform operator to change up their hashing algorithm. And again, how would the extra legal step of consulting a judge alleviate this?
> Should police really be checking all those emails, photos, etc., without warrants?
But that's not happening. As I pointed out, police examined the submitted image evidence to determine of it was CP (it was). Then they got a warrant to search the gmail account, and following that another warrant to search his home. The didn't investigate the criminal first, the investigated an image file submitted to them to determine whether it was evidence of a crime.
And yet again, how would bouncing this off a judge improve the process? The judge will just look at the report submitted to the police and a standard police letter saying 'reports of this kind are reliable in our experience' and then tell the police yes, go ahead and look.
> Don't you think that if Google were deluging the cops with false positive reports that turned out to be perfectly innocuous 999 times out of 1000, that police would call them up and say 'why are you wasting our time with this?' Or that defense lawyers wouldn't be raising hell if there were large numbers of clients being investigated over nothing? And how would running it through a judge first improve that process?
Yes, sure... they send them a batch of photos, thousands even, and someone from the police skims the photos... a fishing expedition would be the right term for that.
> OK, but what is the probability of that happening? And if such images are being mailed in bulk, what would be the purpose other than to provide cover for CSAM traders? The tactic would only be viable for as long as it takes a platform operator to change up their hashing algorithm. And again, how would the extra legal step of consulting a judge alleviate this?
You never visited 4chan?
> But that's not happening. As I pointed out, police examined the submitted image evidence to determine of it was CP (it was). Then they got a warrant to search the gmail account, and following that another warrant to search his home. The didn't investigate the criminal first, the investigated an image file submitted to them to determine whether it was evidence of a crime.
They first entered your home illegally and found a joint on the table, and then got a warrant for the rest of the house. As pointed out in the article and in the title... they should need a warrant for the first image too.
> And yet again, how would bouncing this off a judge improve the process? The judge will just look at the report submitted to the police and a standard police letter saying 'reports of this kind are reliable in our experience' and then tell the police yes, go ahead and look.
Sure, if it brings enough results. But if they issue 200 warrants and get zero results, things will have to change, both for police and for google. This is like saying "that guy has long hair, he's probably a hippy and has drugs, let's get a search warrant for his house". Currently we don't know the numbers, and most people (you excluded) believe that police shouldn't search private data of people just because some algorithm thinks so, without a warrant.
The idea that police are spending time just scanning photos of trains, flowers, kittens and so on in hopes of finding an occasional violation seems ridiculous to me. If nothing else, you would expect NCMEC to wonder why only 0.1% of their reports are ever followed up on.
> a fishing expedition would be the right term for that
No it wouldn't. A fishing expedition is where you get a warrant against someone without any solid evidence and then dig around hoping to find something incriminating.
> You never visited 4chan?
I have been a regular there since 2009. What point are you attempting to make?
> They first entered your home illegally and found a joint on the table, and then got a warrant for the rest of the house. As pointed out in the article and in the title... they should need a warrant for the first image too.
This analogy is flat wrong. I already explained the difference.
> most people (you excluded) believe that police shouldn't search private data of people just because some algorithm thinks so, without a warrant.
That is not what I believe. I think they should get a warrant to search any private data. In this case they're looking at a single image to determine whether it's illegal, as a reasonably reliable statistical test suggests it to be.
You're not explaining what difference it makes if a judge issues a warrant on the exact same criteria.
As many others have said, Google isn’t using a cryptographic hash here. It’s using perceptual hashing, which isn’t collision-safe at all.
Did you read the whole thing?
and a more detailed examination of common perceptual hashing algorithms (skip to table 3 for the collision probabilities): https://ceur-ws.org/Vol-2904/81.pdf
And there was a whole lot of explanation of how probable cause works and how it's different from programmers' aspirations to perfection.
The table only proves the point. The lowest collision probability in the table is 1 in 100,000; most others are closer to 1 in 100.
28 billion photos are uploaded every week to Google Photos[1]. Even at the 1-in-100,000 rate, that's at least 280k false positives per week.
Should we really be triggering a search warrant on an innocent person every couple of seconds?
[1] https://blog.google/products/photos/storage-changes/
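A back-of-envelope check of that arithmetic, using the collision rates from the paper's table as assumed inputs (they are not Google's real numbers):

```python
# Expected false positives per week at assumed per-image collision rates.
uploads_per_week = 28e9              # figure from [1]
seconds_per_week = 7 * 24 * 3600

for fpr in (1 / 100, 1 / 100_000):
    hits = uploads_per_week * fpr
    print(f"FPR {fpr:.0e}: {hits:,.0f}/week, {hits / seconds_per_week:.2f}/second")
# FPR 1e-02: 280,000,000/week, 462.96/second
# FPR 1e-05: 280,000/week, 0.46/second
```

So even under the most favorable rate in the table, that's a flagged image roughly every two seconds.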
Do you have any evidence that this is happening? You don't think someone would have noticed by now if it were?
And as I pointed out, we're not talking about a search warrant on a person, we're talking about whether it's necessary to get a search warrant to look at a picture to determine if it's an illegal image.
What is the context of this?
Status quo, there is no change here.
The old example is the email server administrator. If the email administrator has to view the contents of user messages as part of regular maintenance, and in doing so notices violations of the law in those user messages, they can report them to law enforcement. In that case law enforcement can receive the material without a warrant only if law enforcement never asked for it before it was gifted to them. There are no Fourth Amendment protections provided to offenders in this scenario of third-party accidental discovery. Typically, in these cases the email administrator does not have an affirmative requirement to report violations to law enforcement unless specific laws say otherwise.
If, on the other hand, law enforcement approaches that email administrator to fish for illegal user content, then the email administrator has become an extension of law enforcement and any evidence discovered cannot be used in a criminal proceeding. Likewise, if the email administrator was intentionally looking through email messages for violations of the law, even if not at the request of law enforcement, they are still acting as an agent of the law. In that case discovery was intentional and not an unintentional product of system maintenance.
There is a third scenario: obscenity. Obscenity is illegal intellectual property, whether digital or physical, as defined by criminal code. Possession of obscene materials is a violation of criminal law for all persons, businesses, and systems in possession. In that case an email administrator who accidentally discovers obscene material does have a required obligation to report the discovery, typically through their employer's corporate legal process, to law enforcement. Failure to disclose such discoveries potentially aligns the system provider with the illegal conduct of the violating user.
Google's discovery, though, was not accidental as a result of system maintenance. It was due to an intentional discovery mechanism based on stored hashes, which puts Google's conduct in line with law enforcement even if they specified their conduct in their terms of service. That is why the appeals court claims the district court erred by denying the defendant's right to suppression on fourth amendment grounds.
The saving grace for the district court was a good faith exception, such as inevitable discovery. The authenticity and integrity of the hash algorithm was never in question by any party, so no search of the violating material was necessary; the match established probable cause, giving law enforcement reasonable grounds to proceed to trial. No warrant was required because the evidence would likely have been sufficient at trial even if law enforcement had not directly viewed the image in question (though they did verify the image). None of that was challenged by either party. What was challenged was just Google's conduct.
So now an algorithm can interpret the law better than a judge. It’s amazing how technology becomes judge and jury while privacy rights are left to a good faith interpretation. Are we really okay with letting an algorithmic click define the boundaries of privacy?
The algorithm is interpreting whether one image file matches another, a purely probabilistic issue.
The judge doesn't really understand hashes well. They say things like "Google assigned a hash", which is not true: Google calculated the hash.
Also I'm surprised the 3rd-party doctrine doesn't apply. There's the "private search doctrine" mentioned but generally you don't have an expectation of privacy for things you share with Google
Erm, "Assigned" in this context is not new: https://law.justia.com/cases/federal/appellate-courts/ca5/17...
"More simply, a hash value is a string of characters obtained by processing the contents of a given computer file and assigning a sequence of numbers and letters that correspond to the file’s contents."
From 2018 in United States v. Reddick.
The calculation is what assigns the value.
No. The calculation is what determines what the assignation should be. It does not actually assign anything.
This FOIA litigation by ACLU v ICE goes into this topic quite a lot: https://caselaw.findlaw.com/court/us-2nd-circuit/2185910.htm...
Yes, Google's calculation.
Did Google invent this hash?
Why is that relevant? Google used a hashing function to persist a new record within a database. They created a record for this.
Like I said in a sib. comment, this FOIA lawsuit goes into questions of hashing pretty well: https://caselaw.findlaw.com/court/us-2nd-circuit/2185910.htm...
Google at some point decided how to calculate that hash and that influences what the value is right? Assigned seems appropriate in that context?
Either way I think the judge's wording makes sense.
Google assigned the hashing algorithm (maybe; assuming it wasn't chosen in some law somewhere - I know this CSAM hashing is something the big tech companies work on together).
Once the hashing algorithm was assigned, individual values are computed or calculated.
I don't think the judge's wording is all that bad but the word "assigned" is making it sound like Google exercised some agency when really all it did was apply a pre-chosen algorithm.
A hash can be arbitrary; the only requirement is that it is a deterministic one-way function.
And it should be mostly injective under most conditions. (Truly collision-free is obviously impossible in practice, but hashes with common collisions shouldn't be allowed as legal evidence.) Also, neural/visual hashes like those used by big tech make things tricky.
The hash in question has many collisions. It is probably enough to put on a warrant application, but it may not be enough to get a warrant without some other evidence (it can be enough to justify looking for other public signs of evidence, or perhaps it suffices when a number of images match different hashes).
There's a password on my Google account, I totally expect to have privacy for anything I didn't choose to share with other people.
The hash is a kind of metadata recorded by Google; I feel like Google using it to keep child porn off their systems should be reasonable. Same ballpark as limiting my storage to 1GB based on file sizes. Sharing metadata without a warrant is a different question, though.
As should be expected from the lawyer world, it seems like whether you have an expectation of privacy using gmail comes down to very technical word choices in the ToS, which of course neither this guy nor anyone else has ever read. Specifically, it may be legally relevant to your expectation of privacy whether Google says they "may" or "will" scan for this stuff.
Out of curiosity, what is false positive rate of a hash match?
If the FPR is comparable to asking a human "are these the same image?", then it would seem to be equivalent to a visual search. I wonder if (or why) human verification is actually necessary here.
I doubt SHA-1 hashes are used for this. These image hashes should match files regardless of orientation, cropping, resizing, re-compression, color correction, etc. Collisions could be far more frequent with such hashes.
The hash should ideally match even if you use photoshop to cut the one person out of the picture and put that person into a different photo. I'm not sure if that is possible, but that is what we want.
The reason human verification is necessary is that the government is relying on something called the "private search" doctrine to conduct the search without a warrant. This doctrine allows them to repeat a search already conducted by a private party (i.e., Google) without getting a warrant. Since Google didn't actually look at the file, the government is not able to look at the file without a warrant, as that search exceeds the scope of the initial search Google performed.
Naively, 1/(2^{hash_size_in_bits}). Which is about 1 in 4 billion odds for a 32 bit hash, and gets astronomically low at higher bit counts.
Of course, that's assuming a perfect, evenly distributed hash algorithm. And that's just the odds that any given pair of images has the same hash, not the odds that a hash conflict exists somewhere on the internet.
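The "exists somewhere" version is the birthday problem. A quick sketch of the standard approximation, assuming an ideal, uniformly distributed hash:

```python
import math

# P(at least one collision among n uniformly random b-bit hashes),
# via the birthday approximation 1 - exp(-n^2 / 2^(b+1)).
# Using expm1 keeps the tiny-probability case numerically accurate.
def p_any_collision(n, bits):
    return -math.expm1(-(n * n) / 2 ** (bits + 1))

print(p_any_collision(1_000_000, 32))   # ~1.0: a 32-bit collision is all but certain
print(p_any_collision(1_000_000, 128))  # ~1.5e-27: negligible
```

The takeaway: per-pair odds of 1 in 4 billion still guarantee collisions once you hash millions of images, which is why bit counts matter so much.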
You need to know the input space as well as the output space (hash size).
If you have a 32bit hash but your input is only 16bit, you'll never have a collision (and you'll be wasting a ton of space on your hashes!).
Image files can get into the megabytes though, so unless the output hash is large the potential for collisions is probably not all that low.
You do not need to know the input space.
Normal hash functions have pseudo-random outputs and they can collide even when the input space is much smaller than the output space.
In fact, I'll go run ten million values, encoded into 24 bits each, through a 40 bit hash and count the collisions. My hash of choice will be a truncated sha256.
... I got 49 collisions.
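For anyone who wants to reproduce that experiment, a sketch of roughly what was described (the exact encoding choices are guesses):

```python
import hashlib

# Ten million 24-bit inputs through a 40-bit hash (truncated SHA-256).
# The birthday approximation predicts n^2 / 2^41 ≈ 45 collisions.
seen = set()
collisions = 0
for i in range(10_000_000):
    h = hashlib.sha256(i.to_bytes(3, "big")).digest()[:5]  # keep 40 bits
    if h in seen:
        collisions += 1
    else:
        seen.add(h)
print(collisions)  # ~45-50 depending on encoding; 49 was reported above
```

So collisions show up even though the 24-bit input space is far smaller than the 40-bit output space, which is the point being made.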
> Out of curiosity, what is false positive rate of a hash match?
No way to know without knowledge of the 'proprietary hashing technology'. Theoretically though, a hash can have infinitely many inputs that produce the same output.
Mismatching hash values from the same hashing algorithm can prove mismatching inputs, but matching hash values don't ensure matching inputs.
> I wonder if (or why) human verification is actually necessary here
It's not about frequency, it's about criticality of getting it right. If you are going to make a negatively life-altering report on someone, you'd better make sure the accusation is legitimate.
I'd say the focus on hashing is a bit of a red herring.
Most anyone would agree that the hash matching should probably form probable cause for a warrant, allowing a judge to sign off on the police searching (i.e., viewing) the image. So, if it's a collision, the cops get a warrant and open up your linux ISO or cat meme, and it's all good. Probably the ideal case is that they get a warrant to search the specific image, and are only able to obtain a warrant to search your home and effects, etc. if the image does appear to be CSAM.
At issue here is the fact that no such warrant was obtained.
> Most anyone would agree that the hash matching should probably form probable cause for a warrant
I disagree with this. Yes, if we were talking MD5, SHA, or some similar true hash algo, then the probability of a natural collision is small enough that I agree in principle.
But if the hash algo is of some other kind then I do not know enough about it to assert that it can justify probable cause. Anyone who agrees without knowing more about it is a fool.
That's fair. I came away from reading the opinion that this was not a perceptual hash, but I don't think it is explicitly stated anywhere. I would have similar misgivings if indeed it is a perceptual hash.
I think it'll prove far more likely that the government creates incentives to lead Google/other providers to fully do the search on their behalf.
The entire appeal seems to hinge on the fact that Google didn't actually view the image before passing it to NCMEC. Had Google policy been that all perceptual hash hits were reviewed by employees first, this would've likely been a one page denial.
If the hash algorithm were CRC8, then obviously it should not be probable cause for anything. If it were SHA-3, then it's basically proof beyond reasonable doubt of what the file is. It seems reasonable to question how collisions behave.
I don't agree that it would be proof beyond reasonable doubt, especially because neither Google nor law enforcement can produce the original image that got tagged.
By original do you mean the one in the database or the one on the device?
If the device spit out the same SHA3, then either it had the exact same image, or the SHA3 was planted somehow. The idea that it's actually a different file is not a reasonable doubt. It's too unlikely.
By the original, I mean the image that was used to produce the initial hash, which Google (rightly) claimed to be CSAM. Without some proof that an illicit image that has the same hash exists, I wouldn't accept a claim based on hash alone.
Oh definitely you need someone to examine the image that was put in the database to show it's CSAM, if the legal argument depends on that. But that's an entirely different question from whether the image on the device is that image.
For non-broken cryptographic hashes (e.g., SHA-256), the false-positive rate is negligible. Indeed, cryptographic hashes were designed so that even nation-state adversaries do not have the resources to generate two inputs that hash to the same value.
See also:
https://en.wikipedia.org/wiki/Collision_resistance
https://en.wikipedia.org/wiki/Preimage_attack
These are not the kinds of hashes used for CSAM detection, though, because that would only work for the exact pixel-by-pixel copy - any resizing, compression etc would drastically change the hash.
Instead, systems like these use perceptual hashing, in which similar inputs produce similar hashes, so that one can test for likeness. Those have much higher collision rates, and are also much easier to deliberately generate collisions for.
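To illustrate the difference, here's a toy perceptual hash (the classic "average hash") - not whatever proprietary algorithm Google actually uses. Similar pictures get hashes that differ in only a few bits, which is what makes likeness testing possible, and collisions comparatively easy:

```python
from PIL import Image  # pip install pillow

def average_hash(path, size=8):
    # Downscale to an 8x8 grayscale thumbnail so resizing, recompression,
    # and small edits barely change the result.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than the mean, or not.
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Two images "match" if their hashes differ in only a few of the 64 bits.
# That tunable threshold is exactly where false positives come from.
```

Unlike a cryptographic hash, nothing here is collision-resistant: any two images with similar brightness structure will land within a few bits of each other.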
Does a lab assign a DNA profile to you, or does it calculate one?
Are two different labs' DNA analyses exactly equal?
Remember that you can use multiple different algorithms to calculate a hash.