I am just sharing a quick take on something that happened earlier this year and that I had since forgotten about. I just received a domain renewal notice for a project that was dead in the water but should have been alive, a project that was unilaterally killed by the AWS Trust and Safety team through what I can only describe as gaslighting. It is a bit late, but I still think it is important for people to know.
Earlier this year (2025), a software engineering friend of mine had a great idea: a tool that lets the public pin ICE (United States Immigration and Customs Enforcement) sightings on a map interface. I vibe coded it last February with Claude Sonnet 3.7 and put it up on AWS with a public-facing domain. From a coding standpoint, it was a fun project because it was my first project where Claude autonomously Terraformed a production app. I had a website up that worked on desktop and mobile. From a public trust and safety standpoint, it helped by adding public accountability and mitigating overreach by a rogue, and in some circumstances potentially illegally operating, agency. It was overall A Good Thing™.
However, what happened next was upsetting and shocking to me. A day after the website went online, without me advertising it to ANYONE, it became inaccessible in almost all browsers. Visitors would simply see a giant red background saying the site is associated with known criminals, has been blocked for safety, and so on. This is some Chrome / Google-maintained blocklist mechanism (presumably Google Safe Browsing) that looks even scarier and more severe than an HTTPS certificate mismatch warning, and that cannot be overridden. To be clear, I had registered a certificate using AWS Certificate Manager, and there was no SSL issue. Instead, the domain had been unilaterally marked as “dangerous” by AWS (or Google, Chrome, whomever) one day after it was made publicly accessible, despite no advertising or attention.
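As an aside: the full-page red warning described above is consistent with Google Safe Browsing, whose blocklist verdicts can be queried programmatically. Below is a minimal sketch (not something from the original post) of checking a URL against the Safe Browsing Lookup API v4; `API_KEY` and `example-checker` are placeholders, and a real Google API key is required to actually send the request.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: supply a real Google API key


def build_lookup_payload(url_to_check: str) -> dict:
    """Build a Safe Browsing Lookup API v4 threatMatches:find request body."""
    return {
        "client": {"clientId": "example-checker", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": [
                "MALWARE",
                "SOCIAL_ENGINEERING",  # the "deceptive site" red interstitial
                "UNWANTED_SOFTWARE",
            ],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url_to_check}],
        },
    }


def is_flagged(url_to_check: str) -> bool:
    """Return True if Safe Browsing reports a blocklist match for the URL."""
    endpoint = (
        "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY
    )
    body = json.dumps(build_lookup_payload(url_to_check)).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # An empty response body ({}) means no match on any blocklist.
    return bool(result.get("matches"))
```

A check like this would at least have confirmed which service flagged the domain, independent of AWS support's vague correspondence.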
I did all of this on my personal AWS account. The very next day, I received a scary email from AWS claiming that my account was violating their policies by supporting criminal activity, and would be suspended immediately unless I remediated the account. I contacted AWS support, who demurred and said they weren’t sure what was going on, but that their Trust and Safety team had flagged dangerous activity on my account (they did NOT specify any resources related to the application I had put up; they were generic and vague, with no specificity). The only causal correlate was me putting up this public tool to report ICE sightings. I ran terraform destroy on the resources Claude had generated, and then waited. Within a day, the case was closed and my account was restored to normal status.
I am not going to share the domain name, but needless to say, this pissed me off majorly. What the fuck, AWS? (And Google, and Chrome, and anyone else who had a hand in this?) I understand mistakes happen, but this does not smell like a mistake; it smells like something worse: a lack of sound judgment. And if automated systems were involved, that is no excuse either. Per AWS support’s correspondence, a human on their Trust and Safety team had reviewed the account and marked it as delinquent, not in a financial sense but something worse: by equating its resources with criminality. If the concern was what I think it was (corporate cowardice), they should have been intellectually honest and filed a case stating something like: “We found an application on your account that is outside our policy. Here is what we found and why we think it is outside our policy. You can dispute our decision at this link.” Instead, I was gaslit and treated as a guilty-until-proven-innocent subject of weaponized fear (because, for a second, between the scary language of the website block and the support email, I was scared).
That AWS Trust and Safety employee’s judgment failed me, and it failed the public, as well as their own responsibility as an arbitrator of trust and safety. Their decision, if there was one that can be attributed and not hidden behind the corporate veil of ambiguity, ultimately reduced public trust and safety.
Has what they (AWS, Google, and also Apple) did to Parler already been memory-holed, or are you just doing the classic Niemöller thing of being shocked when it happens to you?
their own responsibility as an arbitrator of trust and safety;
ultimately reduced public trust and safety.
Oh come on, surely pretending to actually believe the name the censors give themselves is laying it on too thick?
a project that was dead in the water but should have been alive
That statement sounds a bit too strong to me. Maybe this project wasn’t important enough to invest further effort into, but you tried basically no workarounds. E.g., just moving to a European cloud would probably have solved all your issues? (If we model the situation as some possibly illegal US government order, or just AWS being overzealous about censoring themselves.)
Heck, all the shadow libraries and Sci-Hub and torrent sites manage to stay up on the clearnet, and those are definitely illegal according to the law.
And in extreme cases you could just host your app as a Tor hidden service. (Though making users install a separate browser might add enough friction to kill this particular project, unfortunately.)
Sounds like they got hit with a court order that prohibited disclosure of the order itself.

Yeah, no. Sounds like they either got hit with a (probably illegal) threat from the DHS/DOJ, or, actually more likely, they feared such threats because they’d seen the (also illegal) threats that ICEBlock drew, and they didn’t want to deal with such.
It’s also possible that freelance MAGA types inside of those companies decided that code was “obviously criminal” and needed to be suppressed. Possibly then using the past ICEBlock threats as ammunition in internal arguments.
Actual courts in the US are still not particularly willing to apply prior restraints to speech, and would feel especially hampered in doing so by the fact that there’s nothing even slightly illegal about the project as described. Yes, if you asked Kristi Noem and Pam Bondi, they’d tell you it was illegal, but then they’d tell you many other untrue things as well. Obstruction of justice and interference with Federal officers, one or both of which are what they’d claim it was, do not work like that in reality.
I’ve actually never heard of a US court issuing a secret order like that. I’m not actually sure they have the power to do that. If they can do it at all, it’d be really unusual. You may be thinking of NSLs, which are secret, but are not court orders and also aren’t statutorily authorized to be used to suppress anything.

Would you expect to?

Yes. Nothing stays secret forever.

Why do you predict that a court was involved?
People would use your app to do illegal things? Isn’t that a success of the AWS Trust and Safety employee?

I think you may be confusing “legal” and “aligns with your politics”.

No, I’m pretty sure publishing sightings of law enforcement is legal in the US. Some traffic radio stations report on where police are using radar guns, for example, and this is fully legal. Indeed, considering that mapping ICE sightings could be of academic/intellectual interest (and that it is actually perfectly reasonable for law-abiding US citizens to want to limit their time spent in close proximity to ICE agents), reporting radar-gun locations is far more centrally “helping people get away with doing illegal things” (speeding) than robertzk’s project is.
Legal ≠ consequence-free. Yes, reporting police locations is legal—Waze does it daily. But there’s a relevant difference between “drivers avoiding speed traps” and “people with deportation orders evading enforcement.”
The app is multi-use. Potential users:
1. Researchers wanting data on enforcement patterns
2. Legal residents avoiding hassle/intimidation
3. People with deportation orders evading enforcement
4. People actively helping category 3 evade enforcement
The developer can’t control which use case dominates. But category 3 and 4 users have the strongest incentive to use and contribute to the app—they’re the ones with real stakes. Selection effects mean they’ll likely dominate the user base.
I’m not claiming the app is illegal. I’m saying “it’s legal” doesn’t fully address whether AWS made a reasonable judgment call about what they want to host. Those are different questions.
i think it’s very likely that the latter is true, that AWS made a reasonable judgment call about what they want to host
but also, i think it’s reasonable for someone in robertzk’s position, based on the way the judgment call was actually communicated, to assume that it was the former. and i think that, perhaps deliberately, perhaps merely because of sort of selection effects, that’s the intent. in a sort of “a system is what it does” sort of way, at least.