I will try to check it out and get back to you. Thank you for pinging me!
Gavin Runeblade
Shorthand can keep pace with speech and doesn’t require special equipment.
The other replies gave you good examples of how to resolve this. Let me take a stab at where you went wrong. At a high level, you are assuming the information contained in the scam is the only information the Bayesian has available to use.
As shown, a Bayesian probably has priors about the way the market works, the way people advertise, the existence and nature of scams, and so on. The information in the predictions is weighed against those priors, not just against itself.
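To make that concrete, here is a toy Bayes update. Every number is invented for illustration; the point is only that a strong prior about scam base rates dominates the evidence inside the pitch itself:

```python
# Toy Bayesian update (all numbers invented for illustration).
# H = "this advertised prediction service is a scam"
prior_scam = 0.90  # prior: most too-good-to-be-true pitches are scams

# E = "the pitch shows a streak of 10 correct predictions"
p_evidence_given_scam = 0.80   # scammers can cherry-pick or fabricate streaks
p_evidence_given_legit = 0.95  # a genuinely good predictor also shows streaks

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_evidence = (p_evidence_given_scam * prior_scam
              + p_evidence_given_legit * (1 - prior_scam))
posterior_scam = p_evidence_given_scam * prior_scam / p_evidence

print(round(posterior_scam, 3))  # -> 0.883: the streak barely moves the prior
```

Because both scams and legitimate services can display winning streaks, the evidence is nearly uninformative, and the posterior stays close to the prior.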
The thing is, bot spam isn’t generally high quality yet. On many platforms the tell of name-lotta-numbers is still a viable method for identifying a bot. And part of Anthropic’s concern with the military was that AI can de-anonymize people across multiple accounts and platforms. That capability, if Dario is correct that it exists, seems in line with the ability to identify such a distillation attack. Or at least begin to. Once it begins, then RL means the AI is likely to get better at it over time. Or am I giving AI too much credit? I’m not sure.
In response to your question, I would set thresholds. At all levels below interruption of service and/or complaints from the user base, feed it poison combined with quiet pruning of the worst offenders via muting spread in the algorithm. At the level of interrupting actual users and receiving complaints, then start blocking and banning. But that’s a hard line and I assume problems will require escalation and resolution.
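The escalation ladder described above can be sketched as a simple policy function. The threshold values and action names here are my invention, not anything the platforms actually use:

```python
# Hypothetical escalation ladder for handling suspected distillation/scraper
# accounts. Thresholds and action names are invented for illustration.
def choose_action(abuse_score: float, service_degraded: bool,
                  user_complaints: int) -> str:
    """Map observed severity to a response tier."""
    if service_degraded or user_complaints > 0:
        return "block_and_ban"    # hard line: real users are affected
    if abuse_score > 0.8:
        return "poison_and_mute"  # feed poisoned data, quietly prune reach
    if abuse_score > 0.5:
        return "poison_only"      # degrade the distilled data quality
    return "monitor"              # below all thresholds: just watch

print(choose_action(0.9, False, 0))  # -> poison_and_mute
```

The design choice is that quiet measures (poisoning, muting) apply below the line where real users notice anything, and hard enforcement only starts once service or complaints cross that line.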
All three of the big US labs have recently accused various Chinese labs of large-scale covert distillation of their models, presenting evidence that the labs in question have been using thousands of fraudulent accounts to cover their tracks.
In the same way that models have learned to detect whether they are being given a real task or a training scenario and alter their responses, is it likely they will do the same for this scenario?
This would be fascinating, as would their responses. Especially if the models respond in different ways.
We used to have this great thing we called Star Trek. Now we don’t, because its shell is run by Alex Kurtzman who thinks he’s here to do deeply lazy social commentary and sci-fi action sequences and maybe some paint-by-numbers things that nominally look Trek-shaped. Which is why he says science fiction isn’t about the future.
The best Trek currently is the MMO Star Trek Online. Yes it has as many plot holes and oddities as TOS, TAS, and TNG but it also is remarkably true to them. It has admittedly terrible graphics (because it is very old). But it does have great stories that link to all your favorite parts of every series you enjoyed.
I will add, specifically for this audience, that it is a highly complex and slow game that rewards thinking and planning over reflexes. Yes it is an MMO, and you have a character, but you also have your bridge crew, your away team, and your ship's crew (called duty officers), all of whom provide stacking and overlapping bonuses and who all have their own methods of play. I think quite a few people here would appreciate this side of it.
It really does tell stories that I think Roddenberry would enjoy and approve of. The Federation are the good guys, and the future is displayed as a better place you are supposed to want to achieve and believe you can achieve.
We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.
They note that when code or other artifacts are created by AI, users are less likely to check the underlying logic or identify missing context.
Anecdotes are not data, but I have observed that people create artifacts with AI they cannot create themselves. Therefore, fact checking might be outside the user’s capabilities.
I saw this in Excel even before Claude. People ask the office Excel guru for help, then explain help means do it for me, then don’t check anything and assume it is perfect. Now they’re doing the same thing but with AI.
Everyone can talk, not everyone can make the artifacts.
Derek also notes some people can’t get value out of it, and attributes this to the nature of their jobs versus current tools. I agree this matters, but if you don’t find AI useful then that really is a you problem at this point.
I think both Derek and Zvi underestimate the level of dysfunction in government.
I work in state government, and we cannot use AI: it is blocked from our computers (the sites are blacklisted in our browsers, and we cannot install any software at all) and forbidden by our leadership.
My agency has been trying to get permission to use an industry standard AI tool for two years now, but cannot get permission because our staff should just do the work manually, as it is in our job description that we will. We cannot remove it from our job description because the tool hasn’t been implemented. A true catch-22.
That’s not a me problem.
People have already admitted doing this. The popups requesting authorization came too fast, so they stopped reading and just granted authority. This includes executives at AI companies. Again, last year, not in the future.
creating real world events, ‘showing up as real humans’ and forming real relationships.
I’ve been saying for years (think I’m the original source actually):
How it started: pics or it didn’t happen
How it’s going: IRL or it didn’t happen
There appears to be a window of opportunity for people to become known as legitimately human. Given the speed of improvement in AI influencers, actors, etc. I predict that window will close in less than 2 years at which point it may be all but impossible to “prove” you are human online.
I really wonder how much of a push towards recreating in person interactions will occur. Will there be a return to a variant of Google’s original trust based search algorithm and who will be high trust and how will it be calculated? I am very interested in this particular aspect of the AI-driven changes to society.
What were the big hits for applied AI this year? Were any of the big medical discoveries helped by AI tools? Not theoretical or "new study shows this will maybe happen eventually," but were there any actual, tangible, AI-driven life improvements for normies this year?
We can steal [China’s] very best manufacturing engineers, and put them to work here.
I am not confident this strategy is viable. Given that the Wikipedia page lists over 40 convictions for espionage, primarily in tech (the accusations list is far longer), and that the rate of discovery of such people is increasing, it seems more likely that any company that tries to hire defectors from China is taking a high risk. I am insufficiently skilled to weigh the risk against potential gains given the uncertainties involved, but from the scope of the problem and the risks that I do comprehend, I would err on the side of caution.
A perhaps superior strategy is to encourage Europe to hire these engineering experts, which keeps the spies away from frontier technologies and still gets the honest developers away from China. It also has the knock-on effect of supporting Europe's AI progress.
On #3, not having read Mo's book, what helps get me thinking the same way (though not as often as I would like) is: "What does it feel like to be wrong? It doesn't feel bad; that's for when you know you are wrong. While you are still in the act of being wrong, it feels exactly like being right." I first encountered this as a meme and I don't know the source to appropriately attribute credit. But remembering the concept has helped me quite a bit. Mo's phrasing seems good, I shall add it to my box. The tool I am still working on is remembering to ask myself.
On a related note, I have discovered the useful trick that after writing a text message or comment or post, my brain cannot tell the difference between deleting it and posting/sending it.
Not the way you think. You are seeing the mask and trying to understand it, while ignoring the shoggoth underneath.
The topic has gotten a lot of discussion, but from the relevant context of the shoggoth, not the irrelevant point of the mask. Every post about how we know whether AI is telling the truth versus repeating what is in its training data, the talk of p-zombies, and so on: all of that is directly struggling with your question.
Edit: to be clear, I am not saying the fact that AIs make mistakes shows they're inhuman. I am saying look at the mistake and ask yourself: what does the fact that the AI made this specific mistake, and not a different one, tell you about why the AI thought the mistake was correct? /edit
Here is an example. AlphaGo beat all the best players in the world at Go. They described its thinking as completely alien and were emotionally distraught at how badly they lost. A couple of months later, multiple mediocre players obliterated it repeatedly and reliably. It turned out the AI didn't know it was playing a game on a board with conditions that persist from turn to turn; it didn't, and doesn't, understand that there are fields controlled by the pieces around them. It can figure out the best next move without that understanding. It now wins all the time again. But it still doesn't understand the context. There is no way to confirm it knows it is playing a game.
Going directly at images: why are you so sure DALL-E knows what an image is or what it is doing? Why do you think it knows there is a person looking at its output, rather than an electric field pulsing in a non-random way? Does it understand that we see red but not infrared? Is it adding details only tetrachromats can see? No one has the answers to these questions, and no one has a technique with a plausible mechanism for getting the answers.
Your image prompt might be creating a pattern in binary, or in hexadecimal color codes, but it is probably a bizarre set of tokens that fit like Cthulhu-esque jigsaw pieces in a mosaic, using more relationships than human minds can comprehend. I saw a claim that GPT-4 broke language out into a table of relationships with more than 36,000 dimensions. It ain't using Hooked on Phonics, but it certainly can trick you into believing it does. That tricking you is the mask. The 36,000-dimensional map of language is part of the shoggoth, not the whole thing.
To make the mask slip in images, give it tasks that rely on relationships, not facades. For example, and I apologize if you are easily offended, do a search on hyper muscle growth. You will get porn, sorry. But in it you will find images with impossible mass and realistic internal anatomy. The artists understand skeletons and where muscle attaches to limbs. Drop some of the most extreme art into Sora or Nano Banana or Grok and animate it. The skeletons lose all coherence. The facade, the skin and limbs, moves, but what is going on inside cannot be happening. Skeletons don't do what the image generators will try, because the generators can't see the invisible, and skeletons are invisible. For a normally proportioned human that's irrelevant, but for an impossible proportion it matters. 3D artists draw skeleton wireframes and then layer polygons above them so the range of motion fits what is possible and correct. AI copies the training data and extrapolates. Impossible shapes cause the mask to slip: it doesn't know what a body is, it thinks we're blobs that move and have precise shapes.
Monsters are another one. Try a displacer beast: this is a cat with six legs, four that conform to the shape of traditional feline front legs and two that are rear legs, plus two tentacles coming off its shoulders that are like the arms (not tentacles) of a squid, with the barbed, diamond-shaped pads. The difference between tentacle motion and leg motion is unknown to AI because it relies on what is unseen, the skeleton underneath. Again, it thinks a monster is a blob that moves.
Getting to your architecture question, you see this in window and door placement. There is no understanding of the relationships or the 3D nature of the space. Instead, the AI knows walls have doors and windows, and that doors are more likely down low and windows are more likely up high. So it adds them. But it doesn't understand space or function.
When you see people talk about AI slop articles using the "it's not X, it's Y" pattern, or triplets, or em dashes, this is the same topic. This is how the AI knows what it knows: why it thinks that is good writing, yet uses it in ways humans don't, even though it got the pattern out of human-made training data. Same topic, different application.
People really are talking about it a lot
Yes, the data backs you up. In 2022 studies were showing that people had limited trust in AI, and even that varied by field. In 2023 the study came out showing that in blind trials patients overwhelmingly preferred AI chatbots over human doctors. (https://bytefeed.ai/ai-chatbots-bedside-manner-preferred-over-conventional-doctors-by-shocking-margin-according-to-blind-study/) In 2024 & 2025 we got the studies showing AI outperforms human doctors but patients still didn't trust it so long as they knew it was AI. Again, in blind studies patients prefer AI. (https://www.kcl.ac.uk/news/doctors-stay-ai-assists-new-study-examines-public-perceptions-of-ai-in-healthcare). Doctors don't like AI and don't like doctors who use AI (https://carey.jhu.edu/articles/doctors-who-use-ai-are-viewed-negatively-their-peers-new-study-shows).
What is the latest on how the enteric nervous system plays into cognition and memory (being relevant here to your topic)? I have seen a lot of research on its role in behavior, especially disorders like anorexia, bulimia, gambling addiction, etc. Given it has half as many neurons as the brain, one thing that makes me hesitant about cryonics and digital personalities is the thought that people might only be getting 2⁄3 or less of themselves, because only the brain is being considered. But data on it is not my specialty.
I see three issues with your argument, two that don’t change anything meaningful and one that does.
The two minor points:
You presume a world without preference for human-made works. I posit this is a world that cannot exist under any circumstances where humans also exist. We are a species that pays a premium for art made by elephants and other animals, and that shows off photos of our children and their accomplishments to people who we know don't care. We had pet rocks. The drive to value that which is valueless is, for whatever reason, deeply embedded in us. It is not going anywhere. More importantly, acknowledging this does not in any way weaken your point; it complicates the math a little, but the outcomes are all the same. Denying this point, however, makes you appear to be fundamentally off base on human psychology, and that does weaken your persuasiveness.
Second, you posit the AI would rent the machine at the exact cost of its output, making zero profit. That needs an explanation. Presuming the AI has a goal other than "use all current funds to make potatoes but don't grow the amount produced over time," it will want some level of profit to achieve that goal. Even a pure potato-output maximizer wants to save up for more machines in the future and be prepared for inflation, market changes, etc. If it really has no profit, loses the ability to rent the machine the first time the market swings, and thus goes bankrupt, it's not a very smart superintelligence. I assume it will predict swings and keep the bare minimum needed for its goals, so razor-thin margins that look crazy to us could be generous to it. That's fine. But zero needs justification. 49,500 is functionally the same as 50,000 for your core argument, but resolves this.
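The margin point is just arithmetic. A toy simulation (all numbers invented) shows why a zero-margin renter goes bankrupt on the first bad swing while even a razor-thin margin builds a buffer:

```python
# Toy model (invented numbers): can accumulated margin absorb one market shock?
def survives(margin_per_period: float, periods: int, shock: float) -> bool:
    """Accumulate margin each period, then take one shock at the end."""
    reserve = 0.0
    for _ in range(periods):
        reserve += margin_per_period
    return reserve >= shock

# Zero profit (rent = 50,000 vs output worth 50,000): no reserve at all.
print(survives(0.0, 12, 500.0))    # -> False
# 1% margin (rent = 49,500): 500 per period, easily covers the shock.
print(survives(500.0, 12, 500.0))  # -> True
```

This is why 49,500 versus 50,000 changes nothing about the core argument, yet removes the zero-profit objection entirely.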
The one that seems to matter, though, is "eke out a living on 5 potatoes a day." The Amish are doing better than that. So are the Mennonites, the Inuit, the Sentinelese, etc. People can and will carve out enclaves where life works: special economic zones where AI doesn't exist. Maybe that looks like North Korea, maybe it looks like Pennsylvania, and maybe it looks like a patchwork of everything in between. Also, energy and logistics are a hard problem. We could not implement full robotics today, even if the tech were 100% ready, because most of the world doesn't have access to reliable electricity. Even in the developed world, we don't have spare capacity. You seem to need additional bullets that cover: robotics and energy production are solved such that no part of the economy is constrained by either; enclaves like the Amish are not included in this assessment; etc. Your scenario only addresses those humans who try to compete with AI, not those who walk away and go off the grid, making their own economy. They already exist; why do you assume they will stop existing? Maybe this is two issues also.
Thanks for the confirmation on the navy. But you really, really need to check your data on transit costs.
On the top left, by the X in a black box, it says May 2026. It is a little confusing with the line graph only going to April, but the overall snapshot is May.