Is this decision generally considered final and not subject to appeal, or do you expect comments here, arguments by Said, etc., to affect the final outcome you decide on?
[Anthropic] A hacker used Claude Code to automate ransomware
If we have already found features/vectors for hallucinations, then why haven’t major AI companies tried shifting them downwards when deploying their AIs? Does reducing their strength decrease hallucinations? Is there a reason why using this in practice would not be helpful?
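To make concrete what I mean by "shifting them downwards": here is a minimal numpy sketch of feature steering, assuming the hallucination feature is a known direction in activation space. The vectors and the scale are made up for illustration, not taken from any real model.

```python
import numpy as np

# Toy illustration of "turning a feature down": given one residual-stream
# activation and a (hypothetical) unit direction associated with hallucination,
# subtract part of the activation's component along that direction.
rng = np.random.default_rng(0)

hidden_state = rng.normal(size=768)           # stand-in for one activation vector
halluc_dir = rng.normal(size=768)
halluc_dir /= np.linalg.norm(halluc_dir)      # made-up "hallucination feature" direction

def steer_down(activation, direction, alpha=0.8):
    """Remove a fraction `alpha` of the activation's positive component along `direction`."""
    component = activation @ direction
    if component <= 0:
        return activation                     # feature not active; nothing to reduce
    return activation - alpha * component * direction

steered = steer_down(hidden_state, halluc_dir)
print("component before:", hidden_state @ halluc_dir)
print("component after: ", steered @ halluc_dir)
```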
About 30% of Humanity’s Last Exam chemistry/biology answers are likely wrong
This sounds like a whiteboard to me
Epoch AI’s new evaluation for Gemini 2.5 Flash Preview is broken.
On their AI Benchmarking dashboard, the newest Gemini 2.5 Flash model is listed as having an accuracy of 4% ± 0.71% on GPQA Diamond, even though Google’s official announcement lists it at over 80%, and GPQA is a multiple-choice test with 4 options, so random guessing alone would score around 25%:
It’s because of formatting issues. Helpfully, Epoch provides the logs from the evaluation, and the model simply hasn’t been responding in the correct format.
For example, if you look at the first sample from the logs, the correct answer is listed as “B”, but the model answered in LaTeX, $\boxed{B}$, so it was scored as incorrect. There are plenty of other examples like this.
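For reference, here is a rough Python sketch of the kind of lenient answer extraction that would avoid this failure mode. I don’t know what patterns Epoch’s harness actually uses, so treat the patterns and examples as purely illustrative.

```python
import re

def extract_choice(response: str):
    """Pull a multiple-choice letter (A-D) out of a model response,
    tolerating wrappers like LaTeX \\boxed{B}, "Answer: B", or "(B)"."""
    patterns = [
        r"\\boxed\{\s*([A-D])\s*\}",                 # LaTeX: $\boxed{B}$
        r"[Aa]nswer\s*(?:is|:)?\s*\(?([A-D])\)?",    # "Answer: B", "answer is (B)"
        r"^\s*\(?([A-D])\)?\s*$",                    # bare "B" or "(B)"
    ]
    for pat in patterns:
        match = re.search(pat, response)
        if match:
            return match.group(1).upper()
    return None                                      # unparseable; flag for manual review

print(extract_choice(r"$\boxed{B}$"))   # -> B
print(extract_choice("Answer: C"))      # -> C
print(extract_choice("B"))              # -> B
```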
“Write like you talk” depends on which language you are talking about.
Take Arabic. Written Arabic and spoken Arabic have diverged enormously compared to written English and spoken English. Modern Standard Arabic (MSA) is the formal written language for books, newspapers, speeches, etc. But no sane person speaks it in everyday conversation. There are a lot of spoken dialects (like Egyptian, Levantine, Gulf Arabic, etc.), and a speaker of one dialect may not understand other dialects or MSA, because the vocabulary and grammar differ substantially, which isn’t usually the case in English.
Written and spoken English are similar to each other compared to most other languages.
I’ve tried using a different method to get r1 to generate flash fiction: one sentence at a time. If a human writer wouldn’t write out a flash fiction story in one message, then an AI shouldn’t, either. Here’s a result:
She clocked in at 6AM, categorizing discards by residue: toothpaste-crusted wedding bands in Tier 4, melatonin vials from red-eyes in Tier 7, a child’s sock curled around her sister’s garnet earring (missing since the November her calls went unanswered).
Room 312’s newlyweds left forensic poetry—dental floss strung bedpost to minibar, aspirin dust tracing slammed door trajectories. She logged these under Domestic Erosion, Subcategory: Honeymoon Phase.
Room 214’s grid collapsed at the gel insoles—mint-green, bunioned, size 6 like her sister’s. The prescription (sertraline, 50mg) was dated three days after their last fight. She filed it under Unfinished Conversations, though the label peeled halfway.
The businessman in 603 prayed, she’d assumed. But his trash betrayed her grids: glucose tabs bisecting train tickets, bloodied test strips where kneelers should’ve dented carpet. Her scrubbing split her cuticles, crimson streaking the sink’s rust.
At dawn, she assembled her relics—unopened bills, an expired birth control foil (2019’s voicemail: static, then dial tone), lint rollers furred with 603’s hair. Each strand vibrated middle C, the note her sister had looped on the piano the night she vanished.
Aspirin dust still gritted her palms. She pressed them to the window as dawn blued the glass—that bleached hue he’d called “motel dusk” while wrestling their tent zipper, his breath hot and futile against her neck.
I think that this still has some imperfections, but I find that this method at least gives you an entirely different set of problems compared to the cliché output you describe.
I guess it’s just that the censors have not seen it yet.
There are a lot of situations where a smaller website doesn’t get banned. For example, Substack is banned in China, but if you host your Substack blog on a custom domain, people in China can still read it.
Like Wolfram, present a diffusion model as a world of concepts. But remove the noise, make the generated concepts like pictures in an art gallery (so make 2D pictures stand upright like pictures in this 3D simulated art gallery); this way gamers and YouTubers will see how dreadful those models really are inside. There is a new monster every month on YT, and they get millions of views. We want the public to know that AI companies make real-life Frankenstein monsters with some very crazy stuff inside of their electronic “brains” (inside of AI models). It can help to spread the outrage if people also see their personal photos are inside of those models. If they used the whole output of humanity to train their models, those models should benefit the whole of humanity, not cost $200/month like paid ChatGPT. People should be able to see what’s in the model; right now a chatbot is like a librarian that spits quotes at you but doesn’t let you enter the library (the AI model).
Okay, so you propose a mechanistic interpretability program where you create a virtual gallery of AI concepts extracted from Stable Diffusion, represented as images. I am slightly skeptical that this would move the needle on AI safety significantly: we already have open-source databases of scraped images used to train AI models, like LAION, and I don’t see that much outrage over them. I mean, there is some outrage, but not a large enough amount to be the cornerstone of an AI safety plan.
gamers and YouTubers will see how dreadful those models really are inside. There is a new monster every month on YT, and they get millions of views. We want the public to know that AI companies make real-life Frankenstein monsters with some very crazy stuff inside of their electronic “brains” (inside of AI models).
What exactly do you envision being hidden inside these Stable Diffusion concepts? What “crazy stuff” is in them? I’m currently not aware of anything about their inner representations that is especially concerning.
It can help to spread the outrage if people also see their personal photos are inside of those models.
It is probably a lot more efficient to show that by modifying the LAION database and slapping some sort of image search on it, so people can see that their pictures were used to train the model.
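As far as I know, LAION already publishes precomputed CLIP embeddings alongside the image URLs and captions (and tools like clip-retrieval build search on top of them), so the lookup itself isn’t the hard part. Here is a toy numpy sketch of the nearest-neighbour step, with the embeddings as random stand-ins rather than real LAION data:

```python
import numpy as np

# Toy nearest-neighbour search over CLIP-style image embeddings.
# In practice you'd load LAION's released embedding shards and embed the
# query photo with the same CLIP model; here both are random placeholders.
rng = np.random.default_rng(0)

dataset_emb = rng.normal(size=(100_000, 512)).astype(np.float32)
dataset_emb /= np.linalg.norm(dataset_emb, axis=1, keepdims=True)  # unit-normalize rows

query = rng.normal(size=512).astype(np.float32)                    # stand-in for an embedded photo
query /= np.linalg.norm(query)

similarity = dataset_emb @ query            # cosine similarity, since everything is unit-norm
top_10 = np.argsort(-similarity)[:10]       # indices of the 10 most similar dataset images

for idx in top_10:
    print(f"image #{idx}  similarity={similarity[idx]:.3f}")
```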
Well, this assumes that we have control over most of the world’s GPUs, and that we have “Math-Proven Safe GPUs” which can block the execution of bad AI models and only output safe AIs (how this is achieved is not really explained in the text). If we grant all of that, then AI safety already gets a lot easier.
This is a solution, but a solution similar to “nuke all the datacenters”, and I don’t see how this outlines any steps that get us closer to achieving it.
A helpful page to see and subscribe to all 31 Substack writers (out of 122 total) who were invited to LessOnline: https://lessonline2025invitedlist.substack.com/recommendations
I guess this is another case of ‘Universal’ Human Experiences That Not Everyone Has
Consider showering
Made a small, quick website showing GPQA benchmark scores plotted against LLM inference cost, at https://ai-benchmark-price.glitch.me/. See how much you get for your buck:
Most benchmark data is from Epoch AI, except for those marked “not verified”, which I got from the model developer. Pricing data is from OpenRouter.
All the LLMs on this graph that are on the Pareto frontier of performance vs. price were released in December 2024 or later...
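For what it’s worth, the Pareto frontier here is just the set of models that no other model beats on both price and score at the same time. Here is a small Python sketch of one way to compute it, with placeholder numbers rather than the site’s actual data:

```python
# Pareto frontier of (price, score): keep a model only if no other model is
# both cheaper-or-equal and higher-or-equal scoring (and strictly better on one).
# The numbers below are placeholders, not the site's actual data.
models = {
    "model-a": (0.30, 62.0),   # (price per 1M tokens in $, GPQA Diamond %)
    "model-b": (1.50, 70.0),
    "model-c": (2.00, 68.0),   # dominated by model-b: pricier and lower-scoring
    "model-d": (10.0, 84.0),
}

def pareto_frontier(points):
    frontier = []
    for name, (price, score) in points.items():
        dominated = any(
            other != name
            and o_price <= price and o_score >= score
            and (o_price < price or o_score > score)
            for other, (o_price, o_score) in points.items()
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))   # model-c drops out
```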
Zvi has a Substack; his posts there usually get more comments than his LessWrong posts: https://thezvi.substack.com/p/levels-of-friction/comments
This particular post has 30+ comments at that link.
https://www.nytimes.com/interactive/2025/03/03/world/europe/ukraine-russia-war-drones-deaths.html Here’s a link to an NYT article about that.
Here are some quotes:
Drones, not the big, heavy artillery that the war was once known for, inflict about 70 percent of all Russian and Ukrainian casualties, said Roman Kostenko, the chairman of the defense and intelligence committee in Ukraine’s Parliament. In some battles, they cause even more — up to 80 percent of deaths and injuries, commanders say.
The conflict now bears little resemblance to the war’s early battles, when Russian columns lumbered into towns and small bands of Ukrainian infantry moved quickly, using hit-and-run tactics to slow the larger enemy.
today most soldiers die or lose limbs to remote-controlled aircraft rigged with explosives, many of them lightly modified hobby models. Drone pilots, in the safety of bunkers or hidden positions in tree lines, attack with joysticks and video screens, often miles from the fighting.
Ukrainian officials said they had made more than one million first-person-view, or FPV, drones in 2024. Russia claims it can churn out 4,000 every day. Both countries say they are still scaling up production, with each aiming to make three to four million drones in 2025.
They’re being deployed far more often, too. With each year of the war, Ukraine’s military has reported huge increases in drone attacks by Russian forces.
2022: 2,600+ reported attacks
2023: 4,700+ reported attacks
2024: 13,800+ reported attacks
People are better defined by smaller categories.
If someone is part of both a large category and a small category that usually don’t overlap, they are likely an outlier in the large category, not a representative member of it. For example, if someone is both a rationalist and a Muslim, you should expect them to be much more similar to a typical rationalist than to a typical Muslim, and it’s possible that they may not be very good at representing Muslims in general to a rationalist audience.
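One way to make this intuition concrete is in terms of how much information each category membership carries; the magnitudes below are rough, made-up numbers purely for illustration:

```latex
% Learning that someone belongs to a category conveys about -log2 P(category) bits.
% Rough magnitudes: roughly a quarter of people are Muslim, while perhaps one in a
% million is a rationalist.
-\log_2 P(\text{Muslim}) \approx -\log_2(0.25) = 2 \text{ bits}
\qquad
-\log_2 P(\text{rationalist}) \approx -\log_2\!\left(10^{-6}\right) \approx 20 \text{ bits}
```

So knowing someone belongs to the smaller category tells you roughly ten times as many bits about them, which is one way to cash out “better defined by smaller categories”.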
The category of “social construction” is a social construct.
Tangential feature request: allow people to natively embed other comments in posts. This article uses screenshots of LessWrong to display conversations, but screenshots don’t resize responsively for mobile users and make it harder to copy-paste text from the post, both of which a native implementation could fix.