Dwarkesh asked a very interesting question in his Sutton interview, which Sutton wasn’t really interested in replying to.
Dwarkesh notes that one idea for why the bitter lesson was true is that general methods got to ride the wave of exponential computing power while knowledge engineering could not, as labour was comparatively fixed in supply. He then notices that post-AGI, labour supply will increase at a very rapid pace. And so he wonders: once the labour constraint is solved post-AGI, will GOFAI make a comeback? For we will then be able to afford the proverbial three billion philosophers writing Lisp predicates, or whatever other kinds of high-labour AI techniques become possible.

Of course, the same consideration applies to theoretical agent-foundations-style alignment research.
For the people who reacted Why?/Citation?: the basic reason is that agent foundations has a very similarly shaped bottleneck. We have at most 100 agent foundations researchers, and even conditional on agent foundations succeeding, it would require far more labor than empirical approaches. And because the human population is mostly leveling off and few people are interested in AI alignment, labor is very, very constrained compared to compute.
I don’t actually buy this argument, but I think it’s a very important argument for someone to make, and for people to consider carefully. So thank you to Dwarkesh for proposing it, and to you for mentioning it!
I’ve been writing up a long-form argument for why “Good Old Fashioned AI” (GOFAI) is a hopeless pipe dream. I don’t know if that would actually remain true for enormous numbers of superintelligent programmers! But if I had to sketch out the rough form of the argument, it would go something like this:
Your inputs are all “giant, inscrutable matrices”, or rather n-dimensional tensors. Sound, images, video, etc. If you think I’m being unfair calling images “inscrutable tensors”, please take a 4k image as a 2D array of RGB pixels, and write me a Lisp function that counts all the palm trees.
Your outputs are all probability distributions.
The function from your inputs to your outputs is inevitably going to be a giant, inscrutable tensor, probably with a bunch of extra complications stacked on top. (Like ReLU or neural firing patterns.)
Also, the world is vast, categories are fuzzy, and there will always be surprises outside of your training set.
Clay Shirky summarized a similar set of issues in the context of the Semantic Web.

Anyone who has ever been 15 years old knows that protestations of love, checksummed or no, are not to be taken at face value. And even if we wanted to take love out of this example, what would we replace it with? The universe of assertions that Joe might make about Mary is large, but the subset of those assertions that are universally interpretable and uncomplicated is tiny.
So I would argue that GOFAI fails because it simplifies its map of reality to eliminate all the giant tensors, but in doing so, it oversimplifies the inherent slipperiness of problems like “Count the palm trees in this giant pile of numbers” or really any other interesting question about anything.
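To make the “count the palm trees in this pile of numbers” point concrete, here is roughly what a hand-coded attempt looks like (a deliberately naive sketch; every threshold is a made-up assumption, which is rather the point):

```python
import numpy as np
from scipy import ndimage

def count_palm_trees(image: np.ndarray) -> int:
    """image: a 4K frame as a (2160, 3840, 3) uint8 array of RGB pixels."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # "Palm fronds are green-ish": a hand-picked rule that also fires on lawns,
    # bushes, green cars, and someone's t-shirt.
    greenish = (g > 90) & (g > r + 20) & (g > b + 20)
    # Count connected blobs of green-ish pixels and hope each blob is one palm tree.
    _, num_blobs = ndimage.label(greenish)
    return num_blobs
```

Every clause is legible, and almost none of it survives contact with a real photo; the usual fix is “add more rules,” which is more or less how GOFAI perception projects drowned.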
So my guess is that a giant army of superintelligent programmers could (at enormous cost) build an LLM-like chatbot by hardcoding vast amounts of world knowledge in a way that another superintelligence could understand. But this same chatbot would be helpless before any adversary who could create “out of the training distribution” situations. The very flexibility required to reason about fuzzy-but-important ideas like “someone is screwing with me in a novel way” would require a vast and complex world view that extrapolates well to unfamiliar situations. And that function from inscrutable input tensors to output probability distributions would itself be a giant, inscrutable tensor, plus a bunch of complications that made it harder to understand.
Ha, and I have been writing up a long-form argument for when AI-coded GOFAI might become effective, one might even say unreasonably effective. LLMs aren’t very good at learning in environments with very few data samples, such as “learning on the job” or interacting with the slow real world. But there often exist heuristics, difficult to run on a neural net, with excellent specificity, that can prove their predictive power with a small number of examples. You can try to learn the position of the planets by feeding 10,000 examples into a neural network, but you’re much better off with Newton’s laws coded into your ensemble. Data-constrained environments (like, again, robots and learning on the job) are domains where the bitter lesson might not have bite.
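A toy version of the planets example, to illustrate the data-efficiency point (assuming a circular orbit, with made-up numbers throughout):

```python
import numpy as np

# Five noisy (t, x, y) observations of a planet on a roughly circular orbit.
rng = np.random.default_rng(0)
true_r, true_w, true_phase = 1.5, 2 * np.pi / 365.0, 0.3   # made-up orbit parameters
t = np.array([0.0, 20.0, 55.0, 90.0, 140.0])               # observation times in days
x = true_r * np.cos(true_w * t + true_phase) + rng.normal(0, 0.01, t.size)
y = true_r * np.sin(true_w * t + true_phase) + rng.normal(0, 0.01, t.size)

# The hand-coded "heuristic": a circular-orbit model with just three parameters.
r_hat = np.mean(np.hypot(x, y))               # orbital radius
theta = np.unwrap(np.arctan2(y, x))           # unwrapped orbital angle
w_hat, phase_hat = np.polyfit(t, theta, 1)    # angular velocity and phase

def predict(t_future):
    return (r_hat * np.cos(w_hat * t_future + phase_hat),
            r_hat * np.sin(w_hat * t_future + phase_hat))

print(predict(365.0))   # extrapolates a full orbit ahead from five data points
```

Five samples pin the model down because the structure (the orbit) is supplied by hand; a generic function approximator has to buy that structure with data instead.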
As a former robotics developer, I feel the bitter lesson in my bones. This is actually one of the points I plan to focus on when I write up the longer version of my argument.
High-quality manual dexterity (and real-time visual processing) in a cluttered environment is a heartbreakingly hard problem, using any version of GOFAI techniques I knew at the time. And even the most basic of the viable algorithms quickly turned into a big steaming pile of linear algebra mixed with calculus.
As someone who has done robotics demos (and who knows all the things an engineer can do to make sure the demos go smoothly), the Figure AI groceries demo still blows my mind. This demo is well into the “6 impossible things before breakfast” territory for me, and I am sure as hell feeling the imminent AGI when I watch it. And I think this version of Figure was an 8B VLLM connected to an 80M specialized motor control model running at 200 Hz? Even if I assume that this is a very carefully run demo showing Figure under ideal circumstances, it’s still black magic fuckery for me.
But it’s really hard to communicate this intuitive reaction to someone who hasn’t spent years working on GOFAI robotics. Some things seem really easy until you actually start typing code into an editor and booting it on actual robot hardware, or until you start trying to train a model. And then these things reveal themselves as heartbreakingly difficult. And so when I see VLLM-based robots that just casually solve these problems, I remember years of watching frustrated PhDs struggle with things that seemed impossibly basic.
For me, “fix a leaky pipe under a real-world, 30-year-old sink without flooding the kitchen, and deal with all the weird things that inevitably go wrong” will be one of my final warning bells of imminent general intelligence. Especially if the same robot can also add a new breaker to the electrical panel and install a new socket in an older house.

See also “Inscrutability was always inevitable, right?”
I agree with the criticisms of literal GOFAI here, but I can imagine a kind of pseudo-GOFAI agenda plausibly working here. Classical logic is probably hopeless for this for the reasons you outline (real-world fuzziness), but it still seems an open question whether there’s some mathematical formalism with which you can reason about the input-output mapping.
I would gesture at dynamical systems analysis in RNNs, and circuit-based interpretability as the kinds of things that would enable this. For example, perhaps a model has learned to perform addition using a bag of heuristics, and you notice that there’s a better set of heuristics that it didn’t learn for path-dependent training reasons (e.g. clock and pizza). This would then enable the same kind of labor-intensive improvement through explicit reasoning about representations rather than end-to-end training.
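For concreteness, the “clock” mechanism for modular addition amounts to something like the sketch below (real models spread this over a few learned Fourier frequencies, which the freqs argument stands in for):

```python
import numpy as np

def clock_add(a: int, b: int, p: int, freqs=(1, 3, 5)) -> int:
    """Compute (a + b) mod p the way a 'clock' circuit does: embed each token as a
    rotation on a circle, compose the rotations, and pick the candidate whose
    rotation matches."""
    candidates = np.arange(p)
    # The logit for candidate c is sum_k cos(2*pi*k*(a + b - c) / p),
    # which is maximized exactly when c == (a + b) mod p.
    logits = sum(np.cos(2 * np.pi * k * (a + b - candidates) / p) for k in freqs)
    return int(np.argmax(logits))

assert clock_add(17, 29, 31) == (17 + 29) % 31
```

Once you can read a mechanism like this out of a trained network, you can also ask whether a cleaner mechanism exists and try to install it by hand, which is the kind of labor-intensive improvement being pointed at.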
It’s not clear to me that this will work, but the challenge is to explicitly articulate which properties of the function from inputs to outputs render it impossible. I don’t think fuzziness alone does it, like in the case of classical logic, because the mathematical structures involved might be compatible with fuzziness. Maybe the mechanisms in your model aren’t “local enough”, in that they play a role across too much of your input distribution to edit without catastrophic knock-on effects. Maybe the mechanisms are intrinsically high dimensional in a way that makes them hard to reason about as mechanisms. And of course, maybe it’s just never more efficient than end-to-end training.
Surely it’s still more efficient to put that labor back into deep learning rather than GOFAI, though, no? In a world where you have enough AI labor to get GOFAI to AGI, you probably also have enough AI labor to get deep-learning-based AGI to superintelligence.
The way I’d think about this is:

Currently, intellectual labor from machine learning researchers costs a lot of compute. A $1M/year ML researcher costs the same as having 30 or so H100s. At the point where you have AGI, you can probably run the equivalent of one ML researcher with substantially less hardware than that. (I’m amortizing, presumably you’ll be running your models on multiple chips doing inference on multiple requests simultaneously.) This means that some ways to convert intellectual labor into compute efficiency will be cost-effective when they weren’t previously. So I expect that ML will become substantially more labor-intensive and have much more finicky special casing.
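As a rough sanity check on that equivalence (the hourly rental price below is an assumed ballpark figure, not a quote):

```python
h100_usd_per_hour = 3.80                     # assumed ballpark rental price per H100
hours_per_year = 24 * 365
cost_of_30_h100s = 30 * h100_usd_per_hour * hours_per_year
print(f"${cost_of_30_h100s:,.0f} per year")  # ~$1.0M/year, i.e. roughly one $1M/year researcher
```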
For @Random Developer, I agree that literal GOFAI is unlikely to make a comeback, because of the fuzziness problems that arise when you only have finite compute (though some GOFAI techniques will probably be reinvented). But I do think a weaker version of GOFAI is still likely to be possible: one that drops the pure determinism and embraces probabilistic programming languages (perhaps InfraBayes is involved? perhaps causal models as well), but still retains very high interpretability compared to modern AIs, enough to make retargeting the search viable.
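As a toy illustration of the kind of interpretability being pointed at (nothing InfraBayes-specific; just a generative model written as a handful of readable lines, with inference by brute-force enumeration):

```python
import numpy as np

# Generative model: a sensor reads a true temperature with known Gaussian noise,
# and the true temperature has a broad Gaussian prior. Every assumption is an
# explicit, editable line rather than a weight in a matrix.
temps = np.linspace(-10.0, 40.0, 501)                  # discretized hypothesis space
prior = np.exp(-0.5 * ((temps - 15.0) / 10.0) ** 2)    # prior: roughly N(15, 10^2)

def likelihood(reading, temp, noise_sd=2.0):
    return np.exp(-0.5 * ((reading - temp) / noise_sd) ** 2)

def posterior(readings):
    post = prior.copy()
    for r in readings:
        post = post * likelihood(r, temps)
    return post / post.sum()

post = posterior([21.3, 20.8, 22.1])
print(temps[np.argmax(post)])   # posterior mode; every step above can be inspected or retargeted
```

Whether this style scales to anything interesting is exactly the open question, but it is one concrete picture of “drops the determinism, keeps the interpretability.”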
The key insight I was missing is that while the world’s complexity is very high (so I agree with Random Developer on that point), it’s also pretty easy to decompose that complexity into low-complexity parts for specific tasks. This lets us avoid cramming all of the world’s complexity into memory at once; we can chunk it instead.
This is the part that convinced me that powerful AI with very interpretable models was possible at all. The thing that made me update to thinking it’s likely is that the bitter lesson is now pretty easy to explain without invoking any special property of the world or of AIs: labor has stayed roughly constant while compute grows exponentially, so as long as uninterpretable AI is possible to scale at all, it will be the thing that gets invested in. And I’m a big fan of boring hypotheses relative to exciting/deep hypotheses (to human minds, that is; other minds would find other hypotheses boring, and other hypotheses exciting).
This is in a sense a restatement of the well-known theorem that hypotheses that add conjuncts are never more likely than hypotheses that don’t add a conjunct.

So I mostly agree with this hypothesis.
How do we define GOFAI here? If we’re contrasting the search/learning based approaches with the sorts of approaches which leverage specialized knowledge in particular domains (as done by Sutton in his The Bitter Lesson), then if the AGI learns anything particular about a field, isn’t that leveraging “specialized knowledge in particular domains”? [1]
It’s not clear to me that this should be included as AI research, so it’s not obvious to me that the question makes sense. For example, AlphaZero was not GOFAI, but was its training process “doing” GOFAI, since that training process was creating an expert system using (autonomously gathered) information about the specialized domain of Go?

Maybe we want to say that in order for it to count as AI research, the AI needs to end up creating some new agent or something. Then the argument is more about whether the AI would want to spin up specialized sub-agents or tool-AIs to help it act in certain domains, and then we can ask whether, when it’s trying to improve the sub-agents, it will try to hand-code specialized knowledge or general principles.

As with today, this seems very much a function of the level of generality of the domain. Note that GOFAI and improvements to GOFAI haven’t really died; they’ve just gotten specialized. See compilers, compression algorithms, object-oriented programming, the disease ontology project, and the applications of many optimization & control algorithms.
But note this is different from how most use the term “GOFAI”, by which they mean symbolic AI in contrast to neuro-inspired AI or connectionism. In this case, I expect that the AI we get will not necessarily want to follow either of these two philosophical principles. It will understand how & why DNNs work, eliminate their flaws, and amplify their strengths, and have the theorems (or highly probable heuristic arguments) to prove why its approach is sound.
I can’t remember the quote, but I believe this possibility is mentioned offhand in IABIED, with the authors suggesting that a superhuman but still weak AI might do what we can’t and craft rather than grow another AI, so that it can ensure the better successor AI is aligned to its goals.
Before Allied victory, one might have guessed that the peoples of Japan and Germany would be difficult to pacify and would not integrate well with a liberal regime. For the populations of both showed every sign of virulent loyalty to their government. It’s commonly pointed out that it is exactly this seemingly-virulent loyalty that implied their populations would be easily pacified once their governments fell, as indeed they were. To put it in crude terms: having been domesticated by one government, they were easily domesticated by another.
I have been thinking a bit about why I was so wrong about Trump. Though of course if I had a vote I would have voted for Kamala Harris and said as much at the time, I assumed things would be like his first term where (though a clown show) it seemed relatively normal given the circumstances. And I wasn’t particularly worried. I figured norm violations would be difficult with hostile institutions, especially given the number of stupid people who would be involved in any attempt at norm violations.
Likely most of me being wrong here was my ignorance, as a non-citizen and someone generally not interested in politics, of American civics and how the situation differs from that of his first term.
But one thing I wonder about is my assumption that hostile institutions are always a bad sign for the dictatorially-minded. Suppose, for the sake of argument, that there is at least some kernel of truth to the narrative that American institutions were in some ways ideologically captured by an illiberal strand of progressivism. Is that actually a bad sign for the dictatorially-minded? Or is it a sign that having been domesticated by one form of illiberalism they can likely be domesticated by another?
Interesting take, but I’m not sure if I agree. IMO Trump’s second term is another joke played on us by the God of Straight Lines: successive presidents centralize more and more power, sapping it from other institutions.

Can you give some examples of how this happened under Biden? Because frankly this line is not looking straight.
There was a good article discussing this trend that I’m unable to find atm. But going off the top of my head, the most obvious executive overreach by Biden was the student loan forgiveness.

It seems hard to argue that this was an escalation over Trump’s first term.
My opinion on that is that it was largely misrepresented by loud Trump loyalists for the purpose of normalizing radical power consolidation, in the style of ‘Accusation in a Mirror.’ Specifically, they already planned to radically consolidate executive power and used this as an opportunity to begin the desensitization process that would make their planned future consolidation efforts raise fewer eyebrows. The more they bemoaned Biden’s actions, the easier their path would be. Even so, let’s not falsely equate Biden and student loans with Trump and the implementation of Project 2025 and its goal of a Unitary Executive (read: dictator).
Reading AI 2027, I can’t help but laugh at the importance of the president in the scenario. I am sure it has been commented before but one should probably look at the actual material one is working with.

https://x.com/DKokotajlo/status/1933308075055985042
“Many readers of AI 2027, including several higher-ups at frontier AI companies, have told us that it depicts the government being unrealistically competent.

“Therefore, let it be known that in our humble opinion, AI 2027 depicts an incompetent government being puppeted/captured by corporate lobbyists. It does not depict what we think a competent government would do. We are working on a new scenario branch that will depict competent government action.”

I think Daniel Kokotajlo et al. have pushed their timelines back one year, so likely the president would be different for many parts of the story.
I expect this to backfire with most people because it seems that their concept of the authors hasn’t updated in sync with the authors, and so they will feel that when their concept of the authors finally updates, it will seem very intensely like changing predictions to match evidence post-hoc. So I think they should make more noise about that, eg by loudly renaming AI 2027 to, eg, “If AI was 2027” or something. Many people (possibly even important ones) seem to me to judge public figures’ claims based on the perceiver’s conception of the public figure rather than fully treating their knowledge of a person and the actual person as separate. This is especially relevant for people who are not yet convinced and are using the boldness of AI 2027 as reason to update against it, and for those people, making noise to indicate you’re staying in sync with the evidence would be useful. It’ll likely be overblown into “wow, they backed out of their prediction! see? ai doesn’t work!” by some, but I think the longer term effect is to establish more credibility with normal people, eg by saying “nearly unchanged: 2028 not 2027” as your five words to make the announcement.
We are worried about this too and thinking of ways to mitigate it. I don’t like the idea of renaming the scenario itself though, it seems like a really expensive/costly way to signal-boost something we have been saying since the beginning. But maybe we just need to suck it up and do it.
If it helps, we are working on (a) a blog post explaining more about what our timelines are and how they’ve updated, and (b) an “AI 2032” scenario meant to be about as big and comprehensive as AI 2027, representing Eli’s median (whereas 2027 was my median last year). Ultimately we want to have multiple big scenarios up, not just one. It would be too difficult to keep changing the one to match our current views anyway.
Yeah, I think the title should be the best compression it can be, because for a lot of people, it’s what they’ll remember. But I understand not being eager to do it. It seems worth doing specifically because people seem to react to the title on its own. I definitely would think about what two-to-five words you want people saying when they think of it in order to correct as many misconceptions at once as possible—I’ve seen people, eg on Reddit, pointing out your opinions have changed, so it’s not totally unknown. But people who are most inclined to be adversarial are the ones I’m most thinking need to be made to have a hard time rationalizing that you didn’t realize it.
Another scenario is just about as good for this purpose, probably. I’d strongly recommend making much more noise about intro-to-forecasting level stuff so that the first thing people who don’t get forecasts hear, eg on podcasts or by word of mouth, is the disclaimer about it intentionally being a maximum-likelihood-and-therefore-effectively-impossible no-surprises-happen scenario which will likely become incorrect quickly. You said it already, but most people who refer to it seem to use that very thing as a criticism, which is what leads me to say this.

And the market’s top pick for President has read AI 2027.
I actually think Vance will be president, modally, sometime in 2026 anyway. And would probably go for “full nationalization” in the story’s February 2027/2028 if he could get away with it, else some less overt seizure of full control if he could get away with that. Either way still with very little change in what’s actually happening in the data centers, and with at least equally dystopian results on basically the same timeline. Doesn’t matter what he’s read.
If you play it with Trump as president, then at each point “The President” is mentioned in the story, he gets nudged by advisors into doing whatever they want (60 percent, hard to guess what, though, because it depends on which advisors are on top at the moment), just nods along with whatever OpenBrain says (20 percent), or does something completely random that’s not necessarily even on the menu (20 percent). That could amount to doing exactly what the story says.

...your modal estimate for the timing of Vance ascending to the presidency is more than two years before Trump’s term ends?
Yes. I don’t expect Trump to finish the term. 2026 would be my guess for the most likely year, but each of 2027 and 2028 is almost equally likely, and there’s even some chance it could still happen before the end of 2025.
He’s acting erratic and weird (more than usual and increasingly). It may not be possible to prop him up for very long. Or at least it may be very hard, and it’s not clear that the people who’d have to do that are agreed on the need to try that hard.
His political coalition is under tremendous pressure. He’s unpopular, he keeps making unpopular moves, and there doesn’t seem to be any power base he’s not prepared to alienate. It’s hard to gauge how much all that is straining things, because you often don’t see any cracks until the whole thing suddenly collapses. The way collapse looks from the outside is probably that one of his many scandals, missteps, and whatnot suddenly sticks, a few key people or groups visibly abandon him, that signals everybody else, and it quickly snowballs into impeachment and removal.
He’s at risk of assassination. A whole lot of people, including crazy people, are very, very mad at him. A whole lot of others might just coldly think it’s a good idea for him to die for a variety of reasons. Including the desire to substitute Vance as president, in fact. He’s random, reckless, autocratic, and fast-moving enough to foreclose many non-assassination alternatives that might normally keep the thought out of people’s minds. Security isn’t perfect and he’s probably not always a cooperative protectee.
He’s almost 80, which means he has a several percent chance of dying in any given year regardless.

Would you agree your take is rather contrarian?
* This is not a parliamentary system. The President doesn’t get booted from office when they lose majority support—they have to be impeached[1].
* Successful impeachment takes 67 Senate votes.
* 25 states (half of Senate seats) voted for Trump 3 elections in a row (2016, 2020, 2024).
* So to impeach Trump, you’d need the votes of Senators from at least 9 states where Trump won 3 elections in a row (even with all 50 Senators from the other 25 states, you still need 17 more votes, which must come from at least 9 three-time-Trump states).
* Betting markets expect (70% chance) Republicans to keep their 50-seat majority in the November election, not a crash in support.

[1] Or removed via the 25th Amendment, which is strictly harder if the president protests (it requires a 2⁄3 vote to remove in both House and Senate).

Maybe.
The thing is that impeachment is still political, and Trump is a big pain in the butt for the Republicans at the moment. I’d guess that if they could individually, secretly push a button and make Trump resign in favor of Vance, 80 percent of Republicans in Congress would push that button right now.
Trump is making 2026 hard. Maybe they keep those 50 seats, by whatever means… and maybe they don’t. Maybe he does something insane in October 2026, maybe he doesn’t. People, including very right-wing working-class people they think of as the MAGA base, keep yelling at them all the time. He’s pulling less and less of his own weight in terms of pulling in votes. There’s even the possibility of massive civil unrest, general strikes, whatever.
But maybe more importantly, Trump’s just generally a pain to work with or near. You can’t plan, you keep having to publicly reverse yourself when he tells you one of your positions is no longer OK, you have to grin and bear it when he insults you, your family, and your constituents. He gets wild ideas and breaks things at random, things that weren’t in the plan. You can’t make a bargain with him and expect him to keep up his end if there’s any meaningful cost to him in doing so. If you’re sincerely religious, he does a bunch of stuff that’s pretty hard to swallow.
If Trump reaches the point of, say, literally being unable to speak a single coherent sentence, then maybe some of the pain of working with him goes away, because you’re really working with whoever can manage to puppet him. But then you have to fear power struggles over the puppet strings, and there’s also a very large workload in maintaining any kind of facade.
Vance, on the other hand, is just as acceptable to most of the Republicans policy-wise as Trump is, maybe more so. I think he’s more in the Thielite or Moldbugger wing and less of a xenophobe or religious fanatic, but he’s not going to have any objections to working with xenophobes or religious fanatics, or to negotiating due attention for their priorities on terms acceptable to them. He’s more predictable, easier to work with and bargain with.
It’s a win for the Republicans if they can, say, throw Trump under the bus over something like Epstein, show their independence and “moral fiber”, install Vance and let him play the savior, tone down some of the more obvious attacks on norms (while still rapidly eroding any inconvenient ones), and stay on more or less the same substantive policy course (except with fewer weird random digressions).
That doesn’t necessarily translate into a 67-percent vote; there’s a huge coordination problem. And it’s not at all clear that Democrats are better off with Vance. On the other hand, probably nearly all of them personally hate Trump, and they know that holding out for, say, a Trump-Vance double impeachment won’t do them a lot of good. They won’t get it, and if they did get it they’d just end up with Mike Johnson. They might even get more competent appointments, if not more ideologically acceptable ones. So they don’t have a strong incentive to gum up an impeachment if the Republicans want to do one.
I think frankly acknowledging the state of the U.S. is likely to jeopardize AI safety proposals in the short term. If AI 2027 had written the president as less competent or made other value judgements about this administration, this administration could be much less receptive to reason (less than they already are?) and proactively seek to end this movement. I see the movement as trying to be deliberately apolitical.
This is maybe a good short term strategy, but a flawed long-term one. Aligned AI arising in an authoritarian system is not x-risk bad, but is still pretty bad, right?
You can just not go bald. Finasteride works as long as you start early. The risk of ED is not as high as people think. At worst, it doubles the risk compared to placebo. If you have bad side effects, quitting resolves them, but it can take about a month for DHT levels to return to normal. Some men even have increased sex drive due to the slight bump in testosterone it gives you.
I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn’t particularly noble. You could “just shave it bro” or you could just take a pill every day, which is easier than shaving your head. Hair is nice. It’s perfectly valid to want to keep your hair. Consider doing this if you like having hair.
Finasteride prevents balding but provides only modest regrowth. If you think you will need to start, start as soon as possible for the best results.
Note that there have been many reports of persistent physiological changes caused by 5-AR inhibitors such as finasteride (see: Post Finasteride Syndrome), some of which sound pretty horrifying, like permanent brain fog and anhedonia.
I’ve spent a lot of time reading through both the scientific literature and personal anecdotes and it seems like such adverse effects are exceedingly rare, but I have high confidence (>80%) that they are not completely made up or psychosomatic. My current best guess is that all such permanent effects are caused by some sort of rare genetic variants, which is why I’m particularly interested in the genetic study being funded by the PFS network.
The whole situation is pretty complex and there’s a lot of irrational argumentation on both sides. I’d recommend this Reddit post as a good introduction – I plan on posting my own detailed analysis on LW sometime in the future.
It is a “disfigurement” by the literal definition, but I see your point. Given it is now treatable, we should be honest about it being a significant hit to attractiveness. I just think people should be informed about what is now possible.
ED is not the only problem with finasteride. I saw a couple of cases of gynecomastia in medical school and stopped using finasteride after that. Minoxidil worked fine solo for 4 years, but applying it every night was annoying, and when I stopped using it, I went bald fast (Norwood 6 in 5 months!).
I was watching the PirateSoftware drama. There is this psychiatrist, Dr. K, who interviewed him after the internet hated him, and everyone praised Dr. K for calling him out or whatever. But much more fascinating is Dr. K’s interview with PirateSoftware a year before, as PirateSoftware expertly manipulates Dr. K into thinking he is an enlightened being and likely an accomplished yogi in a past life. If you listen to the interview he starts picking up on Dr. K’s spiritual beliefs and playing into them subtly:
I figured PirateSoftware must be stupider than I estimated given his weird coding decisions, but I bet he is legit very smart. His old job was doing white hat phishing and social engineering and I imagine he was very good at it.
Inadequate Equilibria lists the example of bright lights to cure SAD. I have a similar idea, though I have no clue if it would work. Can we treat blindness in children by just creating a device that gives children sonar? I think it would be a worthy experiment to create a device that makes inaudible chirps and then translates their echoes into the audible range and transmits them to some headphones the child wears. Maybe their brains will just figure it out? Alternatively, an audio interface to a lidar or a depth estimation model might do, too.
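One plausible way to do the “translate the echoes into the audible range” step is plain heterodyning, the same trick bat detectors use; a rough sketch, with every hardware number assumed:

```python
import numpy as np
from scipy.signal import butter, chirp, lfilter

fs = 192_000                                     # assumed sample rate of an ultrasound-capable mic
t = np.arange(0, 0.01, 1 / fs)
ping = chirp(t, f0=45_000, t1=t[-1], f1=35_000)  # the inaudible emitted chirp (35-45 kHz)

def downshift(echo, lo_freq=33_000, cutoff=12_000):
    """Mix the ultrasonic echo with a local oscillator and low-pass the result,
    so that 35-45 kHz content lands at an audible 2-12 kHz."""
    lo = np.cos(2 * np.pi * lo_freq * np.arange(len(echo)) / fs)
    b, a = butter(4, cutoff, btype="low", fs=fs)
    return lfilter(b, a, echo * lo)

audible = downshift(ping)   # in the real device this would be the recorded echoes, not the ping itself
```

Whether a child’s auditory cortex would actually learn to use that signal is the experiment; the signal-processing side looks tractable.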
As I read somewhere on the Internet, even adult people with normal eyesight can learn echolocation. If it’s true, obviously blind children can learn it too!
Here Casey Muratori talks about computer programming being automated. Ignoring the larger concerns of AI for a minute, which he doesn’t touch, I just thought this was a beautiful, high-integrity meditation on the prospect of the career he loves becoming unremunerative: https://youtu.be/apREl0KmTdQ?si=So1CtsKxedImBScS&t=5251
He says that he only cares about the learning aspect, and that AI cannot help, because he isn’t bottlenecked by typing speed, i.e., it would take as much time for him to write the code as to read it. But it’s easier to learn from a textbook than figure things out yourself? Perhaps he meant that he only cares about the “figuring out” aspect.
I have had the Daylight Tablet for a couple of months. I really like it. It is very overpriced, but the screen is great and the battery life is good. People who read a lot of PDFs or manga, in particular, might like it.
EDIT: I’m having a lot of fun exploring styles with Suno 4.5. Many, if not most, of them must be entirely new to the Earth: Bengali electropop, acid techno avant-garde jazz, Mandarin trance. Strongly recommend scrolling through the wheel of styles.
Wow, those vocals are way better than Suno 3’s. Before, they had some kind of grainy texture to the vocals, as if there was a sudden, discrete transition between some notes. Kinda flat, in a way. Now, there is a lot more detail. Much more realistic.
I agree that the vocals have gotten a lot better. They’re not free of distortion, but it’s almost imperceptible on some songs, especially without headphones.
The biggest tell for me that these songs are AI is the generic and cringey lyrics, like what you’d get if you asked ChatGPT to write them without much prompting. They often have the name of the genre in the song. Plus the way they’re performed doesn’t always fit with the meaning. You can provide your own lyrics, though, so it’s probably easy to get your AI songs to fly under the radar if you’re a good writer.
Also, while some of the songs on that page sound novel to me, they’re usually more conventional than the prompt suggests. Like, tell me what part of the last song I linked to is afropiano.
Dwarkesh asked a very interesting question in his Sutton interview, which Sutton wasn’t really interested in replying to.
Dwarkesh notes that one idea for why the the bitter lesson was true is because general methods got to ride the wave of exponential computing power while knowledge engineering could not, as labour was comparatively fixed in supply. He then notices that post AGI labour supply will increase at a very rapid pace. And so he wonders, once the labour constraint is solved post AGI will GOFAI make a comeback? For we will then be able to afford the proverbial three billion philosophers writing lisp predicates or whatever various other kinds of high-labour AI techniques are possible.
Of course, the same consideration applies to theoretical agent-foundations-style alignment research
For the people that reacted Why?/Citation?, the basic reason is that agent foundations has a very similar shaped bottleneck in the fact that we only have 100 at most agent foundation researchers, and even conditioning on agent foundations succeeding, it would require lots more labor than empirical approaches, and due to the human population mostly leveling off and people not being interested in AI alignment, labor is very, very constrained compared to compute.
I don’t actually buy this argument, but I think it’s a very important argument for someone to make, and for people to consider carefully. So thank you to Dwarkesh for proposing it, and to you for mentioning it!
I’ve been writing up a long-form argument for why “Good Old Fashioned AI” (GOFAI) is a hopeless pipedream. I don’t know if that would actually remain true for enormous numbers of superintelligent programmers! But if I had to sketch out the rough form of the argument, it would go something like this:
Your inputs are all “giant, inscrutable matrices”, or rather n-dimensional tensors. Sound, images, video, etc. If you think I’m being unfair calling images “inscrutable tensors”, please take a 4k image as a 2D array of RGB pixels, and write me a Lisp function that counts all the palm trees.
Your outputs are all probability distributions.
The function from your inputs to your outputs is inevitably going to be a giant, inscrutable tensor, probably with a bunch of extra complications stacked on top. (Like ReLU or neural firing patterns.)
Also, the world is vast, categories are fuzzy, and there will always be surprises outside of your training set.
Clay Shirkey summarized a similar set of issues in the context of the Semantic Web:
So I would argue that GOFAI fails because it simplifies its map of reality to eliminate all the giant tensors, but in doing so, it oversimplifies the inherent slipperiness of problems like “Count the palm trees in this giant pile of numbers” or really any other interesting question about anything.
So my guess is that a giant army of superintelligent programmers could (at enormous cost) build an LLM-like chatbot by hardcoding vast amounts of world knowledge in a way that another superintelligence could understand. But this same chatbot would be helpless before any adversary who could create “out of the training distribution” situations. The very flexibility required to reason about fuzzy-but-important ideas like “someone is screwing with me in a novel way” would require a vast and complex world view that extrapolates well to unfamiliar situations. And that function from inscrutable input tensors to output probability distributions would itself be a giant, inscrutable tensor, plus a bunch of complications that made it harder to understand.
Ha, and I have been writing up a long-form for when AI-coded-GOFAI might become effective, one might even say unreasonably effective.
LLMs aren’t very good at learning in environments with very few data samples, such as “learning on the job” or interacting with the slow real world. But there often exist heuristics, ones that are difficult to run on a neural net, with excellent specificity that are capable of proving their predictive power with a small number of examples. You can try to learn the position of the planets by feeding 10,000 examples into a neural network, but you’re much better off with Newton’s laws coded into your ensemble. Data constrained environments (like, again, robots and learning on the job) are domains where the bitter lesson might not have bite.
As a former robotics developer, I feel the bitter lesson in my bones. This is actually one of the points I plan to focus on when I write up the longer version of my argument.
High-quality manual dexterity (and real-time visual processing) in a cluttered environment is a heartbreakingly hard problem, using any version of GOFAI techniques I knew at the time. And even the most basic of the viable algorithms quickly turned into a big steaming pile of linear algebra mixed with calculus.
As someone who has done robotics demos (and who knows all the things an engineer can do to make sure the demos go smoothly), the Figure AI groceries demo still blows my mind. This demo is well into the “6 impossible things before breakfast” territory for me, and I am sure as hell feeling the imminent AGI when I watch it. And I think this version of Figure was an 8B VLLM connected to an 80M specialized motor control model running at 200 Hz? Even if I assume that this is a very carefully run demo showing Figure under ideal circumstances, it’s still black magic fuckery for me.
But it’s really hard to communicate this intuitive reaction to someone who hasn’t spent years working on GOFAI robotics. Some things seem really easy until you actually start typing code into an editor and booting it on actual robot hardware, or until you start trying to train a model. And then these things reveal themselves as heartbreakingly difficult. And so when I see VLLM-based robots that just casually solve these problems, I remember years of watching frustrated PhDs struggle with things that seemed impossibly basic.
For me, “fix a leaky pipe under a real-world, 30-year-old sink without flooding the kitchen, and deal with all the weird things that inevitably go wrong” will be one of my final warning bells of imminent general intelligence. Especially if the same robot can also add a new breaker to the electrical panel and install a new socket in an older house.
See also Inscrutability was always inevitable, right?
I agree with the criticisms of literal GOFAI here, but I can imagine a kind of pseudo-GOFAI agenda plausibly working here. Classical logic is probably hopeless for this for the reasons you outline (real-world fuzziness), but it still seems an open question whether there’s some mathematical formalism with which you can reason about the input-output mapping.
I would gesture at dynamical systems analysis in RNNs, and circuit-based interpretability as the kinds of things that would enable this. For example, perhaps a model has learned to perform addition using a bag of heuristics, and you notice that there’s a better set of heuristics that it didn’t learn for path-dependent training reasons (e.g. clock and pizza). This would then enable the same kind of labor-intensive improvement through explicit reasoning about representations rather than end-to-end training.
It’s not clear to me that this will work, but the challenge is to explicitly articulate which properties of the function from inputs to outputs render it impossible. I don’t think fuzziness alone does it, like in the case of classical logic, because the mathematical structures involved might be compatible with fuzziness. Maybe the mechanisms in your model aren’t “local enough”, in that they play a role across too much of your input distribution to edit without catastrophic knock-on effects. Maybe the mechanisms are intrinsically high dimensional in a way that makes them hard to reason about as mechanisms. And of course, maybe it’s just never more efficient than end-to-end training.
Surely it’s still more efficient to put that labor back into deep learning rather than GOFAI, though, no? In a world where you have enough AI labor to get GOFAI to AGI, you probably also have enough AI labor to get deep-learning-based AGI to superintelligence.
The way I’d think about this is:
Currently, intellectual labor from machine learning researchers costs a lot of compute. A $1M/year ML researcher costs the same as having 30 or so H100s. At the point where you have AGI, you can probably run the equivalent of one ML researcher with substantially less hardware than that. (I’m amortizing, presumably you’ll be running your models on multiple chips doing inference on multiple requests simultaneously.) This means that some ways to convert intellectual labor into compute efficiency will be cost-effective when they weren’t previously. So I expect that ML will become substantially more labor-intensive and have much more finicky special casing.
For @Random Developer, I agree that literal GOFAI is unlikely to make a comeback, because of the fuzziness problems that arise when you only have finite compute (though some of the GOFAI techniques probably will be reinvented), but I do think a weaker version of GOFAI that drops the pure determinism and embraces probabilistic programming languages (perhaps InfraBayes is involved?) (perhaps causal models as well) but still retains very high interpretability compared to modern AIs that lead to retargeting the search being viable is still likely to be possible.
The key insight I was missing was that while the world’s complexity is very high, so I agree with Random Developer that the complexity of the world is very high, it’s also pretty easy to decompose the complexity into low-complexity parts for specific tasks, and this lets us not need to cram all of the complexity of our world into our memory at once, and we can instead chunk it.
This is the part that convinced me that powerful AI with very interpretable models was possible at all, and the thing that made me update to thinking it’s likely is that the bitter lesson is now pretty easy to explain without invoking any special property of the world/AIs, simply by looking at the labor being constant but compute growing exponentially means that so long as uninterpretable AI is possible at all to scale, it will be invested in, and I’m a big fan of boring hypotheses relative to exciting/deep hypotheses (to human minds. Other minds would find other hypotheses boring, and other hypotheses exciting).
This is in a sense a restatement of the well known theorem that hypotheses that add conjuncts are never more likely than hypotheses that don’t add a conjunct.
So I mostly agree with this hypothesis.
How do we define GOFAI here? If we’re contrasting the search/learning based approaches with the sorts of approaches which leverage specialized knowledge in particular domains (as done by Sutton in his The Bitter Lesson), then if the AGI learns anything particular about a field, isn’t that leveraging “specialized knowledge in particular domains”? [1]
Its not clear to me that should be included as AI research, so its not obvious to me the question makes sense. For example, Alpha-zero was not GOFAI, but was its training process “doing” GOFAI, since that training process was creating an expert system using (autonomously gathered) information about the specialized domain of Go?
Maybe we want to say that in order for it to count as AI research, the AI needs to end up creating some new agent or something. Then the argument is more about whether the AI would want to spin up specialized sub-agents or tool-AIs to help it act in certain domains, then we can ask whether when its trying to improve the sub-agents, it will try to hand-code specialized knowledge or general principles.
As with today this seems very much a function of the level of generality of the domain. Note that GOFAI and improvements to GOFAI haven’t really died, they’ve just gotten specialized. See compilers, compression algorithms, object oriented programming, the disease ontology project, and the applications of many optimization & control algorithms.
But note this is different from how most use the term “GOFAI”, by which they mean symbolic AI in contrast to neuro-inspired AI or connectionism. In this case, I expect that the AI we get will not necessarily want to follow either of these two philosophical principles. It will understand how & why DNNs work, eliminate their flaws, and amplify their strengths, and have the theorems (or highly probable heuristic arguments) to prove why its approach is sound.
I can’t remember the quote, but I believe this possibility is mentioned offhand in IABIED, with the authors suggesting superhuman but still weak AI might do what we can’t and craft rather than grow another AI, to that is can ensure the better successor AI is aligned to its goals.
Before Allied victory, one might have guessed that the peoples of Japan and Germany would be difficult to pacify and would not integrate well with a liberal regime. For the populations of both showed every sign of virulent loyalty to their government. It’s commonly pointed out that it is exactly this seemingly-virulent loyalty that implied their populations would be easily pacified once their governments fell, as indeed they were. To put it in crude terms: having been domesticated by one government, they were easily domesticated by another.
I have been thinking a bit about why I was so wrong about Trump. Though of course if I had a vote I would have voted for Kamala Harris and said as much at the time, I assumed things would be like his first term where (though a clown show) it seemed relatively normal given the circumstances. And I wasn’t particularly worried. I figured norm violations would be difficult with hostile institutions, especially given the number of stupid people who would be involved in any attempt at norm violations.
Likely most of me being wrong here was my ignorance, as a non-citizen and someone generally not interested in politics, of American civics and how the situation differs from that of his first term.
But one thing I wonder about is my assumption that hostile institutions are always a bad sign for the dictatorially-minded. Suppose, for the sake of argument, that there is at least some kernel of truth to the narrative that American institutions were in some ways ideologically captured by an illiberal strand of progressivism. Is that actually a bad sign for the dictatorially-minded? Or is it a sign that having been domesticated by one form of illiberalism they can likely be domesticated by another?
Interesting take, but I’m not sure if I agree. IMO Trump’s second term is another joke played on us by the God of Straight Lines: successive presidents centralize more and more power, sapping it from other institutions.
Can you give some examples of how this happened under Biden? Because frankly this line is not looking straight.
There was a good article discussing this trend that I’m unable to find atm. But going off the top of my head, the most obvious executive overreach by Biden was the student loan forgiveness.
It seems hard to argue that this was an escalation over Trump’s first term.
My opinion on that is that it was largely misrepresented by loud Trump loyalists for the purposes of normalization of radical power consolidation. In the style of ‘Accusation in a Mirror.’ Specifically, they already planned to radically consolidate executive power and used this as an opportunity to begin the desensitization process that would make their plan future consolidation efforts raise fewer eyebrows. The more they bemoaned Biden’s actions the easier their path would be. Even so, let’s not falsely equate Biden and student loans with Trump and the implementation of Project 2025 and its goal of a Unitary Executive (read: dictator).
Reading AI 2027, I can’t help but laugh at the importance of the president in the scenario. I am sure it has been commented before but one should probably look at the actual material one is working with.
https://x.com/DKokotajlo/status/1933308075055985042
“Many readers of AI 2027, including several higher-ups at frontier AI companies, have told us that it depicts the government being unrealistically competent. Therefore, let it be known that in our humble opinion, AI 2027 depicts an incompetent government being puppeted/captured by corporate lobbyists. It does not depict what we think a competent government would do. We are working on a new scenario branch that will depict competent government action.”
I think Daniel Kokotajlo et. al. have pushed their timelines back one year, so likely the president would be different for many parts of the story.
I expect this to backfire with most people because it seems that their concept of the authors hasn’t updated in sync with the authors, and so they will feel that when their concept of the authors finally updates, it will seem very intensely like changing predictions to match evidence post-hoc. So I think they should make more noise about that, eg by loudly renaming AI 2027 to, eg, “If AI was 2027” or something. Many people (possibly even important ones) seem to me to judge public figures’ claims based on the perceiver’s conception of the public figure rather than fully treating their knowledge of a person and the actual person as separate. This is especially relevant for people who are not yet convinced and are using the boldness of AI 2027 as reason to update against it, and for those people, making noise to indicate you’re staying in sync with the evidence would be useful. It’ll likely be overblown into “wow, they backed out of their prediction! see? ai doesn’t work!” by some, but I think the longer term effect is to establish more credibility with normal people, eg by saying “nearly unchanged: 2028 not 2027” as your five words to make the announcement.
We are worried about this too and thinking of ways to mitigate it. I don’t like the idea of renaming the scenario itself though, it seems like a really expensive/costly way to signal-boost something we have been saying since the beginning. But maybe we just need to suck it up and do it.
If it helps, we are working on (a) a blog post explaining more about what our timelines are and how they’ve updated, and (b) an “AI 2032” scenario meant to be about as big and comprehensive as AI 2027, representing Eli’s median (whereas 2027 was my median last year). Ultimately we want to have multiple big scenarios up, not just one. It would be too difficult to keep changing the one to match our current views anyway.
Yeah, I think the title should be the best compression it can be, because for a lot of people, it’s what they’ll remember. But I understand not being eager to do it. It seems worth doing specifically because people seem to react to the title on its own. I definitely would think about what two-to-five words you want people saying when they think of it in order to correct as many misconceptions at once as possible—I’ve seen people, eg on reddit, pointing out your opinions have changed, so it’s not totally unknown. but people who are most inclined to be adversarial are the ones I’m most thinking need to be made to have a hard time rationalizing that you didn’t realize it.
Another scenario is just about as good for this purpose, probably. I’d strongly recommend making much more noise about intro-to-forecasting level stuff so that the first thing people who don’t get forecasts hear, eg on podcasts or by word of mouth, is the disclaimer about it intentionally being a maximum-likelihood-and-therefore-effectively-impossible no-surprises-happen scenario which will likely become incorrect quickly. You said it already, but most people who refer to it seem to use that very thing as a criticism! which is what leads me to say this.
And the market’s top pick for President has read AI 2027.
I actually think Vance will be president, modally, sometime in 2026 anyway. And would probably go for “full nationalization” in the story’s February 2027/2028 if he could get away with it, else some less overt seizure of full control if he could get away with that. Either way still with very little change in what’s actually happening in the data centers, and with at least equally dystopian results on basically the same timeline. Doesn’t matter what he’s read.
If you play it with Trump as president, then at each point “The President” is mentioned in the story, he gets nudged by advisors into doing whatever they want (60 percent, hard to guess what, though, because it depends on which advisors are on top at the moment), just nods along with whatever OpenBrain says (20 percent), or does something completely random that’s not necessarily even on the menu (20 percent). That could amount to doing exactly what the story says.
...your modal estimate for the timing of Vance ascending to the presidency is more than two years before Trump’s term ends?
Yes. I don’t expect Trump to finish the term. 2026 would be my guess for the most likely year, but each of 2027 and 2028 is almost equally likely, and there’s even some chance it could still happen before the end of 2025.
He’s acting erratic and weird (more than usual and increasingly). It may not be possible to prop him up for very long. Or at least it may be very hard, and it’s not clear that the people who’d have to do that are agreed on the need to try that hard.
His political coalition is under tremendous pressure. He’s unpopular, he keeps making unpopular moves, and there doesn’t seem to be any power base he’s not prepared to alienate. It’s hard to gauge how much all that is straining things, because you often don’t see any cracks until the whole thing suddenly collapses. The way collapse looks from the outside is probably that one of his many scandals, missteps, and whatnot suddenly sticks, a few key people or groups visibly abandon him, that signals everybody else, and it quickly snowballs into impeachment and removal.
He’s at risk of assassination. A whole lot of people, including crazy people, are very, very mad at him. A whole lot of others might just coldly think it’s a good idea for him to die for a variety of reasons. Including the desire to substitute Vance as president, in fact. He’s random, reckless, autocratic, and fast-moving enough to foreclose many non-assassination alternatives that might normally keep the thought out of people’s minds. Security isn’t perfect and he’s probably not always a cooperative protectee.
He’s almost 80, which means he has a several percent chance of dying in any given year regardless.
Would you agree your take is rather contrarian?
* This is not a parliamentary system. The President doesn’t get booted from office when they lose majority support—they have to be impeached[1].
* Successful impeachment takes 67 Senate votes.
* 25 states (half of Senate seats) voted for Trump 3 elections in a row (2016, 2020, 2024).
* So to impeach Trump, you’d need the votes of Senators from at least 9 states where Trump won 3 elections in a row.
* Betting markets expect (70% chance) Republicans to keep their 50 seats majority in the November Election, not a crash in support.
Or removed by the 25th amendment, which is strictly harder if the president protests (requires 2⁄3 vote to remove in both House and Senate).
Maybe.
The thing is that impeachment is still political, and Trump is a big pain in the butt for the Republicans at the moment. I’d guess that if they could individually, secretly push a button and make Trump resign in favor of Vance, 80 percent of Republicans in Congress would push that button right now.
Trump is making 2026 hard. Maybe they keep those 50 seats, by whatever means… and maybe they don’t. Maybe he does something insane in October 2026, maybe he doesn’t. People, including very right wing working class people they think of as the MAGA base, keep yelling at them all the time. He’s pulling less and less of his own weight in terms of pulling in votes. There’s the even the possibility of massive civil unrest, general strikes, whatever.
But maybe more importantly, Trump’s just generally a pain to work with or near. You can’t plan, you keep having to publicly reverse yourself when he tells you one of your positions is no longer OK, you have to grin and bear it when he insults you, your family, and your constituents. He gets wild ideas and breaks things at random, things that weren’t in the plan. You can’t make a bargain with him and expect him to keep up his end if there’s any meaningful cost to him in doing so. If you’re sincerely religious, he does a bunch of stuff that’s pretty hard to swallow.
If Trump reaches the point of, say, literally being unable to speak a single coherent sentence, then maybe some of the pain of working with him goes away, because you’re really working with whoever can manage to puppet him. But then you have to fear power struggles over the puppet strings, and there’s also a very large workload in maintaining any kind of facade.
Vance, on the other hand, is just as acceptable to most of the Republicans policy-wise as Trump is, maybe more so. I think he’s more in the Thielite or Moldbugger wing and less of a xenophobe or religious fanatic, but he’s not going to have any objections to working with xenophobes or religious fanatics, or to negotiating due attention for their priorities on terms acceptable to them. He’s more predictable, easier to work with and bargain with.
It’s a win for the Republicans if they can, say, throw Trump under the bus over something like Epstein, show their independence and “moral fiber”, install Vance and let him play the savior, tone down some of the more obvious attacks on norms (while still rapidly eroding any inconvenient ones), and stay on more or less the same substantive policy course (except with fewer weird random digressions).
That doesn’t necessarily translate into 67 votes; there’s a huge coordination problem. And it’s not at all clear that Democrats are better off with Vance. On the other hand, probably nearly all of them personally hate Trump, and they know that holding out for, say, a Trump-Vance double impeachment won’t do them a lot of good. They won’t get it, and if they did get it they’d just end up with Mike Johnson. They might even get more competent appointments, if not more ideologically acceptable ones. So they don’t have a strong incentive to gum up an impeachment if the Republicans want to do one.
I think frankly acknowledging the state of the U.S. is likely to jeopardize AI safety proposals in the short term. If AI 2027 had written the president as less competent or made other value judgements about this administration, this administration could have been much less receptive to reason (even less than it already is?) and might have proactively sought to end the movement. I see the movement as trying to be deliberately apolitical.
This is maybe a good short term strategy, but a flawed long-term one. Aligned AI arising in an authoritarian system is not x-risk bad, but is still pretty bad, right?
You can just not go bald. Finasteride works as long as you start early. The risk of ED is not as high as people think; at worst, it doubles the risk compared to placebo. If you have bad side effects, quitting resolves them, but it can take about a month for DHT levels to return to normal. Some men even have an increased sex drive due to the slight bump in testosterone it gives you.
I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn’t particularly noble. You could “just shave it bro” or you could just take a pill every day, which is easier than shaving your head. Hair is nice. It’s perfectly valid to want to keep your hair. Consider doing this if you like having hair.
Finasteride prevents balding but provides only modest regrowth. If you think you will need to start, start as soon as possible for the best results.
Note that there have been many reports of persistent physiological changes caused by 5-AR inhibitors such as finasteride (see: Post Finasteride Syndrome), some of which sound pretty horrifying, like permanent brain fog and anhedonia.
I’ve spent a lot of time reading through both the scientific literature and personal anecdotes and it seems like such adverse effects are exceedingly rare, but I have high confidence (>80%) that they are not completely made up or psychosomatic. My current best guess is that all such permanent effects are caused by some sort of rare genetic variants, which is why I’m particularly interested in the genetic study being funded by the PFS network.
The whole situation is pretty complex and there’s a lot of irrational argumentation on both sides. I’d recommend this Reddit post as a good introduction – I plan on posting my own detailed analysis on LW sometime in the future.
“I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn’t particularly noble”
I think calling natural balding “disfigurement” is in line with the weird memes around male beauty.
Not having hair isn’t harmful.
Disclaimer: I may go bald.
It is a “disfigurement” by the literal definition, but I see your point. Given it is now treatable, we should be honest about it being a significant hit to attractiveness. I just think people should be informed about what is now possible.
source on the ED risk?
https://onlinelibrary.wiley.com/doi/full/10.2164/jandrol.108.005025
ED is not the only problem with finasteride. I saw a couple of cases of gynecomastia in medical school and stopped using finasteride after that. Minoxidil worked fine solo for 4 years, but applying it every night was annoying, and when I stopped using it, I went bald fast (Norwood 6 in 5 months!).
Doesn’t work for everyone even if you start early. Even transplants can fail. As of today there is nothing that is a 100% guarantee.
I was watching the PirateSoftware drama. There is this psychiatrist, Dr. K, who interviewed him after the internet turned on him, and everyone praised Dr. K for calling him out or whatever. But much more fascinating is Dr. K’s interview with PirateSoftware a year before, in which PirateSoftware expertly manipulates Dr. K into thinking that he, PirateSoftware, is an enlightened being and likely an accomplished yogi in a past life. If you listen to the interview, he starts picking up on Dr. K’s spiritual beliefs and playing into them subtly.
Given his weird coding decisions, I had figured PirateSoftware must be dumber than I’d originally estimated, but I bet he is legit very smart. His old job was doing white-hat phishing and social engineering, and I imagine he was very good at it.
Yeah, people underestimate how hard social engineering is, ngl; it’s one of those things that’s very easy to get started in but very hard to be good at.
Inadequate Equilibria lists the example of bright lights to cure SAD. I have a similar idea, though I have no clue if it would work. Can we treat blindness in children by just creating a device that gives them sonar? I think it would be a worthy experiment to build a device that makes inaudible chirps, translates their echoes into the audible range, and transmits them to headphones the child wears. Maybe their brains will just figure it out? Alternatively, an audio interface to a lidar or a depth-estimation model might do, too.
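To make the chirp-and-downshift idea concrete, here’s a minimal sketch of the signal path, assuming a 192 kHz-capable microphone and an ultrasonic transducer; the frequencies, the filter, and the function names are all illustrative assumptions on my part, not a tested design. It heterodynes the inaudible echo down into the audible range, the way handheld bat detectors do.

```python
# Minimal sketch (assumed numbers, not a tested device): emit an inaudible
# 40 kHz ping, then heterodyne the recorded echo down into the audible
# range, the way handheld "bat detectors" do.
import numpy as np

FS = 192_000       # sample rate high enough to represent a 40 kHz chirp
PING_HZ = 40_000   # outgoing ping, above the range of human hearing
LO_HZ = 38_000     # local oscillator: the 40 kHz echo lands at 2 kHz

def make_ping(duration_s=0.005):
    """A short ultrasonic tone burst to play through the transducer."""
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * PING_HZ * t)

def downshift_echo(echo):
    """Multiply the echo by a local oscillator, then low-pass the result.
    The 40 kHz component appears at |40k - 38k| = 2 kHz (audible); the
    unwanted 78 kHz image is knocked down by the crude filter."""
    t = np.arange(len(echo)) / FS
    mixed = echo * np.cos(2 * np.pi * LO_HZ * t)
    kernel = np.ones(16) / 16          # crude moving-average low-pass
    return np.convolve(mixed, kernel, mode="same")

# The imagined device would loop: play make_ping(), record the room for a
# few tens of milliseconds, run downshift_echo() on the recording, and
# stream the result to the child's headphones, so nearer surfaces come
# back sooner and louder.
```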
or an audio interface to a camera
From what I’ve read somewhere on the Internet, even sighted adults can learn echolocation. If that’s true, blind children can obviously learn it too!
I recall hearing about a blind kid who managed to skateboard through the streets because he’d learned how to mimic sonar.
Here Casey Muratori talks about computer programming being automated. Ignoring the larger concerns of AI for a minute, which he doesn’t touch, I just thought this was a beautiful, high-integrity meditation on the prospect of the career he loves becoming unremunerative: https://youtu.be/apREl0KmTdQ?si=So1CtsKxedImBScS&t=5251
He says that he only cares about the learning aspect, and that AI cannot help, because he isn’t bottlenecked by typing speed, i.e., it would take him as much time to write the code as to read it. But it’s easier to learn from a textbook than to figure things out yourself? Perhaps he meant that he only cares about the “figuring out” aspect.
I have had the Daylight Tablet for a couple of months. I really like it. It is very overpriced, but the screen is great and the battery life is good. People who read a lot of PDFs or manga, in particular, might like it.
At risk of sharing slop, Suno 4.5 Beta is amazing: https://suno.com/song/6b6ffd85-9cd2-4792-b234-40db368f6d6c?sh=utBip8t6wKsYiUE7
EDIT: I’m having a lot of fun exploring styles with Suno 4.5. Many, if not most, of them must be entirely new to the Earth: Bengali electropop, acid techno avant-garde jazz, Mandarin trance. Strongly recommend scrolling through the wheel of styles.
Wow, those vocals are way better than Suno 3’s. Before, the vocals had some kind of grainy texture, as if there was a sudden, discrete transition between some notes. Kinda flat, in a way. Now there is a lot more detail. Much more realistic.
I agree that the vocals have gotten a lot better. They’re not free of distortion, but it’s almost imperceptible on some songs, especially without headphones.
The biggest tell for me that these songs are AI is the generic and cringey lyrics, like what you’d get if you asked ChatGPT to write them without much prompting. They often have the name of the genre in the song. Plus the way they’re performed doesn’t always fit with the meaning. You can provide your own lyrics, though, so it’s probably easy to get your AI songs to fly under the radar if you’re a good writer.
Also, while some of the songs on that page sound novel to me, they’re usually more conventional than the prompt suggests. Like, tell me what part of the last song I linked to is afropiano.
The lyrics are terrible, yes. I haven’t tried listening w/ my headphones, so that’s probably why I didn’t detect the distortions.