Anyone tried clawdbot yet? I tried it today after hearing a lot of hype on X, and it really gives me the vibe of the drop-in virtual remote worker from Situational Awareness. It is basically an open-source agent that takes full control of your local computer environment and communicates with you in a human-like fashion over WhatsApp, Telegram, or Slack. I know there have been a lot of attempts at this kind of agent before, but this is the first one that really seems to work.
Not sure what the implications of tools like this are for AI timelines, but it really does seem like one of the crucial pieces of the fully automated economy has just been deployed.
artemium
In my opinion, it doesn’t make rational sense for them to invade at all. Even in the best-case scenario for China, where they manage to pacify Taiwan after a tough fight, I would still expect the following:
1) They would be permanently shut out of all Western trade and technology sharing.
2) All critical semiconductor manufacturing in Taiwan would be destroyed by the US or the local Taiwanese military before China could get to it, and much of it is already in the process of being successfully transferred to the US. I would also expect most of the human talent to be taken to the US.
3) Even if the US did not directly intervene, the US and its allies would start massive rearmament and reindustrialisation programmes and maximally utilise their advantage in AI and other critical technologies in the future.
4) Following from point 3, if American AI victory is inevitable due to their computing advantage, China might still get a better deal in the current scenario, where it is perceived as merely an economic competitor and geopolitical challenger, rather than a direct adversary, as it would be in the event of an invasion of Taiwan.
There are also some indications that Taiwanese politics are slowly moving in a pro-China direction, with increased support for peaceful re-unification among younger KMT voters, which might also incentivise China to bide its time and avoid doing anything reckless.
Beautiful! Even though I am twice your age, I feel very similarly. The only difference is that I think I was a bit luckier to have experienced some of life’s highlights in the Eld world, which is permanently coming to a close.
We’ll get through this, brother.
If I understood Eliezer's argument correctly, we can shorten those timescales by improving human intelligence through methods like genetic engineering. Once the majority of humans have von Neumann-level IQ, I think it's fine to let them decide how to proceed with AI research. The question is how fast this can happen, and it would probably take a century or two at least.
The part where I am confused is why this scenario is considered distinct from the standard ASI misalignment problem. A superintelligence that economically destroys and subjugates every country except, perhaps, the one it is based in is pretty close to the standard paperclip outcome, right?
Whether I am turned into paperclips or completely enslaved by a US-based superintelligence is a rather trivial difference IMO, and I think it could be treated as another variant of alignment failure.
That’s a reasonable concern, but I don’t think it’s healthy to ruminate too much about it. You made a courageous and virtuous move, and it’s impossible to perfectly predict all possible futures from that point onward. If this fails, I presume failure was overdetermined, and your actions wouldn’t really have mattered.
The only mistake you and your team made, in my opinion, was writing the slowdown scenario for AI-2027. While I know that wasn’t your intention, a lot of people interpreted it as a 50% chance of ‘the US wins global supremacy and achieves utopia,’ which just added fuel to the fire (‘See, even the biggest doomers think we can win! LFG!!!!’).
It also likely hyperstitionized increased suspicion among other leading countries that the US would never negotiate in good faith, making it significantly harder to strike a deal with China and others.
I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale throughout our history? Guess those poor animals were bad negotiators and hadn’t read the fine print. /s
In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. In reality, however, I see little value in EU-specific regulations like these. They are unlikely to impact frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might accept the cost of exiting the EU market if regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an “AGI-soon” world as follows:
Alignment Hard: EU imposes strict AI regulations → Frontier companies exit the EU or withhold their latest models, continuing the AI race → Unaligned AI emerges, potentially catastrophic for all, including Europeans. Regulations prove futile.
Alignment Easy: EU imposes strict AI regulations → Frontier companies exit the EU, continuing the AI race → Aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags, stuck in a technological “stone age.”
Both scenarios are grim for Europe.
I could be mistaken, but the current US administration and leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy—not money! If AI becomes a national security priority, access to near-infinite resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
1) Full integration with US AI efforts, securing a guarantee of equal benefits from aligned superintelligence. This could also give EU AI safety labs a seat at the table for alignment discussions.
2) Developing an autonomous EU AI leader, excelling in capabilities and alignment research so it can negotiate with the US and China as an equal. This would demand a drastic policy shift, massive investment in data centers and nuclear power, and deregulation, likely unrealistic in the short term.
If we accept all the premises of this scenario, what prescriptive actions might an average individual take in their current position at this point in time?
Some random ideas:
Continue investing in NVIDIA and other key winners in this timeline, particularly now with the temporary price discount following the recent tariff debacle
If you are not based in the U.S., consider relocating there (in both scenarios, the U.S. plays a decisive role; in the Slowdown scenario, the benefits would concentrate primarily among American citizens)
Attempt to raise awareness about the AI-2027 project, so that major political players can apply more pressure to reach the slowdown timeline
Are there any other recommendations?
If that was the case, wouldn’t Scott and Daniel develop the impressive AI-2027 website themselves with the help of AI Agents, instead of utilising your human webdev skills? /jk :D
[Question] What are the chances that Superhuman Agents are already being tested on the internet?
The answer surely depends mostly on what his impact will be on AI developments, both through his influence on the policy of the new administration and what he does with xAI. While I understand that his political actions might be mind-killing (to say the least) to many of his former fans, I would much prefer a scenario where Elon has infuriating politics but a positive impact on solving alignment over one with the opposite outcome.
A new open-source model has been announced by the Chinese lab DeepSeek: DeepSeek-V3. It reportedly outperforms both Sonnet 3.5 and GPT-4o on most tasks and is almost certainly the most capable fully open-source model to date.
Beyond the implications of open-sourcing a model of this caliber, I was surprised to learn that they trained it using only 2,000 H800 GPUs! This suggests that, with an exceptionally competent team of researchers, it’s possible to overcome computational limitations.
Here are two potential implications:
Sanctioning China may not be effective if they are already capable of training cutting-edge models without relying on massive computational resources.
We could be in a serious hardware overhang scenario, where we already have sufficient compute to build AGI, and the only limiting factor is engineering talent.
(I am extremely uncertain of this, it was just my reaction after reading about it)
artemium’s Shortform
Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:
When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran, or his elephant caravans tramp through perfumed jungles in Kled, where forgotten palaces with veined ivory columns sleep lovely and unbroken under the moon.
Btw, have you heard about PropheticAI? They are working on a device that is supposed to help you with lucid dreaming.
I still think it will be hard to defend against determined and competent adversaries committed to sabotaging our collective epistemics. I wonder if prediction markets could be utilised somehow?
I am not sure the 2000 dot-com crash is the best way to describe a “fizzle”. The Internet Revolution was a correct hypothesis at the time; it’s just that the 1999 startups were slightly ahead of their time and the tech fundamentals were not yet ready to support them, so the market was forced to correct expectations. Once the fundamentals (internet speeds, software stacks, web infrastructure, the number of people online, online payments, online ad business models, etc.) matured in the mid-2000s, the Web 2.0 revolution happened and tech companies became the giants we know today.
I expect most of the current AI startups and business models will fail and we will see plenty of market corrections, but this will be orthogonal to the ground truth about AI discoveries, which will happen in only a few cutting-edge labs that will be shielded from temporary market corrections.
But coming back to the object-level question: I really don’t have a specific backup plan. I expect that even non-AGI-level AI, based on advances to current models, will significantly impact various industries, so I will stick to software engineering for the foreseeable future.
My dark horse bet is on a third country desperately trying to catch up to the US/China just as those two are close to reaching an agreement on slowing down progress. Most likely: France.
Why so? My understanding is that if AGI arrives in 2026, it will be based on the current paradigm of training increasingly large LLMs on massive clusters of advanced GPUs. Given that the US has banned selling advanced GPUs to China, how do you expect them to catch up that soon?
Indeed. Also, take a look at the recent hype around the Clawdbot/Moldbot agent. Basically, every tech influencer is now rushing to give Claude access to their entire computer. By 2027, most prominent tech figures may already have swarms of agents managing their entire digital life and businesses.