Yes
In universalism the question is then why, out of all the parts of the consciousness I could be experiencing, I happen to be experiencing this particular part.
There is no other deck of cards here. There’s no copy of me to compare myself to, and say how curious that looks exactly like me.
That’s like saying that every game of cards must be rigged, because otherwise the chance of having this particular card order is minuscule...
Do we have the numbers?
What percentage of Anthropic shares are owned by employees?
Of those, what percentage are owned by EA-pilled employees?
Of those, how many will actually translate their words into actions once they have the money?
What percentage do we expect them to give, and over what time frame?
How does that compare to current EA related funding?
Making up random numbers (I’ve done zero research):
Not more than 50 billion owned by employees
Not more than 20 billion owned by EA employees
75% will put their money where their mouth is
And donate 50%. If they spread that over 20 years, at current interest rates that’s about 500 million a year (sketched below).
GiveWell raised 400 million last year. Not all of that is from EAs, but not all EAs donated there. Let’s say a total of 500 million.
Then if this all happens it about doubles funding for EA-related causes. Is a reasonable chance of that happening worth front-loading donations for?
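A rough sketch of that arithmetic, using only the made-up numbers above (the 3% interest rate is likewise an assumption):

```python
# Fermi estimate from the made-up numbers above; every input is an assumption.
ea_owned = 20e9          # Anthropic equity held by EA-pilled employees ($, upper bound)
follow_through = 0.75    # fraction who actually put their money where their mouth is
donated_share = 0.50     # fraction of their stake they donate
years = 20               # period the donations are spread over
rate = 0.03              # assumed interest earned on the not-yet-donated remainder

total = ea_owned * follow_through * donated_share  # $7.5B donated in total

# Spreading the total over 20 years while the remainder earns interest is a
# standard annuity payment: total * r / (1 - (1 + r)**-n)
per_year = total * rate / (1 - (1 + rate) ** -years)
print(f"~${per_year / 1e9:.2f}B per year")  # ~$0.50B, i.e. about 500 million a year
```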
Fair enough. I found it unreadable in a way I associate with AI (lots of dense words, but tricky to extract the content out of them), and the em dashes are somewhat of a giveaway.
Given how much slop there is, I do appreciate it if people clarify what they used AI for, because I don’t want to wade through a ton of slop which wasn’t even human-written.
Thanks for replying.
Hi, was this post written by, or with assistance from, AI?
Thanks
What about persuading politicians that AI safety is a cause that will win them votes? That requires very broad-spectrum outreach to get as many ordinary people on board as possible.
Lots of individual mistakes here, which together severely overstate the case being made:
Failure to defeat Houthi Rebels: The US Navy’s struggles against the Houthis’ low-cost drone arsenal demonstrate how even non-state actors can challenge NATO air defense. NATO forces found themselves hard-pressed to counter relatively primitive drone attacks.
The Houthis were unable to touch US naval power. What the US couldn’t do is defend ships in a very narrow stretch of water from drone, missile, and speedboat attacks. This is a very specific situation; it’s like saying US ground power is finished because the US couldn’t decisively defeat the Taliban. Asymmetric warfare works, more news at 10.
Also note the drones they were using were far from cheap, often costing hundreds of thousands of dollars.
Economic logic of drone warfare: In Ukraine, drones account for over 70% of combat kills—a proportion likely to increase. The economics are devastating: $1,000 FPV drones routinely destroy $7 million Abrams tanks. Modern tanks are dangerously outdated and the entire armored warfare needs to be redesigned from the ground up to face the realities of modern drone warfare.
You see the videos where the tank gets blown up by the (closer to $5,000) drone; you don’t see the vast majority of videos where the drone does nothing. Meanwhile, how many people were killed by the Abrams tank, and how useful was it, when used correctly, for breaking through enemy lines and regaining movement, something drones are not really capable of?
Obsolescence of NATO doctrine: NATO’s military doctrine remains rooted in pre-drone warfare assumptions, creating an embarrassing disconnect where peacetime generals lecture battle-hardened Ukrainian officers who possess actual combat experience against a lethal, drone-equipped adversary.
Given that NATO’s doctrine assumes air superiority, which drones barely impact at all, I don’t see how you could possibly draw such a conclusion from the war in Ukraine, where both sides decisively lack air superiority.
Navy should transition from few expensive carriers to distributed drone-launching platforms—hundreds of cheap drone carriers, underwater drone deployments, and autonomous loyal wingmen for the naval air-force.
The seas are huge, and cheap drones are short range and slow. Enemy ships are difficult to find in a vast empty sea. Ships are extremely difficult to destroy or cripple. Communications are almost certainly jammed. For drones to be useful they need to be:
fast
long range
able to carry a large payload
autonomous
We call such drones “cruise missiles” and they are extensively deployed in all NATO Navies. If you know a way to make them cheaper, then the DoD will almost certainly be very interested.
autonomous loyal wingmen for the naval air-force.
Current drone warfare is all about low-cost, slow, short-range, non-autonomous technology. Autonomous wingmen would be high-cost, fast, long-range, and autonomous. I don’t see how you could possibly draw conclusions about them from current events.
The Transparent Battlefield: Assume constant observation. Every movement is tracked, every concentration targeted within minutes. Forces must operate dispersed, communicating through secure mesh networks, moving constantly. Resources, command, logistics—everything dispersed, redundant, modular.
This is basically current NATO doctrine, and has been for years.
Technology parity: China has successfully replicated fifth-generation fighter capabilities (notably copying JSF technology through espionage) and is now mass-producing these aircraft at scale.
This is true, but note that China is investing heavily into this stuff (and aircraft carriers), not autonomous drone swarms.
are increasingly vulnerable to drone swarms and hypersonic missiles
I have seen zero evidence of this. Indeed, hypersonic missiles are about as vulnerable to air defences as non-hypersonic ones, and come with a whole host of problems of their own.
What pickup trucks are you using? How much do the pickups cost? What armour do they have, if any?
There’s one drone operator per pickup. These are FPV drones, so you’re limited to at most a handful of drones in the air at a time. You can’t swarm the Abrams, and unlike what the videos would have you believe, the chance of an individual drone taking out an Abrams is tiny. The tank has plenty of time and opportunity to blow up the pickup.
The drones aren’t as cheap as you believe: the fibre-optic FPV drones used in Ukraine are many thousands of dollars each. Each pickup, with its operators and drones, is worth many hundreds of thousands, and is likely a sitting duck for artillery, tank fire, and yes, counter-drones, especially when it moves.
The play would be to sneak the truck in under cover of darkness, set up shop somewhere camouflaged, and then use the drones to help defend the local area and attrit enemy forces. Basically the same thing as is happening now in Ukraine. It helps in a slow grinding war, but doesn’t help you in a manoeuvre war.
NATO doctrine is all about manoeuvrability and air power. Once air superiority is achieved, your pickups are sitting ducks. Only individual people can operate effectively under enemy air superiority.
The aim of the tank in that situation is rapid movement and firepower whilst being protected from most attacks. The pickup can easily be blown up by an enemy ATGM, RPG, or drone operator, so it just isn’t as useful in manoeuvre warfare. The driver can easily be killed by an assault rifle.
Giving individual troops drones is obviously a force multiplier, but with current drone technology I don’t think a drone carrier makes much sense: too exposed in a manoeuvre war, and no different from what’s currently going on in a war of attrition.
(Of course all this changes once we can coordinate fully autonomous drones at scale and low price)
In an all-out war, the side with a disadvantage in space would probably try to induce Kessler syndrome.
We have finally solved an age-old problem in philosophy:
Gemini 3 pro is 1.2 cents per thousand tokens.
Gemini 3 pro image is 13.4 cents per image.
Therefore an image is worth 11,167 words, not 1,000 as the classicists would have it.
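The arithmetic behind the punchline, taking the listed prices at face value and treating one token as one word:

```python
# Cost of one image divided by the cost per text token.
cents_per_token = 1.2 / 1000                     # 1.2 cents per thousand tokens
cents_per_image = 13.4                           # 13.4 cents per image
print(round(cents_per_image / cents_per_token))  # 11167
```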
On average I think people suffer more from the opposite mistake: refusing to go all in on something and commit because they want to keep optionality open.
It could be drifting from one relationship to another, pushing off having children (but freezing eggs just in case), never buying a house and settling down in a community you like, never giving up everything to get that job you’ve always dreamed of, whatever it is that matters to you.
Life is often much richer and more fulfilling when you give up optionality for the sake of having your best shot at the things that are most important to you.
That said, the extent to which these things remove your optionality is overstated. You can always get divorced, sell your house, move locations, find a new job, go back home, put your kid up for adoption, etc. Scrap that last one: having a child really does pretty permanently limit your optionality. But these commitments go better when your mindset is that making this work is your only option and there are no alternatives.
For example, marriage goes best when:
you go in with 100% intention of never getting divorced under any circumstances.
should circumstances change such that divorce becomes your best course of action, you are able to recognise this and switch to a mindset where you can weigh up the pros and cons carefully.
Doing so requires a kind of doublethink, but most people are capable of it fairly easily.
Quick thoughts on Gemini 3 pro:
It’s a good model, sir. Whilst it doesn’t beat every other model on everything, it’s definitely pushed the Pareto frontier a step further out.
It hallucinates pretty badly. ChatGPT 5 did too when it was released, hopefully they can fix this in future patches and it’s not inherent to the model.
To those who were hoping/expecting to have hit a wall: clearly that hasn’t happened yet (although neither have we proved that LLMs can take us all the way to AGI).
Costs are slightly higher than 2.5-pro and much higher than GPT-5.1, and none of Google’s models have seen any price reduction in the last couple of years. This suggests that it’s not quickly getting cheaper to run a given model, and that pushing the Pareto frontier forward is costing ever more in inference. (However, we are learning how to get more intelligence out of a fixed model size, as newer small models show.)
I would say Google currently has the best image models and the best LLM, but that doesn’t prove they’re in the lead. I expect OpenAI and Anthropic to drop new models in the next few months, and Google won’t release a new one for another 6 months at best. Its lead is not strong enough to last that long.
However we can firmly say that Google is capable of creating SOTA models that give OpenAI and Anthropic a run for their money, something many were doubting just a year ago.
Google has some tremendous structural advantages:
independent training and inference stack with TPUs, JAX, etc. It is possible they can do ML at a scale and price point no one else can achieve.
trivial distribution. If Google comes up with a good integration, they have dozens of products where they can instantly push it out to hundreds of millions of people (monetising is a different question).
deep pockets. No immediate need to generate a profit, or beg investors for money.
lots of engineers. This doesn’t help with the core model, but does help with integrations and RLHF.
Now that they’ve proven they can execute, they should likely be considered frontrunners for the AI race.
On the other hand ChatGPT has much greater brand recognition, and LLM usage is sticky. Things aren’t looking great for Anthropic though, with neither deep pockets nor high usage.
In terms of existential risk: this is likely to make the race more desperate, which is unlikely to lead to good things.
95%+ of all studies of the human body study living bodies. Surgeons cut into living flesh umpteen times a day, and biologists do horrible things to living lab rats in a million different ways. Every study that comes out of today’s universities on behaviour, medicine, optics, or what have you, is performed on living volunteers.
Many of the most important fields in biology focus on dynamic systems, such as physiology, neurology, and yes, anatomy.
I’m not sure what justification there is for saying that biology is too focused on the dead, or on static systems.
Hi and welcome to LessWrong.
Please see the policy on AI generated content: https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
In particular:
Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong’s standards. Please do not submit unedited or lightly-edited LLM content. You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.
Arrows of time and space
I’m not claiming that we need any extra laws of physics to explain consciousness. I’m saying that even if you showed me the equations that proved I would behave like a conscious being, I still wouldn’t feel like the problem was solved satisfactorily, until you explained why that would also make me feel like a conscious being.
I think that’s fairly limited evidence; I would want to see more data than that before claiming anything is vindicated.
Ok, in that case you’re basically just referring to SSA vs SIA. That’s an old chestnut, and either way leads to seemingly paradoxical results.