Retired software engineer with a love of knowledge and a lack of interest in dead philosophers.
NickH
This is totally misguided. If heuristics worked 100% of the time they wouldn’t be rules of thumb; they’d be rules of nature. We only have to be wrong once for AI to kill us.
I invest in US assets myself, but not because of any faith in the US; in fact, the opposite. Firstly, it’s like a fund manager investing in a known bubble: you know it’s going to burst but, if it doesn’t burst in the next year or so, you cannot afford the short/medium-term loss relative to your competitors. Secondly, if the US crashes it takes down the rest of the world with it and is probably the first to recover, so you might as well stick with it. None of this translates to faith in US AI governance. Your mention of positive-sum deals is particularly strange since, if the world has learned one thing about Trump, it is that he sees the world almost exclusively in zero-sum terms.
Stating the obvious here, but Trump has ensured that the USG cannot credibly guarantee anything at all, and hence this is a non-starter for foreign governments.
Evangelicals either hate people or don’t actually believe that their god is loving and compassionate. Proof:
1. If god DOES NOT love people who have never heard about him, or who have only heard about him from people who did a bad job of “explaining” him, then he is NOT loving or compassionate. But, in this case, it would be caring and compassionate for evangelicals to evangelise, to try to get people onto god’s good side, because the consequences of being on his bad side are BAD.
2. If god DOES love people who have never heard about him, or who have only heard about him from people who did a bad job of “explaining” him, then evangelicals who also love people and are compassionate and caring should actively AVOID spreading the word of god, as this will necessarily deprive some people of the “get out of jail free” card (see 1).
I think it does. Certainly the way that I would do it would be to create a world map from memory, then overlay the coordinate grid, then just answer by looking it up. Your answers will be as good as your map is. I believe that the LLMs most likely work from Wikipedia articles instead; there are a lot of location pages with coordinates on Wikipedia. A toy sketch of that kind of lookup is below.
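To make the lookup hypothesis concrete, here is a minimal sketch in Python, with a made-up coordinate table standing in for whatever the model absorbed from Wikipedia’s location pages (names and values are illustrative, not real data): answering “what is at these coordinates?” needs no map and no visual reasoning at all, just a nearest-neighbour search over memorised coordinates.

```python
import math

# Hypothetical stand-in for coordinates absorbed from Wikipedia
# location pages (names and values are illustrative only).
KNOWN_PLACES = {
    "Paris": (48.86, 2.35),
    "Cairo": (30.04, 31.24),
    "Tokyo": (35.68, 139.69),
    "Lima": (-12.05, -77.04),
}

def haversine(a, b):
    # Great-circle distance between two (lat, lon) points, in radians.
    # We only compare distances, so multiplying by Earth's radius is skipped.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(h))

def nearest_place(lat, lon):
    # Answer "what is at (lat, lon)?" by pure table lookup.
    return min(KNOWN_PLACES, key=lambda name: haversine((lat, lon), KNOWN_PLACES[name]))

print(nearest_place(48.9, 2.4))  # -> Paris
```

The quality of the answers is then bounded entirely by the coverage and accuracy of the table, which is the same point as “your answers will be as good as your map is”.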
Humans would draw a map of the world from memory, overlay the grid and look up the reference. I doubt that the LLMs do this. It would be interesting to see whether they can actually relate the images to the coordinates. I suspect not; i.e. I expect that they could draw a good map with gridlines from training data, but would be unable to relate the visual to the question. I expect that they are working from coordinates in Wikipedia articles and on the CIA website. Another suggestion would be to ask the LLM to draw a map of the world with non-standard grid lines, e.g. every 7 degrees.
This is interesting but, in some ways, it should have been obvious: everything we say says something about who we are, and what we say is influenced by what we know in ways that we are not conscious of. Magicians use subconscious forcing all the time, along the lines of “Think of a number between 1 and 10”.
It’s worse than that: (1) is just the big problem for philosophy, hiding behind circular definitions and multiple undefined words to obscure the big issue. We have “progress”, “values” and “good” used as if they were independent when even a cursory examination shows that they are not; they are, in fact, “defined” using circular reasoning. We have made progress because our values are better (more good) now than they were in the past. How do we know that our values are better now than in the past? Because we have made progress.

We believe that we are better now than we were in the past because, for example, we do not discriminate against homosexuals. But the people of the past would argue that they were better than us for exactly the same reason. I believe that the root cause of the illusion of moral progress is no more, and no less, than the obvious observation that winners always get to write the history and always paint themselves in the best light. We are the winners. We defeated the “us” of the past, and now “we” get to say that we are morally superior, because the people of the past are not here to argue with us and, even if they were, we would destroy them with our superior technology.
Complacency! Try visiting a country that hasn’t had generations of peaceful democracy; they take these issues much more seriously. The optics of this are heavily skewed by the US, which has had essentially the same religion and politics for centuries, and so Americans believe that none of the serious consequences could ever happen to them.
People have very different ideas about when “the future” is, but everyone is really thinking extreme short term on an evolutionary scale. Once upon a time our ancestors were trilobites (or something just as unlike us). If you could have asked one of those trilobites what they thought of a future in which all trilobites were gone, having evolved into us, I don’t think they would have been happy with it. Our future light cone is not going to be dominated by creatures we would recognise as human. It may be dominated by creatures “evolved” from us, or maybe from our uploaded consciousness, or maybe by “inhuman” AI, but it’s not going to be Star Trek or any other sci-fi series you have seen. Given that future, the argument for minimising P(doom) at the cost of reducing P(good stuff for me and mine in my lifetime) looks pretty weak. If I am old and have no children, it looks terrible. Roll the dice.
I don’t see anything about exactly WHAT people were reading. Literacy, certainly nowadays, is not taught so that the masses are better able to experience classic literature but to enable them to transmit and receive factual information and instructions efficiently and effectively, and this usage is inherently more suited to shorter sentences. We now live in a time when the fraction of all knowledge that anyone can ever hope to ingest is declining exponentially, and so we benefit from greater clarity and higher information density whilst, simultaneously, for whatever reason, reading for pleasure is in decline.
Always try reversing and rephrasing things to see if they still make sense: “I want a toothbrush that’s more durable than my teeth” sounds kind of silly.
I think you overstate how wonderful hunter-gatherer life was, even in the good times. No, you didn’t have to work 60-hour weeks and suck up to the boss, but you did have to conform to the norms of your tribe or you would find it impossible to get a wife, be shunned by what was, effectively, everyone in existence, or even be cast out to die alone. Getting on in modern society is much less onerous.
And yet you do not identify any of these supposed “other useful properties”. How can you reconcile a prediction of algorithmic breakthroughs with reality? When would that reconciliation take place? Nobody is ever going to look back and say “I predicted algorithmic breakthroughs and there were none”. At best they’ll say that “the breakthroughs took longer than I expected, but my predictions were good if you ignore that”.
Downvoted. See Burdensome Details. I particularly dislike predicting “Algorithmic Breakthroughs”.
From a practical perspective, maybe you are looking at the problem the wrong way around. A lot of prompt engineering seems to be about asking LLMs to play a role. I would try telling the LLM that it is a hacker and asking it to design an exploit to attack the given system (this is the sort of mental perspective I used to use to find bugs when I was a software engineer). Another common technique is “generate then prune”: have a separate model/prompt remove all the results of the first one that are only “possibilities”. It seems, from my reading, that this sort of two-stage approach can work because it bypasses LLMs’ typical attempts to “be helpful” by inventing stuff or spouting banal filler rather than just admitting ignorance. A rough sketch of what I mean is below.
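A minimal sketch of “generate then prune” combined with the role-play framing; the llm() helper and both prompts are illustrative stand-ins, not a tested recipe, so swap in whatever model client you actually use:

```python
def llm(prompt: str) -> str:
    # Stand-in for a real model call; replace with your client of choice.
    return f"[model output for: {prompt[:40]}...]"

def find_exploits(system_description: str) -> str:
    # Stage 1: role-play prompt, generating candidate attacks.
    candidates = llm(
        "You are an experienced attacker. Design concrete exploits "
        "against the following system:\n\n" + system_description
    )
    # Stage 2: a separate pruning pass, stripping out anything the first
    # pass offered only as a vague possibility or as filler.
    return llm(
        "Below is a list of candidate exploits. Delete every item that is "
        "speculative or lacks a concrete, checkable mechanism. Return only "
        "what remains:\n\n" + candidates
    )

print(find_exploits("a web app with password login and an admin panel"))
```

The point of the second call is that deleting weak items is a different, easier task than generating only strong ones, so the pruning prompt is less tempted to pad its answer.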
The CCP has no reason to believe that the US is even capable of achieving ASI, let alone that it has an advantage over the CCP. No rational actor will go to war over the possibility of a maybe when the numbers could just as likely be in their favour. E.g. if DeepSeek can almost equal OpenAI with fewer resources, it would be rational to allocate more resources to DeepSeek before doing anything as risky as trying to sabotage OpenAI, which is uncertain to succeed and more likely to invite uncontrollable retaliatory escalation.
The West doesn’t even dare put soldiers on the ground in Ukraine for fear of an escalating Russian response. This renders the whole idea that even the US might preemptively attack a Russian ASI development facility totally unbelievable, and if the US can’t/won’t do that then the whole idea of AI MAD fails, and with it goes everything else mentioned here. Maybe you can bully the really small states, but it lacks all credibility against a large, economically or militarily powerful state. The comparison to nuclear weapons is also silly in the sense that the outcome of nuclear weapons R&D is known to be a nuclear weapon and the time frame is roughly known, whereas the outcome of AI research is unknown and there is no way to identify AI research that crosses whatever line you want to draw, other than human intel.
Whilst the title is true, I don’t think that it adds much since, for most people, the authority of a researcher is probably as good as it gets. Even other researchers are probably not able to reliably tell who is or is not a good strategic thinker, so, for a layperson, there is no realistic alternative to taking the researcher seriously.
(IMHO a good proxy for strategic thinking is the ability to clearly communicate to a lay audience.)
You are arguing that it is tractable to have predictably positive long-term effects using something that is known to be imperfect (heuristic ethics). For that to make sense you would have to justify why small imperfections cannot possibly grow into large problems. It’s like saying that, because you believe there is only a small flaw in your computer security, nobody could ever break in and steal all of your data. This wouldn’t be true even if you knew what the flaw was, and, with heuristic ethics, you don’t even know that.