I kind of feel like the moment is not ideal for a huge development project run as a collaboration between Finland, Russia, the US and Canada.
Genuine question: how much of your opinion on college and higher education is due to the American system being insane?
Because for example in France university/college is mostly free; nobody gets into debt to pay for tuition. How much would that change your opinion?
Concerning MAID, if past trends are to be believed, the most terrifying thing is that these numbers will only get worse.
To be honest I’m just as afraid of aligned AGI as of unaligned AGI. An AGI aligned with the values of the PRC seems like a nightmare. If it’s aligned with the US army it’s only really bad, and Yudkowsky’s dath ilan is not exactly the world I want to live in either...
1,000 mg is the standard dose in France, with 500 mg used almost exclusively for children.
Sadly, both my time and my capacity are limited to “trying some prompts to get a feeling for what the results look like.” I may do more if the results are actually interesting.
One of the first tasks I tested was actually to write essays in English from a prompt in French, which it did very well; I would say better than when asked for an essay in French. I have not looked at the reverse task though (a prompt in English asking for an essay in French).
I’ll probably translate the prompts through DeepL with a bit of supervision and analyse the results using a thoroughly scientific “my gut feeling” with maybe some added “my mother’s expertise”.
I’d like to make a fairly systematic comparison of OpenAI’s chatbot’s performance in French and English. After a couple of days trying things out, I feel like it is much weaker in French, which seems logical as it has much less training data in French. I would like to explore that theory, so if you have interesting prompts you would like me to test, let me know!
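Here is a minimal sketch of what such a paired comparison could look like, assuming the openai Python SDK (v1+) with an API key set; the model name and the prompt pairs below are placeholders rather than the prompts actually used, and the grading would still be done by hand.

```python
# Minimal sketch of a paired French/English comparison (placeholders only).
# Assumes the `openai` Python SDK (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Same task phrased in both languages; these pairs are illustrative, not the real test set.
PROMPT_PAIRS = [
    ("Write a short essay on the causes of the French Revolution.",
     "Écris une courte dissertation sur les causes de la Révolution française."),
    ("Summarise the plot of Les Misérables in five sentences.",
     "Résume l'intrigue des Misérables en cinq phrases."),
]

def ask(prompt: str) -> str:
    """Send one prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for en_prompt, fr_prompt in PROMPT_PAIRS:
    print("=== EN ===\n" + ask(en_prompt))
    print("=== FR ===\n" + ask(fr_prompt))
    # Grading stays manual (gut feeling); this only collects the paired outputs.
```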
Dirichlet-to-Neumann’s Shortform
Training teachers is probably the main material cost (it was a big problem for computer science in France), but the main social obstacle is the opposition to change from basically everyone: parents don’t want their children to learn different things than they did, teachers don’t want to lose curriculum hours to make room for new subjects, and administrators don’t want to risk trying anything new.
Epistemic status: n=1.
I very much enjoyed my school years. I learned a lot about subjects that turned out to be actually useful for me, like maths and English, and about subjects that were simply enjoyable to me (basically everything else). I would definitely have learned much less without the light coercion of the school system, and would have been overall less happy. (In later years at college level, where I was very much my own master, I learned less and was less happy; in my three years of “classe prépa”, the most intensive years of my studies, I learned the most and was overall happier.) In particular I would not have learned as much in STEM fields and definitely would not have become a mathematician had I been homeschooled or not schooled at all.
Now obviously this is n=1, but beware of the typical mind fallacy. One-size-fits-all school means it is enjoyable for some and soul-sucking for others; one-size-fits-all no-school would be exactly the same.
I tried to make it play chess by asking for specific moves in opening theory. I chose a fairly rare line I’m particularly fond of (which in hindsight was a bad choice; I should have stuck with the Najdorf). It could identify the line but not give any theoretical move, and reverted to nonsense almost right away.
Interestingly, it could not give heuristic commentary either (“what are the typical plans for Black in the Bronstein-Larsen variation of the Caro-Kann defense”).
But I easily got it to play a game by… just asking “let’s play a chess game”. It could not play good or even coherent moves though. [Edit: I tried again. Weirdly, it refused to play the first time but agreed after I cleared the chat and asked again (with the same prompt!)]
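One way to check whether the moves it proposes are even legal, rather than judging them by eye, would be to replay them with the python-chess library. A rough sketch is below; the move list is made up for illustration (the first moves of the Bronstein-Larsen Caro-Kann) and stands in for whatever the chatbot actually outputs.

```python
# Rough sketch: replay moves proposed by the chatbot to check they are at least legal.
# Assumes the python-chess package; the move list is illustrative, not real model output.
import chess

proposed_moves = ["e4", "c6", "d4", "d5", "Nc3", "dxe4",
                  "Nxe4", "Nf6", "Nxf6+", "gxf6"]  # Bronstein-Larsen Caro-Kann

board = chess.Board()
for san in proposed_moves:
    try:
        board.push_san(san)  # raises a ValueError subclass on illegal/ambiguous moves
    except ValueError:
        print(f"Illegal or unparsable move: {san} in position {board.fen()}")
        break
else:
    print("All proposed moves were legal.")
```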
My reaction has nothing to do with “allowing AI to deceive” and everything to do with “this is a striking example of AI reaching better-than-average human level at a game that integrates many different core capacities of general intelligence, such as natural language, cooperation, bargaining, planning, etc.”
Or to put it another way: for the layperson it is easy to think of GPT-3 or DeepL or DALL-E as tools, but Cicero will feel more agentic to them.
While I don’t think anyone aware of AI alignment issues should really update a lot because of Cicero, I’ve found this particular piece of news to be quite effective at making unaware people update toward “AI is scary”.
Musk’s proposal gives Ukraine no significant security guarantees AND forces it to lose territory. It’s basically a total win for Russia, and an excellent incentive to try again in 10 years (or maybe against the Baltic states or Georgia).
I downvoted because it’s not a particularly interesting critique of religion (contrary to, say, Eliezer’s https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable, which is really solid). The paragraph on free will is weak because it fails to engage with what the other side is saying (let alone refute a steelman version).
Besides, making a big post on LessWrong about how religion is silly is just preaching to the choir, in the same way that if you really want to make a post on why AI is dangerous, you shouldn’t do it unless you have something new to bring to the debate.
It’s certainly interesting, although to be honest I’m pretty confident the top human Stratego players are nowhere near the top achievable level for a human player (in contrast with games like chess or StarCraft).
I’m pretty sure no ballistic weapon until well into the 20th century was a significant threat to a balloon flying at more than a few hundred meters above ground.
Military uses of a hot-air balloon are more “create a small mountain to climb up” and less “satellite imagery before satellites”. They did not revolutionise warfare at the tactical or operational level, and even less at the strategic level.
To be fair, changing the 5th postulate requires some creative redefining of what a straight line is. In my experience, when explaining non-Euclidean geometries to muggles, the hardest part is not the 5th postulate or its consequences but getting people to accept that on a sphere a straight line is really a great circle (the easier way being through the concept of a geodesic line, but this was invented after non-Euclidean geometries, if I’m not mistaken).
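(For reference, the computation behind that claim is the standard Girard theorem of spherical geometry, nothing specific to this thread: once “straight lines” are great circles, every triangle has an angle sum above π, so the fifth postulate cannot survive unchanged.)

```latex
% Girard's theorem: on a sphere of radius R, a triangle bounded by arcs of
% great circles with angles \alpha, \beta, \gamma satisfies
% Area = R^2(\alpha + \beta + \gamma - \pi), hence
\[
  \alpha + \beta + \gamma = \pi + \frac{\mathrm{Area}}{R^{2}} > \pi .
\]
% Example: the triangle with vertices at the north pole and at two equatorial
% points a quarter-circle apart has three right angles and covers one octant:
\[
  \alpha = \beta = \gamma = \frac{\pi}{2},
  \qquad
  \mathrm{Area} = \frac{4\pi R^{2}}{8} = R^{2}\left(\frac{3\pi}{2} - \pi\right).
\]
```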
I would find the shiny look rather annoying; wouldn’t a matte finish be better?