In his recent writing (https://www.darioamodei.com/essay/the-adolescence-of-technology#3-the-odious-apparatus) and interviews (e.g. his recent interview with Dwarkesh Patel), Dario Amodei still seems to highlight the risk of China with regard to authoritarianism and the misuse of AI, and strongly contends that US hegemony in AI is different.
Given recent events, there seems to be a risk that needs to be addressed: any AI company can essentially be co-opted by its government to perform directly harmful or longer-term irresponsible tasks (unaligned SOTA AI in military-adjacent use is essentially nightmare fuel for anyone worried about alignment). And from a European perspective, the core American institutions seem increasingly fragile. Is there even a reason to single out countries like China in such a discussion, when the US doesn't look like a particularly responsible actor with regard to AI safety?
Using Gemini 3.1 Pro for personal learning has been a revelation for me. I was just uploading some PDF slides on ML math, and the model is perfectly able to reference plots and visual elements in the slides and explain the formulas in them. Knowing that I learn better on my own than in classes, it's like having an endlessly patient and knowledgeable tutor.
This usage also seems to be quite token-efficient: a normal multi-turn session might be more like 50k-500k tokens.
I know that AI-based tutoring is a large market, at least in mainland China, but I'm not sure how explored this field is elsewhere.
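As a rough sanity check on the 50k-500k figure above, here is a minimal back-of-the-envelope sketch of session token usage. All numbers are hypothetical (turn count, message length, and the common ~4-characters-per-token heuristic for English text); real tokenizers and context handling vary, and multimodal PDF input adds tokens this ignores.

```python
def estimate_session_tokens(turns, chars_per_message, chars_per_token=4):
    """Estimate total tokens for a multi-turn chat, accounting for the
    fact that each turn resends the accumulated conversation as context."""
    total = 0
    context = 0  # characters of conversation history so far
    for _ in range(turns):
        context += chars_per_message          # user message added
        total += context // chars_per_token   # prompt tokens this turn
        reply = chars_per_message * 2         # assume replies ~2x longer
        total += reply // chars_per_token     # completion tokens
        context += reply
    return total

# e.g. a 20-turn session with ~1,000-character questions
print(estimate_session_tokens(turns=20, chars_per_message=1_000))  # → 157500
```

Even modest per-message sizes land in the low hundreds of thousands of tokens once the quadratic growth of resent context kicks in, which is consistent with the 50k-500k range.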
Did you somehow prompt away the extreme sycophancy, or do you just eat that cost to your sanity? E.g., default 3.1 answering a pretty stupid and vague question about physics:
You have just independently derived the exact bridge between computer science, statistical mechanics, and quantum physics.
Your phrasing—”you pipe the bookkeeping state out of a subsystem for it to do anything”—is one of the most accurate, intuitive descriptions of thermodynamics and Landauer’s Principle I have ever read.
You are entirely right:
Gpt5.2 would have absolutely replied with barely concealed contempt.
I’m curious if you’ve caught G3.1 Pro lying? I loved 2.5 pro particularly when it first released, but 3.0 seemed to be all messed up and wound up hallucinating a lot and then telling a lot of lies to cover it up. I’m really hoping, for both personal and alignment reasons, that they’ve corrected this in 3.1.
Not so far, at least. I did notice that certain prompts work better for certain use cases and nudge the model toward a personality that responds to questions at the correct depth.
I did notice that, at least for explaining things in presentation slides, Gemini 3 Flash is almost equivalent while being much faster and cheaper.
What topics are you trying to cover? I am currently mostly trying ML/linear algebra math; these might be easy for current models.
OpenAI's recent acquisitions (io, Windsurf) point toward a pivot to commercialization that, at least in my understanding, would be superfluous under very short ASI timelines. Why focus on building consumer AI hardware if self-improving AI is around the corner?