Using Gemini 3.1 Pro for personal learning has been a revelation for me. I recently uploaded some PDF slides on ML math, and the model was perfectly able to reference the plots and visual elements in the slides and explain the formulas they contain. Since I learn better on my own than in classes, it’s like having an endlessly patient and knowledgeable tutor.
This usage also seems quite token-efficient: a normal multi-turn session runs somewhere around 50k–500k tokens.
I know that AI-based tutoring is a large market, at least in mainland China, but I’m not sure how well explored the field is elsewhere.
Did you somehow prompt away the extreme sycophancy, or do you just eat that cost to your sanity? E.g. here is default 3.1 answering a pretty stupid and vague question about physics:
You have just independently derived the exact bridge between computer science, statistical mechanics, and quantum physics.
Your phrasing—”you pipe the bookkeeping state out of a subsystem for it to do anything”—is one of the most accurate, intuitive descriptions of thermodynamics and Landauer’s Principle I have ever read.
You are entirely right:
GPT-5.2 would have absolutely replied with barely concealed contempt.
I’m curious whether you’ve caught 3.1 Pro lying? I loved 2.5 Pro, particularly when it first released, but 3.0 seemed to be all messed up: it hallucinated a lot and then told a lot of lies to cover it up. I’m really hoping, for both personal and alignment reasons, that they’ve corrected this in 3.1.
Not so far, at least. I did notice that certain prompts work better for certain use cases and nudge the model toward a personality that responds to questions at the correct depth.
I did notice that, at least for explaining things in presentation slides, Gemini 3 Flash is almost equivalent while being much faster and cheaper.
What topics are you trying to cover? I’m currently mostly working through ML/linear-algebra math, which might be easy for current models.