“A Rube Goldberg machine made out of candy, Sigma 85mm f/1.4 high quality photograph”
Thanks for clarifying!
Maybe the ‘actions → nats’ mapping can be sharpened if it’s not an AI but a very naive search process?
Say the controller can sample k outcomes at random before choosing one to actually achieve. I think that lets it get ~ln(k) extra nats of surprise, right? Then you can talk about the AI’s ability to control things in terms of ‘the number of random samples you’d need to draw to achieve this much improvement’.
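A quick simulation sketch of that claim (my own toy model, not anything from the thread): if each sampled outcome’s surprise in nats is −ln(U) for U uniform, i.e. Exponential(1), then the expected best-of-k surprise is the harmonic number H_k ≈ ln(k) + 0.577, so the gain over a single sample is roughly ln(k). The helper name `mean_best_of_k` is just for illustration.

```python
import math
import random

def mean_best_of_k(k, trials=100_000):
    """Average of the max surprise over k random outcomes.

    Surprise of one uniformly random outcome is modeled as
    -ln(U) with U ~ Uniform(0, 1), i.e. an Exponential(1) draw.
    """
    total = 0.0
    for _ in range(trials):
        total += max(-math.log(random.random()) for _ in range(k))
    return total / trials

# Exact mean of the max of k Exp(1) draws is H_k = 1 + 1/2 + ... + 1/k,
# which is ln(k) + 0.577... for large k, so the extra nats over the
# k = 1 baseline (mean 1) grow like ln(k).
for k in (1, 10, 100):
    print(k, round(mean_best_of_k(k), 3), round(math.log(k), 3))
```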
I’ve been experimenting with some style prompts suggested on Twitter, so have “A complex Rube Goldberg machine, Sigma 85mm f/1.4 high quality photograph”
“Cute White Cat Plushie On A Bed, 4K resolution, amateur photography”
Slightly modified because ‘shooting’ is a banned keyword: “A cartoon honey badger wearing a Brazilian Jiu Jitsu GI with a black belt, jumping in for a wrestling takedown”
“Aliens are conducting experiments on human subjects, as a screenshot from the movie Prometheus” came out weirdly video-game-esque?
“Aliens are conducting experiments on human subjects, as a medieval painting”
And this didn’t come out all that medieval-style, so I tried again with “Aliens are conducting experiments on human subjects, as a medieval illuminated manuscript”
“Aliens are conducting experiments on human subjects, as a screenshot from South Park”
“A 3D rendering of the number 5”
“Number 8”. Huh I think these are almost all street numbers on houses/buildings?
The subtlety I really want to point out here is that the choice is not necessarily “make a precise forecast” or “not make any forecast at all”. Notably, the precise forecasts that you generally can write down or put on a website are limited to distributions that you can compute decently well and that have well-defined properties. If you arrive at a distribution that is particularly hard to compute, it can still tell you qualitative things (the kind of predictions Eliezer actually makes) without you being able to honestly extract a precise prediction.
In such a situation, making a precise prediction is the same as taking one element of a set of solutions for an equation and labelling it “the” solution.
(If you want to read more about Eliezer’s model, I recommend this paper)
“The letters X Y and Z” ok it’s starting to get confused here… (My prediction is that it’ll manage the number 8 and number 5 in the next prompts, but if I try a 3-digit number it might flail.)
Let’s see!
“The letter A”
“A little forest gnome leafing through his magic book—beautiful and detailed illustration”
“A piggo-saurus—an illustration of a pig-like dinosaur”
“A piggo-saurus—a pig-like dinosaur—hyper realistic art”
“A wild boar and an angel walking side by side along the beach—beautiful hyperrealistic art”
Thanks for the answer!
That is an interesting perspective to consider: the trade-off that you could be reducing the amount of time people spend learning even if each hour is more effective! A quick back-of-the-napkin calculation says that even if SRS drastically reduces the amount you read, it’s still worthwhile, as long as the cut in reading time is smaller than the retention gain from beating the forgetting curve!
Say you normally read 10 hours/week, then you start using SRS and it drops to 5 hours/week, but you remember 10x as much of what you read. That ends up being the equivalent of reading 50 hours/week.

I would say that it depends on what you want out of your reading. Most of the time I’m reading to extend my breadth, so partial memories are completely fine and covering more ground matters more. It would be different if I were studying a new maths subfield in detail, for example.
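The napkin math above, written out (all numbers are the hypothetical ones from the comment, including the assumed 10x retention multiplier):

```python
# Hypothetical numbers: 10 h/week of plain reading vs. 5 h/week with SRS,
# assuming SRS makes you retain 10x as much per hour read.
hours_baseline = 10        # hours read per week without SRS
hours_with_srs = 5         # hours read per week with SRS
retention_multiplier = 10  # assumed retention boost from SRS

# Retained material, in "baseline reading hour" equivalents.
effective_hours = hours_with_srs * retention_multiplier
print(effective_hours)  # equivalent of 50 baseline hours/week retained
```

So the time cut would have to be bigger than the retention gain (here, reading less than 1 hour/week) before SRS became a net loss under these assumptions.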
(typo: frought → fraught)
“A group of happy people does Circling and Authentic Relating in a park”