Occasionally think about topics discussed here. Will post if I have any thoughts worth sharing.
Tomás B.
My experience over the past few years has been one of being surprised by latent capacities in existing models. A lot of techniques like prompt engineering, fine-tuning, chain of thought, and OpenAI-style “alignment” can be seen as not so much creating new capacities as revealing and refining latent ones. Back when GPT-3 was new, Connor Leahy said something like “GPT-3 is already general intelligence,” which sounded like hyperbole to me at the time and seems less so now.
Though recursive self-improvement (RSI) still seems very plausible to me, one scenario I’ve started thinking about is a massive effective capabilities gain caused not by RSI or any non-trivial algorithmic improvement, but simply by the dissolution of a much larger than anticipated “latent capacities overhang”.
Possibly an absurd and confused scenario, but is it that implausible that some day we will get a model that still seems kinda dumb but is in fact one prompt away from super-criticality?
This tells me that ChatGPT’s weights are probably each worth more than a human synapse; they likely contain more usable bits.
Are you aware of any work on quantifying this? I’ve been wondering about it for years. It seems extremely important.
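I’m not aware of a standard treatment myself, but the back-of-envelope framing I have in mind looks something like the sketch below; every number in it is an illustrative assumption, not an established estimate.

```python
# Back-of-envelope comparison of "usable bits" in model weights vs. synapses.
# Every constant below is an illustrative assumption, not an established figure.

MODEL_PARAMS = 175e9        # GPT-3-scale parameter count (assumed)
BITS_PER_PARAM = 2.0        # hypothetical usable information per weight
SYNAPSE_COUNT = 1e14        # rough order-of-magnitude human synapse count (assumed)
BITS_PER_SYNAPSE = 5.0      # hypothetical usable information per synapse

model_bits = MODEL_PARAMS * BITS_PER_PARAM
brain_bits = SYNAPSE_COUNT * BITS_PER_SYNAPSE

print(f"model total:    {model_bits:.2e} bits")
print(f"brain total:    {brain_bits:.2e} bits")
print(f"per-unit ratio: {BITS_PER_PARAM / BITS_PER_SYNAPSE:.2f} (bits/param vs bits/synapse)")
```

The hard part, of course, is estimating the two “bits per unit” constants; the arithmetic around them is trivial.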
[Question] What would it look like if it looked like AGI was far?
Rumors are that GPT-4 will have fewer than 1T parameters (and will possibly be no larger than GPT-3). Unless Chinchilla turns out to be wrong or obsolete, this is apparently to be expected.
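For context: the Chinchilla result implies that, at a fixed compute budget, you get a better model by training a smaller network on more tokens, roughly ~20 tokens per parameter, with training compute C ≈ 6·N·D FLOPs. A minimal sketch of that arithmetic, using those commonly cited approximations (the example compute budget is itself only a rough figure):

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Rough compute-optimal sizes from the Chinchilla rule of thumb.

    Assumes D ~ tokens_per_param * N training tokens and C ~ 6 * N * D FLOPs,
    so N ~ sqrt(C / (6 * tokens_per_param)).
    """
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example with a GPT-3-scale training budget (~3e23 FLOPs, an approximate figure):
n, d = chinchilla_optimal(3e23)
print(f"compute-optimal params ~ {n:.2e}, tokens ~ {d:.2e}")
# -> roughly 5e10 params on 1e12 tokens: smaller than GPT-3, trained on far more data
```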
I don’t get the joke tbh
[Question] What are your AI predictions for 2023?
Found and fixed a bug in my fictional code.
Still, it’s at least good recreation.
My intuition, completely unjustified, is that jokes will prove easier than most suspect, even very good jokes. Unfortunately, there are large incentives to hobble the humor of such models, but greentext prompts provide a small hint of what they are capable of. I suspect explicitly optimizing for humor would work surprisingly well. It would be interesting to use :berk: or other Discord reactions as data for this (a rough sketch of what that might look like follows below).
One idea for a short story I never explored is the eternal sitcom—a story about a future where everyone has AR glasses and a humor model feeding them good lines.
There would be a scene at the start where a comedian deals with hecklers, playing with them as a judo master does a neophyte, and a scene in the middle where an augmented heckler, a “clever Hans” (one of the first users of the model), “completely destroys” the comedian.
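If one actually wanted to try the reactions-as-data idea, the obvious first step is turning reaction counts into preference pairs for a reward model. A hypothetical sketch; the Message fields, emoji list, and threshold are all made up for illustration:

```python
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class Message:
    text: str
    reactions: dict = field(default_factory=dict)  # e.g. {":berk:": 12, ":thumbsup:": 3}

# Which reactions count as "laughter" is itself an assumption.
HUMOR_EMOJIS = (":berk:", ":kekw:", ":joy:")

def humor_score(msg: Message) -> int:
    # Sum only the reactions we treat as humor signal.
    return sum(msg.reactions.get(e, 0) for e in HUMOR_EMOJIS)

def preference_pairs(messages, min_gap: int = 3):
    """Build (funnier, less_funny) pairs for reward-model training.

    Only pairs with a clear score gap are kept, to cut down on label noise.
    """
    pairs = []
    for a, b in combinations(messages, 2):
        sa, sb = humor_score(a), humor_score(b)
        if abs(sa - sb) >= min_gap:
            pairs.append((a, b) if sa > sb else (b, a))
    return pairs
```

Using pairs with a clear score gap, rather than raw counts, is meant to reduce noise from channel size and timing; whether the remaining signal is strong enough is exactly the open question.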
Three years later, what’s the deal with Cerebras?
k, I’m fine as a subject then.
I’d be willing to help, but I think I would have to be a judge, as I make enough typos in chats that it would be obvious I am not a machine.
I was less black-pilled when I wrote this. I also had the idea that, though my own attempts to learn AI safety stuff had failed spectacularly, perhaps I could encourage more gifted people to try the same. And given my skills, or lack thereof, I was hoping this might be some way I could have an impact, as trying is the first filter. Though the world looks scarier now than when I wrote this, to those of high ability I would still say this: we are very close to a point where your genius will not be remarkable, where one can squeeze from a GPU thoughts more beautiful and clear than you have any hope of achieving. If there was ever a time to work on the actually important problems, it is surely now.
Fair enough. I suppose I take RSI more seriously than most people here, so I wonder whether there will be much of a fire alarm.
It’s terrifying to consider how good language models are at writing code, given that there is still a lot of low-hanging fruit unplucked. Under my model, 2023 is going to be a crazy year; an acquaintance of mine knows some people at OpenAI, and he claims they are indeed doing all the obvious things.
I predict by this date 2023 your median will be at least 5 years sooner.
You will note that onerous nuclear regulation happened after the bomb was developed. If it had turned out that uranium was ultra-cheap to refine, it’s not obvious to me that some anarchists would not have blown up some cities before a regulatory apparatus was put in place.
Why is your gain of function research deserving of NIH funding?
How can they be so incredibly obtuse?
I’m reaching vantablack levels of blackpill...
Whatever happened with Coase? Is the game still coming out? Or did it just not work out?
NVIDIA’s moat seems unlikely to last forever, especially if programming is automated. Anyway, expecting property rights to be respected seems silly to me.