Wouldn’t the granularity of the action space also impact things? For example, even if a child struggles to pick up some object, you would probably do an even worse job if your action space was picking joint angles, or forces for muscles to apply, or individual timings of action potentials to send to separate nerves.
This is a cool model. I agree that in my experience it works better to study sentence pairs than single words, and that having fewer exact repetitions is better as well. Probably paragraphs would be even better, as long as they’re tailored to be not too difficult to understand (e.g. with a limited number of unknown words/grammatical constructions).
One thing various people recommend for learning languages quickly is to talk with native speakers, and I also notice that this has an extremely large effect. I generally think of it as having to do with more of one’s mental subsystems involved in the interaction, though I only have vague ideas as to the exact mechanics of why this should be so helpful.
Do you think this could somehow fit parsimoniously into your model?
A few others have commented about how MSFT doesn’t necessarily stifle innovation, and a relevant point here is that MSFT is generally pretty good at letting its subsidiaries do their own thing and have their own culture. In particular GitHub (where I work) still uses Google Workspace for docs/email, Slack and Zoom for communication, etc. GH is very much remote-first whereas that’s more of an exception at MSFT, GH has a lot less suffocating bureaucracy, and so on. Over the years since the acquisition this has shifted to some extent, and my team (Copilot) is more exposed to MSFT than most, but we still get to do our own thing and at worst have to jump through some hoops for compute resources. I suspect that if OAI folks come under the MSFT umbrella it’ll be as this sort of subsidiary, with almost complete ability to retain whatever aspects of its previous culture it wants.
Standard disclaimer: my opinions are my own, not my employer’s, etc.
It’d be great if one of the features of these “conversation”-type posts was an LLM-generated summary, or a version that isn’t formatted as a conversation. At least for me this format is super frustrating to read and ends up having a lower signal-to-noise ratio.
You have a post about small nanobots being unlikely, but do you have similar opinions about macroscopic nanoassemblers? Non-microscopic ones could have a vacuum and lower temperatures inside, etc.
Strong upvote for the core point that brains Goodharting themselves is a relatively common failure mode. I honestly didn’t read the second half of the post due to time constraints, but the first half rang true to me. I’ve only experienced something like social media addiction at the start of the Russian invasion last year, since most of my family is still back in Ukraine. I curated a Twitter list of the most “helpful” authors, etc., but eventually it was taking too much time and emotional energy and I stopped, although it was difficult.
I think this is related to a more helpful, less severe version of the same phenomenon. When I get frustrated, it sometimes helps to accomplish some small household to-do like cleaning the table or taking out the trash; that makes me feel more in control/accomplished and gets me back into a reasonable mood in which I can be happier and more productive.
For AIs we can use the above organizational methods in concert with existing AI-specific training methodologies, which we can’t do with humans and human organizations.
It doesn’t seem particularly fair to compare all human organizations to what we might build specifically when trying to make aligned AI. Human organizations have existed in a large variety of forms for a long time, have mostly not been explicitly focused on a broad-based “promotion of human flourishing”, and have had to fit within lots of ad hoc/historically contingent systems (like the split between for-profit and non-profit entities) that significantly influence the structure of newer organizations.
I grew up in Arizona and live here again now. It has had a good system of open enrollment for schools for a long time, meaning that you could enroll your kid into a school in another district if they have space (though you’d need to drive them, at least to a nearby school bus stop). And there are lots of charter schools here, for which district boundaries don’t matter. So I would expect the impact on housing prices to be minimal.
Godzilla strategies now in action: https://simonwillison.net/2022/Sep/12/prompt-injection/#more-ai :)
No super detailed references that touch on exactly what you mention here, but https://transformer-circuits.pub/2021/framework/index.html does deal with some similar concepts with slightly different terminology. I’m sure you’ve seen it, though.
Is the ordering intended to reflect your personal opinions, or the opinions of people around you/society as a whole, or some objective view? Because I’m having a hard time correlating the order to anything in my world model.
This is the trippiest thing I’ve read here in a while: congratulations!
If you’d like to get some more concrete feedback from the community here, I’d recommend phrasing your ideas more precisely by using some common mathematical terminology, e.g. talking about sets, sequences, etc. Working out a small example with numbers (rather than just words) will make things easier to understand for other people as well.
My mental model here is something like the following:
a GPT-type model is trained on a bunch of human-written text, written within many different contexts (real and fictional)
it absorbs enough patterns from the training data to be able to complete a wide variety of prompts in ways that also look human-written, in part by being able to pick up on implications & likely context for said prompts and proceeding to generate text consistent with them
Slightly rewritten, your point above is that:
The training data is all written by authors in Context X. What we want is text written by someone who is from Context Y. Not the text which someone in Context X imagines someone in Context Y would write but the text which someone in Context Y would actually write.
After all, those of us writing in Context X don’t actually know what someone in Context Y would write; that’s why simulating/predicting someone in Context Y is useful in the first place.
If I understand the above correctly, the difference you’re referring to is the difference between:
prompt = “A lesswrong post from a researcher in 2050:”
GPT’s internal interpretation of context = “A fiction story, so better stick to tropes, plot structure, etc. coming from fiction”
GPT’s internal interpretation of context = “A lesswrong post (so factual/researchy, rather than fiction) from 2050 (so better extrapolate current trends, etc. to write about what would be realistic in 2050)”
Similar things could be done re: the “stable, research-friendly environment”.
The internal interpretation is not something we can specify directly, but I believe sufficient prompting would be able to get close enough. Is that the part you disagree with?
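To make that concrete, here’s a minimal sketch (purely illustrative, not a tested prompt) of the kind of explicit framing I have in mind for steering the model toward the “factual research post” interpretation rather than the “fiction” one:

```python
# Illustrative only: spell out the framing we want the model to adopt, so its implied
# context is "factual post extrapolated from current trends" rather than "sci-fi story".
# The exact wording is an assumption on my part, not a known-good prompt.

def build_prompt(year: int) -> str:
    return (
        f"The following is an alignment research post from {year}, written on LessWrong. "
        "It is factual and technical in style, and extrapolates soberly from research "
        "trends that already exist today rather than following sci-fi tropes.\n\n"
        "Title:"
    )

print(build_prompt(2050))  # send this to whatever completion API you're using
```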
Alas, querying counterfactual worlds is fundamentally not a thing one can do simply by prompting GPT.
Citation needed? There’s plenty of fiction to train on, and those works are set in counterfactual worlds. Similarly, historical, mistaken, etc. texts will not be talking about the Current True World. Sure, right now the prompting required is a little janky.
But this should improve with model size, improved prompting approaches or other techniques like creating optimized virtual prompt tokens.
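By “optimized virtual prompt tokens” I mean something like prompt tuning: learnable embedding vectors prepended to the input while the language model itself stays frozen. A rough, untested PyTorch sketch (the names are mine, not from any particular library):

```python
# Rough sketch of prompt tuning ("virtual prompt tokens"), assuming a causal LM that
# accepts input embeddings of shape (batch, seq, dim). Illustrative, not tested.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        # Learnable "virtual tokens", optimized by gradient descent while the LM is frozen.
        self.embeddings = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the soft prompt to the token embeddings of each sequence in the batch.
        batch = input_embeds.shape[0]
        prefix = self.embeddings.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

# Usage sketch: freeze the LM, train only soft_prompt.parameters() on examples of the
# target context, then prepend the learned prefix at inference instead of a hand-written prompt.
```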
And also, if you’re going to be asking the model for something far outside its training distribution like “a post from a researcher in 2050”, why not instead ask for “a post from a researcher who’s been working in a stable, research-friendly environment for 30 years”?
Please consider aggregating these into a sequence, so it’s easier to find the 1/2 post from this one and vice versa.
Sounds similar to what this book claimed about some mental illnesses being memetic in certain ways: https://astralcodexten.substack.com/p/book-review-crazy-like-us
If you do get some good results out of talking with people, I’d recommend talking to people about the topics you’re interested in via some chat system and then going back and extracting the useful/interesting bits that were discussed into a more durable journal. I’d have recommended IRC in the distant past, but nowadays Discord seems to be the modern venue where this kind of conversation can be found. E.g. there’s a SlateStarCodex Discord at https://discord.com/invite/RTKtdut
YMMV and I haven’t personally tried this tactic :)
Well written post that will hopefully stir up some good discussion :)
My impression is that LW/EA people prefer to avoid conflict, and when conflict is necessary they don’t want to use misleading arguments/tactics (with BS regulations seen as such).
I agree; I’ve felt something similar from having kids. I’d also read the relevant Paul Graham bit, and it wasn’t quite as sudden or dramatic for me, but it has had a noticeable long-term effect. I’d previously been okay with kids, though I didn’t especially seek out their company or anything. Now I find it more fun to play with kids, even ones other than my own. No idea how it compares to others, including my parents.
Love this! Do consider citing the fictional source in a spoiler-formatted section (ctrl+f for “spoiler” in https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq)