Thank you! I wholeheartedly agree, to be honest. I've added a footnote to the claim, linking to and quoting your comment. Are you comfortable with this?
Oooh gotcha. In that case, we are not remotely any good at avoiding the creation of unaligned humans either! ;)
Could you paraphrase? I’m not sure I follow your reasoning… Humans cooperate sufficiently to generate collective intelligence, and they cooperate sufficiently due to a range of alignment mechanics between humans, no?
United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress
March—Social Meetup
Short Notes on Research Process
February Online Meetup
Should we have a "rewrite the Rationalist Basics Discourse" contest?
Not that I think anything is gonna beat this. But still :D
PS: This can be content and/or style.
Thank you! I appreciate the in-depth comment.
Do you think any of these groups hold that all of the alignment problem can be solved without advancing capabilities?
Thanks!
And I appreciate the correction—I admit I was confused about this, and may not have done enough of a deep-dive to untangle this properly. Originally I wanted to say “empiricists versus theorists” but I’m not sure where I got the term “theorist” from either.
Thanks!
And for both examples, how are you conceptualizing a "new idea"? Because I suspect we don't have the same model of what an idea is.
Reflections on Deception & Generality in Scalable Oversight (Another OpenAI Alignment Review)
A Simple Alignment Typology
Two things that worked for me:
- Produce stuff, a lot of stuff, and make it findable online. This makes it possible for people to see your potential and reach out to you.
- Send an email to anyone you admire asking if they are interested in going for a coffee (if you have the funds to fly out to them) or doing a video call. Explain why you admire them and why this would be high value to you. I did this for four people, without filtering on how likely they were to answer, and one of them said "yeah sure". I think the emails made them happy too, because a reasonable subset of people like learning how they have touched others' lives in a positive way.
> Even in experiments, I think most of the value is usually from observing lots of stuff, more than from carefully controlling things.
I think I mostly agree with you, but I'd categorize "observing lots of stuff" as "exploratory studies": badly controlled affairs where you just try to collect more observations to inform your actual eventual experiment. If you want to pin down a fact about reality, you'd still need to devise a well-controlled experiment that actually shows the effect you hypothesize to exist based on your observations so far.
> If you actually go look at how science is practiced, i.e., the things successful researchers actually pick up during their PhDs, there are multiple load-bearing pieces besides just that.
Fair!
> Note that a much simpler first-pass on all these is just "spend a lot more time reading others' work, and writing up and distilling our own".
I agree, but if people were both individually good at finding the info they need and better supported by tools for coordinating (e.g., finding each other and relevant material faster), then that would speed up research even further. And I'd argue that any gain in research speed is as valuable as the same proportional delay in developing AGI.
There is an EU Telegram group where they are, among other things, collecting data on where people are located in Europe. I'll DM you an invite.
That makes a lot of sense! And I was indeed also thinking of Elicit.
Optimizing Human Collective Intelligence to Align AI
Note: The meetup this month is Wednesday, Jan 4th, at 15:00. I'm currently in Berkeley, and I couldn't see how the times were displayed for you because I have no option to change time zones on LW. I apologize if this has been confusing! I'll get a local person to verify the dates and times next time (or even set them).
Well damn… Well spotted.
I found the full-text version and will dig into this next week to see what's up exactly.