Hi y’all.
Recently I’ve become very interested in open research. A friend of mine gave me the tip to check out lesswrong.
I found that lesswrong has been interested in trying to support collaborative open research (one, two, three) for a few years at least. That was the original idea behind lesswrong.com/questions. Recently Ruby explained some of their problems getting this sort of thing going with the previous approach and sketched a feature he’s calling “Research Agendas.” I think something like his Research Agendas seems quite useful.
So that’s what brought me here. But I’ve had a lot of fun reading through old top rated posts.
I just made my first post about a question-centered wiki I’ve been working on. I guess it’s a sort of self-promotion, so I hope that’s ok. I felt it’s the sort of thing that people here may be interested in. I’m also very interested to hear critiques of the argument I put forward in that post.
weathersystems
A Wiki for Questions
[Question] What question would you like to collaborate on?
Which personalities do we find intolerable?
What do you think about the vulnerable world hypothesis? Bostrom defines the vulnerable world hypothesis as:
If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.
(There’s a good collection of links about the VWH on the EA forum). And he defines “semi-anarchic default condition” as having 3 features:
1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population.
2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved.
3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves.
To me, the idea that we’re in a vulnerable world is the strongest challenge to the value of technological progress. If we are in a vulnerable world, the time we have left before civilizational devastation is partly determined by our rate of “progress.”
Bostrom doesn’t give us his probability estimate that the hypothesis is true. But to me it seems quite likely that at some point we’ll invent the technology that will screw us over (if we haven’t already). AI and engineered pandemics are the scariest potential examples for me.
Do you disagree with me about the probability of us being in a vulnerable world? Do you think we can somehow avoid discovering the civilization-destroying tech while only finding the beneficial stuff? Or do you think we are in a vulnerable world, but that we can exit the “semi-anarchic default condition”? Bostrom’s suggestions for exiting the semi-anarchic default condition (like having complete surveillance combined with a police state) seem quite terrifying.
If you’ve written or spoken about this somewhere else, feel free to just point me there.
I’m a bit confused. What’s the difference between “knowing everything that the best go bot knows” and “being able to play an even game against a go bot”? I think they’re basically the same. It seems to me that you can’t know everything the go bot knows without being able to beat any professional go player.
Or am I missing something?
Maybe a dumb question. What’s an EM researcher? Google search didn’t do me any good.
Functional Trade-offs
Sure. But the question is can you know everything it knows and not be as good as it? That is, does understanding the go bot in your sense imply that you could play an even game against it?
Why would self-awareness be an indication of sentience?
By sentience, do you mean having subjective experience? (That’s how I read you)
I just don’t see any necessary connection at all between self-awareness and subjective experience. Sometimes they go together, but I see no reason why they couldn’t come apart.
https://en.wikipedia.org/wiki/Berkson%27s_paradox
I also liked this numberphile video about it: Link
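A quick simulation makes Berkson’s paradox easy to see. This is just my own illustrative sketch (not from the linked article or video); the two traits, the threshold, and the sample size are all arbitrary choices. The point is that two independent quantities can look negatively correlated once you only observe cases selected on their sum.

```python
import random

random.seed(0)

# Two independent "scores" (say, talent and looks), uniform on [0, 1].
population = [(random.random(), random.random()) for _ in range(100_000)]

# Selection effect: we only "observe" pairs whose scores sum above a threshold.
selected = [(x, y) for x, y in population if x + y > 1.2]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(correlation(population))  # near 0: the traits really are independent
print(correlation(selected))    # clearly negative: selection induces the correlation
```

Conditioning on the sum carves off a triangle of the unit square, and within that triangle a high value of one trait makes a high value of the other less necessary to clear the bar, which is exactly the paradox.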
Thanks for writing this. As someone who went through something very similar, I largely agree with what you wrote here.
To make the “accept the panic” bit a bit more concrete: following someone’s advice, when I’d start to panic, I’d sit down and imagine I was strapped to the chair. I’d imagine my feelings were a giant wave washing over me, but that I couldn’t avoid them, because I was strapped to the chair. The wave wouldn’t kill me though, just feel uncomfortable. I’d repeat that in my head: “This is uncomfortable but not dangerous. This is uncomfortable but not dangerous...” Turns out that if you don’t try to avoid the bad feelings, they don’t last as long. My understanding is that by just sitting and taking it without flinching, you’re teaching your brain that panic is not something to be feared, which reduces the attacks’ intensity and frequency.
Before doing that I felt terrible for about an hour. With that technique it was reduced to about 15 minutes, then I quickly (in a week or two) stopped having panic attacks.
I’m not sure I understand how “Three, distract yourself.” fits with accepting panic though. I know for me, distracting myself was a way of not accepting. Of trying not to feel bad.
Thanks for writing up your thoughts here. I hope you won’t mind a little push-back.
There’s a premise underlying much of your thought that I don’t think is true: “But as the world of Social Studies consists of the interactions of persons, places, and things, they are subject to the Laws of Physics, and so the tenants of Physics must apply.”
I don’t really see how the laws of physics apply to social interactions. To me it sounds like you’re mixing up different levels of description without any reason.
Yes, at bottom we’re all made up of physical stuff that physics describes. But that doesn’t mean the laws of physics are particularly useful when trying to explain human-scale phenomena like why people get hungry, or angry, or why people have a hard time coordinating, or (more to your point) why people sometimes believe the wrong things. The fields of psychology, evolutionary biology, and sociology, among others, seem like they’d be more relevant than physics. The different fields of knowledge exist for a good reason.
[Question] What questions should we ask ourselves when trying to improve something?
Do you have anything else you remember about the statement? Where you heard it, when you heard it etc.
I’m not so sure I get your meaning. Is your knowledge of the taste of salt based on communication?
Usually people make precisely the opposite claim. That no amount of communication can teach you what something subjectively feels like if you haven’t had the experience yourself.
I do find it difficult to describe “subjective experience” to people who don’t quickly get the idea. This is better than anything I could write: https://plato.stanford.edu/entries/qualia/.
Gary Musk decided
Ah. Ya that makes sense. It sounds like it’s not so much about what to do in the moment of panic as what to focus on throughout your day-to-day life. Let yourself be interested in and pay attention to things other than the fact that you feel bad all the time. Don’t let your pain be your main/only focus.
StackExchange only flags duplicates, that’s true, but the reason is so that search is more efficient, not less. The duplicate serves as a signpost pointing to the canonical question.
Ya I get that. But why keep all the answers and stuff from the duplicates? My idea with the question wiki was to keep the duplicate question page (because maybe it’s worded a bit differently and would show up differently in searches), have a pointer to the canonical question, and remove the rest of the content on that page, combining it with the canonical question page.

Also, StackExchange does indeed allow edits to answers by people other than the original poster. Those with less than a certain amount of reputation can only propose an edit and someone else has to approve it, and those who have a higher level of reputation can edit any answer and have the edit immediately go into effect.
Huh. That’s new to me. Thanks for the info. That may affect my view on the need for the question wiki. I’ll have to think about it. Maybe I gotta take a closer look at stackexchange.
The quotes above are not the complete conversation. In the section of the discussion about AGI, Blake says:
I don’t think he’s making the mistake you’re pointing to. Looks like he’s willing to allow for AI with at least as much generality as humans.
And he doesn’t seem too committed to one definition of generality. Instead he talks about different types/levels of generality.