These kinds of explorations (unusual and truth-seeking) are why I love LessWrong :)
Aay17ush
This is a great idea! I’m gonna try it out. It fixes quite a lot of things with existing systems, as you point out.
I’m curious though: how long have you been experimenting with it, and how has it gone? I’m assuming it went well, but I’m interested to know more about the details of your process (setbacks, changes, etc.) and expect it’ll be helpful for others experimenting with this as well :)
Disclosure: I am new to AI Alignment, and have picked this as my first reading to dive into.
However, most possibilities for such crucial features, including this one, could be recreated in artificial training environments and in artificial neural networks.
I don’t understand how you arrive at this conclusion. Is there some paper/reasoning you could point me to that backs this? Would be helpful.
Also, is this analogous to saying “We can simulate environments that can give rise to features such as general intelligence”? (Making sure I’m not misinterpreting.)
The way you use “intelligence” is different from what many people here mean by that word.
Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
I’ve found the post “Reward is not the optimization target” quite confusing. This post cleared the concept up for me. Especially the selection framing and example. Thank you!
Good luck! :)
I assume EA student groups have a decent amount of rationalists in them (30%?), so the two categories are not as easily separable. And thus it’s not as bad as it sounds for rationalists.
My biggest reasoning for not babbling is imposter syndrome. So there’s no better exercise than this to start babbling :)
Read a book on imposter syndrome.
Meditate
Talk to someone
Cut yourself some slack
Read about babble!
Ignore it and publish the result anyway
Look at your past achievements
Do a poll on twitter asking how many people get imposter syndrome
Sleep
Go do something you know you’re amazing at
Write about your feelings—writing therapy
Enjoy it until you have it.
Get a coloring book and color inside the lines. That’s hard!
Cook something delicious
Listen to some motivational/self-help speaker for some short-term boost
Go for a walk
Do some intense workout
Laugh at yourself
Take some time off and have some fun
Take a crazy cold shower or better yet, an ice bath
Watch Batman take on the Justice League
Help someone less fortunate than you
Dance
Take it out on a punching bag
Do some kindness meditation
Maintain a streak of how many times you overcome imposter syndrome
Break it down to identify the underlying reasons, and solve them one by one.
Join the army.
Do something you think you can’t do.
Go for a therapy session
Get out of your room and surround yourself with nature
Watch an uplifting movie
Have sex
Go to a coffee place and chill out
Go for a hike
Pick something else, and come back to your current activity later.
Pray to god
Talk to yourself and increase your self-confidence
Ask someone to take a look at your paper—you’ll probably hear that it’s not that bad.
Hangout with someone
Sit by a river/lake/sea
Play with some animals (puppies?)
Talk to someone who you know is an imposter
Act like a real imposter and fake something. You’ll realize you weren’t being an imposter earlier.
Read psychology
Buy a block of cheese and slowly enjoy it to its fullest
Do a r/roastme
Sing your favourite songs
Go to a language club of your native language—feel like a king.
Don’t do anything. Sit there and notice when that feeling passes away.
This is lovely! I have a couple of questions (will post them in the AMA as well if this is not a good place to ask):
- What is the reasoning behind non-disclosure by default? It seems opposite to what EleutherAI does.
- Will you be approachable for incubating less experienced people (for example, student interns), or do you not want to take that overhead right now?
Love this initiative! I do have a question though. It seems that people with 100+ karma have most likely figured out how to write publicly with a decent quality. So this service would simply be a bonus for them.
Isn’t it more important to enable this service for lurkers/readers on LessWrong who haven’t yet written many posts, due to the reasons you’ve mentioned?
Disclaimer: I don’t have 100+ karma and haven’t written a lot outside either—just privately in my note-taking app.
Interesting! I’ve recently been thinking a bunch about “narratives” (frames) and how strongly they shape how and what we think. This makes it much harder to see “the” truth, since changing the narrative changes things quite a bit.
I’m curious if anyone has an example of how they would go about applying frame-invariance to rationality.
Finland too (and I expect quite a few other EU countries to do so as well)
AGI Safety core by JJ (from AI Safety Support): https://mobile.twitter.com/i/lists/1185207859728076800
[Question] How do you conduct a personal study retreat?
Lily: If I was a parent I would change the fifteen minutes to ten minutes. Screen time is kind of bad for kids. I also like having an hour and a half for movies, but I think maybe it’s a bit much?
haha that’s so sweet! :D
Metta (loving-kindness) meditation would be an example practice that tries to focus attention on actively loving others in order to get better at it over time.
I don’t currently have time to point to concrete research backing it up, but it’s often been discussed positively on LessWrong and the EA Forum, and I have had surprisingly good results from it. In my experience, it has quite a quick feedback loop, so trying it out might be the most efficient way of testing it. The Waking Up app by Sam Harris is a good starting point.
I’ve often thought about this, and this is the conclusion I’ve reached.
There would need to be some criterion that separates morality from immorality. Given that, consciousness (i.e., self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.
I put my laptop on a box on top of my desk and use an external keyboard and mouse to operate it.