Mati_Roy(Mati Roy)
for unrelated reasons, I would also like a feature where I can mark a post as read (although that’s probably not common enough to be used as a direct signal, but it could still be a proxy maybe)
I haven’t read it, but the summary makes it seem related
What fraction of the cards from the Irrational Game didn’t replicate?
Is there a set of questions similar to this available online?
In the physical game I have, there’s a link to http://give-me-a-clue.com/afterdinner (if I’m not mistaken) which is supposed to have 300 more trivia questions, but it doesn’t work. Does anyone have them?
Part-time remote assistant position
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring me people, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheets formatting. A full description of our services is here: https://bit.ly/PantaskServices
The form to apply is here: https://airtable.com/shrdBJAP1M6K3R8IG It pays 20 USD/h.
You can ask questions here, in PM, or at mati@pantask.com.
Thanks
Lots of people felt they’d been injured by being mailed doomsday codes for LW or EAF for Petrov day...
how so? 0_o
If not, do you think there’s value in faithful representation of this topic within the art realm?
I think so
Are there any public misconceptions of AI that you think are dangerous? Or to a lesser extreme: hamper AI funding?
https://futureoflife.org/background/aimyths/
Have you ever seen the subject of AI faithfully represented in media art: ex films, books, graphic novels, etc?
See the post “What are fiction stories related to AI alignment?”. Not all of them qualify, but some do. I think these ones are very good: The Intelligence Explosion and NeXt
Is there a go-to resource that you recommend for someone outside of the field to learn about contemporary issues around AI Safety?
FLI has articles and a podcast: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
80,000 Hours has some articles and episodes on this: https://80000hours.org/podcast/
The AI Revolution: The Road to Superintelligence by WaitButWhy
Superintelligence by Nick Bostrom
Human Compatible by Stuart Russell
For more, see my list of lists here: https://www.facebook.com/groups/aisafetyopen/posts/263224891047211/
Do you think the AI research community or LW is caricatured in any way that is harmful to AI research?
I don’t know if the overall sign is positive or negative, but I’d guess there are likely non-zero caricatures that are harmful.
Are there any specific issues around AI that concern you the most?
The alignment problem (or are you asking what concerns us the most within that scope?)
If someone said they didn’t believe AI can have any positive impact on humanity, what’s your go-to positive impact/piece of research to share?
I don’t have one. Depends where they’re coming from with that belief.
How did your interest in AI begin?
I don’t know if I became interested in LessWrong or machine learning first—one of those.
Do you think there is enough general awareness around AI research and safety? If not, what do you think would help ferment AI safety in public and political discourse?
That’s assuming most people here want this—I don’t think that’s the case
Or to a lesser extreme: hamper AI funding?
I don’t know if by “hamper” you mean reduce, but it seems to me like there are conflicting views/models here about whether that would be good or bad.
What do you personally think the likelihood of AGI is?
That is, that humans eventually create AGI, right?
I didn’t know that story, but a friend just told me that Avogadro Corp shows in vivid detail how an autocorrect system like Grammarly could be harnessed by an AI to subtly take over the world
this video has a good guide to how humans use (and misuse) categories: 1. Introduction to Human Behavioral Biology
someone mentioned to me that the vagus nerve might play some small role in our identity; I’m writing this as a reminder or an invitation to maybe research this more
Right, good point, not necessarily, but also we’re working with finite resources
Ah, yes absolutely! I should have specified my original claim further to something like “when it affects the whole world” and “a lot of people you’ve identified as rational disagree”.
I like this idea
I’m helping Abram Demski with making the graphics for the AI Safety Game (https://www.greaterwrong.com/posts/Nex8EgEJPsn7dvoQB/the-ai-safety-game-updated)
We’ll make a version using https://app.wombo.art/. We have generated multiple possible artworks for each card and made a pre-selection, but we would like your input on the final selection.
You can give your input through this survey: https://forms.gle/4d7Y2yv1EEXuMDqU7 Thanks!
Oh, I assumed the problem was a social / logistic one, but now I’m assuming there’s also a scientific / technological one
I guess I was using “economies of scale” very loosely; that’s kind of what I had in mind, but thank you for the details and explanations!
Good point. (No I’m not. I’m also considering using gamete donors anyway.)
You need a bunch of follicle cells and the only source for those currently is abortion tissue.
Hmm, that seems like it shouldn’t be that hard of a problem to solve, but idk. I hope someone takes this on if that’s really a bottleneck.
Oops, right!