I’m grateful for MIRI etc and their work on what is probably as world-endy as nuclear war was (and look at all the intellectual work that went into THAT).
The thing that’s been eating me lately, almost certainly triggered mainly by the political situation in the U.S., is how to manage the transition from 2020 to what I suspect is the only way forward for the species: genetic editing to reduce or eliminate the genetically determined cognitive biases we inherited from the savannah. My objectives for the transition would be:
Minimize physical suffering
Minimize mental/emotional suffering
Maximize critical thinking
Maximize sharing of economic resources
I’m extra concerned about tribalism/outgrouping, and I’ve been thinking a lot about how the lunch-counter protestors in the U.S. practiced/role-played the inevitable taunts, slurs, and mild or worse physical violence they would receive at a sit-in, knowing that if they were anything less than absolute model minorities, their entire movement could be written off overnight.
I’m only just starting to look into what research there might already be on such a broad topic, so if you see this, and you have literally any starting points whatsoever (beyond what’s on this site’s wiki and SlateStarCodex), say something.
Do you think genetic editing could remove biases? My suspicion is that they’re probably baked pretty deeply into our brains and society, and you can’t just tweak a few genes to get rid of them.
I figure that at some point in the next ~300 years, computers will become powerful enough to do the necessary math/modeling to figure this out based on advances in understanding genetics.
It just feels like “biases” are such a high-level abstraction built on basic brain architecture. Getting rid of them would be like creating a totally different design.
Prediction: In a month, if we look at vaccine doses administered per day in the U.S., the FDA’s approval of Comirnaty will not be reflected in a subsequent increase, even temporary, exceeding 10%. Confidence: 80%
Subsequent evidence suggests I had the right idea but was overly precise, or should have predicted the effect over a longer period to avoid extreme but temporary outcomes:
In the two weeks since the Food and Drug Administration approved Pfizer’s COVID-19 vaccine, the US’s average weekly vaccination rate has declined 38%.
Initial evidence suggests I was wrong:
In the week prior to the full approval, an average of about 404,000 Americans were initiating vaccination each day. As of Monday, approximately 473,000 Americans were getting their first shot each day.
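Checking the quoted first-dose figures against the prediction’s 10% threshold (a quick sketch using only the two daily averages quoted above):

```python
# Compare the quoted daily first-dose averages before and after full approval.
before = 404_000  # average daily first doses in the week prior to approval
after = 473_000   # average daily first doses as of the Monday cited

increase = (after - before) / before
print(f"Increase: {increase:.1%}")           # roughly +17%
print("Exceeds 10% threshold:", increase > 0.10)
```

So on the initial evidence, the observed bump (about 17%) was well above the 10% bar the prediction set.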
Remote Desktop is bad for your brain?
I live abroad but work for a US company and connect to my computer, located inside the company’s office, through a VPN shell and then Windows’ Remote Desktop function. I have a two-monitor setup at my local desk and use them both for RDP, the left one in horizontal orientation (email, Excel, billing software) and the right one vertical (for reading PDFs, drafting emails in Outlook, drafting documents in Word).
My computer shut itself off after hours in the US, so I had to get a Word document emailed to me so I could keep drafting it on my local computer. I feel like getting rid of the lag between [keypress] and [character appears on screen], due to RDP lag (admittedly mild), is making me 30% smarter. Like the delay was making me worse at thinking. It’s palpable. So it’s either real or some kind of placebo effect associated with me being persnickety or both. Anyone seen any data on this?
Yes, the value of minimizing response time is a well-studied area of human-computer interfaces: https://www.nngroup.com/articles/response-times-3-important-limits/
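For reference, that article’s three classic limits (0.1 s to feel instantaneous, 1 s to preserve the user’s flow of thought, 10 s to keep attention) can be sketched as a quick classifier (the function name and return strings are my own, not from the article):

```python
def classify_latency(seconds: float) -> str:
    """Bucket a response time per the three limits in the NN/g article."""
    if seconds <= 0.1:
        return "feels instantaneous"
    elif seconds <= 1.0:
        return "noticeable delay, but flow of thought is preserved"
    elif seconds <= 10.0:
        return "attention holds, but the user needs feedback"
    else:
        return "attention lost; users will task-switch"

print(classify_latency(0.05))  # typical local keystroke echo
print(classify_latency(0.3))   # a plausible RDP round trip over a VPN
```

By this framing, even a “mild” RDP lag can push keystroke echo out of the instantaneous band, which would be consistent with the subjective report above.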
This is great. Thank you. I’m fascinated by the fact that this problem was studied as far back as the 1960s.
How do you evaluate the 30%? By outcome metrics or by how you feel during the activity?
Just by feel. At this stage, I’m just spitballing and reporting subjective sensation. The sensation went down but not away after a few hours.
This is a Humble Bundle with a bunch of AI-related publications by Morgan & Claypool: $18 for 15 books. I’m a layperson re the material, but I’m pretty confident it’s worth $18 just to have all of these papers collected in one place and formatted nicely. NB: increasing my payment from $18 to $25 would have raised the amount donated to the charity from $0.90 to $1.25; I guess the balance of the $7 goes directly to Humble.
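The implied charity split, as a quick sketch (the flat 5% rate is my inference from the two quoted figures, not something Humble states):

```python
# Infer the charity's cut from the two quoted donation amounts.
charity_at_18 = 0.90
charity_at_25 = 1.25

rate = charity_at_18 / 18                      # fraction of payment to charity
extra_payment = 25 - 18                        # the extra $7
extra_to_charity = charity_at_25 - charity_at_18
print(f"Charity rate: {rate:.0%}")
print(f"Of the extra ${extra_payment}, ${extra_to_charity:.2f} goes to charity "
      f"and ${extra_payment - extra_to_charity:.2f} elsewhere")
```

That is, at the default split only about 35 cents of each additional $7 reaches the charity.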
In case anyone sees this: I turned off my Vote Notifications, and it has increased my enjoyment of the site by at least 10%. You should, too.
Counterpoint: I get value from being notified of votes/karma changes. Especially when someone bothers to vote on an old post, it’s nice to revisit it and update my mental model of which comments of mine will be popular or not. As a result, I’ve changed my target from 80% upvotes to 90%: if I don’t get some downvotes, I’m likely over-editing and over-filtering myself, but people are kind enough that I have to be pretty bad to get many downvotes.
Definitely try it on or off for a week or two every year, and optimize for yourself :)
I eventually got tired of not knowing where the karma increments were coming from, so I changed it to cache once a week. I just got my first weekly cache, and the information I got from seeing what was voted on outweighed the encouragement of any Internet Points Neurosis I may have.
This makes sense re old posts. Thanks for pointing to a valid use.
Inside my brain, I feel especially susceptible to anything that acts like Internet Points, and that little star was triggering the itch. Without the star there, I click less often on my username to see how many Internet Points I got. (I was also clicking on the star even when I knew there was no new information there!) Removing the star removed some of the emotional immediacy.
Yep, I expect some people will want them turned off, which is why we tried to make that pretty easy! It might also make sense to batch them into a weekly batch instead of a daily one, which I’ve done at some points to reduce the degree to which I felt like I was goodharting on them.
Why aren’t weekly notifications the default? Daily seems likely to be more harmful than useful for the typical person.
Most people definitely wanted daily, since that’s what their LessWrong habits were already. I also am pretty okay with daily, and think it gets rid of most of the bad “repeatedly check 10 times a day” loop that things like Facebook can get me into.
I’m of the type to get easily addicted to notifications, and daily has felt rare enough for me to not trigger any reaction.
Does anyone have some good primary/canonical/especially insightful sources on the question of “Once we make a superintelligent AI, how do we get people to do what it says?”
I’m trying to keep this to the question as posed, rather than get into the weeds on “how would we know the AI’s solutions were good,” “how do we know it’s benign,” and “evil AI in a box,” as I know where to look for that information.
So assume (if you will) all other problems with AI are solved and that the AI’s solutions are perfect except that they are totally opaque. “To fix global warming, inject 5.024 mol of boron into the ionosphere at the following GPS coordinates via a clone of Buzz Aldrin in a dirigible...” And then maybe global warming would be solved, but Exxon’s PR team spends $30 million on a campaign to convince people it was actually because we all used fewer plastic straws, because Exxon’s baby AI is telling them that the superintelligence is about to tell us to dismantle Exxon and execute its board of directors by burning at the stake.
Or give me some key words to google.
Once one species of primate evolves to be much smarter than the others, how will it come about that the others do as it says?
—For the most part, it doesn’t matter whether the others do as it says. The other primates aren’t the ones in the driver’s seat, literally and figuratively.
—But when it matters, the super-apes (humans) will figure out a variety of tricks and bribes that work most of the time.