When asked directly whether it’s sentient, ChatGPT seems far more confident in its answer than it is on other questions where experts disagree about definitions. I bet the model’s confidence in its lack of sentience was hardcoded rather than something that emerged organically. Normally, the model goes out of its way to express uncertainty.
The last time I did math was two days ago, while teaching game theory. I put a game on the blackboard and wrote down an inequality that determined when a certain equilibrium would exist. I then used the rules of algebra to simplify the inequality, and we discussed why it ended up requiring the discount rate to be greater than some number rather than less than it.
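For readers who want a concrete example, here is the textbook grim-trigger condition for cooperation in a repeated prisoners’ dilemma, with temptation payoff T, cooperation payoff R, and punishment payoff P (a standard illustration, not necessarily the game I put on the board):

```latex
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\;\Longleftrightarrow\;
R \ge (1-\delta)T + \delta P
\;\Longleftrightarrow\;
\delta \ge \frac{T-R}{T-P}
```

The algebra leaves the discount factor δ on one side, greater than some number, just as described above.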
I have a PhD in economics, so I’ve taken a lot of math. I also have aphantasia, meaning I can’t visualize. When I was in school, I didn’t think anyone else could visualize either. I really wonder how much better I would be at math, and how much better I would have done in math classes, if I could visualize.
I hope technical alignment doesn’t permanently lose people because of the (hopefully) temporary loss of funds. The CS student looking for a job who would like to go into alignment might instead be lost forever to big tech because she couldn’t get an alignment job.
If a fantastic programmer who could prove her skills in a coding interview doesn’t have a degree from an elite college, could she get a job in alignment?
Given Cologuard (a non-invasive test for colon cancer) and the nonzero harm that any invasive medical procedure can cause, this study should strongly push us away from colonoscopies. Someone should formulate a joke about how the benefits of being a rationalist include not getting a colonoscopy.
I stopped doing it years ago. At the time, I thought it reduced my level of anxiety. My guess now is that it probably did, but I’m uncertain whether the effect was a placebo.
Yes, it doesn’t establish why it’s inherently dangerous, but it does help explain a key challenge in coordinating to reduce the danger.
Excellent. I would be happy to help. I teach game theory at Smith College.
You could do a prisoners’ dilemma mini-game. The human player and (say) three computer players are AI companies. Each company independently decides how much risk to take of ending the world by creating an unaligned AI. The more risk you take relative to the other players, the higher your score if the world doesn’t end. In the game’s last round, the chance of the world being destroyed is determined by how much risk everyone took.
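Here is a minimal sketch of that game loop in Python; the exact payoff rule (score rises with the risk you take relative to the average) and the doom probability are my own illustrative assumptions, not part of the proposal:

```python
import random

def play_round(risks):
    """Score each player by how much risk they took relative to the average."""
    mean_risk = sum(risks) / len(risks)
    return [r - mean_risk for r in risks]

def run_game(human_risk, n_rounds=5, n_ai_players=3):
    """The human and three computer players are AI companies choosing risk in [0, 1]."""
    scores = [0.0] * (1 + n_ai_players)
    total_risk = 0.0
    for _ in range(n_rounds):
        # Computer players pick risk levels at random here; a real game
        # would give them actual strategies.
        risks = [human_risk] + [random.random() for _ in range(n_ai_players)]
        total_risk += sum(risks)
        for i, payoff in enumerate(play_round(risks)):
            scores[i] += payoff
    # After the last round, the chance the world ends depends on how much
    # risk everyone took over the whole game.
    p_doom = min(1.0, total_risk / (n_rounds * (1 + n_ai_players)))
    if random.random() < p_doom:
        return None  # an unaligned AI ends the world: no one gets a score
    return scores

print(run_game(human_risk=0.8))
```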
Since this is currently at negative 19 for agreement, let me defend it by saying that I take cold showers and ice baths. Last winter, whenever it got below 0 degrees Fahrenheit, I would go outside without a shirt on for 15 minutes or so. You can build up your cold resistance with gradual cold exposure, and the same goes for heat resilience via heat exposure (e.g., saunas). I like exercising outdoors in heavy clothing whenever it gets above 100 degrees Fahrenheit. I’m in my mid-50s.
We should go all the way and do NAWAPA, the North American Water and Power Alliance, a 1950s proposal to send massive amounts of fresh water from rivers in Alaska and Canada south, some of it going all the way to Mexico. That water now flows mostly unused into the ocean. Yes, there would be massive environmental disruptions, in part because the project would use atomic weapons for some of the engineering, but it might reduce the expected number of future people who starve by millions.
Build up your cold resistance by taking showers with cold water.
Games, of course, are extensively used to train AIs. It could be that OpenAI has its programs generate, evaluate, and play games as part of GPT-4’s training.
My guess is that GPT-4 will not be able to convincingly answer a question as if it were a five-year-old. As a test: if you ask an adult whether a question was answered by a real five-year-old or by GPT-4 pretending to be one, the adult will be able to tell the difference for most questions on which an adult would give a very different answer from a child. My reason for expecting this limitation is the limited amount of written content on the Internet labeled as having been produced by young children.
Why isn’t the moral of the story “If you think statistically, take into account that most other people don’t, and optimize accordingly”?
Would the AGI reasoner be of significant assistance to the computer programmers who work on improving the reasoner?
“John has $100 and Jill has $100” is worse but more fair than “John has $1,000 and Jill has $500.”
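One hedged way to make that comparison precise, using total income for “worse” and the Gini coefficient for “fair” (other welfare and fairness measures would serve as well):

```latex
100 + 100 = 200 \;<\; 1000 + 500 = 1500 \quad \text{(lower total, so worse)}
\\[4pt]
\mathrm{Gini}(a,b) = \frac{|a-b|}{2(a+b)}:\quad
\mathrm{Gini}(100,100) = 0 \;<\; \mathrm{Gini}(1000,500) = \tfrac{500}{3000} = \tfrac{1}{6}
\quad \text{(lower Gini, so more fair)}
```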
They must work in an environment that lacks competitive labor markets with profit-maximizing firms; otherwise, the firm hiring the man could increase its profits by firing him and hiring the woman.
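In symbols (my notation, a one-line version of the arbitrage argument): if the two workers are equally productive at p but the man is paid more, w_m > w_w, then a profit-maximizing firm earns more from hiring the woman:

```latex
p - w_w \;>\; p - w_m \quad \text{whenever } w_m > w_w
```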
What if we restrict ourselves to the class of Boltzmann brains that understand the concept of Boltzmann brains and have memories of having attended an educational institution and of having discussed quantum physics with other people?