I am an Android Software Engineer. Unfortunately, it’s not about androids, but about Android OS apps.
[Question] Do you know any human-human communication models?
Chapter 1: What’s the Question?
Chapter 2: What’s Inside?
Chapter 3: What’s an Object?
Chapter 4: What’s the Problem?
Chapter 5: How to Describe?
Chapter 6: How does it Work?
Chapter 7: How to Focus?
Chapter 8: Why is this Important?
Chapter 9: Why can it Select?
Chapter 10: What does it Mean?
Rational Manifesto
My wife was working in a BSL-3 facility with COVID and other viruses that cause serious health issues in humans and spread relatively easily. This is the type of lab where you wear positive pressure suits.
To get access to such a facility, you need to complete safety training, which takes about a month, and successfully pass the exam; only after that can you enter. The people working there were, of course, intelligent and held master's or doctoral degrees in some field related to biology or virology.
So, in essence, we have highly intelligent people who know they are working with very dangerous material and who passed the safety training and exam. The atmosphere itself motivates you to be careful: you're wearing a positive pressure suit in a COVID lab.
What it was like in reality: the suit indicates that a filter or battery replacement is needed? Oh, it's okay, it can wait. The same went for replacing the UV lamps in the lab. Staying in the lab all night without proper sleep? Yeah, a regular occurrence if someone is trying to finish their experiments. There were rumors that once someone even took a mobile phone in with them. A mobile phone. In BSL-3.
It seems to me that after some time working with dangerous stuff, people just become overconfident, because their observation is something like: "previously nothing bad happened, so it's OK to relax a bit and be less careful about safety measures."
I was experimenting with exactly the same thing using GPT-4. On just the top 20 questions, the result was more or less equal to the community's.
But when it came to numeric estimations like "how many people will die due to COVID", it outperformed humans, giving highly accurate predictions. (I was asking not for point values but for ranges with quartiles, and the humans' results had much higher dispersion compared to GPT-4's.)
Also, I compared only against community predictions from January 2022, since GPT-4's training data ends in September 2021, and it's unfair to compare predictions if people had a lot of additional evidence that GPT-4 doesn't have.
If it's interesting, I can share the methodology, results, and dataset.
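Not the original methodology, but one way the dispersion comparison above might be sketched: each forecaster gives a (25th, 50th, 75th) percentile estimate, and a group's dispersion is summarized by the median width of those interquartile ranges. All the numbers below are made-up illustrations, not data from the actual experiment.

```python
from statistics import median

# Hypothetical forecasts: (25th, 50th, 75th) percentile estimates for one
# numeric question. These values are illustrative, not real survey data.
human_forecasts = [(10_000, 50_000, 200_000),
                   (5_000, 80_000, 500_000),
                   (20_000, 40_000, 90_000)]
gpt_forecasts = [(30_000, 60_000, 120_000),
                 (35_000, 55_000, 110_000)]

def iqr_width(forecast):
    """Width of the interquartile range (75th minus 25th percentile)."""
    q25, _, q75 = forecast
    return q75 - q25

def median_iqr(forecasts):
    """Median IQR width across a group: a rough dispersion score."""
    return median(iqr_width(f) for f in forecasts)

print(median_iqr(human_forecasts))  # larger value = more dispersed group
print(median_iqr(gpt_forecasts))
```

A wider median IQR for the human group would match the observation that their quartile ranges were much more spread out than GPT-4's.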
Hello. I've created it. It's more about information processing, but it's useful for understanding some communication-related stuff too. It's not finished, because LW doesn't allow me to add more than five posts a day, but the main part is here:
https://www.lesswrong.com/s/JsGa9AHEG3EgEq45s