I’ve also played a cooperative version of this for two people, which we called Contact. It’s entertaining for long road trips.
There’s no category/question given—the game can go in any direction.
In each round, the players simultaneously pick words, then say them at the same time.
The game continues until there is a round in which the two players say the same word. The goal is to match as quickly as possible.
Words cannot be re-used in multiple rounds.
In theory, you could extend this to more than two people, but my guess is that it would become significantly more difficult in a larger group to get everyone to match.
I think this post would benefit from having at least one real-world example, as well as your fictional example. I can’t tell what actual situations you’re pointing to.
One high-level summary that occurs to me is that “trying to solve problems sometimes makes them worse”—but I think you meant something more specific than that.
I used Wirecutter for this: The Best Air Purifier for 2021 | Reviews by Wirecutter (nytimes.com). I picked their top choice, the Coway AP-1512HH Mighty, about a month ago.
So far, it seems to work pretty well, and it’s very quiet in standby mode—roughly similar to the fridge. But every time I fry anything on the stove, the fan automatically speeds up to the highest level, which is much louder, roughly similar to a typical conversation. On the bright side, though, at least that proves that it works.
I don’t know if this is a complete answer, but maybe it has something to do with the average age?
India’s population pyramid looks a lot different from the US or Europe:
Population of India 2020 - PopulationPyramid.net
What kind of constraints do you have?
There are many constraint solvers for different kinds of equations. I enjoyed using pySMT, which provides a Python wrapper for several different libraries: pysmt/pysmt: pySMT: A library for SMT formulae manipulation and solving (github.com).
Sorry, let me try again, and be a little more direct. If the New Center starts to actually swing votes, Republicans will join and pretend to be centrists, while trying to co-opt the group into supporting Republicans.
Meanwhile, Democrats will join and try to co-opt the group into supporting Democrats.
Unless you have a way to ensure that only actual centrists have any influence, you’ll end up with a group that’s mostly made up of extreme partisans from both sides. And that will make it impossible for the group to function as intended.
I see a few other failure points mentioned, but no one has mentioned what I consider the primary obstacle—if membership in the New Center organization is easy, what prevents partisans from joining purely to influence its decisions? And if membership is hard, how do you find enough people willing to join?
The key idea that makes Bitcoin work is that it essentially runs a decentralized voting algorithm. Proof-of-work means that everyone gets a number of votes proportional to the computational power they’re willing to spend.
You need something similar to proof-of-work here, but I don’t see any good way to implement it.
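To illustrate the mechanism (a toy sketch of my own, not Bitcoin’s actual protocol): a “vote” only counts if it comes with a nonce whose hash clears a difficulty target, which is cheap to verify but takes work proportional to the difficulty to find.

```python
# Toy proof-of-work: find a nonce so that sha256(data + nonce) falls below a
# target. Expected search time doubles with each extra difficulty bit, so the
# rate at which you can produce valid "votes" scales with your compute.
import hashlib

def pow_hash(data: bytes, nonce: int) -> int:
    return int.from_bytes(hashlib.sha256(data + nonce.to_bytes(8, "big")).digest(), "big")

def mine(data: bytes, difficulty_bits: int = 12) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while pow_hash(data, nonce) >= target:
        nonce += 1
    return nonce

nonce = mine(b"vote: proposal 17")
assert pow_hash(b"vote: proposal 17", nonce) < (1 << 244)  # verification is one hash
```

The hard part isn’t the mechanism itself; it’s that for a political organization there’s no obvious scarce resource to anchor the votes to.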
Your suggestion of moral ensemble modeling sounds essentially the same to me as the final stage in Kegan’s model of psychological development. David Chapman has a decent summary of it here: Developing ethical, social, and cognitive competence | Vividness.
I don’t think I had noticed the relationship with statistical modeling, though. I particularly like the analogy of overfitting.
Yes, I fully agree that using a grammar to represent graphics is the One True Way.
There’s a lab at UW that’s working to extend the same philosophy to support interactive graphics: UW Interactive Data Lab | Papers (washington.edu). I haven’t had a chance to use it yet, but their examples seem pretty cool!
To save you a click, I’ve copied the example visualization on the homepage below. It shows all of the variables in the entire stack at the specified point in the execution.
It’s all auto-generated, so it doesn’t support more complex visualizations like your Container With Most Water example. But maybe it could be extended so that the user could define custom visualizations for certain objects?
Cal Newport is a CS professor who has written about productivity and focus for a long time. I’d recommend starting with the productivity section of his blog: Tips: Time Management, Scheduling, & Productivity—Study Hacks—Cal Newport. He’s also known for his book Deep Work, which I plan to read soon.
His advice is somewhat helpful for me as a software engineer, but I think it’s particularly aimed at people in research fields.
Edit: As others have pointed out, this is not the best strategy.
Nice problem! The best strategy seems to be to mix the red clay into the blue clay in infinitesimally small steps. Each bit of red clay then ends up as cold as possible, meaning that as much energy as possible is transferred to the blue clay.
Here’s what we get with two steps:
Mix 1⁄2 red at 100 degrees + 1 blue at 0 degrees ⇒ (50 + 0)/1.5 = 33.33 degrees.
Now remove the red clay, and add the other half:
1⁄2 red at 100 degrees + 1 blue at 33.33 degrees ⇒ (50 + 33.33)/1.5 = 55.55 degrees.
Now let’s compute this with n steps: at each step we add a 1/n fraction of the red clay to the blue clay, then remove it. Let the temperature of the blue clay after k steps be $T_k$, with $T_0 = 0$.

Adding a 1/n piece of red clay (at 100 degrees) to the blue clay:

$$T_{k+1} = \frac{(1/n)\cdot 100 + T_k}{1 + 1/n} = \frac{100}{n+1} + \frac{n}{n+1}\,T_k$$

To simplify our expression, let $r$ be the ratio $\frac{n}{n+1}$, so $T_{k+1} = \frac{100}{n+1} + r\,T_k$.

Unrolling our definition gives a geometric series:

$$T_n = \frac{100}{n+1}\left(1 + r + r^2 + \dots + r^{n-1}\right)$$

Replacing the geometric series with its sum, and using $1 - r = \frac{1}{n+1}$:

$$T_n = \frac{100}{n+1}\cdot\frac{1 - r^n}{1 - r} = 100\,(1 - r^n)$$

Now $r^n = \left(\frac{n}{n+1}\right)^n = \left(1 - \frac{1}{n+1}\right)^n$. As $n$ approaches infinity, it’s well-known that this approaches $1/e$.

So in the limit, $T_n = 100\,(1 - 1/e) \approx 63.2$ degrees.
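As a sanity check, here’s a quick numerical simulation of my own of the n-step mixing, which converges to the same limit:

```python
# Simulate n mixing steps: each step mixes a 1/n mass of red clay at 100
# degrees into the unit-mass blue clay, reaches equilibrium, then removes it.
def final_temp(n: int) -> float:
    t = 0.0  # blue clay starts at 0 degrees
    for _ in range(n):
        t = (100.0 / n + t) / (1.0 + 1.0 / n)  # mass-weighted equilibrium
    return t

for n in (2, 10, 1000, 100_000):
    print(n, round(final_temp(n), 2))
```

With n = 2 this reproduces the 55.55-degree answer above, and for large n it approaches 63.2 degrees.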
As a final note, the LaTeX support for LessWrong is the best I’ve seen anywhere. My thanks to the team!
I’m looking forward to following this!
If your analysis of the game theory of the situation is correct, we would expect the military to occasionally make concessions to share power, but also to violently reassert full control whenever it thinks that’s necessary. Do you see any way for the country to break out of that cycle?
For example, how effective do you think new international sanctions will be at curbing the violence?
US: U.S. To Impose Sanctions On Myanmar Military Officials Over Coup : NPR
UK: UK announces further sanctions against Myanmar generals—ABC News (go.com)
EU: EU agrees to sanctions on Russia crackdown and Myanmar coup | The Japan Times
On wearing glasses: Do you think contacts would also be helpful?
I’ve noticed, for example, that my eyes aren’t nearly as affected by chopping onions when I’m wearing contacts. That seems vaguely similar to COVID transmission by aerosols.
“...and that’s why you can’t haggle with store clerks.”
That is the norm, at least in the US. But a friend of mine worked at Macy’s, and she said customers would occasionally try to negotiate prices. They were often successful.
A quick search online found that it’s possible to haggle at a lot more places than I had realized: 11 Retailers Where You Can Negotiate a Lower Price (wisebread.com).
I consider this somewhat ethically dubious, though. If the clerks are paid hourly, they don’t have their own “skin in the game,” so there’s no way to have a fair negotiation.
When my little sister was very young, we told her that the ice cream truck was a “music truck”—it just went around playing music for people.
I don’t necessarily recommend lying, but it may have prevented some tantrums...
Google is the prime example of a tech company that values ethics, or it was in the recent past. I have much less faith in Amazon or Microsoft or Facebook or the US federal government or the Chinese government that they would even make gestures toward responsibility in AI.
I work for Microsoft, though not in AI/ML. My impression is that we do care deeply about using AI responsibly, but not necessarily about the kinds of alignment issues that people on LessWrong are most interested in.
Microsoft’s leadership seems to be mostly concerned that AI will be biased in various ways, or will make mistakes when it’s deployed in the real world. There are also privacy concerns around how data is being collected (though I suspect that’s also an opportunistic way to attack Google and Facebook, since they get most of the revenue for personalized ads).
The LessWrong community seems to be more concerned that AI will be too good at achieving its objectives, and we’ll realize when it’s too late that those aren’t the actual objectives we want (e.g., Paperclip Maximizer).
To me, those seem like mostly opposite concerns. That’s why I’m somewhat skeptical of your hope that ethical-AI teams will push for a solution to the alignment problem. The work might overlap in some ways, but I think the main goals are different.
Does that make sense?
Scott Alexander wrote a post related to this several years ago: Should You Reverse Any Advice You Hear? | Slate Star Codex
I wonder whether everyone would be better off if they automatically reversed any tempting advice that they heard (except feedback directed at them personally). Whenever they read an inspirational figure saying “take more risks”, they interpret it as “I seem to be looking for advice telling me to take more risks; that fact itself means I am probably risk-seeking and need to be more careful”. Whenever they read someone telling them about the obesity crisis, they interpret it as “I seem to be in a very health-conscious community; maybe I should worry about my weight less.”
Of course, some comments noted that this meta-advice is also advice that you should consider reversing—if you’re on LessWrong, you’re already in a community that’s committed to testing ideas, perhaps to an extreme degree.
For myself, when it comes to advice, I usually try to inform rather than to persuade. That is, I present the range of opinions that I consider reasonable, and let people make their own decisions. Sometimes I’ll explain my own approach, but for most issues I just hope to help people understand a broader range of perspectives.
This does occasionally backfire—some people are already committed strongly to one side, and a summary of the opposite perspective that sounds reasonable to me sounds absurd to them. In some cases, I’ve trapped myself into defending one side, trying to make it sound more reasonable, while I actually believe the exact opposite. And that tends to be more confusing than helpful.
But as long as I stick to following this strategy only with friends that are already curious and thoughtful people, it generally works pretty well.
(And did you catch how I followed this strategy in this comment itself?)
I understand your argument that there’s a systematic bias from tracking progress on relatively narrow metrics. If progress is uneven across different areas at different times, then the areas that saw progress in the recent past may not be the same areas in which we see progress today.
You don’t seem to make any suggestions on what would be a better metric to use. But to me it seems like the simplest solution is just to use broader metrics. For example, instead of tracking the cost of installing solar panels, we could measure the total cost of our electric grid (perhaps including environmental concerns such as carbon emissions as one part of that cost).
Along those lines, the broadest metrics we have are macroeconomic statistics such as GDP per capita. The arguments I’ve seen for stagnation (mostly from Jason Crawford or Tyler Cowen) already use the recent observed slowdown in GDP growth extensively.
If we see the same trend across most areas and at most levels of metrics (both narrow, specific use cases and overall summary statistics), isn’t that strong evidence in favor of the stagnation hypothesis?
Or do you think there are no reliable metrics for measuring progress as a whole?
Yeah, I forgot to mention, I actually tried that too! I did at least visit one of my teacher’s other students and try performing on her piano.
It’s harder for me to tell how much it helped, but I think it was useful, at least for my own confidence.