Sure, opinions come to people from a few different sources. I speculate that interpersonal transmission is the most common, but they can also originate in someone’s head, either via careful thought or via a brief whim.
scarcegreengrass
People don’t have opinions—opinions have people.
Often, one hears someone express a strange, wrong-seeming opinion. The bad habit is to view this as a deliberate bad action by that person. The good habit is to remember that the person heard this opinion, accepted it as reasonable, & might have put no further thought into the matter.
Opinions are self-replicating & rarely fact-checked. People often subscribe to 2 contradictory opinions.
Epistemic status: I’m trying this opinion on. It’s appealing so far.
I like it! In addition, I suppose you could use a topic-wide prior for those groups that you don’t have much data on yet.
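To make the topic-wide-prior idea concrete, here is a minimal sketch of one way it could work, in the empirical-Bayes style of shrinking sparse groups toward the overall rate. All names and numbers below are my own invention for illustration, not anything from the thread:

```python
# Hypothetical sketch: blend each group's observed rate with a
# topic-wide rate, so groups with little data lean on the prior.
# `prior_weight` acts like that many pseudo-observations drawn
# from the topic-wide distribution.

def shrunk_estimate(group_successes, group_trials, topic_rate, prior_weight=10):
    """Weighted blend of a group's observed rate & the topic-wide rate."""
    return (group_successes + prior_weight * topic_rate) / (
        group_trials + prior_weight
    )

# A well-sampled group stays near its observed rate (80/100 = 0.8):
well_sampled = shrunk_estimate(80, 100, topic_rate=0.5)

# A group with almost no data is pulled toward the topic-wide 0.5:
sparse = shrunk_estimate(1, 1, topic_rate=0.5)
```

With lots of data the group's own observations dominate; with one observation the estimate sits much closer to the topic-wide 0.5 than to the raw 1.0.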
This is totally delightful!
Personally I’d rather have the public be fascinated with how chatbots think than ignorant of the topic. Sure, non-experts won’t have a great understanding, but this sounds better than the likely alternatives. And I’m sure people will spend a lot of time on either future chatbots, or future video games, or future television, or future Twitter, but I’m not convinced that’s a bad thing.
The regulation you mention sounds very drastic & clumsy to my ears. I’d suggest starting by proposing something more widely acceptable, such as regulating highly effective self-modifying software that lacks security safeguards.
Basing ethical worth on qualia sounds very close to dualism, to my ears. I think instead the question must rest on a detailed understanding of the components of the program in question, & the degree of similarity to the computational components of our brains.
Excellent point. We essentially have 4 quadrants of computational systems:
Looks nonhuman, internally nonhuman—All traditional software is in this category
Looks nonhuman, internally humanoid—Future minds that are at risk for abuse (IMO)
Looks humanoid, internally nonhuman—Not an ethical concern, but people are likely to make wrong judgments about such programs.
Looks humanoid, internally humanoid—Humans. The blogger claims LaMDA also falls into this category.
Good point. In my understanding it could go either way, but I’m open to the idea that the worst disasters are less than 50% likely, given a nuclear war.
Good point. Unless of course one is more likely to be born into universes with high human populations than universes with low human populations, because there are more ‘brains available to be born into’. Hard to say.
In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form ‘X is unavoidable, everything is about X, you are screwed’. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics.
Future hazards deserve more research funding, yes, but remember that the future is not certain.
What’s the status of this meetup, CitizenTen? Did you hear back?
I have similar needs. I use a spreadsheet, populated via a Google Form accessible via a shortcut from my phone’s main menu. I find it rewarding to make the spreadsheet display secondary metrics & graphs too.
Other popular alternatives include Habitica & habitdaily.app (iPhone only). I’m still looking for a perfect solution, but my current tools are pretty good for my needs.
I’m not sure either. Might only be needed for the operating fees.
Agreed. We might refer to them as ‘leaderless orgs’ or ‘staffless networks’.
Does this reduction come from seniority? Is the idea that older organizations are generally more reliable?
Are you saying there would be a causal link from the poor person’s vaccine:other ratio to the rich person’s purchasing decision? How does that work?
Thanks! Useful info.
Can you clarify why the volcano-triggering scheme in 3 would not be effective? It’s not obvious to me. The scheme sounds rather lethal.
I call all those examples opinions.