Just this guy, you know?
Interesting near-simultaneous post with https://www.lesswrong.com/posts/gNodQGNoPDjztasbh/lies-damn-lies-and-fabricated-options . In both cases, it’s about discussion with someone who has a different model, and there’s no real way to understand whether a semi-specified counterfactual world is “possible”.
Finding the crux of a disagreement of this sort is non-trivial. Starting with explaining your beliefs and understanding in a more detailed way is a pretty reasonable place to start, IMO. Ideally, your conversational partner will stop you when they disagree, and expound on THEIR model in similar detail.
What predictions does the theory make? Most discussion of consciousness around here seeks to dissolve the question, not to debate the mechanics of an un-measured subjective phenomenon.
Penrose’s theory (quantum sources of consciousness) does get mentioned occasionally, but not really engaged with, as it’s not really relevant to the aspects that get debated often.
https://xkcd.com/927/

Telling time by specifying the timezone (3:12pm Pacific Time) or using ISO 8601 is pretty much usable anywhere, and as precise as you need. It’s going to be more universal to get competent at timezone handling than to (try to) convince everyone to use UTC.
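For concreteness, a minimal sketch of the "get competent at timezone handling" approach, using Python's stdlib `zoneinfo`. The date is arbitrary (chosen for illustration), and any IANA zone name works the same way:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# "3:12pm Pacific Time", pinned to an arbitrary date for illustration.
t = datetime(2021, 10, 18, 15, 12, tzinfo=ZoneInfo("America/Los_Angeles"))

# ISO 8601 with an explicit offset is unambiguous anywhere:
print(t.isoformat())  # -> 2021-10-18T15:12:00-07:00

# Converting to UTC (or any other zone) is one call:
print(t.astimezone(ZoneInfo("UTC")).isoformat())  # -> 2021-10-18T22:12:00+00:00
```

The point is that the offset travels with the timestamp, so nobody has to agree on a single "universal" zone.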
A lot depends on the X, and the implications of X, and whether EITHER of you have evidence for or against it. There are plenty of topics that really aren’t motivated by empirical predictions and evidence. Those tend to be very frustrating to try to use Bayes’ Rule on.
Many social or religious statements, for instance, are not claims as such. They’re framed as truth, but are really moral axioms or faith-based frameworks of thought. My general advice: let it go, you probably can’t convince them, nor can they convince you. Truth-seeking is not the mode of communication you’re in.
Don’t learn too much from fiction (including games with non-real-world rules).
There are no actual sentient beings harmed by the actions in A&A. There is no moral impact from where you choose to use your plastic pieces.
In real life, some violence is likely justified to prevent worse outcomes. Whether any given war meets that standard, and what actions within a war are overall beneficial, are at best “difficult to determine”.
It’s likely (unless you’re quite unusual as a human) that this is a false dichotomy. You’ll likely want to find multi-dimensional optima, rather than picking one simple thing and ignoring other aspects. The more important question is how long-term you think regarding gratification.
Look back over the last year. Do you wish you’d done things that made you have a few much happier moments, or do you wish you’d done things that made you a little happier much of the time?
I think you’re missing the fact that “the economy” isn’t actually about currency or accounting. Those are ways of tracking the economy, which consists of various goods and services that people provide to each other.
If any given currency (crypto or not) becomes untrustworthy, its value goes to zero, and other currencies take over as accounting mechanisms. Often with some violence in disputed ownership of the actual stuff that the currency was supposed to have been tied to.
Keep in mind—oil is food production, as much or more than land is. Moving workers to the right spots, running irrigation pumps, machinery, etc. is all fueled by oil. As is moving raw food to processing areas, and then to the actual hungry people.
I like the idea, but it requires a definition of what citizenship rights/privileges get downscaled with less than full 1.0 citizenship (and upscaled with increasing shares). Can I buy myself up to 10,000 citizenships? Or retain some rights that are important to me (freedom to travel and permanent residence/work ability) with 0.01 citizenships?

I strongly suspect that disaggregating the “rights” into tradeable licenses is a more workable mechanism than fractional citizenship. And, of course, once it’s no longer considered a “right” acquired with birth and/or naturalization, it’ll stop being granted that way, and will only be rentable from the authority for a fee.
Good point! I think it’s simply a mistake to say:

> my end goal is finding cars that are really a cut above the rest in terms of safety
That’s not an end goal. That’s an instrumental goal toward actually being safe while moving about. Actually spending less time on the road, or improving your driving ability/habits, is very likely to have more safety impact than the difference between the top few contenders for your choice of vehicle.
It seems to me like there are meaningful differences between these words: viable, lovable and appropriately gray.
It seems that way, but I haven’t found it to be the case in actual product discussions. The details and unknowns overwhelm the semantic and general differences.
My team had a huge, annoying debate about MVP vs MLP, and it didn’t take me long to notice that it doesn’t matter. They ended up with MVP, and that’s just fine. AGP is just another label that I don’t think really helps with any actual decisions. Once you have agreement that you don’t really know what’s viable, lovable, or appropriate yet anyway, the wording is just bike-shed painting.

The underlying idea for all of these is that you build incrementally and, as early as possible, launch to real customers who can give you useful feedback and start imposing market discipline on your product. Only then can you find out what’s actually important to them.
Defining “possible” is usually the sticking point. It needs to be discussed in detail, not in generalities about whether the product is “viable” or “lovable”. What actual features do you want feedback on, and what set of operations/uses is anyone willing to pay for?
I don’t think there’s enough information to answer beyond the basic obvious expectation. 50% is my prior for coin flips, and unless you specify a VERY small number of voters and a known distribution of their votes, the coinflip is lost in the noise, so the result carries no evidence about it. If “always” is meant as a mathematical certainty rather than merely my opponent’s intent (which could make the results misleading), then the answer must be 1 minus the coinflip probability.
In repeated cases, knowledge about previous bids can give participants good reason to lie about their preferences. In the simplest case, knowing that a higher willingness-to-pay exists (but did not obtain because the second-best bid was quite low) can motivate someone to put in a higher-but-expected-to-lose bid just to drive up the price.
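A minimal sketch of that dynamic, assuming a sealed-bid second-price (Vickrey) auction; the bidders and numbers are invented for illustration:

```python
def second_price(bids):
    """Return (winner, price) for a dict of {bidder: bid}.

    The highest bidder wins but pays the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

# Honest round: the winner pays a low second-best bid.
print(second_price({"A": 100, "B": 20}))  # -> ('A', 20)

# Next round: B, having learned that A's willingness-to-pay is near 100,
# submits a higher bid B still expects to lose, purely to drive up the price.
print(second_price({"A": 100, "B": 90}))  # -> ('A', 90)
```

B pays nothing either way, but A's price jumps from 20 to 90, which is exactly the incentive to misreport preferences that repeated play creates.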
I think there are very distinct skills in “philosophy”, which have different measures for achievement/skill, and therefore different training regimens. Like most things one might want to get better at, there are specializations to consider. Analogously, you can train in “sports” to a certain level, but beyond that you train in “track and field”, and beyond that “javelin throw”.
So, what do you actually want to get better at, and where are you starting from?
Parts of logical presentation of arguments can be trained as debate, and others as blog posts or published articles.
Philosophical history and comparative studies of populations are probably best practiced in academia.
Actual useful models of human behaviors and justifications that many give for those behaviors—probably best practiced in the doing.
If you think the estimates are made using the same or better information than you have, and are representative (unbiased in selection or reporting) of the true beliefs of the estimators. If these do not hold, the aggregate estimate MAY still be better than yours, or your independent estimate may be better.
If median is significantly different from the mean of a group of estimates, beware. Depending on the reasons you see for the variance, you may prefer to throw out outliers and then take the median/mean (which will be closer together).
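That mean-vs-median check, and the outlier-trimming step, can be sketched in a few lines; the estimates here are made-up numbers for illustration:

```python
from statistics import mean, median

estimates = [10, 11, 12, 13, 95]  # one wild outlier

# A large gap between mean and median is the warning sign:
print(mean(estimates), median(estimates))  # -> 28.2 12

# Trim the extremes, then re-aggregate; mean and median converge:
trimmed = sorted(estimates)[1:-1]  # drop the min and max
print(mean(trimmed), median(trimmed))  # -> 12.0 12
```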
Generally, use it as evidence in calculating a posterior from your prior, rather than adjusting your prior. The trick is in not double-counting evidence that you’re using directly which the public estimates are also depending on.
For Metaculus as evidence, there’s not much mechanism for correction—nobody is making money by moving the prediction toward truth. Which implies that it’s not very good for any more than a trigger to look deeper if you’re surprised by a result. You’ll have to figure out the reason for the surprising prediction, and use those reasons as evidence (if you agree with them), not just the resulting predictions.
In a law class, the binding question would be “who is my client”? I believe I can make an argument for culpability or justification of any action, including flipping a coin or somehow derailing the trolley, saving all 6 but killing 20 passengers.
I don’t anticipate facing the question in reality, and don’t have a pre-commitment to what I’d do. Analogous decisions (where I can prioritize outcome over common rules) are not actually similar enough to generalize. In many of those, the constraint is how much my knowledge and impact prediction differs from the circumstances that evolved the rules. In cases where I’m average, I follow rules. In cases where I have special knowledge or capability, I optimize for outcome.
This is unsatisfying to many, and those who pose this question often push for a simpler answer, and get angry at me for denying their hypothetical. But that’s because it’s simply false that I have a rule for whether to follow rules or decide based on additional factors.
Is there a corresponding list of solved ML Safety problems?
Thank you for this! Another +1 for it being the single biggest influence on my thinking, and I am very happy that you’ve summarized it in this way—it won’t remove the enjoyment of reading it, but it will probably remove 1.5 passes from the normal “read it 3 times over the course of a decade to really get it” advice.
A lot of voting problems are actually group-preference ambiguity. For a given population with static preferences and independent positions (that is, the selected body is just a list of the top N candidates, not a slate where less-preferred individuals become preferred as a group), I’d expect that cloning is ideal.
The populace would actually prefer N copies of the best candidate, rather than the N-1 not-as-good people.
In the case where the interaction of selected options matters (for instance, where a cooperative slate of second-best candidates is preferred over a mixed body of the best individuals), this kind of vote fails to serve regardless.