One of my favorite banjo solos is in this video: “(Gimme Some of That) Ol’ Atonal Music”, by Merle Hazard feat. Alison Brown. It’s extremely relevant to the post as well, making the point that there are multiple levels to art appreciation. The video makes the distinction between emotion and thinking, or heart and brain, but your distinction about timeframes and types of impact (immediate pleasure vs. changing/improving future interpretations of experiences) is valid as well.
That said, I’m not sure that it’s the art which contains the differences, so much as the audience and what someone brings to the experience of the art. OK, both: some art supports more layers than others.
As I’ve gotten older, I note more and more problems with the literal interpretation of questions like these. This has made me change my default interpretation (and sometimes I mention it in my response) to a more charitable version, like “what are some of your enjoyable or recommended …”. In addition to the problems you mention, a few other important factors make the direct “exactly one winner, legibly better than all others” interpretation problematic:
Variability over time. I explicitly value variety and change, and my ordering of things shifts based on which attributes of a thing matter most in this instant, and on how I estimate my reaction to those attributes right now.
Illegible preferences. I have no idea why I’m drawn to some things. I can make up reasons, but I have no expectation that I can actually discern all my true reasons, and I’m usually unwilling to try very hard.
High-dimensionality. Most discussions and recommendation-requests of this form are about non-simple experiences. Comparing any two requires a LOT of choices about how to weight the various dimensions of comparison in order to get a ranking at all (see the sketch after this list).
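As a toy illustration of that weighting problem (the items, attributes, scores, and weights below are all invented for the example), two defensible weightings of the same scores produce opposite rankings:

```python
# Two defensible weightings of the same attribute scores flip the ranking.
# All names and numbers are invented for illustration.
scores = {
    "Book A": {"prose": 9, "plot": 4, "ideas": 6},
    "Book B": {"prose": 5, "plot": 9, "ideas": 7},
}

def rank(weights):
    # Higher weighted sum ranks first.
    return sorted(scores, key=lambda item: -sum(weights[a] * v for a, v in scores[item].items()))

print(rank({"prose": 0.6, "plot": 0.2, "ideas": 0.2}))  # ['Book A', 'Book B']
print(rank({"prose": 0.2, "plot": 0.6, "ideas": 0.2}))  # ['Book B', 'Book A']
```

Neither weighting is wrong; the “best” answer is an artifact of a choice the question never specifies.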
It’s interesting to figure out how to make use of this multi-level model. Especially since personal judgement and punishment/reward (both officially and socially) IS the egregore—holding people accountable for their actions is indistinguishable from changing their incentives, right?
Mostly agreed—this argument fails to bridge (or even acknowledge) the is-ought gap, and it relies on very common (but probably not truly universal) experiences. I also am sad that it is based on avoidance instincts (“truly sucks”) rather than seeking anything.
That said, it’s a popular folk philosophy, for very good reasons. It’s simple enough to understand, and seems to be applicable in a very wide range of situations. It’s probably not “true” in the physics sense, but it’s pretty true in the “workable for humans” sense.
There’s probably a larger gap here than, say, Newton to Einstein for gravity, but it’s the same sort of distinction.
I’m not sure that AI boxing is a live debate anymore. People are lining up to give full web access to current limited-but-unknown-capabilities implementations, and there’s not much reason to believe there will be any attempt at constraining the use or reach of more advanced versions.
This seems just like regular auth, using a trusted 3P to re-anonymize. Maybe I’m missing something, though. It seems likely it won’t provide much value if it’s unbreakably anonymous (because it only takes a few stolen credentials to give an attacker access to fake humanity), and it doesn’t provide sufficient anonymity for important uses if it’s escrowed (such that the issuer CAN track identity and individual usage, even if they currently choose not to). A sketch of the escrowed variant follows.
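To make the escrowed variant concrete, here’s a minimal sketch (the class names, the HMAC stand-in for a real signature scheme, and the whole flow are my assumptions for illustration, not anything from the actual proposal):

```python
import hashlib
import hmac
import secrets

# Toy escrowed "proof of humanity" issuer. HMAC stands in for a real
# signature scheme; in practice verification would be an API call to the issuer.
class Issuer:
    def __init__(self):
        self.key = secrets.token_bytes(32)       # issuer's secret signing key
        self.escrow = {}                         # pseudonym -> real identity

    def issue(self, real_identity: str):
        pseudonym = secrets.token_hex(16)        # unlinkable to identity by outsiders...
        self.escrow[pseudonym] = real_identity   # ...but the issuer CAN link it back
        tag = hmac.new(self.key, pseudonym.encode(), hashlib.sha256).hexdigest()
        return pseudonym, tag

    def verify(self, pseudonym: str, tag: str) -> bool:
        expected = hmac.new(self.key, pseudonym.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

issuer = Issuer()
pseud, tag = issuer.issue("alice@example.com")
assert issuer.verify(pseud, tag)   # a service trusts the attestation, not the person
# The weak points from the comment, in code form: steal a few (pseud, tag) pairs
# and you have fake humanity; subpoena issuer.escrow and the anonymity is gone.
```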
Interesting thought. I tend to agree that the endgame of … protection from scalable attacks in general … is lack of anonymity. Without identity, there can be no memory of behavior, and no prevention of abuse that’s only harmful across multiple events/sources. I suspect it’s a long way out, though.
Your proposed solution (paid IP whitelisting) is pretty painful—the vast majority of real users (and authorized scrapers) don’t have a persistent enough address, or at least don’t know that they do, to participate.
I’m not sure it classifies as an emotion (nor does stupidity, for that matter), but it probably does exist as a motivation for some human acts, with the relevant emotion usually being anger.
I don’t think your distinction (harm for its own sake, as distinct from harm with a motivation) is real, unless you think there are uncaused actions in some other realm, or you discount some motivations (like anger or hatred) as “not valid” for some reason.
Anthropics start out weird.
Trying to reason from a single datapoint out of an unknown distribution is always going to be low-information and low-predictive-power. MWI expands the scope of the unknown distribution (or does it? It all adds up to normal, right?), but doesn’t change the underlying unknowns.
That’s a really good example, thank you! I see at least some of the analogous questions, in terms of physical measurements and variance in observations of behavioral and reported experiences. I’m not sure I see the analogy in terms of qualia and other unsure-even-how-to-detect phenomena.
Couldn’t you imagine that you use philosophical reasoning to derive accurate facts about consciousness,
My imagination is pretty good, and while I can imagine that, it’s not about this universe or my experience in reasoning and prediction.
Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic? Not meta-reasoning about science, but actual reasoning about a real thing?
Hmm, still not following, or maybe not agreeing. I think that if “the reasoning used to solve the problem is philosophical”, then “correct solution” is not available. “Useful”, “consensus”, or “applicable in the current societal context” might be better evaluations of philosophical reasoning.
I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term. They’ve fully(*) emulated some brains: https://openworm.org/ is fascinating in how far it’s come very recently. They’re nowhere near emulating a brain big enough to compare WRT complex behaviors from which consciousness can be inferred.
* “fully” is not actually claimed or tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet whether that’s so.
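For a sense of what “emulating the currently-measurable weights and interactions” amounts to, here’s a deliberately crude caricature (the rate-model form and all numbers are my illustrative assumptions; OpenWorm’s actual simulation is far richer, down to ion channels and body physics):

```python
import numpy as np

# Caricature of connectome emulation: propagate activations through fixed weights.
rng = np.random.default_rng(0)
n = 302                               # C. elegans has 302 neurons
W = rng.normal(0.0, 0.1, (n, n))      # stand-in for measured connectome weights
x = rng.normal(0.0, 1.0, n)           # current neuron activations
for _ in range(100):
    x = np.tanh(W @ x)                # "interactions": no neuromodulators, no physics
# Everything this loop omits is exactly the "more subtle physical properties"
# that may turn out to matter.
```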
But if someone finds the correct answer to a philosophical question, then they can… try to write essays about it explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions because the answer is true?
I think this is a crux. To the extent that it’s a purely philosophical problem (a modeling choice, contingent mostly on opinions and consensus about “useful” rather than “true”), posts like this one make no sense. To the extent that it’s expressed as propositions that can be tested (even if not now, one could describe how they would resolve), it’s NOT purely philosophical.
This post appears to be about an empirical question—can a human brain be simulated with sufficient fidelity to be indistinguishable from a biological brain. It’s not clear whether OP is talking about an arbitrary new person, or if they include the upload problem as part of the unlikelihood. It’s also not clear why anyone cares about this specific aspect of it, so maybe your comments are appropriate.
This comes down to a HUGE unknown—what features of reality need to be replicated in another medium in order to result in sufficiently-close results?
I don’t know the answer, and I’m pretty sure nobody else does either. We have a non-existence proof: it hasn’t happened yet. That’s not much evidence that it’s impossible. The fact that there’s no actual progress toward it IS some evidence, but it’s not overwhelming.
Personally, I don’t see much reason to pursue it in the short-term. But I don’t feel a very strong need to convince others.
I mean “mass and energy are conserved”: there’s no way to gain weight unless losses are smaller than gains. This is a basic truth, and an unassailable motte about how physics works. It’s completely irrelevant to the bailey of losing weight by calculating calories.
Not sure this is a new frontier, exactly; it was part of high-school biology classes decades ago. Still, it’s very much worth reminding people of, and worth bringing up when someone over-focuses on the bailey of “legible, calculated CICO” as opposed to the motte of “absorbed and actual CICO” (toy arithmetic below).
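To put toy numbers on the motte/bailey gap (every figure below is an invented assumption, not data):

```python
# Legible CICO vs. actual CICO, with made-up numbers.
label_kcal_in = 2500                   # what the food labels add up to
absorption = 0.90                      # fraction actually absorbed; varies by food and gut
kcal_absorbed = label_kcal_in * absorption
kcal_out = 2400                        # BMR + activity, itself hard to measure
surplus = kcal_absorbed - kcal_out     # the "absorbed and actual" balance
kg_per_kcal = 1 / 7700                 # rough rule of thumb for body fat
print(f"daily balance: {surplus:+.0f} kcal, roughly {surplus * kg_per_kcal * 30:+.2f} kg/month")
# The "legible" calculation (2500 - 2400 = +100/day) predicts slow gain;
# the absorbed version predicts slow loss. Same person, same food.
```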
[Question] What epsilon do you subtract from “certainty” in your own probability estimates?
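For intuition on why some epsilon has to come off the top (a toy sketch; the epsilons are arbitrary examples): probability exactly 1 corresponds to infinite log-odds, which no finite amount of evidence could ever produce or undo.

```python
import math

# How the choice of epsilon moves log-odds (epsilons are arbitrary examples).
for eps in (1e-2, 1e-4, 1e-6):
    p = 1 - eps
    print(f"p = 1 - {eps:g} -> log-odds {math.log(p / (1 - p)):5.1f}")
# p = 1.0 exactly would be infinite log-odds: no evidence could ever move it.
```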
I’d enjoy some acknowledgement that there IS an interplay between cognitive beliefs (based on intelligent modeling of the universe and other people) and intuitive experienced emotions. “not a monocausal result of how smart or stupid they are” does not imply total lack of correlation or impact. Nor does it imply that cognitive ability to choose a framing or model is not effective in changing one’s aliefs and preferences.
I’m fully onboard with countering the bullying and soldier-mindset debate techniques that smart people use against less-smart (or equally-smart but differently-educated) people. I don’t buy that everyone is entitled to express and follow any preferences, including anti-social or harmful-to-others beliefs. Some things are just wrong in modern social contexts.
Hmm. I can’t tell whether this is an interesting or new take on the question of what is a “true” experience, or if it’s just another case of picking something we can measure and then talking about why it’s vaguely related to the real question.
Do you also compare HUMAN predictions of other human emotional responses, to determine if that prediction is always experienced as suffering itself?