Just this guy, you know?
Dagon
Probably crazy, yes. Don’t feel bad, all humans are.
But when you lead with ‘I can’t shake my belief’, that indicates an internal conflict: part of you doesn’t believe it. And since there’s no evidence that can resolve the question, you could probably use professional help to figure out how to believe more mainstream illusions that are easier and more satisfying for most humans.
Yes, different “it” will have VASTLY different costs and potential evidence from trying, so the discussion doesn’t generalize very well. “you are reasoning too much” implies “you are empirically testing too little”, which could easily be true or false, or neither (it could be “you are reasoning badly from evidence we agree on” or “you need to BOTH measure and reason a lot more clearly”).
For some (but not all) topics, “just try it” is INCREDIBLY unhelpful—most people are pretty bad observers, and a lot of experiences don’t separate into elements in ways that make it easy to analyze which parts are evidence and which parts are idiosyncratic triggers of internal states.
I suppose it’s because Bob isn’t aware of all the things he needs to say before posting the question, and Alice makes assumptions about what he needs while he believes he doesn’t need it.
Without actual specifics, it’s hard to know WHY the disconnect is happening. It does seem that Alice and Bob aren’t in agreement over what the question is, but it’s unclear which (if either) is closer to something useful.
This seems WAY over-abstracted. There are important differences in what kinds of evidence are obtainable by what techniques for problems in very different domains.
Also, this seems unnecessarily adversarial between Bob and Alice. Have they not agreed on the problem or on what would constitute a solution? If they can reframe to a shared seeking of knowledge, it may be easier to actually talk about what each believes and why.
Is this in a situation where you’re limited in time or conversational turns? It seems like the follow-up clarification was quite successful, and for many people it would feel more comfortable than the more specific and detailed query.
In technical or professional contexts, saving time and conveying information more efficiently gets a bit more priority, but even then this seems like over-optimizing.
That said, I do usually include additional information or a conversational follow-up hook in my “I don’t know” answers. You should expect to hear from me “I don’t know, but I’d go at least 2 hours early if it’s important”, or “I don’t know, what does Google Maps say?”, or “I don’t know, what time of day are you going?” or the like.
I’d love to see some reasoning and value calculations or sketches of what to do INSTEAD of the things you eschew (planning, saving, and working toward slight improvements in chances).
Even if the likelihood is small, it seems like the maximum value activities are those which prepare for and optimize a continued future. Who knows, maybe the horse will learn to sing!
Causal commitment is similar in some ways to counterfactual/updateless decisions. But it’s not actually the same from a theory standpoint.
Betting requires commitment, but it’s part of a causal decision process (decide to bet, communicate commitment, observe outcome, pay). In some models, the payment is a separate decision, with breaking the commitment only being an added cost to the ‘renege’ option.
There’s some subtlety here about exactly what “zooming” means. In standard implementations, zooming recalculates a small area of the current view, such that the small area has higher precision (“zoomed”), but the rest of the space (“unzoomed”) goes out of frame and the memory gets reused. The end result is the same number of sampled points (“pixels” in the display) each zoom level.
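A minimal sketch of that frame-based approach (all names and constants are mine, purely illustrative; escape-time iteration with plain floats):

```python
# Illustrative escape-time Mandelbrot "zoom" sketch.
# Each zoom level recomputes the SAME fixed pixel grid over a smaller region:
# per-pixel precision increases, but the total number of samples never grows.

WIDTH, HEIGHT, MAX_ITER = 80, 40, 100

def render(center_x, center_y, scale):
    """Recalculate every pixel for the current view; memory use is constant."""
    grid = []
    for py in range(HEIGHT):
        row = []
        for px in range(WIDTH):
            # Map pixel to a point in the (smaller, "zoomed") complex region.
            c = complex(center_x + (px / WIDTH - 0.5) * scale,
                        center_y + (py / HEIGHT - 0.5) * scale)
            z, n = 0j, 0
            while abs(z) <= 2 and n < MAX_ITER:
                z = z * z + c
                n += 1
            row.append(n)
        grid.append(row)
    return grid

# Zooming: shrink scale each level; the old "unzoomed" area falls out of frame
# and its memory is reused by the new frame.
scale = 3.0
for level in range(5):
    frame = render(-0.743, 0.131, scale)  # same WIDTH*HEIGHT samples every level
    scale *= 0.5
```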
There’s a saying about investing which somewhat applies here. “The market can stay irrational longer than you can stay solvent”. Another is “in the long run, we’re all dead.”
Nothing is forever, but many things can outlast your observations. Eventually everything is steady state, fine. But there can be a LOT of signal before then.
Note that your computer doesn’t run out of bits when exploring the Mandelbrot set. Bits can encode an exponential number of states, and a few megabytes is enough to not terminate for millennia if it’s only zooming in and recalculating thousands of times per second. Likewise with your job—if it maxes or mins a hundred years out, rather than one, it’s a very different frame.
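To illustrate the “doesn’t run out of bits” point, here’s a minimal sketch assuming arbitrary-precision arithmetic via the mpmath library (the center coordinates, loop bounds, and helper name are mine, not from the comment):

```python
# Sketch: deep zoom without exhausting precision, using mpmath.
# Plain float64 has ~53 mantissa bits, so center + scale rounds back to
# center once depth exceeds ~52 halvings; here we grow precision instead.
from mpmath import mp, mpf, mpc

def escape_time(c, max_iter=100):
    z = mpc(0)
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

center = mpc("-0.743643887037", "0.131825904205")
for depth in range(0, 200, 50):
    mp.prec = 64 + depth            # roughly one extra bit per halving of scale
    scale = mpf(2) ** (-depth)      # view width shrinks exponentially with depth
    sample = center + scale         # a point one "view width" from the center
    _ = escape_time(sample)         # still numerically distinct from the center
```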
It’s surprising that it’s taken this long, given how good public AI coding assistants were a year ago. I’m skeptical of anything with only closed demos and not interactive use by outside reviewers, but there’s nothing unbelievable about it.
As a consumer, I don’t look forward to the deluge of low-quality apps that’s coming (though we already have it to some extent with the sheer number of low-quality coders in the world). As a developer, I don’t like the competition (mostly for “my” junior programmers, not yet me directly), and I worry a lot about whether the software profession can make great stuff ever again.
That answer just raises more questions. How do new voters get votes, and what happens to deceased or newly-ineligible voters’ “stored votes”? Are votes transferable (or sellable)?
Money is rather different from votes; there’s zero expectation of “fair distribution” or “equal weight”. That’s why we have different things for different purposes.
You COULD just do away with voting and use currency auctions. I think a lot of people would object.
I might say it fails to avoid that bias, rather than creating it. Personally, I think it carries enough more information than votes to be worth having. In fact, I’d probably remove the agree/disagree votes and just fold them into reacts.
You could probably reduce the bias a little by putting “suggested reacts” in the same line, with a 0 next to them, so they can just be clicked rather than needing to discover and click. At the expense of clutter and not seeing the ACTUAL reacts as easily.
How durable is the vote storage? I can see this as great if there’s a closed set of voters on a closed set of issues, and voters get to allocate the marginal importance to them of each issue, in order to use all their voting power for the most important. I suspect that for long-running governance choices, this will feel unfair to young/new voters, and to older ones who’ve used their votes on previous things that they now realize were unimportant.
I will also say that I’m worried by the statement:
“a political system is not legitimate because of the consent of the governed, but because of the welfare of the governed”
This treats people as moral patients rather than moral actors. That’s a framing that leads to disenfranchisement pretty easily.
I don’t intend it as strong pushback—this is all speculative enough that it falls well outside my intuitions, and far enough from historical distribution that data is misleading. Anything could happen!
“There will presumably be some entity that is producing basic goods that humans need to survive. If property rights are still a thing, this entity will require payments for the basic goods.”
Do you mean “if” or “iff”? In the case where a large subset of humans CAN’T make payments for basic goods, does that mean property rights aren’t a thing? I suspect so (or that they’re still a thing, but far different than today).
Quite. We don’t hear enough about individuality and competitive/personal drives when talking about alignment. I worry a lot that the abstraction and aggregation of “human” values completely misses the point of what most humans actually do.
The equilibrium comprises literal transactions, right? You should be able to find MANY representative specific examples to analyze, which would help determine whether your model of value is useful in these cases.
My suspicion is that you’re trying to model “value” as something intrinsic, rather than as a relation between individuals, which means you are failing to see that the packaged/paid/delivered good is actually distinct and non-fungible with the raw/free/open good, for the customers who choose that route.
Note that in the case of open-source software, it’s NOT an ultimatum game, because both channels exist simultaneously and neither party has the option to deny the other. A given consumer paying for one does not prevent some other customer (or even the same customer in parallel) from using the direct free version.
It’s worth examining whether “capturing value” and “providing value” are speaking of the same thing. In many cases, the middlemen will claim that they’re actually providing the majority of the value, in making the underlying thing useful or available. They may or may not be right.
For most goods, it’s not clear how much of the consumer use value comes from the idea, the implementation of the idea, or from the execution of the delivery and packaging. Leaving aside government-enforced exclusivity, there are usually reasons for someone to pay for the convenience, packaging, and bundling of such goods.
I worked (long ago) in physical goods distribution for toys and novelties. I was absolutely and undeniably working for a middleman—we bought truckloads of stuff from factories, repackaged it for retail, and sold it at a significant markup to retail stores, who marked it up again and sold it to consumers. Our margins were good, but all trades were voluntary and I don’t agree with a framing that we were “capturing” existing value rather than creating value in connecting supply with demand.
I don’t get it. “AI that does what we need AI to do” implies that “we” is a cohesive unit, and also that what we need is extremely limited. Neither is anywhere close to true.
I have nothing against tools, but for many desired outcomes, I don’t want tools, I want someone who knows the tools and can do the work.
“Property rights are respected, but there is no financial assistance by governments or AGI corporations.”
I have trouble imagining this equilibrium. Property rights are ALREADY eroding, at least in big cities—there’s a whole lot more brazen theft and destruction than ever, almost all without consequence to the perpetrators. Electronic-asset rights are a bit more secure in daily life, but you can’t eat a Robinhood screen, and in a LOT of AGI-available worlds, all the electronic records become suspect and useless pretty quickly. At a lower level, stocks lose their value when companies lose their revenue streams. It’s the masses of humans striving and participating in a given economic system that make it exist at all. If most humans are trading something other than money (time, attention, whatever), then money won’t be used much.
Losing your job to automation is absolutely survivable if it’s rare bad luck, or if new jobs that aren’t instantly (or already) automated can be created. Having a majority (or a large minority, probably) in that situation changes the fundamental assumptions too much to predict any stability of financial assets.
This depends entirely on context and specifics. How did I get such control (and what does “control” even mean, for something agentic)? How do I know it’s the first, and how far ahead of second is it? What can this agent do that my human collaborators or employees can’t?
In the sci-fi version, where it’s super-powerful and able to plan and execute, but only has goals that I somehow verbalize, I think Eliezer’s genie description (from https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes) fits: “There are three kinds of genies: Genies to whom you can safely say ‘I wish for you to do what I should wish for’; genies for which no wish is safe; and genies that aren’t very powerful or intelligent.” Which one is this?
A more interesting framing of a similar question is: for the people working to bring about agentic powerful AI, what goals are you trying to imbue into its agency?
It matters a lot what “it” is. Common targets of “just try it” are mystic or semi-mystic experiences around drugs, meditation, religion, etc. These tend to be hard to communicate because they’re not actually evidence of outside/objective phenomena; they’re evidence of an individual’s reaction to something. I have no clue whether that applies here or not—that’s my primary point: one size does not fit all.
Note that Bob is making an error if he flatly denies Alice’s experiences, rather than acknowledging that the experiences can be real without the underlying model being correct.