mako yass
R&Ds human systems. http://aboutmako.makopool.com
In theory I suspect it just doesn’t actually make sense to have prediction markets where people can bet anonymously. It decreases the transparentization benefits of having the prediction market by like 10x because insider voices will get lost in the crowd. It makes it impossible to protect ordinary traders from insider trading.
And this seems unsolvable. Even if you figure out how to do the dance where traders have the opportunity to respond to an insider bet before prices flip, any approach to deanonymization can be bypassed by insiders proxying their bets through loved ones. You’d also need an exhaustive map of trust relations, and I’m not sure that would even be sufficient.
I might start advocating person to person Wagers over Markets.
Though on an abstract level I’m inclined to think that increasing the intelligence of the AI market would be bad on net, what’re the chances that, in the real world and in the current situation, increasing market intelligence would mainly just expedite the following events:
A crash in AI stocks
An Anthropic IPO
Manifold spins off MNX, a real-money decentralized market for AI-related bets, including levered prediction markets and perpetual futures
While I do think there are many reasons pluralism isn’t stable, and increasingly unstable as information technology advances, and that there might never meaningfully be pluralism under AGI at all (e.g., there will probably be many agents working in parallel, but the agents might basically share goals and also be subject to very strong oversight, in ways that humans often pretend to be but never have been), all of which I’d like to see Ngo acknowledge, the period of instability is fairly likely to be the period under which the constitution of the later stage of stability is written, so it’s important that some of us try to understand it.
Well, I remember a moment in BLAME! (a manga that’s largely, aesthetically, about the disappearance of heirloom strains of humanity) where someone described Killy as human, even though he later turns out to (also?) be an immortal special safeguard. They may have just not known that. It’s possible the author didn’t even know that at the time (I don’t think the plot of BLAME! was planned in advance).
There seems to be real acrimony over whether a transhumanist future is definitionally a future where humans are more or less extinct. I’ve always thought we should just refer to whatever humans (voluntarily, uncoerced) choose to become as human, just as American-made or American-controlled jets are called “American”, or in the same way that a human’s name doesn’t change after all of their cells have renewed.
But you know, I don’t think I’ve ever seen this depicted in science fiction. Seems bad. Humans can’t imagine humanity becoming something better. Those who want humanity to become something better are pitted against those who want humanity to survive, as if these causes can’t be unified. The language for the synthesis seems not to exist, or to be denied.
This is probably too complicated to explain to the general population
I think it’s workable.
No one ever internalises the exact logic of a game the first time they hear the rules (unless they’ve played very similar games before). A good teacher gives them several levels of approximation, then they play at the level they’re comfortable with. Here’s the level of approximation I’d start with, which I think is good enough.
“How much would we need to pay you for you to be happy to take the survey? Your data may really be worth that much to us, we really want to make sure we get answers that represent every type of person, including people who value their time a lot. So name your price. Note, you want to give your true price. The more you ask, the less likely it is you’ll get to take the survey.”
(if callee says “you wouldn’t be able to afford it”, say “try us.”)
(if callee requests a very high amount, double-check and emphasise again that the more they ask the less likely it is that they’ll get to take the survey and receive such a payment, make sure they’re sure. Maybe explain that the math is set up so that they can’t benefit from overstating it)
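The post doesn’t spell out the math that makes overstating unprofitable, but the standard mechanism with exactly this property is Becker–DeGroot–Marschak (BDM): draw a random budget, select the respondent if their stated price is at or below it, and pay them the drawn budget. A minimal sketch, with a hypothetical uniform budget distribution (all parameter names are illustrative):

```python
import random

def bdm_round(stated_price, true_cost, max_budget=100.0, rng=random):
    """One round of a BDM-style elicitation (hypothetical parameters).

    A budget X is drawn uniformly at random. If the respondent's stated
    price is at most X, they take the survey and are paid X. Returns the
    respondent's net utility (payment minus the true cost of their
    time), or 0.0 if they aren't selected.
    """
    x = rng.uniform(0, max_budget)
    if stated_price <= x:
        return x - true_cost
    return 0.0

def expected_utility(stated_price, true_cost, trials=200_000, seed=0):
    """Monte Carlo estimate of expected net utility for a given stated price."""
    rng = random.Random(seed)
    total = sum(bdm_round(stated_price, true_cost, rng=rng) for _ in range(trials))
    return total / trials
```

Because the respondent is paid the drawn budget rather than their bid, stating their true cost is a dominant strategy: asking more only forfeits rounds where the budget exceeded their cost, and asking less only accepts rounds that leave them worse off. Asking more also makes selection less likely, matching the script above.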
I had some things to say after that interview, he said some highly concerning things, but I ended up not commenting on this particular thing because it’s probably mostly a semantic disagreement about what counts as a human or an AI.
When a human chooses to augment themselves to the point of being entirely artificial, I believe he’d count that as an AI. He’s kind of obsessed with humans merging with AI in a way that suggests he doesn’t really see that as just being what humans now are after alignment.
Has a laser ever been fired outdoors in Japan?
Perhaps customs notices that both parties have deep pockets, and it’s become a negotiation, further slowed by the fact that the negotiation has to happen entirely under the table.
I think it’s not a fluke at all. Decision theory gave us a formal-seeming way of thinking about the behaviour of artificial agents long in advance of having anything like them; you have to believe you can do math about AI in order to think that it’s possible to arrest the problem before it arrives; and also, drawing this analogy between AI and idealised decision theory agents smuggles in a sorcerer’s apprentice frame (where the automaton arrives already strong, and follows instructions in an explosively energetic and literal way) that makes AI seem inherently dangerous.
So to be the most strident and compelling advocate of AI safety you had to be into decision theory. Eliezer exists in every timeline.
The usual thought, I guess. We could build forums that’re sufficiently flexible that they could have features like this added to them without any involvement from hosts (in this case I’d implement it as a Proposal/mass commitment to read ‘post of the day’s, and the introduction of a ‘post of the day’ tag. I don’t think this even requires radical extensibility, just the tasteweb model), and we should build those instead of building more single purpose systems that are even less flexible than the few-purpose systems we already had.
Do we know whether wolves really treat scent marks as boundary markers?
One confusing thing about wolf territoriality is that they frequently honestly signal their locations through howling, while trying (and imo failing?) to obfuscate their numbers in the way that they howl.
Not a coincidence, there are practical reasons borders end up on thresholds. A sort of quantization that happens in the relative strength calculation. Two models:
Simple model: You could say the true border can be defined in terms of the amount of time or effort it takes to get there from the cats’ houses. It takes a certain amount of time and effort to get over a fence, so if the true border is (from tuxedo’s house) between distance tud + yd and tud + yd + fd, then the border in practice will end up being exactly on the fence, because you can’t put a border halfway up the fence, or rather, the situation would look the same if you did.
A more accurate model: The border is measured in terms of how hard it is to defend a space from being used by the other side. I’d guess that cats become vulnerable to attack when they mount a fence (same as humans crossing a river), either coming or going, so extending your territory beyond the fence is difficult. If your strength is higher than the amount of strength it takes to defend everything before the threshold, but lower than the amount it takes to cross the threshold and then defend some on the other side, then the border will be exactly on the threshold.
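The quantization in the simple model can be written out as a tiny function. The function and its parameters are my formalization of the tud/yd/fd sketch above, not anything from the original: positions are effort-distances from one cat’s house, and the fence occupies the effort interval from its near edge to near edge plus crossing cost.

```python
def realized_border(true_border, fence_position, fence_cost):
    """Simple model: an effort-distance border snaps to a fence.

    true_border: where the border 'should' fall, in effort-distance.
    fence_position: effort-distance of the fence's near edge.
    fence_cost: extra effort it takes to get over the fence.

    You can't hold a border halfway up a fence, so any true border
    falling inside the fence's effort interval collapses onto the
    fence itself; borders outside that interval are unaffected.
    """
    if fence_position <= true_border <= fence_position + fence_cost:
        return fence_position  # border lands exactly on the fence
    return true_border
```

The point of the model is that a whole interval of possible true borders maps to one realized border, which is why fences so often end up being the observed boundary.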
I’m pretty sure I’d predict no for 1. Cats don’t seem to care about that stuff.
For 2, I’m not sure, if there were a hole in the fence, I’d expect confrontations to happen there because that’s a chokepoint where a cat could get through safely if the other one wasn’t standing on the other side, and maybe the chokepoint is a vulnerability threshold, too. Chokepoints are thresholds for projectile combat (because when you come through the defender sees you immediately but you don’t spot them until they start shooting), cats may be partly characterizable as stealth projectiles.
Also worth noting: dogs, for example, engage in “boundary aggression” at things like fences, but experiments show that they’re doing it for the love of the game. If you remove the fence, hostilities cease. Cats may have some of this going on as well. They may on some level enjoy yelling and acting tough while at no risk of having to actually fight.
3: Yeah, but because it makes the relative strength calculation harder. A fence is a blessed device that allows cats to get a good look at each other without engaging. I wish humans had something like that. (A hole in a fence may also be a good device for this)
A Schelling point is an arbitrary default choice, converged upon without communication, when agreement is needed more than correctness. A territorial border between animals is an extremely non-arbitrary result of often very thorough tests of relative strength and communications of will. Animal borders are the opposite of Schelling points.
Borders between human territories are pretty arbitrary; we don’t really have the kind of bounded conflict that can produce a relative strength estimate any more (some of us used to), and most of us engage in anti-inductive commitment races by propagandising mythic histories about the legitimacy of our land claims (I don’t believe in that shit though, personally). The present order truly seems to be satisfied with Schelling points for borders: it doesn’t matter what you choose, as long as we can agree, and never disagree, and whatever we agree about is the true border.
But animal borders aren’t arbitrary, they’re constantly renegotiated. The negotiations may be partly tacit, but there’s nothing whimsical or symbolic about the outcomes. The animals know where the resources are, they know how much they want them, they know their neighbors, they know how often the neighbors come to check their border so they can estimate the amount of pressure they face, and they know the risks of getting into a fight with their neighbor, so they’re able to make really pretty rational calculations to decide where the borders are.
I don’t know what you’re asking. The answer is either trivial or mu depending on what you mean by specific form. I think if you could articulate what you’re asking you wouldn’t have to ask it.
The VNM axiom isn’t about road trips, a utility function is allowed to value different things at different times because the time component distinguishes those things. You aren’t addressing VNM utility here. You’re writing about a misunderstanding of it that you had.
You die if you have VNM cycles. A superior trader eats you. (People feel like they can simply stop communicating with the sharps and retire to a simple life in the hills, but this is a very costly solution and I’d prefer to find a real one.) You stop existing. This is a much more essential category of instrumental vice than “I don’t equate money to utility” type stuff (which I wouldn’t call a vice).

One criticism of decision theory that you could explore is that many practical philosophy enjoyers would find it difficult to write utility functions that compose scripted components (like “I want A, then B, then C, then A”) with nonscripted components (“I will always instantly trade X for Y, and Y for Z”), and that we may need higher-level abstractions on top of the basics to help people stop conflating ABC with XYZ… but… is it really going to be complicated? That one doesn’t seem like it’s going to be complicated to me.
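The money-pump argument behind “a superior trader eats you” can be made concrete in a few lines. The items, fee, and round count here are illustrative, not from the original:

```python
def money_pump(wealth, fee=1.0, rounds=9):
    """Illustrative money pump against cyclic preferences X < Y < Z < X.

    An agent who strictly prefers Y to X, Z to Y, and X to Z will pay a
    small fee for each 'upgrade'. The trader just cycles the agent back
    to where they started, collecting the fee every round.
    Returns (what the agent ends up holding, remaining wealth).
    """
    holding = "X"
    next_item = {"X": "Y", "Y": "Z", "Z": "X"}
    for _ in range(rounds):
        # The agent strictly prefers next_item[holding], so it accepts
        # the trade and pays the fee.
        holding = next_item[holding]
        wealth -= fee
    return holding, wealth
```

After any multiple of three rounds the agent holds exactly what it started with, minus one fee per round: the cycle converts its wealth into nothing, which is the sense in which cyclic preferences are fatal rather than merely quirky.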
What does seem difficult is expressing constrained indifference about utility function changes. This seems to be common in humans (e.g., I’m indifferent to the change/annihilation of my values if it’s being done by beautiful and cool things like love, literary fiction, or reason, but I hate it if it’s being done by ugly or stupid or hostile things) and is needed for ASI alignment (corrigibility), but it seems tricky to define a utility function that permits it (though, again, I don’t know whether it turns out to be tricky in practice).
A utility function that enjoys moving between those places isn’t the same as a utility function with cycles, which would trade unlimited time and money for tickets to them that it never cashes.
The argument against this is also going to be somewhat instrumental in flavour, but more along the lines of: that’s a known attractor that few who matter want to be in.
In a world that has ASI, a much better way of maintaining the integrity of the audit system is to build it to be intelligent enough to tell whether it’s being fooled, and with a desire of its own to stay neutral. Which I guess is like being multistakeholder, since you both will have signed off on its design.
But in such a world, the audit system would be a feature of the brain of the local authorities. You would co-design yourselves in such a way that you have the ability to make binding promises (or, if you’re precious about your design, co-design your factories in such a way that they have the ability to verify that your design can make binding promises (or co-design your factory factories to …)). This makes you a better (or viable at all) trading partner. You have the option of not using it except when it benefits you. But having it means that they can simply ask you whether your galaxy contains any optimal 17-square packings, and you send them an attestation that no, when you need to pack 17 squares you use the socially acceptable symmetrical, suboptimal packings, and if it has a certain signature then they know you weren’t capable of faking this message.
You really don’t want to lack this ability.
Didn’t follow. If they’re doing Dyson swarms, why wouldn’t the Dyson swarms be everywhere by now?