interactive system design http://aboutmako.makopool.com
mako yass
Threatening to kill a species for attempting AGI would be unnecessarily high impact. It can just blow up the datacenter when it sees we’re close. Knowing it’s there and losing a datacenter would be deterrent enough. Maybe people would try a few more times in secret, and it would detect it and blow those up too. We wouldn’t know how. If we develop to the point where we start replicating its tech and finding the real blindspots in its sensors, maybe then it would have to start voicing threats and muscling in.
Or maybe at that point we will have matured in the only way it will accept, nearly AGI level ourselves, and we’ll finally be given our AGI license.
Tesla has begun “brake blending”: applying the friction brakes to compensate when less regen is available, for a consistent feel at the expense of efficiency.
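A minimal sketch of how that kind of blending might work (illustrative only; the variable names and the simple split are my assumptions, not Tesla’s actual control logic):

```python
def blended_braking(requested_decel_g: float, regen_capacity_g: float):
    """Split a braking request between regen and friction brakes.

    requested_decel_g: deceleration the driver expects when lifting off the pedal.
    regen_capacity_g: deceleration regen can currently provide (lower when the
        battery is cold or nearly full).

    Without blending, total decel would just be regen_capacity_g, so the feel
    would change with battery state; with blending, friction fills the gap.
    """
    regen_g = min(requested_decel_g, regen_capacity_g)
    friction_g = requested_decel_g - regen_g  # energy lost as heat: the efficiency cost
    return regen_g, friction_g


# Example: driver expects 0.2 g of lift-off regen, but a cold battery only allows 0.12 g.
print(blended_braking(0.2, 0.12))  # -> (0.12, 0.08)
```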
Uh so is the issue resolved then, or?..
This was somewhat surprising—I had only paid for a small sedan (“B”) and the Teslas are fancy.
The reason for this, I’ve heard, is that the maintenance costs of electric cars end up being so much lower. Consumers don’t tend to notice that or take it into account when making their purchasing decisions (yet), but a commercial fleet manager like Hertz will.
But then, why Teslas, rather than a cheaper electric car? I’m not sure; it’s possible their batteries are expected to have a longer lifespan or something, or that some deal was made (Tesla’s margins have historically been quite large, though they were shrunk significantly this week).
I’d expect his “useful guide” claim to be compatible with worlds that’re entirely AGIs? He seems to think they’ll be subject to the same sorts of dynamics as humans, coordination problems and all that. I’m not convinced, but he seems quite confident.
(personally I think some coordination problems and legibility issues will always persist, but they’d be relatively unimportant, and focusing on them won’t tell us much about the overall shape of AGI societies.)
Also note that iirc he only assigns about 10% to the EM scenario happening in general? At least, as of the writing of the book. I get the impression he just thinks about it a lot because it is the scenario that he, a human economist, can think about.
I think early AGI may actually end up being about designing organizations that robustly pursue metrics that their (flawed, unstructured, chaotically evolved) subagents don’t reliably directly care about. Molochean equilibrium fixation and super-agent alignment may turn out to be the same questions.
An analytic account of Depression: When the agent has noticed that strategies that seemed fruitful before have stopped working, and doesn’t have any better strategies in mind.
I imagine you’ll often see this type of depression behavior in algorithmic trading strategies, as soon as they start consistently losing enough money to notice that something must have changed about the trading environment; maybe more sophisticated strategies have found a way to Dutch book them. Those strategies will then be retired, and the trader or their agency will have to search for new ones.
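To make the analogy concrete, here’s a toy sketch of that retirement rule (the window size and loss threshold are arbitrary assumptions of mine, not anything a real trading desk uses):

```python
from collections import deque

class Strategy:
    """Toy wrapper: retire a trading strategy once its recent results suggest
    the environment has changed and it no longer works."""

    def __init__(self, name: str, window: int = 100, loss_threshold: float = -0.5):
        self.name = name
        self.recent_pnl = deque(maxlen=window)  # rolling record of profit/loss
        self.loss_threshold = loss_threshold    # cumulative loss that triggers retirement
        self.retired = False

    def record_trade(self, pnl: float) -> None:
        self.recent_pnl.append(pnl)
        # The "depression" trigger: a strategy that used to be fruitful is now
        # consistently losing, so stop using it and go search for new ones.
        if (len(self.recent_pnl) == self.recent_pnl.maxlen
                and sum(self.recent_pnl) < self.loss_threshold):
            self.retired = True
```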
Chronic depression in humans (designed agencies wouldn’t tend to have such an obvious bug) kinda feels like when the behavior of searching for new strategies has itself been caught within depression’s scope as an invalidated strategy.
Searching anew is what’s supposed to happen (I had a post called “Good Gloom” that drew an analogy to turning all of the lights out in a town so that you’ll be able to see the light pollution of a new town, and set out for it), but if the depression is blocking that too, you stop moving.
I’ve noticed another reason a person might need a binding theory: if they were the sort of agent who takes Rawlsian Veils to their greatest extent, the one who imagines that we are all the same eternal actor playing out every part in a single play, who follows a sort of timeless, placeless cooperative game theory that obligates them to follow a preference utilitarianism extending — not just over the other stakeholders in some coalition, as a transparent pragmatist might — but over every observer that exists.
They’d need a good way of counting and weighting observers, to reflect the extent to which their experiences are being had. That would be their weighting.
A minor example… I’m fairly sure you can make guesses about what kinds of expressions a person makes a lot from a few photos of their face. I’m not sure what else to point at to convey this intuition, but I seem to believe that behaviors in very different contexts leak information that’ll all become apparent with enough data.
I guess I can believe that there are probably a lot of people who don’t output enough content for this to work, maybe even among the users of this forum, but I don’t think it’s a large proportion of them.
That’s a shame. Unreliable notifications are a very strong poison. Undeniability of receipt/solving the Byzantine generals problem is like, fundamental to all coordination problems.
I think this design would be good.
I’m working on the same problem of improving discussion and curation systems with Tasteweb. I focus more on making it easier to extend or revoke invitations with transparency and stronger support for forking/subjectivity. I’m hoping that if you make it easy to form and maintain alternative communities, it’ll become obvious enough that some of them are much more good faith/respectful/sincerely interested in what others are saying, and that would also pretty much solve deduplication.
I think in reality, it’s too much labor, and it would only work for subjects that people really really care about, but those also happen to be the most important applications to build for, so.

I like the focus on relevance. Relevance is all you need. If everyone just voted on the basis of relevance, reddit would be a lot better (but of course, the voters are totally unaccountable, so there’s no way to get them to).
I don’t think graph visualizations are really useful. The data should be graph-shaped, sure, but it’s super rare that you want to see the entire graph or browse through the data that way. A tree is just a clean layout for the results of a query from a particular origin node in a graph. For directed graphs, I’d recommend a tree UI where things can be mounted to the tree at multiple points, and where it’s communicated to the user if they’ve seen a comment recently before, with, eg, red backlinks.
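A rough sketch of what I mean (a hypothetical data model; the “seen before” tracking here is just per render pass):

```python
def render_as_tree(origin: str, children: dict[str, list[str]],
                   depth: int = 0, seen: set[str] | None = None) -> None:
    """Lay out a directed graph of comments as a tree rooted at `origin`.

    A node reachable by multiple paths gets mounted at each of them, but after
    the first time it is printed it only appears as a back-reference (the UI
    equivalent of the red backlink) rather than expanding its subtree again.
    """
    if seen is None:
        seen = set()
    marker = " (seen above)" if origin in seen else ""
    print("  " * depth + origin + marker)
    if origin in seen:
        return  # don't re-expand a subtree the reader has already been shown
    seen.add(origin)
    for child in children.get(origin, []):
        render_as_tree(child, children, depth + 1, seen)


# Example: comment "d" is relevant to both "b" and "c", so it's mounted twice.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
render_as_tree("a", graph)
```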
What sorts of things, that you would want preserved, or that the future would find interesting, would not be captured by that?
[Question] Will chat logs and other records of our lives be maintained indefinitely by the advertising industry?
I agree that there doesn’t seem to be a theory, and there are many things about the problem that make reaching any level of certainty about it impossible (the we-can-only-have-one-sample thing). I do not agree that there’s a principled argument for giving up looking for a coherent theory.
I suspect it’s going to turn out to be like it was with priors about the way the world is: lacking information, we just have to fall back on Solomonoff induction. It works well enough, and it’s all we have, and it’s better than nothing.
So… oh… we can define priors about our location in terms of the complexity of a description of that location. This feels like most of the solution, but I can’t tell; there are gaps left, and I can’t tell how difficult it will be to complete the bridges.
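Roughly the kind of weighting I mean, written out as a simplicity prior (this is my gloss, not a worked-out theory):

```latex
% Prior weight on being the observer at location x, discounted by how complex
% it is to pick x out, relative to some fixed description language U:
P(\text{I am at } x) \;\propto\; 2^{-K_U(x)}
% where K_U(x) is the length of the shortest program in U that specifies x.
```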
A fun thing about example 1. is that we can totally imagine an AF System that could drag a goat off a cliff and eat it (put it in a bioreactor which it uses to charge its battery), it’s just that no one would make that, because it wouldn’t make sense. Artificial systems use ‘cheats’ like solar power or hydrocarbons because the cheats are better. There may never be an era or a use case where it makes sense to ‘stop cheating’.
A weird but important example is that you might not ever see certain (sub-pivotal) demonstrations of strength from most AGI researcher institutions, not because they couldn’t make those things, but because doing so would cause them to be nationalized as a defense project.
Ack. Despite the fact that we’ve been having the AI boxing/infohazards conversation for like a decade I still don’t feel like I have a robust sense of how to decide whether a source is going to feed me or hack me. The criterion I’ve been operating on is like, “if it’s too much smarter than me, assume it can get me to do things that aren’t in my own interest”, but most egregores/epistemic networks, which I’m completely reliant upon, are much smarter than me, so that can’t be right.
This depends on how fast they’re spreading physically. If spread rate is close to c, I don’t think that’s the case; I think it’s more likely that our first contact will come from a civ that hasn’t received contact from any other civs yet (and SETI attacks would rarely land: most civs who hear them would either be too primitive or too developed to be vulnerable to them before their senders arrive physically).
Additionally, I don’t think a viral SETI attack would be less destructive than what’s being described.
Over time, the concept of Ra settled in my head as… the spirit of collective narcissism, where we must recognize narcissism as a delusional striving towards the impossible social security of being completely beyond criticism, of being flawless, perfect, unimprovable, of pursuing Good Optics with such abandon as to mostly lose sight of whatever it was you were running from.
It leads to not being able to admit to most of the org’s imperfections even internally; and when they do admit to an imperfection internally, doing so resigns them to it, and they submit to it.
I don’t like to define it as the celebration of vagueness; in my definition that’s just an entailment, something narcissism tends to do in order to hide.
I really wish that the post had been written in a way that let me figure out it wasn’t for me sooner...
I think it would have saved a lot of time if the paragraph in bold had been at the top.
Man if it’s possible to rearrange hot matter into truly perpetual reversible simulations, that wouldn’t just explain weird aliens, it would also explain the anthropic binding mystery, it would redeem the Teeming Consortium story.