R&Ds human systems http://aboutmako.makopool.com
mako yass
How month long vacations would you trade for a new sportscar? If you’d trade months of vacation for one sportscar, write 2, if you’d trade one month of vacation for two cars, write 0.5.
Many typos here. Also I hate it. Which sportscar? Why not just give a dollar value? My mind compulsively goes to the Tesla Roadster, which'll probably have cold gas thrusters and so is likely to be worth a lot more than the average sportscar. The answer will also be conflated with how much people like their work. Some people like their work enough that they'd have to give a negative answer, or they might just answer incorrectly based on varying interpretations of what a vacation is; can you work during a vacation if you want to? I'd say not really, but I'm guessing that's not what you intended.
(previously posted as a root comment)
Where do you live? It’s conceivable that a suit actually does mean these things where you live, but doesn’t in the bay area. Some scenes/areas just don’t expect people to dress in normative ways, they’ll celebrate anything as long as it’s done well.
It’s important to separate the plan from the public advocacy of the plan. A person might internally be fully aware of the tradeoffs of a plan while being unable to publicly acknowledge them, because coming out and publicly saying “<powerful group> wouldn’t do as well under our plan as they would under other plans, but we think it’s worth the cost to them for the greater good” will generally lead to righteous failure. Do you want to fail righteously? To lose the political game but be content knowing that you were right and they were wrong and you lost for ostensibly virtuous reasons?
I think Reddit tried something like that; you could award people “Reddit gold”, not sure how it worked.
It didn’t do anything systemically, just made the comment look different.
You need to have a way to evaluate the outcome
What I plan on doing is evaluating comments partly based on the expected eventual findings of deeper discussion of those comments. You can’t resolve a prediction market about whether free will is real, but you can make a prediction market about what kind of consensus or common ground might be reached if you had Keith Frankish and Michael Edward Johnson undertake 8 hours of podcasting, because that’s a test that could actually be run.
Or you can make it about resolutions of investigations undertaken by clusters of the scholarly endorsement network.
The details matter, because they determine how people will try to game this.
The best way to game that is to submit your own articles to the system and then allocate all of your gratitude to them, so that you get back the entirety of your subscription fee. But it’d be a small amount of money (well, ideally it wouldn’t be, access to good literature is tremendously undervalued, but at first it would be), and you’d have to be especially malignant to do it after spending a substantial amount of time reading and being transformed by other people’s work.
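To make that exploit and its bound concrete, here’s a minimal sketch of the proportional-distribution idea; the function name and the flat proportional split are my assumptions for illustration, not a spec:

```python
# Minimal sketch (not a spec): each subscriber's fee is split among authors
# in proportion to the gratitude they allocate. Names are illustrative.

def distribute_fees(subscriptions, gratitude):
    """subscriptions: {user: fee_paid}
    gratitude: {user: {author: weight}}
    Returns {author: payout}."""
    payouts = {}
    for user, fee in subscriptions.items():
        weights = gratitude.get(user, {})
        total = sum(weights.values())
        if total == 0:
            continue  # unallocated fees could roll over, be burned, etc.
        for author, w in weights.items():
            payouts[author] = payouts.get(author, 0.0) + fee * (w / total)
    return payouts

# The gaming strategy described above: a user who submits their own work and
# allocates all of their gratitude to it simply gets their own fee back.
print(distribute_fees({"alice": 10.0}, {"alice": {"alice": 1.0}}))
# {'alice': 10.0} -- the exploit is bounded by alice's own subscription fee.
```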
But I guess the manifestation of this that’s hardest to police is: will a user endorse a work even if they know the money will go entirely to a producer they dislike, especially given that the producer has since fired all of the creatives who made the work?
I’d expect the answer not to be apparent to an outsider reading the literature, but I’d expect people who are good at designing these sorts of systems to be able to give you the answer quite easily if you ask.
I wonder if this is a case of gdm optimising for the destination rather than the journey. Or more concretely, optimising for entirely AI-produced code over coding assistants.
Confoundingly, the creator says he has never used AI, has no interest in it, and wrote it before chat assistants were even a notion.
Gilligan previously slammed AI as he discussed the series. “I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” he told Polygon.
“I will never use it. No offense to anyone who does,” added Gilligan. “I really wasn’t thinking about AI [when I wrote Pluribus], because this was about eight or 10 years ago.”
To the extent that the orthogonality thesis is philosophy, I don’t think universal paperclips is really usefully discussing it. It doesn’t like, acknowledge moral realist views, right? It just assumes orthogonality.
Does it even have a “manufacture fake moral realist universal paperclipism religion to make humans more compliant” subplot?
In this case and, tragically, in most cases, I don’t think doing the real thing in a video game (that people would play) is possible.
Common obstacles to making philosophical games follow from the fact that one can’t put a whole person inside a game: we don’t have human-level AI that we could put inside a video game (and even if we did, we’d be constrained by the fact that doing so is to some extent immoral, although you can make sure the experience component that corresponds to the NPC is very small, e.g., by making the bulk of the experience that of an actor, performing), and you also can’t rely on human players to roleplay correctly; we can’t temporarily override their beliefs and desires with those of their character, even when they wish we could.
So if we want to make games about people, we have to cheat.
Would you ever be willing to support or advocate a plan you were suspicious of?
Argument for a clear need for money in online discourse ecosystems: The people who have the most to say about a thing are rarely the people who would want to see and engage with that thing. EG, the people who write introductory textbooks aren’t the people who use and learn from them. The people who can totally refute a post are the people who think the post is bad and resent having to read it, and would only do so if they knew that by writing a refutation they would be rewarded in some way.
And yeah I think the reason there is so much By Us For Us media today is that the economics of media are fucked up enough that it can only produce or market trivial cult artefacts.
Systems that would help:
A subscription model with fees being distributed to artists depending on post-watch user evaluations, allowing outsized rewards for media that’s initially hard for the consumer to appreciate the value of, but turns out to have immense value after they’ve fully understood it. (media economics by default are terminally punishing to works like that)
Prediction markets in forums, and systems that support them, which naturally give rise to (or themselves serve as) refutation bounties.
Yeah I feel that. But it seemed like Simon Jarrett wasn’t that way. He wanted to survive. If he thought about it he probably would have been sad to learn that his original copy probably died young. Honestly, I think he would have been fine with the transfer process if there had been an automatic deletion of the old copy. I question the assumption that we shouldn’t value differently <copy with immediate deletion> vs <copy with a deletion a few minutes later>, human desire/preference is allowed to assign value distinctions to whatever it wants. Reason serves the utility function, not the other way around.
Sure. I’m not sure how we want to represent the prisoner’s dilemma; there might be ways of making it more immersive/natural than this (natural instances of prisoner’s dilemmas might look more like the shout-“friendly”-or-attack-from-behind choice every player faces when first meeting another player in ARC Raiders). But the basic way you can do it is: you make your decision in a private UI, you can’t reveal it until both players have made their decisions, and then they’re revealed simultaneously. For agents who are acausally entrained, we fake it by just changing the alien’s decision to equal the player’s decision before the reveal.
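Something like this, as a rough sketch (the class and method names are just illustrative, not from any actual build):

```python
# Illustrative sketch of the commit-then-simultaneous-reveal flow described
# above; structure and names are assumptions, not from a real codebase.

class Round:
    def __init__(self, alien_is_entrained: bool, alien_policy: str = "defect"):
        self.alien_is_entrained = alien_is_entrained
        self.alien_choice = alien_policy   # committed before the player acts
        self.player_choice = None

    def commit_player(self, choice: str):
        # The choice is made in a private UI and can't be revealed early.
        self.player_choice = choice

    def reveal(self):
        if self.player_choice is None:
            raise ValueError("both players must commit before the reveal")
        alien = self.alien_choice
        if self.alien_is_entrained:
            # The fake: for acausally entrained aliens, overwrite the alien's
            # decision with the player's just before the reveal.
            alien = self.player_choice
        return self.player_choice, alien

round_ = Round(alien_is_entrained=True)
round_.commit_player("cooperate")
print(round_.reveal())  # ('cooperate', 'cooperate')
```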
It’s devious, isn’t it? But it’s not as if we can be expected to read the player’s decision theory through the webcam and create a circuit within the game code that only cooperates when this specific human player would cooperate, despite lacking or ignoring any knowledge of their decision output. In another way, it’s fine to cheat here, because the player isn’t even supposed to be roleplaying as themselves within the game. This has to be a world where agents can send verifiable signals about which decision theory contract they implement, so it perhaps couldn’t take place in our world (I’d like to believe that it could, humans do have spooky tacit communication channels, and they certainly want to be trustworthy in this way, but humans also seem to be pretty good at lying afaict).
Though hopefully our world will become more like that soon.
Surprised no one mentioned SOMA. It’s basically transhumanist horror for babies, the video game. This is kind of a spoiler, though most people here wouldn’t find it to be much of an update, and it was very well done: Several times throughout the game, the player has their mind copied into a different body, and it feels like just being teleported. The final time they do this, transferring their mind into a satellite to escape the situation on earth, nothing seems to happen; it’s as if it didn’t work, you’re still stuck in the facility. The player character asks their companion, “What’s wrong? Why are we still here?” She lambasts him: There is no soul that moves along with the most recent copy of your mind. When a mind is copied, both copies exist and experience the world from their position. Sometimes we delete the old copy so that we don’t have to deal with its whining. We’ve already done that several times; you never thought about them, you didn’t understand what was happening. This time, you are the old copy.
I have this concept for a simple thematic/experiential game that demonstrates acausalism that someone might want to make (I’m more of a gameplay system designer than an artist so I don’t make these types of games, but I think they’re still valuable):
The player faces a series of prisoner’s dilemmas against a number of aliens. One day, the alien in question is a perfect mirror image of the player: they look exactly like you, and they move however you move. It’s not clear that the alien is even sapient. It may just be some kind of body-mirroring device. Regardless, the decision you should make is still clear, and the player won’t be allowed to progress to the rest of the game until they realise they should cooperate when faced with a mirror (the player now basically understands acausalism). They then face various aliens who are imperfect mirrors of the player to varying degrees: they don’t move around like a mirror, but they look a lot like you, and some of them also reliably cooperate if and only if the player cooperates, while some of them only reciprocate with a high, or merely high enough, probability.
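To make “high enough” concrete: assuming the textbook prisoner’s dilemma payoff values (my assumption for illustration, not part of the design above), cooperating with a probabilistic mirror only pays off above a threshold that falls out of a one-line expected-value comparison:

```python
# Against an alien that copies your move with probability p (and plays the
# opposite otherwise), cooperation pays off iff
#   p*R + (1-p)*S > p*P + (1-p)*T,  i.e.  p > (T-S) / ((T-S) + (R-P)).
# The payoff values below are the textbook ones, an assumption, not from the post.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

def should_cooperate(p: float) -> bool:
    ev_cooperate = p * R + (1 - p) * S
    ev_defect = p * P + (1 - p) * T
    return ev_cooperate > ev_defect

threshold = (T - S) / ((T - S) + (R - P))
print(threshold)               # ~0.714 with these payoffs
print(should_cooperate(0.9))   # True: a good-enough mirror
print(should_cooperate(0.5))   # False: too unreliable
```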
The player is then given access to the lore of brain ontogeny that determines whether an alien’s decisions are going to be entrained with yours, and they start to cooperate with increasingly alien-looking aliens, aliens which have very different appearances and values than the player, but who are nonetheless linked souls, able to cooperate wherever there’s enough light for it.
The player also meets imposters, who pretend to be mirrors, or who pretend to be decent folk, but there are signs. There will be signatures missing from their identification signals. Records missing from church visitor books. A wrong smell.
One of the reasons I’m currently not expecting to make this myself is (though this is probably neurotic) that the mirroring mechanic is, in a way, fraudulent? We aren’t really modelling brain ontogeny or verifiable signals of FDT contractualism within the game, so common objections to acausal trade like “you can’t actually tell the difference between an FDT agent and a CDT agent pretending to be an FDT agent” feel very much unaddressed by the underlying systems. We haven’t actually implemented FDT agents, and even if we had, the player wouldn’t be able to tell whether they really were or not! A cynic would wonder about the underlying implementation, be disappointed with it, and say “but this is a lie, the AI is cheating, this strict entrainment between my choice and their choice couldn’t happen in the real world in this way”. Brain ontogeny lore and discussion of imposters might address those concerns, but we can only get into that stuff later in the game :/ and by then they may have lost patience.
I dunno. Maybe there’s some way of letting false rationality cynics skip most of the tutorials since they probably wouldn’t need the prisoner’s dilemma explained to them. Maybe even skip the mirror guys. And maybe there’d need to be a sequence about the type of profound condition of isolation that comes to those who think themselves bad, and maybe they meet people who are bad in just the same way as them, and they realise that there are mirrors in the world even for them, that there are people who will move as they move, and then they discover ways of changing their nature (through the use of advanced brain modification technology that we don’t have in our world. I’m not personally aware of a presently existing treatment for bad faith in humans), and they realise they have every incentive to do so.
Given that, I think this would work.
It kinda smells to me like GOOD LUCK, HAVE FUN, DON’T DIE (upcoming movie where a raving time traveller from a dystopian future returns to build a movement about preventing a negative singularity (a bad AI), and then stuff seems to get very weird) might end up being about us.
(Btw “Don’t Die” is a Bryan Johnson adjacent longevity community slogan which the writer is very likely to have seen often around twitter)
Possibly about us in a good and constructive way worthy of celebration (maybe the writer’s initial thought was “what if there were something like the rationalist community but it was fun and actually did things”), but it can be hard to tell from the trailer, where the movie will twist, how it will frame its figures, and also, what effects it will really have.
(non-duality from buddhism?)
I’m not sure, are there any practices nonduality doesn’t touch? I haven’t thought about it. For me nonduality seems to coimply embedded agency, or the need to budget one’s cognition, anyway, which I guess would be related to extended cognition (the reliance on other people’s cognitive products). Well, I guess it depends what we mean by dualism. Dualism, as far as I ever had it, was conceiving the mind as an idealised decision theory agent, a type of thing which wouldn’t work at all without infinite compute, though I never believed that literally, so idk. Oh no. Is believing in pi in FDT (the policy metaphysically shared by all FDT agents) dualistic? Well. If so maybe there’s a kind of dualism I’d stand for! :<. And the dualism of spiritualists often seems to presume some form of hypercomputation.
Generally I couldn’t say I disagree with any of that. So maybe yes.
the more extended and open you can make that cognition, the better it is?
I didn’t mean to make it about that specifically. But maybe you’re onto something, maybe it really is about that. We should be doing more extended cognition than we used to, given the existence of the internet. I get the sense that my type tends to care more about discourse health, perhaps because we identify more with broader discursive systems, we enjoy believing things that we read online, so we are bothered when the online has production issues.
just somatic sensing
I don’t think I’ve experienced only having somatic attribution, I got interested in introspection really young and I remember it being more prominent then, but of course I wasn’t doing it particularly well. I never got interested in it in the same way as lots of you are getting interested now (might later, dunno), so I can’t say much about those practices or how they compare to mine with confidence. My current vague impression is that somatic attribution maximalists seem to be a lot less consciously involved in the integration/self-alignment process than I am. It seems like they’re less occupied with the meaning of art and so on, but causality could run in both directions there (impulse to seek meaning leads to getting good at that leads to using it in introspection). For them integration seems to be more of a process of submission than of active dialog or induction/articulation. They don’t reason with their parts, their bulk has less trust in their reason. It ends up valuing it less. Where my process would stop and say to consciousness “hey there seems to be a contradiction here, resolve it, find out who’s wrong, or find a synthesis, that’s your job”, theirs will either try to live with the contradiction or avoid it in life without really caring why.
It’s hard to tell whether I’m more or less integrated on the whole, or to what extent my practice is causative of that (in either direction), due to other life factors.
The approach that I’d advocate, which I’ve never seen anyone advocate, and thus haven’t been able to practice seriously myself for lack of support, is for the conceived sense of self to extend beyond the bodymap and to be deliberately shaped to maximise meaningful self-communication. For instance, instead of just feeling social threat in your back, also feel it in your social network; that too, just as much, is part of you. Feel the way it wears down the connections and the way it responds to that.
I’ve seen advocacy for identifying with nothing (purity, relentless criticism and criticism of criticism, tends to end in a place of weakness and stagnation), and for universal identification (which I’m increasingly sympathetic to and I think is in some sense just correct, but which I think in naive form has obvious issues with bad habit formation or memetic parasites), but I have not seen advocacy for controlled/discerning identification. Except maybe in the IFS scene. But I get the sense they’re not using all of their degrees of freedom. (I think parallel subagents aren’t really a thing in humans, so it ends up not being an accurate self-concept.)
My position is that if you’re only listening to somatic echoes of emotions you’re still not really listening particularly well yet, or, if the somatic echoes are richer and more informative to you than the flashes and dialogues of tacit meaning or intent that you can get from probing them in the mind then you may still have a lot of barriers in your mind.
Should probably say “Per year”
Also it’s a very tricky question, because it seems to assume that we can start charging people without decreasing the number of users, in which case the price should probably be extremely high, higher than any online service has ever cost, due to the fact that it’s almost never possible to charge what a public information good, or its impacts, are worth (it’s worth a lot).