Parfit’s Hitchhiker and transparent Newcomb: So is the interest in UDT motivated by the desire for a rigorous theory that explains human moral intuitions? Like, it’s not enough that feelings of reciprocity must have conveyed a selective advantage at the population level; we need to know whether/how they are also net beneficial to the individuals involved?
bokov
What should one do in a Newcomb’s paradox situation where Omega is just a regular dude who thinks they can predict what you will choose by analysing data from thousands of experiments on, e.g., Mechanical Turk?
Do UDT and CDT differ in this case? If they differ, does it depend on how inaccurate Omega’s predictions are and in what direction they are biased?
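For what it’s worth, the accuracy question can be made concrete with a quick expected-value calculation. This is a minimal sketch assuming the standard $1,000,000 / $1,000 Newcomb payoffs; the dollar amounts and function names are my own illustration, not anything from the thread:

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor of
# accuracy p. Payoff numbers are the usual textbook stakes, chosen here
# purely for illustration.

def one_box_ev(p: float) -> float:
    # The predictor fills the opaque box with $1M iff it predicts one-boxing.
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # A two-boxer always gets the $1,000, plus $1M when mispredicted.
    return (1 - p) * 1_000_000 + 1_000

# One-boxing wins once p * 1e6 > (1 - p) * 1e6 + 1e3, i.e. whenever
# p > 0.5005 -- so even a weak statistical predictor is enough.
for p in (0.5, 0.6, 0.9):
    print(p, one_box_ev(p), two_box_ev(p), one_box_ev(p) > two_box_ev(p))
```

So on the pure arithmetic, the predictor only needs to beat a coin flip by a sliver; the interesting disagreement between decision theories is about whether that arithmetic is the right way to frame the choice at all.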
Thank you for answering.
I’m excluding simulations by construction.
Amnesia: So does UDT, roughly speaking, direct you to weigh your decisions based on your guesstimate of what decision-relevant facts apply in that scenario? And then choose among available options randomly, but weighted by how likely each option is to be optimal in whatever scenario you have actually found yourself in?
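The weighting scheme described above can be sketched as follows. All scenario names, probabilities, and payoffs here are invented purely for illustration of the procedure being asked about, not a statement of what UDT actually prescribes:

```python
# Sketch: under amnesia you don't know which scenario you're in, so weight
# each action by the total probability of the scenarios in which it is the
# optimal choice, then sample an action with those weights.
import random

scenarios = {          # P(scenario), given what you can currently observe
    "woken-once": 0.4,
    "woken-twice": 0.6,
}
payoff = {             # payoff[scenario][action], illustrative numbers
    "woken-once":  {"stay": 10, "switch": 0},
    "woken-twice": {"stay": 2,  "switch": 8},
}

# Probability each action is optimal = total weight of scenarios it wins in.
weights = {a: 0.0 for a in ("stay", "switch")}
for s, p in scenarios.items():
    best = max(payoff[s], key=payoff[s].get)
    weights[best] += p

print(weights)  # {'stay': 0.4, 'switch': 0.6}
action = random.choices(list(weights), weights=list(weights.values()))[0]
```

Note this randomised policy is one reading of the question; an alternative reading is to pick the single action with the highest probability-weighted payoff, which would be deterministic.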
Identical copies (non-identical but very similar players?), players with aligned interests: I guess this is a special case of dealing with a predictor agent, where our predictions of each other’s decisions are likely enough to be accurate that they should be taken into account? So UDT might direct you to disregard causality because you’re confident that the other party will do the same on their own initiative?
But I don’t understand what this has in common with amnesia scenarios. Is it about disregarding causality?
Non-perfect predictors: Most predictors of anything as complicated as behaviour are VERY imperfect, both at the model level and the data-collection level. So wouldn’t the optimal thing be to downweight your confidence in what the other player will do when deciding your own course of action? Unless you have information about how they model you, in which case you could try to predict your own behaviour from their perspective?
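The downweighting idea above can be sketched as shrinking a noisy prediction toward your prior in proportion to how reliable you think the predictor is. The function name and all the numbers are my own illustration:

```python
# Sketch: blend a behavioural model's prediction of the other player with
# a prior, weighted by the model's estimated reliability.

def blended_belief(prediction: float, prior: float, reliability: float) -> float:
    """P(other player cooperates), mixing a model's output with a prior.

    reliability = 1.0 trusts the model fully; 0.0 ignores it.
    """
    return reliability * prediction + (1 - reliability) * prior

# A shaky behavioural model says 90% cooperation, but we only trust it 30%,
# so the blended belief barely moves off the 50% prior.
print(round(blended_belief(0.9, prior=0.5, reliability=0.3), 4))  # -> 0.62
```

This is just the simplest linear shrinkage; the point is that a very unreliable predictor leaves you acting almost as if you had no prediction at all.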
Are there any practical applications of UDT that don’t depend on uncertainty as to whether or not I am a simulation, nor on stipulating that one of the participants in a scenario is capable of predicting my decisions with perfect accuracy?
I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it’s not my idea. It’s happening with or without me.
My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.
I’m with you that machines practising medicine autonomously is a bad idea, and so do doctors. Idealistically, because they got into this work in order to help people; cynically, because they don’t want to be rendered redundant.
The primary focus looks like workflow management, not diagnoses. E.g. how to reduce the amount of time various requests sit in a queue by figuring out which humans are most likely the ones who should be reading them.
Also, predictive modelling: e.g. which patients are at elevated risk of bad outcomes, or how many nurses to schedule for a particular shift. Though these don’t strictly require AI/ML and long predate it.
Then there are auto-suggestor/auto-reminder use-cases: “You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn’t mean to use that more specific code?”
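A hypothetical sketch of that auto-suggestor rule, to make the use case concrete. The code strings, keyword list, and function name are all invented for illustration; real diagnosis coding uses ICD codes and far more sophisticated text processing:

```python
# Sketch: flag a generic diagnosis code when the free-text note mentions a
# complication that would justify a more specific code. All identifiers and
# keywords here are illustrative placeholders, not a real coding system.

REFINEMENTS = {
    "diabetes-without-complications": {
        "nephropathy": "diabetes-with-nephropathy",
        "neuropathy": "diabetes-with-neuropathy",
    },
}

def suggest_code(coded, note_text):
    """Return a more specific code if the note text supports one, else None."""
    for keyword, specific in REFINEMENTS.get(coded, {}).items():
        if keyword in note_text.lower():
            return specific
    return None

print(suggest_code("diabetes-without-complications",
                   "Longstanding DM2 with early nephropathy."))
# -> diabetes-with-nephropathy
```

Crucially, in this pattern the human coder still makes the final call; the system only surfaces a suggestion.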
So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It’s the more subtle screw-ups that I’m worried about at the moment.
[Question] Request for comments/opinions/ideas on safety/ethics for use of tool AI in a large healthcare system.
Definition please.
VNM
The first step is to see a psychiatrist and take the medication they recommend. For me it was an immediate night-and-day difference. I don’t know why the hell I wasted so much of my life before I finally went and got treatment. Don’t repeat my mistake.
Yes, OP
I actually tried running your essay through ChatGPT to make it more readable, but it’s way too long. Can you at least break it into non-redundant sections of not more than 3000 words each? Then we can do the rest.
I second that. I actually tried to read your other posts because I was curious to find out why you are getting downvoted—maybe I can learn something outside the LW party-line from you.
But unfortunately, you don’t explain your position in clear, easy to understand terms so I’m going to have to put off sorting through your stuff until I have more time.
I meant prepping metaphorically, in the sense of being willing to delve into the specifics of a scenario most other people would dismiss as unwinnable. The reason I posted this is that, though it’s obvious that the bunker approach isn’t really the right one, I’m drawing a blank on what the right approach would even look like.
That being said, I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI. Are you saying that scenarios where many but not all people die due to political/economic/environmental consequences of AI emergence are unlikely enough to disregard?
So let’s talk about dystopias/weirdtopias. Do you see any categories into which these can be grouped? The question then becomes: who will lose the most and who will lose the least under various types of scenario?
Is it time to talk about AI doomsday prepping yet?
It’s ironic that you’re so excited about autonomous weapons but the first video you posted is a dramatic depiction created by a YouTube account called “Stop Autonomous Weapons”.
I think the idea of this video was to scare the public by how powerful, precise, and possibly opaque these weapons are.
But I agree with you—ethical or not, groups that limit their use of these weapons will be at a disadvantage against groups that do not. That’s a microcosm of the whole AI regulatory problem right there.
I’m sad to see him go. I don’t know enough about LW’s history, and have too little experience with forum moderation, to agree or disagree with your decision. Though LW has been around for a very long time without imploding, so that’s evidence you guys know what you’re doing.
Please don’t take down his post though. I believe somewhere in there is a good faith opinion at odds with my own. I want to read and understand it. Just not ready for this much reading tonight.
I wish I could write so prolifically! Or maybe it’s a curse rather than a blessing because then it becomes an obstacle to people understanding your point of view.
Are there any links we can read about non-appeasing de-escalation strategies?
Either theoretical ones or ones that have been tried in the past are fine.
How much should we care about non-human animals?
There have been instances of nuclear first-use threats and advocacy thereof, and those are easy to condemn. But as far as I know they are coming unilaterally from the Russian side and are already being widely condemned by those not on the Russian side. It sounds like you are looking for some broader consensus to condemn escalation on both sides.
Unfortunately, neither this post nor the open letter you linked gives any specifics about what other behaviours you are asking us to condemn. I’m reluctant to risk endorsing a false-equivalence argument by signing a blank cheque.
Is blowing up the Kerch bridge escalatory? Is Arestovich’s trolling of the occupiers to sap their morale and bolster that of the defenders escalatory? I’m not qualified to determine whether the tactical or psychological benefit justifies the escalatory risk of these sorts of actions, and in the Kerch example we don’t even know whether it was done by the Ukrainian government, provocateurs, or sympathizers acting independently.
I agree that it’s not a binary choice between appeasement and escalation, and I am very curious about the non-appeasing de-escalation strategies you allude to. That’s what we should be brainstorming and what you should lead with in your letter for it to be convincing.
The EU approach to getting Ukraine to protect the rights of minorities seems more… sustainable… than Russia’s approach, so I propose a different compromise:
How about Russia withdraw all its troops back to the 2014 borders and we all give the slow, non-violent path a chance to work.
I guess scenarios where humans occupy a niche analogous to that of animals we don’t value but either cannot exterminate or choose not to.