I operate by Crocker’s rules.
niplav
Interesting! Sounds quite similar to the contents on the blog.
Purely pragmatically, something having moral agency seems to me to be just another way of saying “Will this thing learn to behave better if I praise/blame it.” (Praise and Blame are Instrumental).
But this is, of course, just a definitional debate.
I had a very similar thought a while back, but was thinking more of best possible current versions of myself.
You said this:
I think your coherent, extrapolated self may know things you don’t know, and may have learned that some of your goals were misguided. Because your ability to communicate with them is bottlenecked on your current skills and beliefs, I can’t vouch for the advice they might give.
I called it “Coherent Extrapolated Niplav”, where I was sort-of having a conversation with CEN, and since it was CEN, it was also sympathetic to me (after all, my best guess is that if I was smarter, thought longer etc., I’d be sympathetic to other people’s problems!).
If you haven’t read it yet, you might be very interested in Reason as memetic immune disorder.
[epistemic status: tried it once] Possible life improvement: Drinking cold orange juice in the shower. Just did it, it felt amazing.
Hot shower.
This might become difficult with values of , though…
Judge it as “right”. PB automatically converts your 10% predictions into 90%-not predictions for the calibration graph, but under the hood everything stays with the probabilities you provided. Hope this cleared things up.
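To make the folding explicit, here is a minimal sketch of what a calibration graph might do with low-probability predictions, assuming PredictionBook-style behavior (the function name is illustrative, not PB’s actual code):

```python
def fold_prediction(p, outcome):
    """Return (plotted_probability, plotted_outcome) for the calibration graph.

    p: stated probability that the event happens (0..1)
    outcome: True if the event happened
    """
    if p < 0.5:
        # A 10% prediction that X happens is plotted as a
        # 90% prediction that X does not happen.
        return 1 - p, not outcome
    return p, outcome

# Your 10% prediction of an event that did not happen:
print(fold_prediction(0.10, False))  # -> (0.9, True): a correct 90% prediction
```

Under the hood nothing changes: the stated probability is still 10%, only the plotted point is folded onto the upper half of the graph.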
Idea for an approach to gauge how close GPT-3 is to “real intelligence”: generalist forecasting!
Give it the prompt: “Answer with a probability between 0 and 1. Will the UK’s Intelligence and Security committee publish the report into Russian interference by the end of July?”, repeat for a bunch of questions, grade it at resolution.
Similar things could be done for range questions: “Give a 50% confidence interval on values between −35 and 5. What will the US Q2 2020 GDP growth rate be, according to the US Bureau of Economic Analysis Advance Estimate?”.
Perhaps include the text of the question to allow more priming.
Upsides: Making predictions seems to me to be a huge part of intelligence, and it is relatively easy to check and compare against humans.
Downsides: Resolution will not be available for quite some time, and when the results are in, everybody will already be interested in the next AI project. Results only arrive “after the fact”.
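The grading step could be sketched with a proper scoring rule such as the Brier score (lower is better). The forecasts and outcomes below are made up for illustration; how one would actually query the model is left open:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical model outputs and eventual question resolutions:
forecasts = [0.7, 0.2, 0.9]   # probabilities the model gave
outcomes  = [1, 0, 1]         # 1 = happened, 0 = didn't

print(brier_score(forecasts, outcomes))
```

The same score computed over a set of human forecasters on the same questions would give the comparison baseline.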
[Question] “Do Nothing” utility function, 3½ years later?
Cryonics Cost-Benefit Analysis
I have been putting this off because my medical knowledge is severely lacking, and I would have to estimate how the leading causes of death affect the chances of getting cryopreserved mainly by evaluating them subjectively. That said, I’ll look up some numbers, update the post and notify you about it (other people have been requesting this as well).
Math is interesting in this regard because it is both very precise and there’s no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you yourself know whether your proof is bullshit).
Programming, OTOH, gives you clear feedback loops.
I feel like this meme is related to the troll bridge problem, but I can’t explain how exactly.
Mistake theory/conflict theory seem more like biases (often unconscious, hard to correct in the moment of action) or heuristics (which should be easily overruled by object-level considerations).
Politics supervenes on ethics and epistemology (maybe you also need metaphysics, not sure about that).
There are no degrees of freedom left for political opinions.
Actually not true. If “ethics” is a complete & transitive preference over universe trajectories, then yes, otherwise not necessarily.
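As a toy illustration of the definitional point, completeness and transitivity of a “weakly prefers” relation over a (stand-in, finite) set of universe trajectories are exactly the two checks below; everything here is hypothetical scaffolding:

```python
from itertools import product

trajectories = ["A", "B", "C"]
# weakly_prefers[(x, y)] == True means "x is at least as good as y".
# Here: alphabetically earlier trajectories are weakly preferred.
weakly_prefers = {(x, y): x <= y for x, y in product(trajectories, repeat=2)}

def is_complete(rel, items):
    """Every pair is comparable: x >= y or y >= x."""
    return all(rel[(x, y)] or rel[(y, x)] for x, y in product(items, repeat=2))

def is_transitive(rel, items):
    """x >= y and y >= z imply x >= z."""
    return all(not (rel[(x, y)] and rel[(y, z)]) or rel[(x, z)]
               for x, y, z in product(items, repeat=3))

print(is_complete(weakly_prefers, trajectories))   # True
print(is_transitive(weakly_prefers, trajectories)) # True
```

If either check fails, the “no degrees of freedom” conclusion doesn’t go through.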
Thank you, I found this slightly amusing.
If we don’t program philosophical reasoning into AI systems, they won’t be able to reason philosophically.
What is your verdict?
I’m currently reading through his blog Metamoderna and feel like there are some similarities to rationalist thoughts on there (e.g. this post on what he calls “game change” and this post on what he calls proto-synthesis).