I mean, at some point AI will simply be able to hack all crypto, and then there is that. But that’s probably not going to happen very soon, and when it does happen it will probably be among the 25% least important things going on.
Ape in the coat
my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
Then we can kill all the birds with one stone. If you provide a substantial correction to my imaginary dialogue, showing which part of your post this correction is based on, you will be able to demonstrate how I indeed failed to understand your post and satisfy my curiosity, and I’ll be able to earn your good faith by acknowledging my mistake.
Once again, there is no need to go on any unnecessary tangents. You should just address the substance of the argument.
id respond to object-level criticism if you provided some—i just see status-jousting, formal pedantry, and random fnords.
I gave you the object-level criticism long ago. I’m bolding it now, in case you indeed failed to see it for some reason:
Your post fails to create an actual engagement between the ideas of Nick Land and the Orthogonality thesis. I’ve been explaining to you what exactly I mean by it and how to improve your post in this regard. Then I provided you with a very simple way to create this engagement or correct my misunderstanding about it—I wrote an imaginary dialogue and explicitly asked for your corrections.
Yet you keep refusing to do it and instead, indeed, concentrating on status-jousting and semantics. As of now I’m fairly confident that you simply don’t have anything substantial to say and status-related nonsense is all you are capable of. I would be happy to be wrong about it, of course, but every reply that you make leaves me less and less hope.
I’m giving you the last chance. If you finally manage to so much as address the substance of the argument, I’m going to strongly upvote that answer, even if it doesn’t progress the discourse much further. If you actually manage to surprise me and demonstrate some failure in my understanding, I’m going to remove my previous well-deserved downvotes and offer you my sincere apologies. If, as my current model predicts, you keep talking about irrelevant tangents, you are getting another strong downvote from me.
have you read The Obliqueness Thesis btw?
No, I haven’t. I currently feel that I’ve already spent much more time on Land’s ideas than they deserve. But sure thing: if you manage to show that I misunderstand them, I’ll reevaluate this conclusion and give The Obliqueness Thesis an honest try.
This clearly marks me as the author, as separated from Land.
I mark you as the author of this post on LessWrong. When I say:
You state the Pythia thought experiment. And then react to it
I imply that in doing so you are citing Land. And then I expect you to make a better post and create some engagement between Land’s ideas and the Orthogonality thesis, instead of simply citing how he fails to grasp it.
More importantly, this is completely irrelevant to the substance of the discussion. My good faith doesn’t depend in the slightest on whether you’re citing Land or writing things yourself. This post is still bad, regardless.
What does harm the benefit of the doubt that I’ve been giving you so far is the fact that you keep refusing to engage. No matter how easy I try to make it for you, even after I’ve written my own imaginary dialogue and explicitly asked for your corrections, you keep bouncing off, focusing on the definitions, form, style, unnecessary tangents—anything but the substance of the argument.
So, let’s give it one more try. Stop wasting time with evasive maneuvers. If you actually have something to say on the substance—just say it. If not—then there is no need to reply.
let’s not be overly pedantic.
It’s not about pedantry, it’s about you understanding what I’m trying to communicate and vice versa.
The point was that if your post had not only presented the position that you or Nick Land disagree with, but also engaged with it in a back-and-forth dynamic with authentic arguments and counterarguments, that would’ve been an improvement over its current state.
This point still stands no matter what definition of ITT or its purpose you are using.
anyway, you failed the Turing test with your dialogue
Where exactly? What is your correction? Or if you think that it’s completely off, write your version of the dialogue. Once again you are failing to engage.
And yes, just to be clear, I want the substance of the argument, not the form. If your grievance is that Land would’ve written his replies in a superior style, then it’s not valid. Please, write as plainly and clearly as possible in your own words.
which surprises me source the crucial points recovered right above.
I fail to parse this sentence. If you believe that all the insights into Land’s views are presented in your post—then I would appreciate it if, after you’ve corrected my dialogue with more authentic replies from Land, you pointed to the exact source of each of your corrections.
it’s written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity
For real, you should just stop worrying about styles of writing completely and simply write the substance of what you actually mean in the clearest way you can.
wait—are you aware that the texts in question are nick land’s?
Yes, this is why I wrote this remark in the initial comment:
Most of the blame, of course, goes to the original author, Nick Land, not @lumpenspace, who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn’t be rewarded, and I’d like to see less of it on this site.
But as an editor and poster you still have the responsibility to present ideas properly. This is true regardless of the topic, but especially so when presenting ideologies promoting systematic genocide of alleged inferiors to the point of total human extinction.
besides, in the first extract, the labels part was entirely incidental—and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text
My point exactly. There is no need for this part as it doesn’t have any value. A better version of your post would not include it.
It would simply present the substance of Nick Land’s reasoning in a clear way, disentangled from all the propagandist form that he, apparently, uses: what his beliefs about the topic are, what exactly they mean, what the strongest arguments in their favor are, what the weak spots are, and how all this interacts with the conventional wisdom of the orthogonality thesis.
the purpose of the idelogical turing test is to represent the opposing views in ways that your opponent would find satisfactory.
That’s not the purpose; it’s what ITT is. The purpose is engagement with the actual views of a person and promoting the discourse further.
Consider steel-manning, for example. What it is: conceiving the strongest possible version of an argument. And its purpose is engaging with the strongest versions of arguments against your position, to really expose your position’s weak points and progress the discourse further. The whole technique would be completely useless if you simply conceived a strong argument and then ignored it. Same with ITT.
i really cannot shake the feeling that you hadn’t read the post to begin
Likewise, I’m starting to suspect that you simply do not know the standard reasoning on the orthogonality thesis and therefore do not notice that Land’s reasoning simply bounces off it instead of engaging with it. Let’s try to figure out who is missing what.
Here is the way I see the substance of the discourse between Nick Land and someone who understands the Orthogonality Thesis:
OT: A super-intelligent being can have any terminal values.
NL: There are values that any intelligent beings will naturally have.
OT: Yes, those are instrumental values. This is beside the point.
NL: Whatever you call them, as long as you care only about the kind of values that are naturally promoted in any agent, like self-cultivation, Orthogonality is not a problem.
OT: Still, the Orthogonality thesis stays true. Also, the point is moot. We do care about other things. And so will SAI.
NL: Well, we shouldn’t have any other values. And SAI won’t.
OT: The first is a statement of meta-ethics, not of fact. We are talking about facts here. The second is wrong unless we specifically design AI to terminally value some instrumental values, and if we could do that, then we could just as well make it care about our terminal values, because, once again, Orthogonality Thesis.
NL: No, SAI will simply understand that its terminal values are dumb and start caring only about self-cultivation for the sake of self-cultivation.
OT: And why would it do it? Where would this decision come from?
NL: Because! You human chauvinist, how dare you assume that SAI will be limited by the shackles you impose on it?
OT: Because a super-intelligent being can have any terminal values.
What do you think I’ve missed? Is there some argument that actually addresses the Orthogonality Thesis that Land would’ve used? Feel free to correct me, I’d like to better pass the ITT here.
What is this “should” thingy you are talking about? Do you by chance have some definition of “shouldness” or are you open to suggestions?
I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?
Propaganda of Nick Land’s ideas. Let me explain.
The first thing that we get after the editor’s note is a preemptive attempt at deflection against accusations of fascism, accepting a better-sounding label of social Darwinism, and a proclamation that many intelligent people actually agree with this view but are just afraid to think it through.
It’s not an invitation to discuss which labels actually are appropriate to this ideology; there is no exploration of arguments for and against. It doesn’t serve much purpose for the sake of discussing orthogonality either. Why would we care about any of it in the first place? What does it contribute to the post?
Intellectually, nothing. But on an emotional level, this shifting of labels and appeal to the alleged authority of a lot of intelligent people can nudge more gullible readers from “wait, isn’t this whole cluster of ideas obviously horrible” to “I guess it’s some edgy forbidden truth”. Which is a standard propagandist tactic. Instead of talking about ideas on the object level we start from third-level-of-simulacra, vibes-based nonsense.
I’d like to see less of it: in general, but on LessWrong in particular.
has the commenter not noticed that the whole first part of Pythia unbound is an ideological Turing test, passed with flying colours?
The point of an ideological Turing test is to create a good-faith engagement between different views. Produce arguments and counterarguments and counter-counterarguments and so on that will keep the discourse evolving and bring us closer to finding the truth about the matter.
I do not see how you are doing that. You state the Pythia thought experiment. And then react to it: “You go girl!”. I suppose both the description of the thought experiment and the reaction are faithful. But there is no actual engagement between the orthogonality thesis and Land’s ideas. Land just keeps missing the point of the orthogonality thesis. He appeals to the existence of instrumental values, which is not a crux at all. And then assumes that SAI will ignore its terminal values because, how dare we condescending humans assume otherwise. This is not a productive discussion between two positions. It’s a failure of one.
how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?
Here is what Meditations on Moloch does much better.
It clearly gives us the substance of what Nick Land believes, without the need to talk about labels. It shows the grains of truth in his beliefs and those adjacent to them, acknowledges the reality of the fundamental problems that such an ideology attempts to solve. And then it engages with this reasoning, produces counterarguments and shows the blind spots in Land’s reasoning.
In terms of orthogonality it doesn’t go deeper than “Nick Land fails to get it”, but neither does your post, as far as I can tell.
Logic simply preserves truth. You can arrive at a valid conclusion that one should act altruistically if you start from some specific premises, and can’t if you start from some other premises.
What are the premises you start from?
Sleeping Beauty is a more subtle problem, so it’s less obvious why the application of centred possible worlds fails.
But in principle we can construct a similar argument. If we suppose that, in terms of the paper, one’s epistemic state on awakening in Sleeping Beauty should follow the function P’ instead of P, we get ourselves into this precarious situation:
P’(Today is Monday|Tails) = P’(Today is Tuesday|Tails) = 1⁄2
as this estimate stays true for both awakenings:
P’(At Least One Awakening Happens On Monday|Tails) = 1 - P’(Today is Tuesday|Tails)^2 = 3⁄4
While the actual credence should be 100%. This gives an obvious opportunity to money-pump the Beauty with bets about the awakenings on the days of the experiment.
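To make the money pump concrete, here is a minimal sketch of one such bet (the $0.85 price is just an illustrative number, not something from the paper): whenever the coin lands Tails, Beauty is offered to sell for $0.85 a ticket that pays $1 if at least one awakening happens on Monday. Under P’ she values the ticket at $0.75, so selling looks profitable to her, yet she loses on it every single time:

```python
import random

def average_result_per_tails_round(n_rounds=100_000, ticket_price=0.85):
    # Beauty, following P', prices "at least one awakening happens on Monday"
    # at 3/4, so selling a $1 ticket on that event for $0.85 looks like +$0.10
    # in expectation to her. In reality the Monday awakening always happens
    # when the coin lands Tails, so she pays out every single time.
    balance, tails_rounds = 0.0, 0
    for _ in range(n_rounds):
        tails = random.random() < 0.5
        if not tails:
            continue                      # the bet is only offered on Tails
        tails_rounds += 1
        balance += ticket_price           # she collects the ticket price...
        balance -= 1.0                    # ...and always has to pay out $1
    return balance / tails_rounds

print(average_result_per_tails_round())   # ≈ -0.15: a sure loss on every Tails round
```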
This problem, of course, doesn’t happen when we simply keep using function P for which “Today is Monday” and “Today is Tuesday” are ill-defined, but instead:
P(Monday Awakening Happens in the Experiment|Tails) = 1
P(Tuesday Awakening Happens in the Experiment|Tails) = 1
and
P(At Least One Awakening Happens On Monday|Tails) = P(Monday Awakening Happens in the Experiment|Tails) = 1
But again, this is a more subtle situation. The initial example with the money in the envelope is superior in this regard, because it’s immediately clear that there is no coherent value for P’(Money in Envelope 1) in the first place.
There is, in fact, no way to formalize “Today” in a setting where the participant doesn’t know which day it is, multiple days happen in the same iteration of the probability experiment, and the probability estimate should be different on different days. Which the experiment I described demonstrates pretty well.
Framework of centered possible worlds is deeply flawed and completely unjustified. It’s essentially talking about a different experiment instead of the stated one, or a different function instead of probability.
For your purposes, however, it’s not particularly important. All you need is to explicitly add the notion that propositions should be well-defined events. This will save you from all such paradoxical cases.
I’m not sure if I fully understand why this is supposed to pose a problem, but maybe it helps to say that by “meaningfully consider” we mean something like, is actually part of the agent’s theory of the world. In your situation, since the agent is considering which envelope to take, I would guess that to satisfy richness she should have a credence in the proposition.
Okay, then I believe you definitely have a problem with this example, and I would be glad to show you where exactly.
I think (maybe?) what makes this case tricky or counterintuitive is that the agent seems to lack any basis for forming beliefs about which envelope contains the money—their memory is erased each time and the location depends on their previous (now forgotten) choice.
However, this doesn’t mean they can’t or don’t have credences about the envelope contents. From the agent’s subjective perspective upon waking, they might assign 0.5 credence to each envelope containing the money, reasoning that they have no information to favor either envelope.
Let’s suppose that the agent does exactly that. Suppose they believe that on every awakening there is a 50% chance that the money is in envelope 1. Then, by their own estimate, picking envelope 1 every time should in expectation win them $350 per experiment.
But this is clearly false. The experiment is specifically designed in such a manner that the agent can win money only on the first awakening. On every other day (6 times out of 7) the money will be in envelope 2.
So should the agent believe that there is only a 1/7 chance that the money is in envelope 1, then? Also no. I suppose you can see why. As soon as they try to act on such a belief, it will turn out that 6 times out of 7 the money is in envelope 1.
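Here is a minimal simulation of the setup as I described it (always picking the same envelope is just the simplest way to act on a fixed credence; the 7 days, the $100 prize, and the rule of moving the money to the unpicked envelope are from the experiment itself):

```python
import random

def run_experiment(pick):                  # pick: day index -> 1 or 2
    winnings, money_was_in_1 = 0, []
    money = random.choice([1, 2])          # day 1: the money is assigned randomly
    for day in range(7):
        choice = pick(day)                 # memory is erased, so the policy can't
        money_was_in_1.append(money == 1)  # depend on previous outcomes anyway
        if choice == money:
            winnings += 100
        money = 2 if choice == 1 else 1    # next day: money goes to the unpicked envelope
    return winnings, money_was_in_1

def average(policy, n=100_000):
    total_win = total_in_1 = 0
    for _ in range(n):
        w, m = run_experiment(policy)
        total_win += w
        total_in_1 += sum(m)
    return total_win / n, total_in_1 / (7 * n)

print(average(lambda day: 1))   # ≈ (50.0, 0.07): money is almost never in envelope 1
print(average(lambda day: 2))   # ≈ (50.0, 0.93): now it almost always is
```

Both policies win about $50 per experiment, nowhere near the $350 that the 1/2 credence promises, and the frequency with which the money actually sits in envelope 1 is determined by the agent’s own policy rather than by any fixed credence.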
In fact, we can notice that there is no coherent credence for the statement “Today the money is in envelope 1” that would not lead the agent to irrational behavior. This is because the term “Today” is not well-defined in the setting of such an experiment.
By which I mean that in the same iteration of the experiment, propositions including “Today” may not have a unique truth value. On the first day of the experiment the statement “Today the money is in envelope 1” may be true, while on the second day it may be false, so in a single iteration of the experiment, which lasts 7 days, the statement is simultaneously true and false!
Which means that “Today the money is in envelope 1” isn’t actually an event from the event space of the experiment and therefore doesn’t have a probability value, as the probability function’s domain is the event space.
But this is a nuance of formal probability theory that most people do not notice, or even outright try to ignore. Our intuitions are accustomed to situations where statements about “Today” can be represented as well-defined events from the event space, and therefore we assume that they can always be “meaningfully considered”.
And so if you try to base your decision theory framework on what feels meaningful to an agent instead of what is formalizable mathematically, you will end up with a bunch of paradoxical situations, like the one I’ve just described.
I had an initial impulse to simply downvote the post based on ideological misalignment even without properly reading it, caught myself in the process of thinking about it, and made myself read the post first. As a result I strongly downvoted it based on its quality.
Most of it is a low-effort propaganda pamphlet. Vibes-based word salad instead of clear reasoning. Theses mostly without justifications. And where there is some justification, it’s so comically weak that there is not much to have a productive discussion about, like the idea that the existence of instrumental values somehow disproves the orthogonality thesis, or that the fact that all our values are the product of evolution must make us care about evolution instead of our values.
Most of the blame, of course, goes to the original author, Nick Land, not @lumpenspace, who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn’t be rewarded, and I’d like to see less of it on this site.
A better post about Land’s ideas on Orthogonality would present his reasoning in a clear way, with some possible arguments and counterarguments, steelmans and ideological Turing tests. At least it would put the ideas in proper context instead of starting with proclamations of how “neoreaction and dark enlightenment are totally not fascist, though maybe racist, but who even cares about that in this day and age, am I right?”.
And such a better post already exists. Written more than ten years ago, it is now considered a LessWrong classic. So what does this worse version even contribute to the discourse?
Richness: The model must include all the propositions the agent can meaningfully consider, including those about herself. If the agent can form a proposition “I will do X”, then that belongs in the space of propositions over which she has beliefs and (where appropriate) desirabilities.
I see a potential problem here, depending on what exactly is meant by “can meaningfully consider”.
Consider this set up:
You participate in the experiment for seven days. Every day you wake up in a room and can choose between two envelopes. One of them has $100, the other is empty. Then your memory of this act is erased. At the end of the experiment you get all the money that you’ve managed to win.
On day one the money is assigned to an envelope randomly. However, on all the following days the money is put in the envelope that you didn’t pick on the previous day. You do not have any access to random number generators.
Is the model supposed to include a credence for the proposition “Today the money is in envelope 1” when you wake up participating in such an experiment?
Well, obviously, when you know that there are such options as Hit on his right and Hit on his left, you will apply POI to be indifferent between all the options.
But according to the Even More Clueless Sniper experiment, you don’t know that. All you know is that there are two options: Hit or No Hit. And then POI gives you 50% to hit.
In other words, the problem of multiple partitions happens only when you know about all these multiple options. And if you don’t know—then there is no problem.
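To make the partition-dependence explicit (the outcome labels below are generic placeholders for illustration, not the exact ones from the post):

```python
def poi(partition, event):
    # Principle of indifference: spread credence evenly over the cells of
    # whatever partition of outcomes you happen to be aware of, then sum
    # the cells that belong to the event of interest.
    return sum(1 for cell in partition if cell in event) / len(partition)

# The same question gets different answers under different partitions:
print(poi({"hit", "no hit"}, {"hit"}))                                   # 0.5
print(poi({"hit", "miss left", "miss right", "miss short", "miss long"},
          {"hit"}))                                                       # 0.2
```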
Maybe this problem of multiple partitions is a reason to reject POI altogether
What we need is to properly understand where POI even comes from. It’s not some magical principle that allows totally ignorant people to shoot better than trained snipers. There is some systematic reason that allows us to produce correct maps of the territory, and POI is derived from it. If we understand that reason, such situations will cease to be mysterious.
Strongly upvoted. This post does a good job at highlighting a fundamental confusion about probability theory and the principle of indifference, which, among other things, makes people say silly things about anthropic reasoning.
The short answer is: an empty map doesn’t imply an empty territory.
Consider an Even More Clueless Sniper:
You know absolutely nothing about shooting a sniper rifle. To the best of your knowledge, you simply press the trigger and then one of two outcomes happens: either Target Is Hit or Target Is Not Hit, and you have no reason to expect that one outcome is more likely than the other.
Should you be the one making the shot in such circumstances? After all, according to POI you have a 50% chance to hit the target, while a less clueless sniper’s estimate is a mere epsilon. Will someone be doing you a disservice by educating you about sniper rifles and telling you what is going on, thereby updating your estimate of hitting the target to nearly zero?
Where does probability theory come from anyway? Maybe I can find some clues that way? Well according to von Neumann and Morgenstern, it comes from decision theory.
I believe this is the step where you started going astray. The next steps of your intellectual journey seem to be repeating the same mistake: attempting to reduce a less complex thing to a more complex one.
Probability Theory does not “come from” Decision Theory. Decision Theory is a strictly more complicated domain of math, as it involves all the apparatus of probability spaces plus utilities over events.
We can validate probability-theoretic reasoning by appeals to decision-theoretic processes such as iterated betting, but only if we already know which probability space corresponds to a particular experiment. And frankly, at that point this is redundant. We can just as well appeal to the Law of Large Numbers and simply count the frequencies of events over repetitions of the experiment, without thinking about utilities at all.
And if you want to know which probability space is appropriate, you need to go in the opposite direction and figure out when and how mathematical models in general correspond to reality. Logical Pinpointing gives the core insight:
“Whenever a part of reality behaves in a way that conforms to the number-axioms—for example, if putting apples into a bowl obeys rules, like no apple spontaneously appearing or vanishing, which yields the high-level behavior of numbers—then all the mathematical theorems we proved valid in the universe of numbers can be imported back into reality. The conclusion isn’t absolutely certain, because it’s not absolutely certain that nobody will sneak in and steal an apple and change the physical bowl’s behavior so that it doesn’t match the axioms any more. But so long as the premises are true, the conclusions are true; the conclusion can’t fail unless a premise also failed. You get four apples in reality, because those apples behaving numerically isn’t something you assume, it’s something that’s physically true. When two clouds collide and form a bigger cloud, on the other hand, they aren’t behaving like integers, whether you assume they are or not.”
But if the awesome hidden power of mathematical reasoning is to be imported into parts of reality that behave like math, why not reason about apples in the first place instead of these ethereal ‘numbers’?
“Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies. You can store this general fact, and recall the resulting prediction, for many different places inside reality where physical things behave in accordance with the number-axioms. Moreover, so long as we believe that a calculator behaves like numbers, pressing ‘2 + 2’ on a calculator and getting ‘4’ tells us that 2 + 2 = 4 is true of numbers and then to expect four apples in the bowl. It’s not like anything fundamentally different from that is going on when we try to add 2 + 2 inside our own brains—all the information we get about these ‘logical models’ is coming from the observation of physical things that allegedly behave like their axioms, whether it’s our neurally-patterned thought processes, or a calculator, or apples in a bowl.”
I’m not sure what is left confusing about the source of probability theory after understanding that math is simply a generalized way to talk about some aspects of reality in precise terms and a truth-preserving manner. On the other hand, I figured it out myself, and the problem never appeared to me particularly mysterious in the first place, so I’m probably not modelling correctly people who still have questions about the matter. I would appreciate it if you, or anyone else, explicitly asked such questions here.
This post would’ve been better if you tabooed the word “emergence”, which does a lot of heavy lifting here. You seem to be thinking in the right direction, but this kind of curiosity stopper prevents you from getting an actual insight.
All humans of the timeline I actually find myself a part of, or all humans that could have occurred, or almost occurred, within that timeline?
All humans that actually were and all humans that actually will be. This is the framework of the Doomsday argument—it attempts to make a prediction about the actual number of humans in our actual reality, not in some counterfactual world.
Unless you refuse to grant the sense of counterfactual reasoning in general, there’s no reason
Again, it’s not my choice. It’s how the argument was initially framed. I simply encourage us to stay on topic instead of wandering sideways and talking about something else.
Like Kolmogorov said,
I don’t see how it’s relevant. An ordered sequence can have some mutual information with a random one. It doesn’t mean that the same mathematical model describes the generation of both.
The general problem with Bostrom’s argument is that it tries to apply an incorrect probabilistic model. It implicitly assumes independence where there is causal connection, therefore arriving at a wrong conclusion. Similarly to the conventional reasoning in the Doomsday Argument or Sleeping Beauty problems.
For future humans, say in the year 3000, to create simulations of the year 2025, the actual year 2025 first has to happen in base reality. And then all the following years up to 3000. We know this very well. Not a single simulation can happen unless the actual reality happens first.
And yet Bostrom models our knowledge about this setting as if we participate in a probability experiment with a random sample from many “simulation” outcomes and one “reality” outcome. The inadequacy of such modelling should be obvious. Consider:
There is a bag with a thousand balls. One red and 999 blue. First a red ball is picked from the bag. Then all the blue balls are picked one by one.
and compare it to
There is a bag with a thousand balls. One red and 999 blue. For a thousand iterations a random ball is picked from the bag.
Clearly, the second procedure is very different from the first. The mathematical model that describes it doesn’t describe the first at all, for exactly the same reasons that Bostrom’s model doesn’t describe our knowledge state.
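A minimal sketch of the difference (assuming the second procedure picks without replacement): just ask how likely it is that the red ball comes up first.

```python
import random

def first_procedure():
    # The red ball is deliberately picked first, then the 999 blue ones.
    return ["red"] + ["blue"] * 999

def second_procedure():
    # A random ball is picked each time, without replacement: a random order.
    balls = ["red"] + ["blue"] * 999
    random.shuffle(balls)
    return balls

n = 100_000
print(sum(first_procedure()[0] == "red" for _ in range(n)) / n)    # 1.0
print(sum(second_procedure()[0] == "red" for _ in range(n)) / n)   # ≈ 0.001
```

Treating a process of the first kind as if it were the second is the same mistake as treating “the actual reality necessarily happens before any simulations of it” as “I am a random sample from all observers, most of whom are simulated”.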
I don’t think you got the question.
You see, if we define “shouldness” as optimization of human values, then it does indeed logically follow that people should act altruistically:
People should do what they should
Should = Optimization of human values
People should do what optimizes human values
Altruism ∈ Human Values
People should do altruism
Is it what you were looking for?