Well, if my crackpot physics is right, it actually kind of reduces the probability I’d assign to the world I inhabit being “real”. Seriously, the ideas aren’t complicated; somebody else really should have noticed them by now.
But sure, it makes predictions. There should be a repulsive force which can be detected when the distance between two objects is somewhere between the radius of the solar system and the radius of the smallest dwarf galaxy. I’d guess somewhere in the vicinity of 10^12 meters.
Also, electric field polarity should invert somewhere between 1 and 10^8 meters. That is, if you measure an electric field to be positive (or negative) at one point, then at some distance away it should measure negative (or positive).
Are these predictions helpful? Dunno.
Either way, however, it doesn’t really say anything about whether the world is internal or external.
Does any of that make an observable difference?
Not really, no. And that’s sort of the point; the claim that the world is external is basically an empty claim.
I think one of the more consistent reports of those who connect with that voice is that they lose that fear.
Because it’s expensive, slow, and orthogonal to the purpose the AI is actually trying to accomplish.
As a programmer, I take my complicated mirror models, try to figure out how to transform them into sets of numbers, try to figure out how to use one set of those numbers to create another set of those numbers. The mirror modeling is a cognitive step I have to take before I ever start programming an algorithm; it’s helpful for creating algorithms, but useless for actually running them.
Programming languages are judged as helpful in part by how well they do at pretending to be a mirror model, and efficient by how well they completely ignore the mirror model when it comes time to compile/run. There is no program which is made more efficient by representing data internally as the objects the programmers created; efficiency gains are made in compilers by figuring out how to reduce away the unnecessary complexity the programmers created for themselves so they could more easily map their messy intuitions to cold logic.
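A toy sketch of that point in Python (all names here are hypothetical, invented purely for illustration): the same computation written once against an object model and once as bare arithmetic. The object exists for the programmer; erasing it changes nothing about the result, which is roughly what a compiler spends its effort doing.

```python
# Object-level ("mirror model") version: readable for the programmer.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


# What the machine actually runs is closer to this: the object is gone,
# only the numbers and the operations on them remain.
def deposit_flat(balance, amount):
    return balance + amount


acct = Account(100)
acct.deposit(50)
assert acct.balance == deposit_flat(100, 50)  # same computation, no object
```

The `Account` abstraction helps a human map intuitions about money onto code; nothing about the addition itself needs it.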
Why would an AI introduce this step in the middle of its processing?
I think it’s useful to model Democracy as a mock war which we perform every so often to forestall the necessity for real war.
That is, the objective in Democracy is to balance interests such that nobody who would win a war in pursuit of their own interests has any incentive to actually go to war—and additionally, with respect to, for example, minority rights, that any game-theoretic incentive to engage in a losing war is also eliminated. (An extreme example of incentive-to-fight-a-losing-war is genocide; we want Democracy to prevent genocide, because any potential target parties in the case of genocide have a game theory incentive to go to war, even if they would lose, to make doing so expensive. Less extreme examples will also suffice, but may be harder to argue around.)
Thus, Democracy is a form of cooperation to the net benefit of the participants.
Defection in a Democracy is the majority taking any action which the minority would prefer to go to war than to allow to pass.
Vague language is often the result of vague thinking; most people do not actually try to be specific in their thinking; many of them don’t know how.
Vague language will also arise when language doesn’t correctly encapsulate a concept, or when the writer doesn’t know how to use language for that specific purpose; pointing more specifically at the wrong thing is being actively misleading. Thus vague language can often occur in areas where there isn’t a common and codified way of expressing specific thoughts. For example, this post is vague about what vague language is; the specific concept is one I suspect you’ve never had to specify, so it’s hard to translate it into words. Instead you focus on what it isn’t, trying to be specific by ruling out, rather than ruling in.
Vague language can also arise in areas where the common communication mechanism is necessarily lossy, such as when talking about qualia.
I think “deliberate” is doing most of the heavy lifting in this post.
That’s a very weak form of anti-realism. If 0 and 1 aren’t probabilities, nothing is absolutely provable.
Sure. Is realism the claim that reality probably exists, or definitely exists, however?
What do “inside” and “outside” mean?
In a sense, it’s a statement of dependence. If our minds are inside the world, then if the world stops existing, so do our minds. If the world is inside our mind, then if our mind stops existing, so does the world.
In another sense, it’s a statement of correspondence. If our minds are inside the world, then the map-territory distinction is ontologically important (note that the map-territory distinction is itself an anti-realist position, as it argues that there is no direct correspondence between the world and the contents of our mind). If the world is inside our mind, then the map-territory distinction is indistinguishable from any other state of confusion.
Note the importance of “external”—if we omit external, then the word “world” just refers to the common factor of our experiences, whatever that is, and we don’t actually disagree.
That is, anti-realism holds that nothing provably exists outside the mind. The argument comes down to “A world which is internal to our mind, and a world that is external to our mind, are not differentiable”. For what reason would you expect the internality or externality of the world to have a bearing on whether or not inductive logic applies?
Suppose the entire universe boils down to a mathematical equation; everything is one equation, maybe a fractal, which from a point of simplicity gives rise to complexity. What difference do we expect to encounter if that mathematical equation exists inside of our mind, as opposed to outside of it? If the universe is the expression of that equation, and the equation is compatible with induction, then we should expect induction to work without regard to whether the universe is internal or external to our mind.
The value of the external world theory is that it explains why science works at all, not that it explains anything in particular.
No—the validity of inductive logic explains why science works at all. There’s no prior reason to expect inductive logic to be valid in an arbitrary external world.
No, that is not the point.
Suppose for a moment that what you want is possible. Suppose it is possible to write values into an organization such that the organization never stops supporting those values. Is there any point in your own country’s history where you would have wanted them to use this technique to ensure that the values of that era were preserved forever more? What other groups alive today would you actually trust with the ability to do this?
Yes. It’s an additional assumption that leads to greater explanatory power. If it had no such advantage, you should not make it, but since it does, it is not obviously ruled out by parsimony.
It doesn’t add any explanatory power; it only seems to, because you’ve attached all your explanations to that external world. They don’t actually change when you get rid of the external world.
Suppose you live in a simulation. Do any observations become invalid? Are you going to stop expecting the things you have labeled apples to fall in concordance with the inverse-square law?
Suppose the external world isn’t real. Do any observations become invalid? Are you going to stop expecting the things you have labeled apples to fall in concordance with the inverse-square law?
The “external world” hypothesis adds no information to any of your models of your experiences; it predicts nothing.
I wasn’t arguing for moral realism, I was arguing against ignoring agency.
Agency isn’t relevant?
You can assume inexplicable consistency, but assuming a world is assuming explicable consistency.
That is an additional assumption, not the same assumption. Additionally, the claim that the world’s consistency is explicable is just another assumption; you can’t explain why the external world exists, nor why it is consistent.
If you think “The universe exists” is a simpler explanation than “The universe exists because God created it”, because the former assumes only the existence of the universe, and the latter assumes an additional unprovable entity, then you should notice that “My experiences exist” is a simpler explanation than “My experiences exist because there is an external world I interact with”. In both cases the latter is an unprovable statement that only increases the complexity of the necessary assumptions.
Neither does subjective morality, except by changing your actions. But moral realism would change your actions too, if true and compelling. Ethics is supposed to relate to behaviour. You can make it look irrelevant by portraying people as purely passive entities that do nothing but attempt to predict their experiences, but the premise is clearly false.
“Compelling” is doing all the work there, and doesn’t require that the ethics objectively exist in the external world.
Notes to myself:
Let r be the axis of motion, and t be the axis of time. Suppose for a moment we express velocity, as observed from a sufficiently distant observer, as v(r) = c * sin(θ), and rate-of-time as v(t) = c * sin(ω)—such that, in terms of the Lorentz factor γ, v(t) = c/γ, or sin(ω) = 1/γ. That is, suppose we express velocity as a rotation of the plane of the axes of time and the direction of travel relative to some observer, such that θ+ω = π/2. θ (and thus ω) is defined as rotation relative to the observer’s orientation.
Let the curvature at a given point in spacetime be expressed as κ. I think the equation for acceleration might take something like the form dθ/dr = κ(r) * cos(θ).
Struggling with the math for this. Curvature expresses the radius of a circle; what I’m looking for is something more like torque. Rotational acceleration.
Why is there rotational acceleration? Because the near side and far side of a particle are instantaneously moving at different velocities. Why is the rotation in time? Because the disparity in velocity exists with respect to the axes corresponding to time (future light-cone) and distance (to the gravitational body).
This is going to be proportional to the difference in velocity. κ(r) represents the instantaneous curvature. cos(θ) is expected because as θ approaches π/2 (as spatial velocity approaches the speed of light), the acceleration approaches 0. We need to multiply by c here as well, since that is the rate of change; dθ/dr = c * κ(r) * cos(θ).
For acceleration in terms of m/s, I get the following equation:
Let M be the mass of the gravitational body, let G be the gravitational constant, let c be the speed of light.
Let rs=2*M*G/c^2 (Schwarzschild radius)
Let κ(r) = c / (2*r^2 * (1-rs/r)^(1/2)) ← I don’t understand tensors well enough to use the Ricci curvature, so I invented my own curvature that I know represents the values I’m interested in: the derivative of Schwarzschild time dilation as t0/tf.
Let θ = asin(v(r) / c) ← We need to convert the velocity into a form we can work with. I am assuming all velocity is directed either straight towards or straight away from the source of gravity; I don’t know if, or how, the equation will change if this assumption is invalid.
Then the acceleration experienced by a particle in a gravitational field, as measured by an independent observer, should be:
acceleration (m/s^2) = c * κ(r) * cos(θ) * rs
I’ve tested the equation; it appears to work, except for one problem: that last factor, the Schwarzschild radius, makes no sense to me here; without it, my original equation gave the wrong acceleration. I expected some factor there, because otherwise the units wouldn’t make sense. This may be a product of the way I reinvented curvature.
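A quick numeric sanity check, as a Python sketch (the Earth-mass example values and SI constants are my own inputs, not from the notes above). One possible resolution of the stray rs: κ(r) as defined omits the rs factor that appears in the actual derivative d/dr (1-rs/r)^(1/2) = rs / (2*r^2 * (1-rs/r)^(1/2)), so multiplying by rs restores it, and in the weak-field limit (r ≫ rs) the whole expression reduces to the Newtonian GM/r^2.

```python
import math

# Example values (my assumptions: Earth's mass, SI constants)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # mass of Earth, kg

rs = 2 * M * G / c**2  # Schwarzschild radius (~8.9e-3 m for Earth)

def kappa(r):
    # The reinvented "curvature" from the notes. Note it lacks the rs
    # factor present in d/dr sqrt(1 - rs/r), which may be why rs has to
    # reappear in the acceleration formula below.
    return c / (2 * r**2 * math.sqrt(1 - rs / r))

def acceleration(r, v=0.0):
    # acceleration (m/s^2) = c * kappa(r) * cos(theta) * rs
    theta = math.asin(v / c)
    return c * kappa(r) * math.cos(theta) * rs

r = 6.371e6                # Earth's surface radius, m
a = acceleration(r)        # roughly 9.8 m/s^2
newtonian = G * M / r**2   # far from rs, the two should nearly agree
```

Algebraically the formula works out to GM / (r^2 * sqrt(1 - rs/r)), i.e. Newtonian gravity times a small relativistic correction; whether that correction is the right one is exactly what these notes are unsure about.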
However, none of this is actually what I set out to do, since we’re using m/s^2 instead of radians/s^2; that is, I originally set out to figure out either d²θ/dt² or dθ/dr. What I ended up with doesn’t actually look like what I wanted.
You get to write an AI, and decide how it handles its value function.
However, the value function gets to be written by a group that wins a lottery; there are 100 groups who have put in for the lottery. Two of the groups want human extinction, one voluntary and one involuntary; thirty of the groups want all humans to follow their religion; seven of the groups want the world economy to reflect their preferred economic model; and most of the remaining groups want their non-religious cultural values to be enshrined for all time.
How important is it to you that there be no value drift?
What is the strongest argument you know for antirealism?
From Aella: the external world is a meaningless hypothesis; given a set of experiences and a consistent set of expectations about what form those experiences will take in the future, positing an external world doesn’t add any additional information. That is, the only thing that “external world” would add would be an expectation of a particular kind of consistency to those experiences; you can simply assume the consistency, and then the external world adds no additional informational content or predictive capacity.
What is the strongest argument against moral realism?
Just as an external world changes nothing about your expectations of what you will experience, moral realism, the claim that morality exists as a natural feature of the external world, changes nothing about your expectations of what you will experience.
If you think nothing is “valuable in itself” / “objectively valuable”, why do you think so?
Consider a proposal to replace all the air around you with something valuable. Consider a proposal to replace some percentage of the air around you with something valuable.
The ideal proposal replaces neither all of the air, nor none of the air. In the limit of all of the air being replaced, the air achieves infinite relative value. In the limit of none of the air being replaced, the air has, under normal circumstances, no value.
Consider the value of a vacuum tube; vacuum, the absence of anything, has particular value in that case.
Which is all to say—value is strictly relative, and it is unfixed. The case of the vacuum tube demonstrates that there are cases where having nothing at all in a given region is more valuable than having something at all there. If the vacuum tube is part of a mechanical contraption that is keeping you alive, there is nothing you want in that vacuum tube, more than vacuum itself; thus, there is nothing that has, in that specific situation, objective value, given that the only sense by which we can make sense of objective value is a comparison to nothing, and in that particular case nothing is more valuable than the something.
How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn’t make any sense?
Because you’ve tautologically defined it to be so when you said the altruism is disinterested. If I have no interest in a thing, it makes no sense to behave as if I have an interest in that thing. Any sense in which it would make sense for me to have an interest in a thing, is a claim that I have an interest in that thing.
Because I think the word “know”, as used by a human understanding a model, is standing in for a particular kind of mirror-modeling, in which we possess a reproductive model of a thing in our mind which we can use to simulate a behavior, whereas the word “know”, as used by the referent AI, is standing in for “the set of information used to inform the development of a process”.
So an AI which has been trained on a game which it lost can behave “as if it has knowledge of that game”, when in fact the only remnant of that game may be a slightly adjusted parameter, perhaps a connection weighting somewhere is 1% different than it would otherwise be.
To “know” what the AI knows, in the sense that it knows it, requires a complete reproduction of the AI’s state. That is, if you know everything the AI actually knows, as opposed to the information-state that informed the development of the AI, then all you actually know is that this particular connection is weighted 1% differently. To meaningfully apply this knowledge, you must simulate the AI (you must know how all the connections interact in a holistic sense), in which case you don’t know anything; you’re just asking the AI what it would do, which is not meaningfully knowing what it knows in any useful sense.
Which is basically because it doesn’t actually know anything. Its state is an algorithm, a process; this algorithm could perhaps be dissected, broken down, simplified, and turned into knowledge of how it operates—but this is just another way of simulating and querying a part of the AI; critically, knowing how the AI operates is having knowledge that the AI itself does not actually have.
Because now we are mirror-modeling the AI, and turning what the AI is, which isn’t knowledge, into something else, which is.
Taboo “know” and try to ask the question again, because I think you’re engaging in a category error when you posit that, for example, a neural network actually knows anything at all. That is, the concept of “knowledge” as it applies to a human being cannot be meaningfully compared to “knowledge” as it applies to a neural network; they aren’t the same kind of thing. A Go AI doesn’t know how to play Go; it knows the current state of the board. These are entirely different categories of things.
The closest thing I think the human brain has to the kind of “knowledge” that a neural network uses is the kind of thing we represent in our cultural narrative as, for example, a spiritual guru slapping you for thinking about doing something instead of just doing it. That is, we explicitly label this kind of thing, when it occurs in the human brain, as not-knowledge.
You can move your arm, right? You know how to move your arms and your legs and even how to do complicated things like throw balls and walk around. But you don’t actually know how to do any of those things; if you knew how to move your arm—much less something complicated like throwing balls!—it would be a relatively simple matter for you to build an arm and connect it to somebody who was missing one.
Does this seem absurd? It’s the difference between knowing how to add and knowing how to use a calculator. Knowing how to add is sufficient information to build a simple mechanical calculator, given some additional mechanical knowledge—knowing how to use a calculator gives you no such ability.
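To make the calculator analogy concrete: knowing how addition works, column by column with carries, is enough information to specify an adder circuit. A minimal sketch in Python (the helper names are hypothetical), using boolean operations as stand-ins for logic gates:

```python
def full_adder(a, b, carry):
    # One "column" of grade-school binary addition: sum bit plus carry-out.
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add(x, y, bits=8):
    # Ripple-carry adder: chain the columns, least significant bit first.
    carry = 0
    result = 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add(57, 85) == 142
```

Knowing how to press the `+` key on a calculator gives you the same 142, but none of the ability to build the circuit that produced it.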
AI risk denial is denial, dismissal, or unwarranted doubt that contradicts the scientific consensus on AI risk.
This earlier statement from the paper has the same general set of issues as the paper’s later rejection of authority: deniers are wrong because the scientific consensus is against them, yet the consensus of researchers is wrong because they are factually incorrect.
If the citations have anything like the bias the rhetoric has, the paper isn’t going to be useful for that purpose, either.
This concept is not fully formed. It is necessary that it is not fully formed, because once I have finished forming it, it won’t be something I can communicate any longer; it will become, to borrow a turn of phrase from SMBC, rotten with specificity.
I have noticed a shortcoming in my model of reality. It isn’t a problem with the accuracy of the model, but rather there is an important feature of the model missing. It is particularly to do with people, and the shortcoming is this: I have no conceptual texture, no conceptual hook, for attaching nebulous information to people.
To explain what I need a hook for: a professional acquaintance has reciprocated trust. There is a professional relationship there, but also a human interaction; the trust involved means we can proceed professionally without negotiating contractual terms beforehand. However, it would undermine the trust in a very fundamental way to actually treat this as the meaning of the trust. That is to say, modeling the relationship as transactional would undermine the basis of the relationship (but for the purposes of describing things, I’m going to do that anyways, basically because it’s easier to explain that way; any fair description of a relationship of any kind should not be put to a short number of words).
I have a pretty good model of this person, as a person. They have an (adult) child who has developed a chronic condition; part of basic social interaction is that, having received this information, I need to ask about said child the next time we interact. This is something that is troubling this person; my responsibility, to phrase it in a misleading way, is to acknowledge them, to make what they have said into something that has been heard, and in a way that lets them know that they have been heard.
So, returning to the shortcoming: I have no conceptual texture to attach this to. I have never built any kind of cognitive architecture that serves this kind of purpose; my interactions with other humans are focused on understanding them, which has basically served me socially thus far. But I have no conceptual hook to attach things like “Ask after this person’s child”. My model is now updated to include the pain of that situation; there is nothing in the model that is designed to prompt me to ask. I have heard; now I need to let this person know that they have been heard, and I reach for a tool I suddenly realize has never existed. I knew this particular tool was necessary, but have never needed it before.
It’s maybe tempting to build general-purpose mental architecture to deal with this problem, but as I examine it, it looks like maybe this is a problem that actually needs to be resolved on a more individual basis, because as I mentally survey the situation, a large part of the problem in the first place is the overuse of general-purpose mental architecture. I should have noticed this missing tool before.
I am not looking for solutions. Mostly it is just interesting to notice; usually, with these sorts of things, I’ve already solved the problem before I’ve had a chance to really notice, observe, and interact with the problem, much less notice the pieces of my mind which actually do the solving. Which itself is an interesting thing to notice; how routine the construction of this kind of conceptual architecture has gotten, that the need for novel mental architecture actually makes me stop for a moment, and pay attention to what is going on.
It can sometimes be hard to notice the things you mentally automate; the point of automating things is to stop noticing them, after all.
I really, really dislike waste.
But the thing is, I basically hate the way everybody else hates waste, because I get the impression that they don’t actually hate waste, they hate something else.
People who talk about limited resources don’t actually hate waste—they hate the expenditure of limited resources.
People who talk about waste disposal don’t actually hate waste—they hate landfills, or trash on the side of the road, or any number of other things that aren’t actually waste.
People who talk about opportunity costs (‘wasteful spending’) don’t hate the waste, they hate how choices were made, or who made the choices.
Mind, wasting limited resources is bad. Waste disposal is itself devoting additional resources—say, the land for the landfill—to waste. And opportunity costs are indeed the heart of the issue with waste.
At this point, the whole concept of finishing the food on your plate because kids in Africa don’t have enough to eat is the kind of old-fashioned where jokes about it being old fashioned are becoming old fashioned, but the basic sentiment there really cuts to the heart of what I mean by “waste”, and what makes it a problem.
Waste is something that isn’t used. It is value that is destroyed.
The plastic wrapping your meat that you throw away isn’t waste. It had a purpose to serve, and it fulfilled it. Calling that waste is just a value judgment on the purpose it was put to. The plastic is garbage, and the conflation of waste and garbage has diminished an important concept.
Food you purchase, that is never eaten and thrown away? That is waste. Waste, in this sense, is the opposite of exploitation. To waste is to fail to exploit. However, we use the word waste now to just mean that we don’t approve of the way something is used, and the use of the word to express disapproval of a use has basically destroyed—not the original use of the word, but the root meaning which gives the very use of the word to express disapproval weight. Think of the term “wasteful spending”—you already know the phrase just means spending that the speaker disapproves of, the word “wasteful” has lost all other significance.
Mind, I’m not arguing that “waste” literally only means a specific thing. I’m arguing that an important concept has been eroded by use by people who were deliberately trying to establish a link with that concept.
Which is frustrating, because it has eroded a class of criticisms that I think society desperately needs, which have been supplanted by criticisms rooted in things like environmentalism, even when environmentalism isn’t actually a great fit for the criticisms—it’s just the framing for this class of criticism where there is a common conceptual referent, a common symbolic language.
And this actually undermines environmentalism; think about corporate “green” policies, and how often they’re actually cost-cutting measures. Cutting waste, once upon a time, had the same kind of public appeal; now if somebody talks about cutting waste, I’m wondering what they’re trying to take away from me. We’ve lost a symbol in our language, and the replacement isn’t actually a very good fit.