Hi. I’ve joined late, and posted on the “Hi” thread late.
Hi! I posted on the other thread that I was around, but I guess I should introduce myself.
I guess the weirdest thing about me (relative to the community) is my age—I’m still in high school and have been lurking LW since its creation and OB before that… I’m in the Montgomery Blair Magnet program, which has pretty thoroughly taught me that I’m by no means especially smart.
I got interested in the whole rationality thing after reading some articles (found via Hacker News) that were tangentially related to the more philosophical pieces I was interested in.* The metaethics sequence seemed much less forced than most other treatments of morality I had heard (mostly from a Christian background), which only piqued my interest further.
Short note: Harry Potter and the Methods of Rationality is pretty much the best introduction to rationalist topics for people my age that I’ve ever seen. I recommended it to a few friends, one of whom started reading it, lurking LW, and convincing others to read it as well.
The article most tangibly helpful in my life was http://lesswrong.com/lw/i0/are_your_enemies_innately_evil , mainly in that it helped me realize that everyone seems reasonable to themselves, and that you don’t get anywhere when you argue as if they’re totally wrong. It’s helped a lot in resolving interpersonal issues, and is probably one of the major factors in my being elected President of my school’s FIRST robotics team.
*My interest in philosophy started about 3 years ago, mostly as a result of my freshman physics class and reading Gödel, Escher, Bach.
Would I be right in thinking that this implies that you can’t apply “could” to world()? I.e., “our universe could have a green sky” wouldn’t be meaningful without some sort of metaverse program that references what we would think of as the normal world program?
Or have the utility computed also depend on some set of facts about the universe?
I think both of those would require that the agent have some way of determining the facts about the universe (I suppose they could figure it out from the source code, but that seems sort of illegitimate to me).
It seems like there are two kinds of “could” at work here: one that applies to yourself and is based on consistent action-to-utility relationships, and another that involves uncertainty about which actions cause which utilities (based on counterfactuals about the universe).
What if instead of assuming your Xs, you got them out of the program?
To simplify, imagine a universe with only one fact. A function fact() returns 1 if that fact is true, and 0 if it’s not.
Now, agent can prove statements of the form “fact()==1 and agent()==A implies world()==U”. This avoids the problem of plugging in a false premise: agent doesn’t start by assuming X as a statement of its own from which it can derive things; it looks at all the possible outputs of fact(), then sees how each combines with its action to produce utility.
Trivially, if agent has access to the code of fact(), then it can just figure out what that result would be, and it would know what actions correspond to what utilities.
Otherwise, agent() could use a prior distribution (or magically infer some probability distribution over whether the fact is true), then choose its action via a normal expected utility calculation.
Note: The above departs from the structure of the world() function given here in that it assumes some way of interacting with world (or its source code) to find out fact() without actually getting the code of fact. Perhaps world is called multiple times, and agent() can observe the utility it accumulates and use that (combined with its “agent()==A and fact()==1 implies world()==U” proofs) to figure out whether fact is true.
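To make this concrete, here’s a minimal Python sketch of the toy setup above. It’s my own illustration, not code from the original post: the two-argument world(), the payoff table, and the prior are all invented for the example, and direct evaluation of world(a, f) stands in for the proof search over “fact()==f and agent()==A implies world()==U” statements.

```python
import itertools

def fact():
    # The single fact about this toy universe. In the interesting
    # case, agent() can't inspect this source directly.
    return 1

def world(action, f):
    # Utility depends jointly on the agent's action and the fact.
    # This payoff table is made up for illustration.
    payoffs = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 10}
    return payoffs[(action, f)]

def agent(p_fact_true=0.5):
    # Enumerate every (action, fact-value) pair and record the
    # "fact()==f and agent()==A implies world()==U" statements.
    # Direct evaluation stands in for proving them.
    provable = {(a, f): world(a, f)
                for a, f in itertools.product([0, 1], [0, 1])}

    # Without access to fact()'s code, fall back on a prior over the
    # fact and pick the action with the highest expected utility.
    def expected_utility(a):
        return (p_fact_true * provable[(a, 1)]
                + (1 - p_fact_true) * provable[(a, 0)])

    return max([0, 1], key=expected_utility)

print(agent())  # -> 1: the high payoff at (action=1, fact=1) dominates here
```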
Interestingly, while this allows agent to apply “could” to the world (“It could rain tomorrow”), it doesn’t allow the agent to apply “could” to things that don’t affect world’s utility calculation (“People could have nonmaterial souls that can be removed without influencing their behavior in any way”).
I agree that anti-akrasia tactics should be a major part of most rationality power tool sets. Being in high school, the vast majority of things that I do don’t really hinge on the likelihood of specific things (politics, technologies, etc.) happening in the near future.
Akrasia, on the other hand, often gets in the way of things that I’m trying to do, and seems like a major roadblock in turning any thought into useful action.
Is an illusion something other than a system failing to correctly represent its otherwise accurate sensory input?
Particularly if it happens in a specific and reproducible way, so that even though the information it gathers is accurate, said information is internally represented as being something different and inaccurate.
Because I can write programs that have bugs that are pretty much that.
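Here’s a sketch of the kind of bug I mean (the names and numbers are made up for illustration): the “sensory input” is perfectly accurate, but the internal representation of it is wrong in a specific, reproducible way.

```python
def read_temperature_celsius():
    # Accurate sensory input: the room really is 20 degrees C.
    return 20.0

def internal_model(reading_c):
    # Bug: the Celsius-to-Fahrenheit conversion drops the +32 offset,
    # so the system reliably "perceives" 68 F as 36 F.
    return reading_c * 9 / 5  # should be reading_c * 9 / 5 + 32

perceived = internal_model(read_temperature_celsius())
print(perceived)  # 36.0, despite the input being completely accurate
```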
From a consequentialist standpoint, I agree that leaving her happy is better than leaving her crying.
However, it’s not definitely right to comfort people using religious ideas you know are wrong, unless you intend to correct them later or something. Even then it’s just hazy.
Where it gets really muddled though, is where nothing you’re saying is necessarily false, but it wouldn’t seem true without your mentioning it. Like, the link between reasserting her “caring nature”, donating to the charity, and dealing with her father’s death. And how she can reconstruct her life and assert her values by combining them.
She would not come up with that on her own, but once she starts thinking about it/herself that way, I think there are compelling arguments that it actually becomes that way. Or at least feels that way, and lacks objective evidence to the contrary. Which might be enough in self-concept stuff?
Sorry if the last paragraph (or two) is a bit mysterious or hard to follow—I’m confused about the subject.
It’s easier to tell that something is unhealthy than whether it’s optimally healthy. Coughing up blood is worse than not doing so, but is good stamina better than increased alertness?
(I’d posit that) Most moral arguments are over whether something is immoral or not, and I think that a lot of the time those can be related to facts.
Because it hurts like fuck, that’s why.
Totally agreed, but it’s also been a damn good motivator for a lot of personal changes in my life. Like growing a backbone.
Granted, that sort of stuff might not be necessary if all love was requited, and if I ever can’t find stuff I want to change about myself then I could imagine wanting this drug.
A lot of this is probably going to sound incredibly unconvincing. I’d assert that I would have found it unconvincing had I not gone through it.
The first time around it was mostly a matter of me realizing that it was hopeless with this girl and that I have better things to do than worry. Not particularly vertebral, but sort of significant for me nonetheless.
The next time was a bit weirder—there was someone who was a friend that I got a crush on, then we dated for a while, then split up, then at various times started dating again before I ultimately wound up in the friend zone. (I’d like to mention that she was pretty open about the whole non-interested not-seriousness of everything after the initial split, and it was more my pigheadedness that allowed it to continue). At various points during that, she would become somewhat interested in other guys.
After one incident (inviting one of said interests to a meeting with me), I decided that I’d had enough of it and called her out on it. I then actually accepted that she wasn’t interested, and had backbone enough to stop trying to accommodate her in every way possible. Following up, I became more secure in myself in general, more assertive and demanding of my own interests, and less worried about the opinions that other people don’t actually have, because on average they’re not interested enough to judge you. So overall less of a pushover.
Not much; in retrospect it just doesn’t seem as important (though it felt that way at the time), and the backbone growth didn’t particularly propagate through my life, apart from ending that whole middle school/high school like-someone-but-never-do-anything-about-it thing.
Short answer, yes.
Long answer, we’re good friends now, and it’s working out much better with a backbone. The main reason the relationship didn’t work is that a lot of what Robin Hanson speculates about “mating behavior” is true of her (to the point that when I explained those ideas to her she thought I was just being ridiculously insightful and empathetic). Other than that, she’s really fun to be around.
On the subject of pain-moment badness: if pain becomes more common (even if it gets forgotten), then at any given instant I’m more likely to be in pain than otherwise, which I would consider worse, even if I forget it afterwards.
So torturing people then removing the memory is a bad thing, as a general rule.
I’m still a bit shaky on how that affects question 2.
You go on a lot about birds and frogs, but including so many examples of writing about them seems sort of superfluous to me. I found the two concepts pretty easy to think about, and the added detail didn’t do much to increase my understanding. (You cite them later, which is great, but I feel like where the referred-to idea is vital you could just say “___ said __” and summarize the point you want to use, rather than give us their whole quote to wade through.)
Beavers, on the other hand, were introduced abruptly, and then not explained nearly as much. The Beaver classification seemed like an interesting idea to me, but I was sort of disappointed by their coverage.
The article raises some interesting ideas, but I feel like you did a disservice to them by focusing so much on birds and frogs, and so little on beavers, which seem to be the more novel part of your analysis.
I hope that’s helpful.
I entirely agree with the part about organizations being smarter than people. In terms of actually being able to steer the future into more favorable regions, I’d say that organizations are smarter than the vast majority of humans.
To use a specific example, my robotics team is immensely better at building robots than I am (or anyone else on the team is) on my own. Even if it messes up really really badly it’s still better at building a robot in the given constraints (budgetary constraints, 6 week time span) than I am.
I can see an argument being made that organizations don’t very efficiently turn raw computational power into optimization though.
I’m going to split intelligence into an optimizer (the ability to more effectively reach a specified goal) and an inferencer (the ability to process information into accurate models of reality).
There are weaknesses of the team as an inferencer. It seems (largely) unable to remember things beyond the extent that the individuals involved with the team do, and all connecting and integrating of information is ultimately done by individuals.
Though the organization does facilitate specialization of knowledge, and conversations produce better ideas than an individual working alone would, I don’t think it’s a fundamental shift. It seems more to be augmenting a human inferencer and combining those results for better optimization, rather than a fundamental change in cognitive capability.
To illustrate, breakthroughs in science are certainly helped by universities, but I doubt that you can make breakthroughs significantly faster by combining all of the world’s universities into one well-organized super-university. There’s a limit to how brilliant people can be.
That being said, I’m pretty sure that the rate of incremental change could be drastically improved by a combination like that.
My 2 cents.
Maybe, if you discount the organization of “modern civilization.” There’s certainly a fundamental shift between self-reliant generalists and trading specialists. But is the difference between programmers in a small company and programmers working alone “fundamental”? Possibly not, though I’d probably call it that.
I avoided this example because I don’t have a particularly good goalset for modern civilization to cohesively work towards, so discussing optimization is sort of difficult.
In terms of optimization to my standards, I agree that it’s a huge shift. I can get things from across the world shipped to my door, preprocessed for my consumption, at a ridiculously low cost. But in terms of information-processing ability, I feel like it’s not that gigantic of a deal. It processes way more information, but with any individual piece of data it can’t do much beyond the capabilities of its constituent people (like eyeballing a least-squares regression line, or properly calculating a posterior probability without explicitly running an outside algorithm). (Note: lots of fuzzy linguistic constructions in the previous two sentences. I notice some confusion.)
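For instance, here’s the sort of posterior calculation I have in mind, sketched with toy numbers of my own choosing. Neither an individual nor an organization gets this right by gut feel; someone has to explicitly run Bayes’ rule:

```python
p_h = 0.01             # prior: P(hypothesis)
p_e_given_h = 0.9      # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.1  # false-positive rate: P(evidence | not hypothesis)

# Bayes' rule: P(h | e) = P(e | h) * P(h) / P(e)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # ~0.083, far lower than most gut answers
```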
Now, would it be best to have one super-university? Probably not. One of the main benefits of top universities is their selectivity: if you’re at Harvard, you probably have a much higher opinion of your colleagues than if you’re at a community college. There seem to be additional benefits from clustering, but with diminishing returns (which eventually become negative).
I wasn’t being clear, sorry. Concentrating your best people does help, but I don’t think you can get the equivalent of the best people by just clustering together enough people, no matter how good your structure is.
I worded that too strongly. You get diminishing returns on possible breakthroughs after a certain point. You get more effective smartness, but it’s not drastically better to the extent that a GAI would be.
Advice for a Budding Rationalist
Thanks a lot! I think this is the one I saw a while ago.
I don’t think he necessarily has to spend all 11 of those years just waiting; there are plenty of other things for a plotter of his caliber to do in that time to advance his plans.
This probably mirrors Eliezer’s life too much, but he could have put world domination on hold while he learns more foolproof (or less destructive) methods of world domination.