Necromancy may be a subset of some of your vaguer examples, like “means to an end”, “want to be friends with a computer”, or “desire for power”, but IMO it’s a distinct subcategory. One way that humans react to grief is to try to bring back the decedent, and AGI currently looks like it could either do that or build something that could. I personally expect that functional necromancy and AGI will likely come hand in hand, in any scenario where the AGI doesn’t just wipe us out entirely. If necromancy (or other forms of running a human on a computer) comes first, it seems extremely likely that a rich smart nerd uploaded to enough compute would bootstrap themself into being the first AGI, because rich smart nerds so often get that way by being more about “can I?” than “should I?”. And if AGI comes first and decides to behave at all cooperatively with us for whatever reasons it might have, solving and undoing death are among the most boringly predictable things we tend to ask of entities that we even think are superhuman.
I enjoy having conversations with my past or future self.
What would past-me think if they could see what I’m up to right now—what would surprise, delight, or disappoint them? How do I justify those decisions, and do those justifications stand up to their scrutiny?
What will future-me wish that I’d considered about decisions that I’m currently making? Is there anything about the present that they would ask me to do differently?
Of course they’re both just mental models or shoulder advisors, but they’re still mostly alright to talk to—and if they aren’t, if I’m being or becoming someone whose company I wouldn’t want to keep, that’s a useful signal in itself.
One prompt that often helps put things in perspective, though it’s more of a thought experiment than a writing exercise for me personally, is to show my mental model of my distant ancestors around in my everyday life. Trying to figure out how I would explain mundane things to them often shows me aspects of those things which I had not considered before.
I wonder where one would buy large quantities of thin aluminum c-channel of a dimension suitable for holding filters. With a hacksaw and miter box, it would be straightforward to construct frames for your preferred size of filter. Then the frames could be taped together, instead of taping the filters themselves, constructing basically the thing you would like to buy. Some enterprising individual with a band saw and drill press could make custom filter frame kits if people use varying sizes of filter.
Did you choose the filter size in this project based on cost efficiency, or for compatibility with other appliances in your home, or based on some other metric? When I do things with filters, I like using the same size that my HVAC takes, because then any extras can go in the whole-house fan rather than risking them going to waste. I assume that others might choose filters the same way, but I realize that I’ve never actually asked. This seems to matter because if everyone uses the same size filters and fans, mass producing the frame kits would make sense, but if folks want to retrofit existing fans with filters that match their other home appliances, custom-building kits would likely be preferable.
I zoned out halfway through your attempt to justify benign boundary violations, because the defense feels like such implicature. The first section of your post built a mental model for me in which I heard you saying “I would like reassurance that I belong to a group which sets and follows social norms distinct from those of society at large”, to which I reply, “well duh”.
I was recently introduced to the concept of geek social fallacies, and the “no valid and wholesome social group can have norms other than those of the wider society” thing that you seem so (justifiably, imo, unfortunately) worried about getting slapped with for writing this feels like it rhymes closely with those.
Thank you for discussing a thing online which can often be socially dangerous to discuss. I think you did it well.
(typo: frought → fraught)
I found that some parts of the sequences felt like cliff-hangers and demanded that the next post follow, but for the most part I could jump around in them to whatever piqued my interest at the time.
Logistically, tracking what I had left to read of them was fiddly: I ended up putting the titles of and links to all the sequence posts (scraped from some overview page) into a checklist in my notes app, then trying to remember to tick them off as I read them. If a feature for this was built into LessWrong itself, I was unaware of it at the time.
In May of 2022? I would consider it a high priority to unplug them from all sources of bad news about current events. This could probably be done most effectively by doubling down on a hobby which they already value, and taking them “off-grid” in a moderately sized group of supportive and relatively values-aligned individuals. The cash could be used to fake some sort of scholarship or fellowship award to them to basically pay them to do something they want to do more of, and remove them from whatever employment is probably making them unhappy already.
Thank you for clarifying! This highlights an assumption about AI so fundamental that I wasn’t previously fully aware that I had it. As you say, there’s a big difference between what to do if we discover AI, vs if we create it. While I think that we as a species are likely to create something that meets our definition of strong AI sooner or later, I consider it vanishingly unlikely that any specific individual or group who goes out trying to create it will actually succeed. So for most of us, especially myself, I figure that on an individual level it’ll be much more like discovering an AI that somebody else created (possibly by accident) than actually creating the thing.
It’s intuitively obvious why alignment work on creating AI doesn’t apply to extant systems. But if the best that the people who care most about it can do is work on created AI, without yet applying any breakthroughs to the prospect of a discovered AI (where we can’t count on knowing how it works, or on being able to ethically create and then destroy a bunch of instances of it, etc.)… I think I am beginning to see where we get the meme of how one begins to think hard about these topics and shortly afterward spends a while being extremely frightened.
I notice that I am confused by not seeing discourse about using AI alignment solutions for human alignment. It seems like the world as we know it is badly threatened by humans behaving in ways I’d describe as poorly aligned, for an understanding of “alignment” formed mostly from context in AI discussions in this community.
I get that AI is different from people—we assume it’s much “smarter”, for one thing. Yet every “AI” we’ve built so far has amplified traits of humanity that we consider flaws, as well as those we consider virtues. Do we expect that this would magically stop being the case if it passed a certain threshold?
And doesn’t alignment, in the most general terms, get harder when it’s applied to “smarter” entities? If that’s the case, then it seems like the “less smart” entities of human leaders would be a perfect place to test strategies we think will generalize to “smarter” entities. Conversely, if we can’t apply alignment findings to humans because alignment gets “easier” / more tractable when applied to “smarter” entities, doesn’t that suggest a degenerate case of minimum alignment difficulty for a maximally “smart” AI?
I like that spelling “gracefwly”. It reads right phonetically while looking cooler than the usual spelling.
I would unironically suggest discussing this type of shyness with a good therapist or counselor, because it can arise from some rather detrimental habits of thought that you might benefit from identifying and thus gaining the ability to choose whether you want to modify them.
Levels 2 and up in your graphic read to me as ultimately fears of immaturity, and misplacement of personal responsibility.
Consider framing level 2 as fear of interacting with someone who lies to you about their preferences for whatever reason. Nice people lie for good reasons sometimes; it doesn’t automatically make them bad people or something. But if you’re interacting with someone who chooses to lie to you, and then suffers as a result of having made that choice, do you really want to make that suffering your problem?
Consider framing level 3 as an extreme desire to control someone else’s experiences. By not approaching the person, you’re saying that your idea of what’s best for them is more important than giving them the choice of whether or not they want to interact with you. You’re doing something uncomfortable for yourself in an attempt to control another person’s experience in a way that doesn’t seem to me like it ought to be any of your business. Try generalizing this to other parts of life, to see its absurdity: what if the stranger next to you in the grocery store had a really bad experience with your favorite food one time? Should you try to protect them from being reminded of that bad experience by not buying your favorite food, lest they see it in your cart? Not a perfect example, for sure, but it’s another case of trying to control someone else’s experiences in a way that’s unreasonable to expect of yourself and ultimately not good for you.
Level 4 is like a combination of the two: fearing that you can’t let other people make their own decisions, and living in a world where adults shouldn’t be given choices lest they suffer due to the consequences of their own actions.
If you insist on holding a paradigm where you’re responsible for others’ experiences to the point of withholding choices from them, it seems you could turn it around as an argument for social interaction: What if these poor incompetent hypothetical people, who can’t be trusted to say what they think or do what they prefer, actually want your friendship but are too shy and untrustworthy to pursue it first? What harms are you bringing them to by withholding your company?
As an individual concerned about food shortages, you always have the option of tailoring how you buy groceries. A trick I learned from Reddit for stocking up on long-shelf-life foods: watch what food packaging leaves your house in the trash or recycling. Check the best-before date on the package each time you eat something that stores for a relatively long time, especially canned foods and dry goods. This way you can find the approximate rate at which you go through a given item—maybe you usually use 1 box of pasta and 1 can of soup each week for an easy casserole. Then for each item, you get a feel for how much longer you could’ve kept it before its best-by date: I’ve noticed that the pasta I buy often has a date 2-3 years in the future, and canned goods often claim 1-2 years. So if pasta is good for 2 years and I eat 1 box a week, that means I could keep 104 boxes of it if I had the space, and still use up all of it before its best-by. If I eat 1 can of a given soup per week, and it’s best by 1 year in the future, I could keep 52 cans if I had the space.
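To make that arithmetic concrete, here’s a minimal Python sketch of the same calculation; the items, weekly consumption rates, and shelf lives below are made-up examples rather than recommendations.

```python
# Stockpile ceiling: how much of an item you could keep on hand and still
# use all of it before its best-by date, given how fast you actually eat it.
# All of these numbers are invented examples.

pantry = {
    # item: (units eaten per week, typical shelf life in weeks)
    "pasta (1 box)": (1, 104),  # dated roughly 2 years out
    "canned soup":   (1, 52),   # dated roughly 1 year out
}

for item, (units_per_week, shelf_life_weeks) in pantry.items():
    ceiling = units_per_week * shelf_life_weeks
    print(f"{item}: up to {ceiling} units, if space allowed")
```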
Now, most foods are safe and nutritious after their best-by or sell-by dates, sometimes long after. But the date is a good lazy rule of thumb for “the manufacturer is confident the packaging will protect it for this long” when you’re getting started. Once you figure out how much stuff you can keep around and still use it in time, figure out how much space you’re willing to commit to your personal food security insurance. To figure out which long-shelf-life items are best for you to store in your limited space, consider how they come together into meals (would this item be useless to you without a specific fresh ingredient?) and whether eating from the selection you choose to store would offer balanced nutrition.
Once you’ve calculated what you want to store based on these constraints, buy a little extra of your storage items each time you grocery shop, until you’ve reached your target quantities. Then all you have to do is remember to rotate through them: eat the oldest of each item first. If you learn that the enjoyment you get from a particular item declines when it reaches a certain age that’s younger than the manufacturer’s stated use-by, treat that as its new de facto use-by date going forward. If your dietary needs change, like if you discover a new allergy or decide you don’t like a given item any more, you can easily donate your extras to a local food pantry because everything you store will still be in date.
The trick to this process is to make it as easy as possible. If there’s a part of the food storage process that makes you look for an excuse to stop, change it so you don’t hate the process. Having food on hand is important preparedness for natural disasters as well as man-made ones, and being able to drop your grocery bills to 0 for a few weeks or months while still eating well is a wonderful trick to keep up your sleeve in the world of personal finance.
If there are new electronics that you want primarily because they’re new, like the latest phone, stocking up early won’t buy you much benefit. But if there are secondhand or older electronics that you’re certain you’ll want in the foreseeable future, increased competition for them in the future may suggest that prices will be better now than later.
Thank you for explaining. What I hear in this is that rationality also works like an esoteric hobby, and for people who want more friendships built on commonalities, adding an uncommon use of time is counterproductive.
I think I don’t experience the same negative effects because my “it’s good to interact cooperatively with people different from oneself” needs are met instead by some location-based volunteering hobbies. I live in an area with low enough population density that “vaguely competent and willing to show up and do stuff” buys one a lot of goodwill and quality time with others, which is a whole other social hack of its own :)
“Rationality has, in fact, harmed me more than it has helped.”
This framing causes me to wonder whether I experience similar effects but attribute them to causes other than Rationality itself. Would you be willing/able to share some examples of harms you expect that you would not have experienced if you hadn’t undertaken this study of correct thought?
Yes.
A particularly common instance of this in my life is that the tools of thought which I learned from the Sequences cause me to actually use spreadsheets more often. It goes something like this:
- I think that I want a thing.
- I shop for the thing, and find that there are far too many options, all of which have some people claiming they’re the worst thing ever (one-star reviews). I feel worried, intimidated, and afraid of actually getting the thing, because I’ll get the wrong one, be stuck with it, and it’ll be my own fault.
- I step back, and think harder about what I actually want the thing to do. I attempt to formalize a framework for comparing the different options. I feel gently annoyed by my own uncertainty about what I actually want, but this annoyance transforms into confidence or even pride in my own thoroughness as I proceed through this step.
- Surprisingly often, this more-intentional framing of the problem causes me to realize that I can actually solve it with stuff I have on hand. For instance, a home-row letter keycap on my laptop keyboard recently broke. Intentionally attempting to think rationally about the problem caused me to realize that I could move an infrequently-used symbol keycap to the home row and continue typing comfortably. When this happens, I feel brilliant, like I’ve discovered a tiny exploit in the interface between my expectations and the world.
- When I still want to go get the thing, I attempt to quantify the relevant aspects of the thing into columns on a spreadsheet, and my options for getting the thing into the rows. By filling out each cell, I can compare, score, and sort the different options (a rough version of that scoring is sketched after this list), and better visualize what information is omitted by advertisements which otherwise look highly tempting. I often feel surprised and annoyed that an option which looked like it’d probably be the best is actually a non-starter due to some essential trait being wrong or undocumented.
- I then get the thing which appears to represent the best compromise between cost and features. I feel confident that I have gotten the best thing I could find, and that if it turns out to be inadequate, the problem will be due to factors outside of my control.
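For anyone who wants to see the bones of that spreadsheet step as code, here’s a minimal Python sketch of the scoring; the product, criteria, weights, and scores are all invented for illustration, and an actual spreadsheet does the same job.

```python
# Toy version of the "options as rows, criteria as columns" spreadsheet.
# Every option, criterion, weight, and score here is made up.

criteria_weights = {"price": 0.4, "durability": 0.4, "looks": 0.2}

options = {
    # option name: score per criterion, on a 0-10 scale
    "Brand A kettle": {"price": 8, "durability": 5, "looks": 7},
    "Brand B kettle": {"price": 4, "durability": 9, "looks": 6},
    "Used kettle":    {"price": 10, "durability": 6, "looks": 3},
}

def weighted_score(scores):
    # Missing cells count as 0, which tends to surface undocumented traits.
    return sum(weight * scores.get(criterion, 0)
               for criterion, weight in criteria_weights.items())

# Sort from best to worst overall score, like sorting the spreadsheet rows.
for name in sorted(options, key=lambda n: weighted_score(options[n]), reverse=True):
    print(f"{name}: {weighted_score(options[name]):.1f}")
```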
Before going down this rabbit hole of big-R Rationality, I knew enough about cognitive biases and similar effects to feel distrustful of brains, including my own, in situations where I noticed that such distortions seemed relevant. But concrete, everyday Rationality has given me tools to circumvent those biases—mitigation rather than just detection, treatments rather than just awareness.
It is funny from the perspective of a member of the clique, because if someone tries that kind of thing against a clique member, their friends can retaliate. It is deeply un-funny to an outsider, who can expect no such safety net. Humor is situational, and often extremely revealing about people’s underlying assumptions.
A visual artist once explained a similar phenomenon to me, where one’s ability to recognize quality work is always a step ahead of one’s ability to create things to that standard. They were explaining why they were so aware of room for improvement in their work even though it seemed great to me, a novice at their medium.
It sounds like the “just be yourself” advice might emerge when someone’s ability to do the thing matches or exceeds their ability to recognize quality or excellence in the thing. I’m surprised that people who get to that point don’t seem confused or frightened by their inability to articulate how they’re getting their results.
Is this meant to disincentivize downvoting, or is that accidental? Pinning a monetary value to votes makes me feel like downvoting unclear, inaccurate, or off-topic content is literally taking money away from someone.
And, half-jokingly: If a post gets a net negative number of votes, it implies that the author would be expected to pay the site.
I personally keep my identities separated across platforms as well.
To address the problem of naming, I keep a list of names in my notes app. Sometimes strings or sounds cross my path which light the “this might make a good name someday” bulb in my head, but I can’t recall them on command without writing them down then looking them up.
I find it helpful to think adversarially about what I say: If I wanted to track the person behind any of my accounts to a physical location, how would I do it? Being my own attacker helps me draw lines between what I find it appropriate to share where. Every bit of information that you share under the same identity increases an attacker’s odds of finding you. For instance, sharing what company you work at is safe. Sharing what type of work you do is safe. Sharing what age you are, what gender you are, what ethnicity you are, what part of town you live in—all safe on their own, but if someone had all those pieces at once, they could probably pinpoint you as a unique individual and look up your details in public records to find your address.
I decide whether to disclose information by how unambiguously it pins a given account to my physical identity or to another account. It’s almost always fine to share superficial detail about a hobby, as long as a lot of people do it. For instance, you can share that you like playing reed instruments, or that you enjoy keeping fish. But if you get into greater detail—if you build and play reproduction 17th century oboes, or if you’re a regional champion competitive koi breeder—you’ve probably doxed yourself.
It’s also worth being clear about why you’re separating your identities. I do it because I plan to live in the same place for a rather long time, so having my location found by anyone motivated to cause me problems would be disproportionately inconvenient.
Escalating acquaintanceship into friendship involves increasing trust and disclosing more personal information, regardless of whether it happens in physical or digital places. I think you’d do well to look closely at what you think others would gain by filling out the quiz that you propose, and look for ways that you could offer that to them directly with a lower chance of accidentally sharing more than you want to with someone you’d prefer to keep at a greater distance from yourself.
Insight is hard to talk about, and even harder to sound sane and logical while discussing. We could perhaps model “having an insight” as 2 stages: asking an interesting new question, and producing a useful answer to that question. These 2 stages can often be at odds with each other: improving the skill of making your answers more useful risks falling into habits of thought where you don’t ask certain possibly-interesting questions, because you erroneously assume that you already know their whole answers. Improving the skill of asking wild questions risks forming too strong a habit of ignoring the kind of common sense that rules out questions with “that shouldn’t work”, even though that same common sense is essential to formulating a useful answer once an interesting question is reached.
The benefits of erring toward the “produce useful answers” skillset are obvious, as are the drawbacks of losing touch with reality if one fails to develop it. I think it’s easy to underestimate the benefits of learning the skills which one can use to temporarily boost the “ask interesting questions” side, though. Sadly most of the teachable skills that I’m aware of for briefly letting “ask interesting questions” override “produce useful answers” come packaged in several layers of woo. Those trappings make them more palatable to many people, but less palatable to the sorts of thinkers I typically encounter around here. The lowest-woo technique in that category which comes to mind is oblique strategies.
“A good rationalist always questions what her teacher says.”
Why does Saundra believe this? I’d hazard the guess that her teacher said it to her.
The axioms that we pick up before we learn to question new axioms are the hardest to see and question. I wonder if that’s part of why “smarter” people often seem to learn to question axioms earlier in life: less time spent getting piled with beliefs that were never tested by the “shall I choose to believe this?” filter, because the filter didn’t exist yet when the beliefs were taken on.