thinking abt how to make:
1. buddhist superintelligence
2. a single, united nation
3. wiki of human experience
more here.
would be nice to have a way to jointly annotate eliezer’s book and have threaded discussion based on the annotations. I’m imagining a heatmap of highlights, where you can click on any and join the conversation around that section of text.
would make the document the literal center of x risk discussion.
of course would be hard to gatekeep. but maybe the digital version could just require a few bucks to access.
maybe what I’m describing is what the ebook/kindle versions already do :) but I guess I’m assuming that the level of discussion via annotations on those platforms is near zero relative to LW discussions
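to make that concrete, here’s a minimal sketch of the data model I’m imagining: highlights anchored to character spans of the text, a thread hanging off each highlight, and a heatmap that’s just a count of overlapping highlights per offset. every name and field here is hypothetical, not something an existing platform exposes.

```python
# Minimal sketch, assuming highlights are anchored to character offsets in the
# book's text. All names and fields below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Highlight:
    start: int    # character offset where the highlight begins
    end: int      # character offset where it ends (exclusive)
    author: str
    thread: list[Comment] = field(default_factory=list)

def heatmap(text: str, highlights: list[Highlight]) -> list[int]:
    """For each character of the text, count how many highlights cover it."""
    counts = [0] * len(text)
    for h in highlights:
        for i in range(max(h.start, 0), min(h.end, len(text))):
            counts[i] += 1
    return counts

def threads_at(offset: int, highlights: list[Highlight]) -> list[Highlight]:
    """Clicking a hot spot returns every highlight (and its thread) covering that offset."""
    return [h for h in highlights if h.start <= offset < h.end]
```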
I guess I’m considering a vastly more powerful being that needs orthogonal resources… the same way harvesting solar power (I imagine) is generally orthogonal to ants’ survival. In the scheme of things, the chance that a vastly more powerful being wants the same resources through the same channels as we do… this seems independent of, or only indirectly correlated with, intelligence. But the extent of competition does seem dependent on how anthropomorphic/biomorphic we assume it to be.
I have a hard time imagining that electricity, produced via existing human factories, is not a desired resource for a proto-ASI. But at least at that point we have comparable power and can negotiate or something. For a superhuman intelligence, which will by definition be unpredictable to us, it’d be weird to think we’re aware of all the energy channels it would find.
I guess I don’t think this is true:
“Technological progress increases number of things you can do efficiently and shifts balance from ‘leave as it is’ to ‘remake entirely’.”
Technological progress may actually help you pinpoint more precisely which situations you want to pay attention to. I don’t have any reason to believe a wiser, more powerful being would touch every atom in the universe.
I appreciate the way you’re thinking, but I don’t share your intuition that the situation of machines next to humans will be worse or deeply different than the situation of humans next to ants. The differences might actually benefit humans. For example, the fact that machines have been in such close contact with us as they’re growing up might point to a potential for symbiosis.
I just think the idea that machines will try to replace us with robots doesn’t totally make sense if you look closely. Before machines are totally superintelligent, while they’re comparably intelligent to us, they might want to use us, because we’ve evolved for millions of years to see and hear and think in ways that could be useful to a digital intelligence. In other words, while they’re comparably intelligent to us, they may compete with us for resources. Once they’re incomparably intelligent, it’s weird to assume they’ll still use the same resources we need for survival. That they’ll ruin our homes because the bricks can be used better elsewhere? It takes much less energy to leave things as they are when they’re not the primary obstacle you face, whether you’re a human or a superhuman intelligence.
So, a self-interested superintelligence could cause really bad things to happen, but it’s a stretch from there to call it the total end of humanity. By the time a machine has superhuman intelligence, vastly more powerful than ours, it’s unclear to me that it would compete with us for resources, or even live or exist along dimensions similar to ours. Things could go really wrong, but the claim that there will be an enormous catastrophe that wipes out all of humanity doesn’t follow for me; the outcomes seem more likely to be weird and spooky, and concluding death feels a little forced.
It feels to me like, yeah, they’ll step on us some of the time. But it’d be weird to me if the entities or units that end up evolutionarily propagating, the things we’re calling machines, conceive of themselves the way we do, look like physical beings, or really compete with us for the same resources we use. At the end of the day there might be some resource competition, but the idea that they will try to replace every person is excessive. Even taking as given all of the arguments up to the claim that machines will have a survival drive, assuming they’ll care enough about us to do things like replace each of us is just strange, you know? It feels forced to me.
I’m inspired in part here by Joscha Bach / Emmett Shear’s conceptions of superintelligence: as ambient beings distributed across space and time.
It just feels to me like the same argument could have been made about humans relative to ants—that ants cannot possibly be the most efficient use of the energy they require from the perspective of humans. But in reality, what they do and the way they exist is so orthogonal to us that even though we step on an ant hill every once in a while, their existence continues. There’s this weird assumption in the book that disassembling Earth is profitable, or just disassembling humans is profitable. But humans have evolved over a long time to be sensing machines in order to walk around and be able to perceive the world around us.
So the idea that a super-intelligent machine would throw that out because it wants to start over, especially as it’s becoming super-intelligent, seems sort of ridiculous to me. A better assumption is that it would want to use us for different purposes, maybe for our physical machinery, maybe for all sorts of other reasons. The idea that it will disassemble us is, I think, an unexamined assumption itself: it’s often much easier to leave things as they are than to fully replace or modify them.
Does Eliezer believe that humans will be worse off next to superintelligence than ants are next to humans? The book’s title says we’ll all die, but on my first read, the book’s content suggests only that we’ll be marginalized.
thanks for sending science bench in particular.
I’m thinking often about whether LLM systems can come up with societal/scientific breakthroughs.
My intuition is that they can, and that they don’t need to be bigger or have more training data or have different architecture in order to do so.
Starting to keep a diary along these lines here: https://docs.google.com/document/d/1b99i49K5xHf5QY9ApnOgFFuvPEG8w7q_821_oEkKRGQ/edit?usp=sharing
I’m interested in what it’d look like for LLMs to do autonomous experiments on themselves to uncover more about their situations/experiences/natures.
Made this social camera app, which shows you the most “meaningfully similar” photos in the network every time you upload one of your own. It’s sorta fun for uploading art; idk if it has any real use.
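By “meaningfully similar” I mean something like embedding similarity. A rough sketch of how that ranking could work, with embed() as a placeholder for whatever image-embedding model is used (this isn’t the app’s actual internals):

```python
# Sketch: rank the network's photos by cosine similarity to a newly uploaded one.
# embed() is a placeholder for an image-embedding model; only numpy is assumed.
import numpy as np

def embed(image_path: str) -> np.ndarray:
    """Placeholder: return an embedding vector for the image at image_path."""
    raise NotImplementedError

def most_similar(new_photo: str, network_photos: list[str], k: int = 5) -> list[str]:
    query = embed(new_photo)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(network_photos, key=lambda p: cosine(query, embed(p)), reverse=True)
    return ranked[:k]
```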
agreed, context is maybe the bottleneck.
i wonder if genius ai—the kind that can cure cancers, reverse global warming, and build super-intelligence—may come not just from bigger models or new architectures, but from a wrapper: a repeatable loop of prompts that improves itself. the idea: give an llm a hard query (eg make a plan to reduce global emissions on a 10k budget), have it invent a method for answering it, follow that method, see where it fails, fix the method, and repeat. it would be a form of genuine scientific experimentation—the llm runs a procedure it doesn’t know the outcome of, observes the results, and uses that evidence to refine its own thinking process.
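a minimal sketch of that loop, with llm() standing in for any chat-completion call; the prompt wording, the round count, and the lack of a real stopping rule are all placeholders, not a tested procedure:

```python
# Sketch of the loop: invent a method, follow it, find where it fails, fix it, repeat.
# llm() is a placeholder for whatever model/API is used.
def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in a real chat-completion call here

def solve_with_self_improving_method(query: str, rounds: int = 5) -> str:
    method = llm(f"Invent a step-by-step method for answering this question:\n{query}")
    answer = ""
    for _ in range(rounds):
        answer = llm(f"Follow this method exactly and answer the question.\n"
                     f"Method:\n{method}\n\nQuestion:\n{query}")
        critique = llm(f"Where does this method fail or produce weak reasoning?\n"
                       f"Question:\n{query}\n\nMethod:\n{method}\n\nAnswer:\n{answer}")
        method = llm(f"Revise the method to fix these failures. Return only the new method.\n"
                     f"Method:\n{method}\n\nFailures:\n{critique}")
    return answer
```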
the time of day i post quick takes on lesswrong seems to matter more for engagement than the quality of the take does
One underestimated approach to making superintelligence: designing the right prompt chain. If a smart person can come up with a genius idea/breakthrough through the right obsessive thought process, so too should a smart LLM be able to come up with a genius idea/breakthrough through the right obsessive prompt chain.
In this frame, the “self-improvement” which is often discussed as part of the path toward superintelligence would look like the LLM prompt chain improving the prompt chain, rather than rewiring the internal LLM neural nets themselves.
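A sketch of what that could look like: the chain is just an ordered list of prompt templates, and one extra LLM call rewrites the chain based on how the last run went, leaving the model’s weights untouched. All of this is hypothetical scaffolding, not an existing system.

```python
# Sketch: "self-improvement" operates on the prompt chain, not on the model.
# llm() is a placeholder for whatever model/API is used.
import json

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in a real chat-completion call here

def run_chain(chain: list[str], task: str) -> str:
    # e.g. chain = ["Brainstorm approaches to: {input}",
    #               "Pick the best approach and carry it out: {input}"]
    result = task
    for template in chain:
        result = llm(template.format(input=result))
    return result

def improve_chain(chain: list[str], task: str, output: str) -> list[str]:
    # One meta-level call: ask the model to rewrite its own chain given how the last run went.
    revision = llm(
        "Here is a task, a chain of prompt templates (a JSON list), and the output the chain produced.\n"
        "Rewrite the chain so it would produce a better output. Return only a JSON list of templates.\n"
        f"Task: {task}\nChain: {json.dumps(chain)}\nOutput: {output}"
    )
    return json.loads(revision)
```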
you’re keyed into what i think is the most important question in the world
another intuition pump for why goodness (or empathy) might compete in a “locust” world:
In the past we weren’t in spaces which wanted us so desperately to be single-minded consumers.
Workplaces, homes, dinners, parks, sports teams, town board meetings, doctors’ offices, museums, art studios, walks with friends—all of these are settings that value you for being yourself and prioritizing long-term cares.
I think it’s really only in spaces that want us to consume, and want us to consume cheap/oft-expiring things, that we’re valued for consumerist behavior and short-term thinking. Maybe malls want us to be like this to some extent: churn through old clothing, buy the next iPhone, keep our sights set constantly on what’s new. Maybe working in a newsroom is like this. But feed-based social networks are most definitely like this. They reward posts that are timely and outrageous and quickly expiring, posts which get us to keep scrolling. And so we become participants who keep scrolling, keep consuming, and detach from our bodies and long-term selves.
So, I think it’s cuz of current social media architectures/incentive structures that individual humans are more nearsighted today than maybe ever.
I need to think more about what it is abt the state of modern tech/society/culture that has allowed these feed-based networks to proliferate.
thank u, haven’t really