Got any evidence?
TsviBT (Tsvi Benson-Tilsen)
I’m not sure how to integrate such long-term markets from Manifold. But anyway, that market seems to have a very vague notion of emulation. For example, it doesn’t mention anything about the emulation doing any useful cognitive work!
I guess that falls under “value drift” in the table. But yeah, I think that’s extremely unlikely to happen without warning, except in the case of brain emulations. I do think any of these methods would be world-changing, and therefore extremely dangerous, and would demand lots of care and caution.
Yeah, of course it affects gene regulation. I’m saying that—maayybe—nature has specific broad patterns of gene expression associated with powerful cognition (mainly, creativity and learning in childhood); and since these are implemented as GRNs, they’ll have small, discoverable on-off switches. You’re copying nature’s work about how to tune a brain to think/learn/create. With ultrasound, my impression is that you’re kind of like “ok, I want to activate GABA neurons in this vague area of the temporal cortex” or “just turn off the amygdala for a day lol”. You’re trying to figure out for yourself which blobs being on or off are good for thinking; and more importantly you have a smaller action space compared to signaling molecules—you can only activate / deactivate whatever patterns of gene expression happen to be bundled together in “whatever is downstream of nuking the amygdala for a day”.
I mostly don’t know but it doesn’t seem all that unlikely it could work.
My main evidence is:
- It’s much easier to see the coarse electrical activity than the 5-second / 5-minute / 5-hour processes. For the former, you just measure voltage or whatever; for the latter, you have to do some complicated bio stuff (transcriptomics or other *omics).
- I’ve asked something like 8ish people associated with brain emulation stuff about slow processes, and they never have an answer (either they hadn’t thought about it, or they’re confused and think it won’t matter, which I just think they’re wrong about, or they’re like “yeah totally, but we’ve already got plenty of problems just understanding the fast electrical stuff”).
We have very little understanding of how the algorithms actually do their magic, so we’re relying on just copying all the details well enough that we get the whole thing to work.
My guess would be that you’re seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 → 6.5 transition. See my other comment.
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower...
I guess I’m not sure we’re disagreeing about much here, except that:
I don’t know why you’re putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be “transitions”; I just expect there to be enough such things that you wouldn’t see some major transition at one point. I do think there’s an important difference between 5.5 SD and 7.5 SD, which is that now you’ve created a human who’s probably smarter than any human who’s ever lived, so you’ve gone from 0 to 1 on some difficult thoughts (see the tail-probability sketch below); but I don’t think that’s special about this range, it would happen at any range.
I think that adding more 6 SD or 7 SD people is really important, but maybe you don’t think so as much? Not sure what you think.
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just… so far I’m either not understanding, or else you’re completely making up some big transition between 6 and 6.5?
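(To gesture at why the 5.5 → 7.5 range goes “from 0 to 1” without anything special happening at 6 → 6.5: here’s a quick tail-probability sketch. It assumes a Gaussian trait distribution, which is itself questionable since real tails may be heavier, and roughly 100 billion humans ever born.)

```python
# Back-of-envelope: expected number of humans, ever, above a given SD threshold.
# Assumes a Gaussian trait distribution (questionable; real tails may be heavier)
# and ~100 billion humans ever born. Illustrative, not a measurement.
from scipy.stats import norm

HUMANS_EVER = 1e11  # rough order-of-magnitude estimate

for sd in (5.5, 6.0, 6.5, 7.5):
    tail = norm.sf(sd)  # P(trait > sd standard deviations) under a standard normal
    print(f"+{sd} SD: tail probability {tail:.1e}, expected count ever ~{tail * HUMANS_EVER:.3g}")
```

Under those assumptions, on the order of a thousand +5.5 SD people have ever existed, while the expected number of +7.5 SD people ever is far below one; so “smarter than any human who’s ever lived” happens somewhere in that range, but nothing singles out 6 → 6.5 in particular.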
not very legible evidence
Wait are you saying it’s illegible, or just bad? I mean are you saying that you’ve done something impressive and attribute that to doing this—or that you believe someone else has done so—but you can’t share why you think so?
why would they be able to be much smarter together than individually
Ok some examples:
- Multiple attention heads.
  - One person solves a problem that induces genuine creative thinking; the other person watches this, and learns how genuine creative thinking works. Not very feasible with current setup, maybe feasible with low-cost hardware access.
  - One person works on a difficult, high-context question; the other person remembers the stack trace, notices and remembers paths [noticed, but not taken, and then forgotten], debugs including subtle shifts, etc. Not very feasible currently without a bunch of distracting exposition. See TAP.
- More direct (hence faster, deeper) implicit knowledge/skill sharing.
But a lot of the point is that there are thoughtforms I’m not aware of, which would be created by networked people. The general idea is as I stated: you’ve genuinely moved somewhat away from several siloed human minds, toward something more integrated.
Signaling molecules can potentially take advantage of nature’s GRNs. Are you saying that ultrasound might too?
(1):
If one person could think with two brains, they’d be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, less cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events).
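(For a sense of scale on “higher bandwidth, lower latency”, here is a toy comparison. Every number in it is an assumption made up for illustration; published estimates put spoken language at a few tens of bits per second, and the interface figures are hypothetical.)

```python
# Toy comparison: spoken language vs. a hypothetical electrode interface.
# All numbers are illustrative assumptions, not measurements.

speech_bits_per_sec = 40.0        # assumed order of magnitude for speech
speech_latency_sec = 1.0          # assumed time to compose and utter a phrase

n_channels = 1_000                # hypothetical electrode count
bits_per_channel_per_sec = 10.0   # hypothetical usable rate per channel after decoding
interface_latency_sec = 0.05      # hypothetical end-to-end loop latency

interface_bits_per_sec = n_channels * bits_per_channel_per_sec

print(f"speech:    {speech_bits_per_sec:7.0f} bits/s, latency ~{speech_latency_sec:.2f} s")
print(f"interface: {interface_bits_per_sec:7.0f} bits/s, latency ~{interface_latency_sec:.2f} s")
print(f"bandwidth ratio ~{interface_bits_per_sec / speech_bits_per_sec:.0f}x, "
      f"latency ratio ~{speech_latency_sec / interface_latency_sec:.0f}x")
```

The point isn’t the particular numbers; it’s that even modest per-channel rates across many channels swamp the speech channel, and that latency matters separately from throughput for tight cognitive loops.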
Do you think that one person with 2 or more brains would be 2-20 SDs?
Such details would be helpful.
I have no idea; that’s why the range is so wide.
(2):
The .02 is, as the table says, “as described”; so it should be plausibly a realistic emulation of the human brain. That would include getting slower dynamics right-ish, but wouldn’t exclude getting value drift anyway.
it’s not too hard to guess some of the important learning rules
Maybe. Why do you think this?
this means there’s already a lot of potential communication / control / measurement bandwidth left on the table.
I’m talking about neuron-neuron bandwidth. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html
I agree that neuron-computer bandwidth has easier ways to improve it—but I don’t think that bandwidth matters very much.
which is not currently the bottleneck and will take too long to yield any benefits
My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I’m curious if you have more specific info. Why is it not the bottleneck though?
confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff.
That’s fair. Germline engineering is the best approach and mostly doesn’t have this problem—you’re piggybacking off of human evolution’s knowledge about how to grow a healthy human.
minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable and profits could be reinvested in more hardcore augmentation down the line
You’re talking about a handful of people, so the benefit can’t be that large. A repeatable method to make new supergeniuses is vastly more valuable.
I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter)
No, this doesn’t make sense.
I think the stuff you’re doing is probably fun / cool / interesting / helpful / something you like. That’s great! You don’t need to make an excuse for doing it in terms of something else.
But no, that’s not the right way to make really smart humans. The right way is to directly create the science and tech. You’re saying something like “it stands to reason that if we can get a 5% boost on general intelligence, we should do that first, and then apply that to the tech”. But:
- It’s not a 5% boost to the cognitive capabilities that are the actual bottlenecks to creating the more powerful tech. It’s less than that.
- What you’re actually doing is doing the 5% boost, and never doing the other stuff. Doing the other stuff is better for the purposes of making a bunch of supergeniuses. (Which, again, doesn’t have to be your goal!)
I think that from like +6.3std the heavytail becomes even a lot stronger because those people can bootstrap themselves extremely good mental software.
I agree something like this happens, I just don’t think it’s that strong of an effect.
I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi timelines), so the “repeatedly” criterion seems a bit off to me.
- A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don’t scale (hard to share with other people).
- Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
- It doesn’t look to me like we’re even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
- There’s a kind of power that comes from having many geniuses—think Manhattan Project.
for the few +6std people on earth it might just give +0.2std or +0.3std,
Not sure what you’re referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.
it’s sorta vice versa that extremely smart individuals might find ways to significantly leverage their capability
Plausibly, though I don’t know of strong evidence for this. For example, my impression is that modern proof assistants still aren’t in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth—but I could imagine this being created soon. Do you have other evidence in mind?
These are both addressed in the post.
Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like “hey they’re acting super weird, they seem not conscious anymore, this seems bad”. https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies