If it’s the top 0.1%, how do we distinguish them from the similar fraction of people who have been right by accident so far? I would expect the number of lucky people to be comparable to the number of skilled people if the skill is really, really hard. Incidentally, this is how I think about investors. Some beat the market, but there are so many people that it is nearly impossible to tell whether any given one won by skill or by luck.
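As a minimal sketch of the intuition (the numbers are made up for illustration): if each of 100,000 pure-luck investors has a 50% chance of beating the market in any given year, around a hundred of them will beat it ten years running by chance alone.

```python
import random

random.seed(0)

N_INVESTORS = 100_000  # hypothetical population of investors with zero skill
N_YEARS = 10           # consecutive market-beating years required
P_BEAT = 0.5           # assumed per-year chance of beating the market by luck

# Count investors who beat the market every single year by chance alone.
lucky = sum(
    all(random.random() < P_BEAT for _ in range(N_YEARS))
    for _ in range(N_INVESTORS)
)

# On average about N_INVESTORS / 2**N_YEARS ≈ 98 investors look "skilled".
print(lucky)
```

With a ten-year track record and a large enough pool, a perfect streak is weak evidence of skill; you would need far longer records or much smaller pools to separate the two.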
Dom Polsinelli
I do not understand exactly what you mean. Are you proposing something like this: https://en.wikipedia.org/wiki/Energy-dispersive_X-ray_spectroscopy
in molecular detail? Looking up the words “biofield” gives some weird and highly varied stuff. Anyway, iirc from
https://gwern.net/doc/ai/scaling/hardware/2008-sandberg-wholebrainemulationroadmap.pdf
most scientists don’t think a direct molecular simulation is wise or necessary.
Where has it already been done? What sequence of variables do we know the cause and effect of?
You left a comment on a previous post of mine about this, but that was almost a year ago I think, so hardly spam.
My perfect very wishful thinking world involves ASI miraculously not happening and normal human neuroscience efforts shifting toward uploading and away from what it is now, which is a lot of wheel spinning and performative science. I do not assign a high probability to either of these. I also feel I am not well informed enough on either to make such sweeping claims.
I can’t say that I have heard of anyone doing that specifically for connectomics. I would not be surprised if I just missed it in this paper (already in the post), or if it is being done for other biological research but is too expensive for connectomics. I would also recommend looking at this guy’s new lab/old work for another place to start.
Having just reread your objections to uploading without reverse engineering, I think it merits a more detailed response than the one I am about to give. It may be correct, or at least have some room for middle ground where a lot of the short-timescale/easy stuff is directly simulated and then corrections are spaghetti-coded on top to prevent particular failures, using data from real experiments. That said, my (limited) experience with trying to reverse engineer what is going on in a mouse’s brain during social interaction makes me feel utterly hopeless, and every day I dream about how much easier it would be if we could do a barebones direct simulation like the fruit fly simulation to see if we are even on the right track. Because of this (again, quite limited) experience trying to do something like the reverse engineering you suggest, I expect it to take ~forever, whereas disentangling the mess that is all the higher-order corrections past simple electrical models of cells connected with one-way chemical synapses would merely take a really, really long time.
Also, I think doing the kind of reverse engineering on humans is challenging for purely ethical reasons whereas sufficiently detailed models for neuron/other components from mice would just carry over to human WBE much better than a fully reverse engineered mouse. I may be misunderstanding what depth you feel reverse engineering is necessary and what experiments it would require.
I can’t say for sure, but a (really interesting) worst-case scenario is this guy with a 7-second memory:
I am extremely sympathetic to this post, both because I too become uncomfortable when I don’t understand the low-level details of things and also because I think it has a strictly better thesis and is much better written than the post I made here. I have long hoped that I would be able to solve enough real problems that I would feel accomplished in perpetuity and then be comfortable coasting and solving fun problems from then on. I don’t know if that is a realistic goal, because my threshold for “real” seems pretty high and my brain might just not work in a way that lets me feel good about something I did a long time ago, but that is the only thing I can think of in terms of reprogramming oneself to feel good without working on real work.
I would be interested in hearing his thoughts and would gladly make a follow up (or add it to my own possible experiment post) if he has convincing arguments that he does not post himself.
I would also read a dedicated post if you made one on how to treat indoor air in a cost-effective manner. It is something I am becoming more aware of because of this, one guy on LW posting about ultraviolet sterilization, and wanting to get a dog but maintain a clean apartment.
I would do the HRV thing but the deadbeats that run my apartment complex haven’t fixed my heat all winter so the absolute max I can get to is like 60 degrees sometimes so I don’t have any to spare. That does sound useful and fun to build though.
That is an excellent warning and I will certainly not do anything crazy. I got SCUBA certified and do vaguely remember learning about oxygen toxicity, so it is something to be aware of, although I’m hazy on the details too. That said, iirc early astronauts had a 100% oxygen atmosphere, but at only 20% of atmospheric pressure, giving the same partial pressure. I don’t think that is the best way to go about this, especially as I think it caused a really horrible fire, but oxygen enrichment is possible to do without toxicity. I’m still really skeptical of a 30% increase in learning on any task.
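The partial-pressure arithmetic behind that point is just a product (the 0.2 atm figure is the approximate value for those early capsules, used here for illustration):

```python
# Partial pressure of O2 = O2 fraction * total pressure.
sea_level_ppO2 = 0.21 * 1.00  # ~21% O2 at 1 atm at sea level
capsule_ppO2 = 1.00 * 0.20    # 100% O2 at ~0.2 atm in an early capsule

# The two values are nearly equal, so the lungs see roughly the same
# oxygen exposure despite the very different atmospheres.
print(sea_level_ppO2, capsule_ppO2)
```

Oxygen toxicity tracks the partial pressure rather than the percentage, which is why enrichment at reduced total pressure can stay within safe limits.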
I second this and would at least like a version where all the blue words, presumably links, take you to the explanation if one exists.
Edit: I did not click the link, of course you have to click the link, this is my fault
I don’t support building even aligned super intelligence. I am in huge support of cybernetic and genetic enhancements to humans as well as uploaded minds. Based on your definition of super intelligence, I guess some of those may be considered such. It feels wrong to hand off the keys of the universe to something with no human lineage whatsoever even if it had something recognizable as human ethics and took care of us. It feels very much like being a kid with doting parents and that is bad in my eyes.
This is interesting and especially relevant to AI risk if we are nearing automation of the research process.
That said, I am more interested in what fraction of all code being deployed is being written by AI. That would be more representative of AGI as it relates to mass unemployment or other huge economic shifts, but not necessarily human disempowerment.
It’s not that I want to be fully conjoined with another person, so much as I might prefer that to death in the medium term to get me over the hump to longevity escape velocity. Also, I kind of always imagined it more like a dialysis machine. We don’t grow a whole person so much as a big pile of organs sans brain that is genetically you (or just compatible enough to not cause rejection issues) and get hooked up to that for a while. Maybe medically induce a coma for a few months out of the year, maybe it will be quick and easy to connect/disconnect and you can be plugged in while sleeping or doing desk work. People always object to my life support clone/flesh pile idea but again, better than dying imo.
Hijacking this to pick your brain. Do you think head transplants onto repeatedly cloned bodies could work as life extension? Even without genetic improvements to increase longevity, I can imagine switching bodies every 20-50 years becoming mundane with nearly modern surgical techniques, provided we can reconnect the nervous system. Related to this, do you think parabiosis would work without all the body switching? I don’t know if this is in your wheelhouse exactly, but you mentioned a replacement body and this has been on my mind for a while.
perhaps both from you
Minor critique but I’m pretty sure this is inbreeding at worst and a clone at best which is not really what you seem to be after.
As to the title, I would give a naive “yes” and would broadly be in support of this idea, technical limitations aside of course. That said, if we actually had this level of control I feel like we could probably explicitly select the best genes from both parents and not mess around with the randomization.
I have noticed you posting daily and I appreciate this post along with several others. It has encouraged me to try more new things. While I am only slowly doing that, this is on the list now.
I think you’re right that deserving welfare should be imagined on a spectrum, and that suffering should be on one as well. However, people would still place things radically differently on said spectrum, and that confuses me. As I said, any animal that had LLM-level capabilities would be pretty universally agreed to deserve some welfare. People remark that LLMs are stochastic parrots, but if an actual parrot could talk as well as an LLM, people would be even more empathetic toward parrots. I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation. I don’t know why this is.
I guess all this boils down to your last point: what uniformly present qualities do I look for? It seems that everything I empathize with has a nervous system that evolved. But that seems so arbitrary, and my intuition is that there is nothing special about evolution, even if gradient descent on our current architectures is not a method of generating SDoW. I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
They certainly act weird, but not universally so, and no weirder than you act in your own dreams, perhaps not even weirder than someone drunk. We might characterize those latter states as being unconscious or semi-conscious in some way, but that feels wrong. Yes, I know that dreams happen when you’re asleep and hence unconscious, but I think that is a bastardization of the term in this case. Also, my intuition is that if someone in real life acted as weirdly as the weirdest dream character did, that would qualify them as mentally ill, but not as a p-zombie.
I am curious if the people you encounter in your dreams count as p-zombies or if they contribute anything to the discussion. This might need to be a whole post or it might be total nonsense. When in the dream, they feel like real people and from my limited reading, lucid dreaming does not universally break this. Are they conscious? If they are not conscious can you prove that? Accepting that dream characters are conscious seems absurd. Coming up with an experiment to show they are not seems impossible. Therefore p-zombies?
Upvoted because the site looks quite nice, approachable to non LW people, and generally expresses similar opinions to my own.