For those of you who don’t know, room-temperature superconductors would be a really big deal for all kinds of things, including AI alignment.
Among many other applications, there is also a decent possibility of building affordable helmet-sized fMRI machines (fMRI being the currently room-sized machines that produce 3D maps of brain activity), enough for the living rooms of hundreds of alignment researchers (or even every smartphone), and collecting large sample sizes of hard data about what it physically looks like when humans succeed and fail at thinking about alignment. As in, hundreds of thousands of hours of hundreds of people thinking about AI alignment, billions of 3D frames per year, including the hours and minutes leading up to every breakthrough that happened while someone was wearing the device (perhaps there will be a ding whenever a researcher is on the right track).
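For concreteness, a quick sanity check on those numbers (all the inputs below are my own round-number assumptions):

```python
# Rough arithmetic behind "billions of 3D frames per year". All inputs are
# illustrative assumptions, not measured figures.
researchers = 300          # "hundreds of alignment researchers"
hours_per_year = 1_000     # a few hours of wear time per workday, per person
frames_per_second = 1.0    # whole-brain fMRI volumes typically take ~0.5-2 s each

person_hours = researchers * hours_per_year                  # 300,000 hours/year
frames_per_year = person_hours * 3_600 * frames_per_second   # ~1.1e9 frames/year

print(f"{person_hours:,} person-hours/year, ~{frames_per_year:,.0f} 3D frames/year")
```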
That’s the best thing I can currently think of. I have no idea what will be discovered by the thousands of actual ML engineers and neurologists who get to look at that kind of data (and see what they can do with it).
I think the bigger implication is that it would potentially create lots of room for innovation and progress and prosperity via means other than advancing AI capabilities, assuming the potential gains aren’t totally squandered and strangled by regulation.
If the discovery is real and the short to medium-term economic impact is massive, that might give people who are currently desperate to turn the Dial of Progress forward by any means an outlet other than pushing AI capabilities forward as fast as possible, and also just generally give people more slack and time to think and act marginally more sanely.
The question I’m most interested in right now is, conditioned on this being a real scientific breakthrough in materials science and superconductivity, what are the biggest barriers and bottlenecks (regulatory, technical, economic, inputs) to actually making and scaling productive economic use of the new tech?
Can you go into more details about the point that this would give people an outlet other than pushing AI capabilities forward? As far as I’m aware, portable, mass-producible fMRI machines alone will shorten AGI timelines (e.g. by contributing valuable layers to foundation models) far more than a big economic transformation diverting attention away from AI would lengthen them.
Well, one question I’m interested in and don’t know the answer to is: given that the discovery is real, how easy is it to get to cheap, portable fMRI machines that are actually mass-produced, and not just mass-producible in theory?
Also, people can already get a lot of fMRI data if they want to, I think? It’s not that expensive or inconvenient. So I’m skeptical that even a 10x or 100x increase in scale / quality / availability of fMRI data will have a particularly big or unique impact on AI or alignment research. Maybe you can build some kind of super-CFAR with them, and that leads to a bunch of alignment progress? But that seems kinda indirect, and something you could also do in some form if everyone is suddenly rich and prosperous and has lots of slack generally.
Oh, right, I should have mentioned that this is on the scale of a 10,000-100,000x increase in fMRI machines, such as one inside the notch of every smartphone, which is something that a ton of people have wanted to invest in for a very long time. The idea of a super-CFAR is less about extrapolating the 2010s CFAR upwards, and more about how CFAR’s entire existence was defined by the absence of fMRI saturation, making the fMRI-saturation scenario pretty far out-of-distribution from any historical precedent. I do agree that the effects of fMRI saturation would be contingent on how quickly LK-99 shortens the timeline for miniaturization of fMRI machines, and you’d need even more time to get usable results out of a super-CFAR (or several).
Also, I now see your point about slack, prosperity, and other macro-scale societal/civilizational upheavals being larger factors (not to mention siphoning substantial investment dollars away from AI, since those dollars currently don’t have many better alternatives).
Well, for starters, on the question of barriers: even if it were only as difficult as graphene to manufacture in quantity, ambient-condition superconductors would not see use yet. You would need better robots to mass-manufacture them, current robots are too expensive, and you’re right back to needing a fairly powerful level of AGI or you can’t use it.
Your next problem: OK, you can save 6% or more on long-distance power transmission. But it costs an enormous amount of human labor to replace all your wires; see the case above. If mere humans have to do it, it could take 50 years.
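To put a very rough (and assumption-heavy) dollar figure on that 6%, using round numbers for US generation and wholesale prices:

```python
# Rough annual value of eliminating ~6% long-distance transmission losses in
# the US. Every number here is a round-figure assumption for illustration.
us_generation_twh = 4_000        # annual US electricity generation, roughly
loss_fraction = 0.06             # the ~6% transmission-loss figure above
wholesale_usd_per_kwh = 0.05     # assumed average wholesale price

kwh_saved = us_generation_twh * 1e9 * loss_fraction          # TWh -> kWh
annual_savings_usd = kwh_saved * wholesale_usd_per_kwh
print(f"~${annual_savings_usd / 1e9:.0f}B per year")         # on the order of $10B/yr
```

Under those assumptions, a savings stream on the order of ten billion dollars a year has to pay for replacing essentially every long-distance line, which is consistent with a multi-decade rollout.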
There’s the possibility of new forms of compute elements, such as new forms of transistor. The crippling problem here is that all technology is easiest to evolve from a pre-existing lineage, and it is very difficult to start fresh.
For example, I am sure you have read over the years how graphene or diamond might prove a superior substrate to silicon. Why don’t we see them used for our computer chips? The simplest reason is that you’d be starting over. The first ICs on such a process would be at densities similar to the 1970s. The catch-up would go much faster than it did the first time, but it would still take years, probably decades, and meanwhile silicon is still improving. See how OLEDs still have not replaced LCD-based displays despite being outright superior on most metrics.
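A crude way to see why the catch-up still takes years to decades even if the new process doubles density much faster than historical silicon did (transistor counts below are ballpark figures):

```python
import math

# Doublings needed to go from early-1970s IC density to a modern flagship chip.
start_transistors = 5_000             # ballpark for an early-1970s microprocessor
today_transistors = 50_000_000_000    # ballpark for a current flagship chip
doublings = math.log2(today_transistors / start_transistors)   # ~23

for months_per_doubling in (6, 12, 18):   # all faster than historical ~24 months
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling} months/doubling -> ~{years:.0f} years to catch up")
```

And the target keeps moving for as long as silicon keeps improving.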
The same would apply to fundamentally superior superconductor-based ICs. At a minimum you’re starting over. Worst case, lithography processes may not work and you may need nanotechnology to efficiently construct these structures, if they are in fact superconducting at ambient conditions. To unlock nanotechnology you need to do a lot of experiments, and you need a lot of compute, and if you don’t want it to take 50 years you need some way to process all the data and choose the next experiment, and we’re right back to wanting ASI.
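For the “process all the data and choose the next experiment” part, the standard non-ASI tool is something like Bayesian optimization / active learning. A toy sketch (the “experiment” below is a made-up function, not real materials chemistry):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy active-learning loop: model the experiments run so far, then pick the
# next candidate by upper confidence bound. The "experiment" is a made-up
# stand-in for measuring some material property.
rng = np.random.default_rng(0)

def run_experiment(x: float) -> float:        # placeholder for a real lab run
    return float(np.sin(3 * x) - x ** 2 + rng.normal(scale=0.05))

candidates = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
X = [[-0.5], [0.5]]
y = [run_experiment(-0.5), run_experiment(0.5)]

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 2.0 * std                    # favor promising *and* uncertain points
    x_next = float(candidates[int(np.argmax(ucb))][0])
    X.append([x_next])
    y.append(run_experiment(x_next))

print(f"best measured value after {len(y)} experiments: {max(y):.3f}")
```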
Finally, I might point out that while I sympathize with your desire not to see everyone die from runaway superintelligence, it’s simply orthogonal. There are very few possible breakthroughs that would suddenly make AGI/ASI not worth investing in heavily. A breakthrough like this one, which would potentially make AGI/ASI slightly cheaper to build and robots even better, actually creates more potential ROI from investments in AGI. Honestly, I can’t think of any exceptions, other than some science-fiction device that allows someone to receive data from our future and, with that data, avoid the futures where we all die.
I don’t believe there is that much you can do with MRI data to develop treatments on relevant timescales? Like, we’ll probably have the compute advancement long before we have the cognitive enhancement?
Can you explain further? A lot of comments here have already gone into the weeds about how large amounts of fMRI data can contribute heavily to cognitive enhancement.
I see none. Wait, you mean this one?
At minimum, large amounts of fMRI data make it easier to conduct longitudinal investigations of what accelerates or slows the rate of brain-mass decline with age after ~20 (e.g. would plasmalogens help? would taurine help? what are the associated metabolomics? what does an ANOVA of white matter hyperintensities against each of the metabolites in iollo look like? a mass-parallel study of all of this is important [cf. marton m from LBF2]). That would help improve the clarity with which experienced people think, help people better vet the accuracy/helpfulness/informativeness of AI models over their lifetime, and reduce fluid-intelligence decline with age, which is relevant for helping humans keep up with machines, especially in a world where the average age [esp. of people who have stayed in alignment longest] is rising to the point where that decline matters.
Humans have phenomenally poor memory (and it worsens with age), and this causes MANY testimonies to be wrong and many people to say things that aren’t true (and for alignment to happen we NEED people to be as truthful as possible, and especially not inaccurate due to dumb things like brain decline from excess blood glucose due to not combining acarbose/taurine with the shitty ultraprocessed food they do eat...).
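To make that concrete, here is a sketch of the kind of longitudinal analysis such data would support: a mixed-effects regression (a longitudinal cousin of the ANOVA mentioned above) of white-matter-hyperintensity burden against age and a metabolite level, with repeated scans per person. The data and column names below are entirely hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal dataset: several scans per subject, an MRI-derived
# white-matter-hyperintensity (WMH) volume, and one blood metabolite level.
rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 5

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "age": (np.tile(np.arange(n_visits), n_subjects)            # yearly visits
            + rng.uniform(25, 65, n_subjects).repeat(n_visits)),
    "metabolite": rng.normal(0.0, 1.0, n_subjects).repeat(n_visits),
})
# Simulated outcome: WMH volume grows with age, more slowly at higher metabolite.
df["wmh_volume"] = (0.05 * df["age"]
                    - 0.02 * df["metabolite"] * df["age"]
                    + rng.normal(0.0, 0.2, len(df)))

# Random intercept per subject; fixed effects for age, metabolite, interaction.
model = smf.mixedlm("wmh_volume ~ age * metabolite", df, groups=df["subject"]).fit()
print(model.summary())
```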
RELEVANT:
https://www.frontiersin.org/articles/10.3389/fnagi.2022.895535/full
https://qualiacomputing.com/2022/10/27/on-rhythms-of-the-brain-jhanas-local-field-potentials-and-electromagnetic-theories-of-consciousness/
https://www.sciencedirect.com/science/article/pii/S0035378721006974
https://www.frontiersin.org/articles/10.3389/fnhum.2023.1123014/full
https://advancedconsciousness.org/protocol-003b-preparation-materials/
BTW all these threads are worth discussing on augmentationlab.org (and its discord!)
https://foresight.org/summary/owen-phillips-brain-aging-is-the-key-to-longevity/
What’s more exciting IMO isn’t so much the big-data aspect as the opportunity for “big individual data”: people getting to watch their own brain state for many hours. E.g. learning when you’re rationalizing, when you’re avoiding something, when you’re deluded, when you’re tired, when you’re really thinking about something else, etc.
Yes, this is exactly the innovation I was thinking about. With superconductors that fit in hats, you can also combine that self-observation with big data, predictive analytics, and thousands of neurologists/ML engineers/psychologists to identify trends and formulate standard strategies, to help people get themselves on the right track. You can basically open-source the research, Auto-GPT-style.
A billion 3D frames per year from 300 people will make a lot of internal phenomena stick out like a sore thumb, especially the internal phenomena that typically lead up to (or away from) peak alignment thoughtflow. Just have a “ding” sound when someone’s mind is going in the right direction, and a “dong” sound for the wrong directions.
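A minimal sketch of what the “ding” would have to be under the hood: a classifier over per-frame features, trained on frames labeled as preceding a breakthrough vs. not. The data, labels, and feature extraction here are all placeholders; getting real labels is the hard part:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: each row stands in for features extracted from one 3D
# frame (e.g. parcel-averaged activity); labels mark frames from sessions
# judged to have led somewhere useful. Both are synthetic here.
rng = np.random.default_rng(0)
n_frames, n_features = 5_000, 400
X = rng.normal(size=(n_frames, n_features))
y = rng.integers(0, 2, size=n_frames)          # hypothetical 0/1 labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback(frame_features: np.ndarray) -> str:
    """Return 'ding' when the model thinks this frame looks on-track."""
    p = clf.predict_proba(frame_features.reshape(1, -1))[0, 1]
    return "ding" if p > 0.7 else ("dong" if p < 0.3 else "...")

print(feedback(X_test[0]))
```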
functional Machine Intelligence Research Imaging
I’d definitely like to try that ding/dong feedback. The right UX would be a number that goes up as you get closer to the target headspace, with milestone numbers along the way, each of which gives you a reward. It should possibly be coupled with a puzzle game or a set of creative exercises or something. (Games are good because they can provide reward. If a person isn’t already productive, it may be because they didn’t find practicing engineering deeply rewarding, so this part might be important.)
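One way that “number that goes up, with milestone rewards” could sit on top of a per-frame score like the classifier probability above (the smoothing and milestone values are arbitrary choices):

```python
# Gamified wrapper over a streaming "closeness to target headspace" score
# in [0, 1]. Milestone values and smoothing are arbitrary choices.
MILESTONES = (0.25, 0.5, 0.75, 0.9)

def run_session(scores, alpha=0.05):
    """Smooth the raw per-frame scores and announce each milestone once."""
    smoothed, reached = 0.0, set()
    for s in scores:
        smoothed = (1 - alpha) * smoothed + alpha * s   # exponential moving average
        for m in MILESTONES:
            if smoothed >= m and m not in reached:
                reached.add(m)
                print(f"milestone {m:.2f} reached, reward unlocked")
    return smoothed

run_session([0.2, 0.4, 0.6, 0.8, 0.9] * 50)
```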
It seems extremely unlikely that these things (rationalizing, avoidance, delusion, and so on) could be seen in fMRI data.