So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.
🤔 This is actually a path to progress, right? The difficulty in alignment is figuring out what we want precisely enough that we can make an AI do it. It seems like a feasible research project to map this out for kudzugoth.
Seems convincing enough that I'm gonna make a Discord and maybe switch to this as a project. Come join me at Kudzugoth Alignment Center! … I might close again quickly if the plan turns out to be fatally flawed, but until then, here we go.
Building new organisms from scratch (synthetic biology) is an engineering problem. Fundamentally we need to build the right parts and assemble them.
Without major breakthroughs (artificial superintelligence) there's no meaningful 'alignment plan', just a scientific discipline. There's no sense in which you can really 'align' an AI system to do this. The closest things would be:
building a special-purpose model (e.g. AlphaFold) useful for solving sub-problems like protein folding
teaching an LLM to say 'I want to build green biotech' and associated ideas/opinions, which is completely useless.
The problem is that biology is difficult to mess with. DNA sequencing is somewhat cumbersome; DNA synthesis is much more so, currently costing on the order of 25¢/base.
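For a sense of scale, here is a back-of-the-envelope sketch of what 25¢/base implies for synthesizing whole constructs. The genome sizes below are rough ballpark figures for illustration, not exact values:

```python
# Back-of-the-envelope DNA synthesis cost at ~25¢/base.
COST_PER_BASE = 0.25  # USD, order-of-magnitude estimate

# Approximate sizes in bases (rough ballpark figures).
genomes = {
    "small plasmid": 5_000,
    "Mycoplasma genitalium": 580_000,   # one of the smallest bacterial genomes
    "E. coli K-12": 4_600_000,
}

for name, bases in genomes.items():
    print(f"{name}: ~${bases * COST_PER_BASE:,.0f}")
```

Even at this toy level of accuracy, writing a whole bacterial genome from scratch runs into six or seven figures, which is why synthesis cost is a real bottleneck.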
Also, imaging the parts to figure out what they do, and whether they're doing it, can be very cumbersome, because they're too small to see with a light microscope. Everything is indirect. Currently we try to crystallize them and then use X-rays (which have a small enough wavelength but are also very destructive) to image the crystal and infer the structure. There's continuous progress here, but it's slow.
AI techniques can be applied to some of these problems (e.g. inferring protein structure from an amino-acid sequence (AlphaFold), or doing better quantum-level simulation (FermiNet)).
Note that these AI techniques are replacing existing ones based on human-coded algorithms rooted in physics, and they often have issues with out-of-distribution inputs (e.g. they work well for a wild-type protein but give garbage when mutations are added).
Like any ML system, we just have to feed it more data, which means we need to do more wet-lab work, X-ray crystallography, etc.
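To make the out-of-distribution failure mode concrete, here is a toy sketch (generic curve fitting, not an actual protein model): a flexible model fit only on a narrow input range can look essentially perfect there and still produce garbage far outside it.

```python
# Toy illustration of out-of-distribution failure: a model fit on a
# narrow "wild-type" input range extrapolates badly outside it.
# This is a generic ML sketch, not a biology model.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# "In-distribution" training data: samples of sin(x) on [0, 3].
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train)

# A flexible degree-9 polynomial nails the training range...
p = Polynomial.fit(x_train, y_train, deg=9)

in_dist = p(1.5)    # inside [0, 3]
out_dist = p(8.0)   # far outside the training range

print(abs(in_dist - np.sin(1.5)))   # tiny error in-distribution
print(abs(out_dist - np.sin(8.0)))  # large error out-of-distribution
```

The analogy is loose, but the point carries over: fitting the training data well says little about behavior on inputs unlike the training set, which is why adding mutations can break a model trained mostly on wild-type structures.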
Synthetic biology is the best way forwards, but it's a giant scientific/engineering discipline, not an 'alignment approach', whatever that's supposed to mean.
Without major breakthroughs (artificial superintelligence) there's no meaningful 'alignment plan', just a scientific discipline. There's no sense in which you can really 'align' an AI system to do this.
Do you expect humanity to bioengineer this before we develop artificial superintelligence? If not, presumably this objection is irrelevant.
Basically, if artificial superintelligence happens before sufficiently advanced synthetic biology, then one way to frame the alignment problem is 'how do we make an ASI create a nice kudzugoth instead of a bad kudzugoth?'.
I guess, but that's not minimal and doesn't add much.
'how do we make an ASI create a nice (highly advanced technology) instead of a bad (same)?'.
i.e. kudzugoth vs. robots vs. (self-propagating change to basic physics).
Put differently:
If we build a thing that can make highly advanced technology, make it help us with that technology rather than kill us with it.
Neat biotech is one such technology but not a special case.
Aligning the AI is a problem mostly independent of what the AI is doing (unless you're building special-purpose non-AGI models, as mentioned above).
I agree that one could do something similar with other tech than neat biotech, but I don't think this proves that Kudzugoth Alignment is as difficult as general alignment. I think aligning an AI to achieve something specific is likely to be a lot easier than aligning AI in general. It's questionable whether the latter is even possible, and unclear what it means to achieve it.