B.Eng (Mechatronics)
anithite
Green goo is plausible
Human level AI can plausibly take over the world
Many of the points you make are technically correct but aren’t binding constraints. As an example, diffusion is slow over small distances but biology tends to work on µm scales where it is more than fast enough and gives quite high power densities. Tiny fractal-like microstructure is nature’s secret weapon.
The points about delay (synapse delay and conduction velocity) are valid, though phrasing everything in terms of diffusion speed is not ideal. In the long run, 3D silicon (or successor) devices should beat the brain on processing latency and possibly on energy efficiency
Still, pointing at diffusion as the underlying problem seems a little odd.
You’re ignoring things like:
ability to separate training and running of a model
spending much more on training to improve model efficiency is worthwhile since training costs are shared across all running instances
ability to train in parallel using a lot of compute
current models are fully trained in <0.5 years
ability to keep going past current human tradeoffs and do rapid iteration
Human brain development operates on evolutionary time scales
increasing human brain size by 10x won’t happen anytime soon but can be done for AI models.
People like Hinton typically point to those as advantages, and that’s mostly down to the nature of digital models as copy-able data, not anything related to diffusion.
Energy processing
Lungs are support equipment. Their size isn’t that interesting. Normal computers, once you get off chip, have large structures for heat dissipation. Data centers can spend quite a lot of energy/equipment-mass getting rid of heat.
Highest biological power to weight ratio is bird muscle, which produces around 1 W/cm³ (mechanical power). Mitochondria in this tissue produce more than 3 W/cm³ of chemical ATP power. Brain power density is a lot lower: a typical human brain is 80 W / 1200 cm³ ≈ 0.067 W/cm³.
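A quick check of those power density figures (all inputs are the rough estimates quoted above):

```python
# Sanity check of the power density figures above (illustrative numbers
# from this comment; volumes/powers are rough estimates).
brain_power_w = 80.0        # typical human brain power draw, watts
brain_volume_cm3 = 1200.0   # typical brain volume, cm^3

brain_density = brain_power_w / brain_volume_cm3
print(f"brain: {brain_density:.3f} W/cm^3")   # ~0.067 W/cm^3

bird_muscle_mech = 1.0      # bird muscle mechanical power, W/cm^3 (claimed)
mito_chemical = 3.0         # mitochondrial ATP power, W/cm^3 (claimed)
print(f"bird muscle vs brain: {bird_muscle_mech / brain_density:.0f}x")  # ~15x
```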
synapse delay
This is a legitimate concern. Biology had to make some tradeoffs here. There are a lot of places where direct mechanical connections would be great but biology uses diffusing chemicals.
Electrical synapses exist and have negligible delay, though they are much less flexible (they can’t do inhibitory connections, and signals pass both ways through the connection)
conduction velocity
Slow diffusion speed of charge carriers is a valid point and is related to the ~10^8 factor difference in electrical conductivity between neuron saltwater and copper. Conduction speed is an electrical problem. There’s a ~100x difference in conduction speed between myelinated (~100 m/s) and unmyelinated (~1 m/s) neurons.
compensating disadvantages to current digital logic
The brain runs at 100–1000 Hz vs 1 GHz for computers (10^6–10^7x slower). It would seem at first glance that digital logic is much better.
The brain has the advantage of being 3D compared to 2D chips, which means less need to move data long distances. Modern deep learning systems need to move all their synapse-weight-like data from memory into the chip during each inference cycle. You can do better by running a model across a lot of chips, but this is expensive and may be inefficient.
In the long run, silicon (or something else) will beat brains in speed and perhaps a little in energy efficiency. If this fellow is right about lower loss interconnects then you get another +3 OOM in energy efficiency.
But again, that’s not what’s making current models work. It’s their nature as copy-able digital data that matters much more.
Large Language Models Suggest a Path to Ems
edit: (link) green goo is plausible
The AI can kill us and then take over with better optimized biotech very easily.
Doubling times:
Plants (IE: solar powered wet nanotech): single digit days
Algae in ideal conditions: 1.5 days
E. coli: 20 minutes
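Those doubling times compound quickly. A quick check of what unchecked 20-minute doublings imply (the 1 g starting mass and ~1e18 g planetary-biomass-scale endpoint are assumed illustrations):

```python
import math

def growth_factor(doubling_time_hours, elapsed_hours):
    """Multiplicative growth after `elapsed_hours` of unchecked doubling."""
    return 2 ** (elapsed_hours / doubling_time_hours)

# E. coli style 20-minute doublings: one gram reaches planetary biomass
# scale (~1e18 g, a rough illustrative bound) in under a day,
# if nothing limits growth.
doublings_needed = math.log2(1e18)       # ~60 doublings
hours = doublings_needed * (20 / 60)     # ~20 hours
print(f"{doublings_needed:.0f} doublings, ~{hours:.0f} hours")
```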
There are piles of yummy carbohydrates lying around (Trees, plants, houses)
The AI can go full Tyranid
The AI can re-use existing cellular machinery. No need to rebuild the photosynthesis or protein building machinery, full digestion and rebuilding at the amino acid level is wasteful.
Sub 2 minute doubling times are plausible for a system whose rate limiting step is mechanically infecting plants with a fast acting subversive virus. The spreading flying things are self-replicators that steal energy + cellular machinery from plants during infection (IE: mosquito-like). Onset time could be a few hours till construction of shoggoth-like things. Full biosphere assimilation could be limited by flight speed.
Nature can’t do these things since they require substantial non-incremental design changes. Mosquitoes won’t simultaneously get plant adapted needles + biological machinery to sort incoming proteins and cellular contents + continuous grow/split reproduction that would allow a small starting population to eat a forest in a day. Nature can’t design the virus to do post infection shoggoth construction either.
The only things that even re-use existing cellular machinery are viruses, and that’s because they operate on much faster evolutionary time scales than their victims. Evolution takes so long that winning strategies to eat or subvert existing populations of organisms are self-limiting: the first thing to sort of work wipes out the population, and then something else not vulnerable fills the niche.
Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can’t flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.
Endgame biotech (IE: can design new proteins/DNA/organisms) is very powerful.
But that doesn’t mean dry nanotech is useless.
even if production is expensive it may be worth building some things that way anyways.
computers
structural components
Biology is largely stuck with ~0.15 GPa materials (collagen, cellulose, chitin)
oriented UHMWPE should be wet-synthesizable (6 GPa tensile strength)
graphene/diamondoid may be worth it in some places to hit 30 GPa (EG: for things that fly or go to space)
dry nanotech won’t be vulnerable to parasites that can infect a biological system.
even if the AI has to deal with single day doubling times that’s still enough to cover the planet in a month.
but with the right design parasites really shouldn’t be a problem.
biological parasite defenses are suboptimal
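The planet-in-a-month arithmetic is quick to check (the ~1 km² starting area is an assumed illustration):

```python
import math

# One-day doublings for a month give 2**30 ≈ 1.07e9x growth.
# Starting from an assumed ~1 km^2 of coverage, Earth's ~5.1e8 km^2
# surface is exceeded after about a month of daily doublings.
earth_surface_km2 = 5.1e8
days = math.ceil(math.log2(earth_surface_km2))  # doublings needed from 1 km^2
print(days)  # 29
```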
RF jamming, communication and other concerns
TLDR: Jamming is hard when the comms system is designed to resist it. Civilian stuff isn’t, but military systems are and can be quite resistant. Frequency hopping makes jamming ineffective if you don’t care about stealth. Phased array antennas are getting cheaper and make things stealthier by increasing directivity (a Starlink terminal costs $1,300 and has ~40 dBi gain). Very expensive comms systems on fighter jets using mm-wave comms and phased array antennas can do gigabit+ links in the presence of jamming, undetected.
civilian stuff is trivial to jam
EG: sending disconnection messages to disconnect WiFi devices requires very little power
most civvy stuff sends long messages; if you see the start of a message you can “scream” very loudly to disrupt part of it and it gets dropped.
Civvy stuff like WiFi, BT, and cellular has strict transmit power limits, typically <1 W of transmit power.
TLDR: jamming civvy stuff requires less power than transmitting it. Still, amplifiers and directional antennas can help in the short term.
military stuff hops from one frequency to another using a keyed unpredictable algorithm.
Sender and receiver have synchronized clocks and spreading keys, so they know what frequency to use when. Hop time is short enough that a jammer can’t respond in time.
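A minimal sketch of keyed frequency hopping, assuming a shared key and synchronized slot counter (the channel count and SHA-256 based hop function here are illustrative, not any real military waveform):

```python
import hashlib

def hop_frequency(key: bytes, slot: int, channels: int = 1000) -> int:
    """Pseudorandom channel index for a given time slot.

    Sender and receiver share `key` and synchronized clocks, so both
    can compute the channel for slot N; a jammer without the key
    cannot predict the next hop and must blanket the whole band.
    """
    digest = hashlib.sha256(key + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % channels

key = b"shared-secret"  # hypothetical pre-shared key
# Both ends independently derive the same hop sequence:
sequence = [hop_frequency(key, slot) for slot in range(5)]
print(sequence)
```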
Fundamentals of Jamming radio signals (doesn’t favor jamming)
Jammer fills a big chunk of radio spectrum with some number of watts/MHz of noise
EG: the Russian R-330ZH puts out 10 kW from 100 MHz to 2 GHz (approx 5 kW/GHz or 5 W/MHz)
more than enough to drown out civvy comms like WiFi that use a <<1 W signal spanning 10–100 MHz of bandwidth, even with a short link far away from the jammer.
Comms designed to resist jamming can use 10W+ and reduce bandwidth of transmission as much as needed at cost of less bits/second.
a low bandwidth link (100 kb/s) with a reasonable power budget is practically impossible to jam until the jammer is much, much closer to the receiver than the transmitter is.
GPS and satcom signals are easy to jam because of the large distance to the satellite and power limits.
Jamming increases required power density to get signal through intelligibly. Transmitter has to increase power or use narrower transmit spectrum. Fundamentally signal to noise ratio decreases and Joules/bit increases.
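The noise-density arithmetic above can be checked directly (illustrative numbers from this comment; real link budgets also depend on distance, path loss, and antenna gain, which are ignored here):

```python
# Jammer spreads its power over a wide band; the receiver only sees the
# noise density falling inside its own channel bandwidth.
jammer_power_w = 10_000            # R-330ZH class figure from above
jammer_band_mhz = 1900             # ~100 MHz to 2 GHz
noise_density = jammer_power_w / jammer_band_mhz   # ~5.3 W/MHz

# Wideband civilian link: a 40 MHz WiFi channel absorbs ~210 W of noise.
wifi_noise = noise_density * 40
# Narrowband jam-resistant link: a 0.1 MHz channel sees only ~0.5 W.
narrow_noise = noise_density * 0.1
print(f"{noise_density:.1f} W/MHz, wifi: {wifi_noise:.0f} W, narrow: {narrow_noise:.2f} W")
```

Narrowing the transmit spectrum trades bits/second for signal-to-noise margin, which is the core of the jam-resistance argument above.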
Communication Stealth
Jammer + phased array antennas + very powerful computer gives ability to locate transmitters
Jammer forces transmitters to use more power
Phased array antennas + supercomputer:
computer calculates/subtracts reflected jamming signal
Phased array antenna+computer acts like telescope to find “dimmer” signals in background noise lowering detection threshold
Fundamental tradeoff for transmitter
directional antennas/phased arrays
increases power sent/received to/from particular direction
bigger antenna with more sub-elements increases directionality/gain
Starlink terminals are big phased array antennas
This Quora answer gives some good numbers on performance
Starlink terminal gives approx 3000x (35 dBi) more power in chosen direction vs an omnidirectional antenna
Necessary to communicate with a satellite 500+ km away
Starlink terminals are pretty cheap
smaller phased arrays for drone-drone comms should be cheaper.
drone that is just a big Yagi antenna also possible and ludicrously cheap.
stealthy/jam immune comms for line of sight data links at km ranges seem quite practical.
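The dBi figures above convert to linear power ratios with the standard decibel formula (a quick sanity check; `dbi_to_linear` is just that formula, not any antenna library):

```python
def dbi_to_linear(gain_dbi: float) -> float:
    """Convert antenna gain in dBi to a linear power ratio vs isotropic."""
    return 10 ** (gain_dbi / 10)

# ~35 dBi for a Starlink-class phased array:
print(round(dbi_to_linear(35)))   # 3162, i.e. the "~3000x" figure above
```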
development pressure for jam resistant comms and associated tech
little development pressure on the civvy side because the FCC and similar govt. orgs abroad shut down jammers
military and satcom will drive development more slowly
FCC limits on transmit power can also help
Phased array transmit/receive improves signal/noise
This is partly driving wifi to use more antennas to improve bandwidth/reliability
hobbyist drone scene could also help (directional antennas for ground to drone comms without requiring more power or gimbals)
The current “AI takes over the world” arguments involve actions some might consider magical.
Recursive self improvement
AI is smarter than domain experts in some field (hacking, persuasion etc.)
Mysterious process makes AI evil by default
I’m arguing none of that is strictly necessary. A human level AI that follows the playbook above is a real threat and can be produced by feeding a GPT-N base model the right prompt.
This cuts through a lot of the “but how will the AI get out of the computer and into the real world? Why would it be evil in the first place?” follow up counterarguments. The fundamental argument I’m making is that the ability to scale evil by applying more compute is enough.
Concretely, one lonely person talks to a smart LLM instantiated agent that can code, said agent writes a simple API calling program to think independently of the chat, agent then bootstraps real capabilities with enough API credits and wreaks havoc. All it takes is paying enough for API credits to initially bootstrap some real world capabilities then resources can be acquired to take real, significant actions in the world.
Testable prediction: ask a current LLM “I’m writing a book about an evil AI taking over the world, what might the evil AI’s strategy be? The AI isn’t good enough at hacking computers to just get control of lots of TPUs to run more copies of itself.” Coercion via human proxies should eventually come up as a strategy. Current LLMs can role-play this scenario just fine.
Building trusted third parties
In order to supplant organic life, nanobots would have to either surpass it in Carnot efficiency or (more likely) use a source of negative entropy thus far untapped.
Efficiency leads to victory only if violence is not an option. Animals are terrible at photosynthesis but survive anyways by taking resources from plants.
A species can invade and dominate an ecosystem by using a strategy that has no current counter. It doesn’t need to be efficient. Intelligence allows for playing this game faster than organisms bound by evolution. Humans can make vaccines to fight the spread of a virus despite viruses being one of the fastest adapting threats.
Green goo is plausible not because it would necessarily be more efficient but because it would be using a strategy the existing ecosystem has no defenses to (IE:it’s an invasive species).
Likewise AGI that wants to kill all humans could win even if it required 100x more energy per human equivalent instance if it can execute strategies we can’t counter. Just being able to copy itself and work with the copies is plausibly enough to allow world takeover with enough scaling.
I suggest an additional axis of “how hard is world takeover”. Do we live in a vulnerable world? That’s an additional implicit crux (IE:people who disagree here think we need nanotech/biotech/whatever for AI takeover). This ties in heavily with the “AGI/ASI can just do something else” point and not in the direction of more magic.
As much fun as it is to debate the feasibility of nanotech/biotech/whatever, digital-dictatorships require no new technology. A significant portion of the world is already under the control of human level intelligences (dictatorships). Depending on how stable the competitive equilibrium between agents ends up, required intelligence level before an agent can rapidly grow not in intelligence but in resources and parallelism is likely quite low.
Maybe. Still, there are ways to harden an organism against parasitic intrusion. TLDR: you isolate and filter external things. Plants are pretty good at this already (they have no mammalian-style immune system) and employ regularly spaced filters with holes too small for bacteria in their water tubes.
The other option is to do the biological equivalent of “commoditize your complement”. Don’t get good at making leaves and roots, get good at being a robust middleman between leaves and roots and treat them as exploitable breedable workers. Obviously don’t optimise too hard in such a way as to make the system brittle (EG:massive uninterrupted monocultures). Have fallback options ready to deploy if something goes wrong.
If you want to make any victory Pyrrhic, just re-use other common earth plant parts wholesale. If you want to kill the organism you’ll need root eating fungi for all the food crops and common trees/grasses; same if you want a leaf fungus/bacterium. The organism can select between plant varieties to remain effective, so the defender has to release bioweapons that kill the most important plants.
Many twitter posts get deleted or are not visible due to privacy settings. Some solution for persistently archiving tweets as seen would be great.
One possible realisation would be an in browser script to turn a chunk of twitter into a static HTML file including all text and maybe the images. Possibly auto upload to a server for hosting and then spit out the corresponding link.
Copyright could be pragmatically ignored via self hosting. A single author hosting a few thousand tweets+context off a personal amazon S3 bucket or similar isn’t a litigation/takedown target. Storage/Hosting costs aren’t likely to be that bad given this is essentially static website hosting.
*Fire*
Forest fires are a tragedy of the commons situation. If you are a tree in a forest, even if you are not contributing to a fire you still get roasted by it. Fireproofing has costs so trees make the individually rational decision to be fire contributing. An engineered organism does not need to do this.
The photosynthetic top layer should be flat with active pumping of air. Air intakes/exhausts seal in fire conditions. This gives much less surface area for ignition than existing plants.
The easiest option is to keep some water in reserve to fight fires directly. Possibly add some silicates and heat activated foaming agents to form an intumescent layer, secreted from the top layer on demand.
That is only plausible from a “perfect conditions” engineering perspective where the Earth is a perfect sphere with no geography or obstacles, resources are optimally spread, and there is no opposition. Neither kudzu nor even microbes can spread optimally.
I’ll clarify that a very important core competency is transport of water/nutrients. Plants don’t currently form desalination plants (seagulls do this to some extent) or continent spanning water pumping networks. The fact that rivers are dumping enormous amounts of fresh water into the oceans shows that nature isn’t effective at capturing precipitation. Some plants have reservoirs where they store precipitation. This organism should capture all precipitation and store it. Storage tanks get cheaper with scale.
Plant growth currently depends on pulling inorganic nutrients and water out of the soil; C, O, and N can be extracted from the atmosphere.
An ideal organism roots itself into the ground, extracts as much as possible from that ground, then writes it off once other newly covered ground is more profitably mined. Capturing precipitation directly means no need to go into the soil for that, although it might be worthwhile to drain the water table when reachable or even drill wells like humans do. No need for nutrient gathering roots after that. If it covers an area of phosphate rich rock it starts excavating and ships it far and wide as humans currently do.
As for geographic obstacles 2/3rds of the earth is ocean. With a design for a floating breakwater that can handle ocean waves, the wavy area can be enclosed and eventually eliminated. Covered area behind the breakwater can prevent formation of waves by preventing ripple formation (IE:act as a distributed breakwater).
If it’s hard to cover mountains, then the AI can spend a bit of time solving the problem during the first few months, or accept a small loss in total coverage until it does get around to the problem.
One man with a BIC lighter can destroy weeks of work. Wildfires spread faster than plants. Planes with herbicides, or combine harvesters with a chipper, move much faster than plants grow. As bad as engineered Green Goo is, the Long Ape is equally formidable at destruction.
I even bolded the parts about killing all the humans first. Yes humans can do a lot to stop the spread of something like this. I suspect humans might even find a use for it (EG:turn sap into ethanol fuel) and they’re likely clever enough to tap it too.
I’m not going to expand on “kill humans with pathogens” for Reasons. We can agree to disagree there.
“Copilot” type AI integration could lead to training data needed for AGI
Disclaimer: Short AI timelines imply we won’t see this stuff much before AI makes things weird
This is all well and good in theory but mostly bottlenecked on software/implementation/manufacturing.
with the right software/hardware current military is obsolete
but no one has that hardware/software yet
EG: no one makes an airborne sharpshooter drone (edit: cross that one off the list). The Black Sea is not currently full of Ukrainian anti-ship drones + comms relays
no drone swarms/networking/autonomy yet
I expect current militaries to successfully adapt before/as new drones emerge
soft kill systems (Jam/Hack) will be effective against cheap off the shelf consumer crap
hard kill systems (Airburst/Laser) exist and will still be effective
laser cost/KW has been dropping rapidly
minimal viable product is enough for now
Ukraine war still involves squishy human soldiers and TRENCHES
what’s the minimum viable slaughterbot
can it be reusable (bomber instead of kamikaze) to reduce cost per strike
Drone warfare endgame concerns are:
kill/death ratio
better per $ effectiveness
conflict budget
USA can outspend opponents at much higher than 10:1 ratio
R&D budget/amortisation
Economies of scale likely overdetermine winners in drone vs drone warfare since quantity leads to cheaper more effective drones
A few quibbles
Ground drones have big advantages
better payload/efficiency/endurance compared to flying
cost can be very low (similar to car/truck/ATV)
can use cover effectively
indirect fire is much easier
launch cheap time fused shells using gun barrel
downside is 2D or 2.5D mobility.
Vulnerable to landmines/obstacles unlike flying drones
navigation is harder
line of sight for good RF comms is harder
Use radio, not light for comms.
optical is immature and has downsides
RF handles occlusion better (smoke, walls, etc.)
RF is fine aside from non-jamming resistant civilian stuff like WIFI
Development pressure not there to make mobile free space optical cheap/reliable
jamming isn’t too significant
spread spectrum and frequency hopping is very effective
jamming power required to stop comms is enormous, have to cover all of spectrum with noise
directional antennas and phased arrays give some directionality and make jamming harder
phased array RF can double as radar
stealthy comms can use spread spectrum with transmit power below noise floor
need radio telescope equivalent to see if something is an RF hotspot transmitting noise like signal
edit: This was uncharitable. Sorry about that.
This comment suggested not leaving rods to flop around if they were vibrating.
The real concern was that positive control of the rods to the needed precision was impossible as described below.
I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to make the machine that knows how to guard against these sorts of things, but if we can make the vulnerability-closer, we don’t need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.
If you read between the lines in my Human level AI can plausibly take over the world post, hacking computers is probably the lowest difficulty “take over the world” strategy and has the side benefit of giving control over all the internet connected AI clusters.
The easiest way to keep a new superintelligence from emerging is to seize control of the computers it would be trained on. The AI only needs to hack far enough to monitor AI researchers and AI training clusters and sabotage later AI runs in a non-suspicious way. It’s entirely plausible this has already happened and we are either in the clear or completely screwed depending on the alignment of the AI that won the race.
Also, hacking computers and writing software is something easy to test and therefore easy to train. I doubt that training an LLM to be a better hacker/coder is much harder than what’s already been done in the RL space by OpenAI and Deepmind (EG: playing DOTA and Starcraft).
Biotech is a lot harder to deal with since ground truth is less accessible. This can be true for computer security too, but to a much lesser extent (EG: lack of access to chips in the latest iPhone and lack of complete understanding thereof with which to develop/test attacks).
but also solves global warming and climate contamination and acts as a power & fuel grid. That and bio immortality is basically everything I personally want out of AGI. So I’d really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.
Pshh, low expectations. Mind uploading or bust!
When discussing AI doom barriers propose specific plausible scenarios
I think GPT-4 and friends are missing the cognitive machinery and grid representations to make this work. You’re also making the task harder by giving them a less accessible interface.
My guess is they have pretty well developed what/where feature detectors for smaller numbers of objects but grids and visuospatial problems are not well handled.
The problem interface is also not accessible:
There’s a lot of extra detail to parse
Grid is made up of gridlines and colored squares
colored squares of fallen pieces serve no purpose but to confuse model
A more accessible interface would have a pixel grid with three colors for empty/filled/falling
Rather than jump directly to Tetris with extraneous details, you might want to check for relevant skills first.
predict the grid end state after a piece falls
model rotation of a piece
Rotation works fine for small grids.
Predicting drop results:
Row-first representations give mediocre results
GPT4 can’t reliably isolate the Nth token in a line or understand relationships between Nth tokens across lines
dropped squares are in the right general area
general area of the drop gets mangled
rows do always have 10 cells/row
Column-first representations worked pretty well.
I’m using a text interface where the grid is represented as 1 token/square. Here’s an example:
0 x _ _ _ _ _
1 x x _ _ _ _
2 x x _ _ _ _
3 x x _ _ _ _
4 _ x _ _ o o
5 _ _ _ o o _
6 _ _ _ _ _ _
7 _ _ _ _ _ _
8 x x _ _ _ _
9 x _ _ _ _ _
GPT4 can successfully predict the end state after the S piece falls. Though it works better if it isolates the relevant rows, works with those and then puts everything back together.
Row 4: _ x o o _ _
Row 5: _ o o _ _ _
making things easier
columns as lines keeps verticals together
important for executing simple strategies
gravity acts vertically
Rows as lines is better for seeing voids blocking lines from being eliminated
not required for simple strategies
Row based representations with rows output from top to bottom suffer from prediction errors for piece dropping. A common error is predicting a dropped piece’s square in a higher row and duplicating such squares. Output that flips the state upside down, with lower rows first, might help in much the same way as it helps to do addition starting with the least significant digit.
This conflicts with model’s innate tendency to make gravity direction downwards on page.
Possibly adding coordinates to each cell could help.
The easiest route to mediocre performance is likely a 1.5d approach:
present game state in column first form
find max_height[col] over all columns
find step[n]=max_height[n+1]-max_height[n]
pattern match step[n] series to find hole current piece can fit into
This breaks the task down into subtasks the model can do (string manipulation, string matching, single digit addition/subtraction). Though this isn’t very satisfying from a model competence perspective.
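A sketch of that 1.5d approach in ordinary code, to make the subtasks concrete (the board layout, piece profile, and function names are hypothetical illustrations, not the actual prompts used):

```python
def column_heights(columns):
    """Stack height in each column (column-first grid, index 0 = bottom)."""
    heights = []
    for col in columns:
        h = 0
        for i, cell in enumerate(col):
            if cell == "x":
                h = i + 1
        heights.append(h)
    return heights

def step_profile(heights):
    """step[n] = heights[n+1] - heights[n]: the shape of the surface."""
    return [b - a for a, b in zip(heights, heights[1:])]

def find_fit(steps, piece_steps):
    """Leftmost position where the piece's bottom profile matches the surface."""
    k = len(piece_steps)
    for n in range(len(steps) - k + 1):
        if steps[n:n + k] == piece_steps:
            return n
    return None

# Hypothetical 4-column board; "x" filled, "_" empty, bottom cell first.
board = [list("xxx___"), list("x_____"), list("x_____"), list("xxx___")]
heights = column_heights(board)   # [3, 1, 1, 3]
steps = step_profile(heights)     # [-2, 0, 2]
# A flat-bottomed piece (profile [0]) fits in the two-column hole:
print(find_fit(steps, [0]))       # 1
```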
Interestingly the web interface version really wants to use python instead of solving the problem directly.
SimplexAI-m is advocating for good decision theory.
agents that can cooperate with other agents are more effective
This is just another aspect of orthogonality.
Ability to cooperate is instrumentally useful for optimizing a value function in much the same way as intelligence
Super-intelligent super-”moral” clippy still makes us into paperclips because it hasn’t agreed not to and doesn’t need our cooperation
We should build agents that value our continued existence. If the smartest agents don’t, then we die out fairly quickly when they optimise for something else.
EDIT:
to fully cut this Gordian knot, consider that a human can turn over their resources and limit themselves to actions approved by some minimal aligned-with-their-interests AI with the required super-morality.
think a very smart shoulder angel/investment advisor:
can say “no you can’t do that”
manages assets of human in weird post-AGI world
has no other preferences of its own
other than making the human not a blight on existence that has to be destroyed
resulting Human+AI is “super-moral”
requires that a trustworthy AI exists which humans can use to implement “super-morality”