Humans for legal accountability: In the near future, organisations will increasingly rely on AI agents to complete work fast, and they will pay for it handsomely. However, if something goes wrong, fingers will be pointed. AI can definitely be a scapegoat, but right now, it doesn’t seem like we have the legal frameworks in place to hold the model developers accountable for unintended/unexpected behaviour.
Thus, inserting humans as a control layer between AI outputs and the real world will not only be a method of ensuring better output quality but, for the near future, a necessity from a legal-accountability point of view. Much like how it’s rarely the CEO who’s punished, it would be very convenient for large corporations to blame the humans at the interface between AI models and real-world decisions.
The EU is moving faster than the US, but even its frameworks remain untested. How US and EU courts handle the first major cases of gross negligence involving AI model decision making will set the tone for decades of tech regulation.
I’m happy to hear about ongoing or settled cases that could fit the bill I’m describing.
The “are LLMs conscious” debate bugs me a lot, because it assumes brains and silicon are remotely comparable. I keep arguing that they are not.
Brains are 3D. ~100 trillion synapses in 1.4 liters, every neuron within 40μm of a capillary, with 400 miles of vasculature doing cooling and nutrient delivery at micron resolution. Our silicon chips are flat with heatsinks glued on top. We don’t have the manufacturing tech for anything else.
As for energy, a brain runs on maybe 20 watts (rough estimate, includes maintenance). Over 80 years that’s ~14,000 kWh total, and we have developed countless incredible minds across that kind of horizon. Training GPT-4 used ~50 GWh. That’s about 3,500 brain-lifetimes for one model, before inference.
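The energy comparison above is easy to sanity-check. A quick back-of-envelope sketch, using only the rough figures from the text (20 W, 80 years, ~50 GWh for GPT-4 training; none of these are precise measurements):

```python
# Back-of-envelope check of the energy figures above.
# All inputs are the rough estimates from the text, not measured values.
BRAIN_POWER_W = 20            # ~20 W continuous, including maintenance
LIFETIME_YEARS = 80
HOURS_PER_YEAR = 24 * 365

# Lifetime energy of one brain, in kWh
brain_lifetime_kwh = BRAIN_POWER_W * LIFETIME_YEARS * HOURS_PER_YEAR / 1000

# Commonly cited rough estimate for GPT-4 training energy: ~50 GWh
gpt4_training_kwh = 50e6

print(f"One brain-lifetime: ~{brain_lifetime_kwh:,.0f} kWh")   # ~14,016 kWh
ratio = gpt4_training_kwh / brain_lifetime_kwh
print(f"Brain-lifetimes per training run: ~{ratio:,.0f}")       # ~3,567
```

So "~14,000 kWh" and "about 3,500 brain-lifetimes" both check out at order-of-magnitude precision.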
And if Hameroff/Penrose are even partially right about microtubules (which I subscribe to, why not have game of life running at the brain level?): A brain has maybe 10^18 to 10^21 tubulin dimers, potentially switching at nanosecond scales (hello Wolfram!). That’s 10^27+ state transitions per second. At 20 watts. While also running a body.
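The 10^27+ figure follows directly from multiplying the two speculative inputs. A minimal sketch, assuming the Orch-OR-style numbers from the paragraph above (dimer counts and nanosecond switching are hypotheses, not established physics):

```python
# Order-of-magnitude sketch of the microtubule arithmetic above.
# These inputs are speculative (Orch-OR discussion), not established physics.
LOW_DIMERS, HIGH_DIMERS = 1e18, 1e21   # tubulin dimers per brain (wide estimate)
SWITCH_HZ = 1e9                         # hypothetical nanosecond-scale switching

low_rate = LOW_DIMERS * SWITCH_HZ       # ~1e27 transitions/s
high_rate = HIGH_DIMERS * SWITCH_HZ     # ~1e30 transitions/s
print(f"State transitions/s: {low_rate:.0e} to {high_rate:.0e}")
```

Even the low end of that range dwarfs any digital hardware, which is the point: if the substrate really computes at that scale, the comparison with silicon isn't close.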
Computers have always been capable of incredible accomplishments we had only dreamt of (and subsequently had to develop the hardware and software for). I don’t disagree that LLMs are incredible feats of technology, and I expect progress to remain some form of exponential. However, asking “is ChatGPT conscious like a human” might just be a category error. The only way to resolve it in the affirmative may be to call everything conscious.
We don’t know what brains are doing, we can’t even fathom building anything like them, and we’re comparing across a gap we can’t even measure properly. We also lack the maths, the physics, and the scientific instruments to bridge this gap, and I argue it will take a new kind of physics/statistics for emergent intelligence and complex systems.
My final question: can we build a digital replica of the human brain without understanding what the human brain is doing in its completeness? It just seems too complex!