A Letter to the Editor of MIT Technology Review

Letter to Editor—MIT Tech Review—Wed, 30-Aug-2023


I submitted the letter below in response to “Large language models aren’t people. Let’s stop testing them as if they were.”

_____

In “Large language models aren’t people,” MIT Technology Review continues to serve as a counterweight to AI hype. The article lists several things AI/LLMs still cannot do, including some that preschool children can. While allowing that LLMs are capable of tricks and rote memorization, the article maintains they are still far from human capability levels. Tests created for humans probably do not even apply.

Your current message is clear: This technology is over-hyped. AI is less human, less capable, and less scary than many people assert.

I have depended upon MIT Tech Review to help me understand technology trends for decades now.

One question: At what point should we start worrying about AI?

What tricks—if LLMs demonstrate them in the future—should concern us?

At one point, learning to code was considered one of those future tricks that should concern us. Winning at Go was going to take decades. Creative pursuits—writing and art—were going to be the last things AI could do. All these capabilities are now accepted as the new normal.

Your current chain of reasoning appears to be:

1) Currently, these models are not human and lack fundamental human capabilities.
2) We do not know what is going on inside of them.
3) Sure, as they get bigger, the list of tricks they can do gets longer.
4) Ignore the hype.

An alternative chain of thought would be:

1) Currently, these models are not human and lack fundamental human capabilities.
2) We do not know what is going on inside of them.
3) They are becoming more capable.
4) We are not good at predicting when/how the capabilities emerge.
5) Intelligence is a potent force. (Consider the plight of stronger animals at our mercy.)
6) Be concerned.


If MIT Tech Review misses this one, all its technology analysis and commentary over the decades will not matter.

_____

Mon, 04-Mar-2024

I’d like to report an update to this post. I just finished reading “Large language models can do jaw-dropping things. But nobody knows exactly why” by Will Douglas Heaven, the same author who wrote “Large language models aren’t people.” I find it a helpful layman’s exposition of some of the points in my chain of thought above, specifically: 2) We do not know what is going on inside of them. 3) They are becoming more capable. 4) We are not good at predicting when/how the capabilities emerge.

I think this represents a positive evolution in reporting on AI by a major technology information source.
