
Bill Benzon

Karma: 389

The Story of My Intellectual Life

In the early 1970s I discovered that “Kubla Khan” had a rich, marvelous, and fantastically symmetrical structure. I’d found myself intellectually. I knew what I was doing. I had a specific intellectual mission: to find the mechanisms behind “Kubla Khan.” As defined, that mission failed; it still has not been achieved some 40-odd years later.

It’s like this: If you set out to hitch rides from New York City to, say, Los Angeles, and don’t make it, well then your hitch-hike adventure is a failure. But if you end up on Mars instead, just what kind of failure is that? Yeah, you’re lost. Really really lost. But you’re lost on Mars! How cool is that!

Of course, it might not actually be Mars. It might just be an abandoned set on a studio back lot.

That’s a bit metaphorical. Let’s just say I’ve read and thought about a lot of things having to do with the brain, mind, and culture, and published about them as well. I’ve written a bunch of academic articles and two general trade books: Visualization: The Second Computer Revolution (Harry Abrams, 1989), co-authored with Richard Friedhoff, and Beethoven’s Anvil: Music in Mind and Culture (Basic Books, 2001). Here’s what I say about myself at my blog, New Savanna. I’ve got a conventional CV at Academia.edu. I’ve also written a lot of stuff that I’ve not published in a conventional venue. I think of those pieces as working papers, and they’re all at Academia.edu. Some of my best – certainly my most recent – stuff is there.

Stephen Wolfram on AI Alignment

Bill Benzon · 20 Aug 2023 19:49 UTC
65 points
15 comments · 4 min read · LW link

The idea that ChatGPT is simply “predicting” the next word is, at best, misleading

Bill Benzon · 20 Feb 2023 11:32 UTC
55 points
87 comments · 5 min read · LW link

A conceptual precursor to today’s language machines [Shannon]

Bill Benzon · 15 Nov 2023 13:50 UTC
24 points
6 comments · 2 min read · LW link

What would it mean to understand how a large language model (LLM) works? Some quick notes.

Bill Benzon · 3 Oct 2023 15:11 UTC
20 points
4 comments · 8 min read · LW link

What must be the case that ChatGPT would have memorized “To be or not to be”? – Three kinds of conceptual objects for LLMs

Bill Benzon · 3 Sep 2023 18:39 UTC
19 points
0 comments · 12 min read · LW link

Why I hang out at LessWrong and why you should check in there every now and then

Bill Benzon · 30 Aug 2023 15:20 UTC
16 points
5 comments · 5 min read · LW link

On possible cross-fertilization between AI and neuroscience [Creativity]

Bill Benzon · 27 Nov 2023 16:50 UTC
15 points
22 comments · 7 min read · LW link

The Tree of Life, and a Note on Job

Bill Benzon · 31 Aug 2023 14:03 UTC
13 points
7 comments · 4 min read · LW link

Conceptual coherence for concrete categories in humans and LLMs

Bill Benzon · 9 Dec 2023 23:49 UTC
13 points
1 comment · 2 min read · LW link

Does ChatGPT’s performance warrant working on a tutor for children? [It’s time to take it to the lab.]

Bill Benzon · 19 Dec 2022 15:12 UTC
13 points
5 comments · 4 min read · LW link
(new-savanna.blogspot.com)

Operationalizing two tasks in Gary Marcus’s AGI challenge

Bill Benzon · 9 Jun 2022 18:31 UTC
12 points
3 comments · 8 min read · LW link

Are (at least some) Large Language Models Holographic Memory Stores?

Bill Benzon · 20 Oct 2023 13:07 UTC
11 points
4 comments · 6 min read · LW link

The Busy Bee Brain

Bill Benzon · 13 Dec 2023 13:10 UTC
11 points
0 comments · 6 min read · LW link

The mind as a polyviscous fluid

Bill Benzon · 28 Aug 2023 14:38 UTC
8 points
0 comments · 3 min read · LW link

Was Homer a stochastic parrot? Meaning in literary texts and LLMs

Bill Benzon · 13 Apr 2023 16:44 UTC
7 points
4 comments · 3 min read · LW link