Representation in superposition would not have been a novel idea to computer scientists in 2006. Johnson-Lindenstrauss is old. But there’s nothing I can think of from back then that’d let you do computation in superposition: linearly embedding a large number of algorithms on top of each other in the same global vector space so they can all be executed in parallel reasonably efficiently, without wasting a ton of storage and FLOP, so long as only a few algorithms do anything at any given moment.
To me at least, that does seem like a new piece of the puzzle for how minds can be set up to easily learn lots of very different operations and transformations that all apply to representations living in the same global workspace.
@Eliezer Yudkowsky If Large Language Models were confirmed to implement computation in superposition [1,2,3], rather than just representation in superposition, would you resolve this market as yes?
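For concreteness, here’s a toy numpy sketch of the kind of construction I mean. Everything specific in it (the random near-orthogonal embeddings à la Johnson-Lindenstrauss, the dimensions K, m, d, and the little linear “algorithms” W_k) is illustrative on my part, not something taken from [1,2,3] and not a claim about what LLMs actually implement. It just shows the mechanism: many small linear maps sharing one global weight matrix, each still readable on its own so long as only a few of them get input at once.

```python
# Toy sketch of computation in superposition (illustrative construction only).
import numpy as np

rng = np.random.default_rng(0)

K, m, d = 100, 4, 1024   # number of "algorithms", per-algorithm dim, shared space dim
A = 3                    # how many algorithms are active at any given moment

# Each algorithm k is a small linear map W_k with its own random embedding E_k into
# the shared d-dimensional space. For d >> m, each E_k has roughly orthonormal
# columns and different E_k's are roughly mutually orthogonal (JL-style).
W = rng.normal(size=(K, m, m)) / np.sqrt(m)
E = rng.normal(size=(K, d, m)) / np.sqrt(d)

# One global weight matrix holding all K algorithms in superposition.
W_super = sum(E[k] @ W[k] @ E[k].T for k in range(K))    # shape (d, d)

# Activate a few algorithms with random inputs, all summed into one shared vector.
active = rng.choice(K, size=A, replace=False)
x = {k: rng.normal(size=m) for k in active}
h = sum(E[k] @ x[k] for k in active)

# A single matmul runs everything at once; each active algorithm's answer
# is then read back out of the shared space.
out = W_super @ h
for k in active:
    y_hat = E[k].T @ out                  # approximate output of algorithm k
    y_true = W[k] @ x[k]                  # what running algorithm k alone would give
    rel_err = np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true)
    print(f"algorithm {int(k)}: relative readout error {rel_err:.3f}")

# An algorithm that received no input reads back only interference noise,
# much smaller than a typical real output.
k_off = next(j for j in range(K) if j not in active)
print(f"inactive readout norm {np.linalg.norm(E[k_off].T @ out):.3f} "
      f"vs active output norm {np.linalg.norm(W[active[0]] @ x[active[0]]):.3f}")
```

Rough back-of-envelope: the cross-terms between embeddings scale like sqrt(m/d) per pair, which is just the Johnson-Lindenstrauss part; the part that feels new to me is that one d×d matrix stores and executes all K maps at once, and the readouts stay usable provided only a few of them are active at a time.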