Ideas so far:
chunking: works sorta well, but requires an upfront cost to learn the concept, another cognitive cost to use it properly, and you have to remember to do so in context.
DNB (dual n-back): not much benefit, if any.
spaced repetition: I said “working” memory.
writing things down: helpful, has time and depth costs, unclear how useful it is for learning new things.
whiteboards, notebooks, etc.: somewhat helpful, but has similar problems to writing, plus it doesn’t help as much when trying to grok a concept / know when to apply it.
just-in-time knowledge systems: I’m trying to build an incredibly-hokey “concept database” to do a bit of this. The main problem is still usually “knowing which thing applies to a given problem”, plus the above problems with writing things down.
As usual with my threads on this sort of topic, this is looking for wacky/anti-inductive/risky methods only.
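To make the “concept database” idea a bit more concrete, here’s a minimal sketch of one possible shape for it (all the concept names, triggers, and the `lookup` function are invented for illustration): a keyword-triggered lookup, where the hard part, as noted above, is getting the triggers right so that the right concept surfaces for a given problem.

```python
# Toy "concept database": map each concept to trigger keywords, so a
# lookup can suggest which concept applies to a problem description.
CONCEPTS = {
    "chunking": {
        "triggers": {"memorize", "digits", "list"},
        "note": "group items into larger meaningful units",
    },
    "method of loci": {
        "triggers": {"sequence", "speech", "route"},
        "note": "attach items to locations along a mental walk",
    },
}

def lookup(problem_keywords):
    """Return concepts whose trigger sets overlap the problem keywords."""
    words = set(problem_keywords)
    return [name for name, c in CONCEPTS.items() if c["triggers"] & words]

print(lookup(["memorize", "digits"]))  # -> ['chunking']
```

Of course, this just pushes the “knowing which thing applies” problem into the trigger sets, which is exactly where it stays hard.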
I would recommend to only focus on the object-level problem you’re trying to solve.
for programming, things like more monitors, a better IDE (or extensions, knowing how to navigate it by traveling back and forward, having a section with last opened files, last refactored functions, etc...) will help.
for conversations, you can apply some heuristics at different points of the conversation: what are we talking about again? what did they mention? are we at the midpoint, the ending, etc…
In mathematics, notation is basically a solution to the small working memory that we have; you just have to find the analogue for whatever you’re trying to solve. I doubt anything will permanently fix low working memory in the long term (e.g. dual n-back). You can of course try some acetylcholine release agents or reuptake inhibitors, which will make you more vigilant (the mildest one being coffee). There’s also some evidence pointing to fasting, sleep deprivation, and niche nutritional protocols such as carnivore diets (or rather, strict exclusion diets, where you progressively remove foods until you find the one that does not suit you).
Can confirm about the use of notation. Then the problem becomes learning/interpreting it (including in different contexts).
Something not directly about working memory, but which I found unusually helpful in the realm of “low-level yet very general learning strats”, is the advice here: https://terrytao.wordpress.com/advice-on-writing-papers/on-compilation-errors-in-mathematical-reading-and-how-to-resolve-them/
Connect a stack-style memory register to a pair of peripheral neurons, so that the neurons can send three separable nerve signals (push one, push zero, pop) and receive two separable inputs from the machine (pop one, pop zero).
Leave it connected for an extended period of time so that neuroplasticity can adapt to having a sense organ that is a low-metabolic-cost, fast binary storage device. It might be worth trying a lot of dual n-back so the body adapts to using the new organ, and as a bonus, you’ll get quantitative proof if it works.
Congrats, you’re a superintelligence.
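To pin down the interface being proposed, here’s a toy software model of the bit-stack “organ” (purely illustrative: `BitStack` and its method names are invented, and this obviously says nothing about the wetware side):

```python
class BitStack:
    """Toy model of the proposed peripheral bit-stack 'sense organ'.

    Three signals go out to the device (push one, push zero, pop
    request) and two distinguishable signals come back (the popped
    bit was a one, or it was a zero).
    """

    def __init__(self):
        self._bits = []

    def push_one(self):
        self._bits.append(1)

    def push_zero(self):
        self._bits.append(0)

    def pop(self):
        # Returns the most recently stored bit (last in, first out),
        # standing in for the "pop one" / "pop zero" return signals.
        return self._bits.pop()


# Example: store three bits, then read them back in reverse order.
stack = BitStack()
stack.push_one()
stack.push_zero()
stack.push_one()
print(stack.pop(), stack.pop(), stack.pop())  # -> 1 0 1
```

The LIFO discipline is what keeps the neural side of the interface down to just a handful of separable signals: no addresses, only push and pop.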
If I remember correctly, something like this was done in a rat and measurably improved water maze performance.
Now this is anti-inductive and risky! Noted...
Any chance you could link to the study about augmented rats?
I went looking and couldn’t find it, but here’s something newer and probably more useful: https://www.nature.com/articles/s41598-020-58831-9
Neuralink has described the bandwidth they’re seeking as similar to the corpus callosum’s. I don’t think that’s actually necessary to achieve superhuman results. The brain is good at adding new sense organs (see research on vibrating belts, cameras attached to tongues, whiskers on fingers, etc.). I presume that the brain is also good at linking to ‘more brain’. So a low-bandwidth interface, possibly only a few peripheral nerves, connected either to a von Neumann architecture like the one I described above (whose memory interface could potentially also be connected to other hardware that pushes and pops bits), or to a computer simulation of neurons like the one in the linked paper, would probably be useful.
If you’re using an extremely loose definition of ‘AI superintelligence’, namely ‘a natural intelligence, physically connected to a machine, that achieves otherwise unattainable performance in some dimension of intelligence’ (say, a large improvement in digit span), then I believe such a thing is possible today using extant technology.
In a more general sense, how much artificial augmentation of a ‘natural general intelligence’ is required before it qualifies as an AGI?
Method of loci?
Oh yeah, forgot to put that on the list!
To this day, I still remember a tub of cheese from an example in Moonwalking with Einstein. And then there’s the use of stories/metaphors to show relationships between entities...
Is it supposed to be helping working memory?
I benefited a lot from re-practicing my handwriting, so that I could take notes as I read the sequences for the first time (which you can only do once).
Taking notes via handwriting is absolutely necessary to learn new things. In school they taught us that we lose 50% if we don’t take notes, but we ignored that along with all the other lame propaganda it was mixed in with, even though it’s very, very true. Writing to paper is like a computer writing to disk instead of RAM.
And if you’re in the habit of trying to think about things worth thinking about, then that means you’ll tend to come across things worth writing down.
If exercising arm and core muscles strengthens the body, then exercising hand/wrist muscles (while practicing handwriting) strengthens the mind.
Hi, according to this research, it might not be possible to increase working memory, since the memory limit seems to be the same for birds, monkeys, and humans, and by extension all animals:
Certainly good to know, thanks for bringing this forward!
“… the existing literature on the influence of dopamine enhancing agents on working memory provides reasonable support for the hypothesis that augmenting dopamine function can improve working memory.”
—Pharmacological manipulation of human working memory, 2003