The policy change for LLM Writing got me thinking that it would be quite interesting to write out how my own thinking process has changed as a consequence of LLMs. I'm just going to give a bunch of examples, because I can't pinpoint the change exactly, but it is definitely different.
Here's some background on the part LLMs have played in my learning strategies: I read the Sequences around 5 years ago after getting into EA. I was 18 then, and it was a year or two later that I started using LLMs in a major way. To some extent this has shaped my learning patterns. For example, the studying technique I've been using to half-ass my studies effectively is to try really hard to solve problems, and when I can't, to use LLMs to tie the material into my existing knowledge tree.
I've coupled applied linear algebra relatively tightly to things like probability, metric spaces, and non-linear dynamics, because I want to see how the mathematical toolkit fits together. A recent example: when I was playing table tennis with my physicist friend, he was describing QFT and renormalization theory to me, and my immediate question was how this ties into the vector spaces and fields of linear algebra, and what those spaces look like. My mind automatically goes to those questions because it assumes I can get an answer just by asking, even when I'm not talking to an LLM.
Work & Strategy:
One of the things I do outside of studying is have LLMs pretend to be councils of experts from various fields, so that I can discuss frontier ideas with them. The other day I put together a council of Donald Knuth, Karl Friston, John Wentworth, and Michael Levin to get some good takes on what agency might look like in CodeWars, and concluded that the lack of memory might be a problem.
I also plan with LLMs in mind, so I expect first drafts to take a lot less time than they otherwise would. This gives me an expanded option space: I can do lots of things quickly to 80/20 quality.
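The council pattern above is essentially a reusable prompt template. A minimal sketch, with the expert names and question taken from this post (the helper function itself is hypothetical, not any particular library's API):

```python
# Sketch of the "council of experts" prompting pattern: compose one
# prompt that asks an LLM to role-play a panel and then synthesize.

def council_prompt(experts, question):
    """Build a prompt asking the model to answer as a panel of experts."""
    roster = ", ".join(experts)
    return (
        f"You are a council consisting of {roster}. "
        "For the question below, have each member answer in their own "
        "voice and framework, then synthesize the agreements and "
        "disagreements into a short joint verdict.\n\n"
        f"Question: {question}"
    )

prompt = council_prompt(
    ["Donald Knuth", "Karl Friston", "John Wentworth", "Michael Levin"],
    "What might agency look like for an LLM playing CodeWars, "
    "and does its lack of persistent memory matter?",
)
print(prompt)
```

The resulting string would be sent as the system or opening message to whatever chat model you use; the panel/synthesize framing is what makes the personas argue rather than blur into one voice.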
Great artists steal, and Nobel laureates are often interdisciplinary across two or three fields. If you sample on this, you can see the serendipitous quality in how LLMs help you build an interconnected knowledge tree.
My entire learning strategy, and to some extent my life strategy, has changed with this in mind, since clear visions and a clear understanding of deeper problems help you steer both people and LLMs in good directions. The skills to practice are then not just getting into the details, but combining ideas from different fields and describing them well. This is because you get the most out of LLMs when you can stand on as many shoulders of giants as possible.
So what does the above model mean for me in terms of actions?
Learn applied category theory, to become better at quickly mapping different fields onto one another in a more formal way. (For verifying my reasoning.)
Learn about collective intelligence and what cyborgism between AIs and humans might look like, based on fields that already exist. (To become better at coordinating AIs and humans.)
Learn how to communicate and listen well, so that you can incorporate many perspectives and share clear visions about the world. (This one matters more than the above for real-world success.)
Learn how to start and run projects well, in order to catalyze your insights into concrete outcomes.
I think LLMs let you serendipity-max really well if you put effort into learning how. I'm curious how other people have updated with regard to LLMs!