I’ve been doing something similar on my own for the past few weeks. The main difference is that an LLM can answer my questions while your questions are wilder.
Mine have looked like:
How did Lévi-Strauss respond to Sartre on existentialism?
What were post-WWII existentialists in France actually like?
Why is being mild about religion so common?
How does Blackstone’s private equity fund operate? What are the most important financial markers for this type of company?
Why are Chinese AI models so weakly secured against extraction of bio knowledge and capabilities?
Is Trump going to strike Iran?
What is understanding in mathematics? Can you become good at math without ever feeling like you truly understand things?
These come from news and books I’m reading at the moment.
Which points at what I expect will be one of the most common failure modes here: asking boring questions that even an LLM can answer.
These sorts of questions come up pretty naturally; it's just that, yeah, if an LLM can answer it, it's no longer the interesting part of Thinkhaven. If you came in just planning to ask LLM-answerable questions, the mentor/coach staff would be like "okay dude, this is not the spirit of the thing, you can do better."
But, synthesizing all the different answers to LLM questions into a coherent bigger picture that matters is still an important part.
(Also, the 500 words and 2500 words definitely need to be human written. The journals/essays can have arbitrary amounts of LLM-content if that’s useful, but, for meeting the Goodharty goal, you need to write human-words)
I didn’t get it put together in a way I felt ready to ship, but here’s a mix of LLM-answerable and not-very-LLM-answerable questions I actually asked during my week of Thinkhavening:
Initiating questions I was asking:
What’s up with Tsvi/JohnW/ThaneRuthenis thinking that LLMs are missing major ingredients necessary for true AGI?
Why are LLMs still sometimes ludicrously bad at thinking, despite being apparently good at it?
Does ASI require at least one major conceptual breakthrough?
What pieces along the way to modern LLMs required major conceptual breakthroughs (as opposed to just straightforwardly combining existing ideas)?
This resulted in LLM-answered questions along the way like:
What were the major innovations throughout the entire chain of ML-to-LLMs?
What prerequisites did each of those have?
Why didn’t the innovations happen sooner?
What were the details of how the Perceptron was invented?
(one answer was “it was building off the artificial neuron”)
What were the details of how the Artificial Neuron was invented?
(half-remembered answer is "one psychologist/brain-surgeon guy (McCulloch) was obsessed with the question 'how do human brains implement logic?' for 20 years, eventually met a young logician (Pitts), and then the two of them hashed out the details of how to implement logic in pen-and-paper neurons")
If McCulloch and Pitts hadn’t invented the Artificial Neuron, who would most likely have invented it instead?
(I think there were a couple answers here, but one was Alan Turing).
I didn’t really trust LLM judgment on that last question, and a lot of the week went to thinking up questions that were pretty grounded/reasonable and seemed useful for synthesizing an answer, e.g. "was anyone working on literally this at literally the same time?"
(An overall takeaway I had was that many innovations are mostly combining prerequisites in straightforward ways, but you need one guy who really deeply understands the prerequisites.)