My New Learning Stack: From Podcast to Personalized Curriculum in Minutes

How AI enables tailored, just-in-time knowledge distillation for me, and likely for others


I just experienced something that crystallized how radically AI has transformed learning, not just in degree but in kind. The shift isn’t about speed or convenience; it’s about the fundamental structure of how knowledge acquisition works. For me, it enables the targeted, efficient acquisition of knowledge directly relevant to my current work, rather than hunting for a book, paper, or course that covers the needed material but buries it under lots of extra material I don’t need.

Context: Why This Matters to Me

I’m at a point in my life where traditional pathways, going back to school, pursuing a degree in machine learning or AI, aren’t realistic options. That doesn’t make my engagement with these topics, or my attempt at genuine and potentially novel research, any less sincere. But it does mean that whatever learning I do must be absolutely on point and directly applicable to the work at hand. That said, my hunger for knowledge is insatiable, and I want to make the most of every hour I spend learning. That hunger is not new, but I have never before had both a topic I want to engage with almost without limit, even at a high level, and the need to do so with extreme efficiency.

This pattern I’m describing emerged from necessity. I can’t afford to spend months on prerequisite courses or years on structured programs. Every hour of learning needs to connect directly to my current project or research goals. This particular moment, encountering the tensor logic podcast, is a perfect example of that constraint driving innovation. Put simply, I understood just enough of the podcast to determine it might be a key piece of the puzzle for my research project, but I didn’t have the background to fully understand it.

The Setup

I was listening to the Machine Learning Street Talk podcast with Pedro Domingos about Tensor Logic, a potential breakthrough for unifying AI paradigms. It immediately resonated with my current AGI research project on world models and knowledge distillation, but the conversation was dense with concepts I didn’t fully grasp (but wish I did):

  • Einstein summation notation (EINSUM), illustrated in the short code sketch after this list

  • Predicate invention and structure learning

  • Tucker decompositions

  • Inductive logic programming

  • Reasoning in embedding space
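
Since einsum notation is the load-bearing idea here, a minimal NumPy sketch may help. The framing of einsum as a relational join is my paraphrase of how the podcast presents tensor logic, not a formal definition.

```python
import numpy as np

# Matrix multiplication written as an einsum: C[i,k] = sum_j A[i,j] * B[j,k]
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# Contracting a shared index resembles a relational join on that variable
# followed by projecting it away, which (as I understand it) is the bridge
# tensor logic exploits between logic programs and tensor algebra.
R = np.random.rand(3, 4)          # scores over pairs (x, y)
S = np.random.rand(4, 5)          # scores over pairs (y, z)
T = np.einsum("xy,yz->xz", R, S)  # "join" on y, then sum y out
```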

Traditional approach: Spend weeks googling, reading papers, taking notes, trying to piece together a coherent learning path. Maybe find a textbook. Hope the explanations match your current knowledge level. Struggle to connect the concepts to your specific use case. Experience the friction of not knowing what you don’t know. Then engage the material again and hope you can now comprehend the topic at the level you feel you need to.

What I Did Instead

Step 1: Captured the content

I found the full transcript from YouTube via app.rescript.info, a game-changing tool that extracts clean, searchable transcripts from any YouTube video. Many podcasters (including Machine Learning Street Talk) use similar services to manage their transcripts.

The ability to work with text rather than just audio/video fundamentally changes how you can interact with content. Text is searchable, parseable, and can be fed to AI systems for analysis. In my case, it also means I can use the full transcript to generate a personalized learning-validation piece that helps me understand the material and confirm I’m on the right track.
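
For anyone who wants a scriptable alternative to rescript, the youtube-transcript-api Python package can do the same extraction. This is a minimal sketch: the video ID is a placeholder, and the call shown is the library’s classic interface, which newer releases have changed.

```python
# pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

segments = YouTubeTranscriptApi.get_transcript("VIDEO_ID")  # placeholder ID
transcript = "\n".join(seg["text"] for seg in segments)

with open("tensor_logic_transcript.txt", "w") as f:
    f.write(transcript)
```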

Step 2: Contextualized with my goals

I fed the 1,700-line transcript into my research project repo, where Claude Code has full access to context that includes the following (sketched in code after the list):

  • My AGI research framework

  • My learning style (distillation over memorization, multi-perspective understanding)

  • My immediate evaluation needs (can tensor logic provide a key piece in the implementation of my architecture?)
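
In practice Claude Code reads the repo directly, so there is no literal prompt-assembly step; still, a hypothetical sketch makes the inputs explicit. Every file name below is an illustrative stand-in for documents in my repo.

```python
from pathlib import Path

context_files = [
    "research/agi_framework.md",     # my AGI research framework
    "meta/learning_style.md",        # distillation over memorization, etc.
    "meta/evaluation_needs.md",      # can tensor logic fit my architecture?
    "transcripts/tensor_logic.txt",  # the 1,700-line podcast transcript
]

# Concatenate everything into one context blob for the next step.
prompt = "\n\n---\n\n".join(Path(p).read_text() for p in context_files)
Path("context_plus_transcript.txt").write_text(prompt)
```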

Step 3: Generated custom curriculum

Claude produced an 8-module, 40-60 hour learning outline perfectly matched to:

  • Every technical concept in the podcast

  • My specific AI research project needs

  • Explicit bridges between tensor logic concepts and my framework

The outline includes learning objectives, suggested exercises, AI tutor prompts, assessment checkpoints, and, critically, connections showing how each concept maps to my existing mental models.
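
For reproducibility, here is a hedged sketch of the same step through the Anthropic Python SDK. I actually ran this interactively inside Claude Code; the model name is a placeholder and the instruction text is abbreviated.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
prompt = open("context_plus_transcript.txt").read()  # from the Step 2 sketch

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a current model
    max_tokens=8000,
    messages=[{
        "role": "user",
        "content": prompt + "\n\nDesign a modular learning curriculum, with "
                   "objectives, exercises, AI tutor prompts, and assessment "
                   "checkpoints, that bridges this material to my framework.",
    }],
)
open("curriculum_outline.md", "w").write(message.content[0].text)
```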

The Pattern That Matters

This isn’t just about one podcast. It’s a repeatable workflow for any high-value content:

  1. Encounter high-value content (podcast, paper, lecture)

  2. Capture it in text form (transcripts, PDFs)

  3. Contextualize with your specific needs and goals

  4. Generate personalized learning paths via AI

  5. Execute learning in voice mode (more on this below)

  6. Iterate as understanding deepens

This is not the first time I’ve used most of this pattern. I’ve used it quite extensively to jump-start and expand my general knowledge of key AI topics, but it’s the first time I’ve built a curriculum outline around one very specific subject, in this case being able to fully engage with this podcast and, specifically, tensor logic.

Why Traditional Learning Fails Here

Traditional learning is one-size-fits-all. Textbooks and courses can’t adapt to:

  • Your specific project context

  • Your existing knowledge graph and its gaps

  • Your learning style and cognitive preferences

  • Your immediate application needs

  • The relationship between new concepts and your current work

AI-powered learning is bespoke, just-in-time knowledge distillation. It’s not about consuming content faster—it’s about extracting exactly the conceptual structure you need, in the form you need it, bridged to what you already know.

I need to follow up, though, and make it absolutely clear that while what I’m proposing is first and foremost about my own context, learning environment, and point in life, I do think it could apply to many people in many situations. Most importantly, I am in no way suggesting a failure of traditional learning, or that this should replace all traditional learning. It simply represents a new option, a new possibility for learning in a way that more tightly matches a given use case, learning style, and time span.

The Game Changer: Voice Mode

Here’s what I’ve learned: Nothing, no textbook, no course, no video lecture, comes close to the learning experience of working directly with AI in voice mode. The interaction is fundamentally different from any prior educational technology. One of my first epiphany moments with AI was my first interaction with what was then ChatGPT’s Advanced Voice Mode. That was roughly eight months ago, and after entering that mode for the first time, I had a conversation that lasted over two hours, and the world was never the same! As I developed a habit of using AI voice mode, I found I could:

  • Ask it to repeat something I didn’t quite catch

  • Request more detail on specific concepts that need clarification

  • Go on tangents when curiosity strikes or connections emerge

  • Get immediate feedback to verify my understanding in real-time

  • Skip topics I already know without feeling guilty or wasting time

  • Control the depth of explanation dynamically

At this time, ChatGPT’s voice mode is the gold standard for this use case: it’s completely seamless. No buttons, no typing, just natural, continuous conversation that flows like talking with an expert colleague who never gets tired, frustrated, or dismissive. It was, however, broken for my use case fairly recently, and for a stretch of time. This started with the release of GPT-5, and I did quite a bit of experimentation as research at the time. The cause appeared to be changes to voice mode that specifically targeted the most common use cases; OpenAI seemed to believe, probably accurately, that most voice usage was short-form conversation on lighter topics. The result, though, was that my ability to engage it for any length of time, with any reasonable depth, on a serious topic was utterly broken. To cut to the chase, this was fixed as soon as GPT-5.1 came out.

The Liberation of Ambient Learning

I can now learn while walking, driving, doing dishes. The constraint of traditional “sit-down study time” is gone. Learning becomes ambient, continuous, conversational. Also, and this is a huge value for some of us, I’m able to maintain a level of engagement and focus that I can easily lose with other methods and tools, because the interaction is tailor-made and I have complete control over it. If for a second I feel that focus slipping, I can take a break, redirect, or simply ask for something to be repeated. None of this is easy with any other medium: I can keep rereading a passage in a book, but I can’t ask the book to rephrase it a different way, give me examples, or take feedback in my own words so I can check whether my understanding is at the level I believe or hope it to be.

This isn’t just convenient—it fundamentally changes the economics of learning. Those 10-minute walks, 20-minute commutes, and random pockets of time throughout the day now become valuable learning opportunities. The opportunity cost of learning drops dramatically when you can do it while doing other things.

Consider the math: if you have 60 minutes of “dead time” per day (commuting, walking, chores), that’s 7 hours per week, or 350+ hours per year. At traditional learning rates, that’s multiple university courses’ worth of material, extracted from exactly the content you need, when you need it. This is compelling enough on its own, but when you add in the compression and efficiency aspect, the fact that one hour of a learning process like this might be worth three to five hours of engagement with a non-custom method covering the same information, the value proposition escalates even further.
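
Making that arithmetic explicit (the three-to-five multiplier is my subjective estimate, not a measurement):

```python
dead_minutes_per_day = 60
hours_per_week = dead_minutes_per_day * 7 / 60     # 7.0
hours_per_year = hours_per_week * 50               # 350.0, allowing ~2 weeks off
equivalent_hours = (hours_per_year * 3, hours_per_year * 5)
print(hours_per_week, hours_per_year, equivalent_hours)  # 7.0 350.0 (1050.0, 1750.0)
```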

My Concrete Next Steps

I’ll be working through this tensor logic curriculum entirely in ChatGPT voice mode over the next few weeks. Then I’ll return to the original podcast with a completely different level of understanding, hopefully ready to critically evaluate whether tensor logic is the right mathematical foundation for my AGI architecture. Regardless, and to some extent this is particular to my own story, none of this effort will be wasted, as I alluded to before. There is nothing I don’t want to learn or understand; it’s just a matter of recognizing that I have limited time and limited resources to meet an unlimited need and desire.

This is a testable prediction: After completing the curriculum, I should be able to:

  • Follow all, or at least most, of the technical discussion in the 1,700-line transcript

  • Critically evaluate Domingos’ claims in the context of my work

  • Map tensor logic concepts to my framework

  • Identify gaps and necessary extensions

  • Make an informed decision about implementation

The Meta-Lesson

We’re at an inflection point in learning. The question isn’t “What should I learn?” but “What learning experience should I design for myself?”

AI isn’t replacing teachers; it’s becoming an infinitely patient, context-aware learning partner that meets you exactly where you are. It will not get tired or frustrated, or insist on covering topics that you could skip or touch only lightly. This provides a potent intersection of depth and efficiency that traditional education simply cannot match.

The Workflow is the Insight

This isn’t just about one podcast or one tool. It’s about recognizing that we now have the infrastructure to do the following (a code skeleton of this loop follows the list):

  1. Capture any content in text form (transcripts, PDFs, papers)

  2. Contextualize it with our specific needs and goals

  3. Generate personalized learning paths that bridge content to objectives

  4. Execute the learning in whatever mode fits our life (voice, text, visual)

  5. Iterate as our understanding deepens
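
As a sketch, the loop looks something like this. Every function here is a deliberately unimplemented stub to be wired to whichever tool fits each step (rescript, Claude, voice mode, and so on).

```python
def capture(source: str) -> str:
    """Step 1: get the content as text (transcript, PDF, paper)."""
    raise NotImplementedError

def contextualize(text: str, goals: str) -> str:
    """Step 2: combine the text with your specific needs and goals."""
    return f"{goals}\n\n---\n\n{text}"

def generate_curriculum(context: str) -> str:
    """Step 3: have an AI build the personalized learning path."""
    raise NotImplementedError

def execute_and_iterate(curriculum: str) -> None:
    """Steps 4-5: work the material in whatever mode fits your life
    (voice, for me) and regenerate the path as understanding deepens."""
    raise NotImplementedError

def learn(source: str, goals: str) -> None:
    execute_and_iterate(generate_curriculum(contextualize(capture(source), goals)))
```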

The future of learning is personalized, interactive, and voice-native, at least for me and likely for many others. But more fundamentally, it’s about agency: taking control of your learning experience and designing it around your actual needs rather than accepting whatever standardized curriculum happens to exist. In a world where critical thinking and thoughtful knowledge acquisition appear to be in shorter supply than you would expect, this could be critical.

Epistemological Implications

This has broader implications for how we think about knowledge acquisition:

From Passive Reception to Active Design

  • Traditional: “Here’s the curriculum, adapt yourself to it”

  • New: “Here’s my goal, design the curriculum that gets me there”

From Linear Progression to Graph Traversal

  • Traditional: Chapter 1 → Chapter 2 → Chapter 3

  • New: Navigate the concept graph along paths that connect to your existing knowledge and feel more intuitive to you

From Isolated Learning to Contextualized Integration

  • Traditional: Learn concepts in isolation, hope to integrate later

  • New: Every concept is learned with explicit bridges to your existing mental models; concepts are tied directly to specific needs you have or goals you are working toward

From Time-Bounded to Opportunity-Bounded

  • Traditional: Learning requires dedicated “study time”

  • New: Learning happens in any available cognitive space, often time when you could otherwise do little more than listen to music or podcasts

Limitations and Open Questions

What I haven’t tested yet:

  • Does this work for deeply mathematical subjects that require working through proofs?

  • How does retention compare to traditional methods?

  • What happens when the AI is confidently wrong about something technical?

  • Can you build genuine expertise this way, or only broad understanding?

Since I’ve actually started a project, though it’s currently on the back burner, that is all about creating tooling around this overall concept and framework, these concerns matter to me quite a bit. I believe it’s important to have ways of validating artifacts and outputs throughout this process, from getting a second or third perspective on the initial learning outline, to validating your learning with a different model, and maybe even a different tool, to help ensure that what you’re learning and what’s being covered is accurate and appropriately relevant. A sketch of that cross-model check follows.
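
This is a hedged sketch of the cross-model idea: ask a second model, through a different vendor’s API, to critique the Claude-generated outline. The model name is a placeholder, and the file comes from the Step 3 sketch above.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
curriculum = open("curriculum_outline.md").read()  # from the Step 3 sketch

review = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable second model works
    messages=[{
        "role": "user",
        "content": "Review this AI-generated learning curriculum for factual "
                   "errors, missing prerequisites, and material irrelevant to "
                   "its stated goals:\n\n" + curriculum,
    }],
)
print(review.choices[0].message.content)
```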

What I’m watching for:

  • Quality of understanding vs speed of acquisition trade-offs

  • Whether voice-based learning creates different cognitive patterns

  • How well the learning transfers to actual implementation

  • Whether this scales to truly novel research territory (no training data)

In this area, I very much become my own lab rat, and basically my own research project. As I’ve alluded to, I am engaging deeply and sincerely with an area of knowledge and research that I have limited direct exposure to, even though I’ve been a technologist for over 30 years. I confidently feel this is doable, but that’s nothing more than a belief I need to realize one day at a time as I continue to engage this topic and engage AI, not only as tooling I use to do work, but in a genuine attempt at novel hypotheses and even research in this area. The work I’m doing requires that this new learning method works. If it doesn’t, I’m not going to get very far; but if it works as well as it appears to so far, it will be interesting to see how far I can take it.

Practical Details

The full learning outline I generated is documented on my site and will be in my knowledge-distillation-framework repo when I make it public (soon).

Tools mentioned:

  • app.rescript.info - YouTube transcript extraction

  • Claude (Anthropic) - curriculum generation

  • ChatGPT Voice Mode (OpenAI) - interactive learning execution

Source podcast: Machine Learning Street Talk - Pedro Domingos on Tensor Logic

Huge thanks to the Machine Learning Street Talk podcast!

This is one of my main sources of more in-depth information in the quickly evolving AI space.


I’m curious about others’ experiences with AI-powered learning. Have you found patterns or workflows that work particularly well? What are the failure modes you’ve encountered? Are there any other voice tools or interfaces that you find more effective or as effective?
