Cheaper sodium production would also be great for reducing the cost of sodium-ion batteries, which, with some more development and scaling, I could easily see outperforming lithium for stationary applications.
Fun fact on height: I’m almost exactly 5′11.5″, and my wife is the one who pushes me to claim I’m 6′0″ instead of 5′11″ (not for dating, we’re monogamous, just in life). She’s 5′10″, so that extra half inch is very clearly visible to her when standing next to me.
I’m curious whether we know: is there a 180 cm effect? Does the rest of the world get away with being a whole inch shorter without feeling the need to lie?
If that’s something you want to work on, that’s excellent. Math is useful, and can be a lot of fun even if it doesn’t come naturally to you. Consider carefully whether a classroom is the best place for you to learn it. It might be. It might not.
It should go without saying that the answer ultimately has to come from you and not a stranger on the internet.
If you wanted to go into a STEM field, or a particular type of grad school, or become a scholar of some kind, or take a job that needs a professional certification with a particular degree type, that would constrain your choices and make this process in some sense ‘easier.’
Barring that, if you want to choose a humanities major, sure, some are more directly ‘useful’ than others. But, taken seriously, all the liberal arts cover the core skills of how to learn, how to study, how to grow, how to think, how to write, how to form an opinion, and how to decide what you care about. Actually getting that out of your schooling is easier if you pick a path you really like. There are, at least for now, many jobs and careers that require a degree without much reason to care what that degree is in. Who you meet/know, where you are, and how you present yourself are just as important.
I notice you don’t really mention what you want your life to actually look like after you graduate, which is kinda important for figuring out how to back-chain to what you might want to pursue now, or at what kind of school. Do you see yourself in an office job? At a machine shop? On a farm? Do you want to work at a startup? Put yourself in a place where there are opportunities to try things out and meet people related to what you think you might want to do.
Keep in mind that, “I don’t know, so I’ll hold off, try something I can do with my associate’s degree, and get my bachelor’s degree later or part time once I do,” is also an option. Some schools, like ASU, or a whole bunch of schools through Coursera, offer online degree programs, which you can enroll in from anywhere.
And personally, I wouldn’t hinge your future plans too much on how you think AI will progress. The world may change radically in all kinds of ways, and you’ll want to try to see that coming as best you can, but no path is certain enough to justify skipping the baseline plan of becoming a competent adult who can function in society as it exists today.
As @localdeity said, you won’t find what you’re asking for, because it isn’t strictly true among humans. The way our minds are constructed, we are not equipped by default to accept arbitrary information and interpret and apply it effectively. There are theorems about how, in the limit of infinite computing power, a mind that correctly updates beliefs in response to evidence should never become predictably worse off (either epistemically or in its ability to make plans to achieve goals) as it acquires new information, but those theorems don’t strictly apply to our bounded, heuristic-laden, limited minds.
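For reference, the cleanest version of the theorem I have in mind (my own statement of the standard value-of-information result, added here for concreteness): for an ideal Bayesian agent choosing an action $a$ to maximize expected utility $U$, observing a variable $X$ before acting can never lower expected utility,

$$\mathbb{E}_{X}\left[\max_{a} \mathbb{E}[U \mid a, X]\right] \;\geq\; \max_{a} \mathbb{E}_{X}\left[\mathbb{E}[U \mid a, X]\right] \;=\; \max_{a} \mathbb{E}[U \mid a],$$

since the expectation of a maximum is at least the maximum of expectations. Note the hidden assumptions: observation is costless, and updating is exact. Those are precisely the assumptions that fail for minds like ours.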
That said: it’s true enough that it’s worth having a very high bar against trying to force yourself to believe in falsehoods you think are useful, or convincing others to do so. Even if you can do such a thing, it’s dangerous in a variety of ways, some obvious and some not so obvious. It’s definitely worth knowing that even when a fiction is useful in the short term, it’s almost always the case that once you recognize it as such, you as an individual are going to be better off admitting it and getting through to the other side where you understand why it is a useful approximation or pragmatic implementation of some deeper and more-true principle.
Ok, that all makes sense, and yes I’m very familiar with vipassana meditation. FWIW I credit MCTB2 with about 30% of how I got out of a 12-year-long depression.
I think it’s always good to have more presentations of ideas from different perspectives. I would say that a lot of what you’re describing is covered in the Mysterious Answers part of the Map and Territory sequence and the A Human’s Guide to Words part of the Machine in the Ghost sequence. One thing that gets mentioned many times, I think in the posts but definitely in the comments, is a set of anecdotes from “Surely You’re Joking, Mr. Feynman!” which, if you haven’t read it, is a great lighthearted description of some of these kinds of not-actually-science that get passed off as science.
Also, respectfully, Wittgenstein moved in rarefied circles, and in that quote was describing (and correctly criticizing) a much higher standard of understanding than most elite college graduates have, let alone the rest of society. You can tell, because the ‘modern system’ gave way to the postmodern system, whose pioneers mostly correctly diagnosed the problem and were then promptly misunderstood in all sorts of useless, destructive, and ridiculous ways.
You’re exploring ideas you find interesting, taking the exploration seriously and giving it real effort. I respect that. However, the way you write gives the impression of trying to preserve some sort of sacred mystery about the ideas you’re exploring, instead of trying to resolve your own confusions and thereby replace the mystery with deepened understanding wherever you can.
For example: I’m not sure what it is about the concept of intrinsic nature that got you thinking about this, but you correctly notice that this is not a concept that actually helps explain anything, and doesn’t accurately describe the nature of the world you live in. Congratulations! Yes, really. But then you dwell on this in ways that don’t seem to add anything further except a vibe of mysterianism.
I’m not sure what your goal is in writing this, but for your own explorations, consider that there are a lot of other directions you could choose to take your developing understanding and use it to build on itself. Ways that acknowledge you’ve dissolved the concept and asked what’s next. For a few examples:
You might ask, well, what is the emptiness like? Actually, for nearly a century now, that’s been a question of physics! The void is pregnant with infinite possibilities. By which I mean, not something mysterious, but that a vacuum has a precise structure, which gives rise to the laws of physics, which lets us calculate the behavior of all sorts of other things, and when we run the experiments, the calculations are right, because the models of vacuum structure are pointing at something inherent in the nature of the real world. As our models describe it, the void spontaneously gives rise to virtual particles, which is how forces are conveyed across space. The void interacts with matter: for example, you can see this in the Casimir effect, where conductive plates change the allowed quantization of the electromagnetic field, which changes the field’s zero-point energy, which creates a force between the plates. Emptiness is fascinating not as a deep and abiding mystery, but as a fact about the behavior of reality. It mediates a complex web of interactions. The interactions themselves exist. Other physicists are asking where the structure came from, what space and time are made of, or what might replace them as deeper descriptions of the reality which the quantum spacetime vacuum description tries to model.
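(For concreteness on the Casimir example, the standard idealized calculation for two perfectly conducting parallel plates separated by a distance $d$ predicts an attractive pressure

$$\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}},$$

so halving the gap multiplies the force sixteenfold. That’s textbook physics rather than anything original here, but it illustrates just how precise and testable the structure of ‘empty’ space is.)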
You might ask, well, what does it all mean now that we know it’s empty? This is a different kind of question, but if you’re wondering about meaning and value and how or whether a concept like “water” can have meaning in a world where “intrinsic nature” is not a thing, have you tried reading the blog Meaningness? There are a lot of reasons I recommend it. One is to help find a way out of the nihilism that for many often follows the realization that things lack intrinsic nature or meaning. Another is to highlight that oftentimes, the answer to a deep question depends entirely on the reasons someone is asking it in the first place. In other words, the answer (and the meaning) arise from the complex web of interactions that gives rise to a mind that has a reason for asking a question using a particular set of words, and the same set of words can call for a different answer when posed by a different mind. This isn’t fundamentally mysterious. For example, the answer to “Where do babies come from?” is different depending on whether the asker is a toddler, an obstetrician, or a new hire at an infant daycare center, as well as whether the answerer is a mom, a kindergartener, or a med student. But there’s a lot to explore in how it cashes out in practice.
You might also ask, why did I (or why do others) think intrinsic nature is real, or important? What did I think I was getting out of it, do I still think I need that, and if so, where am I getting it now? If not, what changed, what dependencies in my mind resulted in the change, how do I feel about that, and how do I want to react to that? What else in my mind is downstream of those things and should also change in response? If you don’t clarify the stakes, the reasons you’re having the discussion or exploring the idea, then what comes out is likely to look like nonsense to anyone who isn’t you, and any actually useful insight is likely to be lost even to future-you.
I had previously only been paying for ChatGPT and Gemini, not Claude. I have now resumed my paid Claude subscription. Thanks, Anthropic!
FWIW, that was only Picard’s answer: definitive, as befits a ship’s captain. The judge’s answer was that they were not equipped to make such a determination, and in the face of that uncertainty, the right choice was to defer to Data himself to explore it. They also did a very similar episode in Voyager regarding whether the captain or crew had the right to edit the memories of the holographic doctor, with basically the same conclusion.
I think questions of AI moral personhood will also, for some people, force us to really confront what we claim are our opinions regarding the use of violent or deadly force. It makes a lot of the implicit and unexamined value determinations, that we all make all the time, explicit and scary, The Good Place-style.
There’s a part of me that suspects we (collectively, not individually) may not really grapple with digital personhood until we learn how to store human minds in digital form (assuming such a thing happens in our timeline). If those two teddy bears were not AIs, but the devices that stored your grandparents, or a thousand strangers, well, then what?
If you were going to have someone write a book like that, who else would you choose?
If I had my druthers, I might make it a trio and add Euripides, one of his contemporaries, or a modern classicist who had deeply studied the Bacchae and the Dionysian cults; someone who understood the dual nature of Dionysus enough to value the ideas of eudaimonia and ecstatic madness while recognizing their dangers if used improperly, as a counterpoint to the Buddhist attention to alleviating suffering over elevating joy. (O/T: Happiness aside, I imagine the Dalai Lama would have a lot to talk about with an expert on another religion whose god repeatedly dies, then returns to the world, in an eternal cycle of renewal, growth, and transformation.)
The probability of finding a ‘statistically significant’ relation somewhere in this dataset is p > 95%^28 = 23.8%. Better than 3⁄4 times.
I think you mean “p(>=1 ‘statistically significant’ result) = 1 - (.95^28) = 76.2%”?
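To spell out the arithmetic (a quick standalone check I’m adding for illustration, assuming 28 independent tests at the conventional α = 0.05):

```python
# Chance of at least one false positive across 28 independent
# significance tests, each with a 5% false-positive rate.
p_none = 0.95 ** 28          # ~0.238: probability every test comes up negative
p_at_least_one = 1 - p_none  # ~0.762
print(f"P(at least one 'significant' result) = {p_at_least_one:.3f}")
```

The 23.8% in the quoted text is the probability of finding no spurious relation; its complement, 76.2%, is what matches the ‘better than 3⁄4’ claim.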
Not sure what he’s done on AI since, but Tim Urban’s 2015 AI blog post series mentions how he was new to AI or AI risk and spent a little under a month studying and writing those posts. I re-read them a few months ago and immediately recommended them to some other people with no prior AI knowledge, because they have held up remarkably well.
I never read the paper and haven’t looked closely into the recent news and events about it. But, I will admit I didn’t (and still don’t) find the general direction and magnitude of the results implausible, even if the actual paper has no value or validity and is fraudulent. For about a decade, leading materials informatics companies have reported that using machine learning for experimental design in materials and chemicals research reduces the number of experiments needed to reach a target level of performance by 50-70%. The now-presumably-fraudulent MIT paper mostly seemed to claim the same, but in a way that is much broader and deeper.
So: yes, given recent news we should regard this particular paper as providing essentially zero information. But also, if you were paying attention to prior work on AI in materials discovery, and the case studies and marketing claims made regarding same, then the result was also reasonably on-trend. As for the claimed effects on the people doing materials research, I have no idea, I hadn’t seen it studied before; that’s what I’m disappointed about, and I really would like to know the reality.
They are amusing, clever, and self-indulgent. They often have explanatory power. Sometimes I feel like they unnecessarily padded the word count.
I think this is an artifact of GEB’s age: it had to be written as a physical book. Imagine if it had been written today, as a Sequence, with hyperlinks. You could have the exact same content, but organized so you could easily jump around, back and forth, in whatever order and portions are optimal for different readers.
I will also say that, 20 years after I first read it, the dialogs are the pieces that let me remember the technical content at all, for the same reason I remember lyrics and poetic quotes better than sentences that lack those extra layers of structure and meaning.
They also serve as convenient mental shorthands. When I see people talking about the idea of LLMs using steganography to hide messages in their CoT or output, and doubting whether that is viable in practice, my thoughts jump almost immediately to Contracrostipunctus. When I talk to (or read things by) people who doubt that LLMs (or any other digital construct) could be ‘intelligent’ by whatever definition, or conscious, etc., I have a lot of reasons I disagree, but one of the first places my mind goes is Six Part Ricercar. I can then reconstruct more detailed explanations if I need to give them to others, but my thinking is faster because I don’t need to recreate them for myself.
I think this same idea is the main source of value I get from EY’s and Scott Alexander’s fiction, having read their nonfiction writing. Understanding all the detailed arguments is valuable, but calling them to mind is slow. It’s much faster to be able to think of Moloch, whale cancer, Fnargl, Ebborians, Baby Eaters, or beisutsukai, and then take the time if needed to figure out why I thought that. I think this is also similar to the skill Feynman talked about for spotting flaws in arguments he didn’t fully understand, by creating a concrete mental visualization that encoded some of the essential structure.
Yeah, I was going to say, in addition to its own merits, GEB is a great background read for The Mind’s I and I Am a Strange Loop.
It sounds like, on reflection, your previous post was less about reduction, and more about misapplying the idea of reduction in a way that ignores or elides map-territory distinctions, instead pretending our best known current map is actually reality. Would you agree with that?
Yes, my thinking is similar. Elementary school teachers often barely understand the math they are required to teach, and don’t have the fluidity needed to handle a more free-flowing discussion about a book that doesn’t conform to a specific curriculum. The whole system frequently retreats into drilling specific procedures that mean nothing to the teachers and students involved, even when the explicit stated goal is to help build understanding and problem-solving skills. The idea that math classes even could include reading books is just not part of the conversation. Only English classes assign books to read—not history, not foreign languages, and definitely not science and math. Related: I had exactly one math teacher, in seventh grade, who assigned a term paper on any math topic of our choice. I got a 70, the lowest math grade I ever received in any year, and it was because, as he told me in his own words, he didn’t understand what I’d written and couldn’t follow it.
I will say, there are some English-language books that deliberately incorporate math in ways that are both fun and educational, if you had a teacher able and willing to lead such discussions. There are many such books by Ian Stewart. Alice in Wonderland would be a fair choice, and the kids probably already know the story. For middle or high schoolers especially, it doesn’t have to just be fiction, either. For the “When will we ever need this?” crowd, something like Nonplussed! or Impossible?, both by Julian Havil, could be a welcome and eye-opening change of pace.
I love things like this, and always wondered why we never had these kinds of books as part of math curricula in elementary and middle school in the US.
I agree with that, yes.