The probability of finding a ‘statistically significant’ relation somewhere in this dataset is p > 95%^28 = 23.8%. Better than 3⁄4 times.
I think you mean “p(>=1 ‘statistically significant’ result) = 1 - (.95^28) = 76.2%”?
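For concreteness, here's a minimal Python sketch of that arithmetic (assuming 28 independent tests at a 0.05 significance threshold, which is the implicit model behind the quoted numbers):

```python
# Chance of at least one false-positive 'significant' result across
# n independent tests, each with false-positive rate alpha.
n = 28
alpha = 0.05

p_none = (1 - alpha) ** n        # all n tests come back non-significant: ~0.238
p_at_least_one = 1 - p_none      # complement: one or more 'hits': ~0.762

print(f"P(no 'significant' result)  = {p_none:.3f}")
print(f"P(>=1 'significant' result) = {p_at_least_one:.3f}")
```

The 23.8% in the quoted text is exactly P(zero hits); its complement, ~76.2%, is the figure that actually supports "better than 3⁄4 times."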
Not sure what he’s done on AI since, but Tim Urban’s 2015 AI blog post series mentions that he was new to AI and AI risk and spent a little under a month studying and writing those posts. I re-read them a few months ago and immediately recommended them to some other people with no prior AI knowledge, because they have held up remarkably well.
I never read the paper and haven’t looked closely into the recent news and events around it. But I will admit I didn’t (and still don’t) find the general direction and magnitude of the results implausible, even if the actual paper has no value or validity and is fraudulent. For about a decade, leading materials informatics companies have reported that using machine learning for experimental design in materials and chemicals research reduces the number of experiments needed to reach a target level of performance by 50-70%. The now-presumably-fraudulent MIT paper mostly seemed to claim the same, but in a way that is much broader and deeper.
So: yes, given recent news we should regard this particular paper as providing essentially zero information. But also, if you were paying attention to prior work on AI in materials discovery, and the case studies and marketing claims made regarding same, then the result was also reasonably on-trend. As for the claimed effects on the people doing materials research, I have no idea, I hadn’t seen it studied before; that’s what I’m disappointed about, and I really would like to know the reality.
They are amusing, clever, and self-indulgent. They often have explanatory power. Sometimes I feel like they unnecessarily padded the word count.
I think this is an artifact of GEB’s age—it had to be written as a physical book. Imagine if it had been written today, as a Sequence, with hyperlinks. You could have the exact same content, but organized so you could easily jump back and forth in whatever order and portions are optimal for different readers.
I will also say that, 20 years after I first read it, the dialogs are the pieces that let me remember the technical content at all, for the same reason I remember lyrics and poetic quotes better than sentences that lack those extra layers of structure and meaning.
They also serve as convenient mental shorthands. When I see people talking about the idea of LLMs using steganography to hide messages in their CoT or output, and doubting whether that is viable in practice, my thoughts jump almost immediately to Contracrostipunctus. When I talk to (or read things by) people who doubt that LLMs (or any other digital construct) could be ‘intelligent’ by whatever definition, or conscious, etc., I have a lot of reasons I disagree, but one of the first places my mind goes is Six-Part Ricercar. I can then reconstruct more detailed explanations if I need to give them to others, but my thinking is faster because I don’t need to recreate them for myself.
I think this same idea is the main source of value I get from EY’s and Scott Alexander’s fiction, having read their nonfiction writing. Understanding all the detailed arguments is valuable, but calling them to mind is slow. It’s much faster to be able to think of Moloch, whale cancer, Fnargl, Ebborians, Baby Eaters, or beisutsukai, and then take the time if needed to figure out why I thought that. I think this is also similar to the skill Feynman talked about for spotting flaws in arguments he didn’t fully understand, by creating a concrete mental visualization that encoded some of the essential structure.
Yeah, I was going to say, in addition to its own merits, GEB is a great background read for The Mind’s I and I Am a Strange Loop.
It sounds like, on reflection, your previous post was less about reduction, and more about misapplying the idea of reduction in a way that ignores or elides map-territory distinctions, instead pretending our best known current map is actually reality. Would you agree with that?
Yes, my thinking is similar. Elementary school teachers often barely understand the math they are required to teach, and don’t have the fluidity needed to handle a more free-flowing discussion about a book that doesn’t conform to a specific curriculum. The whole system frequently retreats into drilling specific procedures that mean nothing to the teachers and students involved, even when the explicit stated goal is to build understanding and problem-solving skills. The idea that math classes even could include reading books is just not part of the conversation. Only English classes assign books to read—not history, not foreign languages, and definitely not science or math. Related: I had exactly one math teacher, in seventh grade, who assigned a term paper on any math topic of our choice. I got a 70, the lowest math grade I ever received in any year, because, as he told me in his own words, he didn’t understand what I’d written and couldn’t follow it.
I will say, there are some English-language books that deliberately incorporate math in ways that are both fun and educational, if you had a teacher able and willing to lead such discussions. There are many such books by Ian Stewart. Alice in Wonderland would be a fair choice, and the kids probably already know the story. For middle or high schoolers especially, it doesn’t have to be just fiction, either. For the “When will we ever need this?” crowd, something like Nonplussed! or Impossible?, both by Julian Havil, could be a welcome and eye-opening change of pace.
I love things like this, and always wondered why we never had these kinds of books as part of math curricula in elementary and middle school in the US.
Yes, true, but somehow we don’t have that problem with veterinary care, even when there’s insurance involved. I don’t really know how likely it is for any given treatment to help my cat, or for how long, but the vet gives me a list of options and each of their prices, in advance, and then that’s what I pay. I pick based on a combination of my understanding, their recommendations, and my budget. It’s generally far more humane, more empowering, and less condescending than getting care for a human, because our society lets people take responsibility for their pets in ways it doesn’t let adults do for themselves.
Even besides that, though, the reality in (human) medicine is much, much worse.
Depending on whether I have insurance and exactly which kind, the base price of a service—not what I ultimately pay, but the total amount that my insurer and I pay—can vary by more than an order of magnitude. Even after the fact, it can be really difficult to know how much anyone is paying anyone else. I’ve had three different situations, with different providers and insurers, in which the provider kept applying payments to the wrong line items, in ways that messed up who was supposed to pay what and when, and that took months of calendar time and tens of hours on the phone to sort out.
As you noted, goods like prescriptions should be simpler to price out than medical services. But when I fill the same prescription at different pharmacies (which is every month, because I travel full time), the price has varied by as much as a factor of a hundred between pharmacies, and by 2-10x month to month at the same pharmacy. The price depends on the insurer. The price can sometimes be higher with insurance than without, because I can’t combine insurance with various magic-seeming discount programs like GoodRx that are available to anyone and that some pharmacists will apply for you without you even asking. But it can sometimes also be ultimately cheaper to pay the higher price anyway, depending on how your deductibles and copays work and when the magic plan-year end date falls. Many pharmacies won’t tell you the price before your Rx is in their system, and once it is in the system, you may not be able to change whether or not to use insurance. Many medications are difficult or illegal to transfer to a different pharmacy at all, or can only be transferred a certain number of times, or only after the first pharmacy has filled the prescription at least once.
I can’t tell if this is intended to be taken seriously or not, and I won’t bother pointing out the various individual false assumptions, misunderstandings, reasoning errors, non sequiturs, or contentless statements. Any modern LLM can handle that just fine if you want to know. But this sentence caught my eye:
Reduction is an operation of reason by the observer to extract the most relevant relations from the observed.
This is a misunderstanding of what “reduction” actually means, but I think it’s a very common one. I can totally see how, if that’s what you believe the word means, you would come to believe many of the other claims in this post. What this describes, though, is a form of fake reduction, and I really do recommend you take the time to read the Reductionism 101 sequence, especially the last 4 posts in it. Real reduction requires quite a bit more knowledge and understanding and perspective than most people imagine. See also the first handful of posts from Joy in the Merely Real.
One thing that’s not clear to me (and you may have discussed this in the previous posts, I don’t remember) is: was the previous structure even legally valid and enforceable? Can you write into the structure of a for-profit LLC that it has to act in accordance with some goal other than profit? Because as I understand it, a board member has a fiduciary duty to the company regardless of their own interests, or those of the organization or process that made them a board member. Someone recently highlighted to me examples of cases (in normal for-profit startups) where this gives you behavior like board members approving some measure, and then the same individuals, now acting as shareholders who can do as they please, voting against it.
Maybe the original OpenAI structure included a clever and enforceable way around this. But if not, then it’s possible the switch to a PBC closes a loophole whereby investors could have sued the board for acting according to the nonprofit’s interests instead of their own.
My instinctive response is: weight classes are for controlled competitions where fairness is what we actually want. For social status games, if you want to enforce weight classes, you need a governing body that gets to define the classes and the rules of the game, but the rules of social status games are frequently, by design, not fully expressible in precise terms. That isn’t necessarily a showstopper, but it does require admitting what range of the hierarchy you’re in and cannot rise above. As I understand it, the reason the self-sorting works today is that when people compete in the wrong weight classes, it’s not fun for either side. A Jupiter Brain might theoretically be amenable to playing a social game with me on my level, but at best it would be like me playing tic-tac-toe with a little kid who is old enough to realize I’m throwing the game but not old enough to have solved the game.
Personally, I’d much rather not spend my time on such games when I can manage that. But I don’t always have that choice now, and probably won’t always have it in the future either.
Thanks! This is an interesting angle I wasn’t much thinking about.
I anticipate this will lead to some interesting phrasing choices around the multiple meanings of “conception” as the discussions on what and how and whether AI’s ‘really’ think continue to evolve.
There’s a story about trained dolphins. The trainer gave them fish for doing tricks, which worked great. Then they decided to only give them fish for novel tricks. The dolphins, trained under the old method, ran through all the tricks they knew, got frustrated for a while, then displayed a whole bunch of new tricks all at once.
Among animals, RL can teach specific skills but also reduces creativity in novel contexts. You can train creative problem solving, but in most cases, when you want control of outcomes, that’s not what you do. The training for creativity is harder, and less predictable, and requires more understanding and effort from the trainer.
Among humans, there is often a level where the more capable find supposedly simple questions harder, often because they can see all the places where the question assumes a framework that is not quite as ironclad as the asker thinks. Sometimes this is useful. More often it is a pain for both parties. Frequently the result is that the answerer learns to suppress their intelligence instead of using it.
In other words—this post seems likely to be about what this not-an-AI-expert should expect to happen.
He makes some bizarre statements, such as that having a rare gene might protect you by denying the AI enough data to get ‘a good read’ on you, and that genetic variation will ‘protect you from high predictability.’
You know, even if this were true… if you’re a less predictable entity in a world where a sufficiently powerful AI wants to increase predictability, there are many simple and obvious classes of interventions that reliably achieve that. Mostly, those interventions look nothing like freedom, and you’re not going to like them.
I’m sympathetic to the point of view that this is necessary, though I wouldn’t call it “the” answer—I don’t think we can have high enough confidence that it is sufficient. That said, while you mention the reasons for skepticism of applying existing legal frameworks (which I agree with!), I think the hard step is writing the proposed new rules down.
What does a clear legal mandate look like? What are the requirements, which we are capable of writing down with legally-enforceable precision, that would (or at least could) be adequate without being predictably thrown out by courts or delayed so long they end up not mattering? How many people exist who are capable of making the necessary evaluations, and is the government capable of hiring enough of them?
There’s no reason for me to think that my personal preferences (e.g. that my descendants exist) are related to the “right thing to do”, and so there’s no reason for me to think that optimizing the world for the “right things” will fulfil my preference.
This, and several of the passages in your original post such as, “I agree such a definition of moral value would be hard to justify,” seem to imply some assumption of moral realism that I sometimes encounter as well, but have never really found convincing arguments for. I would say that the successionists you’re talking to are making a category error, and I would not much trust their understanding of ‘should’-ness outside normal day-to-day contexts.
In other words: it sounds like you don’t want to be replaced under any conditions you can foresee. You have judged. What else is there?
I can’t really imagine a scenario where I “should” or would be ok with currently-existing-humans going extinct, though that doesn’t mean none could exist. I can, however, imagine a future where humanity chooses to cease (most?) natural biological reproduction in favor of other methods of bringing new life into the world, whether biological or artificial, which I could endorse (especially if we become biologically or otherwise immortal as individuals). I can further imagine being ok with those remaining biological humans each changing (gradually or suddenly) various aspects of their bodies, their minds, and the substrates their minds run on, until they are no longer meat-based and/or no longer ‘human’ in various ways most people currently understand the term.
I know this can all be adequately explained by perfectly normal human motivations, but there’s still a small part of me that wonders if some of the unfortunate changes are being influenced by some of the very factors (persuasion, deception, sandbagging, etc.) that are potentially so worrying.
If I had my druthers, I might make it a trio and add Euripides, one of his contemporaries, or a modern classicist who had deeply studied the Bacchae and the Dionysian cults; someone who understood the dual nature of Dionysus enough to value the ideas of eudaimonia and ecstatic madness while recognizing their dangers if used improperly, as a counterpoint to the Buddhist attention to alleviating suffering over elevating joy. (O/T: Happiness aside, I imagine the Dalai Lama would have a lot to talk about with an expert on another religion whose god repeatedly dies, then returns to the world, in an eternal cycle of renewal, growth, and transformation.)