On sticky nominal wages: any attempt to overcome this bias has to deal with the entire mess of existing financial obligations that people take on which are denominated in fixed nominal dollars. My mortgage payment is fixed for the next thirty years in nominal dollars, an arrangement predicated on longstanding assumptions about average future price changes. My car payment, similarly fixed for five years. If prices of everything go down by some percentage, my single largest expenses don’t change, and so a comparable change in my nominal wage could easily be devastating. And if the prices of houses go down, I can’t even sell my house and buy a different one b/c I might not make enough on the sale to do so after paying back the existing mortgage. What proposals have been put forward that might overcome this?
This is very true, but I think it misses a key point in what makes katas useful for actually learning a martial art in the first place. As noted in Matt Goldenberg’s answer, partner work is much more important for actually learning to use a martial art. Just practicing a kata by rote may look pretty, but it won’t tell you anything about how to use it. My own best teachers would teach moves and combinations and katas by rote at first, then very quickly move on to exercises that require creative application. Things like:
Mushin practice—get attacked and respond quickly, and make it work
Randori practice—get attacked repeatedly by multiple opponents, and make it work
Sparring and grappling practice
All of these can be modified with constraints to make them easier or harder. Easier might be “Every attack against you will be this kind of punch.” Harder can be something like “Choose one part of one kata. Come up with a way to use it effectively against whatever your opponent decides to throw at you.” Or “Figure out X different ways to use that same sequence of moves, with only minor variations, in different situations, then execute it and see how well it works.” Hopefully you’ve been practicing smart all along, and visualizing opponents while practicing your katas in the air, and developing your understanding of the moves and body mechanics so your interpretations make sense!
To bring this back to the original question: what are we thinking of as the fundamental “moves” we’re stringing together to make a “kata”? And is a rationality “kata” practice a one-person or two-person activity? Maybe a rationality kata is made up of moves like “1) Figure out what question/problem you’re trying to solve. 2) Spend five minutes thinking about what you know that may be at all relevant. 3) Make a list of unknowns that would be useful to know, and estimate how hard it would be to get the answers. 4) Make a list of strategies and techniques you might try to find a solution, and estimate their odds of success and time required. 5) Rank the results of #4 however you like, and work your way down the list until the problem is solved. 6) Go back to #2 and repeat until you have a good enough solution.” Or they can be more specific—katas for calibration, katas for probability estimation, and so on. “Practicing” katas, then, would be a matter of repeating the words of the steps, to commit them to the mental equivalent of muscle memory. This will be partly crystallizing useful concepts into short handles that leap to mind when needed, a kind of mental Miyagi-ing.
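Steps 4 and 5 of that sketch could even be written down as a tiny prioritization heuristic. A minimal toy version (every strategy name and number below is invented purely for illustration):

```python
# Toy version of steps 4-5 of the "rationality kata" above: score each
# candidate strategy by estimated odds of success per hour required,
# then work down the list from most to least promising.
# All names and numbers here are hypothetical, for illustration only.

def rank_strategies(strategies):
    # Higher p_success per hour = more promising to try first
    return sorted(strategies, key=lambda s: s["p_success"] / s["hours"], reverse=True)

strategies = [
    {"name": "brute force", "p_success": 0.9, "hours": 40},
    {"name": "ask an expert", "p_success": 0.6, "hours": 1},
    {"name": "literature search", "p_success": 0.7, "hours": 5},
]

ranked = rank_strategies(strategies)
print([s["name"] for s in ranked])
# → ['ask an expert', 'literature search', 'brute force']
```

Of course the real kata is a mental habit, not a script; the point of writing it out is just that the moves are concrete enough to be drilled.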
Also, in martial arts, katas can be practiced many different ways. For my own black belt test, some of the kata portion included things like “Precisely execute every kata you know, in order, in less than X minutes total,” and “choose one kata you know, and execute is as slowly as possible, minimum Y minutes, while maintaining precision and focus on every part of every movement,” and “perform this kata correctly, but make it fit completely in a box no more than Z feet across.” I think I’ve also been asked to do the mirror image of a kata a few times, though not as part of a rank test. Similarly, analyzing and verbalizing why a kata is a certain way, and what it’s trying to explore and teach, can be helpful for guiding further practice.
Then, more applied practice might look more like randori, where you get random problems thrown at you repeatedly and you have to use a specific kata to solve them. Or like mushin, where you have to practice quickly figuring out which kata to use for a problem that gets thrown at you, and then make that one work.
And yes, I realize I’ve just accidentally come very close to describing some of Jeffreyssai’s methods, which is obviously not a coincidence, I’m sure.
I always enjoy reading posts like this, because I’m 34 and if I visit my parents on Long Island, my mom still won’t want me to go for a walk near their house by myself. My range as a kid was 0. Until I was in middle school I basically wasn’t even allowed to play in the backyard by myself unless I was somewhere someone could see me. I wasn’t allowed to go anywhere without being driven until I had a driver’s license (we lived in an extremely safe neighborhood, literally in the middle of a county park). Also, possibly related, I have an absolutely horrible sense of direction, and readily get lost in towns I’ve lived in for years going to places I’ve been at least a dozen times.
Note: my parents themselves grew up in NYC and were allowed to bike around much of the city by themselves by the time they were 10. I am amazed how much changed in the thirty years between their childhoods and my own.
I know that’s not very constructive, I just wanted to share and say I’m glad whenever I see people pushing in the opposite direction.
My impression is that the fact that it is hard to objectively determine why someone is enforcing the rules is part of the point. The effect on the woman in your example is the same either way, but I think the employee’s internal state does matter in terms of how it affects the future health and functioning of the organization. The employee warned the woman of the risks, she took them anyway, then chose to complain. If she hadn’t been warned, she’d have a point, and in that case refusing to give a refund just because it’s the rule instead of making an exception, without giving a reason like “I’m sorry, but I’m not allowed to, or I’ll be fired,” would be justifiably called blankfacing. It’s worse if the employee knew about the crowdedness and deliberately chose not to say anything, than if they were unaware or just didn’t think of it in time. It’s much worse if the manager also refuses to bend the rules, because that should be part of what managers are for. In that case it would also be appropriate to refund the popcorn, or offer free tickets to a future showing.
I’m curious what you make of this example from my own life. After graduating college I got a job, moved in with my girlfriend, and leased a car. As a result, my name wasn’t on the apartment lease (we were not in violation of zoning rules due to me living there, and the landlord knew I was moving in) or any of the bills (the accounts were already set up), and the car registration listed the leasing company’s address, not mine. To park on the street I needed to get a town permit. I went to the police station and was told I wasn’t allowed to get a resident permit because I couldn’t prove residency. The RMV told me there was no way or need to change the address on my license, I just needed to write the new address on a sticker and put it on the ID; this wasn’t good enough for the police station either (reasonable enough so far). I asked what I could do to prove residency under these circumstances, and the person at the police station said she didn’t know. No one else in the apartment had a car, so it wasn’t an amount-of-parking issue. We had a visitor’s parking permit, and I used that for lack of a better option. I got a ticket for being a resident using a visitor’s permit. I went back and explained this to the person who issues the permits, and she noted that yes, that’s how the system works, and no I still can’t have a permit, and yes I will keep getting and have to pay tickets. She admitted that the rules are set up that way to make it hard for students to prove residency and get parking permits, in order to preserve spots for non-student residents, and this was catching me even though I was not a student. So I went back on other days and times until a different person was working, and then I was able to get a permit no problem—same documentation. I think “blankface” describes the permit-refuser’s behavior extremely well, at least after I got the ticket.
Well, what drives the shifts in what resources define an age?
Accumulation of material reaches a point where more isn’t (currently) valuable.
A new resource replaces uses of an old one.
Improved technology makes a resource much cheaper, and accumulation of wealth and industrial base make production/scarcity of a resource no longer a limiting factor in most cases.
As @Dagon noted, in some sense “information,” once it exists at all, is not scarce, and is already easy to replicate and distribute. But 1) we don’t have all the information we could want, and 2) we don’t (know how to) use it effectively. The former is a matter of research+sensors+any other data collection (solvable through hardware and personnel), the latter is a problem of intelligence/data science/analysis/data access/knowing what we want.
Society is doing...not a great job, but an ok one, at recognizing the importance of the former and investing in it. We still kinda suck at the latter, which is related to this site’s focus on both AI and alignment, and that seems like a strong candidate for the next limiting factor that could define an age.
Alternatively, that could turn out not to be a big deal (we get AI right, at which point cheap copying and hardware make AI scarcity not a thing). At that point we should have enough know-how to collect enough matter for our needs and continuously process and recycle it into whatever forms we want. It seems like we then go back to energy being the limiting factor in running our machines—securing supply and dissipating waste heat.
I freely grant that this maximally strengthened version of the orthogonality thesis is false, even if only for the reasons @Steven Byrnes mentioned below. No entity can have a goal that requires more bits to specify than are used in the specification of the entity’s mind (though this implies a widening circle of goals with increasing intelligence, rather than convergence).
I think it might be worth taking a moment more to ask what you mean by the word “intelligence.” How does a mind become more intelligent? Bostrom proposed three main classes.
There is speed superintelligence, which you could mimic by replacing the neurons of a human brain with components that run millions of times faster but with the same initial connectome. It is at the very least non-obvious that a million-fold-faster thinking Hitler, Gandhi, Einstein, a-random-peasant-farmer-from-the-early-bronze-age, and a-random-hunter-gatherer-from-ice-age-Siberia would end up with compatible goal structures as a result of their boosted thinking.
There is collective superintelligence, where individually smart entities work together to form a much smarter whole. At least so far in history, while the behavior of collectives is often hard to predict, their goals have generally been simpler than those of their constituent human minds. I don’t think that’s necessarily a prerequisite for nonhuman collectives, but something has to keep the component goals aligned with each other, well enough to ensure the system as a whole retains coherence. Presumably that something is a subset of the overall system—which seems to imply that a collective superintelligence’s goals must be comprehensible to and decided by a smaller collective, which by your argument would seem to be itself less constrained by the forces pushing superintelligences towards convergence. Maybe this implies a simplification of goals as the system gets smarter? But that competes against the system gradually improving each of its subsystems, and even if not it would be a simplification of the subsystems’ goals, and it is again unclear that one very specific goal type is something that every possible collective superintelligence would converge on.
Then there’s quality superintelligence, which he admits is a murky category, but which includes: larger working and total memory, better speed of internal communication, more total computational elements, lower computational error rate, better or more senses/sensors, and more efficient algorithms (for example, having multiple powerful ANI subsystems it can call upon). That’s a lot of possible degrees of freedom in system design. Even in the absence of the orthogonality thesis, it is at best very unclear that all superintelligences would tend towards the specific kind of goals you’re highlighting.
In that last sense, you’re making the kind of mistake EY was pointing to in this part of the quantum physics sequence, where you’ve ignored an overwhelming prior against a nice-sounding hypothesis based on essentially zero bits of data. I am very confident that MIRI and the FHI would be thrilled to find strong reasons to think alignment won’t be such a hard problem after all, should you or any of them ever find such reasons.
Note: even so, this objection would imply an increasing range of possible goals as intelligence rises, not convergence.
@adamShimi’s comment already listed what I think is the most important point: that you’re already implicitly assuming an aligned AI that wants to want what humans would want to have told it to want if we knew how, and if we knew what we wanted it to want more precisely. You’re treating an AI’s goals as somehow separate from the code it executes. An AI’s goals aren’t what a human writes on a design document or verbally asks for, they’re what are written in its code and implicit in its wiring. This is the same for humans: our goals, in terms of what we will actually do, aren’t the instructions other humans give us, they’re implicit in the structure of our (self-rewiring) brains.
You’re also making an extraordinarily broad, strong, and precise claim about the content of the set of all possible minds. A priori, any such claim has at least billions of orders of magnitude more ways to be false than true. That’s the prior.
My layman’s understanding is that superintelligence + self modification can automatically grant you 1) increasing instrumental capabilities, and 2) the ability to rapidly close the gap between wanting and wanting to want. (I would add that I think self-modification within a single set of pieces of active hardware or software isn’t strictly necessary for this, only an AI that can create its own successor and then shut itself down).
Beyond that, this argument doesn’t hold. You point to human introspection as an example of what you think an AGI would automatically be inclined to want, because the humans who made it want it to want those things, or would if they better understood the implications of their own object- and meta-level wants. Actually your claim is stronger than that, because it requires that all possible mind designs achieve this kind of goal convergence fast enough to get there before causing massive or unrecoverable harm to humans. Even within the space of human minds, for decisions and choices where our brains have the ability to easily self-modify to do this, this is a task at which humans very often fail, sometimes spectacularly, whether we’re aware of the gap or not, even for tasks well within our range of intellectual and emotional understanding.
From another angle: how smart does an AI need to be to self-modify or create an as-smart or smarter successor? Clearly, less smart than its human creators had to be to create it, or the process could never have gotten started. And yet, humans have been debating the same basic moral and political questions since at least the dawn of writing, including the same broad categories of plausible answers, without achieving convergence in what to want to want (which, again, is all that’s needed for an AI that can modify its goal structure to want whatever it wants to want). What I’m pointing to is that your argument in this post, I think, includes an implicit claim that logical necessity guarantees that humankind as we currently exist will achieve convergence on the objectively correct moral philosophy before we destroy ourselves. I… don’t think that is a plausible claim, given how many times we’ve come so close to doing so in the recent past, and how quickly we’re developing new and more powerful ways to potentially do so through the actions of smaller and smaller groups of people.
I know it’s off topic, but I hope Omega is precise in how it phrases questions, because Paris is in Ohio, and the Eiffel Tower is in Cincinnati.
This sounds like the individual version of EY’s explanation of how Schwarzenegger became governor of California.
It also sounds a lot like Scott Alexander’s explanations of predictive processing.
He does point out that school didn’t teach him anything useful and now he’s Scott Alexander, and it didn’t teach me much of anything either, so search your experiences and draw your own conclusions, and then draw your own secondary conclusions about the pandemic.
Well, it took 15 years, but school did eventually teach me to stop trusting authority figures...
I don’t mention Vitamin D as much as I should, as it’s one of the practical things an individual can do that has high expected value in terms of preventing or helping with Covid, that would be a good idea even without Covid. And yet I struggle to remember to take it.
Do you take anything else daily, successfully? I find weekly pill boxes labeled with the weekdays helpful; I fill them once a week. That said, vitamin D doesn’t necessarily have to be taken daily: it builds up slowly in the body over time, so you could easily double up after days you miss. One time I had an actual diagnosed deficiency, and the prescription vitamin D was a huge dose (50k IU) once a week for 8 weeks.
I think it’s interesting that you label protein and vegetables as high-satiety foods, when that just isn’t the case for me. Lean meats and veggies satiate me for longer than refined grains, but not nearly as long as food higher in fat, as long as they’re relatively healthy fats (olive or avocado oil, grass fed butter and cream, cheese, nuts and seeds, things like that). That result definitely varies somewhat between people, but my experience isn’t out of the ordinary. Eating veggies or protein without fat just leaves me feeling full but unsatisfied, waiting until my stomach will let me eat more.
I agree with your point about the magnitude of the change. People didn’t suddenly start eating vastly more food after 1980. But that potentially cuts both ways: most of the other trends in diet and exercise were gradual and started much earlier, yet weight wasn’t increasing at a population level then. So why would slight reductions reverse the trend now, when slight increases didn’t generate it before? Why recommend this specific slight intervention when so many other things have changed in our lives and environments, especially when you know it will just come off as insulting to most people who’ve actually struggled to lose weight?
Yes, sometimes it is that simple. I know people who’ve just cut out soda and/or started walking for half an hour a day and lost tens of pounds in a year. And I’m glad for them! But not everyone’s body responds that way, and that’s kinda the point.
Edit to add: also, if the amount of calorie variation needed to lose 20 pounds really were as small as you say, at the level of a single cookie weighing less than 50 grams, then no, intuition for portion sizes would not be sufficient for controlling food intake, and you really would have to measure things. That would mean that being off by a teaspoon of oil when grilling a chicken breast in a pan each day is worth 5 pounds of body fat over time, and that’s just one part of one meal. Ditto for replacing a cup of strawberries with the same volume of apple or melon, or a cup of apple or melon with the same volume of banana—which are the kinds of things that over time just take way too much mindshare to keep up with for every single food decision, even for smart people who like math and measuring things.
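For what it’s worth, the crude arithmetic behind the teaspoon-of-oil example checks out, assuming the standard (and much-criticized) rule of thumb of ~3500 kcal per pound of body fat and ~40 kcal per teaspoon of oil:

```python
# Back-of-the-envelope check of the teaspoon-of-oil example.
# Both constants are rough rules of thumb, not precise physiology.
KCAL_PER_POUND = 3500   # approximate energy content of a pound of body fat
TSP_OIL_KCAL = 40       # roughly one teaspoon of cooking oil

# Being off by one teaspoon of oil every day, for one year:
pounds_per_year = TSP_OIL_KCAL * 365 / KCAL_PER_POUND
print(round(pounds_per_year, 1))  # → 4.2
```

So yes, under pure calorie bookkeeping, one unmeasured teaspoon a day is on the order of 4-5 pounds a year, which is exactly why eyeballing portions can’t hit that precision.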
This matches my experience very closely as well, though I’m only about halfway to my goal (dropped from 238 to 215, want to get down to the low 190s) after 4 years of trying a bunch of different things.
What the OP is suggesting doesn’t work in practice for rats and mice, let alone humans who have many more levers with which to confound simple interventions through behavior, conscious or not.
It took me eight years to gain 40 pounds. That’s a difference of about 50-200 calories per day (increasing as base weight rises and it takes more food to generate a sustained weight gain), on average, by pure calorie math. AKA initially no more than the difference between standing vs sitting for one hour, walking an extra 0.5 mile vs not, or eating half an apple vs not. Seems like it should be a breeze to fix! Just a few minutes a day, or one simple action! And yet the years in which I switched to a standing desk (an extra 4-6 hours standing daily) I didn’t lose any weight, nor did I gain any when I stopped. The year I hiked 300 miles more than I normally do as part of a challenge, no weight loss. And all of that is in line with the research that exercise is not that helpful for controlling weight most of the time.
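The low end of that 50-200 range is easy to check with the common (if crude) ~3500 kcal-per-pound rule of thumb:

```python
# Pure calorie math for 40 pounds gained over 8 years.
# The 3500 kcal/pound figure is a rule of thumb, not exact physiology.
POUNDS_GAINED = 40
YEARS = 8
KCAL_PER_POUND = 3500

daily_surplus = POUNDS_GAINED * KCAL_PER_POUND / (YEARS * 365)
print(round(daily_surplus))  # → 48
```

That is, the naive model says the entire eight-year gain came from a surplus smaller than half an apple a day, which is the scale of noise no one can consciously track.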
Eating keto (for me, <10% carbs, <20% protein) did help me lose weight steadily, and gave me more energy, but it just wasn’t sustainable for me. I basically lost the ability to eat with other people in many circumstances. 1) That’s very isolating, 2) I tend to eat more when eating alone, 3) it’s not feasible when eating is tied to work events or travel, and 4) on days when I did eat carbs I got significant temporary side effects, I couldn’t just take a one day break for Thanksgiving and Christmas (and birthdays, and Easter, and anniversaries, and...). I managed it for all of 2017, then stopped. Lost 20 pounds, gained 10 back. Then in early 2020 I cut out almost all refined oils and sugar and reduced refined grains by more than half, and lost that 10 again. This January I started 16:8 IF and lost another 5. I’d need to lose another 25 to hit an officially healthy BMI. With IF, as with keto previously, I have more energy, better mood, and less hunger between meals. Also, with cutting out refined ingredients, I don’t even enjoy most fast food and sweets anymore, they taste fake and have no depth of flavor.
I also notice that I never fidget anymore when sitting still. I noticed this change during grad school, which is about when I started gaining most of the weight, when for most of my life before that I tapped my foot to the point of regularly needing to be told to stop shaking the car. This is apparently equivalent to hundreds of extra calories burned per day according to some studies, and if I could somehow upregulate fidgeting I would apparently lose my remaining 20 pounds in under a year, without any other changes! But of course I can’t do that, and I have no idea why this changed, or if there’s any meaningful causal relation between that and my weight. Just one of many unconscious factors I’ve noticed.
Personally I am very slow at typing on my phone. Always have been, I’m old at heart. I also find inputting values with large exponents to be inconvenient and slow. So if I’m not by my laptop, I tend to do quick calculations in my head, then only use my phone to double check order of magnitude. I actually prefer pencil and paper to typing on my phone a lot of the time.

Edit to add: My wife prefers to use her calculator, and is usually a tad slower than me, but does catch errors I miss, maybe one time in 10?
Manufacture? No. Use? Only at lab scale, in animals or a few patients for small studies. Design? Yes, and the design process did not require any additional new tech nor the resources of a large pharmaceutical company.
Yes, old school vaccines have a lot more history behind them and large organizations were more familiar with them and so they were better equipped to get them through trials and scaled up for deployment. But the mRNA vaccines that did make it through seem to be more effective than those old school vaccines.
Of those 18, how many actually failed trials vs. other reasons for not having come to market (didn’t get funding, didn’t have the pre-existing expertise needed, didn’t move fast enough relative to competitors)? Also, four of those 18 were BioNTech, and counting that as a success and three failures seems like a mistake when it’s the same company trying multiple things initially and then proceeding with the best one.
How many old-school vaccine development efforts didn’t pan out, or only got approved because of extensive government support in their countries of origin?
My take is that we have had, for at least a handful of years, the ability to design a new mRNA vaccine against a novel virus in a matter of days, test it in a matter of months, and scale it up in less than a year. Instead, the companies founded to develop that kind of technology had to go in a different direction (targeting cancer), and without the pandemic they would have languished much longer without bringing any mRNA therapeutics to market at all. The pandemic cut through enough bureaucracy that they got a product out and built manufacturing capacity, and now that they have done that, they’re quickly able to repeat that success for other diseases.
I do not expect the pandemic to lead to a flurry of other new traditional vaccines, because those haven’t suddenly gotten easier or cheaper to develop, we just threw more resources at them for covid.
As far as I can tell, if we had had better regulatory and research policy, we could have lived in the world where Moderna or BioNTech had been working on infectious disease mRNA vaccines all along, and launched an improved flu vaccine back in 2016 or so, so that by early 2020 they already had some manufacturing capacity and supply chains in place that they could expand and replicate (with at least the medical community knowing this was a thing that had been used millions of times). I do not believe that that world ever had to have made any technological advances that ours didn’t make, yet they would have been able to mass produce our most effective covid vaccines much, much faster than we did. They would know that they knew how to fight a virus.
Edit to add: I worry that many people (1) think this is what a worst-case-scenario pandemic looks like, and (2) think that next time there’s a new disease, the solution will be to mask up and shut down indefinitely, instead of immediately designing an mRNA vaccine, conducting large trials as fast as possible, and pre-emptively manufacturing hundreds of millions of doses so they’re ready to go ASAP, with an expectation that all relevant regulatory agencies will work round the clock to remove all unnecessary roadblocks to approval.
Now I realize there’s another: the mRNA vaccines are not that new, they simply never made it to the public before. Just something that dawned on me.
Early papers on mRNA therapeutics date back to the late 1980s/early 1990s, with a number of small scale tests and trials starting by the late 1990s. Making custom arbitrary mRNA has been affordable/feasible since at least the mid-2000s. BioNTech was founded in 2008, and Moderna was founded in 2011, which means the relevant tech at that point was already far enough along to warrant founding companies that were going to need a lot of funding to bring anything to market. But instead of making vaccines targeting infectious diseases, they both mostly targeted cancer immunotherapy. I assume that’s because those are the treatments they were able to get funding for, even though it’s a much harder problem technically.
I don’t know what the first year was in which they could have designed a successful vaccine against a novel virus in a day and a half, but the odds of that happening just before covid hit are obviously very low, especially since more recent trials and studies are showing that we can also quickly develop (better) vaccines against the flu and malaria, and the success with covid was not an unlikely outcome.
RE: That example of a company requiring only the vaccinated to go into the office: It could be worse. I know of at least one company where they are letting anyone who declares themselves to be afraid of getting covid stay home, but are having everyone else come back. As you might imagine, the former group has a much higher vaccination rate than the latter.
This is true, but implementing it as general policy may benefit from a multilevel world model, where you’re aware of the continuous underlying reality but set discrete thresholds anyway. The space of states may be nearly continuous, but that is true on many axes at once, whereas the number of actions we can take in a given window of time to move along those axes is bounded, and we need to make discrete choices of how to prioritize. And ideally, set aside time periodically to review the prioritization process. That last one, recognizing that the approximations aren’t eternal, seems to be where a lot of people stumble.
My own experience is that in many contexts these models live in different people’s heads. Biologists and (most?) doctors know there is a spectrum of health, but use simpler discrete models for treatment decisions. This creates its own problems (when insurers and regulatory agencies enforce them rigidly, for example), but also lets them help more people on average.
Personally I don’t mind spitting out the pits at all, but for cases where it may be annoying (like in a fruit salad) I use a cherry pitter with a hopper: https://m.media-amazon.com/images/I/31vipTWimyL._AC_.jpg
Fun, too. Though it does tend to spray some cherry juice around.