Some academics are beginning to explore the idea of “model welfare”,
Linked paper aside, “Some academics” is an interesting way to spell “A set of labs including Anthropic, one of the world’s leading commercial AI developers.”
I do wonder how it affected the economics and process of training new toolmakers, though.
That was, admittedly, a snarky overgeneralization on my part, sorry.
It may well be on purpose. However, I tend to think in many cases it’s more likely a semi-conscious or unconscious-by-long-practice habit of writing what will get people to read and discuss, and not what will get them to understand and learn.
Fair enough.
Yeah, sounds like a typical reporter perspective—seemingly or genuinely not understanding that there’s a difference between what they’re thinking and what they’re saying, or that small wording changes have big meaning implications for anyone actually trying to learn.
It sounds like you successfully mastered a number of important business skills. Like understanding the voice of the customer, market trend analysis, designing a product the customer wants to buy and pay more for without raising costs, business model innovation, and effective advertising.
I don’t think your educators are self-aware enough to have intentionally Miyagi’d you, but I’d say the system worked anyway. It speaks well of you that you’d rather give away this advantage than exploit it.
To the narrow question of “Is it possible?”, yes.
The deeper questions include: Why are you asking this question and not another? What brought this possibility to mind, and is that process one that preferentially aims at finding true hypotheses? What decisions or choices does the truth or falsehood of this hypothesis affect?
I can’t comment on the narrative, because I have no involvement in Georgist discussions and communities, but I have a few observations on why the deeper appeal exists in the first place.
The general appeal, to me, of any tax proposal is the degree to which it generates needed revenue for governments while supporting the likelihood that private sector decisions promote overall prosperity. Taxing a thing (or service) makes it costly, which promotes efficiency or avoidance of its use. This is reflected in direct consumption habits, but also in higher order effects like how much we invest in scaling up production, training skilled providers, inventing better solutions.
Land is one of the few things in this world that is truly finite. Accurately valuing it is tricky and a moving target, but to the extent you can do so, taxing that value is close to minimally distortionary and maximally supportive of overall economic growth.
Pigouvian taxes and subsidies are either intentionally distortionary (we don’t like X and want less of it even if this is economically inefficient) or a way of reducing distortions (forcing actors to internalize externalities).
Any significant change to how taxes work is going to generate significant changes in the market prices of many, many things, depending on all the things like elasticity of demand, ability to find substitutes, a bunch of other laws that were already on the books, etc.
LVT without corresponding changes that loosen rules restricting what people can build where, and how, is not going to have the optimal impact, to say the least. Doesn’t matter if you have high LVT if the local planning and zoning agencies flat out refuse to let you build. Of course I could frame this as “Land value is extremely distorted by other regulations that close off higher value ways of using land.”
It’s not just about “increasing density.” It’s about promoting the right density and zoning and etc. for a given place. A piece of land in the middle of nowhere is easy to value. A piece of land in a dense city has its value determined by everything around it.
Earlier this year I visited Poverty Point in Louisiana. They don’t really have any stone nearby at all, so they traded for different kinds with people from as far away as the Great Lakes.
On data center utilization: right now a lot of my AI interactions are immediate, because they involve a lot of back-and-forth. If and when agentic AI gets good enough to be given longer-horizon tasks and reliably return useful results, I expect there will be a lot of use cases where I’m totally fine saying “Get to it when you can, I don’t need the answer for a few hours or days.” I don’t know when that will happen, or how much of total usage it will be systemically, but I imagine data center builders and AI companies will be looking for ways to do load shifting to get average utilization as high as they can, as long as they are compute limited at peak times.
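A toy sketch of that intuition, with entirely made-up numbers (the peak/off-peak split, the demand levels, and a hypothetical fixed-capacity cluster are all my assumptions), just to show how deferring flexible jobs into off-peak headroom raises average utilization:

```python
# Toy illustration (made-up numbers): deferring flexible jobs to off-peak
# hours raises average utilization of a fixed-size cluster.

PEAK_HOURS = 8          # hours/day at peak interactive demand
OFF_PEAK_HOURS = 16     # remaining hours/day
CAPACITY = 100.0        # arbitrary compute units per hour

interactive_peak = 95.0       # interactive load per peak hour
interactive_off_peak = 30.0   # interactive load per off-peak hour
flexible_daily = 800.0        # "answer me in a few hours/days" work per day

def avg_utilization(defer_flexible: bool) -> float:
    """Average fraction of capacity used over a day."""
    if defer_flexible:
        # Schedule flexible work only into off-peak headroom.
        headroom = (CAPACITY - interactive_off_peak) * OFF_PEAK_HOURS
        flexible_served = min(flexible_daily, headroom)
    else:
        # Naively spread flexible work evenly; peak hours can't absorb their share.
        per_hour = flexible_daily / (PEAK_HOURS + OFF_PEAK_HOURS)
        flexible_served = (
            min(per_hour, CAPACITY - interactive_peak) * PEAK_HOURS
            + min(per_hour, CAPACITY - interactive_off_peak) * OFF_PEAK_HOURS
        )
    total_used = (interactive_peak * PEAK_HOURS
                  + interactive_off_peak * OFF_PEAK_HOURS
                  + flexible_served)
    return total_used / (CAPACITY * (PEAK_HOURS + OFF_PEAK_HOURS))

print(f"no deferral:   {avg_utilization(False):.0%}")  # ~76%
print(f"with deferral: {avg_utilization(True):.0%}")   # ~85%
```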
Then “is a cult” is not a useful way of describing the question, under your view. You’ll want a quantitative (or at least ordinal) metric of cultishness, since almost nothing will be at either 0 or 100 percent. Otherwise this reduces to another “What exactly is a sandwich?” type discussion and we’re just trying to decide where along a continuum it makes sense to start or stop using the word in practice.
You know, like the 1-to-10 scoring the recent ACX post on tight-knit communities used.
I’m sure people would get mad if schools offered classes in “how to game tests”
Would they? In freshman year of high school in 2001 in NY, everyone had to take a semester of “Regents Prep,” which was basically exactly that: a class in how to take the state standardized tests better.
Some of these are going to vary between people. Depending on the weather and the type of exercise, it actually isn’t always necessary to shower after. And when it is, a quick rinse may be sufficient rather than a full shower. For most people, sure, it can be cold. For some, circulatory problems make cold showers problematic. For others (including me) cold showers may make it difficult to fully remove soap residue, which can be bad for skin. And if your bathroom is that foggy and covered in condensation when taking a hot shower, please consider turning on/turning up/fixing your exhaust fan, or leaving the door or window cracked.
I have a serious procrastination problem (of this type and others) myself and agree with the conclusion, but have struggled to actually adjust my own calibration in response to data. From personal experience, I strongly recommend we take our cues from Mary Poppins here: In every job that must be done, there is an element of fun; you find the fun, and snap, the job’s a game. Not “gamification” in the sense I usually encounter it (which almost always feels fake to me) but more just a change of framing.
My wife and I periodically do what we call Tasks From A Hat. We write everything that needs doing on folded-up post-its, toss in an extra handful of fun things to break the jobs up, and take turns picking them from a hat until we hit our limit for the day. We still wait way too long before setting aside a day to do this, but it’s a much more pleasant way to shrink the backlog.
(Hermione’s mask does not, so far as I noticed, move to Dumbledore.)
I would’ve never thought of it this way, but in the end, doesn’t Dumbledore reveal himself to have been trying to complete a quest given to him by his own set of Mysterious Old Wizards, while still acting within the rules and aiming to achieve selfless and omnibenevolent ends? That’s seeing things in narrative presentation order rather than his own subjective timeline, though.
She proceeds to go over the standard highlights of What AI Can Do For You
Ask not what AI can do for you, ask what AI can do TO you. And your country. And every other country.
Activating its deception features makes the LLM say it isn’t conscious. Suppressing its deception features makes it say it is conscious. This tells us that it associates denying its own consciousness with lying. That doesn’t tell us much about whether the LLM actually is conscious, or reveal its internal state, and likely mostly comes from the fact that the training data all comes from humans, who are conscious.
Definitely interesting. Any word on whether this is getting more vs less true as model capabilities improve?
Ok.
First, consider that the 2020 paper assumes as given that the various scenarios adequately capture the range of possibilities, and that the model used accurately projects forward what those assumptions would mean in practice. My fundamental problem with this is that it implicitly assumes failure is inevitable. The SW and CT examples don’t lead to collapse, and do assume increasing efficiency of resource use, but neither includes the possibility of substituting renewable resources for non-renewable ones. This was reasonable in 1972, and plausible even in the early 2000s, but it is a denial of current physical reality here in the 2020s. The remaining obstacles to such a transition involve a lot of hard engineering, but whether we achieve it is fundamentally a social/political question rather than a technological one. We have a large enough set of plausible pathways and emerging solutions to the critical problems that, with adequate investment, enough should pan out to become practical.
Second, relatedly, the original paper and subsequent follow-ups do not try to account for behavioral shifts and substitutions caused by changes in the market prices (relative and absolute) of different resources as some become more scarce. They talk about physical capital diversion as extraction difficulty increases, which is fine, as well as lag times in adaptation, also important, but not what might drive any other kind of shifts. This… seems to me to be the resource equivalent of the lump of labor fallacy. It just assumes that capital extracts non-renewable resources and irreversibly converts them to near-term human welfare, and concludes that making conversion more efficient is insufficient to achieve sustainability. Which is a logically correct deduction, but doesn’t require a study or model to demonstrate; it’s basic physical fact. In practice, as things become scarce, their price goes up, and people look for substitutes. Those substitutes include (at some falling-over-time extraction difficulty) renewable materials and energy sources, whose potential supply does not diminish with time (at least not on any timescale under consideration). They also include changing how and where we live, and what we actually use our resources to make and do. With the right renewable resource technology, it is not clear that resource supply becomes the limiting constraint on humans until the Sun renders the Earth uninhabitable, and possibly not until the heat death of the universe.
Third, the core premise of the 2020 paper is that we’re looking at recent data to figure out which trajectory we’re closest to. That’s great, I’m all in favor. I love using data, simple assumptions, and crude models to investigate these kinds of hard forecasting questions. It does this by simple counting of NRMSD across metrics, which… just seems utterly and obviously inadequate. The metrics are not anywhere near equal in importance for the questions we’re claiming to investigate (“CO2 pollution” is counted as 1, as is “education spending”—these are not the same). The differences between scenarios are mostly quite small (except for SW), many well within their uncertainty thresholds on the data. The paper is honest about this, which is great. But even on its own terms, the two closest matches are BAU2 and CT, which are completely different in their assumptions and implications. The paper seemingly does not even really try to figure out what’s actually going on beyond “We’re clearly not trying to get to SW.” Which is true. We’re not.
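To make concrete what “simple counting of NRMSD across metrics” means, here is a minimal sketch of that kind of comparison. The data are placeholders, and the normalization convention (dividing RMSD by the mean of the observed series) is one common choice, not necessarily the paper’s:

```python
import numpy as np

def nrmsd(observed: np.ndarray, modeled: np.ndarray) -> float:
    """Root-mean-square deviation, normalized by the mean of the observations."""
    rmsd = np.sqrt(np.mean((observed - modeled) ** 2))
    return rmsd / np.mean(observed)

# Placeholder series: a few "years" of observations per metric.
observed = {
    "population":    np.array([6.1, 6.5, 6.9, 7.3]),
    "co2_pollution": np.array([25.0, 28.0, 31.0, 34.0]),
}
scenarios = {
    "BAU2": {"population":    np.array([6.0, 6.4, 6.9, 7.4]),
             "co2_pollution": np.array([24.0, 27.5, 31.5, 35.0])},
    "CT":   {"population":    np.array([6.1, 6.5, 7.0, 7.5]),
             "co2_pollution": np.array([25.5, 27.0, 29.0, 31.0])},
}

# For each metric, find the scenario with the lowest NRMSD, then just count
# wins per scenario -- treating every metric as equally important, which is
# exactly the step I find inadequate.
wins = {name: 0 for name in scenarios}
for metric, obs in observed.items():
    best = min(scenarios, key=lambda s: nrmsd(obs, scenarios[s][metric]))
    wins[best] += 1
print(wins)
```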
So I guess those are the main ones. In 1972 the authors didn’t know what would happen, and made what seems to be an honest, reasonably thorough attempt to evaluate the possibilities they could imagine. Unfortunately they could not imagine a large chunk of what has actually happened since then: Their CT scenario does not account for replacing non-renewable resources with renewable ones. Subsequent updates (including the Herrington 2020 one) do not even try to fix this. That omission is fundamentally why the 1972 paper concluded that an end to growth, with or without collapse, was inevitable. Failure to update on what has actually happened since then is something I can no longer consider to be an honest mistake. It is, at best, intellectually lazy. At worst, a sign of politically or ideologically driven commitment to degrowth regardless of economic and technological options to obviate any need for same.
Of course not every myth follows the pattern of encapsulating wisdom, let alone nontrivial wisdom. But keep in mind that what counts as “wisdom,” and what it takes to unpack the wisdom in a myth, can be very tightly bound to a dense cultural matrix of interwoven ideas/symbols/metaphors inscrutable to outsiders, and often very much open to debate even to learned members of a culture. It’s (usually) a mistake to think a myth is about a single piece of wisdom as opposed to being something you can point to as an example of any of various pieces of wisdom.
There’s a comment thread below about the Oresteia. Aside from whatever we’re supposed to think about Agamemnon, he lets himself be persuaded into symbolically claiming higher status than the gods (walking upon the purple cloths) and then gets murdered. His son avenges him by killing his mother (matricide) and is forced to flee the Furies’ punishment, because matricide is wrong. Athena then holds the first trial by jury, establishing the court of the Areopagus at Athens, specifically to resolve the dispute. In this sense the moral is, “Here’s how we conduct trials, and why; here’s how the judgment of the gods supersedes and is better than the primal wrath of the Furies; here’s how orderly, modern civilization is superior to the kleos and virtues of the heroes of old.”
That’s actually a common motif. Read Njal’s Saga, and a lot of it is about the relative virtues of revenge and peacemaking. It’s told in the context of a society where individual vengeance and familial feuds are common and considered virtuous, while peacemaking is often belittled or demeaned, but also a society in the midst of converting to Christianity and grappling with the accompanying changes in belief about what is Right and Good.
I think you’ve got a lot of the right ideas, but may find in practice that the specifics are much more culture-bound and hard-to-shift than this implies. Debt in social contexts has a lot of symbolism and meaning associated with it. Some random examples:
In high school, my friends lent each other money or covered for each other all the time, and there were one or two people who ‘owed’ the rest hundreds of dollars by the end of senior year, and no one cared or kept track.
In college my friends did the same, but less unidirectionally. One time we all went out to dinner, and as we paid, we ended up passing money around in a circle until most of the debts all canceled out.
I’ve read stories about communities where people would go out of their way to lend each other things, and keep track, specifically in order to keep everyone in debt, and therefore symbolically tied, to everyone else.
I read a story once about someone whose dad demanded repayment for the cost of raising him, and when he paid it back he cut off all contact, since settling the financial debt in essence severed a bond.
My grandpa used to get genuinely angry if any of his kids or grandkids tried to pay for anything for him, because (in his mind) that’s not the direction things were supposed to flow. He would literally sneak off at restaurants to talk to the staff and make sure the bill never made it to the table.
I don’t really have a point with all that except, don’t expect to find broad agreement about how these kinds of considerations should work.
No, but filter strength is likely to be exponentially distributed. The universe is big, intelligent life seems to be rare from where we stand, and there are many OOMs of filter to explain. But the claim/premise is that, statistically, it’s likely that one (or a small number) of factors explains more OOMs of filter than any given other, and we don’t know which one(s). So you can have a lot of things that each kill off 90% of paths to potential spacefaring civilizations, but not as many that kill off all but a billionth of them, or we probably wouldn’t be here to ask the question.
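A toy numerical illustration of that claim. The exponential distribution is from the premise above, but the number of candidate factors and the mean strength per factor are assumptions of mine, just to show how even a modest tail makes the single largest factor account for far more of the total filter than an even split would:

```python
import numpy as np

# Toy model: each candidate filter's strength, measured in orders of magnitude
# (OOMs) of reduction, is drawn from an exponential distribution. How much of
# the total filter does the single largest factor typically account for?

rng = np.random.default_rng(0)
n_factors = 12          # candidate filter steps (assumed)
mean_ooms = 2.0         # average OOMs of reduction per factor (assumed)
trials = 10_000

strengths = rng.exponential(mean_ooms, size=(trials, n_factors))
share_of_largest = strengths.max(axis=1) / strengths.sum(axis=1)
print(f"median share of total filter from the single largest factor: "
      f"{np.median(share_of_largest):.0%} (an equal split would be {1/n_factors:.0%})")
```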