To be clear, self-driving trucks are right now being tested in Texas by these folks. They claim to have paying customers already.
But that’s a long way from taking all the trucker jobs away.
Cars peaked in the 2010s: after back-up cameras, but before touchscreens took over everything. Back-up cameras are a huge improvement for both safety and convenience. Putting the climate control behind a touchscreen is utter madness.
The colored text here becomes invisible in dark mode.
Ultraprocessed food is a product of an optimizer that cares about a proper subset of human values, and Goodharts away the others.
When you’re cooking food for your family from scratch, you get to decide what to optimize for. You can consider variables like satiety, nutrition, taste, and cost; and decide for yourself what function of these you want to maximize.
It would be a surprising coincidence if the function that home cooks maximize turned out to be identical to the function that the processed food industry maximizes. The processed food industry doesn’t have the same incentive structure as home cooks. It doesn’t live in your house and take care of your kids; it lives in a competitive free market. What it can optimize for is constrained by market economics: it is capable of caring about human values like satiety, nutrition, and taste only insofar as these matter to selling more food.
For instance, home cooks hardly ever try to reduce satiety, but industry often does. “Once you pop, you can’t stop” and “less filling, tastes great” are ad slogans for snack and beverage products; they represent something that the market values: you will eat and drink (and buy) more of this, specifically because it is less satiating.
But the correct satiety setting for maximizing sales is not the correct satiety setting for maximizing health.
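To make the divergence concrete, here is a deliberately toy sketch (the functional forms and numbers are invented for illustration; they don’t come from any real nutrition or sales data). Two optimizers turning the same “how filling is this” knob, but scoring it with different objectives, settle on different settings:

```python
# Toy illustration only: made-up objective functions, not real data.
# "satiety" is a knob from 0.0 (never filling) to 1.0 (very filling).

def health_score(satiety: float) -> float:
    # A home cook's proxy: more filling food means less overeating,
    # so this objective keeps rising with satiety over the whole range.
    return satiety * (2.0 - satiety)

def sales_score(satiety: float) -> float:
    # A seller's proxy: food that is too filling caps how much gets bought,
    # so this objective peaks at a much lower satiety setting.
    return satiety * (0.6 - satiety) + 0.25

def argmax(f, points):
    return max(points, key=f)

knob = [i / 100 for i in range(101)]
print("satiety maximizing health:", argmax(health_score, knob))  # 1.0
print("satiety maximizing sales: ", argmax(sales_score, knob))   # 0.3
```

Same knob, different objective functions, different optima; that gap is the whole argument above.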
Nah, anti-immigrant politics isn’t about wage economics any more than anti-AI politics is about datacenters using up water.
10+ years ago, I expected that self-driving trucks would be common on US highways by 2025, and self-driving would be having a large effect on the employment of long-haul truckers.
In reality, self-driving trucks are still in testing on a limited set of highways and driving conditions. The industry still wants to hire more human long-haul truckers, and is officially expected to keep doing so for some time.
I expected that long-distance trucking would have overtaken passenger cars as the “face” of self-driving vehicles; the thing that people argue about when they argue whether self-driving vehicles are safe enough, good or bad for society, etc. This has not happened. When people argue about self-driving vehicles, they argue about whether they want Waymo cars in their city.
I expected that the trucking industry would shed a lot of workers, replacing them with self-driving trucks that don’t need sleep, breaks, or drug testing. I expected that this would be a vivid early example of mass job loss to AI; and in turn that this would motivate more political interest in UBI. This, too, has not happened.
(I certainly did not expect that the trucking industry in 2025 would be much more disrupted by anti-immigrant politics than by self-driving technology.)
My guess is that they do so in imitation of humans who do the same thing when asked the sorts of questions that people ask LLMs. It’s not an LLM thing; it’s a thing one does to make distinctions clear, when the other person might otherwise conflate two distinct entities, clusters, or topics. It just so happens that people ask LLMs a lot of that sort of question, and thus elicit a lot of that particular response.
(I also use em dashes, yes.)
The equivalent in animals is dopamine: there is no amount of dopamine beyond which an animal would prefer not to get more dopamine.
Dopamine is the brain’s universal signal that something an animal did was good for it.
My understanding is that dopamine signaling is involved more in anticipating an outcome (engaging the process that coordinates action towards or away from that outcome) than in experiencing the reward of having reached a positive outcome. Notably, dopamine is involved in anticipation of negative outcomes as well as positive ones!
Wikipedia (emphasis added)—
In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience; in other words, dopamine signals the perceived motivational prominence (i.e., the desirability or aversiveness) of an outcome, which in turn propels the organism’s behavior toward or away from achieving that outcome.
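For what it’s worth, the most common formalization of this anticipation-centered view in the computational literature is the reward-prediction-error hypothesis (the notation below is the standard reinforcement-learning one, not anything from the quoted article): phasic dopamine activity is modeled as tracking the temporal-difference error

\[
\delta_t = r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t)
\]

where \(V\) is the current estimate of how good a situation is, \(r_{t+1}\) is the reward actually received, and \(\gamma\) discounts the future. The signal tracks changes in expectations about outcomes rather than the enjoyment of the outcomes themselves, which is at least compatible with the motivational-salience reading above (the two framings overlap but aren’t identical).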
Transitive trust: If you trust Alice, and Alice trusts Bob (of Bob’s Discount Bed Nets & Vaccine Research Shop), then you might trust Bob somewhat on the strength of Alice’s trust. (There’s a toy sketch of this below, after the list.)
Transitive authentication (weaker): If you believe Alice is a real person and not a spambot (because you’ve met her), and Alice believes Bob is a real person and not a spambot, then you might trust that Bob is not a spambot too.
Track record: If the Carter Center has been effective against guinea worm, then you have reason to believe they’ll be effective against other horrible parasitic diseases in the future. (This establishes that they’re competent, not that they’re the optimal cause on margin.)
Reasoned community discourse: If you observe open discussion of causes in which people actively disagree with one another in good ways and change one another’s minds in ways consistent with intelligent evaluation, including calling out mistakes or bad reasoning, then you might credit the evidence presented in that discourse. (The disagreement and calling-out of mistakes are necessary to make sure you’re not looking at a mutual admiration society.)
Bay Area House Party: Go hang out with a bunch of EAs and troll them into an argument over whose cause is better. (If they all agree, find a different party.)
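If it helps, here is a minimal sketch of how transitive trust is often modeled in web-of-trust systems, assuming the usual convention that trust along a chain multiplies and therefore decays with each hop (the names and numbers are purely illustrative):

```python
# Toy web-of-trust sketch. Edge weights are illustrative trust levels in [0, 1];
# trust along a chain multiplies, so it decays with every extra hop.

trust_edges = {
    ("you", "alice"): 0.9,   # you know Alice well
    ("alice", "bob"): 0.8,   # Alice vouches for Bob's shop
    ("bob", "carol"): 0.8,   # Bob vouches for Carol
}

def chain_trust(chain):
    """Multiply trust along a chain of endorsements."""
    total = 1.0
    for a, b in zip(chain, chain[1:]):
        total *= trust_edges.get((a, b), 0.0)
    return total

print(round(chain_trust(["you", "alice", "bob"]), 2))           # 0.72
print(round(chain_trust(["you", "alice", "bob", "carol"]), 2))  # 0.58
```

Transitive authentication works the same way, just with a cruder quantity (“probably a real person” rather than “probably giving good charity advice”), which is part of why it’s the weaker of the two.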
https://www.nature.com/articles/s41598-025-09241-2
Corneal safety assessment of germicidal far UV-C radiation
Abstract
Far UV-C radiation (200–240 nm) is a promising alternative to conventional UV-C for disinfection in occupied spaces, offering strong germicidal efficacy with reduced skin risk. However, its ocular safety remains unclear, as most studies relied only on non-human corneal models with physiological differences. This study investigated UV-induced DNA damage in the epithelium, stroma, and endothelium of ex vivo human corneas and porcine corneas, and reconstructed human cornea epithelium (RHCE) using immunohistochemistry. Samples were exposed to 222 nm, 233 nm, 254 nm, and broadband UV-B (280–400 nm) radiation in the presence of real human tears. Compared to human corneas (26 μm mean epithelium thickness), porcine corneas (110 μm) and RHCE (79 μm), showed reduced UV penetration. In human corneas with a thin epithelium, far UV-C exposure led to epithelial and anterior stromal damage, underscoring the epithelium’s protective function. Optical properties using porcine corneas confirmed the immunohistological findings, validating wavelength-dependent penetration depths. Simulations suggest that in intact human corneas, damage-relevant intensity of 222 nm light reaches the middle of the epithelium, while for 233 nm, it reaches the basal layer. These findings support the relative safety of far UV-C, especially 222 nm, for intact corneas. However, potential DNA damage accumulation after repeated exposures underscores the need for further research on long-term ocular effects.
Russell and Norvig discuss “intelligent agents” in AIMA (2003) and they don’t mean web scrapers or database scripts, but they also don’t mean that the thing they’re discussing is conscious or super-rational or anything fancy like that. A self-driving car is an “agent” in their sense.
I suspect the use of “agentic” to mean something like “highly instrumentally rational” — as in “I want to become more agentic” — is an LW idiosyncrasy.
In human psychology, Milgram used “agentic” to mean “obedient”, in contrast to “autonomous”!
As an aside, the origins of “LGBT” and “racism” are not quite what you say. A historical dictionary may help. “LGBT” was itself an expansion of earlier terms: LGB (and GLB) were used in the 1990s, and LG is found in the 1970s, for instance in the name of the ILGA, which was originally the International Lesbian & Gay Association and, more recently, the International Lesbian, Gay, Bisexual, Trans and Intersex Association, while retaining the shorter initialism.
No, evil does not become good just because you’re bored.
One specific practice that would prevent this:
Tutorials or other documentation that need example IPv4 addresses should choose them from the RFC 5737 blocks reserved for this purpose. These blocks are planned to never be assigned for actual usage (including internal usage) and are intended to be filtered everywhere at firewalls & routers.
192.0.2.0 – 192.0.2.255 (192.0.2.0/24, TEST-NET-1)
198.51.100.0 – 198.51.100.255 (198.51.100.0/24, TEST-NET-2)
203.0.113.0 – 203.0.113.255 (203.0.113.0/24, TEST-NET-3)
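If you want to check a draft mechanically, the Python standard library’s ipaddress module can verify that every example address falls inside one of these blocks (the helper name below is just an illustrative sketch, not an established tool):

```python
import ipaddress

# RFC 5737 documentation-only blocks (TEST-NET-1/2/3).
DOCUMENTATION_BLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3
]

def is_documentation_address(addr: str) -> bool:
    """Return True if addr is safe to use in examples per RFC 5737."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in DOCUMENTATION_BLOCKS)

# 203.0.113.7 is fine for a tutorial; 192.168.1.1 is not, because it is a
# private-network address that readers may actually be using.
assert is_documentation_address("203.0.113.7")
assert not is_documentation_address("192.168.1.1")
```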
We know that some lawyers are very willing to use LLMs to accelerate their work, because there have been lawyers caught submitting briefs containing confabulated case citations. Probably many other lawyers are using LLMs but are more diligent about checking their output — and thus their LLM use goes undetected.
I wonder if lawyering will have the same pipeline problem as software engineering: The “grunt work” that has previously been assigned to trainees and junior professionals will be automated early on, thus making it less valuable to hire juniors, and thus making it harder for juniors to gain job experience.
(Though the juniors can be given the task of manually checking all the citations …)
Sure, but the boss can go wrong by creating an incentive structure in which questioning a message “from the boss” is unsafe.
Successful anti-phishing campaigns instill not only doubt (“Is this actually from the boss?”) but also permission to act on that doubt (“I’ve got the boss’s cell phone number already; when I’m not sure if the message is from the boss, I’m supposed to call the boss and check, with no chance of bad consequences for pestering her.”)
It seems worth pointing out that humans may be eval-unaware but we don’t want to be. The simulation hypothesis and the existence of an afterlife are things that humans do care about. If we could find out for sure whether we’re in a simulation, most people would want to. We do care about whether the world we live in is just a morality-test to gain access to the Good Place.
Humans aren’t eval-invariant either. Humans who believe in a morality-segregated afterlife don’t tend to be more moral than humans who believe death is the end; but they do differ in some observable behaviors. (A cynical atheist might say they are more sycophantic.)
Another contributing factor might be a person’s level of anxiety about the DVLA, or government, or email, or “the system”. When people are anxious about what “the system” might do to them, and prepared for it to make novel demands upon them, that primes them to be scammable.
A different example: If you want to phish an office-worker, one way to do it is to pretend to be their boss and make sudden urgent demands of them. If the office-worker fears that they will be fired if they don’t comply with novel demands from their boss, then they are primed to be scammable. Workers who feel unsafe questioning “their boss’s” orders will be more scammable than workers who feel safe calling bullshit on their actual boss once in a while.
Currently, AGI is mostly being developed by human engineers and scientists within human social systems. [...] There are far fewer literature professors, historians, anthropologists, creatives, social workers, landscape design architects, restaurant workers, farmers, etc., who are intimately involved in creating AGI. This isn’t surprising or illogical, but if AI is likely to be useful to “everyone” in some way (à la radio, computers), then “everyone” probably needs to be involved.
This concern seems somewhat misdirected.
There weren’t a lot of landscape design architects or farmers involved in the development of radio or computers. It was done by engineers, product managers, technology hobbyists, research scientists, logicians, etc.; along with economic demand from commerce, military, and other users capable of specifying their needs and paying the engineers etc. to do it.
Were landscape architects excluded from developing radio? Did anyone prevent farmers from developing computers? No, they were just busy doing landscape design and farming. Eventually someone built computer systems for architects and farmers to use to get more architecting and farming done.
And then the product managers and sales people made sure that they charged the architects and farmers a butt-ton of money. Downstream of that is why both the farmers and the open-source folks have a problem with John Deere’s licensing and enforcement practices; and the architects ain’t particularly thrilled by Autodesk’s behavior either.
You can’t align AGI with the CEV of engineers to the exclusion of other humans, because engineers are not that different from other humans. That’s not the problem. But aligning AGI with “number go up” to the exclusion of other human values, that’s a problem. Even people who like capitalism don’t tend to believe that capitalism is aligned with all human values. That’s something to worry about.
Small injustice dresses up as vice; large injustice dresses up as virtue.
This is dated. Vice signaling has become a central element of the public image of many perpetrators of large injustice.
The problem of knowledge is that there are many more books on birds written by ornithologists than books on birds written by birds and books on ornithologists written by birds.
Almost zero species do any writing (or, indeed, knowing) at all.
They’re operating on public roads within Texas; e.g. according to this press release.