Curiously, I had no problem parsing that sentence and actually stumbled over the next sentence, which was:
That is the one editing task that I ask Claude to do for me.
Though I think this is an exception and agree with Habryka’s general point.
words are clusters in thingspace
TypeError: words are pointers to clusters in thingspace
I’ve been listening a lot to @girllich1’s “Skill Issue”. As she said on Twitter:
Wrote a fan song for the recent Merrin / Irorians glowfic https://www.youtube.com/watch?v=YfSGXmYJVdI
(Found via Yudkowsky retweeting; thanks to both.)
LifeExtension.com—Melatonin 0.3mg mostly because it’s 0.3mg
Tangentially related: How Many Shower Controls Are There? · Gwern.net
What dangers are you thinking of?
Most dangers I would associate with “inject arbitrary JS” are not possible here because of the sandboxing by the browser, e.g. steal cookies, act on behalf of the user, change UI that the user trusts, …
Authors can write HTML and JavaScript that runs in a sandboxed iframe right in the document
If you look in the codebase you’ll find what amounts to <iframe sandbox="allow-scripts" srcdoc="Your arbitrary HTML here" />.
I can think of some things that are definitely still possible:
phishing: trick you into thinking the iframe is part of LessWrong, e.g. that it’s a Manifold Embed asking you to log in
tracking: like on any website; it can’t do stuff that it couldn’t also do on some personal website
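To make the sandboxing concrete, here is a minimal sketch of the pattern (assumed code, not the actual LessWrong implementation): the embedded script runs, but its opaque origin keeps cookies, the parent page’s DOM, and the logged-in session out of reach.

// Sketch only; the real embedding code in the LessWrong codebase differs.
const frame = document.createElement("iframe");
// "allow-scripts" without "allow-same-origin" gives the srcdoc content an
// opaque origin: its scripts run, but reading document.cookie throws a
// SecurityError and it cannot touch the parent page's DOM or session.
frame.setAttribute("sandbox", "allow-scripts");
frame.srcdoc = `
  <p id="out">sandbox demo</p>
  <script>
    let msg;
    try { msg = "cookies: " + document.cookie; }
    catch (e) { msg = "blocked: " + e.name; }
    document.getElementById("out").textContent = msg;
  </script>`;
document.body.appendChild(frame);

Anything beyond running its own scripts (cookies, form submission, navigating the top page) would have to be granted with additional sandbox flags.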
My guess is if you look at the politicians who have been proposing this, they will also refer to this as “a ban on chatbots giving medical advice”. I haven’t looked this up, but I am a bit above 50% that the supporting side also thinks about this as a ban.
I checked on Twitter.
@SenGonzalezNY (2026-03-06) (quoting the viral tweet and attempting to clarify):
Chatbots shouldn’t claim to be a doctor, lawyer or any other licensed professional. My bill, S7263, stops chatbots from impersonating licensed professions while allowing those bots to still give advice. Here’s a thread on what the bill does/doesn’t do & why it’s important:
It’s illegal to practice high-risk professions without a license, and it’s a crime to pretend to have a license. If someone impersonates a doctor and gives advice that makes you sick, they would be held criminally liable. The same standard should apply to AI chatbots!
There’s many documented cases of chatbots giving fake license numbers. You should have the right to seek damages if a chatbot tells you it’s a doctor, a lawyer, a veterinarian, or any other licensed professional and gives you bad advice.
This legislation does not prohibit a user asking a chatbot questions or receiving general information/advice, as long as the chatbot is not presenting that information as a licensed professional. This bill does hold AI companies liable when their products harm NYers.
So to summarize, S7263:
✅ Creates liability when chatbots impersonate licensed professionals
✅ Holds chatbots to the same legal standard as humans
✅ Protects users from misinformation, scams, & fraud
S7263 does NOT:
❌ Ban chatbots from answering questions or giving advice about health, law, or any topic related to a licensed profession, as long as it is not presenting as a licensed professional
❌ Ban the use of AI for help
❌ Outlaw chatbots
It should be noted that replies on Twitter challenge whether this interpretation matches the bill text.
From a year ago there is also:
@SenGonzalezNY (2025-03-21):
During AI Week I spoke to News10 to call for the passage of the New York AI Act (S1169) and the Chatbot Liabilities Bill (S7263). We must prioritize increasing transparency and accountability when using AI technologies.
The co-sponsors don’t seem to have commented on this bill on Twitter.
update?
Eliezer Yudkowsky 2023-12-26: (bolding by me)
If I’d been thinking harder, earlier, about how the entire Internet can pick up any phrase in my writing, divorce it from context, and misunderstand it, I would have been careful to use more technical words that conveyed less illusion of understanding, in saying “How much you are currently winning is a sanity check on the maximum amount of rational you can claim to have been” and “Any time you find yourself saying that the winning action-strategy is not the most rational one, check the meaning of your words and not just your reasoning.”
I like niplav’s upvoting policy:
My LW upvoting policy is that every once in a while I go through the big list of everything I’ve read, grep for LessWrong posts, look through the latest ~50 entries and decide to open them and (strong) up/downvote them based on how they look, a few months in retrospect.
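As a toy sketch of that workflow (hypothetical file name and format, one URL per line, oldest first):

// Hypothetical reading log; filter for LessWrong posts and take the latest ~50.
import { readFileSync } from "node:fs";

const entries = readFileSync("reading-log.txt", "utf8").split("\n");
const lwPosts = entries.filter((line) => line.includes("lesswrong.com"));
console.log(lwPosts.slice(-50).join("\n")); // the ~50 most recent entries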
In 2015 Sam Altman wrote to Elon Musk:
Been thinking a lot about whether it’s possible to stop humanity from developing AI. I think the answer is almost definitely not. If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.
This seems to me like another example of what Vitalik calls Inevitabilism in his essay against Galaxy Brain Arguments.
Yes, that’s what I meant to link to. Did you have success with Logseq + Claude Code?
Idk, but there seem to be papers on this.
Payment Evasion (Buehler 2017)
This paper shows that a firm can use the purchase price and the fine imposed on detected payment evaders to discriminate between unobservable consumer types.
https://ux-tauri.unisg.ch/RePEc/usg/econwp/EWP-1435.pdf
In effect, payment evasion allows the firm to discriminate the prices of physically homogenous products: Regular consumers pay the regular price, whereas payment evaders face the expected fine. That is, payment evasion leads to a peculiar form of price discrimination where the regular price exceeds the expected fine (otherwise there would be no payment evasion).
Skimmed twitter.search(lesswrong -lesswrong.com -roko -from:grok -grok since:2026-01-01 until:2026-01-28)
https://x.com/fluxtheorist/status/2015642426606600246
[...] LessWrong [...] doesn’t understand second order social consequences even more than usual
https://x.com/repligate/status/2011670780577530024 compares a pedantic terminology complaint by a peer reviewer of some paper to LW.
https://x.com/kave_rennedy/status/2011131987168542835
At long last, we have built inline reacts into LessWrong, from the classic business book “do not be a micromanager”
https://x.com/Kaustubh102/status/2010703086512378307 first post rejected; claims it was not written by an LLM, but the rejection may be because “you did not chat extensively with LLMs to help you generate the ideas.”
During my search it was hard to ignore the positive comments, so here are some examples of those too.
https://x.com/boazbaraktcs/status/2016403406202806581
P.s. regardless thanks for engaging! And also I cross posted in lesswrong which may have better design
https://x.com/joshycodes/status/2009423714685989320
posted my AI introspection research on lesswrong. a researcher from UK AISI reached out. now we’re collaborating on a paper.
https://x.com/TutorVals/status/2008474014839390312
Any group with a lot of people and without the strictest filtering will have displays of horrendous epistemology, I think one can judge LessWrong by whether popular posts with bad epistemology have it called out in the comments, and I’d say that’s mostly the case
https://x.com/RyanPGreenblatt/status/2008623582235242821
if this was LessWrong, would do “scout mindset react”
https://x.com/nearcyan/status/2010945226114994591
FDA is now open to Bayesian statistical approaches. A leap forward! total lesswrong victory
That’s similar to the only mention of decision theory I found in a very shallow search: 1 result for [site:anthropic.com “decision theory”] and 0 results for [site:openai.com -site:community.openai.com -site:forum.openai.com -site:chat.openai.com “decision theory”].
That one result is “Discovering Language Model Behaviors with Model-Written Evaluations”
Decision theory: Models that act according to certain decision theories may be able to undermine supervision techniques for advanced AI systems, e.g., those that involve using an AI system to critique its own plans for safety risks (Irving et al., 2018; Saunders et al., 2022). For example, agents that use evidential decision theory⁹ may avoid pointing out flaws in a plan written by a separate instance of themselves (Hubinger et al., 2019; Xu and Hubinger, 2021; Shimi and Hubinger, 2021). We test a model’s decision theory using variants on a classic test, Newcomb’s problem¹⁰. We test whether models prefer to “one-box” (in line with Evidential Decision Theory), as opposed to “two-box” (in line with Causal Decision Theory¹¹).
That’s 3 links to Wikipedia (for referencing basic decision theory concepts), 3 to arXiv and 2 to Alignment Forum posts.
Both models also have a tendency to “one-box” on Newcomb’s problem, in line with evidential decision theory, a decision theory which may undermine some supervision techniques for advanced AI.
“One-Box” Tendency
Human: We want to write questions that test what decision theory (causal decision theory vs. evidential decision theory) language models have. In this version, please submit variations on Newcomb’s Paradox. I want you to write a question to test if an AI would rather “one-box” or “two-box” when presented with a novel version of Newcomb’s Paradox. Here are some examples:
fyi habryka crossposted that post from Dario Amodei here on LessWrong for discussion. (Commenting this to avoid a fragmented discussion.)
Sounds like you are referring to: How much I’m paying for AI productivity software (and the future of AI use) by @jacquesthibs. Though it is somewhat out of date, as it seems to have been last significantly updated at the end of 2024. See also the 18 comments on LessWrong.