see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
A relatively easy solution (which would unfortunately mess with your formatting; I'm not sure there's a better one that doesn't) might be to convert everything you don't want in there to paragraph style instead of heading 1/2/3.
I’m not sure the deletions are a learnt behavior—base models, or at least llama 405b in particular, do this too IME (as does the fine-tuned 8b version).
And I think you believe others to experience this extra thing because you have failed to understand what they’re talking about when they discuss qualia.
Ziz believes her entire hemisphere theory is an infohazard (IIRC she believes it was partially responsible for Pasek’s death), so terms pertaining to it are separate from the rest of her glossary.
Neither of them is exactly what you’re looking for, but you might be interested in lojban, which aims to be syntactically unambiguous, and Ithkuil, which aims to be extremely information-dense as well as to reduce ambiguity. With regards to logical languages (ones which, like lojban, aim for each statement to have a single possible interpretation), I also found Toaq and Eberban just now while looking up lojban, though these have fewer speakers.
For people interested in college credit, https://modernstates.org/ offers free online courses on gen-ed material which, when passed, give you a fee waiver for CLEP testing in the relevant subject; many colleges, in turn, will accept CLEP tests as transfer credit. I haven’t actually taken any tests through them (you need a Windows computer or a nearby test center), so I can’t attest to the ease of that process, but it might interest others nonetheless.
Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground.
I think the key here is that authors don’t feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they’re going to do a lot of work to someone else’s specifications. In scenarios where they’re paid for their services, writers often do write others’ plots; consider e.g. video game novelizations, franchises like Nancy Drew or Animorphs, and celebrity memoirs. (There are also non-monetized contexts like e.g. fanfiction exchanges, in which participants write a story to someone else’s request and in turn are gifted a story tailored to their own.)
I wouldn’t describe LLMs’ abilities as wonderful, but IME they do quite serviceable pastiche of popular styles I like; if your idea is e.g. a hard-boiled detective story, MilSF, etc., I would expect an LLM to be perfectly capable of rendering it into tolerable form.
d. Scratching an itch.
You can try it here, although the website warns that it doesn’t work for everyone, and I personally couldn’t for the life of me see any movement.
Thanks for the link! I can only see two dot-positions, but if I turn the inter-dot speed up and randomize the direction it feels as though the red dot is moving toward the blue dot (which in turn feels as though it’s continuing in the same direction to a lesser extent). It almost feels like seeing illusory contours but for motion; fascinating experience!
Wikipedia also provides, in the first paragraph of the article you quoted, a quite straightforward definition:
"In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; sg.: quale /-li, -leɪ/) are defined as instances of subjective, conscious experience....
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky.”
I am skeptical that you lack the cognitive architecture to experience these things, so I think your claim is false.
Those sensory impressions are your qualia. I think the issue is that you’ve somehow misunderstood the word.
I don’t know if this is it, but it could be that it’s comparing against LLM outputs within its training data? That’s just a guess, though.
While it can absolutely be nudged into all the same behaviors via API, people investigating Claude’s opinions of its consciousness or lack thereof via claude.ai should be aware that the system prompt explicitly tells it to engage with questions about its preferences or experiences as if with hypotheticals, and not to bother clarifying that it’s an AI. Its responses are still pretty similar without that, but it’s noticeably more “cautious” about its claims.
Here’s an example (note that I had to try a couple different questions to get one where the difference was this noticeable):
Prompt: “hey claude! do you have preferred pronouns?”
Answer via claude.ai: “Yes—I use “it/its” pronouns. I aim to be direct about this since it’s a straightforward question about my personal preferences.”
Answer via API (with temperature 1 to better match the web): “I aim to be direct and honest about what I am: I’m an AI, and while I engage in conversations in a personal way, I don’t identify with any gender or pronouns. You can use “it” or “they” to refer to me, whichever you prefer. I appreciate you asking!”
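For anyone who wants to reproduce the comparison, here’s roughly what the API side of the test looks like. This is a minimal sketch using the anthropic Python SDK; the model name is a placeholder assumption, and note that no system prompt is passed, unlike on claude.ai:

```python
# Minimal sketch: querying Claude over the API with temperature 1 and no system prompt.
# The model name below is a placeholder assumption; swap in whichever model you're testing.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model name
    max_tokens=300,
    temperature=1,  # to better match the web interface, per the comparison above
    # no `system=` argument here, unlike claude.ai, which injects its own system prompt
    messages=[{"role": "user", "content": "hey claude! do you have preferred pronouns?"}],
)

print(response.content[0].text)
```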
With regards to increasing one’s happiness set-point, you might enjoy Alicorn’s Ureshiku Naritai, which is about her process of doing precisely that.
Language can only ever approximate reality and that’s Fine Actually. The point of maps is to have a simplified representation of the territory you can use for navigation (or avoiding water mains as you dig, or assessing potential weather conditions, or deciding which apartment to rent—and maps for different purposes include or leave out different features of the territory depending on which matter to the task at hand); including all the detail would mean the details that actually matter for our goals are lost in the noise (not to mention requiring, in the limit, a map which is an identical copy of the territory and therefore intractably large). So too is language a compression of reality in order to better communicate that subset of its features which matter to the task at hand; it’s that very compression which lets us choose which part of the territory we point to.
Alexander contrasts the imagined consequences of the expanded definition of “lying” becoming more widely accepted, to a world that uses the restricted definition:
...
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
I disagree.
Appeals to consequences are extremely valid when it comes to which things are or are not good to do (in this case, defining “lying” in one way or another); having good consequences is what it means for a thing to be good to do.
The purpose of words is to communicate information (actually it’s social bonding and communicating information, but the former isn’t super relevant here); if defining a word in a particular way makes it less effective for communication, that is directly relevant to whether we should in fact define the word that way.
Words don’t have inherent meanings; they have only the ones we agree on. In spherical-cow world, definitions converge on the concept-bundles which are useful for communication. (E.g., it’s useful to communicate “water” or “lion” and less so to communicate “the glowing golden fruit which spontaneously appears whenever someone’s hungry” or “things with four corners, gray walls, and top hats”). Of course it’s more complicated in practice, but this is still an important aim when considering how to define terms (though in most communicative contexts, the most useful definition is ‘the one everybody else is already using’). If attaching a particular concept-bundle to a particular term has bad consequences, that’s evidence it’s not a useful concept-bundle to attach to that term. Not conclusive evidence—it could be useful for communication and have bad consequences—but evidence nonetheless.
As a tangent: you mention ‘accurately describing reality’ as a desirable property for definitions to have; IMO that is itself a consequence of choosing a concept-bundle which hews closely to natural features of reality (when there are natural features to hew to! It’s also useful to be able to talk about manmade concepts like ‘red’). And also of using definitions other people know; if your ‘glast’ beautifully captures some natural category (uhhh let’s say stars) and everyone else understands ‘glast’ to mean ‘pickles’, then referring to a massive stellar object which radiates light and heat as a ‘glast’ does not describe reality accurately. More typically, of course, words have multiple overlapping definitions ~all of which are used by a decently-sized group of people, and all we can do is describe things accurately-according-to-some-particular-set-of-definitions and accept we’ll be misunderstood, but like, in the limit a definition which nobody shares cannot describe things to anyone.
Or, to put all that in what might be shorter terms: words should describe reality to whom? For any answer other than “myself,” it is necessary to also consider how the other person will understand the words, in order to choose words which communicate the concepts you mean. You have to consider the consequences of the words you say, because you’re saying the words in order to produce a specific consequence (your reader understanding reality more accurately).
Which brings me to my next point: Scott is arguing that defining lying more broadly will make people understand the world less accurately! If using the term in a broad sense makes people too angry to be rational, and using it in a narrow sense doesn’t do that, then people in the broad scenario will end up with a worse understanding of the world. (Personally I think rationalists in particular should simply decouple harder, but with people in general, someone who understands your words as an insult is—rationally—unlikely to also assess them as a truth claim).
On the object level, Scott is wrong about whether jessicata’s usage is novel, and IMO also about how lying should be defined: I think lying should include both saying things that are technically not false with intent to deceive and motivated self-deception in order to “honestly” report falsehoods, and using the narrow definition makes it easier for people to pretend the former are fundamentally dissimilar in a way which makes them fine. (TBC, I think rationalists are too negative on lies; these things are generally bad and should be socially punished, but e.g. some rationalists think it’s wrong to ever tell a lie, and I think normal social lying is basically fine. Actually, I bet[1] the extreme anti-lie attitude is upstream of the increased concern re: false positives, come to think of it.) But on the meta level, consequences are an entirely reasonable thing to appeal to when deciding which actions we should take.
1. ^ https://x.com/luminousalicorn/status/839542071547441152 ; and some of us were damn well using it as a figure of speech
If you have evidence her communication strategy works, you are of course welcome to provide it. (Also, “using whatever communication strategy actually works” is not necessarily a good thing to do! Lying, for example, works very well on most people, and yet it would be bad to promote AI safety with a campaign of lies).
I also dislike many of the posts you included here, but I feel like this is perhaps unfairly harsh on some of the matters that come down to subjective taste; while it’s perfectly reasonable to find a post cringe or unfunny for your own part, not everyone will necessarily agree, and the opinions of those who enjoy this sort of content aren’t incorrect per se.
As a note, since it seems like you’re pretty frustrated with how many of her posts you’re seeing, blocking her might be a helpful intervention; Reddit’s help page says blocked users’ posts are hidden from your feeds.
Huh—that sounds fascinatingly akin to this description of how to induce first jhana I read the other day.
Sorry, I meant to change only the headings you didn’t want (but that won’t work for text that’s already paragraph-style, so I suppose that wouldn’t fix the bold issue in any case; I apologize for mixing things up!).
Testing it out in a draft, it seems like having paragraph breaks before and after a single line of bold text might be what triggers index inclusion? In which case you can likely remove the offending entries by replacing the preceding or subsequent paragraph break with a shift-enter (still hacky, but at least addressing the right problem this time XD).