I think it’d be good to flag April Fools posts when it’s not April 1 anymore, no?
Not that I don’t appreciate the intellectual challenge of figuring out that it’s a joke, I’m just concerned about non-LWers misinterpreting it.
There is much bikeshedding about eyestrain. I’ve seen convincing arguments, especially from older hackers, that a white background actually causes less eye strain. I’ve forgotten what the arguments were (I’ll write them down next time), but I don’t think it’s as simple as the amount of light hitting the eye. For now, I’d advise just trusting personal experience.
And maybe experiment with increasing ambient light rather than reducing light from the screen.
One problem with the Kindle Scribe is that I couldn’t switch from the note-taking application back to the book I was reading very quickly. Pressing all the menu buttons took about 5 to 10 seconds in total.
Ah, yes! With the reMarkable (another e-reader), I have a trick: I installed an app switcher so that a single gesture switches between the writing app and the reading app.
I quite appreciated having a single slate to read and write on in environments like the bus and the beach. However, the software was somewhat buggy… and then I lost my stylus, and then the replacement stylus. So now I just use a paper notepad, which I find works nicely.
I have a question: would a paper notepad have worked for you instead of a second device? What does the device do better?
On the analogy with fasting,
Even if sleep works the way you suppose, this analogy looks like apples and oranges, so I don’t like it.
With fasting, you can infer that it’s harmless just by knowing that (1) the average lean human has fat reserves to last three months, (2) total fasters don’t suffer some calamity like losing lots of muscle protein (if they did, there’d be unambiguous, well-known results), and (3) in the EEA it was probably common to have periods of scarcity in which you’d go several days without finding food. In other words, fasting was about as unusual as, you know, cloudy weather. These observations are already strong enough evidence that I consider this topic “done”: I’d be surprised if an RCT showed fasting to be harmful, and I’d need a deep meditation on where I went so wrong.
With sleep, it’s not so clear, because… unsourced claim here, but The Primal Blueprint, among other popular books, has claimed that an average “working day” in the EEA was less than ~2 hours, or at any rate shorter than a modern working day. That’s a lot of free time for sleep!
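As a back-of-the-envelope check on point (1): the figures below are my own illustrative assumptions, not measurements from any cited source.

```python
# Rough sketch: how long could body fat alone cover resting energy needs?
# All numbers are assumptions for illustration.
fat_kg = 20                  # assumed adipose tissue available to burn
kcal_per_kg_fat = 7700       # commonly cited energy density of adipose tissue
resting_kcal_per_day = 1700  # assumed resting daily expenditure

days = fat_kg * kcal_per_kg_fat / resting_kcal_per_day
print(f"{days:.0f} days")    # prints "91 days", i.e. on the order of three months
```

Swap in your own numbers; the point is only that the order of magnitude comes out at months, not days.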
That ~2 hour figure can certainly be unpacked: does idle foraging while on a walk with your friends count as “work”? But neither sleep nor work-time seem like resources to be conserved in the same unambiguous way as calories. Sleep or no, bodies need downtime after extended exertion, and they may as well take a nap then. Wakefulness or no, there may be nothing useful to do at certain hours of the day, so we may as well sleep extra. So the amount we slept may have been quite open to modulation by external influences.
This can still back up the idea that you can subtract a few hours off your sleep need with the right stimulus; I just think that an analogy with fasting is a type error. In particular, I reject the sub-analogy that feeling sleepy signals something good.
Another problem with the analogy: people who fast regularly will tell you that they usually don’t feel hungry. So if you like the analogy, you also shouldn’t usually feel sleepy.
Then, please edit! :-) People come back to LW comments years and even decades after the fact.
I think the post disagrees with you:
I expect having a handle with which to say “no I don’t have a concise argument about why this work is wrong, and that’s a fact about the work” to be very useful.
That a work is Epistemically Legible doesn’t mean you’ll comprehend it: you may be lacking necessary background context, for example. See the section Legibility vs Inferential Distance.
In this case, an E-Legible work will still bless you with the awareness that you were missing background context and therefore didn’t understand what was said, as opposed to giving you a fake feeling of having understood. That’s more a property of the work than of the reader, right? You can’t sensibly discount the work’s contribution to such a result.
For all its Linux-friendliness, it’s easy to miss that it’s closed source: https://github.com/obsidianmd only lists nonessential components.
I’m no historian, but I cannot fit your exiling/killing theory to any recent society I know of.
I know the most about Sweden, so I’ll discuss that society. Thinking about Sweden made several things obvious:
First, an alternative mechanism with similar effect as exiling/killing: simply making the next generation better, and watching the stats improve over time.
It’s not just a question of good norms or correct education, as if these could develop in any direction independently of the government and the system in general. Sweden underwent a transformation over many decades of social democracy (1930–1980), and it seems widely accepted now that crime rates went way down because society provided for every last member. Crime is habit-forming, and if no one ever needs to get into the habit, you get your high-trust society. In fact, I’ll add the hypothesis that you don’t even need high education or any attempt to directly influence culture.
It’s true that motivated cognition and such issues are at work; they always are! But this is no ding on demanding epistemic legibility.
Even if the author was never interested in transmitting truth (like the CDC in your example), you now know how to detect a message that’s hard to critique or spot-check.
When you study practical rhetoric, you learn to hold speeches without any written memory-aid. Instead, you use something like the method of loci to remember a sequence of concepts that you want to lay out to the audience, but you do not memorize any exact phrasings.
The first time you pull it off is almost magical, because the benefits are immense and obvious. You have full freedom to walk around, stand in front of the lectern or wherever you like, look everyone in the eyes to ascertain whether they’re following along, and change the speech on the fly.
Oddly, it’s a lot less stressful this way.
You remember everything you want to say, just not how you’re going to say it. You trust yourself to find suitable words when you get there. So have you “memorized the speech” or not? I think yes, in every way that matters.
I’d like to tie this into illiteracy. The privileged class in Ancient Rome was literate, of course, but several Roman teachers said it was better to compose a speech without writing any part of it.
That is, if you write a speech and then try to memorize it, it will tend to be in a shape that’s more difficult to memorize!
It’s better to instead generate the sequence of concepts in your head, like an illiterate person! The result tends to be more amenable to memorization.
(The Roman elites of course still wrote during some parts of the process, notably the “inventio”, which is not composing the final speech but merely writing lots of lists/mindmaps to explore the subject.)
I’m like Alicorn, with the addition that I love disruption at random periods, because it lets me fall asleep again: pure pleasure.
On the issue of flying insects, the people who do “cowboy camping” (sleeping without a tent) have relevant experience. They recommend finding high ground far away from any lake, because still bodies of water attract bugs.
I’m a pretty slow reader, and I really get frustrated and distracted by incorrectly written text, so I see the subsequent editing of the text as something really threatening and time-consuming for me.
I’ve become a fast reader in recent years, but like you, I also get disturbed by incorrectly written text.
To me, it sounds like you’ll get used to these issues in time. You know the text is (1) your own words, (2) dictated by an imperfect program, and (3) mostly meant to be deleted. Point 1 would help me read faster, and points 2 and 3 would help me tolerate the “writing flaws”.
Reading fast is fundamentally about skipping, and being okay with skipping. I think that should be easy if you remember saying the sentence that the words on screen refer to: if you remember the sentence, you’re reminded of the general concept you were getting at. Your job, after all, is only to figure out whether a given sentence or section is worth keeping, and you probably only need to read the first few words to know that.
You could also do a second dictation, to summarize what you’re reading. That one’ll be much shorter.
Rereading your comment, I think you’re saying that legibility will arise well enough by itself so long as someone is on Simulacrum Level 1, caring only about the truth; and that if their writing is not legible, they probably have an agenda, and you’d better focus on finding out what that is, or just ignore what they said.
But
This feels unactionable; it’s just a rephrasing of the old critical-reading advice to “find out the writer’s agenda and biases so you know where they’re coming from”. Which is so vague! Even with that info, how do I debias just the right amount? How do I avoid overcorrecting and falling prey to my own confirmation bias?
My experience of writing legibly actually flagged areas in my belief system I didn’t realize were so weak (a huge boon for me), and in retrospect, if I’d published illegible writings about those topics, I’d now want to take those posts down, as it’s both embarrassing to me and a disservice to readers. This is despite being on Simulacrum 1 (or so I think I was).
If they will come into existence later, they have moral weight now. If I may butcher the concept of time, they already exist in some sense, being part of the weave of spacetime. But if they will never exist, it is an error to leap to their defense—there are no rights being denied. Does that make more sense?
Any decision involves alternative futures involving billions of people who haven’t been born yet. We have to consider their welfare.
This logic holds only if it’s an unassailable given that they will be born. If you remove that presupposition and make their birth optional, then these people can be counted as imaginary, as jbash says. They become a real part of the future, and thus of reality, only once we decide they shall be. And we might not: maybe we opt for the alternative of just allowing currently living human beings to live forever, and decline to make more.
PS: Anyone know a technical term for the cognitive heuristic that results in treating hypothetical entities that don’t exist yet as real things with moral weight, just because we use the same neural circuitry to comprehend them that we use to comprehend real entities?
I can’t really see where this line of inquiry is going, so I’m not the right person to comment, but the list seems to be missing at least one thing:
Ask people to do you a favor
Oddly, that makes people like you more, even though nothing obvious is traded in return. I got that from either Dale Carnegie or Robert Cialdini.
I’ve often had the thought that controversial topics may just be unknowable: as soon as a topic becomes controversial, it’s deleted from the public pool of reliable knowledge.
But yes, you could get around it by constructing a clear chain of inferences that’s publicly debuggable. (Ideally a Bayesian network: just input your own priors and see what comes out.)
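As a minimal sketch of that “input your own priors” idea (the function is just Bayes’ rule; the numbers and the two-step evidence chain are made up for illustration):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A tiny "chain of inferences": start from your own prior,
# then run each publicly debuggable piece of evidence through it.
belief = 0.5                       # your prior; swap in your own
belief = update(belief, 0.8, 0.3)  # evidence 1: likely under H, unlikely otherwise
belief = update(belief, 0.9, 0.5)  # evidence 2: weaker, but still favors H
print(round(belief, 3))            # prints 0.828
```

Different readers can plug in different priors while sharing the same likelihoods, which is what makes a disagreement inspectable: you can see exactly which number you diverge on.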
But that invites a new kind of adversary, because a treasure map to the truth also works in reverse: it’s a treasure map to exactly what facts need to be faked, if you want to fool many smart people. I worry we’d end up back on square one.
It happens, but you can’t exchange complex ideas this way. You know how, when someone’s talking, you nod or say “Yeah” to show you get it without interrupting? There are a number of other short phrases you could use, like “I know” or “Impossible” or “Dunno”, and that’s mostly what we deafies in Sweden do, in my experience. It’s rare for hearing people to do this (it breaks a norm, I guess), but in principle you could. With sign you can also say slightly more complicated things without breaking flow, like “That’s a misunderstanding” or “You’re lying”, or sometimes drop in a whole sentence like “Actually, no, she didn’t”… but at that point the conversation is getting heated and starting to break down.
I guess if you wanted to construct a full-time, full-duplex mode of conversation, it would be a bit easier with hands than with voices. Or let one speaker use hands and the other voice, so as to engage different parts of the brain.
Today, dynomight added an interesting nuance in “Observations about writing and commenting on the internet”. It seems that just optimizing for epistemic legibility may cause people not to listen at all:
At least when I make my reasoning transparent and easy to falsify, I feel like I discard other qualities of the writing, because I feel that transparency should be enough. But to get an audience, it’s still important to sell it. Not by reverting to opaque reasoning, of course, but perhaps by demonstrating empathy with the reader, understanding the inferential gap, and… I don’t know what else.
Maybe it’s related to the concept in rhetoric of creating a “good audience”: encouraging them to be “attentus, docilis et benevolus”:
attentus (or “attentive”—because you cannot persuade if your audience is not paying attention)
docilis (or “teachable”—because you cannot persuade unless your audience can learn from you)
benevolus (or “benevolent”—because you cannot persuade unless you make a good impression on your audience).