Probably depends on the terrain. Detect someone in an empty desert? In a city? In a forest where animals live?
Viliam
Yes, social incentives are important. But it is also important that people donate to actually effective charities… otherwise they could get the same (maybe even better!) social rewards for donating to a local church.
Given that social rewards are usually only very loosely correlated with how good something is, it is great to have a community that aligns them better. But it is easy to Goodhart these things. (For example, by visiting EA events but actually not donating… maybe with the excuse that “I will donate later… much later...”.)
One of the advantages of home office is that I can do things like this during work. Sadly, in an open-space office this is not an option.
Well, we don’t get to see the parallel timeline, so it is difficult to say precisely what the difference is. Intuitively, I compare the previous state to the current state, but that assumes that in the parallel timeline “nothing happened”. Perhaps other interesting things would have happened there.
The most obvious change in my behavior immediately after reading the Sequences: I stopped debating politics online. (Previously I wasted a lot of time doing that, although now I spend almost as much time on LW and ACX.) Suddenly, debating politics online felt like talking to retards: people were making the same obvious mistakes over and over again with no intention to ever learn. I think this was good for my mental health.
I have also met a few friends in the rationalist community. As a result of our discussions I started to care about my health more, and bought some bitcoins. I think some of these things would not happen in most of the parallel timelines. I probably use AIs a bit more and better than I would without reading LW.
I wish I had some more impressive results, but I am happy even for these improvements. (My excuse is that I have small kids and the community in this part of the world is very small. My benefits seem to come mostly as a result of interacting with other rationalists. I guess it is much easier for me to take ideas seriously when I also receive some social support for that. Reading alone does not have the same effect.)
I know some people who met each other on LW meetups, started doing some crypto business together, now they are rich and… moved away and I lost regular contact with them, sadly. I am not saying here that crypto = rationality. But it was the rationalist community that allowed them to meet each other: smart people who share some perspective and can trust each other’s sanity.
Across the world, I think Scott Alexander has benefited a lot from the community. It would make more sense to ask him what specifically he attributes to it.
I think there are also two ways to break down the sanity project: one, can you make a receptive audience rational, or improve their lives through rationality; and two, can you raise the societal sanity waterline, i.e. make the average person saner?
In my case (not sure how typical), the greatest value seems to come from small local groups. Local, because meeting people in real life seems better than chatting with them online; our monkey brains treat the people we meet in person as “more real” than the ones who only seem to exist on a screen. (That’s probably a good intuition for the era of online bots.)
There is a lot of value in taking five minutes by the clock to actually think about a problem. I think there is even more value in taking fifteen minutes to talk about a problem with your trusted fellow rationalist friends. There is something powerful in having people whose opinion you can trust, who practice some basic epistemic hygiene so that their advice does not contain things like “you have to pray” or “that’s fate” or “hey, try this scam I found online, it will totally work”.
I’m interested in the model where schools are generally a big cause of the problem of societal irrationality, but that doesn’t seem like something people talk about much in this community.
There are many articles about education, many of them critical. (It’s just difficult to find them among all those AI-related posts.) I think the consensus is that schools are mostly a waste of time, a very costly signal of conformity that many employers want.
(I don’t think this is a complete waste. There are people who are unable to keep a job because they are simply unable to wake up and come to work every day consistently. Society benefits from a mechanism that trains them and certifies this. It’s just a huge waste of time for everyone smarter and more disciplined than that. Without the school system, society would probably split into a small group of homeschooled geniuses, a medium-sized group of kids with mixed results approximately as good as they have now, and the largest group of completely unemployable idiots at the bottom. So far, we still need the people at the bottom to be able to get jobs. Also, the idea of even stupider voters in a democracy is scary.)
The perspective of school not merely as a waste of time but actively harmful… well, there is the “teacher’s password” anti-pattern, but I do not remember anything more in this direction.
When I tell people here that my initial point of contact to this world was not Scott Alexander or Gwern, but rather “Aella’s gangbang flowchart” or “Decker’s encounter with the US Secret Service”, it raises eyebrows. I think someone referred to that as “third generation rationalism” or something along those lines, but that may have been derisive.
I guess the first generation found Yudkowsky at Overcoming Bias, the second generation found Scott Alexander at Slate Star Codex, and the third generation found Aella at Substack.
It’s possible that I understand some or all of these terms incorrectly.
You understood all the terms correctly.
He thinks there’s probably some amount of selection in favour of autistic people for Rationalism generally and the Inkhaven Residency specifically.
I guess mild autism (Asperger’s) generally correlates with taking ideas seriously and doing things outside of what is considered normal.
Or, from the opposite perspective, being a normie correlates with carefully doing and saying things that people in your peer group do.
They say what they think and they don’t soften their language for fear of negative feedback. That conviction takes courage. [...] People here operate with a sort of baseline fearlessness.
Within the community. You don’t know whether they act the same way outside. It is easier (I am not saying always easy) to have courage when you feel safe.
It’s a weird and tiny thing, but the presence of families here makes Lighthaven feel so much more alive.
I guess instinctively, families make a group feel like a tribe, as opposed to a task force.
I need to figure out whether what I enjoy are the tenets of the philosophy, culture, and community or the vibes of the Inkhaven Writer’s Residency.
Yep. I hesitated about how to say this, especially after you wrote such a nice blog post, but...
...there is a difference between being a rationalist, and liking to hang out with rationalists.
A good example is the discussion forum at Astral Codex Ten—a group of people who like to read the texts of Scott Alexander, who is a rationalist, but maybe only 1/3 of the blog readers are rationalists themselves. Which is OK, if that is OK with Scott (and it seems that it is), but sometimes it creates confusion when someone at ACX talks about “rationalists” and actually means “readers of ACX”. (As in “why do rationalists say X?” because there were a few comments saying X, but the person does not bother to check whether the authors of those comments identify as rationalists.)
Analogously, I used to hang out with a group of Christians, but I was never religious myself. I liked some of the vibes, but I couldn’t take the stories about the supernatural seriously, and I was aware of how much “what the Bible says” was their selective reading (because it obviously said different things to different groups—which is why I liked this specific group, not Christianity in general).
Seems like the empirical lesson is that it is dangerous to be a half-rationalist, and we do not have a good way to test who would be helped and who would be hurt by learning about rationality.
There are people who benefited from being introduced to LW-style rationality and the rationalist community in general, sometimes a lot. (I include myself in this group.)
There are also people who got hurt, or hurt others, as a result of being exposed to the rationalist memes and the community. (A long list culminating with Zizians.)
We communicate openly online, so there are no gatekeepers to the knowledge; no driving license required.
Introduce some secrecy? That seems to increase harm: some problematic groups (Zizians, Leverage Research) positioned themselves apart from the mainstream rationalist community and didn’t communicate their original insights. The result was a dramatic decrease in sanity, as even obviously wrong and harmful ideas could propagate in a small group supported by the charisma of its leader.
Introduce more common sense and conservatism? That seems like a post-rationalist project, which from my perspective is just watered-down rationalism. (Plus Buddhism, which is like… how the fuck can people who have read the Sequences and understood why mysterious answers are wrong and religions are silly suddenly embrace some stupid thousand-year-old religion, just because it has promised them superpowers if they meditate hard enough? After a few years of experimenting, does anyone have the superpowers yet, or do you just have to keep believing and keep practicing indefinitely without expecting any experimentally verifiable results?) Frankly, I do not see any higher sanity here. It is just normie epistemology, with all of its advantages and disadvantages.
Perhaps we should have a group of “certified sane rationalists”, and a way to ask them if you have a crazy mind-blowing idea. (Creating the group would be relatively simple: start with some core, e.g. Eliezer Yudkowsky and Anna Salamon and Scott Alexander, and then add or remove people based on a majority vote of the current group members.) The problem is, either most people wouldn’t bother asking them, or they would be too busy responding.
Last time I did this I put them on a new YouTube channel. In retrospect, that was a mistake: I haven’t uploaded anything to that channel since that initial burst, and there’s a good chance I never upload again. So I’ve just put these on my regular channel.
You can make a “playlist” of videos on YouTube. They are still in your channel, but grouped together, so you can share the link to either the channel or the playlist. (e.g. channel—playlist)
So both pro- and anti-capitalist people seem to underestimate how much big companies break the law? Pro-capitalists, because they want to defend all companies (they don’t realize that an essential part of capitalism is that bad companies fail). Anti-capitalists, because they see the problem with companies per se, or the market per se, so they don’t care much about the details.
Yeah, I would expect that big companies win unfairly by lobbying and changing the laws in their favor, not by simply breaking the laws. But it makes sense that if you can bribe the legislative part of the government, you can probably bribe the judicial part, too. So breaking the law and not getting punished is easier than waiting for the law to be changed in your favor, and gives you more of an advantage against competitors.
I am not familiar with the American justice system, so I can’t comment on it. Here in Slovakia, the justice system is utterly corrupt. We had situations where the mother of a local crime boss was a regional judge, and she always ruled in favor of her son, no matter what he did. There is also a big company famous for winning all the big construction contracts from the government, giving all the actual work to subcontractors, and often simply not paying the subcontractors—putting not just the profit but the entire budget in their own pockets. I kinda hoped it was better in other countries.
Cynically speaking, when you break the law as a CEO, you have multiple lines of defense:
you may simply not get caught
the prosecution may decline to prosecute you
your expensive lawyers may find a way to win
you may bribe the judge
worst case, the company (i.e. the shareholders) will pay the penalty, not you
I changed my mind about what big companies are like and about how capitalist, rights-respecting and law-abiding our society is. I wrote Capitalism Means Policing Big Companies. I lowered my opinion of billionaires in general. And I lowered my opinion of anarcho-capitalism. I see errors in the anarcho-capitalist literature that I don’t want to associate with.
There is a thing I was thinking a lot about recently, that I have never seen written, until now.
The non-aggression principle says that people should not initiate violence or fraud. The libertarians I see online keep complaining about violence (especially when talking about tax) all the time. But they are suspiciously silent about fraud. Or customer manipulation, which is basically fraud-lite. If there is a debate about fraudulent businesses, the only contribution of local libertarians is typically something like “I hope you do not suggest that the government do something about it, because government is funded by taxes, and taxation is violence”.
This asymmetry makes me think that many libertarians are probably quite okay with fraud and manipulation; that they see them as an essential part of the sacred freedom. Perhaps not consciously; but unconsciously, thinking about regulations of fraud makes them angry, thinking about fraud itself does not. Perhaps the idea is that smart people would research everything carefully, and the stupid people kinda deserve it.
(Even when I think about the books describing libertarian utopias, e.g. written by Heinlein, the protagonist is often a super skillful lawyer or amateur lawyer, reads all his contracts carefully, notices all suspicious parts, and can craft his own bulletproof contracts. So there is a strong “fraud—that could never happen to me” vibe.)
From my perspective, even exploiting someone’s stupidity is not fundamentally different from exploiting someone’s weakness. Stupidity is a weakness of the mind, and fraud is violence against the mind. I would even go so far as to say that in my utopia, if your advertisement confuses an IQ 80 person into believing something, and then you go “ha ha, the small print says otherwise”, you should be treated as if your contract literally said what your ad says, ignoring the small print. (The small print can provide additional details, not fundamentally change the nature of the contract.) If you said it, and the other person heard it, own it. If it’s knowingly false, don’t put it in print.
Specifically, the things that make other people happy will not necessarily make you happy. So the advice on how to achieve them, even if factually correct, may be irrelevant for you.
a ruined body or mind usually comes from persistent bad habits rather than a single try.
Then the relevant advice could be something like:
It is good to try various things out of curiosity (if they are harmless), but after trying them, evaluate whether the experience was good or bad for you, and feel free to stop doing the things that were bad. The fact that you tried something once does not create an obligation to continue. You are free to start doing things and to stop doing things.
That said, consider how much time it would take for a noticeable effect to happen. For example, going to a gym once won’t have much of a positive effect, so it may feel like wasted time. On the other hand, people sometimes procrastinate on ending a useless activity. Try to decide in advance how much time you should spend on the activity before it either delivers positive results or you stop.
Thank you! I was already thinking in a similar direction; the main difference was the following: instead of giving Claude instructions in text files, I would provide them in chat, and tell Claude to create the files itself.
When I am vibecoding, I typically tell Claude what to do, and when it does, I tell it to also write the project documentation. Then I read the documentation and comment on it. The idea is that the documentation is a living document: if I come up with some creative idea in the middle of the project, I tell Claude to implement it, and when I am happy with the outcome, I tell it to also update the documentation.
I usually provide some rough structure for documentation. So in this case I would probably tell Claude to create files “conversation.md” for conversational rules it is supposed to use with me, “viliam.md” for general facts about me, “goals.md” for the goals I am trying to reach… uhm, that’s just the first idea; the advantage of using an AI is that refactoring it later takes almost no effort.
Providing the current date every day sounds very useful; it gives Claude e.g. the possibility to ask: “three days ago you said that you planned to do X—did you actually do it?”
The main thing I am worried about right now—but maybe I should just go ahead and try it, instead of speculating—is that when I am talking with my friends, there are different “modes”: just listening and providing empathy, or giving advice. I think the important part is the flexibility; if the friend only listened, I would be missing the good ideas and perspectives; but if the friend always provided advice, I would not feel sufficiently seen. It is annoying when LLMs end every single response with some suggestion of what to do next. But I would appreciate a suggestion sometimes. Maybe I should be explicit about it? It is also a question of time: if I have enough time, I am more open to suggestions; but sometimes I want to get to the point without being interrupted or distracted.
I guess I will start with your instructions, add some of my own, but I will write it all in the chat and tell Claude to make notes for itself… and then I’ll see how it goes.
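For the file-based variant mentioned at the start, the structure could be bootstrapped like this (a sketch only; the file names are the ones proposed above, and the contents are illustrative placeholders that Claude would then maintain):

```shell
# Create the instruction files for a Claude coaching setup.
# Directory name and file contents are hypothetical examples.
mkdir -p claude-coach

cat > claude-coach/conversation.md <<'EOF'
# Conversational rules
- Default to listening and empathy; offer advice only when asked.
- Do not end every reply with a suggestion of what to do next.
EOF

cat > claude-coach/viliam.md <<'EOF'
# General facts about me
(filled in and updated by Claude during conversations)
EOF

cat > claude-coach/goals.md <<'EOF'
# Goals I am trying to reach
(filled in and updated by Claude during conversations)
EOF

# List the created files.
ls claude-coach
```

The same layout can of course be created by Claude itself from chat instructions; the point is only that the structure is cheap to set up and cheap to refactor later.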
EDIT:
The first conversation was just as good as I imagined. Thank you for giving me inspiration and specific advice!
Okay, I’ll try.
But I am mostly thinking about what a powerful meta move it is to invent mechanisms that prevent pies from being confiscated. Something like the GNU GPL, which helped create the entire ecosystem of free software. I can easily imagine a parallel universe where this license does not exist, and people not only can’t imagine it, but many of them signal cleverness with economic arguments for why something like this is impossible in principle.
What other anti-pie-grabbing mechanisms exist in parallel universes but not in ours?
It seems to me that the greatest winners in real life are the people who spend their actual effort on grabbing as large a part of the pie as possible, while convincing everyone around them that the virtuous thing is growing the pie. Imagine someone like Sam Altman—convince lots of smart people with technical skills to create an “open” AI to benefit all of humanity… then stab them in the back and make the AI company closed and profit-oriented.
In my experience, this seems more like the rule than the exception. This is how people get to the top. You need to talk a lot about growing the pie… if you can’t inspire enough people to do it, the pie won’t grow large enough. But while everyone around you is busy growing the pie, you set up the mechanism that will allow you to take it all.
Now, we have some mechanisms to prevent these kinds of traps. There are free software licenses, which prevent the project leader from simply kicking out the developers after they have completed their work and capturing the long-term value. There are cooperatives, which prevent the boss from capturing the long-term value of the company and kicking out the early employees who burned out working for him. But of course, the people who plan to capture the value will try to discourage others from using these solutions. “Just trust me, bro.”
I mean, I am totally in favor of cooperation, but optimism alone is not enough. Sometimes, if you spend 5 seconds—not even minutes—thinking about it, you can predict who will get the pie and how, because it is often trivial. It typically only requires them to say “I don’t need you anymore” after the pie is ready.
I suspect the complexity of work might expand to consume the slack you have just created.
For example, agile software development allowed developers to react better to last-minute changes in plans. In response, many companies mostly stopped planning—what’s the point of thinking about something in advance if you can change your mind at any moment, even repeatedly? The technique that was created to deal with essential chaos coming from outside the company is now mostly used to battle incidental chaos created by the company itself.
It even leads to more micromanagement, because where previously the managers made the plans and then the developers decided how to implement them, now the managers can throw dozens of little Jira tickets at them, which means that the very task of “splitting a large piece of work into smaller pieces and prioritizing them” was taken away from the developers.
For example, if I told you that you have to create 20 dialog windows that share 95% of their content, with only some details differing, you could create one superclass or template class and then quickly produce 20 subclasses. But if instead I give you 20 Jira tickets, one by one, you need to implement each dialog separately, because you can never justify why this specific ticket requires you to create the common abstraction.
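The superclass approach can be sketched like this (a minimal sketch in Python; the class names, fields, and text-based rendering are hypothetical, just to show the shape of the shared abstraction):

```python
# A hypothetical base class capturing the ~95% of behavior shared by all dialogs.
class BaseDialog:
    title = "Dialog"
    fields = []  # list of (label, default value) pairs

    def render(self):
        # Render a simple text representation of the dialog.
        lines = [f"== {self.title} =="]
        for label, default in self.fields:
            lines.append(f"{label}: [{default}]")
        lines.append("[OK] [Cancel]")
        return "\n".join(lines)

# Each of the 20 dialogs then only declares its differing details.
class RenameDialog(BaseDialog):
    title = "Rename"
    fields = [("New name", "")]

class DeleteDialog(BaseDialog):
    title = "Confirm delete"
    fields = [("Type DELETE to confirm", "")]

print(RenameDialog().render())
```

Once the base class exists, adding dialog number 21 is a few lines; implemented ticket by ticket, each dialog would duplicate the rendering logic instead.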
I do not have sufficient skills at AI coding to predict what exactly will go wrong; I just assume that it will, based on my previous experience. Right now the system saves you some cognitive power—I expect that in response, the incidental chaos in companies will increase to the degree that you will again have to spend just as much cognitive power as before, only most of it will be spent on battling the new chaos, not on producing software features. For example, management will change their opinion on what kind of software you are producing 20 times a day, or something like that. Some kind of horror that we can’t even imagine today, but two years later it will be considered unprofessional to complain about it. (Just like today, saying “can’t you guys simply spend 5 minutes thinking about the thing before you create a Jira ticket for me?” makes you insufficiently “agile”.)
Two possible objections:
1) Tradeoffs that seem reasonable at the moment may appear less reasonable later, when the environment changes. For example, when web pages were mostly text, it didn’t matter if there was some bloat in the web browser. But then the web pages themselves exploded in size, and now opening five new tabs of Less Wrong can cause my Firefox to slow down and sometimes crash, which makes me angry at both Less Wrong and Firefox—I see no reason why inactive tabs should tax the computer so much. So today, making browsers more efficient would help all Less Wrong readers, all Substack readers, all Electron app users, etc.
2) The usual problem that creating value is not the same as capturing value, and the companies only care about the latter. Making each part of the system more efficient makes the entire system more efficient as a whole, but people are only willing to pay for some parts of the system.
That is, before I read Edward Teach’s Sadly, Porn, which is outright misanthropic, and still feels pretty accurate whenever I can make any sense of it.
Sounds like I should read the book, which sucks, because the only consistent message in reviews is that it is hard to guess what the author is trying to say.
People also have aesthetic preferences (read: values) that do not have obvious self-interested purpose.
Yep. The extremely cynical explanations can be cool and edgy, but their maps miss large parts of the territory. A world where everyone is a psychopath in denial would have some things similar to our world, but also some things wildly different. Unless you keep adding epicycles until everything can be explained as some kind of 5D chess move, but then the theories lose a lot of predictive power.
(The entire point of claiming that someone is selfish is to predict that in certain types of situations their previously hidden selfishness will manifest in their behavior. But whenever it does not, you add an extra explanation, like “well, even if they no longer need to lie to others, they still keep lying to themselves” and “they have managed to convince themselves of their own goodness so thoroughly that they no longer recognize the right opportunity to stop”… yeah, maybe it is true in some metaphysical sense, but if you can justify such things retroactively, you should also include them in your predictions of future behavior. You can’t both predict that people would certainly do a bad thing if they got the opportunity, and keep finding excuses whenever they don’t.)
Having a wrong mental model is not the same as not having a mental model at all. I agree that expecting the capabilities to level out soon is unjustified, but it’s probably what most people believe.
This is a lazy but natural generalization from past experience: there are no flying cars. Light bulbs are everywhere, but they don’t grow exponentially to the point where they would already burn entire cities. All white-collar jobs require computers, but you still need plumbers to fix broken pipes.
Why should this new shiny toy be any different? Priors say the hype is unjustified.
Sure, we know better, but most people do not think on that level. They do not see that some things generalize in ways that most things don’t. They do not see that a better mousetrap only replaces the older mousetrap, but e.g. a computer can replace a typewriter and calculator and television and phone and many other things, to the degree that some people already use computers for most things they do. And artificial intelligence will be even more like this for intellectual tasks, even more so when it also gets robotic bodies, and it could take humans out of the loop entirely.
The outside view heuristic fails when it encounters something that happens to be truly exceptional.
I would appreciate a more detailed explanation of how specifically you use Claude.
My attempts to use Claude as some kind of coach / therapist led to Claude adopting various annoying personalities. So either you are doing something very differently, or you have a greater tolerance for that.
The self-preservation instinct makes the difference. Your employer made a big mistake by failing to detect it.