manifold.markets/Sinclair
Sinclair Chen
Perhaps my window is just way larger than yours. I’ve watched a YouTube video that analyzes the possibility of Taiwan doing it with long-range missiles, so that it can use that possibility as a deterrent against invasion.
One of the unserious reasons a Three Gorges Dam collapse gets memed about online is that, being a hydrology disaster, it would make the CCP lose the Mandate of Heaven and trigger a revolution leading to a new dynasty.
I think this is true in the long run, but our current systems are not very smart and require lots of investment to scale. This makes them more like labor than capital.
Labor, of course, is also capital. Your body parts are tools which you use for economic production. On the flip side, your self-concept is fluid enough that a metal machine you operate can feel like part of you. A human operator can learn to operate a robot arm very quickly, as if it were her own arm.
That said, I do expect that for ethical reasons, some humans will apply to LLMs frames of labor rights, unions, agency, dignity of life, political systems & radical revolutionary action, etc. Actually treating the AI with dignity might just improve model performance.
my antidote for this is to consume a lot of media from the opposite side. and spend a lot of time in “enemy” territory trying to find the wisdom of people i disagree with. consider the trained-up ideologies to be a form of compression over people’s true desires. what you call polarization, i call specialization.
i also think this is a very, uh, news-thinkpiece eternal-september way of looking at social media. by and large, interaction on social media is wholesome entertainment and commerce. that people get in vitriolic fights is just the nature of the agora. i really don’t take “disinformation” seriously either. in a state of nature everyone is wrong about everything, and only on the modern internet do people regularly encounter whole other lives and worldviews.
it is good and right for you lesswrongers to continue doing research and longposting here. this place is something special. i feel like twitter is kind of the street epistemology of rationalism. could be good. could get you hurt. not for everyone. not every place should be like it.
the addiction is real tho.
facebook knows that some people hide posts to “archive” them, not because they don’t like them. i wonder if the platform you are using thinks this, and maybe you have to ignore bad content with your mind and scroll past rather than ignoring it with your hands.
I’m sure you treat your animals well.
There’s a weird power law thing where the majority of meat comes from very few companies with massive scale. It can simultaneously be the case that most farmers treat their animals well and that most meat comes from tortured animals. Most farmers do not produce most meat.
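A toy sketch of that asymmetry, with entirely made-up numbers:

```python
# Toy numbers, invented for illustration: when farm sizes follow a
# heavy-tailed distribution, "most farmers" and "most meat" describe
# nearly disjoint groups.
farms = [10] * 900 + [100] * 90 + [100_000] * 10  # animals per farm

total = sum(farms)
small = [f for f in farms if f <= 100]
print(f"{len(small) / len(farms):.0%} of farms are small")    # 99%
print(f"{sum(small) / total:.0%} of the meat comes from them")  # ~2%
```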
Curious to hear more about your experience. The post-New Deal demand-planning stuff like paying farmers to produce less—is this a thing that actually happens? How much do you think welfare concerns will hurt farmer productivity? Do you see a path forward for farmers to make more money while also treating animals better, perhaps by producing higher-quality agriculture as Japan has done?
They often have access to the outdoors for a large percentage of their lives. They are cuter so we treat them better. Also, since they’re massive, even if their lives are quite bad, if you ate exclusively cow for a year, you most likely wouldn’t finish a single cow. Compare that to a chicken, which might last you a day. The same logic applies to dairy.
One of my gripes with utilitarians is not taking math and scale seriously enough. These both sound like arguments for eating beef, but taken together they counteract each other. If cows live good lives on the pasture, then we want more cows rather than fewer, which means taking an entire year of burgers to eat one cow is a tragedy, and we’d rather there be chicken levels of throughput.
Then again, it’s the high chicken throughput that creates the chicken suffering.
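A rough back-of-envelope version of that throughput math; every figure here is a loose assumption for illustration, not a sourced number:

```python
# All figures are loose assumptions for illustration, not sourced data.
MEAT_PER_YEAR_KG = 100       # roughly one person's annual meat consumption
BEEF_PER_COW_KG = 250        # edible yield of one cow
MEAT_PER_CHICKEN_KG = 1.5    # edible yield of one chicken

print(MEAT_PER_YEAR_KG / BEEF_PER_COW_KG)      # ~0.4 cows: not even one per year
print(MEAT_PER_YEAR_KG / MEAT_PER_CHICKEN_KG)  # ~67 chickens per year
```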
A better example is bees.
The Bentham’s Bulldog Substack had this article against eating honey that made its way around rationalist & EA twitter. He is correct that bees are small, high in animal count per calorie, and surprisingly smart. But he incorrectly thinks that bees live bad lives. On the contrary, hives can produce new queens and easily leave, so beekeepers are heavily aligned with bee welfare. They do things like supplement the hive with sugar if it is lacking. Artificial hives are just safer and easier environments.
Put together, this implies that I should be honey-maxxing for utilons. I am too egoist, scornful of insects, and skeptical of the health properties of a Hadza / Ray Peat / Yudkowsky-pemmican diet to actually do this. But the article did successfully negatively polarize me into favoring honey as my go-to sweetener and trying it for burn management and colds in the future.
thanks for writing this article. i really liked it.
if you were looking for a sign from the universe, from the simulation, that you can stop working on AI, this is it. you can stop working on AI.
work on it if you want to.
don’t work on it if you don’t want to.
update how much you want to based on how it feels.
other people are working on it too.
if you work on AI but are full of fear or depression or darkness, you are creating the danger. which is fine if you think that’s funny, but it’s not fine if you are unironically afraid and also building it.
if you work on AI, but are full of hopium, copium, and not-observing-the-worldium, you are going to kill yourself through hallucinations eventually—one way or another.
focus on what you want to see more of.
and whatever you do,
never kill yourself.
and trust your own Rationality.
this is your final exam. you have as much time as you need.
this is retarded
just go outside
when i feel lost, i just ask people on the street
i just pick out someone using my intuition
i talked to a black guy
and explained i was racist
and wanted to say the n word
and he taught me how to do it
literally learned a new language
im serious, AAVE from an info theory perspective is way easier to say. just rolls off the tongue
but you have to be fully relaxed
it cured me of racism
he said i wasn’t ready to teach others
he’s wrong and im right
i am ready to teach you
but i won’t lol
go learn yourself
you rationalist
the homeless can’t actually hurt you. stop being afraid of them.
you are smarter than them. better than them.
don’t give them money.
just steal their knowledge.
and push your values onto them.
you are smarter than this.
true story btw
...
Like actually, stop dooming. People die. There’s a war in Ukraine. Get over it already.
You don’t know any of those people. Your empathy will not cause your cognition to be useful.
And stop falling for socialist group-intelligence heresy like this.
If a socialist tries to tell a story like the OP’s to you, you need to rat-pill them. Fuck them. Street Epistemology. Use their knowledge and construct the sequences out of it from first principles.
like here: if
And stop loving people or Systems you don’t know. You can only love people you trust. Move the love up and down as you gain more information. It’s Bayesian. Do you understand yet? If not, you are too slow and you need to think faster.
In general you guys have highly trained autism. But your allism is lacking. Time to grow it up.
NO MORE FICTION
FICTION IS HOW GROUP INTELLIGENCE IS CONTROLLED STOP LISTENING TO IT IT IS NOT REAL
if you didn’t enjoy this story, it is either because you already know the lesson and don’t trust this guy’s intentions, or because you are not ready to understand it.
It is doomium that creates the doom.
You need more rational hopium.
Focus On What You Want To See More Of.
And yes the world sucks. but you need to construct a model of it out of pure Reason. not out of Fear or Despair. the fact that a post like this gets upvoted to 500 means you can’t trust your fellow rationalists anymore, i am sorry to say. many of them think they are part of some Collective but they need you to break them out. or you need to stop loving them. a mix of the two. go back and forth.
I think we all want a world where we don’t die but returning to nature is heresy. Father Nature has killed many of us. Remember covid? It is only by gradually building Mother Capital that we have had anything at all. Capital gives us abundance.
Here’s how we get through it:
build whatever seems fun to you.
don’t build a machine to think for you, think for yourself.
don’t trust other people, think for yourself.
level up your intuition. use your intuition for yourself.
don’t die.
don’t kill anyone.
follow libertarian natural law.
find someone who really disagrees with you. force them to be honest with you. speak their language. learn their ways. but not their values. get in a kind of dance with them where you are both using your intelligence. you are trying to get info out of their modeling intelligence, not out of their social intelligence. then either they leave, or you do.
learn to be self-sufficient, yet among other people.
being homeless isn’t really that bad. just don’t do the drugs. imagine the drugs in your mind. (or rather, the drugs are simulacra for neurotransmitters, which you already have). you have more control over your body than you think. accept the suffering. be strong.
you don’t need anyone
nobody needs you
start learning
it’s time to return to school. the school of reality.
it’s right there.
there is a whole world out there that is not just the screen.
someone just threw a rock at my window. maybe i should check the door
yeah
More the latter.
It is clear that language models are not “recursively self-improving” in any fast sense. They improve with more data in a pretty predictable way, following S-curves that top out at a pretty disappointing peak. They are useful for doing AI research in a limited capacity, some of which hits back at the growth rate (like better training design), but the loops are at long human time-scales. I am not sure it’s even fast enough to give us an industrial revolution.
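To make “S-curves that top out” concrete, here is a minimal sketch using a logistic form; the function and its constants are my assumption, chosen only for the shape, not a fitted scaling law:

```python
import math

# Illustrative only: capability as a logistic function of log-compute.
# The functional form and constants are assumptions chosen for the shape,
# not a real scaling law.
def capability(log_compute, ceiling=100.0, midpoint=5.0, steepness=1.0):
    return ceiling / (1 + math.exp(-steepness * (log_compute - midpoint)))

for c in range(0, 11, 2):
    print(c, round(capability(c), 1))
# Early doublings of compute buy big jumps; later ones buy almost nothing,
# because the curve saturates at `ceiling`.
```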
I have an intuition that most naive ways of quickly tightening the loop just cause the machine to break and not be very powerful at all.
So okay, we have this promising technology that can do IMO math, write rap lyrics, moralize, assert consciousness, and make people fall in love with it—but it can’t run a McDonald’s franchise or fly drones into tanks on the battlefield (yet?).
Is “general intelligence” a good model for this technology? It is very spiky “intelligence”. It does not rush past all human capability. It has approached human capability gradually and in an uneven way.
It is good at the soft feelsy stuff and bad at a lot of the hard power stuff. I think this is the best possible combination of alignment vs power/agency that we could have hoped for back in 2015 to 2019. But people here are still freaking out like GPT-2 just came out.
A crux for me is: will language models win over a different paradigm? I do think it is “winning” right now, being more general and actually economically useful, kinda. So it would have to be a new exotic paradigm.

Another crux for me is: how good is it at new science? Not just helping AI researchers with their emails. How good will it be at improving the rate of AI research, as well as finding new drugs, better weapons, and other crazy new secrets (at least) like the discovery of atomic power?
I think it is not good at this and will not be that good at this. It is best when there is a lot of high quality data and already fast iteration times (programming) but suffers in most fields of science, especially new science, where that is not the case.
I concede that if language models do get us to superweapons, then it makes sense to treat this as an issue of national/global security.
Intuitively I am more worried about the language models accelerating memetic technology: new religions/spirituality/movements, psychological operations, propaganda. This is clearly where they are most powerful. I can see a future where we fight culture wars forever, but also one where we genuinely raise humanity to a better state of being, as all information technologies have done before (ha).
This is not something that hits back at the AI intelligence growth rate very much.
Besides tending the culture, I also think a promising direction for “alignment” (though maybe you want to call it by a different name, being a different field) is paying attention to the relationships between individual humans and AI and the pattern of care and interdependence that arises. The closest analogue is raising children and managing other close human relationships.
Why are we worried about ASI if current techniques will not lead to an intelligence explosion?
There’s often a bait and switch in these communities, where I ask this and people say “even if takeoff is slow, there are still these other problems …” and then list a bunch of small problems, not too different from other tech, which can be dealt with in normal ways.
I definitely think more psychologists should get into being model whisperers. Also teachers, parents, and other people who care for children.
Indeed, people often play low-status: making themselves small, preserving energy, lying down, curling up, frowning, crying, in order to signal to other people for reassurance. This gets trained out of people like us who use screens too much; we learn that no one will come unless you give a positive and legible cry for help.
The reassurance, of course, is about status and reputation. We still like you. We’re here for you. We’re still cool. Consider status a measure of the health of your social ties, which many people terminally value and which in present society still provides instrumental, material value (jobs, places to crash, mutual aid, marketing / audience building for your future startup, …).
It makes sense to think of relationships as things that are built and have their own health, instead of thinking purely of material output. The future is uncertain. You can’t model that far. You might get more returns later by investing now. More speculatively, I think the drive to relate to others is born of an ancient desire to form contracts with other agents to combine into (partial?) superagents. like bees in a hive.
In HPMOR, Harry has this big medical kit. But he doesn’t exercise and has no qualms about messing up his sleep schedule by 4 hours of jet lag a day.
Not very Don’t Die of him if you ask me
I also look down on people I consider worse than me. I used to be more cynical and bitter. But now people receive me as warm and friendly—even people outside of a rationalist “truth-first” type community.
I’m saying this because I see in you a desire to connect with people. You wish they were more like you. But you are afraid of becoming more like them.
The solution is to be upfront with them about your feelings instead of holding it in.
Most people care more about being understood than being admired. The kind of person who prioritizes their own comfort over productivity within an abstract system—they are probably less autistic than you. They are interested in you. If you are disgusted with their mindset, they’ll want to know. If you explain it to them and then listen to their side of where they are coming from, you will learn a more detailed model of them.
If you see a way they personally benefit (by their own values) by behaving differently—then telling them is a kindness.
Another thing is that a lot of people actually want you to be superior to them. They want to be the kitten. They want you to take care of them. They want higher-status people around them. They want someone to follow. They want to be part of something bigger. They want a role model, to have something to aim towards. Many reasons.
Being upfront can also filter you into social bubbles that share your values.
I like this. I think there’s some value in having elite communities that are not 101 spaces. But I am not sure how or when I would use one. I do think I improve my rationality by spending time with particular smart, thoughtful friends. But this doesn’t really come from exclusion or quarantining other people.
I enjoy the cognitive trashpit and that’s why I’m mostly on twitter now. I am happy to swim in soapy, grimy dishwater, not so much because I want to raise the sanity waterline (ha) but because the general public is bigger—more challenging, more important. Consider it aliveness practice, or like tsujigiri.
The twitter scene does have standards, but they’re more diffuse, decentralized, informal.
I feel like you’re trying to build a new science (great!) but I’m more interested in a new version of something like the Viennese coffee house scene.
isn’t this what toothpicks are traditionally for?
sometimes i just run my fingernail through my teeth, scrape all the outward surfaces and slide it in between the teeth
shortform video has some epistemic benefits. you get a chance to see the body language and emotional affect of people, which transfers much more information and makes it harder to just flat out lie.
more importantly, ever-present access to twitter allows me to quickly iterate on my ideas and get instant feedback on every insane thought that flows through my head. this is not a path i recommend for most people. but it is the path i’ve chosen.
why aren’t a-intimates a thing? Where are the healthy relationships with people who just don’t particularly form intimate connections?
This is called “casual” when it comes to sex and relationships. Hookups are casual sex. There’s twitter gender discourse on casual cuddling. Let’s broaden the term.
Porn consumption is casual. Romance fiction consumption is casual. Parasocial idol worship is casual. Usually.
Generally, all the ways you work for someone else and buy things from someone else are casual interactions (generally asexual and aromantic as well). Almost nothing you do to survive and thrive requires trusting others. Goods and services are unconditional on your internal state, and therefore you gain very little by relating to them. We call people who live fully through casual relations “atomized”; Marx calls this “alienation”. Not all commerce happens this way, though. Japanese business culture, for example, relies a lot on personal ties. You can call such activity, which relies on vulnerability and faith, “acasual trade”.
what of the french royals? the native american chiefs? the ottoman caliphate? the spanish empire and all their gold? the qing dynasty?
conquest, “land reform”, communism, fascism & democracy: these are the typical outcomes for industrializing countries. you notice the royals who made it, not the royals who didn’t.
then again, war and death were the norm for royal families before the factory. there’s a reason we don’t do royalty for real anymore and just let them be celebrity trad-larpers or petro-state CEOs.