manifold.markets/Sinclair
Sinclair Chen
More the latter.
It is clear that language models are not “recursively self-improving” in any fast sense. They improve with more data in a fairly predictable way, along S-curves that top out at a pretty disappointing peak. They are useful for doing AI research in a limited capacity, some of which hits back at the growth rate (like better training design), but the loops run at long human time-scales. I am not sure it’s even fast enough to give us an industrial revolution.
I have an intuition that most naive ways of quickly tightening the loop just cause the machine to break and not be very powerful at all.
So okay, we have this promising technology that can do IMO math, write rap lyrics, moralize, assert consciousness, and make people fall in love with it—but it can’t run a McDonald’s franchise or fly drones into tanks on the battlefield (yet?).
Is “general intelligence” a good model for this technology? It is very spiky “intelligence”. It does not rush past all human capability. It has approached human capability gradually and in an uneven way.
It is good at the soft, feelsy stuff and bad at a lot of the hard power stuff. I think this is the best possible combination of alignment vs power/agency that we could have hoped for back in 2015 to 2019. But people here are still freaking out like GPT-2 just came out.
A crux for me is: will language models win over a different paradigm? I do think it is “winning” right now, being more general and actually economically useful, kinda. So it would have to be a new exotic paradigm. Another crux for me is: how good is it at new science? Not just helping AI researchers with their emails. How good will it be at improving the rate of AI research, as well as finding new drugs, better weapons, and other crazy new secrets at least on the order of the discovery of atomic power?
I think it is not good at this and will not become that good at this. It is best where there is a lot of high-quality data and iteration times are already fast (as in programming), but it suffers in most fields of science, especially new science, where that is not the case.
I concede that if language models will get us to the superweapons, then it makes sense to treat this as an issue of national/global security.
Intuitively I am more worried about language models accelerating memetic technology: new religions/spiritualities/movements, psychological operations, propaganda. This is clearly where they are most powerful. I can see a future where we fight culture wars forever, but also one where we genuinely raise humanity to a better state of being, as all information technologies have done before (ha).
This is not something that hits back at the AI intelligence growth rate very much.
Besides tending the culture, I also think a promising direction for “alignment” (though maybe you want to call it a different name, being a different field) is paying attention to the relationships between individual humans and AI and the pattern of care and interdependence that arises. The closest analogue is raising children and managing other close human relationships.
Why are we worried about ASI if current techniques will not lead to intelligence explosion?
There’s often a bait-and-switch in these communities, where I ask this and people say “even if takeoff is slow, there are still these other problems …” and then list a bunch of small problems, not too different from other tech, which can be dealt with in normal ways.
I definitely think more psychologists should get into being model whisperers. Also teachers, parents, and other people who care for children.
Indeed, people often play low status: making themselves small, conserving energy, lying down, curling up, frowning, crying, in order to signal to other people for reassurance. This gets trained out of people like us who use screens too much; we learn that no one will come unless we give a positive and legible cry for help.
The reassurance, of course, is about status and reputation. We still like you. We’re here for you. We’re still cool. Consider status a measure of the health of your social ties, which many people terminally value and which in present society still provides instrumental, material value (jobs, places to crash, mutual aid, marketing / audience building for your future startup, …).
It makes sense to think of relationships as things that are built, that have their own health, instead of thinking purely in terms of material output. The future is uncertain. You can’t model that far. You might get more returns later by investing now. More speculatively, I think the drive to relate to others is born of an ancient desire to form contracts with other agents and combine into (partial?) superagents, like bees in a hive.
In HPMOR, Harry has this big medical kit. But he doesn’t exercise and has no qualms about messing up his sleep schedule by 4 hours of jet lag a day.
Not very Don’t Die of him if you ask me
I also look down on people I consider worse than me. I used to be more cynical and bitter. But now people receive me as warm and friendly—even people outside of a rationalist “truth-first” type community.
I’m saying this because I see in you a desire to connect with people. You wish they were more like you. But you are afraid of becoming more like them.
The solution is to be upfront with them about your feelings instead of holding them in.
Most people care more about being understood than being admired. The kind of person who prioritizes their own comfort over productivity within an abstract system—they are probably less autistic than you. They are interested in you. If you are disgusted with their mindset, they’ll want to know. If you explain it to them, and then listen to their side of where they are coming from, you will learn a more detailed model of them.
If you see a way they personally benefit (by their own values) by behaving differently—then telling them is a kindness.
Another thing is that a lot of people actually want you to be superior to them. They want to be the kitten; they want you to take care of them. They want higher-status people around them. They want someone to follow. They want to be part of something bigger. They want a role model, something to aim towards. Many reasons.
Being upfront can also filter you into social bubbles that share your values.
I like this. I think there’s some value in having elite communities that are not 101 spaces. But I am not sure how or when I would use one. I do think I improve my rationality by spending time with particular smart, thoughtful friends. But this doesn’t really come from exclusion or quarantining other people.
I enjoy the cognitive trashpit, and that’s why I’m mostly on twitter now. I am happy to swim in soapy, grimy dishwater, not so much because I want to raise the sanity waterline (ha) but because the general public is bigger—more challenging, more important. Consider it aliveness practice, or like tsujigiri.
The twitter scene does have standards, but they’re more diffuse, decentralized, informal.
I feel like you’re trying to build a new science (great!) but I’m more interested in a new version of something like the Viennese coffee house scene.
isn’t this what toothpicks are traditionally for?
sometimes i just run my fingernail through my teeth, scrape all the outward surfaces and slide it in between the teeth
shortform video has some epistemic benefits. you get a chance to see the body language and emotional affect of people, which transfers much more information and makes it harder to just flat out lie.
more importantly, everpresent access to twitter allows me to quickly iterate on my ideas and get instant feedback on every insane thought that flows through my head. this is not a path i recommend for most people. but it is the path i’ve chosen.
why aren’t a-intimates a thing? Where are the healthy relationships with people who just don’t particularly form intimate connections?
This is called “casual” when it comes to sex and relationships. Hookups are casual sex. There’s twitter gender discourse on casual cuddling. Let’s broaden the term.
Porn consumption is casual. Romance fiction consumption is casual. Parasocial idol worship is casual. Usually.
Generally, all the ways you work for someone else and buy things from someone else are casual interactions (generally asexual and aromantic as well). Almost nothing you do to survive and thrive requires trusting others. Goods and services are unconditional on your internal state, and therefore you gain very little by relating to the people behind them. We call people who live fully through casual relations “atomized”; Marx calls this “alienation”. Not all commerce happens this way, though. Japanese business culture, for example, relies a lot on personal ties. You can call such activity, which relies on vulnerability and faith, “acasual trade”.
the books in lighthaven are a trap. talk to people!
It would be so cool if the ea / rat extended universe bought a castle. You’d be able to host events like this. Acquiring the real estate would actually be very cheap, castles are literally being given away for free. (though maintenance might suck idk)
btw Wytham Abbey doesn’t count because it’s not even a castle
is reciprocity.io still up? did it move? link seems dead. I wanted to link to it in my substack article about manifold.love
… is it still hosted out of someone’s laptop? i’d be willing to help people get it onto better infra.
I wonder if this has more to do with how taxing it is to display hundreds or thousands of elements under modern, unoptimized web dev practices. In particular, GitHub’s commits page used to rerender the entire page on scroll. It is easy to program things arbitrarily badly, and many an engineer would prefer just displaying fewer things rather than doing it the better-quality but harder way.
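For what it’s worth, the “harder way” here is usually list virtualization: keep the full scroll height but only mount the rows actually on screen. A minimal sketch in TypeScript, assuming fixed-height rows, with hypothetical names (illustrative only, not GitHub’s actual code):

```typescript
// Minimal list virtualization sketch: only render rows inside the viewport.
// Assumes fixed-height rows; names here are hypothetical, not from any real codebase.
const ROW_HEIGHT = 40; // px

// Which slice of the list is visible at this scroll position?
function visibleRange(scrollTop: number, viewportHeight: number, total: number) {
  const first = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT));
  const last = Math.min(total, Math.ceil((scrollTop + viewportHeight) / ROW_HEIGHT));
  return { first, last };
}

// Mount only the visible rows; `inner` is a tall spacer that keeps the
// scrollbar proportional to the full list.
function renderWindow(container: HTMLElement, inner: HTMLElement, items: string[]) {
  inner.style.position = "relative";
  inner.style.height = `${items.length * ROW_HEIGHT}px`;
  const { first, last } = visibleRange(container.scrollTop, container.clientHeight, items.length);
  inner.replaceChildren(
    ...items.slice(first, last).map((text, i) => {
      const row = document.createElement("div");
      row.style.position = "absolute";
      row.style.top = `${(first + i) * ROW_HEIGHT}px`;
      row.style.height = `${ROW_HEIGHT}px`;
      row.textContent = text;
      return row;
    })
  );
}

// Usage: rerendering on scroll now touches ~25 DOM nodes per frame, not 10,000.
// const container = document.getElementById("commits")!;
// const inner = container.firstElementChild as HTMLElement;
// const items = Array.from({ length: 10_000 }, (_, i) => `commit ${i}`);
// container.addEventListener("scroll", () => renderWindow(container, inner, items));
// renderWindow(container, inner, items);
```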
what’s the deal with bird flu? do you think it’s gonna blow up
this is too harsh. love is a good feeling actually. it is something that many people deeply and truly want.
it is good to create mental frameworks around common human desires which are congruent with a philosophy of truthseeking.
interesting. what if she has her memories and some abstract theory of what she is, and that theory is about as accurate as anyone else’s theory, but her experiences are not very vivid at all. she’s just going through the motions, running on autopilot all the time—like when people get into a kind of trance while driving.
You are definitely right about the tradeoff of my direct sensory experience vs other things my brain could be doing, like calculation or imagination. I hope that with practice or clever tool use I will get better at something like running multiple modes at once, task-switching faster between modes, or having a more accurate yet more compressed integrated gestalt self.
tbh, my hidden motivation for writing this is that I find it grating when people say we shouldn’t care how we treat AI because it isn’t conscious. this logic rests on the assumption that consciousness == moral value.
if tomorrow you found out that your mom has stopped experiencing the internal felt sense of “I”, would you stop loving her? would you grieve as if she were dead or comatose?
yeah