From Newcastle, Australia to Berkeley, San Francisco. I arrived yesterday for Less.online. I’ve had a bit of culture shock, a big helping of being increasingly scared, and quite a few questions. I’ll start with those. Feel free to skip them.
These questions are based on warnings I’ve gotten from local non-rationalists. Idk if they’re scared because of the media they consume or because of actual stats. I’m asking these because they feel untrue.
Is it ok to be outside after dark?
Will I really get ‘rolled’ mid day in Oakland?
Are there gangs walking around Oakland looking to stab people?
Will all the streets fill up with homeless people at night?
Are they chill? In Aus they’re usually down to talk if you are.
Culture shocks for your enjoyment:
Why is everyone doing yoga?
To my Uber driver: “THAT TRAIN IS ON THE ROAD!?”
“I thought (X) was just in movies!”
Your billboards are about science instead of coal mining!
“Wait, you’re telling me everything is vegan?” Thank Bayes, this is the best. All our vegan restaurants went out of business.
People brag about things? And they do it openly? At least, I think that’s what’s happening?
“Silicon Valley is actually a valley?!” Should have predicted this one. I kinda knew, but I didn’t know like I do now.
“Wow! This shop is openly selling nangs!” (whippits) “And a jungle juice display!”
All your cars are so new and shiny. 60% of ours are second-hand.
Most people I see in the streets look below 40. It’s like I’m walking around a university!
Wow. It’s really sunny.
American accents irl make me feel like I’m walking through a film.
“HOLY SHIT! A CYBER TRUCK?!”
Ok this is a big one. Apps I’ve had for 8+ years are suddenly different when I arrive here?
This is what Uber is meant to be. I will go back to Australia and cry. Your airport has custom instruction… in app! WHAT!? The car arrives in 2 minutes instead of 30 minutes. Also, the car arrives at all.
The google app has a beaker for tests now?
Snap maps has gifs in it
Apple Maps lets you scan buildings? And has tips about good restaurants and events?
When I bet in the Manifold app. A real paper crane flies from the nearest tree, lands in front of me and unfolds. Written inside, “Will Eliezer Yudkowsky open a rationalist bakery?” I circle “Yes”. The paper meticulously folds itself back into a crane. It looks at me. Makes a little sound that doesn’t echo in the streets but in my head, and it burns. Every time this happens I save the ashes. Are Manifold creating new matter? How are they doing this?
That one was a lie
Things that won’t kill me but scare me rational/irrational:
What if I’ve been wrong? What if this is all a scam? A cult? What if Mum was right?
What if I show up to the location and there is no building there?
What if I make some terribly awkward cultural blunder for SF and everyone yells at me?
What if no one tells me?
I’m sure I’ll be at least in the bottom 5% for intelligence at Less Online. I won’t be surprised or hurt if I’ve got the least Gs of people there. But what if it all goes over my head? Maybe I can’t even communicate with smart people about the things I care about.
What if I can’t handle people telling me what they think of my arguments without kid gloves? What if I get angry and haven’t learnt to handle that?
I’m just a Drama teacher and Psych student. My head is filled with improv games and fun facts about Clever Hans! ‘Average’ Americans seem to achieve much higher than ‘average’ Australians. I’m scared of feeling under qualified.
Other things:
Can you think of something I should be worried about, that I’ve not written here?
I’ve brought my copies of the Rationality A-Z books. I want to ask people I meet to sign their favourite post in the two books. Is that culturally acceptable? Feels kinda weird bc Yud is going to be there. But it would be a really warm/fuzzy item to me in the future.
I don’t actually know what a lot of the writers going look like. I hope this doesn’t result in a blunder. But might be funny, given that I expect rationalists to be pretty chill.
Are other people as excited about the Fooming Shoggoths as I am?
I’m 23; I have no idea if that is very old, very young, or about normal for a rationalist. I’d guess about normal, with a big spread skewed toward the older side.
It feels super weird to be in the same town as a bunch of you guys now. I’ve never met a rationalist irl. I talked to Ruby over zoom once, who said to me “You know you don’t have to stay in Australia right?” I hope Ruby is a good baseline for niceness levels of you all.
If you’re going, I’ll see you at Less.Online. If you’re not, I’d still love to meet you. Feel free to DM me!
I would bet hard cash that “I’m sure I’ll be at least in the bottom 5% for intelligence at Less Online. I won’t be surprised or hurt if I’ve got the least Gs of people there” is wrong.
But okay, let’s just go with your fears and assume for sake of argument that you’re right… Last year, Aella did a live polling event at Manifest where she asked us to, in general, first predict how we relatively compared to the crowd on any N, then define actual buckets of values and arrange ourselves by those buckets.
(So, e.g., “predict how recently have you exercised compared to others; the most recent, all the way on the left, the least recent, all the way on the right” and then “okay, let’s define from that fence to that chair as ‘in the last hour’, the chair to the steps as ‘in the last day’, etc.” and have folks line up twice based on first their relative guess, then their actual value.)
I was overall deeply surprised when she had folks line up under that system by SAT scores and IQ scores. It was, generally, extremely poorly correlated with how awesome it was to get to talk with a given person.
This post already expresses a great deal of the vibes that make me think it will be awesome to meet you, and that you get a lot of the vibes folks are aiming for, as far as I can tell. Looking forward to meeting ya!
Re safety, I don’t know about Oakland, but some parts of SF are genuinely the most dangerous-feeling places I’ve ever been to after dark (because normally I just wouldn’t go somewhere that felt dangerous, but SF feels very fine until it isn’t). If I am travelling to places in SF after dark, I’ll check how dodgy the street entrances are.
berkeley, san francisco is like saying maitland, newcastle
re: #1: yeah I don’t feel unsafe outside after dark in the bay. If a homeless person walks by, I’ll just say hi and ask if they have any urgent unmet needs. even just acknowledging them as a person is a nice gesture, though. many will try to engage much more than you have time or interest for; it’s okay to just walk away from the convo.
had to look up what “getting rolled” is. yeah, it’s possible, but not that hard to avoid. if an area seems very poor, there will be more desperate people. but the highest risk of being robbed is probably opportunistically on the train. keep your eyes mobile; it’s probably a 1 in 300 to 1 in 3,000 train trips event, but it’s pretty annoying when it happens, to put it mildly.
I’m not aware of there being an intense presence of organized aggressive groups in oakland, but there’s certainly plenty of disorganized aggression, again mostly from desperate people. I got out of what was going to be a mugging once by offering to send them internet money (venmo) before they asked for anything, and they were so knocked off balance by this (I was saying “I don’t have cash but I can send it on an app”) that they almost bolted instead of accepting it. carry cash if you want to share it on purpose (people ask for money a lot and it feels nicer to say yes than no); don’t if you don’t. it’s not as bad as some places though, because the warmth means less desperation from homeless folks; homeless folks are usually pretty chill, if rather upset at the system. there is a specific ongoing aggressive presence: there are organized car-breakin and bike theft rings. but I don’t think it’s like gangs you may have heard about in the past in LA. the theft rings generally want to grab the thing and get the fuck away, not engage. if you hadn’t asked and nobody had told you, you probably wouldn’t even have noticed anything besides harmless homeless people mumbling something they think is interesting under their breath and not expecting to be understood because they get ignored by everyone.
yeah homeless people often have tents. it’s not where a civ would hope to be, but tents are just houses. treat it similarly.
is it a cult: you tell me whether it has the bad patterns that define cults. I’d personally say there have been cults spawned by it, but it’s more of a general community, with reasonably healthy community patterns. Don’t (ever) let your guard down about cults, though, in any context.
you should be worried someone convinces you to move to the bay. it’s not worth it. like, literally entirely for cost of housing reasons, no other reason, everything else is great, there’s a reason people are there anyway. but phew, the niceness comes with a honkin price tag. and no, living in a 10ft by 10ft room to get vaguely normal sounding rent is not a good idea, even though it’s possible.
average bay area people are definitely overachievers, see above about cost of housing. this is not true of america in general.
the most important california warnings are about weed: don’t buy weed. DON’T USE INHALED WEED. edibles can be a bad time if you take more than you think you’re taking, but won’t ruin your whole life as long as you go in with steadfast rules about when you have them, and rules like not buying them yourself. in fact, never use an inhaled or injected recreational drug, period—the fast uptake is extremely dangerous and will likely actually knock your motivation system off balance hard enough to probably ruin your life. you probably won’t be offered weed unless you ask for it, and even then most people won’t have any to share. If they do, it might be because they have a bad habit. It’s a fun drug when contained to a social setting, though. if someone has some I might suggest trying 2mg or less (ie, one fifth chunk of a normal 10mg edible), even if you’re used to weed it’s not the vibe I’d suggest for highly technical conversations.
you should be worried someone convinces you to move to the bay. it’s not worth it. like, literally entirely for cost of housing reasons, no other reason, everything else is great, there’s a reason people are there anyway. but phew, the niceness comes with a honkin price tag. and no, living in a 10ft by 10ft room to get vaguely normal sounding rent is not a good idea, even though it’s possible.
Why’s this not a good idea? 10ft by 10ft is a lot of room. More than I had in some flats when I went to university.
That’s fair, but it sounds like a personal preference. I asked because maybe you knew there was something unusually bad about small flats in the Bay Area that even folks like me would find annoying.
in fact, never use an inhaled or injected recreational drug, period—the fast uptake is extremely dangerous and will likely actually knock your motivation system off balance hard enough to probably ruin your life.
I don’t think this can be remotely justified by the evidence, formal or anecdotal. Inhaling weed isn’t dangerous, let alone extremely so, and will almost certainly not ruin anyone’s life, as the hundreds of millions of happy users can attest (get yours today!) Hell, shisha is an inhaled recreational drug!
I’m not sure it makes sense to generalise about an entire method of delivery, when all sorts of substances with very different effects can be consumed that way.
That first point made me laugh. It’s exactly the type of mistake I expected to make, and I still didn’t see it coming.
I appreciate all this safety advice and will update my decision making based on that.
Geez, the weed thing surprises me. I hadn’t planned to smoke any until after the event. But I think I’ll avoid that now. I’m already struggling with motivation from jet lag. I don’t want to increase that feeling.
The weed thing is not true. It can sap your motivation acutely, and perhaps even have a more sustained (if definitely temporary) effect. But it certainly doesn’t ruin your life by instantly “knocking your motivational system off balance”.
It’s a relatively chemically safe drug, but is easily habit forming and knocks you out of a productive space if used more than once every 3 to 6 months, imo. your reasoning seems reasonable. have fun with the trip!
I think go ahead and ask people to sign things. I’ve done it before and it went great, and the resulting book is a great memento. You’ve got a good conversation starter right there with asking them their favourite sequences post.
Re: safety, it depends on exactly where you are, your skill in assessing strangers’ intentions from a distance, and probably the way you carry yourself.
Speaking of which, I’d be interested in playing some improv games with you at less.online, if you want to do that!
I’m messing around with a post about this. However:
overwhelmingly positive
with a few things I was a little icked about
But I need to think about that more to understand if it’s a problem with me, or the thing that made me icked out
And thanks for checking in. That’s very kind of you
An app that legitimately finds people love is probably a good candidate for warm-fuzzy donations; perhaps the general public would be receptive to this?
Questions
Does anyone know of any dating apps (other than Duolicious, which I’ll write about in a comment) that have experimented with a not-for-profit, or charity model?
What am I missing? This seems like a big ol’ hunk of power, social capital, and utility, just sitting on the ground of central station!
I’m skipping the explanation of why dating apps are bad for their users.
I think you have failed to understand why dating apps are bad for their users.
On my current models, the core problem isn’t the incentives, it’s the medium. A very large chunk of attraction for most women is about social behaviors, which just don’t come across in some pictures and a text blurb; they mostly come across in conversations and other social settings. The net result is that women mostly don’t find the guys on apps attractive, even when they’d find many of those same guys attractive if they socialized with them a bit. That, in turn, results in the notorious statistical problems of the apps, and the frustrating experience of most users.
I know lots of people mourn the loss of 2014-era okcupid, but I would guess the median experience on okcupid was qualitatively similar to today’s apps. I recall the okcupid data blog itself noting that mostly the pictures were the only thing which had significant predictive power for people matching up.
What am I missing? This seems like a big ol’ hunk of power, social capital, and utility, just sitting on the ground of central station!
This sounds like one of the riskiest and most thankless kinds of (legal...?) websites which it is possible to run in exchange for not even a will-o’-the-wisp of profit or payday.
After reading about Craigslist and OnlyFans and Facebook and Instagram and OKCupid and Backpage and Kuro5hin and MeFi and Tea and Mastodon-everything etc, I think I would rather slam a door on my private parts than sign up to try to create or moderate such a site for sub-minimum wage (at a total opportunity cost of likely millions of dollars, exclusive of psychic damage), and have to deal with people like, well, your following comment, and whatever regrettable decisions they inevitably will make.
I know of one experiment like this, Duolicious. But other than the name (which gives me the feeling I imagine the word ‘moist’ gives to others), the app has… too niche a community.
Fig. The landing page for the Duolicious website. They claim to have 19,482 active members.
Fig. An example of the type of community Duolicious has.
Despite its oddities, Duolicious is community-funded and open source. I think with a little work, its interface could be really great! The “Clubs” system, which stands in for what would usually be “Interests” on another dating app, is fun and highly customisable, letting you create your own Clubs and easily tap other profiles’ Clubs to join them. It’s a great customization loop!
Fig. Example of the tags system. Again, this is a pretty niche, heavily internet-cultured community. Not shown here is that all tags you have in common are listed separately.
Same goes for the Bios, which look like they can be as long as you desire, and add character to a person’s profile. I haven’t seen many “Date Me Docs”, but I imagine this is as close as you’ll get to one, while keeping the modern dating app interface.
Fig. A lovely little bio I came across. I thought this was so pretty, I copied the whole thing, and added it to my Obsidian vault for use in future writing.
Based on estimates GPT-5 gave me, I was expecting running a dating app to be hugely cost-intensive. However, Duolicious makes this claim:
I feel surprise. I guess I believed something wrong. Going to the donate page, it’s costing them $360 a month for servers.
That’s an insanely low cost! How are they doing this? I wonder how hard it would be to shift the culture of the app, to something friendlier to a wider audience. Because at this cost, and for what it is, Duolicious is doing an above average job!
I don’t broadly approve of trying to diagnose people over the Internet, nor am I qualified to, but it’s striking how much the “i love mind games” bio suggests borderline personality disorder. It has chronic feelings of emptiness (“i have no passions or goals in life.”), instability in interpersonal relationships (“i love mind games, i love drama, i love fake people.”, “i would not hesitate to betray any of my loved ones at any moment.”), negative self-image (“[...] really no reason for anyone to be around me.”), and so on.
If you are dating and this bio doesn’t make your HUD light up bright red, you are in danger. Read up on personality disorders so you can make more informed decisions about people you are getting involved with.
it mostly communicates that they’re a heavy user of the internet.
That true fact in no way contradicts dbohdan’s point. (“The import of an act lies not in what that act resembles on the surface, Mr. Potter, but in the states of mind which make that act more or less probable.”)
This, um, *dramatically* changes the picture. It could be nothing.
As a heavy user of the Internet, I didn’t recognize this copypasta. My mistake was only googling a large chunk in double quotes.
Edit: “Dramatically” is intended as a pun on “drama”, hence the italics. I think the new information changes the picture significantly, and yet the bio remains a red flag.
I saw that others have commented about how the bio is an edited meme rather than real, but just on the perception of various personality disorders: I feel like the statements you highlighted would show too much self-endorsement of that interpersonally bleak and exploitative outlook to be typical of (just) BPD. If we had to pick something that the dating-profile statements seem typical of, it sounds more like ASPD (maybe together with BPD) to me. If someone only has BPD, it would probably be more typical for them to feel super attached and positive towards their loved ones for at least large parts of the time. And while they might split and end up betraying their loved ones, a person with BPD doesn’t typically have the insight to understand that this is a likely thing they might do, so liking drama and being ready to betray others wouldn’t be part of how they see themselves.
Disliking/unendorsing the negative features of one’s personality instead of endorsing them is an important ingredient for success chances with therapy, which is why BPD by itself is easier to treat than NPD or ASPD, or combinations where either of those come comorbid with BPD.
Two weeks after launching the website, someone will get date-raped by someone they met through your app, and they will sue you. The story will be in all clickbait media, which will report it as totally your fault. On your Wikipedia page, 90% will be the “Controversies” section, listing everything bad that has ever happened to any of your users.
Unless you spend tons of money on PR and lawyers, of course.
OpenAI plans to introduce a ‘TikTok-like’ short form video product, using Sora to generate the platform’s content.
I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption; that is, a highly addictive app that can (presumably) generate content tailored to very niche subsets of people.
My thoughts (read after your Yoda timer):
I think it is likely quite a dangerous thing to try once, and plan to avoid even taking a peek at an app like this. Much the same way I don’t take a little hit of fent, just to see what it’s like.
A thought I have just had is that it would be beneficial for OpenAI to steer user interests into the same area, to minimize the number of videos they must generate to keep users engaged.
For example: Alice starts out liking Dog Videos, and Bob starts out liking Cat Videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute for other tasks. So they would have an incentive to shift the interests of Alice and Bob to the same place. For our example, perhaps Bird Videos would work. But given the state of short-form video feeds atm, I expect what the Algorithm finds in ‘things that keep lots of users very engaged’ space is actually more harmful than Bird Videos.
Why are you so certain it’s dangerous to try once even at the beginning? My guess is that it won’t immediately be particularly compelling, but get more so over time as they have time to do RL on views or whatever they are trying to do.
But I also have a large error bar. This might, in the near future, be less compelling than either of us expect. It’s genuinely difficult to make compelling products, and maybe Sora 2 isn’t good enough for this.
I’m more concerned about Youtube Shorts to be honest, in the long term.
Nobody is actually going to use it. The general public has already started treating AI-generated content as pollution instead of something to seek out. Plus, unlike human-created shortform videos, a video generated by a model with a several-months-ago (at best) cutoff date can’t tell you what the latest fashion trends are. The release of Sora-2 has led me to update in favor of the “AI is a bubble” hypothesis because of how obviously disconnected it is from consumer demand.
EDIT: Apparently, this degrading business has taken over not just Meta with its AI companions and xAI, whose owner was dumb enough to avoid caring about safety in the slightest, but also one of the three companies which was supposed to create the ASI and align it to human values. What’s next, the loss of Google DeepMind or Anthropic? Or outright AI takeover in the name of preserving human values?
I think that suddenly starting to use written media (even journals), in an environment without much or any guidance, is like pressing too hard on the gas; you’re gaining incredible power and going from zero to one on things faster than you ever have before.
Depending on their environment and what they’re interested in starting out, some people might learn (or be shown) how to steer quickly, whereas others might accumulate/scaffold really lopsided optimization power and crash and burn (e.g. getting involved in tons of stuff at once that upon reflection was way too much for someone just starting out).
This seems incredibly interesting to me. Googling “White-boarding techniques” only gives me results about digitally shared idea spaces. Is this what you’re referring to?
I’d love to hear more on this topic.
I was recently eavesdropping on a loud phone conversation. A man, maybe late 40s, had been cheated on. He was angry. He was arguing with his girlfriend.
I noticed that he argued the way people argue in movies. “Put yourself in my shoes”, “No, you listen to me!”, “I thought you would be the one!”. It hurt to listen to someone in pain.
A few weeks later, I was in a heated discussion with some non-rationalist friends. Long story short, I failed to output correct emotional signals, which was making them upset. I noticed the same thing in them: copying the way people argue on social media. It felt ‘Therapy Culture’, for lack of a better descriptor.
But then I noticed myself.
Do you remember when Sam Altman went on world tour, doing tens of interviews? I watched every one of those. And that is now how I speak in arguments. I think this sounds like a bad thing, because of what we now know about Sam. But I personally trusted him when he spoke back then.
The moral is, you are what you consume. Be careful.
Which was not terribly secret. The details of the Project were indeed super-secret, to the point where most of the politicians hadn’t known anything, but despite the massive global-scale censorship & secrecy, many had observed the signs of a major project of some sort and some got as far as a nuclear bomb specifically. Also, commercial satellites with meter resolution did not exist which could quantify major facilities or new cities like Los Alamos or Hanford (but overflights, and then satellites, now exist and have helped reveal later top-secret nuclear bomb programs). An AI Manhattan Project, as we currently think of it, would be amusingly similar in footprint (eg. energy consumption) to the original and often observable from space: all those gigawatts have to go somewhere, after all.* I’m sure you can find plenty more about secrecy breaches in Rhodes.
This was not necessarily all that useful in the context of WWII—of course America had some big secret projects going, everyone did. It was a total world war. Everyone was aware there was a war on. The devil was in the details of what the program was—a failure like the V2-s, or a success like Enigma decrypts and Manhattan? But a binary exists/does-not-exist is useful in a peacetime context and the current discussion.
(If nothing else, the fact that DeepSeek keeps publishing is a signal. I would note here BTW that you cannot argue, without tying yourself into some pretzel knots explaining 4-D chess logic, that Chinese AI is about to catch up to and surpass the West because the best Chinese AI group, DeepSeek, just released a model or published this-or-that revealing the secrets of OA, and argue that there is already a secret all-out Chinese Manhattan Project going on which will potentially reach AGI first—because the first thing the latter would have done is stop the former from publishing anything which might help Western AI and then devour it for researchers.)
* A wag on Twitter has pointed out that the total energy/heat output of something like a GPT-4 or GPT-5 training run is the same as or larger than the output of a Hiroshima/Nagasaki-scale nuclear bomb explosion. Which is helpful intuition for why your datacenters need so much cooling, at least.
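The footnote's comparison is easy to sanity-check with back-of-envelope arithmetic. Both figures below are rough assumptions for illustration: the ~15 kt yield is the commonly cited Hiroshima number, and the ~50 GWh training-run energy is a hypothetical estimate, not a sourced figure.

```python
# Back-of-envelope check of "training run ~ nuclear bomb" energy comparison.
# All inputs are rough assumptions, not sourced figures.
TNT_J_PER_KT = 4.184e12      # joules per kiloton of TNT (standard convention)
J_PER_GWH = 3.6e12           # joules per gigawatt-hour

hiroshima_kt = 15            # commonly cited Hiroshima yield
bomb_gwh = hiroshima_kt * TNT_J_PER_KT / J_PER_GWH

train_run_gwh = 50           # hypothetical estimate for a frontier training run

print(f"Bomb: ~{bomb_gwh:.1f} GWh; training run: ~{train_run_gwh} GWh")
# -> Bomb: ~17.4 GWh; training run: ~50 GWh
```

So a Hiroshima-scale explosion is on the order of 17 GWh, which makes "same as or larger than" plausible for a multi-gigawatt-hour training run, and all of that heat does indeed have to leave the datacenter somehow.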
I think the idea is that there would be clues, particularly talented engineers not doing public work. Of course, I don’t know who’s carefully tracking that for Chinese scientists.
Scientists did guess the Manhattan project existed because several top physicists had ceased publishing. I don’t know if that made it to the relevant governments. That would be vastly easier to do with the internet—if anyone is bothering.
A potentially good way to avoid low level criminals scamming your family and friends with a clone of your voice is to set a password that you each must exchange.
An extra layer of security might be to make the password offensive, an info hazard, or politically sensitive. Doing this, criminals with little technical expertise will have a harder time bypassing corporate language filters.
Good luck getting the voice model to parrot a basic meth recipe!
Good luck getting the voice model to parrot a basic meth recipe!
This is not particularly useful, plenty of voice models will happily parrot absolutely anything. The important part is not letting your phrase get out; there’s work out there on designs for protocols for how to exchange sentences in a way that guarantees no leakage even if someone overhears.
Hmm. I don’t doubt that targeted voice-mimicking scams exist (or will soon). I don’t think memorable, reused passwords are likely to work well enough to foil them. Between forgetting (on the sender or receiver end), claimed ignorance (“Mom, I’m in jail and really need money, and I’m freaking out! No, I don’t remember what we said the password would be”), and general social hurdles (“that’s a weird thing to want”), I don’t think it’ll catch on.
Instead, I’d look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).
But really, I recommend security mindset and understanding of authorization levels, even if authentication isn’t the main worry. Most friends, even close ones, shouldn’t be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.
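On the "no leakage even if someone overhears" point raised above: one standard shape for this is challenge-response, where the shared secret is never spoken aloud and each call uses a fresh challenge, so overhearing one exchange gives a scammer nothing reusable. A minimal sketch, assuming a secret agreed in person (the 6-character answer length and the secret itself are arbitrary illustrative choices; a human would realistically need a small app to compute the response):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, agreed in person and never spoken on a call.
SHARED_SECRET = b"agreed in person, never said aloud"

def make_challenge() -> str:
    # Caller side: a fresh random nonce, safe to say out loud.
    return secrets.token_hex(4)

def respond(challenge: str) -> str:
    # Callee side: answer derived from the secret. Each challenge yields a
    # different answer, so an eavesdropper can't replay it on the next call.
    mac = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return mac.hexdigest()[:6]

def verify(challenge: str, answer: str) -> bool:
    return hmac.compare_digest(respond(challenge), answer)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))  # True
```

This sidesteps the "Mom, I don't remember the password" problem only partially (you still need the secret set up in advance), but unlike a fixed password it survives being overheard.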
I now realize that my thinking may have been particularly brutal, and I may have skipped inferential steps.
To clarify: if someone didn’t know, or was reluctant to repeat, a password, I would end contact or request an in-person meeting.
But to further clarify, that does not make your points invalid. I think it makes them stronger. If something is weird and risky, good luck convincing people to do it.
An LW LLM I would like is a “What post was the one where Eliezer talks about logic or maybe algebra or something? I think it might have been in Planecrash but idk?”
Sometimes I hit a roadblock in my thinking because I’m trying to remember a specific LW post that talks about a skill I’m currently trying to practice. It’s quite flow-breaking to try and search the normal way. Current LLMs aren’t up to the task and hallucinate LW posts every time I try.
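The hallucination problem described above is exactly what retrieval-style search addresses: match the fuzzy query against the real post corpus first, so the tool can only ever return posts that actually exist. A toy sketch with a made-up three-post corpus and bag-of-words cosine similarity (a real tool would use proper embeddings over the actual LW archive; the titles and summaries here are invented placeholders):

```python
import math
from collections import Counter

# Hypothetical mini-corpus of post titles -> keyword summaries. In a real
# tool these would be actual LessWrong posts, so results can't be hallucinated.
POSTS = {
    "Local Validity as a Key to Sanity": "logic proofs valid steps arguments",
    "Planecrash glowfic": "dath ilan fiction logic decision theory lessons",
    "Babble and Prune": "creativity generating ideas filtering",
}

def bag(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str) -> str:
    # Return the real post whose title+summary best matches the fuzzy query.
    q = bag(query)
    return max(POSTS, key=lambda t: cosine(q, bag(t + " " + POSTS[t])))

print(search("that post about logic or maybe algebra in planecrash"))
# -> Planecrash glowfic
```

The LLM's job then shrinks to rephrasing the query and summarizing the retrieved post, which is much harder to get wrong than recalling a post from weights.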
I’m not entirely sure why, but I find it trivial to get GPT-4o to output harmful content in advanced voice mode (AVM), given that it doesn’t have a direct content filter on it e.g. NSFW.
In the span of 30 minutes it gave me 1) instructions to make a pipe bomb to attach to a car, and 2) instructions on how to leak a virus into the public without detection (avoiding saying more on this).
I have a theory as to why it might be easy for me specifically. But I would like to know if this is the experience that others have with AVM?
Omg. Oops! I completely forgot about custom instructions and memory! I’ll run some more trials with those off.
Thank you very much for pointing this out.
Thank You to People of the Lightcone: an ambient song
From the window of our room, Isabella and I could see the Lightcone staff office. While I won’t name anyone directly, it became a joke between Isabella and me that no matter the time of day or night, if we looked over to those windows, there would always be a certain member of the Lightcone staff, sitting, working at that computer. Clearly visible, like a busy fish in a lit tank. Outside of that tank, a half party, half conference chugged on.
Today is Isabella’s and my last day at Lighthaven. I’d like to pay my respects to people like the aforementioned member of the Lightcone staff, and all the other staff at Lighthaven. To the people who gave talks, and the people who talked to me; to the people who told me about their problems, and those who took their own time to solve my problems. I want to honor those who kept fridges stocked, and those who bought marshmallows to share. And of course thank you to the niche-internet-micro-celebrities, who dealt with unique social pressures. Oh! And the volunteers, they did so much! Thank you, to all who have touched my life. I am coming away from this place with the distinct feeling that yes, the Lightcone really is nothing without its people.
I spent this morning/afternoon generating a song. The song is based on the feeling that being at Lighthaven gives me. It is a melancholic song, because in some way, we are here to stop horrible things from happening. It is a hopeful song, because we are here with friends.
And there is a more personal, home movie music video version, which I call “Something to Protect”.
The song is an ambient track; you can listen to it while you work. I recommend listening to it while at Lighthaven (if you are here). In this way, I hope it will always remind you of this place, and this time.
My timelines have now updated to something closer to fast takeoff. In a world like this, how valuable is educating the general public?
Claude claims scientists started worrying about the climate in the ’50s/’60s. It wasn’t until the 2010s that we saw meaningful action beginning to take place.
Do we have the time to educate?
To be clear, this is more of a question than an opinion that I hold. I am working to form an opinion.
In my experience, most of the general public will verbally agree that AI X-risk is a big deal, but then go about their day (because, reasonably, they have no power). There’s no obvious social role or action to take in response to that.
For climate, people understand that they should recycle, not keep the water running, and if there’s a way to donate to clean the ocean on a Mr. Beast video, then some will even donate (sadly, none of these are very effective for solving the climate problem though! Gotta avoid that for our case).
Having a clear call-to-action seems relevant. For example, educating the public about AI taking jobs for the purpose of building support for UBI. It’s then clear what to communicate, and what the call-to-action is.
I’d be curious to hear what you think the ask should be.
Alternatively, you could argue that generally informing folks on a wide scale about the risks involved will then allow the general public to do what they believe is locally best. This could involve a documentary or realistic movie.
What do you mean with “meaningful action” regarding climate in the 2010s?
If you look at solar energy, panel prices have gone down in a largely straight line (on a logarithmic scale) since 1975, with a short pause between 2005 and 2010. German pro-solar policy started in the 1990s, and the biggest change in 2011 was the new five-year plan of the CCP, which was probably driven more by economic justifications.
There are few people who currently think we should do whatever is possible to reduce temperature rise. Doing so would mean having a plan for geoengineering. Climate activists often use climate as a justification to push for anticapitalist policies that they independently believe in.
The fight for human flourishing doesn’t end at the initiation of takeoff [echo many points from Seth Herd here]. More generally, it’s very possible to win the fight and lose the war, and a broader base of people who are invested in AI issues will improve the situation.
(I also don’t think this is an accurate simplification of the climate movement or its successes/failures. But that’s tangential to the point I’d like to make.)
If I had sufficient funds, I would consider whether it would be beneficial to invade a few subreddits and offer $1000 to whoever can make the most viral meme that subtly teaches basic concepts of AI Doom.
This thought stems from a comment on “The Best Lay Argument is not a Simple English Yud Essay”.
I have more thoughts, but not much time. If my reasoning is unclear I apologise.
Idk if people around the world have those cardboard “Recycle your phone” boxes that take phones and recycle their components. But I’ve rummaged through a certain one ~5 different times and found 2 completely functional and reset iPhones.
Surely there’s something better we can do with these than disassembling them?
I am currently completing psychological studies for credit in my university psych course. The entire time, all I can think is “I wonder if that detail is the one they’re using to trick me with?”
I wonder how this impacts results. I can’t imagine being in a heightened state of looking out for deception has no impact.
I have been working on a post in which I propose that the window for establishing healthy media consumption habits may be closing as content generation technologies advance, and make the suggestion that you GTFO before it’s too late.
I’m looking for push back on this thesis statement to try and strengthen my argument. All comments welcome.
But there’s also a chance future social networks are about to be much healthier and fulfilling, but simply weren’t possible with past technology. An upward trajectory.
The intuition there is that current ads are relatively inefficient at capturing value, and that current content algorithms optimize for short-term value creation/addiction rather than offering long-term value. That’s the status quo, which, relative to what may be coming—i.e. relative to AI-powered semantic routing which could connect you to the content and products which would benefit you most long term—is a way smaller economy.
TLDR: more fulfilling social network economics would generate more money, and therefore become selected for once technically feasible.
I agree this stuff is addictive. AI makes things more interactive. Some people who never considered themselves vulnerable got sucked into AI relationships.
Possible push back:
What if short bits of addictive content generated by humans (but selected by algorithms) are already near max addictiveness? And by the time AI can design/write a video game etc. twice as addictive than humans can design, we already have a superintelligence explosion, and either addiction is solved or we are dead?
In my Obsidian.md vault I use the note aliases to point out to myself when I’ve said something that indicates that I Am Confused. I’ve progressively added and pruned this over a year. Here are the “Confusion Phrases” that I currently use.
aliases: I am confused, I don’t understand, This is confusing, This is very confusing, I am quite confused, Wait what, I don’t know where I fucked up, I Feel Confused, I notice I am Confused, I’m confused, I don’t get it, I just don’t get it, How am I meant to, How do I understand this, That’s surprising, I’m not sure I understand, I’m completely lost, I’m lost, you lost me, I have no idea what’s going on, what are they even trying to say, what are they trying to say, what is she trying to say, what is he trying to say, I’m lost for words, I can’t wrap my head around this, I can’t get my head around, I’m struggling to make sense of, I guess I just forgot about, IDK, I don’t get,
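The same idea can be approximated outside Obsidian. Here is a minimal sketch (the function name and the trimmed phrase list are my own, not part of the actual setup) of scanning a vault’s markdown notes for these confusion phrases:

```python
# Sketch: scan a folder of markdown notes for "confusion phrases"
# and report which notes contain which phrases.
from pathlib import Path

# A small illustrative subset of the full alias list above.
CONFUSION_PHRASES = [
    "i am confused",
    "i don't understand",
    "wait what",
    "i notice i am confused",
    "i can't wrap my head around",
]

def find_confusions(vault_dir: str) -> dict[str, list[str]]:
    """Map each markdown note to the confusion phrases it contains."""
    hits: dict[str, list[str]] = {}
    for note in Path(vault_dir).rglob("*.md"):
        text = note.read_text(encoding="utf-8").lower()
        found = [p for p in CONFUSION_PHRASES if p in text]
        if found:
            hits[note.name] = found
    return hits
```

Running this periodically over a vault would surface the notes where you flagged yourself as confused, without relying on Obsidian’s alias-link mechanism.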
Kinda related: I was inspired by the Fooming Shoggoths to make a song based on the original Your Strength as a Rationalist post a while ago. Here’s that song.
How often do these things become “un-confused”—like for every 20 of these, how many do you have an “ah-ha” or a “now I see” moment of clear resolution? Following on, do you find that you’re able to find a way to think of that faster—i.e. that you can see what cognitive processes cause you to be confused and how you could have resolved that quicker?
This isn’t an extremely useful technique. What it really does for me is break me out of undirected thinking with my writing and get me to actively start thinking things like “ok, but why would this be happening?”
I think 75% of the time it’s not helpful. Sometimes unhelpful when breaking a flow.
I’m working on thinking things faster. Though, it’s not a skill I’d say I have yet.
However, it’s pretty low cost for any payoff at all.
25% of the time it being helpful sounds pretty good to me.
Just to be clear, when you say “undirected thinking” do you mean thinking that is not pertinent to your intention or goal with a writing session or a piece of writing; or is it knowing that you want to write something but wandering aimlessly because you’re not sure what that thing is? Or am I well off the mark on both?
Closer to the first one.
I find when writing to think my mind has two modes. Very system 1 and system 2. If I’ve been going for a while on a side branch system 1 takes over. The writing becomes less about thinking and more about the act of writing. This leads to me making a hypothesis and saying “idk why that is”. That triggers the alias, which points out to me that I’m not really ‘thinking’. I then switch to “How could I test if this is true?”
I appreciate the 25% reframing. That’s something I wish I’d thought faster.
I recently saw a tweet that asked GPT what micro-skills a person should master.
I asked Claude a similar question, though I had preloaded the conversation with “things a modern-day Aristotle would teach a 24yo student”.
I think this is probably a valuable exercise to undertake yourself. I’m providing my prompt, and Claude’s response as an ‘appetiser’ to get you interested in this task.
—
Prompt
Wonderful. Now Let’s think less utilized by society. What are some more niche micro skills that you would want a person who you are tutoring to have. And this is given that you want them to be as successful as possible.
—
Claude
Let me think deeply about valuable micro skills that are often overlooked but can create significant advantages in life...
Cold Reading & Behavioral Analysis
Understanding micro-expressions
Picking up on subtle vocal tone changes
Recognizing behavioral patterns in groups
Reading body language clusters rather than isolated gestures
This isn’t about manipulation, but rather developing deeper empathy and understanding
Would you like me to elaborate on any of these? I find them fascinating because they’re rarely taught explicitly yet can be transformative when mastered. I’m curious which ones resonate most with you?
These skills compound over time and create second-order advantages that aren’t immediately obvious to others. They’re also relatively timeless—they worked in ancient Athens and still work today.
I’m curious why you opted for Aristotle (albeit “modern”) as the prompt pre-load? Most of those responses seem not directly tethered to Aristotelian concepts/books or even what he directly posits as being the most important skills and faculties of human cognition. For example, cold reading, I don’t recall anything of the sort anywhere in any Aristotle I’ve read.
While we’re not sure Aristotle himself designed the layout of the corpus, we do know that the Nicomachean Ethics lists the faculties “whereby the soul attains Truth”:
Techne (Τεχνε), which refers to conventional ways of achieving goals, i.e. without deliberation.
Episteme (Επιστήμε), which is apodeiktike, or the faculty of arguing from proofs.
Phronesis (Φρονέσις), confusingly translated as “practical wisdom”; this refers to the ability to attain goals by means of deliberation. Excellence in phronesis is translated by the latinate word ‘Prudence’.
Sofia (Σοφια), often translated as ‘wisdom’; Aristotle calls this the investigation of causes.
Nous (Νους), which refers to the archai, or the ‘first principles’.
According to Diogenes Laertius, the corpus (at least as it has come to us) divides into the practical books and the theoretical—the practical itself would be subdivided between the books on Techne (say Rhetoric and Poetics), and Phronesis (Ethics and Politics), the theoretical is then covered in works like the Metaphysics (which is probably not even a cohesive book, but a hodge-podge), Categories etc. etc.
This would appear to me to be a better guide for the timeless education in Aristotelian tradition and how we should guide a modern adaptation.
I asked for further details on the 10th point and Claude listed a bunch of stuff I’ve absolutely never heard of. I’d say it’s probably related to meditation if I had to guess. Here’s that.
—
Claude
Let me break down Time Perception Management into its deeper components, as this is one of the most subtle yet powerful micro-skills...
Understanding Psychological Time vs Clock Time
How different mental states alter time perception
Flow states can make hours feel like minutes
Anxiety can make minutes feel like hours
Learning to intentionally shift between time perspectives
When to use event-based time vs clock-based time
How to enter “timeless” states for creative work
Reading your internal clock
Understanding your natural rhythms
Recognizing when time perception is distorted by emotions
Managing Time Horizons
Developing multiple concurrent time frames
Thinking in day/month/year/decade simultaneously
Balancing immediate tasks with long-term vision
Time horizon switching
When to zoom out to years
When to zoom in to minutes
How to maintain strategic clarity while handling tactics
Future-self continuity
Techniques for making future consequences feel real
Building strong mental connections to your future self
Chronotype Optimization
Understanding your natural energy patterns
Peak creative hours
Peak analytical hours
Recovery periods
Designing your life around your chronotype
When to schedule different types of work
How to handle obligations that fight your natural rhythm
Seasonal adjustments
Adapting to changing daylight
Modifying routines for different seasons
Timeline Compression and Expansion Techniques
How to make time feel longer when needed
Breaking experiences into smaller chunks
Increasing novelty to slow perceived time
Using attention to expand moments
How to make time feel shorter when beneficial
Chunking similar tasks
Creating flow states
Using routine to speed through necessary tasks
Personal Tempo Awareness
Understanding your natural pace
Speaking tempo
Walking pace
Decision-making speed
Tempo matching and mismatching
When to match others’ tempo
When to intentionally mismatch
How to set the tempo in group situations
Rhythm management
Creating sustainable work rhythms
Building in recovery cycles
Recognizing when to speed up or slow down
Advanced Applications:
Using time perception in negotiations
Creating or relieving time pressure
Understanding how others perceive time
Memory engineering
Making good times feel longer in memory
Making difficult times feel shorter in memory
Social time management
Managing group time perception
Creating shared time experiences
Decision timing
Understanding when to decide quickly vs slowly
Recognizing optimal timing windows
This skill set is particularly valuable because most people are unconscious of how their time perception affects their decisions and experiences. Mastering it gives you significant advantages in:
Personal productivity
Emotional regulation
Strategic planning
Social influence
Life satisfaction
Would you like me to elaborate on any particular aspect? Or shall we explore how this connects with some of the other micro-skills we discussed?
Off the cuff thought: I think IABI being kinda simplistic is both fine, and probably beneficial. I have not yet received my copy, but from reviews I’ve read, the book consists of the same arguments that originally convinced me to join ‘the fight’ all those years ago. Nuance in my understanding came later. I’d guess that’s generally the same path we all took?
If you’d like an explanation from Claude that starts quite basic and builds up, I have had great success with the following phrase as a “Style” in the app, no changes necessary.
“Wait, start from the baseline and work your way up to the explanation”
Perhaps a silly question, but does the recent “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs” paper imply that people calling a model good or bad online results in a self-fulfilling prophecy?
I’ve seen a lot about GPT4o being kinda bad, and I’ve experienced that myself. This surprises me.
Now I will say something that feels like a silly idea. Is it possible that having the audio/visual part of the network cut off results in 4o’s poor reasoning?
As in, the whole model is doing some sort of audio/visual reasoning. But we don’t have the whole model, so it can’t reason in the way it was trained to.
If that is the case, I’d expect that when those parts are publicly released, scores on benchmarks shoot up?
Do people smarter and more informed than me have predictions about this?
Without a detailed Model Card for 4o it is impossible to know “for sure” why models drift in performance over time, but drift they do.
It is entirely possible that OpenAI started with a version of GPT-4 Turbo, parallelized processing, and performed an extensive “fine-tune” to improve the multi-modal capabilities.
Essentially, the model could “forget” how to complete prompts from just a week ago, because some of its “memory” was over-written with instructions to complete requests for multi-modal replies.
I’m confused by what you mean that GPT-4o is bad? In my experience it has been stronger than plain GPT-4, especially at more complex stuff. I do physics research and it’s the first model that can actually improve the computational efficiency of parts of my code that implement physical models. It has also become more useful for discussing my research, in the sense that it dives deeper into specialized topics, while the previous GPT-4 would just respond in a very handwavy way.
Man, I wish that was my experience. I feel like I’m constantly asking GPT-4o a question and getting a weird or bad response, then switching to 4 to finish the job.
Benchmarks are consistent with GPT-4o having different strengths than GPT4-Turbo, though at a similar overall level—EQ-Bench is lower, MAGI-Hard is higher, best tested model for Creative Writing according to Claude Opus, but notably worse at judging writing (though still good for its price point).
In my experience different strengths also mean different prompt strategies are necessary; a small highly instruction-focused model might benefit from few-shot repetition and emphasis that just distract a more powerful OpenAI model for example. Which might make universal custom instructions more annoying.
I feel a deep love and appreciation for this place, and the people who inhabit it.
From Newcastle, Australia to Berkeley, San Francisco. I arrived yesterday for Less.online. I’ve had a bit of culture shock, a big helping of being increasingly scared, and quite a few questions. I’ll start with those. Feel free to skip them.
These questions are based on warnings I’ve gotten from local non-rationalists. Idk if they’re scared because of the media they consume or because of actual stats. I’m asking these because they feel untrue.
Is it ok to be outside after dark?
Will I really get ‘rolled’ mid day in Oakland?
Are there gangs walking around Oakland looking to stab people?
Will all the streets fill up with homeless people at night?
Are they chill? In Aus they’re usually down to talk if you are.
Culture shocks for your enjoyment:
Why is everyone doing yoga?
To my Uber driver: “THAT TRAIN IS ON THE ROAD!?”
“I thought (X) was just in movies!”
Your billboards are about science instead of coal mining!
“Wait, you’re telling me everything is vegan?” Thank Bayes, this is the best. All our vegan restaurants went out of business.
People brag about things? And they do it openly? At least, I think that’s what’s happening?
“Silicon Valley is actually a valley?!” Should have predicted this one. I kinda knew, but I didn’t know like I do now.
“Wow! This shop is openly selling nangs!” (whip its) “And a jungle juice display!”
All your cars are so new and shiny. 60% of ours are second hand
Most people I see in the streets look below 40. It’s like I’m walking around a university!
Wow. It’s really sunny.
American accents irl make me feel like I’m walking through a film.
“HOLY SHIT! A CYBER TRUCK?!”
Ok this is a big one. Apps I’ve had for 8+ years are suddenly different when I arrive here?
This is what Uber is meant to be. I will go back to Australia and cry. Your airport has custom instruction… in app! WHAT!? The car arrives in 2 minutes instead of 30 minutes. Also, the car arrives at all.
The google app has a beaker for tests now?
Snap maps has gifs in it
Apple Maps lets you scan buildings? And has tips about good restaurants and events?
When I bet in the Manifold app, a real paper crane flies from the nearest tree, lands in front of me, and unfolds. Written inside: “Will Eliezer Yudkowsky open a rationalist bakery?” I circle “Yes”. The paper meticulously folds itself back into a crane. It looks at me. Makes a little sound that doesn’t echo in the streets but in my head, and it burns. Every time this happens I save the ashes. Is Manifold creating new matter? How are they doing this?
That one was a lie
Things that won’t kill me but scare me rational/irrational:
What if I’ve been wrong? What if this is all a scam? A cult? What if Mum was right?
What if I show up to the location and there is no building there?
What if I make some terribly awkward cultural blunder for SF and everyone yells at me?
What if no one tells me?
I’m sure I’ll be at least in the bottom 5% for intelligence at Less Online. I won’t be surprised or hurt if I’ve got the least Gs of people there. But what if it all goes over my head? Maybe I can’t even communicate with smart people about the things I care about.
What if I can’t handle people telling me what they think of my arguments without kid gloves? What if I get angry and haven’t learnt to handle that?
I’m just a Drama teacher and Psych student. My head is filled with improv games and fun facts about Clever Hans! ‘Average’ Americans seem to achieve much higher than ‘average’ Australians. I’m scared of feeling under qualified.
Other things:
Can you think of something I should be worried about, that I’ve not written here?
I’ve brought my copies of the Rationality A-Z books. I want to ask people I meet to sign their favourite post in the two books. Is that culturally acceptable? Feels kinda weird bc Yud is going to be there. But it would be a really warm/fuzzy item to me in the future.
I don’t actually know what a lot of the writers going look like. I hope this doesn’t result in a blunder. But might be funny, given that I expect rationalists to be pretty chill.
Are other people as excited about the Fooming Shoggoths as I am?
I’m 23, I have no idea if that is very old, very young, or about normal for a rationalist. I’d guess about normal, with big spread across the right of a graph.
It feels super weird to be in the same town as a bunch of you guys now. I’ve never met a rationalist irl. I talked to Ruby over zoom once, who said to me “You know you don’t have to stay in Australia right?” I hope Ruby is a good baseline for niceness levels of you all.
If you’re going, I’ll see you at Less.Online. If you’re not, I’d still love to meet you. Feel free to DM me!
I would bet hard cash that “I’m sure I’ll be at least in the bottom 5% for intelligence at Less Online. I won’t be surprised or hurt if I’ve got the least Gs of people there” is wrong.
But okay, let’s just go with your fears and assume for sake of argument that you’re right… Last year, Aella did a live polling event at Manifest where she asked us to, in general, first predict how we relatively compared to the crowd on any N, then define actual buckets of values and arrange ourselves by those buckets.
(So, e.g., “predict how recently have you exercised compared to others; the most recent, all the way on the left, the least recent, all the way on the right” and then “okay, let’s define from that fence to that chair as ‘in the last hour’, the chair to the steps as ‘in the last day’, etc.” and have folks line up twice based on first their relative guess, then their actual value.)
I was overall deeply surprised when she had folks line up under that system by SAT scores and IQ scores. It was, generally, extremely poorly correlated with how awesome it was to get to talk with a given person.
This post already expresses a great deal of the vibes that make me think it will be awesome to meet you, and that you get a lot of the vibes folks are aiming for, as far as I can tell. Looking forward to meeting ya!
Re safety, I don’t know about Oakland but some parts of SF are genuinely the most dangerous feeling places I’ve ever been to after dark (because normally I wouldn’t go somewhere, but SF feels very fine until it isn’t). If I am travelling to places in SF after dark I’ll check how dodgy the street entrances are.
Have fun! I won’t be going. Some random notes:
berkeley, san francisco is like saying maitland, newcastle
re: #1: yeah I don’t feel unsafe outside after dark in the bay. If a homeless person walks by, I’ll just say hi and ask if they have any urgent unmet needs. even just acknowledging them as a person is a nice gesture, though. many will try to engage much more than you have time or interest for; it’s okay to just walk away from the convo.
had to look up what “getting rolled” is. yeah, it’s possible, but not that hard to avoid. if an area seems very poor, there will be more desperate people. but the highest risk of being robbed is probably opportunistically on the train. keep your eyes mobile; it’s probably a 1 in 300 to 1 in 3,000 train trips event, but it’s pretty annoying when it happens, to put it mildly.
I’m not aware of there being an intense presence of organized aggressive groups in oakland, but there’s certainly plenty of disorganized aggression, again mostly from desperate people. I got out of what was going to be a mugging once by offering to send them internet money (venmo) before they asked for anything, and they were so knocked off balance by this (I was saying “I don’t have cash but I can send it on an app”) that they almost bolted instead of accepting it. carry cash if you want to share it on purpose (people ask for money a lot and it feels nicer to say yes than no); don’t if you don’t. it’s not as bad as some places though, because the warmth means less desperation from homeless folks; homeless folks are usually pretty chill, if rather upset at the system. there is a specific ongoing aggressive presence: there are organized car-breakin and bike theft rings. but I don’t think it’s like gangs you may have heard about in the past in LA. the theft rings generally want to grab the thing and get the fuck away, not engage. if you hadn’t asked and nobody had told you, you probably wouldn’t even have noticed anything besides harmless homeless people mumbling something they think is interesting under their breath and not expecting to be understood because they get ignored by everyone.
yeah homeless people often have tents. it’s not where a civ would hope to be, but tents are just houses. treat it similarly.
is it a cult: you tell me whether it has the bad patterns that define cults. I’d personally say there have been cults spawned by it, but it’s more of a general community, with reasonably healthy community patterns. Don’t (ever) let your guard down about cults, though, in any context.
you should be worried someone convinces you to move to the bay. it’s not worth it. like, literally entirely for cost of housing reasons, no other reason, everything else is great, there’s a reason people are there anyway. but phew, the niceness comes with a honkin price tag. and no, living in a 10ft by 10ft room to get vaguely normal sounding rent is not a good idea, even though it’s possible.
average bay area people are definitely overachievers, see above about cost of housing. this is not true of america in general.
the most important california warnings are about weed: don’t buy weed. DON’T USE INHALED WEED. edibles can be a bad time if you take more than you think you’re taking, but won’t ruin your whole life as long as you go in with steadfast rules about when you have them, and rules like not buying them yourself. in fact, never use an inhaled or injected recreational drug, period—the fast uptake is extremely dangerous and will likely actually knock your motivation system off balance hard enough to probably ruin your life. you probably won’t be offered weed unless you ask for it, and even then most people won’t have any to share. If they do, it might be because they have a bad habit. It’s a fun drug when contained to a social setting, though. if someone has some I might suggest trying 2mg or less (ie, one fifth chunk of a normal 10mg edible), even if you’re used to weed it’s not the vibe I’d suggest for highly technical conversations.
Why’s this not a good idea? 10ft by 10ft is a lot of room. More than I had in some flats when I went to university.
I went pretty stir crazy without enough room to move around.
That’s fair, but it sounds like a personal preference. I asked because maybe you knew there was something unusually bad about small flats in the Bay Area that even folks like me would find annoying.
I don’t think this can be remotely justified by the evidence, formal or anecdotal. Inhaling weed isn’t dangerous, let alone extremely so, and will almost certainly not ruin anyone’s life, as the hundreds of millions of happy users can attest (get yours today!) Hell, shisha is an inhaled recreational drug!
I’m not sure it makes sense to generalise about an entire method of delivery, when all sorts of substances with very different effects can be consumed that way.
That first point made me laugh. It’s exactly the type of mistake I expected to make, and I still didn’t see it coming.
I appreciate all this safety advice and will update my decision making based on that.
Geez, the weed thing surprises me. I hadn’t planned to smoke any until after the event. But I think I’ll avoid that now. I’m already struggling with motivation from jet lag. I don’t want to increase that feeling.
The weed thing is not true. It can sap your motivation acutely, and perhaps even have a more sustained (if definitely temporary) effect. But it certainly doesn’t ruin your life by instantly “knocking your motivational system off balance”.
It’s a relatively chemically safe drug, but is easily habit forming and knocks you out of a productive space if used more than once every 3 to 6 months, imo. your reasoning seems reasonable. have fun with the trip!
I think go ahead and ask people to sign things. I’ve done it before and it went great, and the resulting book is a great memento. You’ve got a good conversation starter right there with asking them their favourite sequences post.
Welcome to the US!
Re: safety, it depends on exactly where you are, your skill in assessing strangers’ intentions from a distance, and probably the way you carry yourself.
Speaking of which, I’d be interested in playing some improv games with you at less.online, if you want to do that!
Hmmm, I think I’m mostly bad at those things. I’ll play it safe.
And thanks for the good idea! I’ve added a session at 3pm on the Sunday.
How did you like your trip in the end?
I’m messing around with a post about this. However:
overwhelmingly positive
with a few things I was a little icked about. But I need to think about that more to understand if it’s a problem with me, or the thing that made me icked out
And thanks for checking in. That’s very kind of you
Welcome to the US; excited for your time at LessOnline (and maybe Manifest too?)
And re: 19., we’re working on it![1]
(Sorry, that was a lie too.)
Please don’t ask on Manifold, you might incentivize creating one.
To Make Dating Apps Not Terrible, Make Them Not-for-Profit?
I’m skipping the explanation of why dating apps are bad for their users.
To fix these problems, why wouldn’t we make a not-for-profit dating app?
It removes incentives to keep users lonely for as long as possible.
There are rich people who are worried about population decline, who might fund it.
I’m assuming some governments are probably getting worried too.
An app that legitimately finds people love is probably a good candidate for warm-fuzzy donations; perhaps the general public would be receptive to this?
Questions
Does anyone know of any dating apps (other than Duolicious, which I’ll write about in a comment) that have experimented with a not-for-profit, or charity model?
What am I missing? This seems like a big ol’ hunk of power, social capital, and utility, just sitting on the ground of central station!
I think you have failed to understand why dating apps are bad for their users.
On my current models, the core problem isn’t the incentives, it’s the medium. A very large chunk of attraction for most women is about social behaviors, which just don’t come across in some pictures and a text blurb; they mostly come across in conversations and other social settings. The net result is that women mostly don’t find the guys on apps attractive, even when they’d find many of those same guys attractive if they socialized with them a bit. That, in turn, results in the notorious statistical problems of the apps, and the frustrating experience of most users.
I know lots of people mourn the loss of 2014-era okcupid, but I would guess the median experience on okcupid was qualitatively similar to today’s apps. I recall the okcupid data blog itself noting that mostly the pictures were the only thing which had significant predictive power for people matching up.
This sounds like one of the riskiest and most thankless kind of (legal...?) websites which it is possible to run in exchange for not even a will-o-the-whisp of profit or payday.
After reading about Craigslist and OnlyFans and Facebook and Instagram and OKCupid and Backpage and Kuro5hin and MeFi and Tea and Mastodon-everything etc, I think I would rather slam a door on my private parts than sign up to try to create or moderate such a site for sub-minimum wage (at a total opportunity cost of likely millions of dollars, exclusive of psychic damage), and have to deal with people like, well, your following comment, and whatever regrettable decisions they inevitably will make.
I know of one experiment like this: Duolicious. But beyond the name, which gives me the feeling I imagine the word ‘moist’ gives to others, the app has… too niche a community.
Fig. The landing page for the Duolicious website. They claim to have 19,482 active members.
Fig. An example of the type of community Duolicious has.
Despite its oddities, Duolicious is community funded and open source. I think with a little work, its interface could be really great! The “Clubs” system, which stands in for what would usually be “Interests” on another dating app, is fun and highly customisable, letting you create your own Clubs, and easily tap other profiles’ Clubs to join them. It’s a great customization loop!
Fig. Example of the tags system. Again, this is a pretty niche, heavily internet-cultured community. Not shown here: all tags you have in common are listed separately.
Same goes for the Bios, which look like they can be as long as you desire, and add character to a person’s profile. I haven’t seen many “Date Me Docs”, but I imagine this is as close as you’ll get to one, while keeping the modern dating app interface.
Fig. A lovely little bio I came across. I thought this was so pretty, I copied the whole thing, and added it to my Obsidian vault for use in future writing.
Based on estimates GPT-5 gave me, I was expecting running a dating app to be hugely cost-intensive. However, Duolicious makes this claim:
o:
I feel surprise. I guess I believed something wrong. Going to the donate page, it’s costing them $360 a month for servers.
That’s an insanely low cost! How are they doing this? I wonder how hard it would be to shift the culture of the app, to something friendlier to a wider audience. Because at this cost, and for what it is, Duolicious is doing an above average job!
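To sanity-check how low that is, here’s the arithmetic using the two figures quoted above (the $360/month server cost and the 19,482 active members claimed on the landing page):

```python
# Rough per-member hosting cost, using the figures quoted above.
monthly_cost = 360.0        # USD/month, from the Duolicious donate page
active_members = 19_482     # claimed on the landing page

cost_per_member = monthly_cost / active_members
print(f"${cost_per_member:.4f} per member per month")  # about $0.0185
```

Under two cents per member per month. Even if the active-member count is inflated several-fold, the hosting cost stays trivially small.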
I don’t broadly approve of trying to diagnose people over the Internet, nor am I qualified to, but it’s striking how much the “i love mind games” bio suggests borderline personality disorder. It has chronic feelings of emptiness (“i have no passions or goals in life.”), instability in interpersonal relationships (“i love mind games, i love drama, i love fake people.”, “i would not hesitate to betray any of my loved ones at any moment.”), negative self-image (“[...] really no reason for anyone to be around me.”), and so on.
If you are dating and this bio doesn’t make your HUD light up bright red, you are in danger. Read up on personality disorders so you can make more informed decisions about people you are getting involved with.
The bio is an edited meme, not an original; it mostly communicates that they’re a heavy user of the internet. Example from a year ago
That true fact in no way contradicts dbohdan’s point. (“The import of an act lies not in what that act resembles on the surface, Mr. Potter, but in the states of mind which make that act more or less probable.”)
No, actually; the mindset implied by repeating that text as a meme is quite different than the mindset implied by unironically generating it.
Indeed they are different mindsets. They are, however, both mindsets of concern.
This, um, dramatically changes the picture. It could be nothing.
As a heavy user of the Internet, I didn’t recognize this copypasta. My mistake was only googling a large chunk in double quotes.
Edit: “Dramatically” is intended as a pun on “drama”, hence the italics. I think the new information changes the picture significantly, and yet the bio remains a red flag.
I saw that others have commented about how the bio is an edited meme rather than real, but just on the perception of various personality disorders, I feel like the statements you highlighted would show too much self-endorsement of that interpersonally bleak and exploitative outlook to be typical of (just) BPD. If we had to pick something that the dating profile statements seem typical of, it sounds more like ASPD (maybe together with BPD) to me. If someone only has BPD, it would probably be more typical for them to feel super attached and positive towards their loved ones for at least large parts of the time. And while they might split and end up betraying their loved ones, the person with BPD doesn’t typically have the insight to understand that this is a likely thing that they might do, so liking drama and being ready to betray others wouldn’t be a part of how they see themselves.
Disliking/unendorsing the negative features of one’s personality instead of endorsing them is an important ingredient for success chances with therapy, which is why BPD by itself is easier to treat than NPD or ASPD, or combinations where either of those come comorbid with BPD.
Two weeks after launching the website, someone will get date-raped by someone they met through your app, and they will sue you. The story will be in all clickbait media, which will report it as totally your fault. On your Wikipedia page, 90% will be the “Controversies” section, listing everything bad that has ever happened to any of your users.
Unless you spend tons of money on PR and lawyers, of course.
OpenAI plans to introduce a ‘TikTok-like’ short form video product, using Sora to generate the platform’s content.
I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption; that is, a highly addictive app that can (presumably) generate content tailored to very niche subsets of people.
My thoughts (read after your Yoda timer):
I think it is likely quite a dangerous thing to try once, and plan to avoid even taking a peek at an app like this. Much the same way I don’t take a little hit of fent, just to see what it’s like.
I wrote more about this (in a fiction-exploration type way) when I wrote “GTFO of the Social Internet Before You Can’t”.
A thought I’ve just had: it would be beneficial for OpenAI to steer user interests into the same area, for the purpose of minimizing the amount of videos they must generate to keep users engaged.
For example: Alice starts out liking Dog Videos, and Bob starts out liking Cat Videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute to be used on other tasks. So, they would have an incentive to shift the interests of Alice and Bob to the same place; for our example, perhaps Bird Videos would work. But, given the state of short form video feeds atm, I expect what the Algorithm finds in ‘things that keep lots of users very engaged space’ is actually more harmful than Bird Videos.
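To make the incentive concrete, here’s a toy cost model (the numbers and the model itself are made up by me, not anything OpenAI has published): if each distinct interest cluster needs its own stream of generated videos, and a generated video can be re-served to every user in its cluster, then merging users’ interests directly cuts generation cost.

```python
# Toy model: generation cost scales with the number of distinct interest
# clusters you must keep fed with fresh videos, since a video made for one
# cluster can be re-served to everyone in that cluster.
def daily_generation_cost(clusters: int, videos_per_cluster: int,
                          cost_per_video: float) -> float:
    return clusters * videos_per_cluster * cost_per_video

# Alice and Bob in separate clusters (dogs, cats) vs. merged into one (birds):
separate = daily_generation_cost(clusters=2, videos_per_cluster=100, cost_per_video=0.50)
merged = daily_generation_cost(clusters=1, videos_per_cluster=100, cost_per_video=0.50)
print(separate, merged)  # 100.0 50.0 -- merging interests halves the bill
```

So every pair of users the algorithm nudges into the same taste cluster is pure savings, independent of whether the destination cluster is good for them.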
Response to your thoughts after the yoda timer
Why are you so certain it’s dangerous to try once even at the beginning? My guess is that it won’t immediately be particularly compelling, but get more so over time as they have time to do RL on views or whatever they are trying to do.
But I also have a large error bar. This might, in the near future, be less compelling than either of us expect. It’s genuinely difficult to make compelling products, and maybe Sora 2 isn’t good enough for this.
I’m more concerned about Youtube Shorts to be honest, in the long term.
My prediction:
Nobody is actually going to use it. The general public has already started treating AI-generated content as pollution instead of something to seek out. Plus, unlike human-created shortform videos, a video generated by a model with a several-months-ago (at best) cutoff date can’t tell you what the latest fashion trends are. The release of Sora-2 has led me to update in favor of the “AI is a bubble” hypothesis because of how obviously disconnected it is from consumer demand.
Why, in the name of Chapin Lenthall-Cleary, did they announce the platform?!
EDIT: Apparently, degrading business took over not just Meta with its AI companions and xAI whose owner was dumb enough to avoid caring about safety in the slightest, but one of the three companies which was supposed to create the ASI and align it to human values. What’s next, the loss of Google DeepMind or Anthropic? Or outright AI takeover in the name of preserving human values?
Note to self, write a post about the novel akrasia solutions I thought up before becoming a rationalist.
Figuring out how to want to want to do things
Personalised advertising of Things I Wanted to Want to Do
What I do when all else fails
Have you tried whiteboarding-related techniques?
I think that suddenly starting to use written media (even journals), in an environment without much or any guidance, is like pressing too hard on the gas; you’re gaining incredible power and going from zero to one on things faster than you ever have before.
Depending on their environment and what they’re interested in starting out, some people might learn (or be shown) how to steer quickly, whereas others might accumulate/scaffold really lopsided optimization power and crash and burn (e.g. getting involved in tons of stuff at once that upon reflection was way too much for someone just starting out).
This seems incredibly interesting to me. Googling “White-boarding techniques” only gives me results about digitally shared idea spaces. Is this what you’re referring to? I’d love to hear more on this topic.
Maybe I could even write a sequence on this?
I was recently eavesdropping on a loud phone conversation. A man, maybe late 40s, had been cheated on. He was angry. He was arguing with his girlfriend.
I noticed that he argued the way people argue in movies. “Put yourself in my shoes”, “No, you listen to me!”, “I thought you would be the one!”. It hurt to listen to someone in pain.
A few weeks later, I was in a heated discussion with some Non-Rationalist friends. Long story short, I failed to output correct emotional signals, which was making them upset. I noticed the same thing in them: copying the way people argue on social media. It felt like ‘Therapy Culture’, for lack of a better descriptor.
But then I noticed myself. Do you remember when Sam Altman went on world tour? Doing tens of interviews. I watched every one of those. And that is now how I speak in arguments. I think this sounds like a bad thing. Because of what we now know about Sam. But I personally trusted him when he spoke back then.
The moral is, you are what you consume. Be careful.
Thought: confidently saying “(X) has no Manhattan Project” is forgetting how secret the Manhattan Project was.
Which was not terribly secret. The details of the Project were indeed super-secret, to the point where most of the politicians hadn’t known anything, but despite the massive global-scale censorship & secrecy, many had observed the signs of a major project of some sort and some got as far as a nuclear bomb specifically. Also, commercial satellites with meter resolution did not exist which could quantify major facilities or new cities like Los Alamos or Hanford (but overflights, and then satellites, now exist and have helped reveal later top-secret nuclear bomb programs). An AI Manhattan Project, as we currently think of it, would be amusingly similar in footprint (eg. energy consumption) to the original and often observable from space: all those gigawatts have to go somewhere, after all.* I’m sure you can find plenty more about secrecy breaches in Rhodes.
This was not necessarily all that useful in the context of WWII—of course America had some big secret projects going, everyone did. It was a total world war. Everyone was aware there was a war on. The devil was in the details of what the program was—a failure like the V2-s, or a success like Enigma decrypts and Manhattan? But a binary exists/does-not-exist is useful in a peacetime context and the current discussion.
(If nothing else, the fact that DeepSeek keeps publishing is a signal. I would note here BTW that you cannot argue, without tying yourself into some pretzel knots explaining 4-D chess logic, that Chinese AI is about to catch up to and surpass the West because the best Chinese AI group, DeepSeek, just released a model or published this-or-that revealing the secrets of OA, and argue that there is already a secret all-out Chinese Manhattan Project going on which will potentially reach AGI first—because the first thing the latter would have done is stop the former from publishing anything which might help Western AI and then devour it for researchers.)
* A wag on Twitter has pointed out that the total energy/heat output of something like a GPT-4 or GPT-5 training run is the same as or larger than the output of a Hiroshima/Nagasaki-scale nuclear bomb explosion. Which is helpful intuition for why your datacenters need so much cooling, at least.
You mean Hanford.
What signal do we get from DeepSeek continuing to publish?
Before the internet?
I think the idea is that there would be clues, particularly talented engineers not doing public work. Of course, I don’t know who’s carefully tracking that for Chinese scientists.
Scientists did guess the Manhattan project existed because several top physicists had ceased publishing. I don’t know if that made it to the relevant governments. That would be vastly easier to do with the internet—if anyone is bothering.
A potentially good way to avoid low level criminals scamming your family and friends with a clone of your voice is to set a password that you each must exchange.
An extra layer of security might be to make the password offensive, an info hazard, or politically sensitive. That way, criminals with little technical expertise will have a harder time getting it past corporate language filters.
Good luck getting the voice model to parrot a basic meth recipe!
This is not particularly useful, plenty of voice models will happily parrot absolutely anything. The important part is not letting your phrase get out; there’s work out there on designs for protocols for how to exchange sentences in a way that guarantees no leakage even if someone overhears.
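One standard construction for the “no leakage even if overheard” property (my illustration, not necessarily the protocols the commenter has in mind) is a fresh-challenge HMAC: you and your family pre-share a secret in person, and each call uses a new random challenge, so replaying an overheard response does nothing.

```python
import hashlib
import hmac
import secrets

# Sketch of challenge-response with a pre-shared secret. The verifier sends a
# fresh random challenge; only someone holding the secret can compute the
# matching HMAC. An eavesdropper on one call learns nothing reusable, because
# the next call uses a different challenge.
SHARED_SECRET = b"exchanged in person, never spoken over the phone"

def make_challenge() -> bytes:
    return secrets.token_bytes(16)          # fresh per call

def respond(secret: bytes, challenge: bytes) -> str:
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = make_challenge()
assert verify(SHARED_SECRET, challenge, respond(SHARED_SECRET, challenge))
assert not verify(SHARED_SECRET, challenge, respond(b"a guess", challenge))
```

Of course, nobody is going to compute SHA-256 over the phone to their mom; the point is only that the cryptographic version of the problem is solved, and the hard part is the human-factors layer the comments below discuss.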
Hmm. I don’t doubt that targeted voice-mimicking scams exist (or will soon). I don’t think memorable, reused passwords are likely to work well enough to foil them. Between forgetting (on the sender or receiver end), claimed ignorance (“Mom, I’m in jail and really need money, and I’m freaking out! No, I don’t remember what we said the password would be”), and general social hurdles (“that’s a weird thing to want”), I don’t think it’ll catch on.
Instead, I’d look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).
But really, I recommend security mindset and understanding of authorization levels, even if authentication isn’t the main worry. Most friends, even close ones, shouldn’t be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.
I now realize that my thinking may have been particularly brutal, and I may have skipped inferential steps.
To clarify: if someone didn’t know, or was reluctant to repeat, the password, I would end contact or request an in-person meeting.
But to further clarify, that does not make your points invalid. I think it makes them stronger. If something is weird and risky, good luck convincing people to do it.
A LW LLM I would like is a “What post was the one where Eliezer talks about logic or maybe algebra or something? I think it might have been in Plane Crash but idk?”
Sometimes I hit a roadblock in my thinking because I’m trying to remember a specific LW post that talks about a skill I’m currently trying to practice. It’s quite flow-breaking to try and search the normal way. Current LLMs aren’t up to the task and hallucinate LW posts every time I try.
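The usual fix for the hallucination problem is retrieval: search a real index of posts and only ever return actual titles, using the LLM (if at all) just to interpret the fuzzy query. A minimal toy sketch (the two posts and their keyword summaries here are placeholders I wrote, not a real index):

```python
# Toy retrieval over a post index: fuzzy query -> best-matching real title.
# Because answers come from the index, it can be wrong but never hallucinated.
def score(query: str, doc: str) -> int:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)  # crude word-overlap score

posts = {
    "Your Strength as a Rationalist": "noticing confusion surprise evidence",
    "Planecrash: a glowfic": "dath ilan logic algebra lawful thinking",
}

def search(query: str) -> str:
    return max(posts, key=lambda title: score(query, posts[title] + " " + title))

print(search("the one about logic or maybe algebra in Plane Crash"))
# -> Planecrash: a glowfic
```

A real version would use embeddings over the full post bodies rather than word overlap, but the guarantee is the same: the worst case is a wrong real post, not an invented one.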
Highly Advanced Epistemology 101?
I’m not entirely sure why, but I find it trivial to get GPT-4o to output harmful content in advanced voice mode (AVM), given that it doesn’t have a direct content filter on it e.g. NSFW.
In the span of 30 minutes it gave me 1) instructions for making a pipe bomb to attach to a car, and 2) instructions for releasing a virus into the public without detection (I’ll avoid saying more on this).
I have a theory as to why it might be easy for me specifically. But I would like to know if this is the experience that others have with AVM?
I haven’t tried harmful outputs, but FWIW I’ve tried getting it to sing a few times and found that pretty difficult.
Huh. That is extremely useful. Thank you.
I’ve got a lot of singing out of AVM. While my current method works well for this, I find it more challenging than eliciting harmful outputs.
Did you use any specific prompt in memory or custom instructions?
Omg. Oops! I completely forgot about custom instructions and memory! I’ll run some more trials with those off. Thank you very much for pointing this out.
Thank You to People of the Lightcone: an ambient song
From the window of our room, Isabella and I could see the Lightcone staff office. While I won’t name anyone directly, it became a joke between Isabella and me that no matter the time of day or night, if we looked over to those windows, there would always be a certain member of the Lightcone staff, sitting, working at that computer. Clearly visible, like a busy fish in a lit tank. Outside of that tank, a half-party, half-conference chugged on.
Today is Isabella’s and my last day at Lighthaven. I’d like to pay my respects to people like the aforementioned member of the Lightcone staff, and all other staff at Lighthaven. To the people who gave talks, and the people who talked to me; to the people who told me about their problems, and those who took their own time to solve my problems. I want to honor those who kept fridges stocked, and those who bought marshmallows to share. And of course, thank you to the niche-internet-micro-celebrities, who dealt with unique social pressures. Oh! And the volunteers, they did so much! Thank you to all who have touched my life. I am coming away from this place with the distinct feeling that yes, the Lightcone really is nothing without its people.
I spent this morning/afternoon generating a song. The song is based on the feeling that being at Lighthaven gives me. It is a melancholic song, because in some way, we are here to stop horrible things from happening. It is a hopeful song, because we are here with friends.
There is the music only version, which I call “An Ode to the People of the Lightcone”.
And there is a more personal, home movie music video version, which I call “Something to Protect”.
The song is an ambient track; you can listen to it while you work. I recommend listening to it while at Lighthaven (if you are here); in this way, I hope it will always remind you of this place, and this time.
My timelines have now updated to something closer to fast takeoff. In a world like this, how valuable is educating the general public? Claude claims science started worrying about the climate in the 50s/60s. It wasn’t until 2010s that we saw meaningful action beginning to take place. Do we have the time to educate?
To be clear, this is more of a question than an opinion that I hold. I am working to form an opinion.
In my experience, most of the general public will verbally agree that AI X-risk is a big deal, but then go about their day (cause reasonably, they have no power). There’s no obvious social role/action to do in response to that.
For climate, people understand that they should recycle, not keep the water running, and if there’s a way to donate to clean the ocean on a Mr. Beast video, then some will even donate (sadly, none of these are very effective for solving the climate problem though! Gotta avoid that for our case).
Having a clear call-to-action seems relevant. For example, educating the public about AI taking jobs for the purpose of building support for UBI. It’s then clear what to communicate and the call-to-action.
I’d be curious to hear what you think an ask should be?
Alternatively, you could argue that generally informing folks on a wide scale about the risks involved will then allow general public to do what they believe is locally best. This could involve a documentary or realistic movie.
What do you mean with “meaningful action” regarding climate in the 2010s?
If you look at solar energy, panel prices have gone down in a largely straight line (on a logarithmic scale) since 1975, with a short pause between 2005 and 2010. German pro-solar policy started in the 1990s, and the biggest change in 2011 was the new five-year plan of the CCP, which was probably driven more by economic justifications.
There are few people who currently think we should do whatever is possible to reduce temperature rise. Doing so would mean having a plan for geoengineering. Climate activists often use climate as a justification to push for anticapitalist policies that they independently believe in.
The fight for human flourishing doesn’t end at the initiation of takeoff [echo many points from Seth Herd here]. More generally, it’s very possible to win the fight and lose the war, and a broader base of people who are invested in AI issues will improve the situation.
(I also don’t think this is an accurate simplification of the climate movement or its successes/failures. But that’s tangential to the point I’d like to make.)
If I had sufficient funds, I would consider whether it would be beneficial to invade a few subreddits and offer $1000 to whoever can make the most viral meme that subtly teaches basic concepts of AI Doom.
This thought stems from a comment on “The Best Lay Argument is not a Simple English Yud Essay”. I have more thoughts, but not much time. If my reasoning is unclear I apologise.
Idk if people around the world have those cardboard “Recycle your phone” boxes. That take phones and recycle their components. But I’ve rummaged through a certain one ~5 different times and found 2 completely functional and reset iPhones.
Surely there’s something better we can do with these than disassembling them?
Sounds like that would encourage theft.
I currently am completing psychological studies for credit in my university psych course. The entire time, all I can think is “I wonder if that detail is the one they’re using to trick me with?”
I wonder how this impacts results. I can’t imagine being in a heightened state of looking out for deception has no impact.
I have been working on a post in which I propose that the window for establishing healthy media consumption habits may be closing as content generation technologies advance, and make the suggestion that you GTFO before it’s too late.
I’m looking for push back on this thesis statement to try and strengthen my argument. All comments welcome.
What exactly will happen to people who don’t “get out” in time?
I think GTFO is plausibly a good strategy.
But there’s also a chance future social networks are about to be much healthier and fulfilling, but simply weren’t possible with past technology. An upward trajectory.
The intuition there is that current ads are relatively inefficient at capturing value, as well as that current content algorithms optimize for short-term value creation/addiction rather than offering long term value. That’s the status quo, which, relative to what may be coming—ie relative to AI-powered semantic routing which could connect you to the content and products which long term would benefit you most—is a way smaller economy.
TLDR: more fulfilling social network economics would generate more money, and therefore become selected for once technically feasible.
I agree this stuff is addictive. AI makes things more interactive. Some people who never considered themselves vulnerable got sucked in to AI relationships.
Possible push back:
What if short bits of addictive content generated by humans (but selected by algorithms) are already near max addictiveness? And by the time AI can design/write a video game etc. twice as addictive than humans can design, we already have a superintelligence explosion, and either addiction is solved or we are dead?
In my Obsidian.md vault I use the note aliases to point out to myself when I’ve said something that indicates that I Am Confused. I’ve progressively added and pruned this over a year. Here are the “Confusion Phrases” that I currently use.
aliases: I am confused, I don’t understand, This is confusing, This is very confusing, I am quite confused, Wait what, I don’t know where I fucked up, I Feel Confused, I notice I am Confused, I’m confused, I don’t get it, I just don’t get it, How am I meant to, How do I understand this, That’s surprising, I’m not sure I understand, I’m completely lost, I’m lost, you lost me, I have no idea what’s going on, what are they even trying to say, what are they trying to say, what is she trying to say, what is he trying to say, I’m lost for words, I can’t wrap my head around this, I can’t get my head around, I’m struggling to make sense of, I guess I just forgot about, IDK, I don’t get,
Kinda related: I was inspired by the Fooming Shoggoths to make a song based on the original Your Strength as a Rationalist post a while ago. Here’s that song.
How often do these things become “un-confused”—like for every 20 of these, how many do you have an “ah-ha” or a “now I see” moment of clear resolution? Following on, do you find that you’re able to find a way to think of that faster—i.e. that you can see what cognitive processes cause you to be confused and how you could have resolved that quicker?
This isn’t an extremely useful technique. What it really does for me is break me out of undirected thinking with my writing and get me to actively start thinking things like “ok, but why would this be happening?”
I think 75% of the time it’s not helpful. Sometimes unhelpful when breaking a flow.
I’m working on thinking things faster. Though, it’s not a skill I’d say I have yet.
However, it’s pretty low cost for any payoff at all.
25% of the time it being helpful sounds pretty good to me.
Just to be clear, when you say “undirected thinking” do you mean thinking that is not pertinent to your intention or goal with a writing session or a piece of writing; or is it knowing that you want to write something but wandering aimlessly because you’re not sure what that thing is? Or am I well off the mark on both?
Closer to the first one. I find that when writing to think, my mind has two modes: very System 1 and very System 2. If I’ve been going for a while on a side branch, System 1 takes over. The writing becomes less about thinking and more about the act of writing. This leads to me making a hypothesis and saying “idk why that is”. That triggers the alias, which points out to me that I’m not really ‘thinking’. I then switch to “How could I test if this is true?”
I appreciate the 25% reframing. That’s something I wish I’d thought faster.
Trigger: I see a %# Action: Switch it in my head
Upvoted for the song.
I recently saw a tweet that asked GPT what micro-skills a person should master. I asked Claude a similar question, though I preloaded the conversation with “things a modern day Aristotle would teach a 24yo student”.
I think this is probably a valuable exercise to undertake yourself. I’m providing my prompt, and Claude’s response as an ‘appetiser’ to get you interested in this task.
—
—
Cold Reading & Behavioral Analysis
Understanding micro-expressions
Picking up on subtle vocal tone changes
Recognizing behavioral patterns in groups
Reading body language clusters rather than isolated gestures
This isn’t about manipulation, but rather developing deeper empathy and understanding
Strategic Silence & Information Management
Knowing when not to speak
Understanding information asymmetry
The power of selective disclosure
Using thoughtful pauses in conversation
The art of asking rather than telling
Scenario Planning & Secondary Consequence Analysis
Ability to see 3-4 moves ahead in decisions
Understanding cascade effects
Identifying hidden risks and opportunities
Planning for both success and failure scenarios
Recognition of system dynamics
Social Capital Accounting
Understanding relationship networks
Tracking favors and obligations implicitly
Building genuine reciprocity
Recognition of social momentum
Understanding status games without playing them
Energy Management & Recovery Tactics
Understanding personal energy patterns
Quick recovery techniques between intense tasks
Environmental design for optimal performance
Micro-rest optimization
State switching abilities
Pattern Recognition Across Domains
Transferring insights between unrelated fields
Spotting emerging trends early
Recognition of historical rhymes
Understanding cyclic behaviors
Identifying underlying principles
Meta-Learning Optimization
Learning how you specifically learn best
Creating personal knowledge management systems
Information synthesis techniques
Retention optimization
Skill transfer mechanisms
Probabilistic Thinking & Calibration
Understanding base rates
Recognizing cognitive biases in real-time
Bayesian updating in daily life
Risk assessment calibration
Decision journaling
Attention Direction & Management
Understanding attention as a finite resource
Strategic ignorance
Focus maintenance techniques
Context switching optimization
Signal-to-noise ratio improvement
Time Perception Management
Understanding psychological time vs clock time
Managing time horizons
Chronotype optimization
Timeline compression and expansion techniques
Personal tempo awareness
Would you like me to elaborate on any of these? I find them fascinating because they’re rarely taught explicitly yet can be transformative when mastered. I’m curious which ones resonate most with you?
These skills compound over time and create second-order advantages that aren’t immediately obvious to others. They’re also relatively timeless—they worked in ancient Athens and still work today.
I’m curious why you opted for Aristotle (albeit “modern”) as the prompt pre-load? Most of those responses seem not directly tethered to Aristotelian concepts/books or even what he directly posits as being the most important skills and faculties of human cognition. For example, cold reading, I don’t recall anything of the sort anywhere in any Aristotle I’ve read.
While we’re not sure Aristotle himself designed the layout of the corpus, we do know that the Nicomachean Ethics lists the faculties “whereby the soul attains Truth”:
Techne (τέχνη) - which refers to conventional ways of achieving goals, i.e. without deliberation
Episteme (ἐπιστήμη) - which is apodeiktike, the faculty of arguing from proofs
Phronesis (φρόνησις) - confusingly translated as “practical wisdom”, this refers to the ability to attain goals by means of deliberation. Excellence in phronesis is rendered by the Latinate word ‘prudence’.
Sofia (σοφία) - often translated as ‘wisdom’; Aristotle calls this the investigation of causes.
Nous (νοῦς) - which refers to the archai, or ‘first principles’
According to Diogenes Laertius, the corpus (at least as it has come down to us) divides into the practical books and the theoretical. The practical is itself subdivided between the books on techne (say, the Rhetoric and Poetics) and those on phronesis (the Ethics and Politics); the theoretical is then covered in works like the Metaphysics (which is probably not even a cohesive book, but a hodge-podge), the Categories, etc.
This strikes me as a better guide to a timeless education in the Aristotelian tradition, and to how we should shape a modern adaptation.
I asked for further details on the 10th point and Claude listed a bunch of stuff I’ve absolutely never heard of. If I had to guess, I’d say it’s related to meditation. Here it is.
—
Understanding Psychological Time vs Clock Time
How different mental states alter time perception
Flow states can make hours feel like minutes
Anxiety can make minutes feel like hours
Learning to intentionally shift between time perspectives
When to use event-based time vs clock-based time
How to enter “timeless” states for creative work
Reading your internal clock
Understanding your natural rhythms
Recognizing when time perception is distorted by emotions
Managing Time Horizons
Developing multiple concurrent time frames
Thinking in day/month/year/decade simultaneously
Balancing immediate tasks with long-term vision
Time horizon switching
When to zoom out to years
When to zoom in to minutes
How to maintain strategic clarity while handling tactics
Future-self continuity
Techniques for making future consequences feel real
Building strong mental connections to your future self
Chronotype Optimization
Understanding your natural energy patterns
Peak creative hours
Peak analytical hours
Recovery periods
Designing your life around your chronotype
When to schedule different types of work
How to handle obligations that fight your natural rhythm
Seasonal adjustments
Adapting to changing daylight
Modifying routines for different seasons
Timeline Compression and Expansion Techniques
How to make time feel longer when needed
Breaking experiences into smaller chunks
Increasing novelty to slow perceived time
Using attention to expand moments
How to make time feel shorter when beneficial
Chunking similar tasks
Creating flow states
Using routine to speed through necessary tasks
Personal Tempo Awareness
Understanding your natural pace
Speaking tempo
Walking pace
Decision-making speed
Tempo matching and mismatching
When to match others’ tempo
When to intentionally mismatch
How to set the tempo in group situations
Rhythm management
Creating sustainable work rhythms
Building in recovery cycles
Recognizing when to speed up or slow down
Advanced Applications:
Using time perception in negotiations
Creating or relieving time pressure
Understanding how others perceive time
Memory engineering
Making good times feel longer in memory
Making difficult times feel shorter in memory
Social time management
Managing group time perception
Creating shared time experiences
Decision timing
Understanding when to decide quickly vs slowly
Recognizing optimal timing windows
This skill set is particularly valuable because most people are unconscious of how their time perception affects their decisions and experiences. Mastering it gives you significant advantages in:
Personal productivity
Emotional regulation
Strategic planning
Social influence
Life satisfaction
Would you like me to elaborate on any particular aspect? Or shall we explore how this connects with some of the other micro-skills we discussed?
Off the cuff thought: I think IABI being kinda simplistic is both fine, and probably beneficial. I have not yet received my copy, but from reviews I’ve read, the book consists of the same arguments that originally convinced me to join ‘the fight’ all those years ago. Nuance in my understanding came later. I’d guess that’s generally the same path we all took?
If you’d like an explanation from Claude that starts quite basic and builds up, I have had great success with the following phrase as a “Style” in app, no changes necessary.
“Wait, start from the baseline and work your way up to the explanation”
Perhaps a silly question, but does the recent “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs” paper imply that people calling a model good or bad online results in a self-fulfilling prophecy?
e.g.
Bob Says “Alice.ai is bad”
Alice.ai is trained on this data
The next iteration of Alice.ai will think of itself as worse than if Bob had never made that comment. This results in Alice.ai creating bad outputs.
Those bad outputs push Charlie over a threshold and Charlie says “Alice.ai is bad”
Loop
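The loop above can be sketched as a toy simulation. This is entirely my own construction, not from the paper: the coupling constant, the commenter threshold, and both functions are invented assumptions, just to show how a small sentiment leak into training data could compound across iterations.

```python
def next_quality(quality, neg_share, leak=0.5):
    """Quality of the next model iteration after training on data in which
    `neg_share` of comments call the model bad. `leak` is an assumed
    coupling strength between sentiment and capability."""
    return quality * (1 - leak * neg_share)

def negative_share(quality, threshold=0.6, base=0.1):
    """Share of commenters saying the model is bad: a base rate, plus
    everyone pushed over the threshold by poor outputs (the Charlies)."""
    return base if quality >= threshold else base + (threshold - quality)

quality = 0.8
for step in range(8):
    share = negative_share(quality)
    quality = next_quality(quality, share)
    print(f"iteration {step}: negative share {share:.2f}, quality {quality:.2f}")
```

Under these made-up parameters the decline is slow while quality stays above the threshold, then accelerates once poor outputs start recruiting new critics, which is the self-reinforcing part of the hypothesis.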
Edit: Oops, I didn’t realize Alice.ai was a real site. Though it’s got a pretty art style, so I’ll keep it in here.
I’ve seen a lot about GPT4o being kinda bad, and I’ve experienced that myself. This surprises me.
Now I will say something that feels like a silly idea. Is it possible that having the audio/visual part of the network cut off results in 4o’s poor reasoning? As in, the whole model is doing some sort of audio/visual reasoning. But we don’t have the whole model, so it can’t reason in the way it was trained to.
If that is the case, I’d expect that when those parts are publicly released, scores on benchmarks shoot up?
Do people smarter and more informed than me have predictions about this?
Without a detailed model card for 4o it is impossible to know for sure why models drift in performance over time, but drift they do.
It is entirely possible that OpenAI started with a version of GPT-4 Turbo, parallelized processing, and performed an extensive “fine tune” to improve the multi-modal capabilities.
Essentially, the model could “forget” how to complete prompts from just a week ago, because some of its “memory” was over-written with instructions to complete requests for multi-modal replies.
I’m confused by what you mean that GPT-4o is bad? In my experience it has been stronger than plain GPT-4, especially at more complex stuff. I do physics research and it’s the first model that can actually improve the computational efficiency of parts of my code that implement physical models. It has also become more useful for discussing my research, in the sense that it dives deeper into specialized topics, while the previous GPT-4 would just respond in a very handwavy way.
Man, I wish that was my experience. I feel like I’m constantly asking GPT4o a question and getting a weird or bad response, then switching to 4 to finish the job.
Benchmarks are consistent with GPT-4o having different strengths than GPT4-Turbo, though at a similar overall level—EQ-Bench is lower, MAGI-Hard is higher, best tested model for Creative Writing according to Claude Opus, but notably worse at judging writing (though still good for its price point).
In my experience different strengths also mean different prompt strategies are necessary; a small highly instruction-focused model might benefit from few-shot repetition and emphasis that just distract a more powerful OpenAI model for example. Which might make universal custom instructions more annoying.