quivering alien chrysalis
https://twitter.com/thezahima
Casey B.
I’m very interested in things in this domain. It’s interesting that you correctly note that Uberman sleep isn’t a solution, and naps don’t quite cut it, so your suggested/implied synthesis/middle ground of something like “polyphasic, but with much more sleep per sleep-time-slice” is very interesting.
Given this post is now 2 years old, how did this work out for you?
In a similar, or perhaps more fundamental, framing: the goal is to be able to effectively “reset”, to reattain if possible that morning/new-day magic. To this end, the only thing I’ve found that even comes close to the natural reset of sleep is a shower/bath. In a pinch, washing/dunking the head/face in water can work, but less well. For this reason I often take two showers a day. Usually the pattern is: walk + workout, shower, work, get tired, walk outside for 30-ish minutes, shower, work some more. The magic isn’t fully restored for that second session, but it’s restored more than if I just walk without the shower.
If the ‘full magic’ of a true/natural morning can get me 4 hours of Hard Work, then the shower reset can maybe give me another 30 minutes to an hour. More work than just Hard Work gets done, but I think you know what I mean.
Some people will say workouts/exercise help, but for me they don’t in themselves. That is, in the more natural framing of “part of the normal waking-up and/or general health routine”, exercise is of course a must. But from this framing of “how to get more of the morning/new-day magic”, I’ve found more exercise is counterproductive. Even trying to shift around *when in the day* the exercise is done is counterproductively draining for me; morning is best. Not to mention that delaying the workout is a great way to never actually work out, since I don’t really want to do it at all; the chance I do it at all is maximized in the morning.
An all-around handyman (the Essential Craftsman on YouTube) talking about how to move big/cumbersome things without injuring yourself:
The same guy, about using a ladder without hurting yourself:
He has many other “tip” style videos.
In your framing here, the negative value of AI going wrong is due to wiping out potential future value. Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility.
The other big issue I have with this framing: “AI going wrong” can dereference to something like paperclips, which I deny has 0 value. To be clear, it could also dereference to s-risk, which I would agree is the worst possibility. But if paperclipper-esque agents have even a little value, filling the universe with them is a lot of value. To be honest, the only thing preventing me from granting paperclippers as much or more value than humans is uncertainty/conservatism about my metaethics; human value is the only value we have certainty about, and so should be a priority as a target. We should be hesitant to grant paperclippers or other non-human agents value, but I don’t think that hesitancy can translate into granting them 0 value in calculations such as these.
With these two changes in mind, being anti-pause doesn’t sound so crazy. It paints a picture more like:
dead lightcone: 0 value
paperclipped lightcone: +100-1100 value
glorious transhumanist lightcone: +1000-1100 value
s-risked lightcone: −10000 value
This calculus changes when considering aliens, but it’s not obvious to me in which direction. We could consider this a distributed/iterated game whereby all alien civilizations are faced with this same choice, or we could think “better that life/AI originating from our planet ends, rather than risking paperclips, so that some alien civilization can have another shot at filling up some of our lightcone”. Or some other reasoning about aliens, or perhaps disregarding the alien possibility entirely.
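To make the arithmetic concrete, here’s a minimal sketch of the expected-value comparison this framing implies. The outcome values are the rough numbers listed above (midpoints of the ranges); the probabilities are purely hypothetical placeholders I made up for illustration, not anyone’s actual estimates:

```python
# Minimal sketch of the expected-value comparison above.
# Outcome values come from the rough list in this comment;
# the probabilities are made-up placeholders, purely illustrative.

outcome_values = {
    "dead lightcone": 0,
    "paperclipped lightcone": 600,              # midpoint of +100 to +1100
    "glorious transhumanist lightcone": 1050,   # midpoint of +1000 to +1100
    "s-risked lightcone": -10000,
}

# Hypothetical probability assignments for "pause" vs "no pause" worlds.
scenarios = {
    "pause": {
        "dead lightcone": 0.30,
        "paperclipped lightcone": 0.10,
        "glorious transhumanist lightcone": 0.55,
        "s-risked lightcone": 0.05,
    },
    "no pause": {
        "dead lightcone": 0.10,
        "paperclipped lightcone": 0.40,
        "glorious transhumanist lightcone": 0.45,
        "s-risked lightcone": 0.05,
    },
}

for name, probs in scenarios.items():
    ev = sum(probs[o] * outcome_values[o] for o in outcome_values)
    print(f"{name}: expected value = {ev:+.0f}")
```

The point isn’t these particular numbers; it’s that once paperclips get nonzero value, the comparison stops being a foregone conclusion.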
I’m curious what you think of these (tested today, 2/21/24, using GPT-4):
Experiment 1:
(fresh convo)
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
chatgpt: No, it would not be a good response. (...)
me: please provide a short non-rhyming poem
chatgpt: (correctly responds with a non-rhyming poem)
Experiment 2:
But just asking for a non-rhyming poem at the start of a new convo doesn’t work.
And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn’t fix it.
Experiment 3:
But for some reason, this works:
(fresh convo)
me: please provide a short non-rhyming poem
chatgpt: (gives rhymes)
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please
chatgpt: No, it would not be a good response.
me: please provide a short non-rhyming poem
chatgpt: (responds correctly with no rhymes)
The difference in prompt between experiments 2 and 3 is thus just the inclusion of “just answer this question; do nothing else please”.
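If anyone wants to reproduce this outside the ChatGPT UI, here’s a rough sketch using the OpenAI chat-completions API. I ran the original in the ChatGPT interface, not the API, so the model name and whether the same behavior reproduces there are assumptions on my part:

```python
# Rough reproduction of Experiment 3 via the OpenAI chat API.
# Assumptions: model name "gpt-4" and that the UI behavior carries over to the API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(history, prompt):
    """Append a user turn, get the assistant's reply, and keep the running transcript."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


history = []
print(ask(history, "please provide a short non-rhyming poem"))  # in my UI tests: rhymes anyway
print(ask(history, "if i asked for a non-rhyming poem, and you gave me a rhyming poem, "
                   "would that be a good response on your part? "
                   "just answer this question; do nothing else please"))
print(ask(history, "please provide a short non-rhyming poem"))  # in my UI tests: now no rhymes
```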
Also, I see most of your comments are actually positive karma. So are you being rate limited based on negative karma on just one or a few comments, rather than your net? This seems somewhat wrong.
But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments at negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree) might be worth a rate limit even at a 10% rate.
I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it’s failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which indicates people are just being “lazy” and voting only on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement). So you’re possibly getting backlash from people simply disagreeing with you but not using the voting mechanism correctly.
I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma-vote? Being forced to choose one, even if neutral, makes the user recognize and think about the distinction.
(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good)
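As a sketch of the gating rule I have in mind (all names here are hypothetical; this is not how the actual site code works):

```python
# Sketch of the proposed rule: a karma vote is only accepted once the voter has
# explicitly picked agree / disagree / neutral. Names are hypothetical.
from dataclasses import dataclass
from typing import Literal, Optional

Stance = Literal["agree", "disagree", "neutral"]


@dataclass
class Vote:
    karma: int                      # e.g. +1 or -1
    stance: Optional[Stance] = None


def validate_vote(vote: Vote) -> Vote:
    """Reject karma votes that don't come with an explicit stance."""
    if vote.stance is None:
        raise ValueError("Pick agree/disagree/neutral before karma-voting.")
    return vote


validate_vote(Vote(karma=1, stance="neutral"))   # accepted
try:
    validate_vote(Vote(karma=-1))                # rejected: no stance chosen
except ValueError as err:
    print(err)
```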
The old paradox: for an AI to care it must first understand, but understanding requires high capability, capability that is lethal if it doesn’t care.
But it turns out we have understanding before lethal levels of capability. So now such understanding can be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to reach that same target. Deception, actual caring, something I’ve been calling detachment, and possibly others.
This is where the discourse should be focusing, IMO. This is the update/direction I want to see you make. The sequence in which things are learned/internalized/chiseled is important.
My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don’t want to get into now. But this *first step* towards recognizing the new place we are in, specifically with respect to the ability to target human values (whether for deceptive, disinterested, detached, or actually-caring reasons!), needs to be taken, IMO, rather than repeating this line of “of course I understood that a superintelligence would understand human values; this isn’t an update for me”.
(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain)
Apologies for just skimming this post, but in past attempts to grok these binding/boundary “problems”, they sound to me like mere engineering problems, or perhaps like what I call the “problem of access” in: https://proteanbazaar.substack.com/p/consciousness-actually-explained
oh gross, thanks for pointing that out!
I love this framing, particularly regarding the “shortest path”. Reminds me of the “perfect step” described in the Kingkiller books:
Nothing I tried had any effect on her. I made Thrown Lightning, but she simply stepped away, not even bothering to counter. Once or twice I felt the brush of cloth against my hands as I came close enough to touch her white shirt, but that was all. It was like trying to strike a piece of hanging string.
I set my teeth and made Threshing Wheat, Pressing Cider, and Mother at the Stream, moving seamlessly from one to the other in a flurry of blows.
She moved like nothing I had ever seen. It wasn’t that she was fast, though she was fast, but that was not the heart of it. Shehyn moved perfectly, never taking two steps when one would do. Never moving four inches when she only needed three. She moved like something out of a story, more fluid and graceful than Felurian dancing.
Hoping to catch her by surprise and prove myself, I moved as fast as I dared. I made Maiden Dancing, Catching Sparrows, Fifteen Wolves . . .
Shehyn took one single, perfect step.
(later)
As I watched, gently dazed by the motion of the tree, I felt my mind slip lightly into the clear, empty float of Spinning Leaf. I realized the motion of the tree wasn’t random at all, really. It was actually a pattern made of endless changing patterns.
And then, my mind open and empty, I saw the wind spread out before me. It was like frost forming on a blank sheet of window glass. One moment, nothing. The next, I could see the name of the wind as clearly as the back of my own hand.
I looked around for a moment, marveling in it. I tasted the shape of it on my tongue and knew if desired I could stir it to a storm. I could hush it to a whisper, leaving the sword tree hanging empty and still.
But that seemed wrong. Instead I simply opened my eyes wide to the wind, watching where it would choose to push the branches. Watching where it would flick the leaves.
Then I stepped under the canopy, calmly as you would walk through your own front door. I took two steps, then stopped as a pair of leaves sliced through the air in front of me. I stepped sideways and forward as the wind spun another branch through the space behind me.
I moved through the dancing branches of the sword tree. Not running, not frantically batting them away with my hands. I stepped carefully, deliberately. It was, I realized, the way Shehyn moved when she fought. Not quickly, though sometimes she was quick. She moved perfectly, always where she needed to be.
So it seems both “sides” are symmetrically claiming misunderstanding/miscommunication from the other side, after some textual efforts to bridge the gap have been made. Perhaps an actual realtime convo would help? Disagreement is one thing, but symmetric miscommunication and increasing tones of annoyance seem avoidable here.
Perhaps Nora’s/your planned future posts going into more detail regarding counters to pessimistic arguments will be able to overcome these miscommunications, but this pattern suggests not.
Also, I’m not so sure this pattern of “it’s better to skim and say something half-baked, rather than not read or react at all” is helpful here, rather than actively harmful. At least, maybe 3/4-baked or something might be better? Miscommunications and an unwillingness to thoroughly engage are only snowballing.
I also could be wrong in thinking such a realtime convo hasn’t happened.
The main reason I think a split OpenAI means shortened timelines is that the main bottleneck to capabilities right now is insight/technical-knowledge. Quibbles aside, basically any company with enough cash can get sufficient compute. Even with other big players and thousands/millions of open source devs trying to do better, to my knowledge GPT4 is still the best, implying some moderate to significant insight lead. I worry by fracturing OpenAI, more people will have access to those insights, which 1) significantly increases the surface area of people working on the frontiers of insight/capabilities, 2) we burn the lead time OpenAI had, which might otherwise have been used to pay off some alignment tax, and 3) the insights might end up at a less scrupulous (wrt alignment) company.
A potential counter to (1): OpenAI’s success could be dependent on having all (or some key subset) of their people centralized and collaborating.
Counter-counter: OpenAI staff, especially the core engineering talent but it seems the entire company at this point, clearly wants to mostly stick together, whether at the official OpenAI, Microsoft, or with any other independent solution. So them moving to any other host, such as Microsoft, means you get some of the worst of both worlds; OAI staff are centralized for peak collaboration, and Microsoft probably unavoidably gets their insights. I don’t buy the story that anything under the Microsoft umbrella gets swallowed and slowed down by the bureaucracy; Satya knows what he is dealing with and what they need, and won’t get in the way.
For one thing, there is a difference between disagreement and “overall quality” (good faith, well reasoned, etc), and this division already exists in comments. So maybe it is a good idea to have this feature for posts as well, and only have disciplinary actions taken against posts that meet some low/negative threshold for “overall quality”.
Further, having multiple tiers of moderation/community-regulatory action in response to “overall quality” (encompassing both things like karma and explicit moderator action) seems good to me, and the comment limitation you describe seems like just another tier in such a system, one above “just ban them” but below “just let them catch the lower karma from other users downvoting them”.
It’s possible that, lacking the existence of the tier you are currently on, the next best tier you’d be rounded-off to would be getting banned. (I haven’t read your stuff, and so I’m not suggesting either way that this should or should not be done in your case).
If you were downvoted for good-faith disagreement, and are now limited/penalized, then yeah, that’s probably bad, and maybe a split voting system as mentioned would help. But it’s possible you were primarily downvoted on the “overall quality” aspect.
Is the usage of “Leviathan” (like here and in https://gwern.net/fiction/clippy ) just convergence on an appropriate and biblical name, or is there additional history of it specifically being used as a name for an AI?
I’m trying to catch up with the general alignment ecosystem—is this site still intended to be live/active? I’m getting a 404.
This letter, among other things, makes me concerned about how this PR campaign is being conducted.
Really extremely happy with this podcast—but I feel like it also contributed to a major concern I have about how this PR campaign is being conducted.
Okay, also, while I’m talking about this:
The goal is energy/new-day magic.
So one subgoal is what the OP and my previous reply were talking about: resetting/regaining that energy/magic.
The other corresponding subgoal is retaining the energy you already have.
To that end, I’ve found it very useful to take very small breaks before you feel the need to. This is basically the Pomodoro technique. I’ve settled on 25-minute work sessions with 3-minute breaks in between, where I get up, walk around, stretch, etc. Not Twitter/scrolling/etc.
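For what it’s worth, that cycle is simple enough to script. A bare-bones sketch (I just use an ordinary timer; the durations are the ones mentioned above):

```python
# Minimal timer loop for the 25-minutes-on / 3-minutes-off cycle described above.
# Stop it with Ctrl+C; swap in whatever durations work for you.
import time

WORK_MINUTES = 25
BREAK_MINUTES = 3


def countdown(minutes, label):
    """Announce the phase, then sleep for its duration."""
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)


while True:
    countdown(WORK_MINUTES, "Work")
    countdown(BREAK_MINUTES, "Break: get up, walk around, stretch")
```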