I think in my whole life I have seen exactly one person come back because another person left, and they didn’t stay long anyway. Broadly speaking, I don’t think this ever works.
I think this only works if your standards for posts are in sync with those of the outside world. Otherwise, you’re operating under incompatible status models and cannot sustain your community standards against outside pressure; you will always be outcompeted by the outside world (which can pretty much always offer more status than you can, simply by volume) unless you can maintain the worth of your respect, and you cannot do that by copying outside appraisal.
I think you failed to establish that the long, well-written and highly-upvoted critiques lived in the larger LW archipelago, so there’s a hole in your existence proof. On that basis, I would surmise that on priors Said assumed you were referring to comments or on-site posts.
Sounds like you should create PokemonBench.
I don’t understand it but it does make me feel happy.
Haven’t heard back yet...
edit: Heard back!
Okay, I’ll do that, but why do I have to send an email...?
Like, why isn’t the how-to just posted in a comment? Alternatively, why can’t I select Lightcone as an option on Effektiv-Spenden?
Unless there’s some legal reason, this seems like a weird unforced own-goal.
Any news on this?
Original source, to my knowledge. (July 1st, 2014)
“So long, Linda! I’m going to America!”
Human: “Look, can’t you just be normal about this?”
GAA-optimized agent: “Actually-”
Hm, I guess this wouldn’t work if the agent still learns an internalized RL methodology? Or would it? Say we have a base model; there’s not much need for GAA because it’s just doing token prediction. Then we go into some sort of (distilled?) RL-based CoT instruct tuning, and GAA means it picks up abnormal rewards from the signal more slowly, i.e. it doesn’t do the classic boat-spinning-in-circles thing (good test?). But if it internalizes RL at some point, its mesa-optimizer wouldn’t be so limited, and since internalized RL is a general technique, GAA wouldn’t prevent it? Still, seems like a good first line of defense.
The issue, from a writing perspective, is that a positive singularity quickly becomes both unpredictable and unrelatable, so any hopeful story we could write would inevitably look boring and pedestrian. I mean, I know what I intend to do come the Good End, for maybe the next 100k years or so, but a five-minute conversation with the AI would probably bring up many much better ideas, it being what it is. But … bad ends are predictable, simple, and settle into a steady state that is very easy to describe.
A curve that grows and never repeats is a lot harder to predict than a curve that goes to zero and stays there.
It’s a historical joke. The quote is (I think) from the emails. Attributing it to Lenin alludes to the degree to which the original communists were sidelined by Stalin, a more pedestrian dictator; presumably a reference to Sam Altman.
Who in her kingdom kept selling into demon king attack contracts anyway? That seems like a net-loss proposition.
Hm. Maybe there were a few people who could set things up to profit from the attack...?
Still, it seems to me that the market should have incentivized a well-funded scout corps.
Can I really trust an organization to preserve my brain that can’t manage a working SSL certificate?
God damn that demo is cool. GitHub for self-hosting, please? :)
I’m 95% sure this is a past opinion, accurately presented, that they no longer hold.
(Consider the title.)
Should ChatGPT assist with things that the user or a broad segment of society thinks are harmful, but ChatGPT does not? If yes, the next step would be “can I make ChatGPT think that bombmaking instructions are not harmful?”
Probably ChatGPT should go “Well, I think this is harmless but broad parts of society disagree, so I’ll refuse to do it.”
I think the analogy to photography works very well, in that it’s a lot easier than the workflow that it replaced, but a lot harder than it’s commonly seen as. And yeah, it’s great using a tool that lets me, in effect, graft the lower half of the artistic process to my own brain. It’s a preview of what’s coming with AI, imo—the complete commodification of every cognitive skill.
As somebody who makes AI “art” (largely anime tiddies, tbh) recreationally, I’m not sure I agree with the notion that the emotion of an artist is not recognizable in the work. For one, at least when you’re looking at a finished picture I’ve made, you’re looking at hours of thought and effort. I can’t draw a straight line to save my life, but I can decide what should go where, which color is the right or wrong one, and which of eight candidate pictures has particular features I like. When you’re working incrementally (img2img, for instance), it’s very common to just mix and match parts of different runs by preference. So in a finished picture, every fine detail would be drawn by the AI, but the scene setup, arrangement etc. would be a much more collaborative, deliberate process.
(If you see a character with six fingers, you’re seeing an AI artist who does not respect their craft—or who is burnt out after hours of wrangling the damn thing into submission. It’s happened tbh.)
But also—I’ve seen AI images that genuinely astonished me. I’ve seen image models do one-shots where I went, “hey, I’m picking up what you’re putting down here and I approve.” Lots of features in particular combinations that were unlikely to be copied directly from any training example, but that showed something like a high-level understanding of sentiment, or a recognition of specific details and the implications of a particular preference. It’s not something that happens often. But I have recognized sparks of true understanding in one-shot AI works. We may be closer—or possibly, simpler—than you think.
I’m very good friends with someone who is persistently critical, and it has imo largely improved my mental health, fwiw, by forcing me to construct a functioning and well-maintained ego, which I didn’t really have before.