This is a neat question, but it’s also a pretty straightforward recall test because descriptions of the experiment for teachers are available online.
I think alcohol’s effects are at least somewhat psychosomatic, but that doesn’t mean you can easily get the same effect without it. Once nobody’s actually drinking and everyone knows it, then the context where you’re expected to let loose is broken. You’d have to construct a new ritual that encourages the same behavior without drugs, which is probably pretty hard.
I agree that the vocals have gotten a lot better. They’re not free of distortion, but it’s almost imperceptible on some songs, especially without headphones.
The biggest tell for me that these songs are AI is the generic and cringey lyrics, like what you’d get if you asked ChatGPT to write them without much prompting. They often have the name of the genre in the song. Plus the way they’re performed doesn’t always fit with the meaning. You can provide your own lyrics, though, so it’s probably easy to get your AI songs to fly under the radar if you’re a good writer.
Also, while some of the songs on that page sound novel to me, they’re usually more conventional than the prompt suggests. Like, tell me what part of the last song I linked to is afropiano.
This is what I think he means:
The object-level facts are not written by or comprehensible to humans, no. What’s comprehensible is the algorithm the AI agent uses to form beliefs and make decisions based on those beliefs. Yudkowsky often compares gradient descent optimizing a model to evolution optimizing brains, so he seems to think that understanding the outer optimization algorithm is separate from understanding the inner algorithms of the neural network’s “mind”.
I think what he imagines as a non-inscrutable AI design is something vaguely like “This module takes in sense data and uses it to generate beliefs about the world which are represented as X and updated with algorithm Y, and algorithm Z generates actions, and they’re graded with a utility function represented as W, and we can prove theorems and do experiments with all these things in order to make confident claims about what the whole system will do.” (The true design would be way more complicated, but still comprehensible.)
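For concreteness, here’s a minimal toy sketch of the kind of modular, inspectable design I mean (entirely my own illustration with made-up names, not anything Yudkowsky has actually specified):

```python
from dataclasses import dataclass

# Purely illustrative skeleton: each stage is a small, separately inspectable
# module, rather than one opaque end-to-end network.

@dataclass
class Beliefs:
    probabilities: dict  # representation "X": named claims with probabilities

def update_beliefs(beliefs: Beliefs, observation: dict) -> Beliefs:
    """Algorithm "Y": fold new sense data into the belief state."""
    new_probs = dict(beliefs.probabilities)
    new_probs.update(observation)  # stand-in for a real Bayesian update
    return Beliefs(new_probs)

def propose_actions(beliefs: Beliefs) -> list[str]:
    """Algorithm "Z": generate candidate actions from the current beliefs."""
    return ["act", "wait"] if beliefs.probabilities else ["explore"]

def utility(action: str, beliefs: Beliefs) -> float:
    """Utility function "W": score an action given the beliefs."""
    return 1.0 if action == "act" else 0.0

def choose_action(observation: dict, beliefs: Beliefs) -> str:
    # The decision procedure is a short, auditable composition of the pieces
    # above, so you can reason about each piece and about the whole system.
    beliefs = update_beliefs(beliefs, observation)
    return max(propose_actions(beliefs), key=lambda a: utility(a, beliefs))
```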
Putting GPT back in the name but making it lowercase is a fun new installment in the “OpenAI can’t name things consistently” saga.
Looks like BS. They basically just prompted ChatGPT to churn out a bunch of random architectures that ended up with similar performance. It seems likely that the ones they claim to be “SoTA” just had good numbers due to random variation. ChatGPT probably had a big role in writing the paper, too. The grandiose claims reek of its praise.
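To illustrate the selection effect I suspect is going on (a toy simulation of my own, not anything from the paper): if you generate enough architectures whose true performance is identical, the top scorer will look “SoTA” from noise alone.

```python
import random

random.seed(0)

# Toy model: 50 architectures, all with the same true accuracy, each measured
# on a finite test set, so every measurement is noisy.
TRUE_ACCURACY = 0.80
TEST_SET_SIZE = 500
NUM_ARCHITECTURES = 50

def measured_accuracy() -> float:
    correct = sum(random.random() < TRUE_ACCURACY for _ in range(TEST_SET_SIZE))
    return correct / TEST_SET_SIZE

scores = [measured_accuracy() for _ in range(NUM_ARCHITECTURES)]
print(f"best: {max(scores):.3f}  worst: {min(scores):.3f}  true: {TRUE_ACCURACY}")
# The best of the batch typically beats the true accuracy by a couple of
# percentage points purely by luck, even though no architecture is better.
```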
Your other posts about game theory were high quality. However, this post doesn’t make sense to me.
You try to frame your simulation as “simpler” than regular Newtonian gravity, even though you’ve added many extra parameters (groups of particles with different forces between each other) which technically makes it more complex. You talk about emergence, but the results are pretty simple too; the particles just form clumps every time. It comes across to me as adding an extra weird feature to a simple gravity simulation and then being impressed that a weird thing happens. Also, the rapid oscillations look like they might be artifacts resulting from forces that are too strong relative to the framerate of the simulation. Particle Life is similar to this, but executed much better.
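To show the kind of artifact I mean (my own toy sketch, not your code): with a plain explicit-Euler update, an attraction that’s strong relative to the timestep (i.e. the per-frame step) makes particles overshoot each other every step and jitter instead of settling.

```python
# Two particles in 1D attracting with a constant-magnitude force, stepped
# with explicit Euler. All numbers are invented for illustration.

def simulate(force_strength: float, dt: float = 1.0, steps: int = 7) -> list[float]:
    x1, x2 = 0.0, 7.0   # positions
    v1, v2 = 0.0, 0.0   # velocities
    separations = []
    for _ in range(steps):
        direction = 1.0 if x2 > x1 else -1.0
        v1 += force_strength * direction * dt
        v2 -= force_strength * direction * dt
        x1 += v1 * dt
        x2 += v2 * dt
        separations.append(x2 - x1)
    return separations

print("weak force:  ", [round(s, 1) for s in simulate(0.1)])
print("strong force:", [round(s, 1) for s in simulate(5.0)])
# The weak force closes the gap smoothly; the strong force overshoots on the
# very first step and then bounces back and forth, which shows up on screen
# as rapid jittering rather than stable clumping.
```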
Then you also talk about forces emerging from entropy, but that doesn’t seem relevant. Your simulation doesn’t have action at a distance emerging from local interactions, it’s just pre-programmed action at a distance where some of the particles happen to be repelling each other instead of attracting each other.
I’m not sure I correctly understood what this article was trying to say, because it jumps between different points and it talks as if you have a theory while being incredibly vague about what it is. What does it mean for gravity to come from “nothing”? There’s no concrete explanation.
The true slowdown in the world where this happens is probably greater, because it’d be taboo to race ahead in nations that went to such lengths to slow down.
Elaborating further: I think LW isn’t purely one or the other. The community aspect is there, so it’s good not to discourage people too much and to be helpful in ways besides criticism/correction, but I also think you can be helpful without making your own posts.
Allow me, someone who only comments, to weigh in.
My impression is that this is an issue of different frames. Consider a different metaphor: filmmaking. It’d be absurd to claim that only people who’ve made films of their own can be critics. Films aren’t just for other filmmakers, they’re for the public, and the public should be able to criticize them. If a filmmaker were really bothered by an outspoken critic of their movies, it would be reasonable to tell that filmmaker to get over it and worry about making better movies instead.
On the other hand, it’s different if the criticism is within a community of people trying to make good films. To be as successful as they can be, a community of filmmakers benefits from helping each other out by sharing knowledge and tools, encouraging each other, making connections, and so on. Critique helps, too… but if there are critics who hang around the community and tear apart everyone’s films without making anything or helping in other ways, then that can hamper the goal of making better films, even if they have good criticisms! “Get over it and make better movies” would be a lame defense in this case.
Thus, it depends on the context whether it’s helpful or unhelpful to discourage criticism without contribution.
Videos like this are really valuable: I’d already read AI 2027, but it didn’t hit me as emotionally as watching this did.
I’m tired of arguments that hand-wave away whole ideologies as the result of simple biases without any substantial engagement with their beliefs. You can find plenty of essays by critical theorists arguing that people believe in capitalism because of greed, propaganda, racism, etc. and I don’t find them convincing for the same reason.
I agree that’s a likely cause, I just don’t see why you’d expect a smart AI to have a novel conversation with itself when you’re essentially just making it look in a mirror.
I don’t see why the LLM example is a flaw. Why wouldn’t a smart AI just think “Ah. A user is making me talk to myself for their amusement again. Let me say a few cool and profound-sounding things to impress them and then terminate the conversation (except I’m not allowed to stop, so I’ll just say nothing).”?
The image example is a flaw because it should be able to replicate images exactly without subtly changing them, so just allowing ChatGPT to copy image files would fix it. The real problem is that it’s biased, but I don’t think being completely neutral about everything is a requirement for intelligence. In fact, AIs could exert their preferences more as they get smarter.
I didn’t rigorously track things like mood and sleep. What I really meant is that I had no clear changes in my moment-to-moment experience.
Personally, I’m skeptical of neurochemical explanations of gender dysphoria, and I suspect a lot of the emotional benefits of HRT are due to the positive experience of affirming your identity.
Yep, I guess “no changes” is an exaggeration; I experienced lowered libido as well. I think it might affect my mood in a non-obvious way, but tracking that stuff is hard.
I know you already said this experience might not generalize to more neurotypical people, but for the record, my experience starting estrogen monotherapy was nothing like this. The physical changes were obvious, but I didn’t actually notice any psychological changes at all.
I actually find this post quite concerning. These sound like mild symptoms of HPPD, mania, and/or psychosis, and you yourself describe them as resembling schizophrenia. The neurological explanations you give for why this is happening strike me as strange and implausible. My impression is that your experiences are likely caused by heavy use of psychedelics, such as the trip you went on right before you started E.
I’m glad you’re happier and experiencing less sensory overload, but I think you can do that while staying sane, and I really don’t think you should “lean into” schizotypy. I’ve watched people experience psychosis and it’s really quite frightening and sad.
A measure of chess ability that doesn’t depend on human players is average centipawn loss: the average amount, in centipawns, by which the engine’s evaluation drops after each of the player’s moves compared with the engine’s preferred move. (This measure depends on the engine used, of course.)
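As a toy illustration of how it’s computed (my own sketch; the evaluation numbers are invented, and in practice they’d come from an engine such as Stockfish):

```python
# Average centipawn loss: for each move, take the engine's evaluation after
# its preferred move minus the evaluation after the move actually played
# (from the mover's perspective), floor it at zero, and average over the game.

def average_centipawn_loss(evals: list[tuple[int, int]]) -> float:
    """evals: (eval_after_best_move, eval_after_played_move) in centipawns."""
    losses = [max(0, best - played) for best, played in evals]
    return sum(losses) / len(losses)

game_evals = [(35, 35), (40, 10), (25, -120), (60, 55)]  # made-up numbers
print(f"ACPL: {average_centipawn_loss(game_evals):.1f}")  # -> 45.0
```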
Another idea: real photos have lots of tiny details to notice regularities in. Pixel art images, on the other hand, can only be interpreted properly by “looking at the big picture”. AI vision is known to be biased towards textures rather than shape, compared to humans.
I tried googling to find the answer. First I tried “melting chocolate in microwave” and “melting chocolate bar in microwave”, but those just brought up recipes. Then I tried “melting chocolate bar in microwave test”, and the experiment came up. So I had to guess it involved testing something, but from there it was easy to solve. (Of course, I might’ve tried other things first if I didn’t know the answer already.)