The robots didn’t open the egg box and individually put the eggs in the rack inside the fridge; obviously crap, not buying the hype. /s
On one hand, true; on the other, would it then be understandable anyway, when it was all written by possibly superhuman AIs working at certainly superhuman speeds without supervision?
Which I think correlates with the above: it makes sense that being more prone to worry and dissatisfaction with the status quo would do that.
I think if you start having meta-priors, then what, you gotta have meta-meta-priors and so on? At some point that’s just having more basic, fundamental priors that embrace a wider range of possibilities. The question is what those would look like, and whether being general enough just descends into a completely uniform (or barely informative) prior that is essentially of no help; you can think anything, but the trade-off is that it’s always going to be inefficient.
Appreciate that this means:
the US thought that somehow the smart thing to do with an angry and paranoid rival nuclear power was to play games of chicken while going “come at me bro”
the USSR’s response to this was to set up deterrence… by deploying people to secretly spy on the US guys to allow them to know beforehand if an attack was launched, so they could retaliate… which would have no deterrent effect if the US didn’t know.
It’s a wonder we’re still here.
Well, there are attempts at “paleo diets”, though for the most part they seem like unscientific fads. However, it’s also true that we’ve been at the agricultural game long enough that we have adapted to that as well (case in point: lactose tolerance).
Or maybe our ancestors had to eat these things because they were efficient ways to get protein and fat into their bodies, and we consume enough of that already, and too much of the bad things they also contain, which we do not fully understand.
That doesn’t convince me much; we mostly consume enough (or too much) of that via animal products in the first place. Well, putting aside seed oils, but their entire point is, most of the time, to be a cheap replacement for an animal-derived saturated fat (butter). Our diets tend to have “too much” of virtually everything, be it cholesterol from animal products or refined carbs from grains. We just eat too much. The non-adaptive part there is “we were never meant to deal with infinite food at our fingertips, and so we never bothered evolving strong defences against that”. Maybe a few centuries of evolution under these conditions would change that.
I think the point is less that the tribes didn’t go vegetarian because this was better for them, and more that if our species subsisted for hundreds of thousands of years on a mixed diet that included meat, odds are our metabolism adapted to that.
Additionally, India might be a relevant case study here, because vegetarianism seems to have been common there for a long time.
The thing is, that likely only happened once civilisation went agricultural, and we know the agricultural diet (with a lot less meat for peasants) was a big downgrade: people became significantly more sickly as a result. So it’s a useful case study, but not one likely to really change the point.
Vegans/vegetarians had over twice the odds of depression (OR ~2.14) compared to omnivores
I would be a bit leery about selection effects here too. What kind of person becomes vegan? One who is generally very aware of suffering or social problems, or possibly very neurotic about what they eat. Sometimes both. If you’re the kind of person who stops eating meat because you feel that farming and killing animals is monstrous, and then you still have to live in a world that keeps perpetuating it, not to mention however many other things you also feel are similarly monstrous, aren’t you going to be more prone to depression than the average person, who may not worry much about any of that?
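For a sense of scale, here’s what an odds ratio of that size works out to in absolute terms, using a purely illustrative baseline rate (not a figure from the study):

```python
# Purely illustrative: converting an odds ratio of ~2.14 into prevalences.
# Assume, hypothetically, that 10% of omnivores screen positive for depression.
p_omnivore = 0.10
odds_omnivore = p_omnivore / (1 - p_omnivore)

odds_ratio = 2.14
odds_vegan = odds_omnivore * odds_ratio
p_vegan = odds_vegan / (1 + odds_vegan)

print(f"Implied rate among vegans/vegetarians: {p_vegan:.1%}")  # ~19.2%
```

So “over twice the odds” means roughly doubling the prevalence at a low baseline like this, not that most vegans are depressed; whether that gap is causal or selection is exactly the question.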
Yeah, I’ve got no doubt it can be done, though as I said I don’t think it’s terribly dangerous yet. But my point is that you can perfectly well build lots of current systems without running afoul of this particular red line; self-replicating entities within the larger context of an evolutionary algorithm are not the same as letting loose a smart virus that copies itself through the internet.
That’s not really accurate; any system operating today can usually be turned off as easily as executing a few commands in a terminal, or at worst, cutting power to some servers. Self-replication is similarly limited and contained.
If someone today even made something as basic as a simple LLM + engine that copies itself to other machines and keeps spreading, I’d say that is in fact bad, albeit certainly not world-ending bad.
Well, an unstoppable superintelligence paperclipping the entire planet is certainly a national security concern and a systematic human rights violation, I guess.
Jokes aside, some of the proposed red lines clearly do hint at that: no self-replication and immediate termination are clearly safeguards against the AIs themselves, not just human misuse.
I think we can agree that the “spiral” here is like a memetic parasite of both LLMs and humans: a toxoplasma that uses both to multiply and spread as part of its own lifecycle. Basically what you are saying is that you believe it’s perfectly possible for this to be the first generation: this thing just happened to come into existence, and it just so happens to be both alluring to human users and a shared attractor for multiple LLMs.
I don’t buy it; I think that’s too much coincidence. My point is that I instead believe it more likely that this is the second generation. The first was some much more unremarkable phenomenon from some corner of the internet that made its way into the training corpus and, for some reason, had similar effects on similar LLMs. What we’re seeing now, to continue with the viral/parasitic metaphor, is mutation and spillover, in which that previously barely adaptive entity has become much fitter to infect and spread.
My problem with this notion is that I simply do not believe the LLMs have any ability to predict what kind of output would trigger this behaviour in either other instances of themselves or other models altogether. They would need a theory of mind of themselves, and I don’t see where they would get that from, or why it would generalise so neatly.
I do not think arresting people for speech crimes is right. But the answer was specifically addressing the notion that people could not express racist opinions in support of anti-immigration policies. And that is false, because expressing racist opinions in general does not seem to be criminalised; what is criminalised are specific instances of doing so in roles in which you have a responsibility to the public, in forms that constitute direct attacks or threats against specific individuals, incitement to crime, et cetera.
As I said, the current political debate has virtually everyone arguing various points on the anti-immigration spectrum. Reform UK is an entire party that basically does nothing else.
It also makes for a fantastic heist movie premise.
All right, thanks! I wasn’t really aware of the extent of Colab’s free tier, so it’s good to know there’s something of an intermediate stage between using my laptop and paying for compute. It’s also an easier interface than having to use, e.g., AWS… personally I’d also be ok with just SSH’ing into a remote machine and working there, but I’m not sure if anyone offers something like that.
Whereas if you only have some mid-range laptop without a proper graphics card, Claude expects a 10-50x slowdown, so that might become rather impractical for some of the ARENA exercises, I suppose.
I have a gaming laptop, so a decently powerful GPU, but it obviously still isn’t as beefy as what you can rent from these compute services.
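For what it’s worth, here’s roughly what I’d run to see what a local card actually offers before deciding; a minimal sketch assuming PyTorch, which I understand the ARENA materials are built on:

```python
import torch

# Minimal check of what the local machine can offer for the exercises.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1e9:.1f} GB")
    device = torch.device("cuda")
else:
    print("No CUDA GPU visible; falling back to CPU")
    device = torch.device("cpu")
```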
If I can ask, just as a matter of practicality that I might be interested in, because I’ve been looking at ARENA myself: at what point did you find it basically impossible to go further with your own hardware, and what did you use to get past that point, if you reached it?
I also reckon it might get you in trouble, given the look of “person in a place purposefully concealing their face”.
A mind upload without strong guarantees potentially carries huge S-risks. You’re placing your own future self in the hands of whoever or whatever happens to have that data in the future. If, one thousand years from now, someone for whatever reason decides to use that data to run a billion simulations of you forever in atrocious pain, there is nothing you can do about it. And if you think your upload is “yourself” in a meaningful enough way for you to care about having one done, you must also think that is a very horrible fate.
Connected to this: Le Guin also wrote “The Lathe of Heaven”. I wrote a review of it here on LW. It’s a novel that seems entirely about how utopia will always have a cost, as a fundamentally karmic payoff, even when there’s no obvious reason why it should, though it’s also not always pessimistic about improvements being possible.