I understand. I’ll try to keep it more civil.
If you’re making fun of what I’ve expressed about S-risks, go fuck yourself. If you’re not, then I think you’re naive. Anger is the main way change happens. You’ve just been raised in a society that got ravaged by Russian psy-ops, which the elites encouraged in order to weaken the population. It can feel good to uplift others while simultaneously feeling fucking awful knowing that innocent people are suffering.
And just to be fucking clear, if you were making fun of me, please say it like a fucking man and not some fucking castrated male. If you were making fun of me you’re a low T faggot who’s not as smart as he thinks he is. There are 10 million Chinese people smarter than you.
To be clear, I only intend the last paragraph if you were being a bitch. If not then consider that it’s only addressed to a hypothetical cunty version of you.
It’s not fear. It’s anger. Also, good people are rare. The people you think of as good are likely just friendly.
Sure. The people I’m talking about choose to care as much as they do. Good and courageous people could choose to give up hope and stop caring about others, but they choose to care.
I think a lot of people are puzzled by good and courageous people and don’t understand why some people are that way. But I don’t think the answer is that confusing. It comes down to strength of conscience. For some people, the emotional pain of not doing what they think is right hurts 1000x more than any physical pain would. They hate doing what they think is wrong more than they hate any physical pain.
So if you want to be an asshole, you can say that good and courageous people, otherwise known as heroes, do it out of their own self-interest.
I just don’t understand why the people there would lie about something like this. It isn’t even very believable. The guy who founded it looks like a bright ML PhD, and if he’s not telling the truth, why would he throw away his reputation over this? Maybe it’s real, but I’m pretty skeptical. I looked at their Zochi paper, and I don’t see that they offered any proof that the papers they attributed to Zochi were actually written by Zochi.
Is Intology a legitimate research lab? Today they talked about having an AI researcher that performed better than humans on RE-Bench at 64-hour time horizons. This seems really unbelievable to me. The AI system is called Locus.
I think it leads to S-risks. I think people will remain in charge and use AI as a power amplifier. The people most likely to end up with power like having power: they like having control over other people and dominating them. This is completely apparent if you spend the (unpleasant) time reading the Epstein documents that the House has released. We need societal and governmental reform before we even think about playing with any of this technology.
The answer to the world’s problems isn’t a bunch of individuals who are good at puzzles solving one final puzzle, after which we get utopia. It involves people recognizing the humanity of everyone around them and working on societal and governmental reform. And sure, this stuff sounds like a long shot, but we’ve got to try. I wish I had a less vague answer, but I don’t.
I just can’t wrap my head around people who work on AI capabilities or AI control. My worst fear is that AI control works, power inevitably concentrates, and then the people who hold that power abuse it. What is outlandish about this chain of events? It just seems like we’re trading X-risk for S-risks, which seems like an unbelievably stupid idea. Do people just not care? Are they genuinely fine with a world with S-risks as long as it’s not happening to them? That’s completely monstrous, and I can’t wrap my head around it. The people who work at the top labs make me ashamed to be human. It’s a shandah (a disgrace).
This probably won’t make a difference, but I’ll write this anyway. If you’re working on AI control, do you trust the people who will end up in charge of the technology to wield it well? If you don’t, why are you working on AI control?
Another reply; sorry, I just think what you said is super interesting. The insight you shared about Eastern spirituality affecting attitudes towards AI is beautiful. I do wonder if our own Western attitudes towards AI stem from our flawed spiritual beliefs, particularly the idea of a wrathful, judgmental Abrahamic god. I’m not sure it’s a coincidence that someone who was raised as an Orthodox Jew (Eliezer) came to fear AI so much.
On another note, the Old Testament is horrible (I was raised Reform/Californian Jewish; I mention this only because I don’t want to come across as antisemitic). It imbues what should be the greatest source of beauty with our weakest, most immature impulses. The New Testament’s emphasis on mercy is a big, beautiful improvement, but even then I don’t like the Book of Revelation talking about casting sinners into a lake of fire.
Those are all good points. Well I hope these things are nice.
I really don’t think it’s crazy to believe that humans figure out a way to control AGI, at least. There’s enormous financial incentive for it, and power-hungry capitalists want that massive force multiplier. There are also a bunch of mega-talented technical people hacking away at the problem. OpenAI is trying to recruit a ton of quants as well, so by throwing thousands of the greatest minds alive at the problem, they might figure it out. (Obviously one might take issue with calling quants “the greatest minds alive”; if you don’t like that, replace it with “super driven, super smart people.”)
I also think it’s possible that the U.S. and China are already talking behind the scenes about a ban on superintelligence, though not on AGI. That’s just a guess, though. A ban on superintelligence specifically seems plausible because it’s much more intuitive that you can’t control a superintelligence, whereas AGI lets you stop having to pay wages and makes you enormously rich without the worry of being outsmarted.
Fun Fact of the Day: Kanye West’s WAIS score is within two points of a Fields Medalist’s (the Fields Medalist is Richard Borcherds; their respective IQs are 135 and 137).
Extra Fun Fact: Kanye West was bragging about this to Donald Trump in the Oval Office. He revealed that his digit span was only 92.5 (which is what makes me think he actually had a psychologist-administered WAIS).
Extra Extra Fun Fact: Richard Borcherds was administered the WAIS-R by Sacha Baron Cohen’s first cousin.
Thank you so much! I will contact her.
I am pretty good at math. At a T20 math program, I was chosen for special mentorship and research opportunities over several people who made Top 500 on the Putnam because I was deemed “more talented” (as nebulous as that phrase is: I was significantly faster in lectures than they were, digested graduate texts much more quickly, and solved competition-style problems they couldn’t). My undergrad was interrupted by a health crisis, so I never got a chance to actually engage in research or dedicated Putnam prep, but I believe most (maybe all, if I’m being vain) of my professors would have considered me the brightest student in my year. I don’t know a lot about programming or ML at this point, but I am confident I could learn. I’m two years into my undergrad and will likely be returning next year.
I’m weighing my career options, and the two issues that seem most important to me are factory farming and preventing misuse/s-risks from AI. Working for a lab-grown meat startup seems like a very high-impact line of work that could also be technically interesting. I think I would enjoy that career a lot.
However, I believe that S-risks from human misuse of AI and neuroscience introduce scenarios that dwarf factory farming in awfulness. There are lots of incredibly intelligent people working on figuring out how to align AIs to who or what we want, but I don’t think there’s nearly the same amount of effort being put toward the coordination problem and preventing misuse. So naturally, I’d really like to work on solving this, but I just don’t even know how I’d start tackling the problem. It seems much harder and much less straightforward than “help make lab-grown meat cheap enough to end factory farming.” So, any advice would be appreciated.
What do you mean by “solve alignment”? What is your optimal world? What you consider “near-optimal flourishing” is likely very different from many other people’s ideas of near-optimal flourishing. I think people working on alignment are just punting on this issue right now while they figure out how to implement intent and value alignment, but I assume there will be a lot of conflict about which values a model will be aligned to, and whom it will be aligned to, if/when we have the technical ability to align powerful AIs.
I think that the woman you met on FEELD was engaging in wishful thinking. I do not understand the line of reasoning that supports the conclusion that the concentration of power will stop at “people who work at a leading AI lab.” Why would it stop there?
But haven’t you read about the BSTc findings? It’s a sexually dimorphic region in the “lizard brain,” and trans women’s BSTc regions were similar to cis women’s, while trans men’s were similar to cis men’s. The findings were controlled for HRT as well.
There’s no solid proof for it yet, but the idea that something went wrong during fetal development, where the body masculinized but the brain feminized or vice versa, makes the most sense to me.
I got into reading about near-death experiences, and a common theme is that we’re all one. Like each and every one of us is really just part of some omniscient god, one so omniscient and great that “god” isn’t even a good enough name for it, experiencing what it’s like to be small. Sure, why not. That’s sort of intuitive to me. Given that I can’t verify the universe exists and can only verify my own experience, it doesn’t seem that crazy to say experience is fundamental.
But if that’s the case then I’m just left with an overwhelming sense of why. Why make a universe with three spatial dimensions? Why make yourself experience suffering? Why make yourself experience hate? Why filter your consciousness through a talking chimpanzee? If I’m an omniscient entity why would I choose this? Surely there’s got to be infinitely more interesting things to do. If we’re all god then surely we’d never get bored just doing god things.
So you can take the obvious answer: everything exists. But then you’re left with other questions. Why are we in a universe that makes sense? Why don’t we live in a cartoon operating on cartoon logic? Does that mean there’s a sentient SpongeBob? And then there’s the more pressing concern of astronomical suffering: are there universes where people are experiencing hyperpain? Surely god wouldn’t want to experience I Have No Mouth and I Must Scream. It doesn’t seem likely to me that there are sentiences living in cartoons, so I’ll use that to take the psychologically comforting position that not everything we can imagine exists.
But if that’s the case, then why this? Why this universe? Why this amount of suffering? If there’s a no-go zone of experience, where is it? I have so many questions, and I don’t know where the answers are.