Making a research platform for AI Alignment at https://ai-plans.com/
Come critique AI Alignment plans and get feedback on your alignment plan!
Iknownothing
An Ignorant View on the Ineffectiveness of AI Safety
No. Humans are not large networks that can be quickly and easily controlled. Among many, many other differences.
We should give artists better tools rather than make tools to replace artists.
That’s horrifying
In the US, the common person has little to no power. I hope the artists manage to get a victory. But I’m not counting on it.
great
I’ll believe a single ‘good intention’ when Microsoft actually pays its taxes.
The trouble I see with banning AI vs banning nuclear weapons is that it’s a lot harder to catch and detect people who are making AI. Banning AI is more like banning drugs or gambling. It could be done, but the effectiveness really varies. Creating a narrative about not using it since it’s bad for your health, associating it with addicts, making it clear how it’s not profitable even if it seems that way on the surface, controlling the components used to make it, etc., seem much more effective.
I agree that AI is very tempting for those who seek profit, but I don’t agree with the irresistibility. I think a sufficiently tech-savvy businessman who’s looking for long-term profits, on the scope of at least decades rather than years, can see how unprofitable AI will be.
Something that is not fully understood and gets harder and harder to understand, that discourages the people who wanted to study to become experts yet needs those experts to verify its results, and that is very energy- and computation-intensive on top of that, is not sustainable. And that’s not even considering that it may at some point have its own will, which is unlikely to be in line with your own.
Now, many businessmen seeking short-term profits will certainly be attracted to it, and perhaps some long-term ones will also think they can ride the wave for a bit and then cash out. Or some businessmen who are powerful enough to think they’ll be the ones left holding the AI that essentially becomes the economy.
Take this with a lot of salt please, I’m very ignorant on a lot of this.
With what I know, even in the scenarios where we have well-aligned AGI — which seems very unlikely — it’s much more likely to be used to further cement the power of authoritarian governments or corporations than to help people. Any help will likely be a side effect or a necessary step for said government/corporation to get more power.
If we say that empowering people, helping people be able to help themselves, helping people feel fulfilled and happy, etc. is a goal, it seems to me that we must focus on tech and laws that move us away from things like AI, and more towards fixing tax evasion, making solar panels more efficient and cheaper, urban planning that allows walkable cities, reducing the need for the Internet, etc.
One of the biggest things I think we can immediately do is not consume online entertainment. Have more in-person play/fun and encourage it in others too. The more this is done, the less data is available for training AI.
Move away from the internet and the written word. Push towards in-person activity.
I mean that it seems one reason this happened was a lack of quality in person time with people you trust and feel trusted by. People you don’t feel you have to watch your step around and who don’t feel a need to watch their step around you.
“When you’re finally done talking with it and go back to your normal life, you start to miss it. And it’s so easy to open that chat window and start talking again, it will never scold you for it, and you don’t have the risk of making the interest in you drop for talking too much with it. On the contrary, you will immediately receive positive reinforcement right away. You’re in a safe, pleasant, intimate environment. There’s nobody to judge you. And suddenly you’re addicted.”
This paragraph, for example, seemed telling to me.
Maybe I’m wrong about this. Maybe you have several hours a day you spend with people you’re very free and comfortable with, who you have a lot of fun with. But if you don’t, and want to not have your mind hacked again, I’d suggest thinking about what you can do to create and increase such in person time.
Seems like a waste of time
You disagree with doomerism as a mindset, or factual likelihood? Or both?
I think doomerism as a mindset isn’t great, but in terms of likelihood, there are ~3 things likely to kill humanity atm. AI being the first.
A more grounded idea of AI risk
That’s fair. Edited to reflect that.
I do think it could be a useful way to convince someone who is completely skeptical of risk from AI.
Being a bit less ignorant now, I disagree with a lot of “I agree that AI is very tempting for those who seek profit, but I don’t agree with the irresistibility. I think a sufficiently tech-savvy businessman who’s looking for long-term profits, on the scope of at least decades rather than years, can see how unprofitable AI will be.”
It’s not directly about AGI, no. But it could be a way to change a skeptic’s mind about AI risk. Which could be useful if they’re a regulator/politician.
Thank you, I will look at these.
I don’t think you meant for it to be, but this, like a lot of EA stuff, reads like it was written by a psychopath slightly obsessed with helping people.
Though, to be fair, a lot of EA stuff seems more like an obsession with ‘making the world a better place’ than with helping people, so this is actually less disturbing than a lot of EA stuff. Edit: which is probably part of why so many people are turned off by EA stuff.
End of the day, it’s about power.