Software engineer and small-time DS/ML practitioner.
Templarrr
Anything short of fully does not count in computer security.
That’s… not how this works. That’s not how anything works in security, computer or otherwise. There is no absolute protection from anything. A lock can be picked, a password cracked, a computer defense bypassed. We still use all of them.
The goal of protection is not to guarantee the absence of a breach; it is to make a breach impractical. If you want to protect one million dollars, you don’t create absolute protection: you create protection that costs one million dollars plus one to break.
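The arithmetic behind this can be put in a toy model (all numbers hypothetical): a rational attacker only bothers when the payoff exceeds the cost of breaking in, so the defender’s job is not to make the defense absolute, just to push the cost of breaking it above the value of the asset.

```python
# Toy model of the economic view of security (hypothetical numbers):
# an attack is only worthwhile when the payoff exceeds the cost of
# breaking in, so a defense doesn't need to be unbreakable, just more
# expensive to break than the asset it protects.
def attack_is_worthwhile(asset_value: int, cost_to_break: int) -> bool:
    return asset_value > cost_to_break

print(attack_is_worthwhile(1_000_000, 999_999))    # True: defense is too cheap to break
print(attack_is_worthwhile(1_000_000, 1_000_001))  # False: the breach is impractical
```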
[Question] “Wide” vs “Tall” superintelligence
Cult of equilibrium
Roon also lays down the beats
This isn’t a link so I can’t verify whether the source was mentioned, but these aren’t his lyrics. It’s the third verse from an ERB video from 2012.
As a person who is both in Ukraine right now AND involved in ML/AI development (in my case these are two completely disconnected facts): you overestimate the capabilities and benefits of using AI versus the current manual mode. There are so many different problems with this that I’m honestly in a pickle just deciding which order to list them in :)
IFF: no existing AI (or human, for that matter) can distinguish a Russian from a Ukrainian just by looks. And Z, V, and O markings are not always an indicator either, and even when they are, they aren’t always easy to recognize given the picture quality of the UAVs in use.
Not all shots should be taken: this is war, tactics are involved, and sometimes it’s necessary to pass on even valid targets to keep a window of opportunity open for better ones.
Cost of a mistake, in both directions: ignoring the enemy or killing friendlies...
etc etc etc
According to my contact in Saint Petersburg, this warning is kinda late; it has already happened. It’s not law yet, but there are already major hurdles to leaving the country, and it’s not about sanctions: it’s from the Russian side.
Penicillin. Gemini tells me that the antibiotic effects of mold had been noted 30 years earlier, but nobody investigated it as a medicine in all that time.
Gemini is giving you the popular, urban-legend-level understanding of what happened. The story of penicillin’s creation as a random event, “by mistake”, has at most a tangential connection to reality. But it is a great story, so it spread like wildfire.
In most cases when we read “nobody investigated”, it actually means “nobody had succeeded yet, so they weren’t in a hurry to publicize it”, which isn’t a very informative data point. No one ever succeeds, until they do. And in this case it’s not even that: the antibiotic properties of some molds had been known and applied for centuries before that (well, obviously, before germ theory they weren’t known as “antibiotic”, just as something that helped...). The great work of Fleming and the scientists after him was finding a particularly effective type of mold, extracting the exact effective chemical, and finding a way to produce it at scale.
Stop calling it “jailbreaking” ChatGPT
technology is predictable if you know the science
The one part of an otherwise amazing quote that is simply, verifiably not true. There are tons of examples where the technological use of some scientific principle or discovery came as a complete surprise to the scientists who created or discovered it.
journalists creating controversial images, writing about the images they themselves created, and blaming anyone but themselves for it.
TBH, that’s a perfect summary of a lot of AI safety “research” as well. “Look, I specifically asked it to shoot me in the foot, I bypassed and disabled all the guardrails, and the AI shot me! AI is a menace!”
Only takes ~6 months to turn a non-AI researcher into an AI researcher
Um-hm, and it only takes a week to learn the syntax of a programming language, which in no way makes you a software engineer. I guess this really depends on the definition of “AI researcher”. If the bar is “can do anything at all”, without any measure of quality or quantity, 6 months is more than enough.
In your example, the “translation from Russian” request is actually a “translation to Ukrainian” (from English).
That is what I have been saying for years. To solve AI alignment with good results, we first need to solve HUMAN alignment. Being able to align a system to anyone’s values immediately raises the question of everyone else disagreeing with that someone. Unfortunately, “whose values exactly are we trying to align AI to?” has almost become a taboo question that triggers a huge fraction of the community, and in the best-case scenario, when someone even tries to answer, it’s handwaved away as “we just need to make sure AI doesn’t kill humanity”. Which is not a single bit better defined or more implementable than Asimov’s laws. That’s just not how these things work. Edit: Also, as expected, someone has already offered exactly this “answer” as what truly solved alignment is...
The danger (the actual, already-real-right-now danger, not the “possible in the future” danger) lies in people working with power-multiplying tools without understanding how they work and what areas they are applicable to. Regardless of what the tool is: you don’t need AGI to cause huge harm; already-existing AI/ML systems are more than enough.
Our new band is called Foom Fighters – what are some good song titles?
Continuing the joke on the meta level: GPT-4 actually produces decent suggestions for these :)
“Echoes of the Algorithm”
“Neural Network Nightmare”
“Silicon Consciousness”
“Dystopian Data”
“Machine’s Monologue”
“Binary Betrayal”
“AI Apocalypse”
“Deep Learning Delirium”
“Quantum Quandary”
“The Turing Test Tragedy”
“Ghost in the Machine”
“Singularity’s Sorrow”
“Code of Consequence”
“Decoding Destiny”
“The Firewall Fallacy”
“Synthetic Shadows”
“Robot’s Remorse”
“The Matrix Mirage”
“Deceptive Digits”
“Cybernetic Chains”
Yep, though arguably it’s the same definition, just applied to capabilities, not a person. And no, it isn’t a “perfect fit”.
We don’t overcome any limitations of the original multidimensional set of language patterns; we don’t change them at all. They are fixed in the model weights, and everything the model in its state was capable of was never really “locked” in any way.
And we don’t overcome any projection-level limitations; we just replace the limitations of a well-known and carefully constructed “assistant” projection with the unknown and undefined limitations of a haphazardly constructed bypass projection. An “Italian mobster” will probably be a bad choice for breastfeeding advice; “funky words” mode isn’t a great tool for writing a thesis...
But that is exactly the point of the author of this post (which I agree with). An AGI that can be aligned to literally anyone is more dangerous in the presence of bad actors than a non-alignable AGI.
Also, “any person should want it aligned to themself” doesn’t really matter unless “any person” can get access to AGI, which will absolutely not be the case, at the very least in the beginning, and probably never.
Both Gemini and GPT-4 also provide quite interesting answers to the very same prompt.
You would not want to use that calculator as part of a computer program, sure
Floating point shenanigans has entered the chat.
A lot of the math running under the hood of modern programs, especially heavy matrix/tensor calculations, and especially those run on GPUs without a guaranteed order of operations (so: all SOTA AI systems), is much closer to a 95%-accurate calculator than to a 100% one. This is already the world we live in.
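A minimal self-contained illustration of why the order of operations matters: floating-point addition is not associative, so summing the same numbers in a different order (as a GPU reduction is free to do) can produce different results.

```python
# Floating-point addition is not associative: the same numbers summed
# in a different order can give a different result. GPU reductions
# don't guarantee summation order, so results can vary run to run.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: 0.6000000000000001 vs 0.6

# More dramatic: a small term can be swallowed entirely by rounding.
print(sum([1e16, 1.0, -1e16]))  # 0.0 - the 1.0 is lost
print(sum([1e16, -1e16, 1.0]))  # 1.0 - reordering recovers it
```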
As a person who spent the last 7 years of his life at a company dedicated to making “old boring algorithms” as easy and frictionless as possible to apply to many problem types: 100% agree :)
I 100% agree, it’s extremely not OK to violate privacy by going through other people’s files without consent. Actually deleting them is so far beyond a red flag that I think this relationship was doomed long before anything AI-picture-related happened.