Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital—valence shells are only an oversimplified description of the atom. Actually, so oversimplified that no one should bother writing it down. And speaking of the HOMOs of a carbon atom (the highest-[in-energy] occupied molecular orbitals), each holds only one electron.
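For concreteness, carbon’s ground-state electron configuration (standard textbook chemistry, added here for illustration):

$$1s^2\,2s^2\,2p_x^1\,2p_y^1$$

By Hund’s rule the two 2p electrons occupy separate orbitals with parallel spins, which is why each highest-energy occupied orbital holds a single electron.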
I am more interested in optimizations where an agent finds a solution vastly different from what humans would come up with, somehow “cheating” or “hacking” the problem.
Slime mold and soap bubbles produce results quite similar to those of human planners. Anyhow, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems like minimal trees—our visual cortices are quite specialized for this kind of task.
There’s also an important difference in their environment. Underwater (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, no branches or sticks to use for tools, you can’t use gravity to devise traps, there’s no fire, the geology is much simpler, there are few prospects for farming, etc.
After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course, now I can’t imagine not having gotten it earlier.
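The kind of construction that makes this possible is Diffie-Hellman key exchange; a minimal sketch with toy numbers (my example, with deliberately insecure parameters):

```python
import random

# Toy Diffie-Hellman key exchange. The modulus is far too small to be
# secure; the point is only to show what the eavesdropper does and
# doesn't learn.
p = 23  # public prime modulus
g = 5   # public generator mod p

a = random.randrange(2, p - 1)  # Alice's private exponent (never sent)
b = random.randrange(2, p - 1)  # Bob's private exponent (never sent)

A = pow(g, a, p)  # public: Alice sends this to Bob
B = pow(g, b, p)  # public: Bob sends this to Alice

# Both sides combine their own secret with the other's public value and
# arrive at the same number. The middleman sees only p, g, A and B, and
# recovering the shared secret from those is the discrete logarithm problem.
assert pow(B, a, p) == pow(A, b, p)
```

(A purely passive listener learns nothing useful; an active middleman who can replace messages has to be handled with authentication on top.)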
A little correction:
Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body’s chief method of storing chemical energy.
Actually, the above isn’t true. Reactivity is a property of a molecule, not of an element. Elemental phosphorus is prone to being oxidised by atmospheric oxygen, producing lots of energy. ATP is reactive because its anhydride bonds are fairly unstable—but no change of oxidation state takes place. That it contains phosphorus isn’t the actual reason ATP is an easily usable form of storing energy. Salts of phosphoric acid also contain phosphorus while being fairly unreactive. Thus the implication just doesn’t make sense.
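For reference, the reaction in question (standard biochemistry):

$$\mathrm{ATP} + \mathrm{H_2O} \longrightarrow \mathrm{ADP} + \mathrm{P_i}, \qquad \Delta G^{\circ\prime} \approx -30.5\ \mathrm{kJ/mol}$$

Phosphorus is in the +5 oxidation state on both sides; the energy comes from cleaving a phosphoanhydride bond, not from any redox chemistry of the element.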
I’m not willing to engage in a discussion where I defend my guesses and attack your prediction. I have neither sufficient knowledge nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I’ll use this post to address some of your claims, but don’t treat it as an argument about when AI will be created:
How are Ray Kurzweil’s extrapolations empirical data? If I’m not mistaken, all he takes into account is computational power. Why would that be enough for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn’t possible, because the technology wasn’t known. By 2029 we may have the resources (computational power) but still lack the knowledge of how to use them (what programs to run on those supercomputers).
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
I’m saying that, I guess, everybody would agree that AI will not arrive in a month. I’m interested in what basis we have for making such a claim. I’m not trying to make an argument about when AI will arrive; I’m genuinely asking.
You’re right about the comforting factor of AI coming soon; I hadn’t thought of that. But still, development of AI in the near future would probably mean that its creators haven’t solved the friendliness problem. Current methods are very black-box. More than that, I’m a bit concerned about current morality and government control. I’m a bit scared of what the people of today might do with such power. You don’t like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it’s hard to tell where our preferences would point.
If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today.
Are you taking into account that to this day we don’t truly understand the biological mechanisms of memory formation and the development of neural connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and about how close we are to understanding brain dynamics (the creation of new synapses, control of their strength, etc.)?
Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I’m tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and letting him manipulate them? Humans can’t even understand a nematode’s neural network. You expect them to understand the whole 100-billion-neuron human brain?
Sorry for the above; it would need a much longer discussion, but I really don’t have the strength for that.
I hope it is helpful in some way.
Now I think I shouldn’t have mentioned hindsight bias; it doesn’t really fit here. I’m just saying that some events are more likely to become famous, like: a) a layman making an extraordinary claim and ending up being right, b) a group of experts being spectacularly wrong.
If some group of experts had met in the 1960s and posed very cautious claims, the chances are small that it would have ended up widely known, or in the above paper. Analysing famous predictions is bound to turn up many overconfident predictions; they’re just flashier. But that doesn’t yet mean most predictions are overconfident.
Actually, for (2) the optimizer didn’t know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It did in fact start exploiting “bugs” of which its creators were unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as at the moment of collision he is already falling.)
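A sketch of the kind of collision rule that produces such an exploit (hypothetical code, not the actual game’s logic):

```python
def resolve_collision(mario, enemy):
    # Hypothetical stomp check: the game asks only "is Mario falling at
    # the moment of contact?" rather than "is Mario above the enemy?".
    # Touching an enemy from below while already falling therefore
    # counts as a stomp.
    if mario.velocity_y > 0:  # y grows downward, so positive means falling
        enemy.die()
    else:
        mario.take_damage()
```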
Let’s add here that most scientists treat conferences as a form of vacation funded by academia or grant money, so there is a strong bias toward finding reasons for their necessity and/or benefits.
Isn’t this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus’s prediction is that he was somewhat right. If he hadn’t been, the authors wouldn’t have included that data point. Therefore it skews the data, even if that is not their intention.
It’s hard to draw valuable assessments from the text when it is naturally prone to highlight the mistakes of experts and the correct predictions of laymen.
It greatly reminds me of my conlang-making (constructing artificial languages). While I find it creative, it takes enormous amounts of time just to create a simple draft, and arduous work to produce satisfactory material. And all I’d get is two or three people calling it cool and showing only mild interest. And I always know I’ll get bored with the language in a few days and never get as far as translating simple texts.
And yet every now and then I get an amazing idea and can’t stop myself from “wasting” hours planning and writing about some conlang. And I end up unsatisfied.
I don’t think it is about Sunk Cost. It’s more a form of addiction to creative work: some kind of vicious cycle where the brain engages in an activity that just makes you want to do it more. The more you work on it, the more you want to do it, until you reach saturation and just can’t look at it anymore.
Isn’t this a formalization of Pascal’s mugging? It also reminds me of the human sacrifice problem—if we don’t sacrifice a person, the Sun won’t come up the next day. We have no proof, but how can we check?
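In expected-utility terms the structure is (a standard rendering, my notation):

$$p \cdot U > c,$$

which holds for any cost of compliance $c$ and however tiny a probability $p$, provided the claimed payoff satisfies $U > c/p$. The mugger simply names a $U$ large enough to swamp any amount of skepticism.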
“I would not want to be an unconscious automaton!”
I strongly doubt that such a sentence bears any meaning.
Ad 4: “Elite judges” is quite arbitrary. I’d rather iterate the test, each time keeping only those judges who identified the program correctly, or some variant of that (e.g. the top 50% with the most correct guesses); see the sketch below. This way we select those who go beyond simply carrying on a conversation and actually look for differences between program and human. (And as seen from transcripts, most people just try to have a conversation rather than look for flaws.) The drawback is that if the program has a set personality, judges could just stick to identifying that personality rather than human characteristics.
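A sketch of that selection loop (the names and the `score` callback are mine, for illustration):

```python
def select_discriminating_judges(judges, score, rounds=3, keep_fraction=0.5):
    """Iteratively keep the judges who identify the program most reliably.

    `score(judge)` is assumed to return that judge's number of correct
    program-vs-human identifications in one round of trials.
    """
    for _ in range(rounds):
        ranked = sorted(judges, key=score, reverse=True)
        keep = max(1, int(len(ranked) * keep_fraction))
        judges = ranked[:keep]  # e.g. the top 50% with the most correct guesses
    return judges
```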
Another approach might be for the same program-human pair to be examined by 10 judges consecutively, each spending 5 minutes with both. The twist is that judges can leave instructions for the judges after them. So if the program fails to perform “If you want to prove you’re human, simply do nothing for 4 minutes, then re-type this sentence I’ve just written here, skipping every second word”, then every judge after the one who found that flaw can use it to make the right guess.
My favourite method would be to give the bot a simple physics textbook and then ask it to solve a few physics test problems. Even if it weren’t actual AI, it would still prove helluva powerful. Just toss it summarized knowledge of quantum physics and ask it to solve for a GUT. Sadly, most humans wouldn’t pass such a high-school physics test.
is actually the original Turing Test.
EDIT:
is bad. It would exclude many actual AIs, and blind people as well. This is actually a more general problem with the Turing Test: it helps test programs that mimic humans, but not AI in general. For a text-based AI, the senses are alien. You could develop a real intelligence that would fail when asked “How do you like the smell of glass?”. Sure, it can be taught that glass doesn’t smell, but that actually requires superhuman abilities. So while a superintelligence can perfectly mimic a human, a human-level AI wouldn’t pass the Turing Test when asked about sensory matters, just as humans would fail when asked about the nuances of geometry in four dimensions.
Does anybody know if dark matter can be explained as artificial systems built from known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter—most of the universe’s mass has already been used up by alien civs.
A small observation of mine: while watching out for the sunk cost fallacy, it’s easy to go too far and assume that repeating the same purchase is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily—when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending that much money might not be optimal for you.
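A toy version with numbers (mine, for illustration): say you start with 1000 units of money, the TV costs 500, and your utility of money is concave, e.g. $u(w)=\sqrt{w}$. Then

$$u(1000) - u(500) \approx 31.6 - 22.4 = 9.2, \qquad u(500) - u(0) \approx 22.4,$$

so the first purchase only had to be worth about 9.2 utility units to you, while rebuying must clear 22.4. The same TV can be worth buying at the first wealth level and not at the second.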
What is R? LWers use it very often, but Google search doesn’t provide any answers—which isn’t surprising, it’s only one letter.
Also: why is it considered so important?
Or, conversely, the Great Filter doesn’t prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I’m serious here. The zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civilization capable of galaxy colonisation, a powerful enough civilization can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civilizational development is actually having a planet develop on its own.
It wasn’t my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.
My problem with such examples is that they seem more like Dark Arts emotional manipulation than actual argument. What your mind hears is that if you don’t believe in God, people will come to your house and kill your family—and if you believed in God, they wouldn’t do that, because they’d somehow fear God. I don’t see how this is anything but an emotional trick.
I understand that sometimes you need to cut out the nuance in moral thought experiments, like equating taxes with being threatened with kidnapping if you don’t regularly pay a racket. But the opposite failure is conjuring lurid, graphic visions. Watching a loved one be raped is not as bad as losing a loved one—but it creates a much stronger psychological effect, targeted at emotional blackmail.