Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host at most two electrons. But at the same time, there is no outermost orbital: valence shells are only an oversimplified description of the atom. So oversimplified, in fact, that no one should bother writing it down. And if we speak of the HOMOs of a carbon atom (the highest-energy occupied molecular orbitals), each holds only one electron.
Jan_Rzymkowski
Surprising examples of non-human optimization
Paperclip Maximizer Revisited
Bayesian conundrum
The Waker—new mode of existence
I am more interested in optimizations where an agent finds a solution vastly different from what humans would come up with, somehow “cheating” or “hacking” the problem.
Slime mold and soap bubbles produce results quite similar to those of human planners. In any case, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems like minimal trees: our visual cortices are quite specialized for this kind of task.
There’s also an important difference in their environment. Underwater (oceans, seas, lagoons) seems much poorer. There are no trees to climb, no branches or sticks that could be used for tools, you can’t use gravity to devise traps, there’s no fire, the geology is much simpler, there are little prospects for farming, etc.
Prediction of the Internet
After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work, and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can’t imagine not getting it earlier.
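For anyone wanting the concrete mechanism, here is a toy sketch of Diffie–Hellman key exchange, which I assume is the kind of scheme being described (the comment doesn’t name one). The numbers are deliberately tiny; real systems use enormous primes and vetted cryptographic libraries.

```python
# Toy Diffie-Hellman key exchange. Eve sees everything sent over the
# wire (p, g, A, B) from the very start, yet cannot recover the secret
# without solving a discrete logarithm.
p = 23  # public prime modulus (illustratively small)
g = 5   # public generator

a = 6   # Alice's private number, never transmitted
b = 15  # Bob's private number, never transmitted

A = pow(g, a, p)  # Alice sends A = g^a mod p publicly
B = pow(g, b, p)  # Bob sends B = g^b mod p publicly

alice_secret = pow(B, a, p)  # (g^b)^a mod p
bob_secret = pow(A, b, p)    # (g^a)^b mod p
assert alice_secret == bob_secret  # both arrive at the same shared key
```

The trick is that exponentiation commutes, so both sides compute g^(ab) mod p, while the eavesdropper holds only g^a and g^b.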
Little correction:
Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body’s chief method of storing chemical energy.
Actually, the above isn’t true. Reactivity is a property of a molecule, not of an element. Elemental phosphorus is prone to oxidation by atmospheric oxygen, which releases lots of energy. ATP is reactive because its anhydride bonds are fairly unstable, but no change of oxidation state takes place there. The fact that it contains phosphorus isn’t the actual reason ATP is an easily usable form of storing energy: salts of phosphoric acid also contain phosphorus, while being fairly unreactive. Thus the implication just doesn’t make sense.
I’m not willing to engage in a discussion where I defend my guesses and attack your prediction. I have neither sufficient knowledge nor a desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I’ll use this post to address some of your claims, but don’t treat that as an argument about when AI will be created:
How are Ray Kurzweil’s extrapolations empirical data? If I’m not wrong, all he takes into account is computational power. Why would that be enough to allow for the creation of AI? By 1900 the world had enough resources to create computers, and yet it wasn’t possible, because the technology wasn’t known. By 2029 we may have the proper resources (computational power), but still lack the knowledge of how to use them (what programs to run on those supercomputers).
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
I’m saying that, I guess, everybody would agree that AI will not arrive in a month. I’m interested in what basis we have for making such a claim. I’m not trying to make an argument about when AI will arrive; I’m genuinely asking.
You’re right about the comforting factor of AI coming soon; I hadn’t thought of that. But still, development of AI in the near future would probably mean that its creators haven’t solved the friendliness problem. Current methods are very black-box. More than that, I’m a bit concerned about current morality and government control. I’m a bit scared of what the people of today might do with such power. You don’t like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it’s hard to tell where our preferences would point.
If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030′s on technological pathways that can be foreseen today.
Are you taking into account that to this day we don’t truly understand the biological mechanism of memory formation and the development of neuron connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and about how close we are to understanding brain dynamics? (The creation of new synapses, control of their strength, etc.)
Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I’m tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them? Humans can’t even understand a nematode’s neural network. You expect them to understand a whole human brain of 100 billion neurons?
Sorry for the above; it would need a much longer discussion, but I really don’t have the strength for that.
I hope it will be helpful in some way.
Tips for writing philosophical texts
Now I think I shouldn’t have mentioned hindsight bias; it doesn’t really fit here. I’m just saying that some events are more likely to become famous, like: a) laymen posing an extraordinary claim and ending up being right, b) a group of experts being spectacularly wrong.
If some group of experts had met in the 1960s and posed very cautious claims, chances are small that this would have ended up widely known, or in the above paper. Analysing famous predictions is bound to turn up many overconfident ones: they’re just more flashy. But that doesn’t yet mean most predictions are overconfident.
Actually, for (2) the optimizer didn’t know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting “bugs” of which its creators were unaware. (E.g., in Super Mario, Mario can stomp enemies in mid-air, from below, as long as at the moment of collision he is already falling.)
Let’s add here that most scientists treat conferences as a form of vacation funded by academia or grant money. So there is a strong bias to find reasons for their necessity and/or benefits.
Isn’t this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus’s prediction is that he was somewhat right. If he weren’t, the authors wouldn’t have included that data point. Therefore it skews the data, even if that is not their intention.
It’s hard to take valuable assessments from the text when it would naturally be prone to highlight mistakes by experts and correct predictions by laymen.
It reminds me greatly of my making of conlangs (artificial languages). While I find it creative, it takes immense amounts of time just to create a simple draft, and arduous work to make satisfactory material. And all I’d get is two or three people calling it cool and showing only a little interest. And I always know I’ll get bored with the language in a few days and never get as far as translating simple texts.
And yet every now and then I get an amazing idea and can’t stop myself from “wasting” hours planning and writing about some conlang. And I end up unsatisfied.
I don’t think it is about Sunk Cost. It’s more about a form of addiction to creative work. Some kind of vicious cycle, where the brain engages in an activity that just makes you want to do that activity more. The more you work on it, the more you want to do it, until you reach saturation, when you just can’t look at it anymore.
Warsaw – ACX Meetups Everywhere Spring 2024
Isn’t this a formalization of Pascal’s mugging? It also reminds me of the human sacrifice problem: if we don’t sacrifice a person, the Sun won’t come up the next day. We have no proof, but how can we check?
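The structure behind Pascal’s mugging can be sketched in a few lines: a naive expected-value calculation lets a minuscule probability attached to an astronomically large stake dominate the decision. All numbers below are made up purely for illustration.

```python
# Toy expected-value calculation with the Pascal's-mugging structure.
# Every number here is an illustrative assumption, not from the text.
p_threat_is_real = 1e-30   # tiny credence that the mugger's threat is real
lives_at_stake = 3 ** 100  # absurdly large claimed stake
cost_of_paying = 5         # utility lost by handing over $5

ev_of_paying = p_threat_is_real * lives_at_stake - cost_of_paying
# Despite the near-zero probability, the huge stake swamps the cost,
# so naive expected value says to pay.
print(ev_of_paying > 0)
```

The same arithmetic applies to the human-sacrifice example: an unfalsifiable threat with a large enough stake can dominate any finite cost, which is exactly what makes the argument pattern suspect.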
My problem with such examples is that they seem more like Dark Arts emotional manipulation than an actual argument. What your mind hears is that if you don’t believe in God, people will come to your house and kill your family, and if you believed in God they wouldn’t do that, because they’d somehow fear God. I don’t see how this is anything but an emotional trick.
I understand that sometimes you need to cut out the nuance in morality thought experiments, like equating taxes with being threatened with kidnapping if you don’t regularly pay a racket. But the opposite thing is creating lurid graphic visions. Watching a loved one raped is not as bad as losing a loved one, but it creates a much stronger psychological effect, targeted to elicit emotional blackmail.