Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep ’em coming!
Anders Lindström
Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an “AI-washing” attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback if he thinks they should ban football because of the risk of serious brain injuries: of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is “aligned” to their vision and that they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:
“AI will not make humans redundant.”
“AI is not an existential risk.”
...
I am not so sure that Xi would like to get to AGI any time soon. At least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the labor market in China could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think that the export ban on advanced GPUs is a boon to him for the time being.
The Paper Clip
Scene: The earth
Characters: A, an anti-humanist
B, a pro-humanist
A: “We need to reduce the population by 90-95% to not deplete all resources and destroy the ecosystem.”
B: “We need a larger population so we get more smart people, more geniuses, more productive people.”
(Enter ASI)
ASI: “Solved. What else can I help you with today?”
Imagine having a context window that fits something like PubMed or even The Pile (but that's a bit into the future...). What would you be able to find in there that no one could see using traditional literature review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that could handle a 100-1000 million token context window if they wanted, or perhaps they already have one for internal research? It's “just” 10x+ of what they said they have experimented with, with no mention of any special purpose-built tech.
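To get a feel for why a 100-1000 million token window would need special-purpose hardware, here is a rough back-of-envelope sketch (my own assumptions: naive dense attention, fp16, a single head and layer; real long-context systems avoid ever materializing this matrix):

```python
# Rough back-of-envelope: size of a naively materialized attention
# score matrix (n x n entries) at different context lengths.
# Assumes fp16 (2 bytes per entry) and a single head/layer; real
# systems use tricks like FlashAttention or ring attention that
# never store the full matrix, so this is an upper bound on naivety.

def naive_score_matrix_bytes(n_tokens: int, bytes_per_entry: int = 2) -> int:
    return n_tokens ** 2 * bytes_per_entry

for n in (1_000_000, 100_000_000, 1_000_000_000):
    petabytes = naive_score_matrix_bytes(n) / 1e15
    print(f"{n:>13,} tokens -> {petabytes:,.3f} PB per head/layer")
```

Even at 100 million tokens the quadratic term alone is tens of petabytes per head per layer, so any such machine would have to lean on sparse or streaming attention rather than brute force.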
Dagon, thank you for following up on my comment,
yes, they are in some ways oranges and apples, but both of them put a limit on your possibility to create things. One can argue that immaterial rights have been beneficial for humanity as a whole, but it is at the same time criminalizing one of our most natural instincts, which is to mimic and copy what other humans do to increase our chance of survival. Which leads to the next question: would people stop innovating and creating if they could not protect it?
Dagon, yes, that seems like a reasonable setup. It's pretty amazing that world- and life-altering inventions get a protection for a maximum of 20 years from the filing date, whereas if someone doodles something on a paper it gets a protection that lasts the life of the author plus 70 years. But… maybe the culture war is more important to win than the technology war?
Anyways, with the content explosion on the internet, I would assume that pretty much every permutation of everything that you can think of is now effectively copyrighted well into the foreseeable future. Will that minefield prove to be the reason to reform copyright law so that it fits a digital mass-creation age?
Thank you Gerald Monroe for explaining your thoughts further,
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT ok. People can't dabble with anthrax or Ebola at home for obvious reasons: they can't control it! But with AI anything goes and is, if anything, encouraged by governments, universities, VCs, etc.
Logan Zoellner thank you for your question,
In my view we need more research, not people who draw inferences on extremely complex matters from what random people without that knowledge bet on a given day. It's maybe fun entertainment, but it does not say anything about anything. I do not assign any probabilities. To me, this whole game of assigning probabilities surrounding x-risk and AI safety in general is just silly. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1 in 10 chance at a given moment, during a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure 10%? Based on what exactly?
Thank you Gerald Monroe for answering my question,
I agree that staying on top of the weapon development game has had some perks, but it's not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with about the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one entity with a massive military power advantage compared to their counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be the best from a survival point of view, but you will for sure be in danger of losing your freedom.

So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of priors to predict what's going to happen next, since to me the situation with a technology as powerful as AI, combined with its inherent “blackboxiness”, has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Gerald Monroe, thank you again for clarifying your thoughts,
When you say that you know with pretty high confidence that X, Y or Z will happen, I think this encapsulates the whole debate around AI safety, i.e. that some people seem to know unknowable things for certain, which is what frightens me. How can you know, since there is nothing remotely close to the arrival of a super intelligent being in the recorded history of humans? How do you extrapolate from data that says NOTHING about encountering a super intelligent being? I am curious to know how you managed to get so confident about the future.
Logan Zoellner thank you for highlighting one of your previous points,
You asked me to agree with your speculation that GPT5 will not destroy the world. I will not agree with your speculation because I have no idea if GPT5 will do that or not. This does not mean that I agree with the statement that GPT5 WILL destroy the earth. It just means that I do not know.
I would not use Manifold as any data point in assessing the potential danger of future AI.
Gerald Monroe, thank you for expanding your previous comments.
You propose building these sub-human machines in order to protect humanity from anything from nuclear war to street violence. But it also sounds like there are two separate humanities: one that starts wars and spreads disease, and another, to which “we” apparently belong, that needs protection and should inherit the earth. How come those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs that will do their bidding? In its present form, the reason to build the sub-human machines sounds to me like an attempt to save us from the “elites”.
But I think my concern that we have no idea what capabilities certain levels of intelligence have is brushed off too easily, since you seem to assume that a GPT8 (an AI 8-12 years from now) should not pose any direct problems to humans, except perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?
Thank you Gerald Monroe for your comments,
My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, when we have no idea what we would be dealing with or whether it will even happen? Yes, the “aliens” could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.
We could speculate that AGI gives a 10x improvement there and 100x here and so on. But we really do not have any idea. What if AGI is like turning on a light switch, and from one model to the next you get a trillion-fold increase in capability? How will the AI safety bots deal with that? We have no idea how to classify intelligence in terms of levels. How much smarter is a human compared to a dog? Or a snake? Or a chimpanzee? Assume for the sake of argument that a human is twice as “smart” as a chimpanzee on some crude brain measure scale thingy. Are humans then twice as capable as chimpanzees? We are probably close to infinitely more capable, even if the raw brain power is NOT millions or billions or trillions of times that of a chimpanzee.
We just do not have any idea what even a “slightly smarter” thing than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.
Logan Zoellner thank you for further expanding on your thoughts,
No, I will not agree that GPT5 will not destroy the world, because I have no idea what it will be capable of.
I do not understand your assertion that we would be better at fending off aliens if we have access to GPT5 than if we do not. What exactly do you think GPT5 could do in that scenario?
Why do you think that having access to powerful AI’s would make AGI less likely to destroy us?
If anything, I believe that the Amish scenario is less dangerous than the slow takeoff scenario you described. In the slow takeoff scenario there will be billions of interconnected semi-smart entities that a full-blown AGI could take control over. In the Amish scenario there would be just one large computer somewhere that is really, really smart, but that does not have the possibility to hijack billions of devices, robots and other computers to wreak havoc.

My point is this: we do not know. Nobody knows. We might create AGI and survive, or we might not survive. There are no priors, and everything going forward from now on is just guesswork.
Logan Zoellner, thank you for clarifying the concept.
However, while it is possible to argue about semantics, since no one knows when AGI will happen if you increase the compute and/or deploy new models, all takeoffs are equally dangerous. I think a fair stance for all AI researchers and companies trying to get to AGI is to admit that they have zero clue when AGI will be achieved, how that AI will behave, and what safety measures are needed to keep it under control.
Can anyone say with certainty that, for instance, a 100x in compute and model complexity over the state of the art today does not constitute an AGI? A 100x could be achieved within 2-3 years if someone poured a lot of money into it, i.e. if someone went fishing for trillions in venture capital...
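As a rough sanity check of the 2-3 year figure (assuming, purely for illustration, that effective training compute keeps doubling every few months; that is an observed trend, not a law):

```python
import math

# Time to reach a 100x increase in effective compute, given an
# assumed doubling period in months (illustrative values only,
# not a forecast).
doublings_needed = math.log2(100)  # about 6.64 doublings

for months_per_doubling in (4, 6, 9):
    total_months = doublings_needed * months_per_doubling
    print(f"doubling every {months_per_doubling} months "
          f"-> 100x in about {total_months / 12:.1f} years")
```

With a 4-6 month doubling period, 100x lands in roughly 2-3.5 years, which is consistent with the figure above; with a 9-month doubling it stretches to about 5 years.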
We are on a path for takeoff. Brace for impact.
There are always outliers, but given how unremarkable that guy seems to be, it's a complete BS article. If he had been gay, then maybe I could have believed those numbers if they were divided by at least 10. I know some fellas that hit up dudes on Grindr, and that's a different ball game (no pun).
Anyways, I think that this video does a pretty good job of trying to explain the math behind the skewness in likes/matches that heterosexual men and women experience on dating apps.
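The basic mechanism is easy to reproduce in a toy simulation (the swipe probabilities below are made-up illustrative numbers, not real app data): if men like profiles much more often than women do, the total number of matches is symmetric by definition, but the fraction of sent likes that turn into matches is wildly skewed.

```python
import random

random.seed(0)

# Toy model: 1,000 men and 1,000 women swipe on each other.
# Assumed (made-up) like rates: men like 50% of profiles, women 5%.
# A match requires a mutual like.
N, P_MAN_LIKES, P_WOMAN_LIKES = 1000, 0.50, 0.05

men_likes = women_likes = matches = 0
for _ in range(N):            # each man...
    for _ in range(N):        # ...paired with each woman
        m = random.random() < P_MAN_LIKES
        w = random.random() < P_WOMAN_LIKES
        men_likes += m
        women_likes += w
        matches += m and w

print(f"men's like->match rate:   {matches / men_likes:.1%}")    # ~5%
print(f"women's like->match rate: {matches / women_likes:.1%}")  # ~50%
```

Both sides get exactly the same number of matches, but each of the women's likes is about ten times more likely to convert into a match, which is the skew the video is getting at.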
Thank you ryan_b for expanding on your thoughts,
I have been under the weather for a week; I meant to answer you earlier.
To me, having a goal and knowing why I have that goal are two separate things, and a goal does not become less of a goal because you do not know the origin of it. Perhaps goals are a hierarchy. We all* have some default goals like eat, survive and reproduce. On top of those we can add goals invented by ourselves or others. In the case you are without a goal, I believe you still have goals defined by your biology. Every action or inaction is due to a goal. Why do you eat? Are you hungry? Bored? Tired? Compulsion? Want to gain weight? Want to lose weight? There is always a goal.

Take people with OCD. In what way are those persons contradicting any goals by doing OCD stuff, like checking if the stove is off 157 times before leaving the house so that they miss work? Yes, the goal of getting to work was missed, but the MORE important goal of not accidentally burning down the house, killing 35 neighbors and being the disgrace of the neighborhood was effectively achieved. So it's not that fiddling with the stove was a goalless act canceling out the “real” goal of getting to work; the two goals were just of different importance.
If I may comment on your sex qua sex analogy. I am convinced that the sex act involved a social interaction where you wanted the other person(s) to behave in a specific way to make the act of sex as enjoyable as possible (whatever that may mean). The act of sex did not happen in a vacuum. You or the other person(s) wanted to have it, no matter what the goal was. And you or the other person(s) had to manipulate the other(s) to achieve whatever goal there was to the sex.
Yes, I agree that we need coordination with other people to achieve things, and that they may be benign. But to me there is no distinction between benign and malevolent attempts to persuade or influence someone. They are both acts of manipulation. Either you managed to get someone to do something or you did not. Why did you want this person to do this in the first place? Because you had a goal of some sort; you did not act out of a vacuum. “But I just did it to be silly, or stupid, or because I was bored”, well… then that was the goal, but a goal nonetheless.
Thank you ryan_b for your comment,
I do not agree. I don't believe that there is any action that any living organism, much less a human, takes without a specific goal. When people say that they “just want to spread some selfless love in this grim world without asking for anything in return”, they have a goal nonetheless.
I cannot of course say exactly what kind of goal they have, but for the sake of simplicity, say that Selflesslovespreader A wants to make other people feel good in order to feel good about making other people feel good. So how does Selflesslovespreader A know that the goal has been achieved in that interaction?
Well, is it that far-fetched to assume that a smile or a thank-you from the person that the selfless love was directed at is a good measure of the success? I.e., Selflesslovespreader A has manipulated the person to respond with a certain behavior that made it possible for Selflesslovespreader A to reach the goal of feeling good about making other people feel good.
I believe there is a self-serving motive behind every so-called selfless act. This does not make the act less good or noble, but the act serves as a means for that person to reach a goal, whatever that goal is.
Can a human perform any type of action without a goal, no matter how small or insignificant?
Glad to hear you are doing better!
Ok, that is an interesting route to go. Let “us” know how it goes if you feel like sharing your journey.