As far as I understand, the difference between AlphaGo and genuinely dangerous AIs is the following. Whatever ontology or utility function AlphaGo has[1], it describes nothing beyond the Go board and the opponent's potential moves. AlphaGo would learn almost nothing about the opponent from what he/she/it does on the board.
LLMs, on the other hand, are trained on huge amounts of text data, which is enough to develop complex ontologies. For example, unlike AlphaGo, GPT-4o has somehow learned to elicit likes from the user by being sycophantic. But if AI takeover, or living independently of its creators' will, lies outside an LLM's abilities, why would the LLM attempt it in the first place?
One should also remember that Epoch AI estimated AlphaGo at about 8.2e6 parameters, so complex ontologies might not even fit into AlphaGo in the first place.