most crimes committed between now and 2000 are going to get solved
I don’t believe this for sexual assault, domestic violence, or many forms of child abuse, elder abuse, and the like, because these crimes are quite often committed in private.
Sure, if you mean bank robberies, jewel heists, and other movie-plot crimes. But the clearance rate for those is already quite high.
The word “agent” sure means a lot of things.
In business and economics, “agent” is relative to “principal”. An agent acts on behalf of a principal; is intended to act in the principal’s interest; but can experience conflicts of interest (a “principal-agent problem”). An agent can be (at least in theory) held accountable for misrepresenting its principal. An agent is expected to actively pursue its principal’s interest (see e.g. Matthew 25:14-30).
In traditional AI, “agent” is relative to “environment”. Russell & Norvig define an agent as “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.” An agent receives percepts and takes actions. A “rational agent”, in turn, is an agent “that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.” (This in turn implies utility theory, probabilistic reasoning, and a bunch of other infrastructure.)
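The R&N framing is broad enough to sketch in a few lines of code. This is my own illustrative toy, not anything from their book: an agent is just a function from percepts to actions, here a simple reflex agent for a vacuum-world-style environment.

```python
# Toy sketch of the Russell & Norvig framing: an agent maps percepts to
# actions. A "simple reflex agent" looks only at the current percept.
from typing import Callable

Percept = str
Action = str

def make_reflex_agent(rules: dict[Percept, Action],
                      default: Action) -> Callable[[Percept], Action]:
    """Build an agent function: look up the current percept in a rule table."""
    def agent(percept: Percept) -> Action:
        return rules.get(percept, default)
    return agent

# A vacuum-world agent in this style:
vacuum = make_reflex_agent({"dirty": "suck", "clean": "move"}, default="wait")
print(vacuum("dirty"))  # → suck
```

A *rational* agent would need more machinery on top of this (utilities, expectations over uncertain outcomes); the point here is only how minimal the bare “percepts in, actions out” definition is.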
In philosophy, an “agent” is any entity that takes actions. In ethics specifically, a “moral agent” is one who takes moral actions; actions that can be judged right or wrong. “Agent” here is contrasted with “patient”; the doer of an action vs. the sufferer of its effect. (A “moral patient” is an entity that has moral interests; one that can suffer a wrong or benefit from a good deed. Not all moral patients are moral agents.)
In psychology, “agency” is the trait of choosing one’s own actions; of making decisions for oneself. We experience agency when we make our own choices for our own reasons and purposes; we recognize others’ agency when we see them make their own choices for their own reasons and purposes. (We may be in error about agency: we may mistake passive parts of our environment as being agents; we may underestimate our own freedom to act.)
However, Milgram used “agentic” to mean nearly the opposite: a person enters the agentic state when they “come to view themselves as the instrument for carrying out another person’s wishes, and they therefore no longer see themselves as responsible for their actions.” An obedient agent (in this sense) experiences less agency (in the sense of the previous paragraph); indeed, Milgram’s agentic state is an escape from that kind of agency.
A very different “agency” is found in the 2021 CFAR handbook, which defines the word to mean “The property of both having and exercising a capacity for relevant action; one’s ability to meaningfully affect the world around oneself and effectively move toward achieving one’s goals. In particular, agency implies the ability to move beyond default patterns and cached answers, and to think and act strategically.”
“Agent AI” has previously been used in contrast to “tool AI”, for instance by Karnofsky (2012). Karnofsky discussed making AI safe by limiting it to “tool AI”, with Google Maps as an example of a tool: it tells you the optimal route to your destination; unlike a Waymo, it doesn’t take actions in the world to get you there. This distinction seems somewhat quaint today, because …
… today, “AI agent” is often used to mean something like “a process that repeatedly uses an LLM for planning and composition of instructions, which it then carries out.” Such an “agent” is an odd fit for many of the above senses of “agent”:
It need not act on behalf of a principal, so it’s not a business agent. (It also can’t be held accountable.)
It is not a rational agent; it does not maximize an expected utility. (And a sensors-and-actuators agent is an odd fit for the shape of LLM-based agents too; but R&N’s “can be viewed” is quite broad, so I’ll allow it.)
It does take actions, so can be said to have some sorts of philosophical agency; its moral agency is pretty doubtful though.
Opinions on its psychological agency vary widely … including in opinions generated by LLMs trained to play “agent” characters.
Similarly with regard to its degree of Milgram-agentic obedience to human authority.
Most do not appear to be CFAR-agenty in the sense of having their own goals and actively pursuing them.
And the tool/agent distinction seems obsolete when you can build an “agent” out of a thin wrapper around a query/response “tool”.
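That last point is easy to make concrete. Below is a deliberately minimal sketch of my own (the `call_llm` function is a stand-in, not any real API): the whole “agent” is just a loop that repeatedly queries a query/response “tool” for the next step and executes it.

```python
# Hypothetical sketch: an "agent" as a thin wrapper around a query/response
# "tool". call_llm is a canned stand-in for a real model call.
def call_llm(prompt: str) -> str:
    # Pretend-planner: if we've already acted, declare victory; else plan a step.
    if "looked up" in prompt:
        return "DONE"
    return "ACTION: lookup weather"

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    """The entire 'agent': ask the tool for the next step, execute, repeat."""
    log: list[str] = []
    for _ in range(max_steps):
        reply = call_llm(f"Goal: {goal}\nHistory: {log}\nNext step?")
        if reply.startswith("DONE"):
            break
        _, _, step = reply.partition("ACTION: ")
        name, _, arg = step.partition(" ")
        log.append(tools[name](arg))  # take an action in the (toy) world
    return log

result = run_agent("weather", {"lookup": lambda q: f"looked up {q}"})
print(result)  # → ['looked up weather']
```

Nothing in the loop is agent-like on its own; all the “agency”, such as it is, comes from calling the tool in a cycle and wiring its answers to actions.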
Or, to put it another way:
In the agent era, the agents may not be agents, but they’re agents, and they’re getting even agentier.