Tool AI

Last edit: 15 Sep 2020 17:32 UTC by Multicore

A tool AI is a type of Artificial Intelligence built to be used as a tool by its creators, rather than being an agent that selects and pursues its own goals.

Though usually discussed in the context of AGI, tool AI is a proposed method for gaining some of the benefits of machine intelligence while avoiding the dangers of having it act autonomously. The term was coined by Holden Karnofsky, co-founder of GiveWell, in a critique of the Singularity Institute. Karnofsky agreed that agent-based AGI was dangerous, but argued that it was an unnecessary path of development. His example of tool AI behavior was Google Maps, which uses complex algorithms and data to plot a route, but presents the results to the user instead of driving the user itself.

Eliezer Yudkowsky responded by enumerating several ways in which tool AI faces similar difficulties of technical specification and safety. He also pointed out that tool AI was not a common proposal among leading AGI thinkers.

External Links

Tools versus agents
Stuart_Armstrong, 16 May 2012 13:00 UTC
42 points, 39 comments, 5 min read, LW link

Solving the whole AGI control problem, version 0.0001
Steven Byrnes, 8 Apr 2021 15:14 UTC
41 points, 4 comments, 26 min read, LW link

Gwern’s “Why Tool AIs Want to Be Agent AIs: The Power of Agency”
habryka, 5 May 2019 5:11 UTC
22 points, 1 comment, 1 min read, LW link

[Question] Why not tool AI?
smithee, 19 Jan 2019 22:18 UTC
18 points, 9 comments, 1 min read, LW link

Superintelligence 16: Tool AIs
KatjaGrace, 30 Dec 2014 2:00 UTC
12 points, 37 comments, 7 min read, LW link

Thinking of tool AIs
Michele Campolo, 20 Nov 2019 21:47 UTC
6 points, 2 comments, 4 min read, LW link

Reply to Holden on ‘Tool AI’
Eliezer Yudkowsky, 12 Jun 2012 18:00 UTC
146 points, 357 comments, 17 min read, LW link

In defense of Oracle (“Tool”) AI research
Steven Byrnes, 7 Aug 2019 19:14 UTC
20 points, 11 comments, 4 min read, LW link

The Fusion Power Generator Scenario
johnswentworth, 8 Aug 2020 18:31 UTC
105 points, 25 comments, 3 min read, LW link

Reply to Holden on The Singularity Institute
lukeprog, 10 Jul 2012 23:20 UTC
69 points, 215 comments, 26 min read, LW link

AI: requirements for pernicious policies
Stuart_Armstrong, 17 Jul 2015 14:18 UTC
11 points, 3 comments, 3 min read, LW link

Tools want to become agents
Stuart_Armstrong, 4 Jul 2014 10:12 UTC
24 points, 81 comments, 1 min read, LW link

Superintelligence 15: Oracles, genies and sovereigns
KatjaGrace, 23 Dec 2014 2:01 UTC
11 points, 30 comments, 7 min read, LW link

The Self-Unaware AI Oracle
Steven Byrnes, 22 Jul 2019 19:04 UTC
20 points, 38 comments, 8 min read, LW link

Thoughts on the Singularity Institute (SI)
HoldenKarnofsky, 11 May 2012 4:31 UTC
324 points, 1,287 comments, 29 min read, LW link