
Tool AI

Last edit: 5 Aug 2022 18:41 UTC by Granit Hoda

A tool AI is an artificial intelligence built to be used as a tool by its creators, rather than acting as an agent with its own goals and autonomous goal-seeking behavior.

Generally discussed in the context of AGI, tool AI is a proposed approach for gaining some of the benefits of machine intelligence while avoiding the dangers of having it act autonomously. The term was coined by Holden Karnofsky, co-founder of GiveWell, in a critique of the Singularity Institute. Karnofsky agreed that agent-based AGI was dangerous, but argued that it was an unnecessary path of development. His example of tool AI behavior was Google Maps, which uses complex algorithms and data to plot a route, but presents the result to the user rather than driving the user there itself.
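The distinction can be made concrete with a toy sketch (all names here are hypothetical, and the "planner" is a trivial stand-in for an arbitrarily capable system): both modes share the same planning capability, and differ only in whether the system returns the plan for human review or executes it itself.

```python
from typing import Callable, List

def plan_route(start: str, goal: str) -> List[str]:
    # Stand-in for an arbitrarily capable planner (e.g. Google Maps routing).
    return [start, "merge onto highway", "take exit 12", goal]

def tool_ai(start: str, goal: str) -> List[str]:
    # Tool mode: compute the plan and hand it back; the human decides what to do.
    return plan_route(start, goal)

def agent_ai(start: str, goal: str, execute: Callable[[str], None]) -> None:
    # Agent mode: the same planner, but the system carries out each step
    # autonomously -- the property Karnofsky's proposal sets out to avoid.
    for step in plan_route(start, goal):
        execute(step)

route = tool_ai("home", "office")           # returns a plan for the user
driven: List[str] = []
agent_ai("home", "office", driven.append)   # acts on the world directly
```

The point of the sketch is that the two functions are equally "smart"; the safety-relevant difference is purely in who acts on the output.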

Eliezer Yudkowsky responded by enumerating several ways in which tool AI faces similar difficulties of technical specification and safety. He also pointed out that it was not a common proposal among leading AGI thinkers.


External Links

Gwern's "Why Tool AIs Want to Be Agent AIs: The Power of Agency"
habryka · 5 May 2019 5:11 UTC · 26 points · 3 comments · 1 min read · LW link (www.gwern.net)

Reply to Holden on 'Tool AI'
Eliezer Yudkowsky · 12 Jun 2012 18:00 UTC · 152 points · 357 comments · 17 min read · LW link

Tools want to become agents
Stuart_Armstrong · 4 Jul 2014 10:12 UTC · 24 points · 81 comments · 1 min read · LW link

Thoughts on the Singularity Institute (SI)
HoldenKarnofsky · 11 May 2012 4:31 UTC · 326 points · 1,287 comments · 29 min read · LW link

Tools versus agents
Stuart_Armstrong · 16 May 2012 13:00 UTC · 47 points · 39 comments · 5 min read · LW link

Solving the whole AGI control problem, version 0.0001
Steven Byrnes · 8 Apr 2021 15:14 UTC · 59 points · 7 comments · 26 min read · LW link

[Intro to brain-like-AGI safety] 11. Safety ≠ alignment (but they're close!)
Steven Byrnes · 6 Apr 2022 13:39 UTC · 25 points · 1 comment · 10 min read · LW link

Agenty AGI – How Tempting?
PeterMcCluskey · 1 Jul 2022 23:40 UTC · 21 points · 3 comments · 5 min read · LW link (www.bayesianinvestor.com)

[Question] Why not tool AI?
smithee · 19 Jan 2019 22:18 UTC · 19 points · 10 comments · 1 min read · LW link

Superintelligence 16: Tool AIs
KatjaGrace · 30 Dec 2014 2:00 UTC · 12 points · 37 comments · 7 min read · LW link

Thinking of tool AIs
Michele Campolo · 20 Nov 2019 21:47 UTC · 6 points · 2 comments · 4 min read · LW link

In defense of Oracle ("Tool") AI research
Steven Byrnes · 7 Aug 2019 19:14 UTC · 21 points · 11 comments · 4 min read · LW link

The Fusion Power Generator Scenario
johnswentworth · 8 Aug 2020 18:31 UTC · 136 points · 29 comments · 3 min read · LW link

Reply to Holden on The Singularity Institute
lukeprog · 10 Jul 2012 23:20 UTC · 69 points · 215 comments · 26 min read · LW link

AI: requirements for pernicious policies
Stuart_Armstrong · 17 Jul 2015 14:18 UTC · 11 points · 3 comments · 3 min read · LW link

Superintelligence 15: Oracles, genies and sovereigns
KatjaGrace · 23 Dec 2014 2:01 UTC · 11 points · 30 comments · 7 min read · LW link

The Self-Unaware AI Oracle
Steven Byrnes · 22 Jul 2019 19:04 UTC · 21 points · 38 comments · 8 min read · LW link

Some reasons why a predictor wants to be a consequentialist
Lauro Langosco · 15 Apr 2022 15:02 UTC · 23 points · 16 comments · 5 min read · LW link

[Question] Favourite new AI productivity tools?
Gabriel Mukobi · 15 Jun 2022 1:08 UTC · 14 points · 4 comments · 1 min read · LW link

Deontology and Tool AI
Nathan1123 · 5 Aug 2022 5:20 UTC · 4 points · 5 comments · 6 min read · LW link

Interpretability/Tool-ness/Alignment/Corrigibility are not Composable
johnswentworth · 8 Aug 2022 18:05 UTC · 111 points · 8 comments · 3 min read · LW link

[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?
Nathan1123 · 14 Aug 2022 22:41 UTC · −3 points · 6 comments · 1 min read · LW link

Simulators
janus · 2 Sep 2022 12:45 UTC · 452 points · 102 comments · 44 min read · LW link (generative.ink)

Generative, Episodic Objectives for Safe AI
Michael Glass · 5 Oct 2022 23:18 UTC · 10 points · 3 comments · 8 min read · LW link

Applying superintelligence without collusion
Eric Drexler · 8 Nov 2022 18:08 UTC · 81 points · 56 comments · 4 min read · LW link