
Tool AI


A tool AI is a type of artificial intelligence built to be used as a tool by its creators, rather than as an agent that takes actions and pursues goals of its own.

Generally discussed in the context of AGI, tool AI is a proposed approach for gaining some of the benefits of machine intelligence while avoiding the dangers of having it act autonomously. The term was coined by Holden Karnofsky, co-founder of GiveWell, in a critique of the Singularity Institute. Karnofsky agreed that agent-based AGI was dangerous, but argued that it was an unnecessary path of development. His example of tool AI behavior was Google Maps, which uses complex algorithms and data to plot a route, but presents the result to the user rather than driving the user there itself.
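As a loose illustration of the distinction, here is a minimal sketch in Python (the names are hypothetical and illustrative, not drawn from any of the posts below): a tool computes a result and hands it back for the user to act on, while an agent executes actions toward the goal itself.

```python
# Illustrative-only sketch of the tool/agent distinction (hypothetical names).
# A "tool" returns information for a human to act on; an "agent" pursues the
# goal by acting in the world itself.

def plan_route(start: str, goal: str) -> list[str]:
    """Tool-style AI: compute a plan and hand it back to the user."""
    # Stand-in for real route planning (e.g. shortest-path search on a road graph).
    return [f"head from {start} toward {goal}", f"arrive at {goal}"]

class DrivingAgent:
    """Agent-style AI: given a goal, it chooses and executes actions itself."""

    def __init__(self, position: str) -> None:
        self.position = position

    def act(self, goal: str) -> None:
        for step in plan_route(self.position, goal):
            print(f"executing: {step}")  # in a real agent, an actuator command
        self.position = goal

if __name__ == "__main__":
    # Tool use: the system outputs a route; the human drives.
    print(plan_route("home", "office"))
    # Agent use: the system drives toward the goal itself.
    DrivingAgent("home").act("office")
```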

Eliezer Yudkowsky responded by enumerating several ways in which tool AI faces similar difficulties of technical specification and safety. He also pointed out that it was not a common proposal among leading AGI thinkers.


Posts tagged Tool AI

Thoughts on the Singularity Institute (SI)

HoldenKarnofsky, 11 May 2012 4:31 UTC
329 points
1,274 comments, 29 min read, LW link

Tools versus agents

Stuart_Armstrong, 16 May 2012 13:00 UTC
48 points
39 comments, 5 min read, LW link

Reply to Holden on ‘Tool AI’

Eliezer Yudkowsky, 12 Jun 2012 18:00 UTC
152 points
356 comments, 17 min read, LW link

Reply to Holden on The Singularity Institute

lukeprog, 10 Jul 2012 23:20 UTC
70 points
214 comments, 26 min read, LW link

Tools want to become agents

Stuart_Armstrong, 4 Jul 2014 10:12 UTC
24 points
81 comments, 1 min read, LW link

Superintelligence 15: Oracles, genies and sovereigns

KatjaGrace, 23 Dec 2014 2:01 UTC
11 points
30 comments, 7 min read, LW link

Superintelligence 16: Tool AIs

KatjaGrace, 30 Dec 2014 2:00 UTC
12 points
37 comments, 7 min read, LW link

AI: requirements for pernicious policies

Stuart_Armstrong, 17 Jul 2015 14:18 UTC
11 points
3 comments, 3 min read, LW link

[Question] Why not tool AI?

smithee, 19 Jan 2019 22:18 UTC
19 points
10 comments, 1 min read, LW link

Gwern’s “Why Tool AIs Want to Be Agent AIs: The Power of Agency”

habryka, 5 May 2019 5:11 UTC
26 points
3 comments, 1 min read, LW link
(www.gwern.net)

The Self-Unaware AI Oracle

Steven Byrnes, 22 Jul 2019 19:04 UTC
21 points
38 comments, 8 min read, LW link

In defense of Oracle (“Tool”) AI research

Steven Byrnes, 7 Aug 2019 19:14 UTC
22 points
11 comments, 4 min read, LW link

Thinking of tool AIs

Michele Campolo, 20 Nov 2019 21:47 UTC
6 points
2 comments, 4 min read, LW link

The Fusion Power Generator Scenario

johnswentworth, 8 Aug 2020 18:31 UTC
140 points
29 comments, 3 min read, LW link

Solving the whole AGI control problem, version 0.0001

Steven Byrnes, 8 Apr 2021 15:14 UTC
63 points
7 comments, 26 min read, LW link

[Intro to brain-like-AGI safety] 11. Safety ≠ alignment (but they’re close!)

Steven Byrnes, 6 Apr 2022 13:39 UTC
35 points
1 comment, 10 min read, LW link

Some reasons why a predictor wants to be a consequentialist

Lauro Langosco, 15 Apr 2022 15:02 UTC
23 points
16 comments, 5 min read, LW link

[Question] Favourite new AI productivity tools?

Gabriel Mukobi, 15 Jun 2022 1:08 UTC
14 points
5 comments, 1 min read, LW link

Agenty AGI – How Tempting?

PeterMcCluskey, 1 Jul 2022 23:40 UTC
22 points
3 comments, 5 min read, LW link
(www.bayesianinvestor.com)

Deontology and Tool AI

Nathan1123, 5 Aug 2022 5:20 UTC
4 points
5 comments, 6 min read, LW link

Interpretability/Tool-ness/Alignment/Corrigibility are not Composable

johnswentworth, 8 Aug 2022 18:05 UTC
129 points
13 comments, 3 min read, LW link

[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?

Nathan1123, 14 Aug 2022 22:41 UTC
−3 points
6 comments, 1 min read, LW link

Simulators

janus, 2 Sep 2022 12:45 UTC
594 points
161 comments, 41 min read, LW link, 8 reviews
(generative.ink)

Generative, Episodic Objectives for Safe AI

Michael Glass, 5 Oct 2022 23:18 UTC
11 points
3 comments, 8 min read, LW link

Applying superintelligence without collusion

Eric Drexler, 8 Nov 2022 18:08 UTC
107 points
63 comments, 4 min read, LW link

A multi-disciplinary view on AI safety research

Roman Leventov, 8 Feb 2023 16:50 UTC
43 points
4 comments, 26 min read, LW link

Cyborgism

10 Feb 2023 14:47 UTC
331 points
44 comments, 35 min read, LW link

Annotated reply to Bengio’s “AI Scientists: Safe and Useful AI?”

Roman Leventov, 8 May 2023 21:26 UTC
18 points
2 comments, 7 min read, LW link
(yoshuabengio.org)

Yoshua Bengio argues for tool-AI and to ban “executive-AI”

habryka, 9 May 2023 0:13 UTC
53 points
15 comments, 7 min read, LW link
(yoshuabengio.org)

GPT as an “Intelligence Forklift.”

boazbarak, 19 May 2023 21:15 UTC
46 points
27 comments, 3 min read, LW link

Paper: Identifying the Risks of LM Agents with an LM-Emulated Sandbox (University of Toronto, 2023) - Benchmark consisting of 36 high-stakes tools and 144 test cases!

Singularian2501, 9 Oct 2023 0:00 UTC
5 points
0 comments, 1 min read, LW link

Protecting agent boundaries

Chipmonk, 25 Jan 2024 4:13 UTC
10 points
6 comments, 2 min read, LW link

[Question] Plausibility of cyborgism for protecting boundaries?

Chipmonk, 27 Mar 2024 18:53 UTC
5 points
2 comments, 1 min read, LW link