AI Robustness

Last edit: 24 Oct 2022 22:37 UTC by markov

AI Robustness is an agent's ability to maintain its goals and its capabilities when exposed to data distributions or environments that differ from those it was trained on.
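To make the idea concrete, here is a minimal, purely illustrative sketch (not from any of the posts below) of a capability failing under distribution shift: a toy classifier is fit on data where a spurious feature happens to track the label, so it latches onto that feature, and its accuracy collapses when the correlation flips at test time. All names and the setup are hypothetical.

```python
import random

random.seed(0)

def sample(n, spurious_reliability):
    """Each example has a causal feature `a` (noisy but stable) and a
    spurious feature `b` that matches the label with the given reliability."""
    rows = []
    for _ in range(n):
        y = 1 if random.random() < 0.5 else -1
        a = y + random.gauss(0.0, 1.5)                           # causal, noisy
        b = y if random.random() < spurious_reliability else -y  # spurious shortcut
        rows.append((a, b, y))
    return rows

def accuracy(rows, feature):
    # The "agent" is just the sign of the chosen feature.
    return sum((1 if row[feature] > 0 else -1) == row[2] for row in rows) / len(rows)

train = sample(2000, spurious_reliability=0.95)  # shortcut works in training
# The agent picks whichever feature scores better on the training data;
# the spurious one wins because it is less noisy there.
feature = max((0, 1), key=lambda f: accuracy(train, f))

test_in = sample(2000, spurious_reliability=0.95)     # same distribution
test_shift = sample(2000, spurious_reliability=0.05)  # shift: cue anti-correlated

print("chosen feature:", "causal" if feature == 0 else "spurious")
print("in-distribution accuracy:", accuracy(test_in, feature))
print("shifted accuracy:        ", accuracy(test_shift, feature))
```

A robust agent would keep using the stable causal signal; this one performs well in-distribution but far below chance after the shift, which is the failure mode the posts below examine in more powerful systems.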

AI Safety in a World of Vulnerable Machine Learning Systems

8 Mar 2023 2:40 UTC
58 points
10 comments · 29 min read · LW link

Robustness to Scale

Scott Garrabrant · 21 Feb 2018 22:55 UTC
117 points
22 comments · 2 min read · LW link · 1 review

AXRP Episode 17 - Training for Very High Reliability with Daniel Ziegler

DanielFilan · 21 Aug 2022 23:50 UTC
16 points
0 comments · 34 min read · LW link

[Question] Do the Safety Properties of Powerful AI Systems Need to be Adversarially Robust? Why?

DragonGod · 9 Feb 2023 13:36 UTC
22 points
42 comments · 2 min read · LW link

Random Observation on AI goals

FTPickle · 8 Apr 2023 19:28 UTC
−10 points
2 comments · 1 min read · LW link

Squeezing foundations research assistance out of formal logic narrow AI.

Donald Hobson · 8 Mar 2023 9:38 UTC
14 points
1 comment · 2 min read · LW link

Robustness to Scaling Down: More Important Than I Thought

adamShimi · 23 Jul 2022 11:40 UTC
37 points
5 comments · 3 min read · LW link

Is there a ML agent that abandons its utility function out-of-distribution without losing capabilities?

Christopher King · 22 Feb 2023 16:49 UTC
1 point
7 comments · 1 min read · LW link

Robustness & Evolution [MLAISU W02]

Esben Kran · 13 Jan 2023 15:47 UTC
10 points
0 comments · 3 min read · LW link