
adamShimi (Adam Shimi)

Karma: 5,845

Epistemologist specializing in the difficulties of alignment. Currently at Conjecture and running Refine.

Goal-directed = Model-based RL?

adamShimi · 20 Feb 2020 19:13 UTC
21 points
10 comments · 3 min read · LW link

Where’s the Turing Machine? A step towards Ontology Identification

adamShimi · 26 Feb 2020 17:10 UTC
25 points
0 comments · 8 min read · LW link

Lessons from Isaac: Poor Little Robbie

adamShimi · 14 Mar 2020 17:14 UTC
1 point
8 comments · 3 min read · LW link

Welcome to the Haskell Jungle

adamShimi · 18 Mar 2020 18:58 UTC
16 points
2 comments · 2 min read · LW link

My Functor is Rich!

adamShimi · 18 Mar 2020 18:58 UTC
10 points
0 comments · 17 min read · LW link

Lessons from Isaac: Pitfalls of Reason

adamShimi · 8 May 2020 20:44 UTC
9 points
0 comments · 8 min read · LW link

Focus: you are allowed to be bad at accomplishing your goals

adamShimi · 3 Jun 2020 21:04 UTC
19 points
17 comments · 3 min read · LW link

Goal-directedness is behavioral, not structural

adamShimi · 8 Jun 2020 23:05 UTC
6 points
12 comments · 3 min read · LW link

Locality of goals

adamShimi · 22 Jun 2020 21:56 UTC
16 points
8 comments · 6 min read · LW link

The 8 Techniques to Tolerify the Dark World

adamShimi · 20 Jul 2020 0:58 UTC
2 points
5 comments · 2 min read · LW link

Dealing with Curiosity-Stoppers

adamShimi · 30 Jul 2020 22:05 UTC
50 points
6 comments · 10 min read · LW link

[Question] What are you looking for in a Less Wrong post?

adamShimi · 1 Aug 2020 18:00 UTC
27 points
19 comments · 1 min read · LW link

[Question] What are the most important papers/post/resources to read to understand more of GPT-3?

adamShimi · 2 Aug 2020 20:53 UTC
22 points
4 comments · 1 min read · LW link

Analyzing the Problem GPT-3 is Trying to Solve

adamShimi · 6 Aug 2020 21:58 UTC
16 points
2 comments · 4 min read · LW link

[Question] Will OpenAI’s work unintentionally increase existential risks related to AI?

adamShimi · 11 Aug 2020 18:16 UTC
53 points
55 comments · 1 min read · LW link

Goal-Directedness: What Success Looks Like

adamShimi · 16 Aug 2020 18:33 UTC
9 points
0 comments · 2 min read · LW link

Universality Unwrapped

adamShimi · 21 Aug 2020 18:53 UTC
29 points
2 comments · 18 min read · LW link

The “Backchaining to Local Search” Technique in AI Alignment

adamShimi · 18 Sep 2020 15:05 UTC
29 points
1 comment · 2 min read · LW link

Why You Should Care About Goal-Directedness

adamShimi · 9 Nov 2020 12:48 UTC
38 points
15 comments · 9 min read · LW link

The (Unofficial) Less Wrong Comment Challenge

adamShimi · 11 Nov 2020 14:18 UTC
82 points
35 comments · 2 min read · LW link