Alignment Newsletter

I publish the Alignment Newsletter, a weekly publication with recent content relevant to AI alignment. See here for more details. Quick links: email signup form, RSS feed, spreadsheet of all summaries.

The Alignment Newsletter #1: 04/09/18

The Alignment Newsletter #2: 04/16/18

The Alignment Newsletter #3: 04/23/18

The Alignment Newsletter #4: 04/30/18

The Alignment Newsletter #5: 05/07/18

The Alignment Newsletter #6: 05/14/18

The Alignment Newsletter #7: 05/21/18

The Alignment Newsletter #8: 05/28/18

The Alignment Newsletter #9: 06/04/18

The Alignment Newsletter #10: 06/11/18

The Alignment Newsletter #11: 06/18/18

The Alignment Newsletter #12: 06/25/18

Alignment Newsletter #13: 07/02/18

Alignment Newsletter #14

Alignment Newsletter #15: 07/16/18

Alignment Newsletter #16: 07/23/18

Alignment Newsletter #17

Alignment Newsletter #18

Alignment Newsletter #19

Alignment Newsletter #20

Alignment Newsletter #21

Alignment Newsletter #22

Alignment Newsletter #23

Alignment Newsletter #24

Alignment Newsletter #25

Alignment Newsletter #26

Alignment Newsletter #27

Alignment Newsletter #28

Alignment Newsletter #29

Alignment Newsletter #30

Alignment Newsletter #31

Alignment Newsletter #32

Alignment Newsletter #33

Alignment Newsletter #34

Alignment Newsletter #35

Alignment Newsletter #36

Alignment Newsletter #37

Alignment Newsletter #38

Alignment Newsletter #39

Alignment Newsletter #40

Alignment Newsletter #41

Alignment Newsletter #42

Alignment Newsletter #43

Alignment Newsletter #44

Alignment Newsletter #45

Alignment Newsletter #46

Alignment Newsletter #47

Alignment Newsletter #48

Alignment Newsletter #49

Alignment Newsletter #50

Alignment Newsletter #51

Alignment Newsletter #52

Alignment Newsletter One Year Retrospective

Alignment Newsletter #53

[AN #54] Boxing a finite-horizon AI system to keep it unambitious

[AN #55] Regulatory markets and international standards as a means of ensuring beneficial AI

[AN #56] Should ML researchers stop running experiments before making hypotheses?

[AN #57] Why we should focus on robustness in AI safety, and the analogous problems in programming

[AN #58] Mesa optimization: what it is, and why we should care

[AN #59] How arguments for AI risk have changed over time

[AN #60] A new AI challenge: Minecraft agents that assist human players in creative mode

[AN #61] AI policy and governance, from two people in the field

[AN #62] Are adversarial examples caused by real but imperceptible features?

[AN #63] How architecture search, meta learning, and environment design could lead to general intelligence

[AN #64]: Using Deep RL and Reward Uncertainty to Incentivize Preference Learning

[AN #65]: Learning useful skills by watching humans “play”

[AN #66]: Decomposing robustness into capability robustness and alignment robustness

[AN #67]: Creating environments in which to study inner alignment failures

[AN #68]: The attainable utility theory of impact

[AN #69] Stuart Russell’s new book on why we need to replace the standard model of AI

[AN #70]: Agents that help humans who are still learning about their own preferences

[AN #71]: Avoiding reward tampering through current-RF optimization

[AN #72]: Alignment, robustness, methodology, and system building as research priorities for AI safety

[AN #73]: Detecting catastrophic failures by learning how agents tend to break

[AN #74]: Separating beneficial AI into competence, alignment, and coping with impacts

[AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee

[AN #76]: How dataset size affects robustness, and benchmarking safe exploration by measuring constraint violations

[AN #77]: Double descent: a unification of statistical theory and modern ML practice

[AN #78] Formalizing power and instrumental convergence, and the end-of-year AI safety charity comparison

[AN #79]: Recursive reward modeling as an alignment technique integrated with deep RL

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

[AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment

[AN #82]: How OpenAI Five distributed their training computation

[AN #83]: Sample-efficient deep learning with ReMixMatch

[AN #84] Reviewing AI alignment work in 2018-19

[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot

[AN #86]: Improving debate and factored cognition through human experiments

[AN #87]: What might happen as deep learning scales even further?

[AN #88]: How the principal-agent literature relates to AI risk

[AN #89]: A unifying formalism for preference learning algorithms

[AN #90]: How search landscapes can contain self-reinforcing feedback loops

[AN #91]: Concepts, implementations, problems, and a benchmark for impact measurement

[AN #92]: Learning good representations with contrastive predictive coding

[AN #93]: The Precipice we’re standing at, and how we can back away from it

[AN #94]: AI alignment as translation between humans and machines

[AN #95]: A framework for thinking about how to make AI go well

[AN #96]: Buck and I discuss/argue about AI Alignment

[AN #97]: Are there historical examples of large, robust discontinuities?

[AN #98]: Understanding neural net training by seeing which gradients were helpful

[AN #99]: Doubling times for the efficiency of AI algorithms

[AN #100]: What might go wrong if you learn a reward function while acting

[AN #101]: Why we should rigorously measure and forecast AI progress

[AN #102]: Meta learning by GPT-3, and a list of full proposals for AI alignment

[AN #103]: ARCHES: an agenda for existential safety, and combining natural language with deep RL

[AN #104]: The perils of inaccessible information, and what we can learn about AI alignment from COVID

[AN #105]: The economic trajectory of humanity, and what we might mean by optimization

[AN #106]: Evaluating generalization ability of learned reward models

[AN #107]: The convergent instrumental subgoals of goal-directed agents

[AN #108]: Why we should scrutinize arguments for AI risk

[AN #109]: Teaching neural nets to generalize the way humans would

[AN #110]: Learning features from human feedback to enable reward learning

[AN #111]: The Circuits hypotheses for deep learning

[AN #112]: Engineering a Safer World