Project Announcement

A project announcement is what you might expect: an announcement of a project. Posts that discuss a project’s announcement but do not themselves announce anything should not have this tag.

See also: Organization Updates

Announcing Balsa Research

Zvi · 25 Sep 2022 22:50 UTC · 234 points · 60 comments · 2 min read · LW link (thezvi.wordpress.com)

Retreat for rationality meetup organizers (apply to attend!)

mingyuan · 29 Mar 2022 18:42 UTC · 57 points · 0 comments · 1 min read · LW link

Orthogonal: A new agent foundations alignment organization

carado · 19 Apr 2023 20:17 UTC · 196 points · 2 comments · 1 min read · LW link (orxl.org)

Show LW: YodaTimer.com

Adam Zerner · 11 Jun 2022 8:52 UTC · 26 points · 4 comments · 1 min read · LW link

Announcing AXRP, the AI X-risk Research Podcast

DanielFilan · 23 Dec 2020 20:00 UTC · 54 points · 5 comments · 1 min read · LW link (danielfilan.com)

Launching a new progress institute, seeking a CEO

jasoncrawford · 18 Jul 2022 16:58 UTC · 25 points · 2 comments · 3 min read · LW link (rootsofprogress.org)

Good Futures Initiative: Winter Project Internship

Aris · 27 Nov 2022 23:41 UTC · 28 points · 4 comments · 4 min read · LW link

Creating a database for base rates

nikos · 12 Dec 2022 10:09 UTC · 2 points · 1 comment · 3 min read · LW link (forum.effectivealtruism.org)

Testing Newport’s “Digital Minimalism” at CEEALAR

KurtB · 27 Jul 2021 18:28 UTC · 18 points · 1 comment · 5 min read · LW link

Announcing The Filan Cabinet

DanielFilan · 30 Dec 2022 3:10 UTC · 21 points · 2 comments · 1 min read · LW link (danielfilan.com)

First Three Episodes of The Filan Cabinet

DanielFilan · 18 Jan 2023 19:20 UTC · 17 points · 1 comment · 1 min read · LW link

Brain-like AGI project “aintelope”

Gunnar_Zarncke · 14 Aug 2022 16:33 UTC · 53 points · 2 comments · 1 min read · LW link

Announcing Encultured AI: Building a Video Game

18 Aug 2022 2:16 UTC · 103 points · 26 comments · 4 min read · LW link

The Roots of Progress is now a nonprofit organization

jasoncrawford · 23 Aug 2021 21:33 UTC · 44 points · 3 comments · 2 min read · LW link (rootsofprogress.org)

We Are Conjecture, A New Alignment Research Startup

Connor Leahy · 8 Apr 2022 11:40 UTC · 195 points · 24 comments · 4 min read · LW link

Announcing the Vitalik Buterin Fellowships in AI Existential Safety!

DanielFilan · 21 Sep 2021 0:33 UTC · 64 points · 2 comments · 1 min read · LW link (grants.futureoflife.org)

Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income

Michael Simm · 18 Nov 2022 8:13 UTC · 9 points · 23 comments · 1 min read · LW link

Announcing AI safety Mentors and Mentees

Marius Hobbhahn · 23 Nov 2022 15:21 UTC · 60 points · 7 comments · 10 min read · LW link

AISER—AIS Europe Retreat

Carolin · 23 Dec 2022 19:03 UTC · 5 points · 0 comments · 1 min read · LW link

Announcing Cavendish Labs

19 Jan 2023 20:15 UTC · 51 points · 5 comments · 2 min read · LW link (forum.effectivealtruism.org)

Flourish Student Journal—Open for Submissions

15 Feb 2023 16:11 UTC · 3 points · 0 comments · 1 min read · LW link

The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective

7 Mar 2023 16:22 UTC · 14 points · 0 comments · 4 min read · LW link (blog.eleuther.ai)

We’ve built Connected Papers—a visual tool for researchers to find and explore academic papers

discordy · 8 Jun 2020 18:44 UTC · 205 points · 39 comments · 2 min read · LW link

How I learned to stop worrying and love skill trees

junk heap homotopy · 23 May 2023 4:08 UTC · 62 points · 2 comments · 1 min read · LW link

rationalfiction.io—publish, discover, and discuss rational fiction

rayalez · 31 May 2016 12:02 UTC · 12 points · 12 comments · 1 min read · LW link

Progress review: genome sequencing—June 2019

Yuxi_Liu · 29 Jun 2019 7:58 UTC · 25 points · 3 comments · 5 min read · LW link

CFAR’s new focus, and AI Safety

AnnaSalamon · 3 Dec 2016 18:09 UTC · 51 points · 88 comments · 3 min read · LW link

Announcing the Replacing Guilt audiobook

gianlucatruda · 25 Jun 2021 13:38 UTC · 65 points · 1 comment · 1 min read · LW link

Visible Thoughts Project and Bounty Announcement

So8res · 30 Nov 2021 0:19 UTC · 246 points · 103 comments · 13 min read · LW link · 1 review

ML Alignment Theory Program under Evan Hubinger

6 Dec 2021 0:03 UTC · 82 points · 3 comments · 2 min read · LW link

Skilling-up in ML Engineering for Alignment: request for comments

23 Apr 2022 15:11 UTC · 19 points · 0 comments · 1 min read · LW link

Announcing the Clearer Thinking Regrants program

spencerg · 17 Jun 2022 13:14 UTC · 36 points · 1 comment · 1 min read · LW link

Radio Bostrom: Audio narrations of papers by Nick Bostrom

PeterH · 9 Aug 2022 8:56 UTC · 12 points · 0 comments · 2 min read · LW link (forum.effectivealtruism.org)

Announcing the Distillation for Alignment Practicum (DAP)

18 Aug 2022 19:50 UTC · 23 points · 3 comments · 3 min read · LW link

Introducing the Existential Risks Introductory Course (ERIC)

19 Aug 2022 15:54 UTC · 9 points · 0 comments · 7 min read · LW link

High-Impact Psychology (HIPsy): Piloting a Global Network

Inga G. · 29 Sep 2022 18:16 UTC · 8 points · 0 comments · 1 min read · LW link

Open application to become an AI safety project mentor

Charbel-Raphaël · 29 Sep 2022 11:27 UTC · 8 points · 0 comments · 1 min read · LW link (docs.google.com)

Reslab Request for Information: EA hardware projects

Joel Becker · 26 Oct 2022 21:13 UTC · 10 points · 0 comments · 1 min read · LW link