
Eris

Karma: 61

Ex-software developer, ex-QA. Currently an independent AI Safety researcher.

Prior to working in industry, I was involved in academic research on cognitive architectures. I’m a generalist with a focus on human-like AIs (I know a couple of things about developmental psychology, cognitive science, ethology, and computational models of the mind).

Personal research vectors: the ontogenetic curriculum and narrative theory. The primary theme is consolidating insights from various mind-related areas into a plausible explanation of human value dynamics.

A long-time LessWronger (~8 years). I’ve mostly been active in my local LW community (as a consumer and as an organizer).

Recently I organised a sort of peer-to-peer accelerator for anyone who wants to become an AI Safety researcher. Right now there are 17 of us.

I was part of AI Safety Camp 2023 (Positive Attractors team).

Open to funding. For the past 7 months I have self-funded my research.

Narrative Theory. Part 6. Artificial Neural Networks

Eris · 18 Jul 2023 9:22 UTC
3 points
0 comments · 2 min read · LW link

Narrative Theory. Part 4. Neural Darwinism

Eris · 17 Jul 2023 16:45 UTC
3 points
0 comments · 2 min read · LW link

Narrative Theory. Part 3. Simplest to succeed

Eris · 16 Jul 2023 14:41 UTC
4 points
0 comments · 1 min read · LW link

Narrative Theory. Part 2. A new way of doing the same thing

Eris · 15 Jul 2023 10:37 UTC
2 points
0 comments · 1 min read · LW link

Introduction

30 Jun 2023 20:45 UTC
7 points
0 comments · 2 min read · LW link

Inherently Interpretable Architectures

30 Jun 2023 20:43 UTC
4 points
0 comments · 7 min read · LW link

Positive Attractors

30 Jun 2023 20:43 UTC
6 points
0 comments · 13 min read · LW link

Anton Zheltoukhov’s Shortform

Eris · 15 Jun 2023 8:56 UTC
2 points
6 comments · 1 min read · LW link

Intro to Ontogenetic Curriculum

Eris · 13 Apr 2023 17:15 UTC
19 points
1 comment · 2 min read · LW link

AISC 2023, Progress Report for March: Team Interpretable Architectures

2 Apr 2023 16:19 UTC
14 points
0 comments · 14 min read · LW link