I would be curious whether you think the following take is naive or reasonable.
It seems to me that a lot of bad AI decisions boil down to building for scale and, in doing so, letting go of the kind of environment that is conducive to good thinking. Yet the VC and general entrepreneurial scaling model is not well aligned with that purpose, so many of the organisations we see end up going down exactly that path.
Isn’t it then very important to provide a space where ambitious people can work on something real, in an environment optimised for the precursors of calm and friendly thought? (Since most other places will be optimised for scale.)
I think you can put fun, curiosity and a positive impact direction at the forefront of an organisation without falling into the traps you’ve described. The trick, I think, is to avoid a hardcore EA-style impact-evaluation frame, since that leads to fear, pressure and generally worse decision-making. I’ll see whether this holds empirically, but that’s also why I’m asking what you think: I’ve had similar thoughts about the EA + startup sphere, and this is the response/solution I’ve come up with (and am trying).