Though we’re still actively hiring for the senior web developer role: https://ought.org/careers/web-developer
I’m finding it fruitful to consider the “exiles” discussion in this post alongside Hunting the Shadow.
Try harder to learn from tradition than you have been on the margin. Current = noisy.
What does “Current = noisy” mean here?
Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more Jessica Graham.
What does “learn more Jessica Graham” mean?
A majority of these choices are influenced by Bredesen’s book The End of Alzheimer’s, or by a prior source with similar advice.
Oh interesting. Do you know if anyone’s done an epistemic spot-check of The End of Alzheimer’s?
… health risks of fish oil while linking to a page saying fish oil doesn’t contain mercury. Is that not the health risk you were thinking of?
No good reason. I stopped taking it over health concerns like mercury (plus not noticing any effect).
I think I’m a bit paranoid about heavy metals from fish. Probably irrationally so.
Thanks! This meta-analysis of metformin makes it seem promising.
cf. Talent Stacks
(I’m helping Ought hire for the web dev role.)
Ought is based in SF (office in North Beach).
Ideally we’d find someone who could work out of the SF office, but we’re open to considering remote arrangements. One of our full-time researchers is based remotely and periodically visits the SF office to co-work.
And then it’s trivial to find a means to dispose of the threat: humans are fragile and stupid, and have created plenty of ready means of mass destruction.
If by “a lot of ready means of mass destruction” you’re thinking of nukes, it doesn’t seem trivial to design a way to use nukes to destroy or neutralize all humans without jeopardizing the AGI’s own survival.
We don’t have a way of reliably modeling the results of very many simultaneous nuclear blasts, and it seems like the AGI probably wouldn’t be able to reliably model this either unless it ran more empirical tests (which would be easy to notice).
It seems like an AGI wouldn’t execute a “kill all humans” plan unless it was confident that executing the plan would, in expectation, give it a higher chance of survival than not executing it. I don’t see how an AGI could become confident about high-variance “kill all humans” plans like using nukes without having much better predictive models than we do. (And it seems like more empirical data about what multiple simultaneous nuclear explosions do would be required to have better models for this case.)
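To put the comparison in rough expected-value terms (my own formalization, not anything from the comment I’m replying to): letting $S$ be the event that the AGI survives and $P$ the plan, the AGI should only act if

$$\Pr(S \mid \text{execute } P) > \Pr(S \mid \text{not execute } P).$$

For a high-variance plan like simultaneous nuclear strikes, the AGI’s uncertainty about its own predictive model puts wide error bars on the left-hand side, so becoming confident in that inequality seems to require better models (or more empirical data) than anyone currently has.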
Wouldn’t an AI following that procedure be really easy to spot? (Because it’s not deceptive, and it just starts trying to destroy things it can’t predict as it encounters them.)
First, on a historical basis, many of the greatest scientists were clearly aiming for explanation, not prediction.
Could you expand a bit more on how you view explanation as distinct from prediction?
(As I think about the concepts, I’m finding it tricky to draw a crisp distinction between the two.)
Here’s an archived version of the doc.
See Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”