Someone on Hacker News had the idea of putting COVID patients on an airplane to increase air pressure (which is part of how ventilators work, due to Fick’s law of diffusion).
Could this genuinely work?
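For reference, the relationship being invoked is Fick's first law of diffusion; in respiratory physiology it is usually stated as a rate of gas transfer across a membrane proportional to the partial-pressure difference (this is the standard textbook form, not something taken from the linked thread):

```latex
% Fick's first law (one-dimensional form):
% J = diffusive flux, D = diffusion coefficient, dC/dx = concentration gradient
J = -D \frac{dC}{dx}

% Applied to gas exchange across the alveolar membrane
% (standard physiology form; A = membrane surface area, T = membrane
% thickness, P_1 - P_2 = partial-pressure difference of the gas):
\dot{V}_{\mathrm{gas}} \propto \frac{A \, D \, (P_1 - P_2)}{T}
```

One caveat worth noting: airliner cabins are pressurized *below* sea-level pressure (roughly 75 kPa at cruise), so a flying airplane would reduce, not increase, the oxygen partial-pressure gradient.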
Hey, I’m a student at the University of Copenhagen in Bioinformatics/Computer Science and I’d like to help any way I can. If there’s anything I can do to help let me know.
Actually, as a tournament player I feel I can help explain the slowness:
The article suggests that this isn’t due to increased computational speed or focus, but I think that’s wrong. Playing slowly doesn’t imply thinking slowly. In a chess game, you have a fixed amount of time overall, and when the position is very complicated players will often spend half an hour delving into variations and sub-variations. If it’s hard to concentrate, they may simply fall back on moves that require little calculation, and play faster.
Thoughts on Timothy Snyder’s “On Tyranny”?
Suppose it were discovered with a high degree of confidence that insects could suffer a significant amount, and almost all insect lives are worse than not having lived. What (if anything) would/should the response of the EA community be?
This is a cool idea! My intuition says you probably can’t completely solve the normal control problem without training the system to become generally intelligent, but I’m not sure. Also, I was under the impression there is already a lot of work on this front from antivirus firms (i.e. spam filters, etc.)
Also, quick nitpick: We do for the moment “control our computers” in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.
Perhaps I was a bit misleading, but when I said the net utility of the Earth may be negative, I had in mind mostly fish and other animals that can feel pain. That was what Singer was talking about in the opening essays. I am fairly certain the net utility of humans is positive.
Let me also add that while a sadist can parallelize torture, it’s also possible to parallelize euphoria, so maybe that mitigates things to some extent.
Will it be feasible in the next decade or so to actually do real research into how to make sure AI systems don’t instantiate anything with any non-negligible level of sentience?
I read somewhere NK is collapsing, according to a top-level defector. Maybe it’s best to wait things out.
Thanks for this topic! Stupid questions are my specialty, for better or worse.
1) Isn’t cryonics extremely selfish? I mean, couldn’t the money spent on cryopreserving oneself be better spent on, say, AI safety research?
2) Would the human race be eradicated if there is a worst-possible-scenario nuclear incident? Or merely a lot of people?
3) Is the study linking nut consumption to longevity found in the link below convincing?
http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094
And if so, is it worth a lot of effort promoting nut consumption in moderation?
I’m not surprised. But I also don’t see much utility from this study; most people already believe that coffee helps them focus.
More specifically, what should the role of government be in AI safety? I understand tukabel’s intuition that it should have nothing to do with it, but if an arms race unfortunately occurs, maybe having a government regulatory framework already in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.
It becomes uncomfortable for me to stay in bed more than about half an hour after waking up.
Wow, that had for some reason never crossed my mind. That’s probably a very bad sign.