Yes, everything is terrible. But it seems like, if you’re writing a book and discover something like the Omegaven story, it might be worth writing a blog post just about that and seeing if it can get some publicity via social media? (I settled for resharing the 2013 NBC article.)
skybrian
Looks like there is a detailed Wiki page about this.
I’m reminded of the Oblique Strategies playing cards. Obviously the cards don’t provide any sort of rigor. But having them around might be useful for thinking creatively. Might the same apply for Less Wrong jargon?
From an outside (but sympathetic) perspective, it seems like this post would have been better if it had started with “Why we’re starting a new rationalist community in Manchester” and taken it from there? As it is, I wonder how many people made it to the end.
Remove Intercom?
Prize money helps, but you’d also need to find relevant experts who know enough about each sub-field to tell whether the standards are indeed high. (Usually they are called “judges,” but perhaps we could call them “peers?”)
It might help to narrow the question: instead of looking for “high standards” (which is vague), the prizes could be awarded based on whether papers already published elsewhere appear to use good statistics. Then you’d only need reviewers who are experts in statistics.
I’d like to see citations for the claims about manganese and selenium.
Where do we report bugs? For example, I was unable to leave a comment here using Chrome on an Android tablet. (Desktop is okay.)
Also, is source available? I might be able to make suggestions.
Thanks! Bug filed. Regarding the Intercom chat bubble, I did post one comment a while back (accidentally in the wrong chat room for LessWrong), but got no response, and I don’t see any other responses in either chat room. Also, the indicator always says “away”. To a naive user it looks abandoned. Are you sure it’s working? Maybe the old chat room should be deleted?
I’m happy to see a demonstration that Eliezer has a good understanding of the top-level issues involving computer security.
One thing I wonder, though, is why making Internet security better across the board isn’t a more important goal in the rationality community. Although very difficult (for reasons illustrated here), it seems immediately useful and also a good prerequisite for any sort of AI security. If we can’t secure the Internet against nation-state level attacks, what hope is there against an AI that falls into the wrong hands?
In particular, building “friendly AI” and assuming it will remain friendly seems naive at best, since it will be copied and then the friendly part will be modified by hostile actors.
It seems like someone with a security mindset will want to avoid making any assumption of friendliness and instead work on making critical systems that are simple enough to be mathematically proven secure. I wonder why this quote (from the previous post) isn’t treated as a serious plan: “If your system literally has no misbehavior modes at all, it doesn’t matter if you have IQ 140 and the enemy has IQ 160—it’s not an arm-wrestling contest.”
We are far from being able to build these systems but it still seems like a more plausible research project than ensuring that nobody in the world makes unfriendly AI.
Even if there’s no “friendly part,” it seems unlikely that learning the basic principles behind building a friendly AI would make someone unable to build an unfriendly AI by accident. I’m happy that we’re making progress with safe languages, but there is no practical programming language in which it’s the least bit difficult to write a bad program.
It would make more sense to assume that at some point, a hostile AI will get an Internet connection, and figure out what needs to be done about that.
I think the odds are good that, assuming general AI happens at all, someone will build a hostile AI and connect it to the Internet. I think a proper understanding of the security mindset is that the assumption “nobody will connect a hostile AI to the Internet” is something we should stop relying on. (In particular, maintaining secrecy and international cooperation seems unlikely to work. We shouldn’t assume they will.)
We should be looking for defenses that aren’t dependent on the IQ level of the attacker, similar to how mathematical proofs are independent of IQ. AI alignment is an important research problem, but it doesn’t seem directly relevant for this.
In particular, I don’t see why you think “routing through alignment” is important for making sound mathematical proofs. Narrow AI should be sufficient for making advances in mathematics.
I mean things like using mathematical proofs to ensure that Internet-exposed services have no bugs that a hostile agent might exploit. We don’t need to be able to build an AI to improve defences.
I’m just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.
I’m wondering if anyone can recommend some recordings that they like on YouTube or Spotify of this sort of music? I don’t know if I’ve heard it before.
Coronavirus tests and probability
I’m wondering: what’s a good way to keep better tabs on what people in the rationalist community are talking about without reading everything? There is a lot of speculation, but sometimes very useful signal.
I feel like I’m reasonably in touch from reading Slate Star Codex and occasionally checking in here, and yet the first thing I saw that really got my attention was “Seeing the Smoke” getting posted on Hacker News. I guess I’m not following the right people yet?
Frivolous speculation about the long-term effects of coronavirus
Yeah, I don’t see it changing that drastically; more likely it will be a lot of smaller and yet significant changes that make old movies look dated. Something like how the airports changed after 9/11, or more trivially, that time when all the men in America stopped wearing hats.
Maybe compare with epistemic learned helplessness?
http://squid314.livejournal.com/350090.html