What does it imply for things like AI governance and global coordination on x-risks?
I read the article a while ago and vaguely concluded there should be some implications here (though, as a non-expert, I’m largely uncertain about their direction or magnitude). Interested to hear what people think (especially people who focus on policy).
How about we let go of success, but keep doing challenging stuff anyway, just for the fun of it?
This sort of feels like Feynman’s attitude, despite him being extremely successful.
Also notable: NVIDIA trained a model half an order of magnitude larger: https://nv-adlr.github.io/MegatronLM
Seems way off from the actual release; any post-mortem?
A decent solution to the “who should you be accountable to” question, from the wisdom of the ancients (it shows thought on many of the considerations mentioned):
When in doubt, remember Warren Buffett’s rule of thumb: “… I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter.”
(might contain spoilers/private info) https://www.technologyreview.com/s/613961/elon-musks-brain-interface-company-is-promising-big-news-heres-what-it-could-be/
I think the Hypothesis is not about Open Threads specifically.
Tracking the employment/location and publishing/conference-attendance records of researchers would probably be a good source of data for this.
I think it was easier in that era; AFAIK they used conventional secrecy methods (project names, locations, misdirection, need-to-know, obfuscation) to pull it off. Feynman’s “Surely You’re Joking, Mr. Feynman!” and Rhodes’ “The Making of the Atomic Bomb” are good sources for examples (and recommended regardless).
Hence it has no motivation to manipulate humans through its answer.
I had somewhat overlooked this line, and yes, it’s a nod in the right direction.
Based on the transcript, this does not sound like a FOOM discussion (as in rapid self-improvement), other than a mention of “group learning” by autonomous cars, which is perhaps somewhat related. Also, the pregnancy-ad story is much more about pattern recognition on lots of data than about any serious AI.
Basically, JP is a complete layman in this area (unlike Gates, Musk, or, from the other side, Pinker) whose opinion counts for little, and he isn’t talking about FOOM anyway.
Is this much different from Scott Adams’ advice? https://dilbertblog.typepad.com/the_dilbert_blog/2007/07/career-advice.html
If you want something extraordinary, you have two paths:

1. Become the best at one specific thing.
2. Become very good (top 25%) at two or more things.
Do you mean Zvi’s “Change is bad”?
Second, we’re not actually comparing reason to tradition—we’re comparing changing things to not changing things. Change, as we know, is bad.
Request for clarification: isn’t a “reasonable solution” always a “change” when compared to the preexisting tradition?
I get that my usage hurts indirectly; my question was specifically whether, if everyone used FB occasionally and for similar purposes as I do, FB would still be detrimental. Harming other people because they have unhealthy usage patterns is still a concern, but a lesser one to me.
I use FB occasionally to stay in touch with family/friends.
I subscribe to interesting people on Twitter and find it a great source of intellectual information.
I know these are harmful to some people, and I’ve occasionally noticed addictive behavior in myself, but overall it seems like a good trade. If someone wants to explain/convince me that this is highly dangerous to me in a non-obvious way, or that **my kind** of usage is endangering the commons, I’m open to hearing it.