Engineer at CoinList.co. Donor to LW 2.0.
Maybe I’m missing some historical context here.
For some reason a bunch of people started referring to him as “Big Yud” on Twitter. Here’s some context regarding EY’s feelings about it.
FWIW I believe “Yud” is a dispreferred term (because it’s predominantly used by sneering critics), and your comment wouldn’t have gotten so many downvotes without it.
Regarding secrecy, I’d prefer for AI groups to lean too much on the side of maintaining precautions about info-hazards than too much.
Was one of the “much”s in this sentence supposed to be a ‘little’? (My guess is that you meant to say that you want orgs to err on the side of being overly cautious rather than being overly reckless, but wanted to double-check.)
I would like to humbly suggest that you break blocks of text that are this big into multiple paragraphs.
Then it feels weird, seeing button “B,” to press the button knowing that it causes you to lose $1 in the real, actually-existing world.
Was that supposed to be “seeing button ‘A’”? (since A was the one who stands to lose a dollar, and B the one who stands to gain a dollar)
Makes sense, thanks!
What makes it unserious? Is it that there are too many assumptions baked into the scenario as described, so that it’s unlikely to match real challenges we will actually face?
The owners over-interpret and anthropomorphize the button “speech”
This is the biggest danger in my opinion. Hopefully, with rigorous analysis during the study and specifically set-up experiments, we’ll be able to better understand what level of communication the dogs are actually at.
I think this is most of what’s going on here. I’d guess that the owners have in fact taught their animals some new words and associations, but that they’re way over-interpreting what the dogs are “saying”.
You wouldn’t get such a misunderstanding with the Clever Hans effect.
You could get it with the seeing-shapes-in-clouds effect though.
There’s a typo in the title:
Is GPT-3 is...
What’s the distinction you’re making? A quick google suggests this as the definition for “feasibility”:
the state or degree of being easily or conveniently done
This matches my understanding of the term. It also sounds a lot like tractability / difficulty.
Are you thinking of it as meaning something more like “theoretical possibility”?
English pedant note: it should be either “how … look” or “what … look like”, but never “how … look like”.
“Extreme”, in this context, was meant to imply “far from the consensus expectation”.
FWIW, my interpretation of Eliezer’s comment was just that he meant high confidence.
Maybe here’s a compromise position: Strong evidence is common. I am in possession of probably millions of bits of information pertaining to x-risks and the future of humanity, and then the Doomsday Argument provides, like, 10 additional bits of information beyond that. It’s not that the argument is wrong, it’s just that it’s an infinitesimally weak piece of evidence compared to everything else.
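To put the “bits” framing on a standard footing (a sketch of the usual definition, which the comment itself doesn’t spell out): the evidential weight of an observation $E$ for a hypothesis $H$, measured in bits, is the log-2 likelihood ratio:

$$\text{bits} = \log_2 \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

On this measure, 10 bits corresponds to a likelihood ratio of $2^{10} \approx 1000{:}1$, which sounds large in isolation but is easily swamped when you’re already conditioning on millions of bits of ordinary observation.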
Thanks for making this point and connecting it to that post. I’ve been thinking that something like this might be the right way to think about a lot of this anthropics stuff — yes, we should use anthropic reasoning to inform our priors, but also we shouldn’t be afraid to update on all the detailed data we do have. (And some examples of anthropics-informed reasoning seem not to do enough of that updating.)
3. “Not long after, Google rocks the tech industry with a major announcement at I/O. They’ve succeeded in training a deep learning model to completely auto-generate simple SaaS software from a natural-language description.” Is this just like Codex but better? Maybe I don’t know what SaaS software is.
Yes, pretty much just Codex but better. One quick-and-dirty way to think of SaaS use cases is: “any business workflow that touches a spreadsheet”. There are many, many, many such use cases.
Adding to this — as I understand it, Codex can only write a single function at a time, while a SaaS product would be composed of many functions (and a database schema, and an AWS / Azure / GCP cloud services configuration, and a front-end web / phone app...).
It’s like the difference between 10 lines of code and the entirety of Gmail.
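To make the scale gap concrete, here’s a hypothetical sketch (in Python; the function name and data shapes are my own invention, not anything Codex-specific) of the single-function-sized task that current code models handle well. A SaaS product would stack hundreds of pieces like this on top of a schema, auth, deployment config, and a front end.

```python
from collections import defaultdict

def top_customers(orders: list[dict], n: int = 10) -> list[tuple[str, float]]:
    """Return the n customers with the highest total order value.

    `orders` is a list of dicts like {"customer_id": "c1", "amount": 19.99}.
    """
    totals: dict[str, float] = defaultdict(float)
    for order in orders:
        totals[order["customer_id"]] += order["amount"]
    # Sort by total spend, descending, and keep the top n.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```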
Formatting note — if you put a space between the ‘>’ and the next character, it’ll format correctly as a proper block quote.
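For example, writing `> To be, or not to be` (with the space) produces:

> To be, or not to be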
My understanding is that Geoff Anders and Andrew Critch each independently invented goal factoring, and had even been using the same diagramming software to do it! (I’m not sure which one of them first brought it to CFAR.)
Would it be fair to say that the error this post is addressing is analogous to the cook telling the botanists, “No, tomatoes aren’t fruits”?
I am pretty interested in ideas from people on how to reduce the bad parts of the social ritual.
One way to make it seem more serious (to me) would be to make the effects bigger. E.g. taking down the frontpage (or the whole site?) for a whole week rather than just a day.
To me, just ascribing more value to things without anything material about the situation changing sounds like inflation, not real growth.