See https://joshuafox.com for more info.
JoshuaFox (Joshua Fox)
What about a Turing Test variant in which such inquiries are banned?
That would be possible. Plenty of people don’t know much about this topic. If you had such a judge, do you think actually doing a Turing Test (or some variant) for ChatGPT would be a good idea?
Nice! I am surprised we don’t hear more about attempts at a Turing Test, even if it is not quite there yet.
That looks pretty close to the level of passing a Turing Test to me. So is there a way of trying a full Turing Test, or something like it, perhaps building on the direction you show here?
Do you think there is a place for a Turing-like test that determines how close it is to human intelligence, even if it has not reached that level?
ChatGPT isn’t at that level.
That could well be. Do you think there is a place for a partial Turing Test, as in the Loebner Prize, to determine how close it is to human intelligence, even if it has not reached that level?
What’s up with ChatGPT and the Turing Test?
I thought there was a great shortage of cadavers. How did they manage to get them for a non-medical school, indeed for use by non-students? Also, I am quite impressed that any course, particularly in the Bay Area, is $60 or free.
Nice! Is there also a list of AI-safety corporations and non-profits, with a short assessment of each where feasible: goals, techniques, leaders, number of employees, liveness, progress to date?
I organized that, so let me say this:
Neither that online meetup nor the invitation to Vassar was officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. In the course of our conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the online meetup in 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for two hours or more before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than any of his earlier ones, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
Jonathan Blow at the AstralCodexTen Online Meetup
Sam Altman at the AstralCodexTen Online Meetup
Bram Cohen at the AstralCodexTen Online Meetup
Daniel M. Ingram at the AstralCodexTen Online Meetup
Thank you. Can you point me to a page on FLI’s latest grants? What I found was from a few years back. Are there other organizations whose grants are worthy of attention?
Thank you. Can you link to some of the better publications by Wentworth, Turner, and yourself? I’ve found mentions of each of you online but I’m not finding a canonical source for the recommended items.
I found this about Steve Byrnes
This about Beth Barnes
Thank you! That is valuable. I’d also love to get educated opinions on the quality of the research of some of these, with a focus on foundational or engineering research aimed at superhuman-AGI XRisk (done mostly, I think, at MIRI, FHI, and by Christiano), but that article is great.
Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: it is meant to identify an AI that is definitively at or above human level.