As far as I can tell, the main point of your post is that ControlAI’s approach is evidently working, more so than other people’s approaches, so people not following ControlAI’s approach is evidence of them being bad and being under the control of a malign Spectre. If you make such claims, you need to provide evidence that ControlAI’s approach is actually working well!
As I said, I don’t see the 35 MPs signing your statement as good evidence for that. Your briefing 150+ UK reps is also not evidence of the effectiveness of ControlAI’s approach. If you could point to many of these reps making AI takeover risk one of their core issues, that would be evidence, but I don’t see that happening.
I agree I had forgotten about the two debates in the House of Lords, sorry about that. I still don’t find this very convincing evidence of ControlAI’s effectiveness: my understanding is that the House of Lords doesn’t have much power, and that it debates 5-10 issues on every working day. The fact that there have been two debates on superintelligence doesn’t sound very impressive to me.
Instead, most AI Policy Organisations and Think Tanks act as if “Persuasion” was the bottleneck. This is why they care so much about respectability, the Overton Window, and other similar social considerations.
Before we started the DIP, many of these experts stated that our topics were too far out of the Overton Window. They warned that politicians could not hear about binding regulation, extinction risks, and superintelligence. Some mentioned “downside risks” and recommended that we focus instead on “current issues”.
They were wrong.
In the UK, in little more than a year, we have briefed 150+ lawmakers, and so far, 112 have supported our campaign about binding regulation, extinction risks and superintelligence.
The point is that many experts stated that this was far out of the Overton window, and that they were wrong.
That this was a symptom of being systematically avoidant.
A year ago, ControlAI bet, through its strategy, that they were wrong. This article summarises why we think they were wrong, including both indirect and direct evidence.
I don’t know who these experts were and what exactly they told you at the time. I can imagine them being more wrong than you. I’m certainly not in favor of most forms of “focusing on the current issues” because it often leads to people scaremongering in a kind of dishonest way. For example, I’m glad that ControlAI stopped focusing on deepfakes.
So if these so-called experts advised you to focus on deepfakes, I think that was wrong. But if they advised you to focus on getting more support for UKAISI, and supporting better eval practices and so on instead of advocating for an immediate international moratorium on superintelligence, then I think the jury is still very much out on which strategy is more effective.
Your piece is centrally not advocating against running misleading campaigns on the effects of deepfakes. Instead, you are railing against people working in lab safety teams, eval orgs and AISIs, and the policy orgs and philanthropists trying to support them. And then you write:
We have reliable pipelines that can scale with more money. We have good processes and tracking mechanisms that give us a good understanding of our impact. We clearly see what needs to be done to improve things.
You are making the case that your work is better than that of the people supporting more marginalist steps (more funding for UKAISI, better evals, incremental technical work aimed at catching AIs red-handed), and you are claiming that everyone who decides to work at evals orgs, AISIs, or more marginalist policy orgs, instead of following ControlAI’s clearly superior “reliable pipelines” to impact, is somehow morally corrupt. For this claim, you’d need to show that your methods are actually clearly working better than what other people are doing. So I think it’s fair to point out that all your evidence for your efforts working is pretty underwhelming.
Your piece is centrally not advocating against running misleading campaigns on the effects of deepfakes.
First. I don’t think ControlAI has run campaigns that were misleading on the effects of deepfakes.
Second. The section you quote is centrally about not running more campaigns like DeepFakes! It is part of the comparison with what we have done before, which includes and explicitly mentions DeepFakes!
Here is how it starts:
We have engaged with The Spectre. We know what it looks like from the inside.
To get things going funding-wise, ControlAI started by working on short-lived campaigns. We talked about extinction risks, but also many other things. We did one around the Bletchley AI Safety Summit, one on the EU AI Act, and one on DeepFakes.
By now, you have made quite a few misrepresentations and errors of basic reading comprehension.
I think you could have avoided them easily, and that you simply got triggered. I would invite you to pause.
--
The point of this piece is just to show that there has in fact been a cluster of orgs that have optimised to not talk about extinction risks. AISIs, evals orgs and “more marginalist policy orgs” are central examples.
I do not think you personally deny that there was optimisation to not talk about extinction risks.
I believe you just think it is okay, because it may be plausibly defended on consequentialist grounds if one buys into a specific set of beliefs.
--
But many people do not agree with this naive consequentialist reasoning, and I am writing this primarily for them. If you do not think honesty is morally worthwhile in itself, this is likely lost on you.
Here is an example of a thing many people consider morally bad, but where you might disagree. For many people, if you do believe in extinction risks and work at an AI Policy org, it is in fact bad and dishonest to not make it abundantly clear.
Similarly, if the UK AISI is a “marginalist policy org” whose people primarily care about extinction risks, it is bad that its trend report does not mention extinction risks.
--
Unfortunately, many people with such deontological intuitions were scared into avoiding honesty on the grounds that honesty was doomed to failure (or even corrosive).
This article shows that this scare-mongering was groundless. Furthermore, it shows that it was not coincidental, and instead the result of a clear optimisation process.
First. I don’t think ControlAI has run campaigns that were misleading on the effects of deepfakes.
The campaign ControlAI ran (https://controlai.com/deepfakes) seems misleading to me, in the sense that it’s warning of deepfakes being a much bigger deal than they are, following the standard misleading-persuasion playbook by citing extremely cherry-picked statistics, and generally just dressing up everything in vibes without making any arguments.
My guess is you also made it in bad faith, as I would be surprised if you actually thought deepfakes were super bad; instead you are mostly working on this for slowing-down-AI reasons (and if not, I would be happy to try to convince you that in the absence of x-risk, substantially regulating AI for deepfake reasons would be a really bad idea and obviously doesn’t pass cost-benefit analyses).
generally just dressing up everything in vibes without making any arguments.
Citing the second card on the page you linked, which you can see by scrolling down once:
Deepfakes can steal your face, your voice, and your identity.
They are often used to create sexually abusive material, commit fraud, and harass individuals.
Anyone with internet access can make a deepfake of whoever they want.
All they need is one photo of you or a 10 second voice clip.
This page is part of a public campaign, so it’s not written in LessWrong English. My attempt to translate:
Deepfakes can greatly facilitate identity theft and scams compared to what could be done previously.
Deepfakes can be used to make porn that features people who didn’t give their consent (and just to be clear, the majority of people consider this extremely morally abhorrent and a moral priority, especially if there were no x-risk or if they are not aware of x-risks).
It is so easy to make deepfakes that it’s only a matter of time until they become ubiquitous, once models that can output deepfakes are made publicly available.
You are vulnerable even if you’re not a public figure / don’t post a lot of content online.
The page does go on to make a few more arguments, which I don’t have time to point out now. These arguments are clearly spelled out near the top of the page.
We have each other on Signal, and you can DM me on LW. I don’t think you ever sent me a case for either of your points, nor had anyone follow up with me. So by default, I don’t care much for it.
I also think you are confused about how campaigns work. There is a campaign page (which you link), which usually acts as the home page. If you want the arguments, you have to go to the report page (https://controlai.com/deepfakes/deepfakes-policy#report).
To your points:
in the sense that it’s warning of deepfakes being a much bigger deal than they are
How big of a deal do you think we think they are? How big of a deal do you think we made them to be, based on which elements from our copy?
If there’s a large gap there, I can understand why you would think that we were misleading.
If not, I think you just feel bad about our campaign. (For other reasons, which may independently be good or bad.) But fwiw, in general, I do not care much for “I indirectly made Habryka feel bad online” or “I disagree with Habryka”, given that we are not friends nor regular intellectual sparring partners.
I would be happy to try to convince you that in the absence of x-risk, substantially regulating AI for deepfake reasons would be a really bad idea and obviously doesn’t pass cost-benefit analyses
Please do.
You have not written this case or even shared it with me, so I have no idea why you think I would be convinced by it; I likely spent more time on this than you did.
It may have been better to do this while we were campaigning on DeepFakes rather than now, but alas. I would still be interested though: I have other relevant views correlated with it.
David and Oli are your allies. They’re endeavoring to help you see yourself and the world more clearly. The tone of your replies here seems to indicate that you may have lost sight of that.
For the record, this is impressive to me, and I’m the executive director of CeSIA, which also conducts awareness-building work in France.