epistemic status: thought about this for like 15 minutes + two deep research reports
A contrarian pick for an underrated technology area is lie detection through brain imaging. It seems like it will become much more robust and ecologically valid through compute-scaled AI techniques, and it's likely to be much better at lie detection than humans, because we didn't have access to images of the internals of other people's brains in the ancestral environment.
On the surface this seems like it would be transformative: brain-scan key employees to make sure they're not leaking information! Test our leaders for dark-triad traits (ok, that's a bit different from detecting specific lies, but still). However, there's a cynical part of me that sounds like some combo of @ozziegooen and Robin Hanson, which notes that we already have methods (like significantly increased surveillance and auditing) that we could use for greater trust, and which we don't employ.
So perhaps this won’t be used except for the most extreme natsec cases, where there are already norms of investigations and reduced privacy.
Quick note: I think Robin Hanson is more on the side of “we’re not doing this because we don’t actually care”. I’m more on the side of, “The technology and infrastructure just isn’t good enough.”
What I mean by that is that I think it's possible to get many of the benefits of surveillance with minimal costs, using a combination of Structured Transparency and better institutions. This would be a software + governance challenge.
Related quicktake: https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1#25tKsX59yBvNH7yjD