Security Mindset

Last edit: 16 Feb 2022 0:36 UTC by abramdemski

Security Mindset is a predisposition to think about the world in a security-oriented way; a large part of this way of thinking is being constantly on the lookout for exploits.

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

Security requires a particular mindset. Security professionals — at least the good ones — see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

Really, we can’t help it.

-- Bruce Schneier, The Security Mindset, Schneier on Security

[I’m unsure of the origin of the term, but Schneier is at least an outspoken advocate. --Abram]

In 2017, Eliezer Yudkowsky wrote a pair of posts on the security mindset: Security Mindset and Ordinary Paranoia and Security Mindset and the Logistic Success Curve (both listed below).

Among other things, these posts advanced the idea that true security mindset is not just the tendency to spot lots of security flaws. Spotting flaws is not in itself enough to build secure systems, because you could keep finding flaws in your design forever, patching each specific weak point and moving on to find yet more.

Building secure systems requires coming up with strong positive arguments for the security of a system. These positive arguments have several important features:

  1. They have as few assumptions as possible, because each assumption is an additional chance to be wrong.

  2. Each assumption is individually very certain.

  3. The conclusion of the argument is a meaningful security guarantee.

The mindset required to build tight security arguments like this is different from the mindset required to find security holes.
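
To make points 1 and 2 concrete, here is a toy model (an illustration added for this page, not something taken from the posts): if an argument rests on n assumptions, each holding with probability at least p, the union bound says the guarantee fails with probability at most n·(1−p). Ten assumptions held at 99% confidence each allow up to a 10% chance of failure, while two assumptions held at 99.9% cap it at 0.2%. A minimal Python sketch:

    # Toy model (illustrative, not from the posts): union bound on argument failure.
    # The guarantee can fail only if at least one assumption fails, so
    # P(failure) <= n * (1 - p), capped at 1.
    def max_failure_probability(n_assumptions: int, p_each: float) -> float:
        return min(1.0, n_assumptions * (1.0 - p_each))

    print(max_failure_probability(10, 0.99))   # ~0.1: ten fairly-certain assumptions
    print(max_failure_probability(2, 0.999))   # ~0.002: two very certain assumptions

The bound is pessimistic, but it captures why both features matter: every added assumption is one more way for the whole argument to fail.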

Advice Needed: Does Using a LLM Compomise My Personal Epistemic Security?

Naomi · 11 Mar 2024 5:57 UTC
17 points
7 comments · 2 min read · LW link

Social media use probably induces excessive mediocrity

trevor · 17 Feb 2024 22:49 UTC
6 points
11 comments · 12 min read · LW link

Training of superintelligence is secretly adversarial

quetzal_rainbow · 7 Feb 2024 13:38 UTC
15 points
2 comments · 5 min read · LW link

Protecting agent boundaries

Chipmonk · 25 Jan 2024 4:13 UTC
10 points
6 comments · 2 min read · LW link

Safety Data Sheets for Optimization Processes

StrivingForLegibility · 4 Jan 2024 23:30 UTC
15 points
1 comment · 4 min read · LW link

Assessment of AI safety agendas: think about the downside risk

Roman Leventov · 19 Dec 2023 9:00 UTC
13 points
1 comment · 1 min read · LW link

Interpreting the Learning of Deceit

RogerDearnaley · 18 Dec 2023 8:12 UTC
30 points
8 comments · 9 min read · LW link

Where Does Adversarial Pressure Come From?

quetzal_rainbow · 14 Dec 2023 22:31 UTC
16 points
1 comment · 2 min read · LW link

Apply to the Conceptual Boundaries Workshop for AI Safety

Chipmonk · 27 Nov 2023 21:04 UTC
48 points
0 comments · 3 min read · LW link

Helpful examples to get a sense of modern automated manipulation

trevor · 12 Nov 2023 20:49 UTC
33 points
3 comments · 9 min read · LW link

Balancing Security Mindset with Collaborative Research: A Proposal

MadHatter · 1 Nov 2023 0:46 UTC
9 points
3 comments · 4 min read · LW link

5 Reasons Why Governments/Militaries Already Want AI for Information Warfare

trevor · 30 Oct 2023 16:30 UTC
32 points
0 comments · 10 min read · LW link

Sensor Exposure can Compromise the Human Brain in the 2020s

trevor · 26 Oct 2023 3:31 UTC
17 points
6 comments · 10 min read · LW link

AI Safety is Dropping the Ball on Clown Attacks

trevor · 22 Oct 2023 20:09 UTC
69 points
72 comments · 34 min read · LW link

Back to the Past to the Future

Prometheus · 18 Oct 2023 16:51 UTC
5 points
0 comments · 1 min read · LW link

Fixing Insider Threats in the AI Supply Chain

Madhav Malhotra · 7 Oct 2023 13:19 UTC
20 points
2 comments · 5 min read · LW link

Is AI Safety dropping the ball on privacy?

markov · 13 Sep 2023 13:07 UTC
50 points
17 comments · 7 min read · LW link

Biosecurity Culture, Computer Security Culture

jefftk · 30 Aug 2023 16:40 UTC
103 points
10 comments · 2 min read · LW link
(www.jefftk.com)

A potentially high impact differential technological development area

Noosphere89 · 8 Jun 2023 14:33 UTC
5 points
2 comments · 2 min read · LW link

The Security Mindset, S-Risk and Publishing Prosaic Alignment Research

lukemarks · 22 Apr 2023 14:36 UTC
39 points
7 comments · 6 min read · LW link

Legitimising AI Red-Teaming by Public

VojtaKovarik · 19 Apr 2023 14:05 UTC
10 points
7 comments · 3 min read · LW link

Cryptographic and auxiliary approaches relevant for AI safety

Allison Duettmann · 18 Apr 2023 14:18 UTC
7 points
0 comments · 6 min read · LW link

Even if human & AI alignment are just as easy, we are screwed

Matthew_Opitz · 13 Apr 2023 17:32 UTC
35 points
5 comments · 5 min read · LW link

Boundaries-based security and AI safety approaches

Allison Duettmann · 12 Apr 2023 12:36 UTC
42 points
2 comments · 6 min read · LW link

[Interview w/ Jeffrey Ladish] Applying the ‘security mindset’ to AI and x-risk

fowlertm · 11 Apr 2023 18:14 UTC
12 points
0 comments · 1 min read · LW link

Reliability, Security, and AI risk: Notes from infosec textbook chapter 1

Akash · 7 Apr 2023 15:47 UTC
34 points
1 comment · 4 min read · LW link

AI infosec: first strikes, zero-day markets, hardware supply chains, adoption barriers

Allison Duettmann · 1 Apr 2023 16:44 UTC
39 points
0 comments · 9 min read · LW link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 0:06 UTC
355 points
224 comments · 39 min read · LW link

(retired article) AGI With Internet Access: Why we won’t stuff the genie back in its bottle.

Max TK · 18 Mar 2023 3:43 UTC
5 points
10 comments · 4 min read · LW link

POC || GTFO culture as partial antidote to alignment wordcelism

lc · 15 Mar 2023 10:21 UTC
144 points
10 comments · 7 min read · LW link

Security Mindset—Fire Alarms and Trigger Signatures

elspood · 9 Feb 2023 21:15 UTC
23 points
0 comments · 4 min read · LW link

It’s time to worry about online privacy again

Malmesbury · 25 Dec 2022 21:05 UTC
65 points
23 comments · 6 min read · LW link

AI can exploit safety plans posted on the Internet

Peter S. Park · 4 Dec 2022 12:17 UTC
−15 points
4 comments · 1 min read · LW link

Why do we post our AI safety plans on the Internet?

Peter S. Park · 3 Nov 2022 16:02 UTC
4 points
4 comments · 11 min read · LW link

Builder/Breaker for Deconfusion

abramdemski · 29 Sep 2022 17:36 UTC
72 points
9 comments · 9 min read · LW link

LW Meetup @ DEFCON (Las Vegas) − 5-7pm Thu. Aug. 11 at Forum Food Court (Caesars)

jchan · 8 Aug 2022 14:57 UTC
6 points
0 comments · 1 min read · LW link

“Just hiring people” is sometimes still actually possible

lc · 5 Aug 2022 21:44 UTC
38 points
11 comments · 5 min read · LW link

Conjecture: Internal Infohazard Policy

29 Jul 2022 19:07 UTC
131 points
6 comments · 19 min read · LW link

Circumventing interpretability: How to defeat mind-readers

Lee Sharkey · 14 Jul 2022 16:59 UTC
112 points
12 comments · 33 min read · LW link

Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment

elspood · 21 Jun 2022 23:55 UTC
360 points
42 comments · 7 min read · LW link · 1 review

Do yourself a FAVAR: security mindset

lukehmiles · 18 Jun 2022 2:08 UTC
20 points
2 comments · 2 min read · LW link

Six Dimensions of Operational Adequacy in AGI Projects

Eliezer Yudkowsky · 30 May 2022 17:00 UTC
299 points
66 comments · 13 min read · LW link · 1 review

Security Mindset and Takeoff Speeds

DanielFilan · 27 Oct 2020 3:20 UTC
55 points
23 comments · 8 min read · LW link
(danielfilan.com)

On Seeing Through ‘On Seeing Through: A Unified Theory’: A Unified Theory

gwern · 15 Jun 2019 18:57 UTC
26 points
0 comments · 1 min read · LW link
(www.gwern.net)

Security Mindset and the Logistic Success Curve

Eliezer Yudkowsky · 26 Nov 2017 15:58 UTC
101 points
48 comments · 20 min read · LW link

Security Mindset and Ordinary Paranoia

Eliezer Yudkowsky · 25 Nov 2017 17:53 UTC
115 points
25 comments · 29 min read · LW link