I’m inclined to agree, but I want to add that security is all connected. There are several direct causal paths from compromised user data to a compromised dev workstation (and vice versa).
Do you think the point of adding nuclear close calls isn’t to move public policy in a direction that’s less likely to produce a nuclear accident? That’s a political purpose. It’s not party political, but it’s political.
Of course I believe it serves that purpose. I also believe that the most recent edit in all of Wikipedia at my time of writing, deleting a paragraph from the article on Leanna Cavanagh (a character from some British TV show I’d never heard of) serves to decrease the prominence of that TV show, which will weaken whatever message or themes it carries (such as bringing attention to Yorkshire, where the show is set).
So, this is an empty criticism.
Similarly, I don’t know who “the account behind the edit you point to” is, since I linked to two different revisions, both of which cover edits by multiple authors. But I checked the edit history of one of them: user Simfish (whose real-life identity I shan’t reveal at this moment). He has a bunch of edits on the “Timeline of Nordstrom” article, and I don’t know what that has to do with EA.
I’m not sure this conversation has any more productive purpose. You keep harping on a specific defense of Wikipedia culture: that any hostility encountered by my peers was justified because we were a paid special interest group. I’ve stated several reasons why those justifications did not apply at the time hostility was first encountered, and I see you continuing to try to find ways to make those criticisms apply. Needless to say, this is a silly battle, since I’m the one with all the details.
I can say that this experience is not leaving me any more desirous of editing Wikipedia, so I’m at least one person with whom you’ve not yet succeeded in your original goal.
Edit: Okay, I just found Simfish (and his real name) on a list of people whom Vipul paid, and found that Vipul Naik’s timeframe overlapped with the FLI group. I have to partly retract the details behind my thesis above. I can still maintain it, because I do not recognize anyone else on Vipul’s list as having a Boston/FLI connection.
Edit 2: Neither of these articles appear on the list of articles sponsored by Vipul.
It was paid editing for a political agenda. From an EA perspective, paying someone to do paid editing or political lobbying is completely fine. On the other hand, you have the “money isn’t speech” side, which considers it bad to use money to lobby or to get someone to change Wikipedia according to your political interests.
Putting aside that a volunteer project by a non-profit is not paid editing, and that I take some issue with arguments that improvements to the page on nuclear close calls are “political”:
I mean that some individuals later in this group, before any organized effort by the FLI existed, had dabbled in editing some of these same articles, for exactly the pure motives that you advocate editing for, and encountered entrenched (and perhaps unreasonable) opposition.
From the Wikipedia perspective, there’s a difference between a Wikipedia user group that does a Wikipedia-editing session together (which is great) and an organization that has a project to change Wikipedia according to its agenda.
Our perspective was that we were merely adding better information, improving accuracy, and giving fair summaries of the arguments.
I expect similar groups would say the same.
You can judge for yourself. Here are some edits from the group:
The talk of an admin who controlled those pages with an iron fist came from before this project existed, presumably encountered by affiliates who had tried to edit in good faith exactly as you’ve advocated, but were shut down.
We were far from the first or only group that had Wikipedia-editing sessions. I’ve walked past signs at my university advertising them for other groups. Ours was quite benign. I’m reading some of the discussion from back then; their list included things like adding links for the page on nuclear close calls.
I’ve seen articles on hot-button topics where the Wikipedia article is far more slanted to one side than any of the mainstream media articles, and read the talk archives where a powerful few managed to invoke arcane rules to rule out all sources to the contrary. It’s stuff like this that makes me want out. I was a happy Wikipedian in high school in a previous decade, but I shall be no longer.
During my stint volunteering with the FLI, I worked on a project to improve Wikipedia’s coverage of existential risk. I don’t remember the ultimate outcome of the project, but we were up against an admin who “owned” many of those pages, and was hostile to many of FLI’s views.
This article, at least by appearances, is an excellent account of the problems and biases of Wikipedia: https://prn.fm/wikipedia-rotten-core/
The underlying thought behind both this and the previous post seems to be the notion that counterfactuals are somehow mysterious or hard to grasp. This looks like a good chance to plug our upcoming ICML paper, which reduces counterfactuals to a programming language feature. It gives a new meaning to “programming Omega.” http://www.zenna.org/publications/causal.pdf
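To give a flavor of the idea, here’s a minimal sketch (this is not the paper’s actual semantics; the toy model, variable names, and probabilities are all made up) of the standard abduction-action-prediction recipe for counterfactuals, implemented as rejection sampling over a small structural causal model:

```python
import random

# Toy structural causal model: exogenous noise -> endogenous variables
# via deterministic functions (the classic sprinkler/rain/wet-grass setup).
def model(u_rain, u_sprinkler, do_sprinkler=None):
    rain = u_rain < 0.3                      # P(rain) = 0.3
    sprinkler = (u_sprinkler < 0.5) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

def cf_wet_had_sprinkler_been_off(observed_rain, observed_sprinkler, n=100_000):
    """P(grass would be wet had the sprinkler been off | what we observed)."""
    random.seed(0)
    hits = wet_count = 0
    for _ in range(n):
        u_r, u_s = random.random(), random.random()
        rain, sprinkler, _ = model(u_r, u_s)
        # Abduction: keep only noise settings consistent with the observation.
        if rain == observed_rain and sprinkler == observed_sprinkler:
            hits += 1
            # Action + prediction: rerun the same world under do(sprinkler=False).
            _, _, wet_cf = model(u_r, u_s, do_sprinkler=False)
            wet_count += wet_cf
    return wet_count / hits

# We saw the sprinkler on and no rain; counterfactually turning the
# sprinkler off should leave the grass dry with probability 1.
print(cf_wet_had_sprinkler_been_off(observed_rain=False, observed_sprinkler=True))  # → 0.0
```

The point is that once the model is an ordinary program, the counterfactual query is just “re-execute with the same random seed under a different intervention,” which is the sense in which it becomes a language feature.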
It’s a small upfront cost for gradual long-term benefit. Nothing in that says one necessarily outweighs the other. I don’t think there’s anything more to be had from this example beyond “hyperbolic discounting.”
I think it’s simpler than this: renaming it is a small upfront cost for gradual long-term benefit. Hyperbolic discounting kicks in. Carmack talks about this in his QuakeCon 2013, saying “humans are bad at integrating small costs over time”: https://www.youtube.com/watch?v=1PhArSujR_A
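To make the discounting point concrete, here’s a toy calculation (all numbers are hypothetical, just for illustration) of how an exponential discounter and a hyperbolic discounter value the same small recurring benefit against a one-time cost:

```python
# Hypothetical numbers: a one-time cleanup cost of 10 units now, versus
# a benefit of 0.5 units per week for the next 100 weeks.
cost_now = 10.0
weekly_benefit = 0.5
horizon = 100

def exponential_value(rate=0.01):
    # Exponential discounting: a benefit at week t is worth benefit * (1-rate)^t.
    return sum(weekly_benefit * (1 - rate) ** t for t in range(1, horizon + 1))

def hyperbolic_value(k=0.5):
    # Hyperbolic discounting: a benefit at week t is worth benefit / (1 + k*t).
    return sum(weekly_benefit / (1 + k * t) for t in range(1, horizon + 1))

print(f"exponential discounter values the stream at {exponential_value():.1f}")  # well above 10
print(f"hyperbolic discounter values the stream at {hyperbolic_value():.1f}")    # below 10
```

With these (made-up) parameters, the exponential discounter happily pays the cost, while the hyperbolic discounter rejects the very same trade: the long tail of small future benefits gets crushed, which is exactly the “bad at integrating small costs over time” failure.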
But, bigger picture, code quality is not about things like local variable naming. This is Mistake #4 of the 7 Mistakes that Cause Fragile Code: https://jameskoppelcoaching.com/wp-content/uploads/2018/05/7mistakes-2ndedition.pdf
I read/listened to Lean Startup back in 2014. Reading it helped me realize many of the mistakes I had made in my previous startup, mistakes I made even though I thought I understood the “Lean startup” philosophy by osmosis.
Indeed, “Lean Startup” is a movement whose terminology has spread much faster than its content, creating a poisoned well that inoculates people against learning.
For example, the term “minimum viable product” has been mutated to have a meaning emphasizing the “minimum” over the “product,” making it harder to spread the actual intended idea. I blogged about this a long time ago: http://www.pathsensitive.com/2015/10/the-prototype-stereotype.html

Anyway, this post was a nice review! I had to guess on some of the questions, which is probably good; if I’m successful, it means I really internalized it. Thanks!
1. What is the difference between learning and validated learning?
Validated learning is learning that has been tested empirically against users/the marketplace.
2. True or false: “According to the author, a startup with exponential growth in metrics like revenue and number of customers is doing well.” Explain your answer.
False. This is only true if those metrics imply a path to long-term sustainable profitability. If the startup in question is Github, it probably does. If it’s Groupon...
3. Finish the sentence: “almost every lean startup technique we’ve discussed so far works its magic in two ways:”
By reducing inventory and increasing validated learning.
4. Ries argues that startups should pay more attention to innovation accounting than traditional accounting. Name two ways in which startups can change their financial metrics to accomplish innovation accounting.
(a) Estimate the value of patents/trade secrets and track on an internal balance sheet. (b) Require VoI calculations and add such numbers to an internal balance sheet.
5. Describe, concretely, what a car company’s supply chain would look like if it used push vs pull inventory.
Push: Each supplier pumps out parts, which are stockpiled in storerooms and warehouses. Each factory is regularly shipped, e.g.: all the stuff it needs for the next month. Pull: Each factory keeps just a few days of parts on hand and places frequent orders for the next few days’ worth.
6. Ries applies the pull inventory model to startups. But what is the unit that is being pulled, and where does it obtain the “pull signal”?
The unit is “aspects of the business that deliver value to customers.” The initial pull is validated market demand, which then translates to internal demand for features/process.
7. True or false: “Lean manufacturing is meant to give manufacturers an advantage in domains of extreme uncertainty”. Explain your answer.
True. Lean manufacturing allows manufacturers to retool and change their production much faster, greatly cheapening the cost of creating a suboptimal or unwanted product.
8. True or false: “Lean manufacturing is about harnessing the power of economies-of-scale.”
False. Lean manufacturing cheapens the cost of small runs, making the manufacturer more competitive at a lesser scale.
9. Ries discusses an anecdote of a family folding letters. The dad folds, stamps, and seals one letter at a time; whereas the kids begin by folding all letters, then stamping all, etc. Name two reasons Ries considers the dad’s method superior.
a) Not having to manage the intermediate outputs. b) Issues later in the pipeline are discovered earlier.
10. True or false: “A consequence of lean manufacturing is that the performance of each employee as an isolated unit, in terms of output per unit of time, might *decrease*.” Explain your answer.
True. Lean manufacturing comes with much higher switching costs. An employee’s output might shrink, but more of it will go towards useful ends.
11. Give an example of what a “large batch death spiral” might look like in practice.
My game team is running behind schedule. To catch up, I ask the artists to produce assets without waiting for them to be tested. This then creates a large batch of work for the programmers to implement the graphics, which produces a large batch of comments. This gets passed back to the artists, who do a huge number of revisions at once. The cycle continues.
12. According to Ries, the “Five why’s” method is a control system (though he doesn’t say so explicitly). What does it control, and how?
It puts a damper on major failures; it creates a mechanism by which a failure is turned into a systematic, mitigating change.
13. Explain the meaning of the Toyota proverb “Stop production so that production never stops.”
Do regular maintenance and improvement work to prevent larger future problems.
Causal inference has long been about how to take small assumptions about causality and turn them into big inferences about causality. It’s very bad at getting causal knowledge from nothing. This has long been known.

For the first: Well, yep, that’s why I said I was only 80% satisfied.
For the second: I think you’ll need to give a concrete example, with edges, probabilities, and functions. I’m not seeing how to apply thinking about complexity to a type causality setting, where it’s assumed you have actual probabilities on co-occurrences.
This post is a mixture of two questions: “interventions” from an agent which is part of the world, and restrictions on allowed interventions.
The first is actually a problem, and is closely related to the problem of how to extract a single causal model which is executed repeatedly from a universe in which everything only happens once. Pearl’s answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.
The second is about limiting allowed interventions. This looks like a special case of normality conditions, which are described in Chapter 3 of Halpern’s book. Halpern’s treatment of normality conditions actually involves a normality ordering on worlds, though this can easily be massaged to imply a normality ordering on possible interventions. I don’t see any special mileage here out of making the normality ordering dependent on complexity, as opposed to any other arbitrary normality ordering, though someone may be able to find some interesting interaction between normality and complexity.
Speaking more broadly, this is part of the larger issue that our current definitions of actual causation are extremely model-sensitive, which I find a serious problem. I don’t see a mechanistic resolution, but I did find this essay, which posits considering interventions in all possible containing models, extremely thought-provoking: http://strevens.org/research/expln/MacRules.pdf
Thought I’d share an anecdote that didn’t make it into the article: on how doing something yourself can make you a better outsourcer.
About 6 months ago, I went shopping for a logo for one of my projects. It helped greatly that I’ve spent a lot of time studying visual design myself.
I made a document describing what I wanted, including a mood board of other logos. I showed it to a logo design specialist recommended by a friend. He said “That’s the best logo requisition doc I’ve seen, and I’ve seen a lot.”
I also showed it to the designer I’ve been working with on other things (like sprucing up Powerpoint slides). She’s not a logo specialist, but quoted half the price.
I had the confidence that I’d be able to give good feedback to my designer even if she was less likely to knock it out of the park on her first try than the specialist. I went with her.
Many rounds of feedback later, I had a design. Showed it to some housemates and my advisor. “Dang, that’s a good logo.”
You can see it live at www.cubix-framework.com.
Oh, on the contrary: I think this article misses several things that are quite important (or brushes them under a single sentence like “[main principal/agent problems] are communication and risk”). The reason: I emphasized things fewer readers were likely to consider.
So the costs you’re describing are indeed real, and were brushed off to a corner. I think both of these fall under transaction costs, and #2 also under centralization and overhead. For #2, I think you mean something other than what “externality” means to me (a cost borne specifically by a non-party to a transaction); maybe “second-order cost”?
Thanks! This is good.
It’s not a physical good, but I had also been thinking that most of the price of renting a venue on the open market is trust (that you won’t mess up their space; whether they can give you the keys vs. needing someone to let you in), followed by coordination. Hence, why having a friend let you use their office’s conference room on a weekend to do an event might cost $0, while renting such a space might cost $1000.
To clarify: You’re not saying the wedding tax is because of insurance costs, as the article is asking about, right?
I have a number of issues with this post.
First, as others have mentioned, opponents are very much not equal. Further, timing is important: certain trades you should be much more or less likely to take near the end of the game, for example.

Second, I don’t think it’s valid to look at expected values when all you care about is rank. Expectation is very much a concept for when you care about absolute amounts.
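A toy endgame (all numbers hypothetical) shows how expected value and probability of winning can point in opposite directions when only rank matters:

```python
# Toy endgame: highest score wins, one move left.
# You have 90 points; your opponent will certainly finish with 100.
my_score, opp_score = 90, 100

# Move A (safe): gain 5 points for certain.
safe_ev = 5.0
safe_win_prob = 1.0 if my_score + 5 > opp_score else 0.0       # 95 < 100: never wins

# Move B (gamble): gain 16 points with probability 0.25, else nothing.
gamble_ev = 0.25 * 16                                          # = 4.0, worse in expectation
gamble_win_prob = 0.25 if my_score + 16 > opp_score else 0.0   # 106 > 100: wins 25% of the time

print(f"safe:   EV +{safe_ev}, P(win) = {safe_win_prob}")
print(f"gamble: EV +{gamble_ev}, P(win) = {gamble_win_prob}")
```

The safe move strictly dominates on expected points, yet the negative-relative-EV gamble is the only move with any chance of winning; maximizing expected score and maximizing rank are simply different objectives.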
Third, which perhaps sums everything up: I don’t see a valid notion of utility / utility maximization for board games, other than perhaps “probability of winning,” which makes this circular (“if you’re trying to win, you should make moves that increase your probability of winning”). Utility is meant to put a linear scale on satisfaction with a given state of the world. When discussing what to do in a board game, one usually presumes the objective is to win, and satisfaction derives ultimately from winning. The closest thing you usually see to a “utility” number on an intermediate state is a heuristic, as used in e.g.: chess AIs, where you might give yourself 5 points for having a pawn in a center square. If I’m remembering my undergrad correctly, these heuristics are intended to approximate log-likelihoods of victory, but they certainly lack the soundness required to think about expected utility.
Let’s switch out of Catan, and to a game that hopefully people here know but is more directly combative: Diplomacy. Pray tell me how you propose to assign a utility score to putting a navy in the Black Sea.
Did not know about the answer/comment distinction! Thanks for pointing that out.
Before I dig deeper, I’d like to encourage you to come bring these questions to Formal Methods for the Informal Engineer ( https://fmie2021.github.io/ ), a workshop organized by myself and a few other researchers (including one or two more affiliated with LessWrong!) specifically dedicated to helping build collaborations to increase the adoption of formal methods technologies in the industry. Since it’s online this year, it might be a bit harder to have these deep open-ended conversations, but you sound exactly like the kind of person we want to attend. (To set expectations, I should add that registrations already exceed capacity; I’m not sure how we plan to allocate spots.)

I’d also like to share this list of formal methods in industry: https://github.com/ligurio/practical-fm . In the past decade, there’s been a huge (in relative, not absolute terms) increase in the commercial use of formerly Ivory Tower tools.
You may also be interested in the readings from this course: https://6826.csail.mit.edu/2020/
BTW, I’ve been trying to think about whom I know that directly works in language-based security (defined as a narrow specialty). The main name that comes to mind that I personally know is Stephen Chong, but I think some of the Wyvern developers ( https://wyvernlang.github.io/ ) may also consider themselves in this category (and be easier to get ahold of than a Harvard professor).

I’m going to briefly hit some of your more narrow questions now. As an aside, be wary about saying “process” to a researcher: it’s used narrowly in ICSE circles to mean “methodology” (e.g.: Agile). I’m trying to mentally replace every use with “language-based security.”
the most security-promoting development processes that are currently in wide use.
I think Jim mostly gets it above: memory-safe languages and secure API design; also, implicitly, the type systems that make the latter possible.

There are a number of patterns in secure API design you might not know the names for, such as object capabilities ( https://en.wikipedia.org/wiki/Object-capability_model ).
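For a flavor of the object-capability pattern, here’s a small Python sketch (Python can’t actually enforce this at the language level, since ambient `open` and reflection remain available; true ocap languages like E enforce it for real, so treat this as the design idea only, with hypothetical names):

```python
import pathlib
import tempfile

# Ambient-authority style: any code in the process can open any file,
# so a buggy or malicious plugin could read anything.
#
# Capability style: a plugin can only exercise authority it was explicitly handed.
class ReadCap:
    """A token granting read access to one specific file, and nothing else."""
    def __init__(self, path):
        self._path = pathlib.Path(path)

    def read(self):
        return self._path.read_text()

def plugin(log_cap):
    # The plugin receives a capability to one log file; this interface gives
    # it no way to name or reach any other file.
    return len(log_cap.read())

# The host decides exactly which authority to delegate.
log = pathlib.Path(tempfile.mkdtemp()) / "app.log"
log.write_text("hello")
print(plugin(ReadCap(log)))  # → 5
```

The security property falls out of API design: authority flows only along object references, so auditing what a component can do reduces to auditing which capabilities it was passed.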
In a way, this question is kinda self-answered by the framing, since language-based security primarily refers to language design, which primarily means type systems—this is in contrast to techniques such as static analysis, testing, model checking, symbolic execution, and sandboxing.

Leaving the realm of Turing-complete programs, I’ll point you to PNaCl/RockSalt ( https://news.harvard.edu/gazette/story/2012/07/nacl-to-give-way-to-rocksalt/ ) and eBPF, both of which have verified sandboxes.

If you’re willing to be flexible about the “widely-used” statement, then individual companies have their own quirky languages, some of which have rather interesting restrictions. This language ostensibly used by OutSystems comes to mind ( https://link.springer.com/chapter/10.1007/978-3-642-19718-5_8 ), although I’m told by one of the authors that their actual implementation (as of 2013) is a bit simpler.
the most security-promoting development processes that are possible with recently developed technology
This is a tough question for me, because recent papers in this area tend to be about solving highly specific problems (the space of security problems is big, yo), and it takes a lot to generalize that to answer such a broad question. Also, I don’t follow latest developments that intensely. I’m going to take a pass.
processes that could come to exist 10 years away; processes that might exist 30-50 years from now.
Adam Chlipala thinks by that time we’ll be generating correct-by-construction code from specs. The Everest and Fiat-Crypto projects, both of which generate correct-by-construction cryptography code, are probably the two current best-known deployments of this.
perhaps some impossibility theorems that may bind even the creatures of the singularity.
“If it’s nontrivial to prove your program terminates, your program probably doesn’t run.”—an undergrad friend.
For common infrastructure software of today, no. Except, maybe, that I don’t know whether it’s been shown possible to build secure, reliable software atop a realistic model of hardware faults.
For Software 2.0 (i.e.: neural net in the loop), it’s a more open question that I don’t know much about.
For the kinds of reflective self-improvement software MIRI discusses, that’s part of their active research program (and generally outside the cognizance of PL/SE researchers).