Former tech entrepreneur (co-creator of the music software Sibelius). Among other things I now play the stock market, write software to predict it, and occasionally advise tech startups. I have degrees in philosophy.
bfinn
They’re not really calling for mob action (in almost all cases). It’s a rhetorical expression of hatred. Cf. saying ‘eat the rich’, which is not serious advocacy of cannibalism.
That’s not to say, though, that it’s ok to call for mob action, eg on social media, as it slightly increases the chance that some extremists might take it literally and act on it.
I’ve only just realised that a key part of the AI alignment problem is essentially Wittgenstein’s rule-following argument. (Maybe obvious, but I’ve never seen this stated before.)
His rule-following argument claims that it’s impossible to define a term unambiguously, whether by examples or rules or using other terms; indeed any definition is so ambiguous as to be consistent with any future application of the term. So you can’t even teach someone ‘+’ in such a way that when following your definition/rule/algorithm they will give your desired answer to a sum they haven’t seen before, eg 1000 + 1000 = 2000. They could just as ‘correctly’ give 3000 or −45.7 or pi. (I won’t explain why here.)
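The underdetermination point can be sketched in code. This is a toy illustration only; the names ‘plus’ and ‘quus’ follow Kripke’s well-known exposition of the argument, and the particular deviant rule here is made up for the example:

```python
# Two functions that agree on every sum a learner has seen,
# yet diverge on the unseen case 1000 + 1000.
# ('plus'/'quus' follow Kripke's exposition; the cutoff is arbitrary.)

def plus(a, b):
    return a + b

def quus(a, b):
    # Behaves like addition on all "small" inputs,
    # but returns something else beyond the examples seen so far.
    if a < 1000 and b < 1000:
        return a + b
    return 3000

seen_examples = [(2, 3), (10, 7), (500, 499)]
# Both rules fit the learner's entire history of examples...
assert all(plus(a, b) == quus(a, b) for a, b in seen_examples)

print(plus(1000, 1000))  # 2000
print(quus(1000, 1000))  # 3000
```

No finite set of examples distinguishes the two rules; infinitely many deviant functions fit any finite history.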
Cf no amount of training an AI to be ‘good’ etc will ensure that it remains so in novel situations.
I’m not convinced Wittgenstein was right (and argued against the rule-following argument for my philosophy masters FWIW); maybe a real philosopher more familiar with the topic could apply it usefully to AI alignment.
Having read a few studies myself I got a CO2 monitor (from AirThings, also monitors VOCs, temperature, humidity etc). From which I can confirm that CO2 builds to quite high levels in an unventilated room within an hour or two. But even leaving a window only slightly ajar helps a lot.
Apparently fan heating and air conditioning systems may or may not mix in air from outside—many just recirculate the same air—so switching these on may or may not help with ventilation.
Some studies suggest high CO2 also harms sleep—though again the research is inadequate. If so, sleeping with the window slightly open should help; if cold/noise makes this impractical, sleep with the bedroom door ajar (if there aren’t other people around) and a window open in another room. Or even if no window is open at all, having your bedroom door ajar seems to help by letting the CO2 out. I’ve done this for the last year, though I can’t be sure whether it’s helped my sleep.
A confounding factor is that it’s best to sleep in a cool room, which opening a window also achieves. Either way this is an argument for opening a window while you sleep.
Would be good to hear more of this
Many excellent examples and analysis. Obviously v long but no doubt others will find it useful source material.
Cancer is an interesting example I haven’t seen before, with suitably alarming connotations.
I don’t know; I had assumed so, but maybe not.
Re ‘AI is being rapidly adopted, and people are already believing the AIs’ - two recent cases from the UK of national importance:
In a landmark employment case (re trans rights), the judge’s ruling turned out to have been partly written by AI which had made up case law:
And in a controversy in which police banned Israeli fans from attending a soccer match, their decision cited evidence which had also been made up by AI (eg an entirely fictional previous UK visit by the Israeli team). The local police chief has just resigned over it:
Also with eg dog territory, the boundary markers aren’t arbitrary—presumably the reason dogs piss on trees & lampposts, which are not physical thresholds, is (a) they provide some protection for the scent against being removed eg by rain; (b) they are (hence) standard locations for rival dogs to check for scent, rather than having to sniff vast areas of ground; ie they are (evolved) Schelling points for potential boundary markers.
(Walls are different as they are both potential boundary markers and physical thresholds.)
According to the Wikipedia article above, the Frisch–Peierls memorandum included those two scientists’ suggestion that the best way to deal with their concern that the Germans would develop an atomic bomb was to build one first. But what they thought about the moral issues I don’t know.
When scientists first realised an atomic bomb might be feasible (in the UK in 1939), and how important it would be, the UK defence science adviser reckoned there was only a 1 in 100,000 chance of successfully making one. Nonetheless the government thought that high enough to instigate secret experiments into it.
(Obliquely relevant to AI risk.)
https://en.wikipedia.org/wiki/Frisch–Peierls_memorandum
Reminds me of when I was 8 and our history teacher told us about some king of England being deposed by the common people. We were shocked and confused as to how this could happen—he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found this hilarious.)
Great post. Three comments:
If it were the case that events in the future mattered less than events now (as is the case with money, because money sooner can earn interest), one could discount far future events almost completely and thereby make the long-term effects of one’s actions more tractable. However I understand time discounting doesn’t apply to ethics (though maybe this is disputed by some).
That said, I suspect discounting the future instead on the grounds of uncertainty (the further out you go, the harder it is to predict anything), using say a discount rate per year (as with money) to model this, may be a useful heuristic. No doubt this is a topic discussed in the field.
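A toy sketch of that heuristic (the 10% annual rate is an arbitrary assumption, purely for illustration, not a recommendation):

```python
# Hypothetical illustration: discount estimated future impact for
# *uncertainty* rather than time preference. The rate is arbitrary.

def uncertainty_discounted_value(value, years_ahead, annual_discount=0.10):
    """Down-weight an estimated future impact by how far out it is,
    on the grounds that predictions degrade with horizon."""
    return value * (1 - annual_discount) ** years_ahead

print(round(uncertainty_discounted_value(100, 1), 2))   # 90.0
print(round(uncertainty_discounted_value(100, 50), 1))  # 0.5
```

At this (made-up) rate, effects 50 years out shrink to well under 1% of their face value, which is how such a scheme would make long-term planning tractable—at the cost of near-total insensitivity to the far future.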
Secondly, no doubt there is much to be said about what the natural social and temporal boundaries of people’s moral and other influence & plans are, eg family, friends, work, retirement, death (and contents of their will); and how these can change—eg if you gain or exercise power/influence, say by getting an important job, having children, or doing things with wider influence (eg donating to charity), which can be for better or worse.
Thirdly, a minor observation: chess has an equivalent to the Go thing about a local sequence of moves ending in a stop sign, viz. an exchange of pieces—eg capturing a pawn in exchange for a pawn, or a much longer & more complicated sequence involving multiple pieces, but either way ending in a ‘quiet position’ where not very much is happening. Before AlphaZero, chess programs considering an exchange would look at all plausible ways it might play out, stopping each move sequence only when a quiet position was reached. And in the absence of an exchange or other instability, they would stop a sequence after a ‘horizon’ of say 10 moves (and evaluate the resulting situation on the basis of the board position, eg what pieces there are and their mobility).
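A minimal sketch of that search pattern—resolve exchanges fully, cut quiet lines at a horizon. The tiny Node class is a toy stand-in for a real chess position, not engine code:

```python
# Toy sketch of pre-AlphaZero "quiescence" search: follow forcing
# capture sequences to the end, evaluate only at quiet positions,
# and cut ordinary (quiet) lines off at a fixed horizon.

HORIZON = 4  # plies; real engines used larger horizons

class Node:
    def __init__(self, score, captures=(), quiet_moves=()):
        self.score = score              # static evaluation (toy)
        self.captures = captures        # forcing replies: an exchange in progress
        self.quiet_moves = quiet_moves  # ordinary continuations

    def is_quiet(self):
        return not self.captures

def search(node, depth=0):
    # Unstable position: keep resolving the exchange, ignoring the horizon.
    if not node.is_quiet():
        return max(-search(child, depth + 1) for child in node.captures)
    # Quiet position at the horizon (or with nothing left to try): evaluate.
    if depth >= HORIZON or not node.quiet_moves:
        return node.score
    return max(-search(child, depth + 1) for child in node.quiet_moves)

# A pawn capture that gets recaptured: the line is only scored once
# the exchange has fully played out to a quiet position.
recapture = Node(score=0)                        # quiet after the exchange
take_pawn = Node(score=1, captures=(recapture,)) # looks like a free pawn...
root = Node(score=0, captures=(take_pawn,))
print(search(root))  # 0: the "free pawn" evaluates as an even trade
```

The point of the stop-sign rule is visible in the example: evaluating mid-exchange would report being a pawn up, whereas evaluating only at the quiet position correctly scores the trade as even.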
FWIW ‘directionally correct’ includes ‘right but for the wrong reasons’, ie right only by fluke, hence irrelevant & ignorable. Which isn’t what you want to include. Though it’s maybe not often used in that situation.
In London where I live, philosophy meetup groups are much better than this. A broader mix of people—few have philosophy degrees, few know any formal philosophy, some have no university degree, very many recent immigrants, though admittedly almost everyone is middle class. Almost always good conversations, with decent reasoning, including people taking contrary and controversial stances, but respectfully discussed and never any heatedness or performative wokeness. Discussions in groups of 4-6 people work best. (The main bad dynamic is if you get someone who talks too much and dominates a conversation.)
How about ‘out-of-control superintelligence’? (Either because it’s uncontrollable or at least not controlled.) Which carries the appropriately alarming connotations that it’s doing its own thing and that we can’t stop it (or aren’t doing so anyway).
Indeed building something you want, or that someone you know wants, is necessary, but not sufficient! I’d say it depends how much time you’re going to spend creating it and whether you have broader commercial ambitions at the outset.
If you’re creating something you’re going to use yourself anyway, that could well justify creating it (if it won’t take too long). Similarly if you’re creating it for someone else (as a favour, or who will pay you appropriately). Or if you can create a minimum viable product quickly to try out on people.
Also, particularly in the realm of short software projects, there’s a blurry line between creating something for fun/interest and doing so with serious commercial intentions, i.e. you could justify doing it speculatively without feeling you’d wasted your time if it goes nowhere.
But if you’re going to take months (or years) full-time creating something with a view to commercializing it, i.e. make a serious effort, it is remiss not to do basic research and evaluation first, to find out whether there really is a market for your thing (e.g. who customers would be, potential market size, what customers currently do instead, whether you can actually improve on that enough, how hard that might be, what customers would be prepared to spend), whether your thing should do what you think it should (i.e. its features, or indeed whether you’d be better off creating something else entirely), etc. It’s far cheaper to do basic research & planning than to spend months/years creating something speculatively and only then discover much/all of that was misguided.
explicitly without concern for how exactly you are going to commercialize. Indeed, most successful companies figure out where their margins come from quite late into their lifecycle.
The exact way you commercialize or get margins can of course change—but if you can’t figure out any way of making it work on paper, the chances of it succeeding in real life are slim.
(My LW article on this FWIW: Write a business plan already — LessWrong)
Good post. On one point, I think Landmines are useful in many fields, to warn against important beginners’ mistakes/misconceptions. Though (unless for big safety reasons) this should indeed be secondary to positive advice.
Eg with a startup, don’t spend lots of time creating a product before writing a business plan. The plan should come first, or at least early on, because it’s how you decide whether to create the product! (Something I’ve written about on here)
Re being deficient in vitamins, it’s worth taking a supplement containing all 23 essential micronutrients (every few days), as almost no one gets 100% of the recommended daily amount of all of them, which is nearly impossible to achieve from a plausible diet anyway. I.e. you are probably somewhat deficient in something.
Broadly agree—I overstated my point; of course some people don’t have these concepts. But I think there is a big gap between having these concepts as theory (eg IVT in pure math) and applying them in practice to less obvious cases.
(Cf Wittgenstein thought that understanding a concept just was knowing how to apply it—you don’t fully understand it until you know how to use it.)
To clarify, I think your criticism of utilitarianism/consequentialism is of a naive form of it that only looks at first-order effects. Not ‘proper’ utilitarianism. But yes no doubt many are naive like this, and it’s v hard to evaluate second- and higher-order effects (such as exploitation and coordination).
Also, this kind of naivety is particularly common on the left.