Guy on the right is Markus Kalisch.
Not sure about the one on the left—outside chance it’s Bertrand Russell but probably not.
Another class of routes is for the AI to obtain the resources entirely legitimately, through e.g. running a very successful business where extra intelligence adds significant value. For instance, it’s fun to imagine that Larry Page and Sergey Brin’s first success was not a better search algorithm, but building and/or stumbling on an AI that invented it (and a successful business model) for them; Google now controls a very large proportion of the world’s computing resources. Similarly, if a bit more prosaically, Walmart in the US and Tesco in the UK have grown extremely large, successful businesses based on the smart use of computing resources. For a more directly terrifying scenario, imagine it happening at, say, Lockheed Martin, BAE Systems or Raytheon.
These are slow takeovers rather than instant ones, but I think it is a mistake to imagine that a takeover must happen instantly. An AI that thinks it will be destroyed (or permanently thwarted) if it is discovered would take care to avoid discovery. Scenarios where it can be careful to minimise the risk of discovery until its position is unassailable will look much more appealing than high-risk short-term scenarios with high variance in outcomes. Indeed, it might sensibly seek to establish itself in the minds of people-in-general as an invaluable resource for humanity well before its full nature is revealed.
… and this is part of why my kids have always known that Santa and the Tooth Fairy are fun pretend games we play, not real. I really don’t see what they’re “missing out”: they seem no less excited about Santa coming than other kids, and get no fewer presents.
Not lying about it has all sorts of extra benefits. It makes keeping the story straight easy. It means I’m not dreading that awkward moment when they’ve half-guessed the truth and ask about it outright. And I wasn’t remotely tempted to tell them (as several people I know did) that the International Space Station pass on Christmas Eve was Santa on a warm-up run. Firstly, because that would mean I couldn’t tell them about the ISS and how you can see it with your own eyes if you look up at the right time, which is really cool. And secondly, because they’d have recognised it anyway.
It’s also helpful social practice in behaving with integrity but respectfully when around people who passionately defend their supernatural beliefs.
Short response: Check out the Cochrane Library on mental health. (Browse by Topics on the left-hand side, Expand, then click on Mental Health—as of just now there are 406 entries.)
Evaluating healthcare interventions is hard. The gold standard is a randomised controlled trial (RCT), published in a peer reviewed journal. But there are all sorts of problems with single trials, some of which you allude to here. It’s a really great idea to do a systematic review of all published trials and combine the good ones to get the best evidence available.
Doing this well is really hard—you need specialist expertise in the specific area to correctly interpret the primary literature (the RCTs), and specialist skills in systematic reviewing (as with RCTs themselves, there are many obvious and subtle issues about how to do them well). And it takes ages.
Luckily, there’s an international collaboration of people, called the Cochrane Collaboration, who get together to do this sort of thing, and have been beavering away for 20 years.
Unless you have significant resources, you are unlikely to do better on any topic than the latest available Cochrane Review. And even if you do have significant resources, you’d do well to start with it.
When a health issue pops up for me or someone I care about, I jump straight for the Cochrane review (and also any relevant guidelines and protocols, but that’s a tier down the evidence quality pyramid), and it’s like I’m getting a well thought-through briefing from the world’s experts on what we currently know about what works and what doesn’t.
I love it.
As a postscript, there is a whole field of healthcare informatics that looks at how to find good academic papers on a particular issue—I once ran a whole course on the topic (and related ones). The shortcut answer is ‘use Cochrane’; the long spadework answer is ‘search Medline’.
Good luck.
The question of which temperature / global climate is optimal is rarely discussed, suggesting that the current climate is unlikely to be optimal.
Agree that this topic is not widely discussed in mainstream climate science as such, but not sure your conclusion follows.
The chances of the current global climate being optimal would be pretty remote, except for the facts that one very powerful (but slow) optimisation process has been busy adapting the existing biota (including humans) to the climate, followed by an even more powerful and much more rapid optimisation process adapting human capital and expertise to the existing climate.
It’s not that we can’t adapt to another set of climate and weather patterns, it’s that it’ll have a cost. There certainly is a reasonable amount of discussion in the mainstream climate literature about the likely costs of adaptation. Which, of course, one needs to weigh against the likely benefits.
Predicting the local weather impact of global climate change is fearsomely difficult and uncertain (far more so than global temperature), which makes adding up the cost/benefit analysis extremely difficult. (And makes planning long-term capital investments difficult.)
It seems to me pretty likely that the current global climate is at least a local optimum for humans.
If that’s an interesting insight for you, you might get a kick out of realising that trees come from out of the air.
One evening, when I was in my mid-teens, my parents had gone out and were due back very late. For story-unrelated reasons there was a lot of tension, nervousness and worry in the household at that time. My younger brothers went to bed, and I stayed up a bit watching Cat’s Eye, a mild horror film written by Stephen King.
In the final part of the film, a girl is threatened by a vicious troll, a short, ugly, nasty creature with a dagger. It repeatedly creeps in to her bedroom in the night, first slaughtering her pet parrot, and then trying to kill her by sucking her breath out. She’s defended by a stray cat, but unfortunately when her parents come in, there’s no sign of the troll, only the cat, so the parents don’t believe her and blame the luckless animal for the mayhem.
While I was watching this, one of my brothers came in from his bedroom, clearly upset. He’d heard something creeping in to his bedroom, first opening the door, then walking across the floor. He was scared. I instantly thought of the vicious troll from the film, but with my rational brain knew it couldn’t possibly be that. I also knew he hadn’t seen the film. So I tried to reassure him, and talked about how the house makes noises in the floorboards when the central heating turns off—which had just happened. He wasn’t remotely convinced: he knew fine what the usual house-settling noises were, and this was something different. It was something with feet, and small, no more than a foot tall.
I was a bit creeped out, but as the older brother I put on a brave, reassuring face, went with him in to his bedroom, and searched it thoroughly. We found nothing. With a bit of persuasion he went back to bed. I went back to the film.
About fifteen minutes later he came back, absolutely terrified. The thing, whatever it was, had come back, opened his door, and walked around on its little feet. It totally wasn’t the house settling, it was footsteps. I wondered whether he’d overheard or seen the film, and was imagining the troll, but I was pretty sure he hadn’t. He was convincing: he wasn’t the sort to get that upset at something wholly imaginary, and was able to give clear detail about what he had heard when questioned. So by now I was really quite creeped out. With my rational brain I knew that the vicious troll couldn’t be real and in our house, but there was clearly something going on. My emotions were running pretty high, and I really didn’t want to take on the role of the wrongly-unbelieving parents from the film. Which of course made me pretty unconvincing at reassuring my poor brother. I went with him to check his bedroom, and again we found nothing.
He was too scared to sleep on his own, so I stayed with him. “If anything does come in, it’ll have to come past me first, and I’m pretty tough and I’ll be ready,” I told him with the best teenage bravado I could muster. Of course, nothing happened with me on watch, and eventually, he fell asleep.
It was my own bedtime by then, so I got myself ready for bed and locked the doors and turned off all the lights except the porch and hall lights for my parents’ return. That in itself was slightly spooky, which didn’t help.
I lay down in bed and turned off the bedside light. My mind was still racing, but eventually I found myself starting to get a little sleepy.
Suddenly, I was wide awake and awash in serious adrenaline reaction. My bedroom door had just opened an inch or two, and my body was in full-on fight-or-flight-or-freeze mode. I froze. Had I imagined it, in a going-to-sleep sort of way? No: as I watched in horror, the door opened another couple of inches. I’d been in the dark long enough that my eyes were fully dark-adapted, and from where I was lying in bed, I could see the doorway from about a foot high upwards, dimly but distinctly backlit from the hall light, and there was nothing there. Whatever had opened the door was less than a foot tall. So definitely not my parents coming home and checking on me, then. Now I was really scared. My hyper-alert state led to massive subjective time dilation: all this took only a few seconds, but it felt like minutes.
It got worse. I heard footsteps. Small but quite distinct footsteps. Nothing remotely like the house settling. The sort of footsteps something less than a foot high would make. Exactly like my brother had described. Exactly like the vicious troll. Whatever it was stopped for a moment. I could hardly breathe.
Then it started again, clearly walking towards me in my bed. I’m not sure I’ve ever been as scared as I was at that moment.
Rationally, I knew it couldn’t be a vicious troll come to kill me, but emotionally I was certain of it. I thought furiously, taking advantage of the extra subjective time. Whatever it was, I wasn’t going to just lie there and let it do whatever it wanted. I sized up my situation. I had no obvious weapons or things-that-could-be-weapons to hand or in easy reach, but on the plus side, I was clearly much bigger than it was, and reasonably fit and strong. Whatever it was clearly intended to surprise me in my bed, but I reckoned I could seize the tactical advantage by surprising it. So far I’d just lain there silently, as if asleep. I decided to seize the initiative and confront it in a rush. This was classic battlefield thinking: under desperate pressure, I didn’t seek and evaluate alternatives, I just quickly checked over the first plan that came in to my mind, and although it didn’t seem great, it seemed better than doing nothing, so I went for it. I visualised what I would do, got my muscles ready, then moved. I leapt out of bed, hurling off the blankets in the direction of the thing, and roared as loudly as I could as I charged towards it.
Bhe bja ubhfrubyq png unq pbzr va gb gur orqebbz ybbxvat sbe fbzrjurer jnez gb frggyr qbja sbe n anc. Ur jnf nofbyhgryl greevsvrq ol guvf qvfcynl, ghearq gnvy, naq syrq.
I took the survey.
I, like many others, was very amused at the structure of the MONETARY AWARD.
I’m not sure it was an advisable move, though. There’s an ongoing argument about the effect of rewards on intrinsic motivation. But few would dispute that incentives tend to incentivise the behaviour they actually reward, rather than the behaviour the rewarder would like to incentivise. In this instance, the structure of the reward appears to incentivise multiple submissions, which I’m pretty sure is not something we want to happen more.
In some contexts you could rely on most of the participants not understanding how to ‘game’ a reward system. Here, not so much, particularly since we’d expect the participants to know more game theory than a random sample of the population, and the survey even cues such participants to think about game theory just before they submit their response. Similarly, the expectation value of gaming the system is so low that one might hope people wouldn’t bother—but again, this audience is likely to have a very high proportion of people who like playing games to win in ways that exercise their intelligence, regardless of monetary reward.
So I predict there will be substantially more multiple submissions this time compared to years with no monetary reward.
I’m not sure how to robustly detect this, though: all the simple techniques I know of are thwarted by using a Google Form. If the prediction is true, we’d expect more submissions this year than last year—but that’s overdetermined since the survey will be open for longer and we also expect the community to have grown. The number of responses being down would be evidence against the prediction. A lot of duplicate or near-duplicate responses aren’t necessarily diagnostic, though a significant increase compared to previous years would be pretty good evidence. The presence of many near-blank entries with very little but the passphrase filled in would also be very good evidence in favour of the prediction.
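As a sketch of the sort of check I have in mind (hedged heavily: the file and column names below are made up for illustration, since I don’t know what the Form’s export actually looks like), here is how one might count exact duplicates and near-blank rows in a CSV export of the responses:

```python
# Hypothetical sketch: count exact-duplicate and near-blank rows in a
# CSV export of survey responses. 'survey_responses.csv' and the
# 'passphrase' column name are made up for illustration.
import pandas as pd

df = pd.read_csv('survey_responses.csv')
answers = df.drop(columns=['passphrase'])  # ignore the free-text passphrase

# Rows identical to an earlier row in every answer column
print('exact duplicates:', answers.duplicated().sum())

# Rows with almost nothing filled in (two or fewer non-empty answers)
print('near-blank rows:', (answers.notna().sum(axis=1) <= 2).sum())
```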
(I used thinking about this as a way of distracting myself from thinking what the optimal questionnaire-stuffing C/D strategy would be, because I know that if I worked that out I would find it hard to resist implementing it. Now I think about it, this technique—think gamekeeper before you turn poacher—has saved me from all sorts of trouble over my lifespan.)
I know several people who have done and enjoyed lucid dreaming.
I did it for a while as a student, but stopped. I started with classic techniques for noticing whether I was dreaming (a word written on the back of my hand worked well for me), and after a month or so got good enough at it that I could tell dreams from reality just by perceiving the dream-like quality of my perceptions. The dreams were good fun. It even changed my experience of waking life, making it more dream-like: more vivid and more fuzzy at the same time.
But eventually I noticed that I was finding it a lot harder to do challenging thinking tasks, like a difficult programming job, or a problem sheet, that I’d previously had no trouble with. Waking life had got less interesting—I’d basically be clockwatching until I could go back to bed and have lucid dreams again. The dream-like quality to waking life was also getting problematic—I’d find myself almost completely zoned out in situations where I really didn’t want to be and consequently got in trouble.
When I noticed all this, I was a bit freaked out, so I stopped completely. I trained myself out of the habit of noticing near-constantly whether I was dreaming or not, and developed a new habit of forgetting dreams once I was awake. (Mainly by not thinking about them once I’m awake—the ‘don’t think of a white tiger’ trick is to deliberately think about something else.) Things returned to the previous normal quite quickly, and I haven’t done it since.
Looking back, I’m fairly sure I’d become quite seriously sleep-deprived. My guess is my lucid dreaming was making my sleep less restful. Now I have a lot more experience of sleep deprivation, I know that one of the insidious features of chronic sleep deprivation (for me at least) is that it zaps my metacognitive abilities as fast as it zaps my other cognitive functions. Which leads to a Dunning-Kruger death spiral: not only do I get less smart, I get less good at telling how smart I am.
I expect that many people who do lucid dreaming won’t have that problem: I mention it as something worth keeping an eye on. But it’s perhaps wise not to do the experimenting in the run-up to e.g. big exams, work deadlines, interviews, long drives, etc.
One of my absolute favourite lucid dreams was flying. I’ve since done a bit of flying light aircraft as a (very expensive) hobby. The thing about lucid dreams is that you’re aware that they are only dreams. In my experience, actually living your favourite dream is harder work, but way better.
I’d add another call for caution with this approach. This got longer than I meant; the short version is: beware of getting in to an arms race with smart people, particularly ones you love, because one or other of you will lose.
A bright child will be able to out-manoeuvre you in areas where they are motivated and you are not. If not now, soon. Think about it: there will already be things they are better at than you. (Some of those games, for instance.) Better not to rely long-term on a strategy that only works if you are able to continue to out-think them.
To spell it out, the risk is motivating him to avoid you learning about his behaviour, rather than motivating him to avoid the behaviour.
That was my experience as a child. Between about 11 and 16, I hung out with a bunch of troublemakers, but was regularly able to evade almost all the negative authority-imposed consequences their actions led to, largely by being much better at subterfuge than the others in the group. (The others in the group were among the least bright in the cohort.) My parents were both smart—but I knew more about what I was up to than they did, and was much more motivated in practice to avoid punishment than they were to enforce discipline. And it did my relationship with them no favours to regularly succeed in hoodwinking them.
(A couple of aside anecdotes to add colour: I recall being punished by a bright teacher for some infraction. I protested the size of the punishment—I admitted a minor wrong, but (truthfully) claimed that I hadn’t been involved in the main naughtiness. They didn’t buy it. A while later, evidence emerged backing up my case that I hadn’t done it. The teacher said, “Well, count the unjustified bit of the punishment as being for all those times you did do it and weren’t caught.” Which I don’t think was meant to strongly emphasise to me the vital importance of not being caught, but it did. I am also perversely proud of a new school rule being instituted because I had semi-successfully argued that I shouldn’t be punished for breaking a rule that didn’t exist. Luckily that arms race was abandoned by mutual consent before it got out of hand.)
This is also my experience as a parent. Obviously, I don’t know of any instances where my kids have evaded my ‘surveillance’ entirely successfully. But I have caught some cases close to the limit of my ability to detect, and it seems very unlikely that they do things behind my back only right up to the edge of my ability to detect and never over it.
I’d advocate trying for more genuinely negotiated engagement with them. It’s really hard, and not something that you can just do like that. But I certainly try for “we urgently need a discussion about whether that action is a good idea because we seem to disagree strongly” as a frontline response ahead of “do that again and I will stop you having X that you like”. (Another bonus to the discussion approach is that it leaves the door open to the kid convincing me to change my mind and thus coming out of the situation a winner.)
Back on the original topic, I very much expect that taking an active interest in what the kid’s up to, and how bored they are or not, and trying to keep them positively engaged (as in this post!) is an excellent step to avoiding the negative outcome here.
Starting today, Monday 25 November 2013, some Stoic philosophers are running “Stoic Week”, a week-long mass-participation “experiment” in Stoic philosophy and whether Stoic exercises make you happier.
There is more information on their blog.
To participate, you have to complete the initial exercises (baseline scores) by midnight today (wherever you are), Monday 25 November.
Generally, you should not be in the habit of doing things that have a 0.1% chance of killing you. Do so on a daily basis, and on average you will be dead in less than three years.
Indeed!
It’s even worse than that might suggest: 0.999^(3*365.25) ≈ 0.334, so after three years you are almost exactly twice as likely to be dead as alive.
To get to 50%, you only need 693 days, or about 1.9 years. Conversely, you need a surprising length of time (about 6,900 days, or 18.9 years) to reduce your survival chances to 0.001.
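For anyone who wants to check the arithmetic themselves, here’s a minimal sketch in Python, assuming nothing beyond the 0.1%-per-day death chance above:

```python
import math

p_daily = 0.999  # daily survival probability for a 0.1%-lethal activity

# Survival probability after three years of daily exposure
print(p_daily ** (3 * 365.25))              # ~0.334

# Days of daily exposure until survival drops to a given level
print(math.log(0.5) / math.log(p_daily))    # ~693 days (~1.9 years)
print(math.log(0.001) / math.log(p_daily))  # ~6,904 days (~18.9 years)
```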
The field of high-availability computing seems conceptually related. This is often considered in terms of the number of nines—so ‘five nines’ is 99.999% availability, or <5.3 min downtime a year. It often surprises people that a system can be unavailable for the duration of an entire working day and still hit 99.9% availability over the year. The ‘nines’ sort-of works conceptually in some situations (e.g. a site that makes money from selling things can’t make money for as long as it’s unavailable). But it’s not so helpful in situations where the cost of an interruption per se is huge, and the length of downtime—if it’s over a certain threshold—matters much less than whether it occurs at all. There are all sorts of other problems, on top of the fundamental one that it’s very hard to get robust estimates for the chances of failure when you expect it to occur very infrequently. See Feynman’s appendix to the report on the Challenger Space Shuttle disaster for amusing/horrifying stuff in this vein.
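For the same reason, here’s a rough sketch of the ‘nines’ arithmetic, just multiplying out minutes per year:

```python
# Allowed downtime per year for a given number of nines of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

for nines in range(2, 6):
    availability = 1 - 10 ** -nines
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} available -> {downtime:,.1f} min/year down")

# Three nines (99.9%) allows ~526 minutes, i.e. roughly one working day;
# five nines (99.999%) allows only ~5.3 minutes.
```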
Very big and very small probabilities are very very hard.
I’d definitely avoid making a big deal out of it being educational or related to school. (Unless their educational experience is very unusual.) This is cool, interesting stuff you’re giving them! Obviously, this relies on you being able to sell that idea to the child.
If the direct sales approach seems unlikely to work, you can make it available without much fanfare but give just enough of a hook for their curiosity. (If they’re incurious, that’s probably the place that’ll yield the most benefit.)
My parents—I now realise—did a lot of this, “happening” to leave well-written books on subjects they knew I was interested in around the place. So, for instance, leaving books about sex, reproduction and puberty lying around when I was about 11 or 12. We had an adult encyclopedia, which was kept with my parents’ serious/valuable books, but they said if I really wanted to, I was allowed to have a look, as a special privilege. So long as I was careful with them and didn’t damage them because they were special. So I sat there for hours and hours and days and days with my fingers stuck in the pages, in much the way I do now with browser tabs and Wikipedia.
Also helps greatly if the books are actually good and interesting. The better you know the kid and their interests, the better you’ll be able to (a) pick things they will be interested in, and (b) convince them that it is interesting.
To be fair to the medieval, their theories about how one can build large, beautiful buildings were pretty sound.
This is the beginning of a very good idea. Happily, many, many highly competent educational researchers have had it already, and some have pursued it to a fair degree of success, particularly in constrained-domain fields (think science, technology, engineering, maths, medicine). It certainly seems to be blooming as a field again these last 5-10 years.
Potentially-useful search terms include: intelligent tutoring systems, AI in Education, educational data mining.
One particularly nifty system is the Pittsburgh Science of Learning Center’s DataShop, which is a shared, open repository of learner interactions with systems designed to teach along these lines. The mass of data there helps get evidence of what sequence of concepts actual learners find helpful, rather than what sequence teachers think they will.
Here’s 2013’s Prediction Thread.
Bedford was frozen in 1967; how hard would it be to either collect or assemble a set of yearbooks, describing what’s happened since then, and storing a small library of such reference texts at both CI and Alcor?
I think that, at least, is a solved problem, or at least as near-to-solved as we’re likely to get, so no effort need be made on that front.
Wikipedia has encyclopedic overviews of each century—C20th, C21st—with further information readily available, down to far more detail than a revivee is likely to want, apart from those areas and people they had a personal or individual interest in. There are significant, well-organised and seemingly-sustainable efforts in place to keep this information up to date, to keep it safe, and to keep it readily available.
I think just giving them a tablet and a couple of minutes’ instruction on navigating Wikipedia would work very nicely for that particular job. It’d also get them started with the online world, which is arguably the biggest shift in Westerners’ daily lives since 1967.
(And if the process has temporarily impaired their vision or motor skills, happily Wikipedia is readily available in a wide variety of accessible formats.)
Also, they were not just AIDS researchers but AIDS activists and campaigners. The conference they were going to was expecting 12,000-15,000 delegates (depending on the report); it’s the most prominent international conference in the area, but far from the only one. As you say, a terrible loss, particularly for those close to the dead. The wider HIV/AIDS community will be sobered, but it will not be sunk. If nothing else, they coped with far higher annual death rates before effective therapies became widespread in the developed world.
The story of this story does helpfully remind us that the other ‘facts’ about this situation—which we know from the same media sources—may be similarly mistaken.
I spent quite a lot of time many years ago doing my own independent checks on astronomy.
I started down this line after an argument with a friend who believed in astrology. It became apparent that they were talking about planets being in different constellations to the ones I’d seen them in. I forget the details of their particular brand of astrology, but they had an algorithm for calculating a sort-of ‘logical’ position of the planets in the 12 zodiacal signs, and this algorithm did not match observation, even given that the zodiacal signs do not line up neatly with modern constellations. They were scornful that I was unable to tell them where, say, Venus would be in 12 years’ time, or where it was when I was born.
So challenged, I set to.
The scientific algorithms for doing this are not entirely trivial. I got hold of a copy of Jean Meeus’ Astronomical Algorithms; it took me quite a lot of work to understand them, and even longer to implement them so I could answer that sort of question. They are hopelessly and messily empirical (which I take as a good sign): there is a daunting number of coefficients. Eventually I got it working, and could match observation to prediction of planetary positions to my satisfaction—when I looked at them, the planets were where my calculations said they should be, more or less.
It’s hard with amateur equipment to measure accurate locations in the sky (e.g. how high and in which direction is a particular star at a particular time), but relative ones are much easier (e.g. how close is Venus to a particular star at a particular time). The gold standard for this sort of stuff is occultations—where you predict that a planet will occult (pass in front of) a star. There weren’t any of those happening around the time I was doing it, but I was able to verify the calculations for other occultations that people had observed (and photographed) at the date and times I had calculated.
These days, software to calculate this stuff—and to visualise it, which I never managed—is widely available. There are many smartphone apps that will show you these calculations overlaid on to the sky when you hold your phone up to it. (Although IME their absolute accuracy isn’t brilliant, which I think is due to the orientation sensors being not that good.) This makes checking these sorts of predictions very, very easy. Although of course you can’t check that there isn’t, say, a team of astronomers making observations and regularly adjusting the data that gets to your phone.
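If you want to replicate a cut-down version of this exercise today without transcribing Meeus’ coefficients by hand, something like the following minimal sketch should do it. It assumes the third-party Python library Skyfield (and the JPL ephemeris file it downloads on first run); the date is just an example:

```python
# Minimal sketch: where is Venus, as seen from Earth, at a given moment?
# Assumes Skyfield (pip install skyfield); de421.bsp is a JPL ephemeris
# that Skyfield downloads automatically on first use.
from skyfield.api import load

ts = load.timescale()
planets = load('de421.bsp')
earth, venus = planets['earth'], planets['venus']

t = ts.utc(1999, 8, 11, 10, 30)  # around the August 1999 total eclipse
ra, dec, distance = earth.at(t).observe(venus).apparent().radec()
print(ra, dec, distance)  # compare against what you actually see in the sky
```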
I was also able to independently replicate enough of Fred Espenak’s NASA eclipse calculations to completely convince me he was right. (After I found several bugs in my own code.) Perhaps the most spectacular verification was replicating the calculations for the solar eclipse of 11 August 1999. I was also able to travel to the path of totality in France, and it turned up slap on time and in place. This was amazing, and I strongly urge anyone reading this to make the effort to travel to the path of totality of any eclipse they can.
Until I’d played around with these calculations, I hadn’t appreciated just how spectacularly accurate they have to be. You only need a teeny-tiny error in the locations of the Sun/Moon/Earth system for the shadow cast by the Moon on the Earth to be in a very different place.
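To put a rough number on that (my own back-of-envelope figure, not from the original calculations): at the Moon’s distance, one arcsecond of error in its apparent position shifts the shadow by nearly two kilometres on the ground, and a path of totality is typically only on the order of a hundred kilometres wide.

```python
# Back-of-envelope sketch (my own rough figures): how far does the Moon's
# shadow move on the ground per arcsecond of error in the Moon's
# apparent position?
import math

moon_distance_km = 384_400               # mean Earth-Moon distance
one_arcsec_rad = math.radians(1 / 3600)  # one arcsecond in radians
print(moon_distance_km * one_arcsec_rad)  # ~1.9 km of shadow shift
```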
I also replicated the calculations for the transit of Venus in 2004. I was able to observe it, and it took place exactly as predicted so far as I was able to measure—to within, say, 10 seconds or so. (I didn’t replicate the calculations for the 2012 transit: no time, and I’d forgotten how my ghastly mess of code worked. I wasn’t able to observe it either, since it was cloudy where I was at the time.)
More recently, you can calculate Iridium flares and ISS transits. Again, you have to be extremely accurate in calculations to be able to predict where they will occur, and they turn up as promised (except when it’s cloudy). And again, there are plenty of websites and apps that will do the calculations for you. With a pair of high-magnification binoculars you can even see that the ISS isn’t round.
All this isn’t complete and perfect verification. But it’s pretty good Bayesian evidence in the direction that all that stuff about orbits and satellites is true.