1) Evidence/Reasoning?
6) Evidence/Reasoning?
8) I thought the idea was that if it were discovered a year later, people would assume Bellatrix had died in her cell. That requires the death doll to decay, which might be implausible.
You seem to be making a fully general argument against action.
(This comment is on career stuff, which is tangential to your main points)
I recently had to pick a computer science job, and spent a long time agonizing over what would have the highest impact (among other criteria). I’m not convinced startups or academia have a higher expected value than working at a large company. I would like to be convinced otherwise.
(Software) Startups:
1) Most startups fail. It’s easy to underestimate this because you only hear the success stories.
2) Many startups are not solving “important” problems. They are solving relatively minor problems for relatively rich people, because that’s where the money is. Snapchat, Twitter, Facebook, and Instagram are examples.
3) Serious problems are complicated, and usually require more resources than a startup can bring to bear.
4) Financially: If you aren’t a founder, your share of the company is negligible.
(Computer Science) Academia:
1) My understanding is that there are dozens of applications for each tenure-track opening. So your chance of success is low, and your marginal advantage over the next-best applicant is probably low.
2) I trust markets more than grant committees for distributing money.
3) It seems easier to get sidetracked into non-useful work in academia.
A bold claim, since no one understands “the algorithms used by the brain”. People have been trying to “understand how intelligence works” for decades with no appreciable progress; all of the algorithms that look “intelligent” (Deep Blue, Watson, industrial-strength machine learning) require massive computing power.
I agree. My point is merely that super-human intelligence will probably not appear as a sudden surprise.
EDIT: I changed my OP to better reflect what I wanted to say. Thanks!
1) You are right; that was tangential and unclear. I have edited my OP to omit this point.
2) It’s evidence that it will take a while.
3) Real-time access to neurons is probably useless; they are changing too quickly (and they are changing in response to your effort to introspect).
I’m happy to be both “borderline tautological” and in disagreement with the prevailing thought around here :)
No one is working on cryonics because there’s no money or interest, and there’s no money or interest because no one is signed up for cryonics. Probably the “easiest” way to solve this problem is to convince the general public that cryonics is a good idea. Then someone will care about making it better.
Some rich patron funding it all sounds good, but I can’t think of a recent example where one person funded a significant R&D advance in any field.
Counterpoint: spending 40 hours a week on your job is a huge time commitment. It’s also a huge willpower drain (doing worthwhile things requires effort to grind out results, not just time). It’s hard for me to believe that the hours money allows you to “buy back” are worth the “wasted hours” on work. So it’s important that work not be a waste.
Also, you will probably make more money and do more worthwhile things at a job you enjoy.
Money is definitely a big factor, but I don’t think it totally dominates everything else.
It turns out that it’s not clear this is actually true—some studies have found more money leads to greater happiness up through the highest income levels examined.
The “highest income levels examined”, based on the chart on that page, appear to be $128k/yr. Since the income satisficing level (for an unattached individual) is ~$75k, this doesn’t seem like good evidence one way or another.
Evolution moves incrementally and it’s likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn’t happen to discover for whatever reason.
Maybe, but that doesn’t mean we can find them. Brain emulation and machine learning seem like the most viable approaches, and they both require tons of distributed computing power.
Sure; bet on mathematical conjectures, and collect when they are resolved one way or the other.
In understanding how intelligence works? No.
Deep Blue just brute-forces the game tree (more or less). Obviously, this is not at all how humans play chess. Deep Blue’s evaluation of a specific position is more “intelligent”, but it’s just hard-coded by the programmers. Deep Blue didn’t think of it.
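For concreteness, here is a minimal Python sketch of that kind of brute-force search, on a toy take-1-or-2 game rather than chess (real engines add alpha-beta pruning and cut the search off at a fixed depth with a huge hand-coded evaluation function):

```python
# Minimax on a toy game: players alternate taking 1 or 2 objects from a
# pile; whoever takes the last object wins. The search mechanically
# enumerates the whole game tree -- there is no "understanding" anywhere.
def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last object and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

print(minimax(3, True))  # -> -1: a pile of 3 is a lost position
print(minimax(4, True))  # ->  1: take 1, leaving your opponent a pile of 3
```

Chess is the same enumeration over a vastly larger tree; the hand-coded evaluation function that caps the search is where all the apparent “intelligence” lives.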
Watson can “read”, which is pretty cool. But:
1) It doesn’t read very well. It can’t even parse English. It just looks for concepts near each other (see the toy sketch after this list), and it turns out that the vast quantities of data override how terrible it is at reading.
2) We don’t really understand how Watson works. The output of a machine-learning algorithm is basically a black box. (“How does Watson think when it answers a question?”)
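Here is roughly what I mean by “concepts near each other”, as a toy Python sketch (my guess at the flavor of the heuristic, not Watson’s actual pipeline, which layers many more signals on top):

```python
# Score a candidate answer by how often it appears within a small
# window of the question's key terms in a big pile of text.
def proximity_score(tokens, question_terms, candidate, window=10):
    score = 0
    for i, tok in enumerate(tokens):
        if tok == candidate:
            nearby = tokens[max(0, i - window): i + window + 1]
            score += sum(term in nearby for term in question_terms)
    return score

corpus = ("toronto is the largest city in canada and lies on "
          "lake ontario in the province of ontario").split()
print(proximity_score(corpus, ["largest", "city", "canada"], "toronto"))  # -> 3
```

No parsing and no comprehension; with enough text, proximity statistics alone get you surprisingly far.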
There are impressive results which look like intelligence, which are improving incrementally over time. There is no progress towards an efficient “intelligence algorithm”, or “understanding how intelligence works”.
1) I expect the first AI with human-level thought to run 100x slower than you or me. Moore’s law will probably run out before we get AI, and these days Moore’s law is giving us more cores, not faster ones.
I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.
Hofstadter, in Gödel, Escher, Bach?
Maybe you’re one of those Cartesian dualists who thinks humans have souls that don’t exist in physical reality and that’s how they do their thinking
Not at all. Brains are complicated, not magic. But complicated is bad enough.
Would you consider the output of a regression a black box?
In the sense that we don’t understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It’s the difference between being able to make predictions and understanding what’s going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds better light on what’s happening).
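A tiny synthetic illustration of that difference (numpy only; the data and weights here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # five anonymous input features
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])  # hidden "mechanism"
y = X @ true_w + rng.normal(scale=0.1, size=1000)

# Ordinary least squares recovers coefficients that predict y very well.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))  # ~[ 2. -1. 0.5 0. 3.]
```

The recovered coefficients predict y almost perfectly, but on real data there is no true_w to compare against: the numbers summarize the data without explaining the process that generated it.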
What’s your machine learning background like, by the way?
One semester graduate course a few years ago.
It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don’t know exactly how it works.
The goal is to understand intelligence. We know that chess programs aren’t intelligent; the state space is just luckily small enough to brute force. Watson might be “intelligent”, but we don’t know. We need programs that are intelligent and that we understand.
My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence
I agree. My point is that there isn’t likely to be a simple “intelligence algorithm”. All the people like Hofstadter who’ve looked for one have been floundering for decades, and all the progress has been made by forgetting about “intelligence” and carving out smaller areas.
Is it necessary that we understand how intelligence works for us to know how to build it? This may almost be a philosophical question.
This is definitely an empirical question. I hope it will be settled “relatively soon” in the affirmative by brain emulation.
Using Moore’s law we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a million times.
You are extrapolating Moore’s law out almost as far as it’s been in existence!
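For what it’s worth, the quoted numbers do follow from a roughly 1.7-year doubling period; the objection is to trusting that trend for decades more, not to the arithmetic:

```python
import math

doubling_period = 1.7  # years per doubling -- the quote's implicit assumption
for factor in (1_000, 1_000_000):
    print(f"{factor}x takes {doubling_period * math.log2(factor):.0f} years")
# 1000x takes 17 years; 1000000x takes 34 years
```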
We could make it a million times more efficient if we trim the fat and keep the essence.
It’s nice to think that, but no one understands the brain well enough to make claims like that yet.
individuals’ capability to produce value differs GREATLY
I have heard this claim repeated many times; I would love to see some evidence for it.
(Long-time lurker; first post)
Some points from earlier chapters remain unclear to me; any insights would be appreciated!
1) Why did Neville’s Remembrall go off so vividly in Harry’s hands? Also, how are there now two Remembralls?
2) Do we have any more information/guesses about Trelawney’s prophecy that Dumbledore cut off? What starts with ‘S’?
3) Who told Harry to look for Hermione on the train? The writing is ambiguous, and it’s not really clear why McGonagall would’ve wanted them to meet. I guess other theories are worse, though.
4) What’s up with Harry’s father’s rock? Just a way for Dumbledore to encourage Harry to practice transfiguration?
5) Why are we so sure Dumbledore burned a chicken (or transfigured something)? His explanation makes total sense, and Harry’s confusion at the time is well-explained by his lack of familiarity with phoenixes. It seems more reasonable to assume almost-burned-out phoenixes look like chickens than...whatever the alternative is.
6) Who is saying “I’m not serious” in Azkaban?
7) Is the “terrible secret” of Lily’s potion book really that Snape and Lily fought about it? That just seems like a bizarre reason for a friendship to end. Were Dumbledore’s suggestions incorporated into the potion Petunia took?
8) Why did Quirrell leave Polyjuice Potion in Bellatrix’s cell, especially since the crime was meant to go unnoticed?