They also have arguments over which is more erotic, 50 years of the best sex ever or 3^^^3 accidental nipple brushes.
whpearson
I’d like to vote this up as I agree with lots of the points raised, but I am not comfortable with the personal nature of this article. I’d much rather the bits personal to Eliezer be sent via email.
Probably some strange drama avoidance thing on my part. On the other hand I’m not sure Eliezer would have a problem writing a piece like this about someone else.
I’ve thought to myself that I have read one too many fantasy books as a kid, so the partying metaphor hits home.
I’m not so much into stealing food, but nihilistic procrastination, yeah, I’ve been there. I try to get myself out of it by asking whether I would expect people of the opposite sex to find me attractive doing it.
In short, whether I am being awesome. Which nihilistic procrastination is not, so it should not be done. Getting to that thought tends to be hard, because I tend to do stuff that keeps my mind occupied.
Here is a better image for the section.
I’d try to get proficient with the tools before your body degrades too much, configuring them how you would want them (perhaps getting to know the internals if they are open source) while it is still easy, to make the transition less painful. I’d also try to develop a very strong ability to mentally model code and do maths, rather than using pen and paper for notes or a laptop.
It might also be worth keeping an eye on neural plasticity research, to see if there is anything that can help your brain reconfigure after lots of its motor functions become redundant.
but it should be equally obvious that such behavior is less than rational in our modern era of contraception: sex simply doesn’t have the same dangers that it did in the ancestral environment.
Is getting pregnant really the only danger? Sex can cause the release of mind-altering chemicals that can cause you to pair-bond (women more so than men). This can have a dramatic effect on your life if it is with the wrong person.
Notice: I am not Professor Quirrell in real life.
And that is exactly what Professor Quirrell would say!
I’m doing an MSc in Computer Forensics and have stumbled into doing a large project using Bayesian reasoning to guess at what data is (machine code, ASCII, C code, HTML, etc.). This has caused me to think again about what problems you encounter when trying to actually apply Bayesian reasoning to large problems.
I’ll probably cover this in my write-up; are people interested in it? The maths won’t be anything special, but a concrete problem might show the difficulties better than abstract reasoning.
It also could serve as a precursor to some vaguely AI-ish topics I am interested in. More insect and simple creature stuff than full human level though.
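To make the project idea concrete, here is a minimal sketch of the kind of classifier described, under the simplifying assumptions that byte frequencies alone distinguish the classes, bytes are independent given the class (naive Bayes), and priors are uniform. All the training data and class names below are made up for illustration; a real forensics tool would train on representative corpora.

```python
import math
from collections import Counter

def train(labelled):
    """labelled: dict mapping class name -> bytes of training data.
    Returns per-class log-probability tables over byte values 0..255,
    with add-one (Laplace) smoothing so unseen bytes don't zero out a class."""
    tables = {}
    for cls, blob in labelled.items():
        counts = Counter(blob)  # iterating bytes yields ints 0..255
        total = len(blob) + 256
        tables[cls] = [math.log((counts.get(b, 0) + 1) / total) for b in range(256)]
    return tables

def classify(tables, blob):
    """Naive Bayes: pick the class maximising the sum of per-byte
    log-likelihoods (uniform prior assumed, so priors cancel)."""
    return max(tables, key=lambda cls: sum(tables[cls][b] for b in blob))

# Toy illustration with made-up training blobs:
tables = train({
    "ascii": b"the quick brown fox jumps over the lazy dog " * 20,
    "binary": bytes(range(256)) * 5,
})
print(classify(tables, b"hello world this is plain text"))  # → ascii
```

Working in log space avoids underflow when summing over many bytes, which is one of the practical issues that shows up as soon as Bayesian reasoning meets real data sizes.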
She asked my advice, on Facebook, on how to do creative work on AI safety. I gave her advice as best I could.
She seemed earnest and nice. I am sorry for your loss.
I wonder how well it would work on questions like:
“Does homeopathy cure cancer?”
Or, in general, cases where a minority know that the majority won’t side with them, but the majority might not know how many people believe the fringe view.
From an infosec point of view, you tend to rely on responsible disclosure. That is, you tell the people who will be most affected or who can solve the problem for others; they create countermeasures, and then you release those countermeasures to everyone else (which gives away the vulnerability as well), who should be in a position to quickly update/patch.
Otherwise you are relying on security through obscurity: people may be vulnerable and not know it.
There doesn’t seem to be a similar pipeline for non-computer security threats.
I think it is interesting to look at the information efficiency. That is, we can ask how much better it is at absorbing information than human civilization: if we gave it and humanity a new problem, how many trials/tests would we expect it to need compared to humans?
I’m guessing serious Go players (who may have contributed to human knowledge) play an average of 2 games per day, over a 60-year lifespan. That gives 44k games per person over their lifetime. There are around 60 million Go players today; I’m assuming 120 million over the lifespan of humanity, because of the rapid growth in our population. This gives roughly 5.25 × 10^12 games.
AlphaGo played 5 million games against itself, so it is roughly a million times more efficient than historical civilization (at learning Go). These numbers are very much back-of-the-envelope; if anyone knows more about the likely number of games an average person plays, I would be interested.
Caveats:
If we were both starting to learn a game from this point in history, we would expect humans to be more efficient at information extraction than we were historically, as in the past communication between people was poor and knowledge might have had to be re-learnt many times.
Go might be an unusually bad game for transferring knowledge gained between people. We could look at people learning go and see how much they improved by playing the game vs by reading strategies/openings. We could look at other important tasks (like physics and programming) and see how much people improved by doing the activity vs reading about the activity.
Edit: So we should be really worried if a new variant of AlphaGo comes out that only needs 44k games to get as good as it did.
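The back-of-the-envelope numbers above can be checked in a few lines. Every input here is one of the comment's own rough assumptions (2 games/day, 60-year span, 120 million players ever, 5 million AlphaGo self-play games), not a measured figure:

```python
# Reproduce the comment's estimate of total human Go games vs AlphaGo's.
games_per_day = 2
years = 60
games_per_lifetime = games_per_day * 365 * years   # 43,800 ≈ the "44k" figure
players_ever = 120_000_000                         # assumed total players in history
human_games = players_ever * games_per_lifetime    # total human games ever
alphago_games = 5_000_000                          # AlphaGo self-play games

print(f"{human_games:.2e}")                        # 5.26e+12, near the ~5.25e12 quoted
print(round(human_games / alphago_games))          # 1051200: the "million times" factor
```

So the "roughly a million times more efficient" claim follows directly from the stated assumptions; the conclusion is only as good as the guessed inputs.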
I’m not sure why you expect this… Go is easily simulatable. We find it hard to simulate even simple quantum systems like atoms well, let alone aggregate materials.
I noticed that the links don’t seem to be nofollow. I think Hacker News makes the low-karma ones nofollow to reduce the incentive to spam.
It appears that comments are already nofollow.
He is referring to the link to Godwin’s Law in the post. From the wiki:
For example, there is a tradition in many newsgroups and other Internet discussion forums that once such a comparison is made, the thread is finished and whoever mentioned the Nazis has automatically “lost” whatever debate was in progress.
Pigeons can solve the Monty Hall Dilemma (MHD)?
A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.
Behind a paywall
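The paper itself is paywalled, but the "optimal strategy" it refers to (always switch) is easy to verify by simulation. A minimal Monte Carlo sketch of the standard three-door game:

```python
import random

def monty_hall_trial(switch, rng):
    """One round of the standard Monty Hall game.
    Returns True if the final pick wins the prize."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)  # fixed seed for reproducibility
n = 100_000
stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
swap = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
print(f"stay ≈ {stay:.3f}, switch ≈ {swap:.3f}")  # ≈ 0.333 vs 0.667
```

Switching wins about two thirds of the time, which is the behaviour the pigeons converged on across trials.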
Cryonics is hard to argue against, as it partially involves magic (in the sufficiently-advanced-technology sense) and it involves something we don’t fully understand: how information is stored in the brain. So the lack of technical criticism might be a rare instance of people shutting up about things they don’t understand.
This is the conference. I found it through the Guardian article.
Off-the-cuff thoughts: I see no reason at all why they couldn’t have four legs and two arms; that is, the four-limbed body plan seems like a choice that was baked in early. I can see why intelligent aquatic life might have a harder time developing heavy industry due to lack of oxygen, but I don’t think it is impossible that there are sentient squid out there. I’ll read some more.
ETA: In a way it doesn’t matter what the aliens originally looked like; I doubt we will see squid in a can coming our way, for much the same reason that a monkey in a can doesn’t seem realistic. The TL;DR of that link is that creating a living, self-sustaining biosystem requires too much mass and complexity, and having ultra-advanced technology implies a singularity/uploads type scenario, so we wouldn’t see their original bodies.
As to psychology, all bets are off if they have done significant self-modification.
As a data point for why this might be occurring: I may be an outlier, but I’ve not had much luck getting replies or useful dialogue from x-risk-related organisations in response to my attempts at communication.
My expectation, currently, is that if I apply I won’t get a response, and I will have wasted my time composing an application. I won’t get any more information than I previously had.
If this isn’t just me, you might want to encourage organisations to be more communicative.
I’ve been wondering whether the “can’t get crap done” malaise of the LessWrong community is based in part on its format and feedback system.
I am part of another community (a hackspace) with a similar makeup of members (geeky, computery people), and stuff gets done. Hackdays are run, workshops are organised, code is altered, things are created. “What are you working on?” is a common question.
The Thingiverse and GitHub communities are online ones where people do stuff.
So what is the difference? LessWrong is a talking shop: you are given positive feedback for making a good post or comment. It will attract people who enjoy and are good at discussion. You also might get evaporative cooling, where people who like action go elsewhere.
What makes GitHub or Thingiverse different? The base unit of thing that might get people interested in you is a project: something you have created or are in the process of creating.
If anyone is interested in making a community that rewards doing projects in a rationalist frame (maximum effect for the effort), get in contact. I’m currently working my way there very slowly, through an indirect path.
Edit: See here for details http://groups.google.com/group/group-xyz