Could it be that pain-filled stories carry literary value exactly because (to a reader) they’re filled with tolerable pain? But I have little idea how we’d go about setting the threshold for “tolerable pain.”
He’s currently the technical director at Bitphase AI. From talking to him, it seems that his strategy is to build tools that speed up eventual FAI development/implementation, and to commercialize those tools to fund FAI research.
Page 136 (in Chapter 5 - “Queer Uses for Probability Theory”), in the first full paragraph.
Google Books is your friend.
P(H is true | H is not represented in my mind)
How would this probability be assigned?
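As a sketch of where that quantity would have to fit (the decomposition below is my addition, not part of the original question), the law of total probability gives:

    P(H is true) = P(H is true | H is represented in my mind) · P(H is represented in my mind)
                 + P(H is true | H is not represented in my mind) · P(H is not represented in my mind)

so assigning the conditional in question amounts to deciding how much of H’s probability mass lies outside one’s own hypothesis space.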
One that I sometimes forget, usually by encountering a potential path to an answer and quickly switching into short-term investigation mode:
Estimate the value of obtaining an answer and consider whether that value would be worth the time/energy investment. The hard question may sound interesting in an attention-grabbing way, but one’s level of fascination moments after hearing it may be a poor indicator of a solution’s actual value.
but cognitive dissonance is supposed to be a private thing, like going to the bathroom or popping a zit.
I see no compelling reason to care about another person’s mundane, unavoidable bodily functions. But I can see a number of compelling reasons to care about another person’s sanity.
Are you offering this as a statement of personal preference, or as a general policy? What if it becomes practically impossible for a person to give informed consent, as in cases of extreme mental disability?
I’m in favor of both the grace period and the “karma coward” option. In my own experience, anxiety about being downvoted acted as a deterrent against posting comments; reading and responding to posts by new members is relatively cheap, while missing opportunities to make them feel included in the community (and thus potentially missing out on their future contributions) seems comparatively expensive.
Would it be useful—maybe as something to be incorporated into the discussion forum—to have a (semi-)formalized system of study partners/groups? A while ago, Morendil asked if anyone would be interested in teaming up to study Jaynes’ Probability Theory: The Logic of Science. An influx of new members would bring more people who could benefit from ongoing help and motivation to study the central topics of interest here on LW. It would be nice to have a standard way of coordinating with study partners.
Count me as “having an intention to do that in the future”. Although I’m currently just an undergraduate studying math and computer science, I hope to (within 5-10 years) start doing everything I can to help with the task of FAI design.
[...] but we have no guarantee at all that our formal system contains the full empirical or quasi-empirical stuff in which we are really interested and with which we dealt in the informal theory. There is no formal criterion as to the correctness of formalization.
-- Imre Lakatos, “What Does a Mathematical Proof Prove?”
ETA: When I first read this remark, I couldn’t decide whether it was terrifying, or just a very abstract specification of a deep technical problem. I currently think it’s both of those things.
Fixed, thanks.
From the article:
“When we are in the public arena [and] we tell people we’re working on the aging process, the first thing they think is that we want to make a 100-year-old person live to be 250 -- and that’s actually the furthest from the truth,” he [Andrew Dillin, Salk Institute / Howard Hughes Medical Institute] said.
I wonder how many appearances of this idea (“making 70-80 year lives healthy would be awesome, but trying to vastly extend lifespans would be weird”) are due to public relations expediency, and how many are due to the speakers actually believing it.
Sorry for directly breaking the subjunctive here, but given the number of lurkers we seem to have, there’s probably some newcomers’ confusion to be broken as well, lest this whole exchange simply come off as bizarre and confusing to valuable future community members.
A brief explanation of “Clippy”: Clippy’s user name (and many of his/her posts) play on the notion of a paperclip maximizer—a superintelligent AI whose utility function can roughly be described as U(x) = “the total quantity of paperclips in universe-state x”. The idea was used prominently in “The True Prisoner’s Dilemma” to illustrate the implications of one solution to the prisoner’s dilemma. It’s also been used occasionally around Less Wrong as a representative element of the equivalence class of AIs that have alien/low-complexity values.
In this particular top-level post (but not in general), the paperclip maximizer is taken not to have achieved superintelligence yet—which is why Clippy is bothering to negotiate with a bunch of humans.
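For concreteness, here is a minimal sketch of that utility function. Everything below (the function name, the idea of a universe-state as a collection of labeled objects) is an illustrative assumption of mine, not anything specified in the original posts:

    # Hypothetical sketch of a paperclip maximizer's utility function.
    # "universe_state" stands in for whatever world-model the AI evaluates;
    # utility is nothing but the number of paperclips in that state.
    def paperclip_utility(universe_state):
        return sum(1 for obj in universe_state if obj == "paperclip")

    # Such an agent ranks futures purely by paperclip count: a universe of
    # thriving humans plus 10 paperclips loses to a barren one with 11.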
… wherein I’m trying to talk an escaped AI back into its box.
Yeah… good luck with that.
I wonder whether there are similar brain differences between top mathematicians and everyone else, and whether such a simple method could make people better at math.
It would be worth trying, but given that the process of doing original mathematics feels to top mathematicians like it involves a lot of vague, artistic visualization (i.e. mental operations much more complicated than the cursor-moving task), I’d put a low prior probability on simple electrical stimulation having the desired effect.
Eliezer has been outright lying about the cost of cryonics in the past.
We would find it helpful if you could provide some insight into why you think this.
I will also be in the vicinity of the Bay Area from June 12 to late September, and would be quite happy to give the study group a try. I attempted a full read of Jaynes’ book about a year ago, and realized about 70% of the way through that I didn’t have all the mathematical background necessary to fully appreciate it.
A zipped archive of all the chapters, which seemed to be missing on the pages linked in the top-level post, is available here.
Does anyone know a good IRC infrastructure that allows for quickly entering and displaying TeX formulas?
There’s a plugin for Pidgin called pidgin-latex which handles just that.
ETA: If people start using this plugin (or, more generally, if we use TeX/LaTeX in any capacity for this study group), it might occasionally be helpful to use the detexify handwritten symbol recognizer—for when you want to use a symbol and can’t quite remember the command that produces it.
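For example (hedging here, since I’m going from memory of how pidgin-latex delimits input; check the plugin’s documentation), typing a line like the following into the chat window should get rendered as a typeset formula:

    $$ P(H \mid D) = \frac{P(D \mid H) \, P(H)}{P(D)} $$

(Bayes’ theorem, which will come up constantly in Jaynes anyway.)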
“This is one reason why old scientists need to die out before new ideas can receive suitable consideration.”
So giving these scientists full ability to update their beliefs isn’t an acceptable solution?