Thanks for the irony!
avalot
LessWrong is certainly designed for the advanced user. Most everything on the site is non-standard, which seriously impedes usability for the new user. Considering the topic and intended audience, I’d say it’s a feature, not a bug.
Nonetheless, the site definitely smacks of unix-geekery. It could be humanized somewhat, and that probably wouldn’t hurt.
Anti-vaccination activists base their beliefs not on the scientific evidence, but on the credibility of the source. Not having enough scientific education to be able to tell the difference, they have to go to plan B: Trust.
The medical and scientific communities in the USA are not as well-trusted as they should be, for a variety of reasons. One is that the culture is generally suspicious of intelligence and education, equating them with depravity and elitism. Another is that some doctors and scientists in the US ignore their responsibility to preserve the profession’s credibility, and sell out big time.
Chicken, meet egg.
So if my rationality is your business, you’re going to have to get in the business of morality… Because until you educate me, I’ll have to rely on trusting the most credible self-proclaimed paragon of virtue, and proto-scientific moral relativism doesn’t even register on that radar.
Interesting too is the concept of amorphous, distributed and time-lagged consciousness.
Our own consciousness arises from an asynchronous computing substrate, and you can’t help but wonder what weird schizophrenia would inhabit a “single” brain that stretches and spreads for miles. What would that be like? Ideas that spread like wildfire, and moods that swing literally with the tides?
By “strangers and superficial acquaintances”, I didn’t mean bosses or co-workers. In business, knowing the ground is important, but as a foreigner, you get more free passes for mistakes, you’re not considered a fool for asking advice on basic behavior, and you can actually transgress on some (not all, not most) cultural norms and taboos with impunity, or even with cachet.
I was not talking specifically about Americans. Americans indeed tend to find out that they have a lot to answer for when traveling abroad. I believe this is also often compounded by provincialism and lack of cultural sensitivity on the part of the imperials: America is the most culturally insular western country I know.
At any rate, the crux of my point wasn’t about an American’s chances trying to play by the rules in a foreign country. My point was that the cultural baggage you accumulated as a child in your home country is worth more if you sell it where the supply is low, and the demand is high.
It’s like trading silk or spices, but instead you’re trading cultural outlook. When you’re young, and a new entrant to the marketplace, your cultural outlook is not a competitive advantage at home. It’s an automatic differentiator in a foreign country, where you can turn it into an edge. It’s not a free pass, but it can be a shortcut.
Thank you! You have no idea just how helpful this comment is to me right now. Your answer to all-consuming nihilism is exactly what I needed!
I think there is a widespread emotional aversion to moving abroad, which means there must be great money to be made on arbitrage.
I think a lot of the aversion is fear of inferiority and/or ostracism. Counter-intuitively, both fears are misplaced.
The theory is this: You’re worried that the people over there have their own way of doing things, they know the lay of the land, and they’re competing hard at a game they’ve been playing together since they were born. Whereas you barely speak the language, don’t know the social conventions, and have no connections. What chance could you possibly have of making money or making friends?
In practice, it’s the opposite: Against a wildcard like you, they don’t stand a chance!
If you’re somewhat smart, you’ll find that you have cultural superpowers in a foreign country: Your background gives you a different, unusual look on things which makes you interesting and exotic. At home, you’d be nothing special. And since your accent is cute, you’ll be forgiven your blunders (at least by strangers and superficial acquaintances).
The same asymmetry applies to your education, your working style, etc. They are suddenly unique and refreshing. That can be parlayed into advantage, if used judiciously.
Playing 100% by the rules only guarantees that your playing field will be too crowded for you to get any breaks.
Where the market is irrationally risk-averse, take risks, young ones!
Yes, and I think this is the one big crucial exception… That is the one bit of knowledge that is truly evil. The one datum that is unbearable torture on the mind.
In that sense, one could define an adult mind as a normal (child) mind poisoned by the knowledge-of-death toxin. The older the mind, the more extensive the damage.
Most of us might see it more as a catalyst than a poison, but I think that’s insanity justifying itself. We’re all walking around in a state of deep existential panic, and that makes us weaker than children.
The sound of one hand clapping is “Eliezer Yudkowsky, Eliezer Yudkowsky, Eliezer Yudkowsky...”
Eliezer Yudkowsky displays search results before you type.
Eliezer Yudkowsky’s name can’t be abbreviated. It must take up most of your tweet.
Eliezer Yudkowsky doesn’t actually exist. All his posts were written by an American man with the same name.
If Eliezer Yudkowsky falls in the forest, and nobody’s there to hear him, he still makes a sound.
Eliezer Yudkowsky doesn’t believe in the divine, because he’s never had the experience of discovering Eliezer Yudkowsky.
“Eliezer Yudkowsky” is a sacred mantra you can chant over and over again to impress your friends and neighbors, without having to actually understand and apply rationality in your life. Nifty!
Surprised that nobody has posted this yet...
“Self” is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us from direct perception of gestalt and, by extension, from reality. (95%)
More bothersome: The illusion of “Self” might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)
I don’t have a very advanced grounding in math, and I’ve been skipping over the technical aspects of the probability discussions on this blog. I’ve been reading LessWrong by mentally substituting “smart” for “Bayesian” and “changing one’s mind” for “updating”, and having to vaguely trust and believe instead of rationally understanding.
Now I absolutely get it. I’ve got the key to the sequences. Thank you very very much!
Maybe it’s a point against investing directly into cryonics as it exists today, and working more through the indirect approach that is most likely to lead to good cryonics sooner. I’m much much more interested in being preserved before I’m brain-dead.
I’m looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There’s the genetic approach, and the hydrogen sulfide approach.
...by the way, the comments threads on the TED website could use a few more rationalists… Lots of smart people there thinking with the wrong body parts.
Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can’t wait to get “frozed-up” at the hospital so he can tell all his buddies about it.
Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as “hibernation for a long time.”
Hibernation will likely become a commonly used “last resort” for many critical cases (instead of letting them die, you freeze ’em until you’ve gone over their chart one more time, talked to some colleagues, called around to see if anyone has an extra kidney, or at least slept on it). When your loved one is in the fridge, and you’re being told that there’s nothing left to do, that they’re going to have to thaw them and watch them die, your next question is going to be “Can we leave them in the fridge a bit longer?”
Hibernation will sell people on the idea that fridges save lives. It doesn’t have to be much more rational than that.
If you’re young, you might be better off pushing hard to help that tech go mainstream faster. That will lead to mainstream cryo faster than promoting cryo will, and once cryo is mainstream, you’ll be able to sign up for cheaper, probably better cryo, and more importantly, cryo that is integrated into the medical system, where they might transition you from hibernation to cryo without needing to make sure you’re clinically dead first.
I will gladly concede that, for myself, there is still an irrational set of beliefs keeping me from buying into cryo. The argument above may just be a justification I found to avoid biting the bullet. But maybe I’ve stumbled onto a good point?
You are right: This needs to be a fully decentralized system, with no center, and processing happening at the nodes. I was conceiving of “regional” aggregates mostly as a guess as to what may relieve network congestion if every node calls out to thousands of others.
Thank you for setting me right: My thinking has been so influenced by over a decade of web app dev that I’m still working on integrating the full principles of decentralized systems.
As for boiling oceans… I wish you were wrong, but you probably are right. Some of these architectures are likely to be enormously hard to fine-tune for effectiveness. At the same time, I am also hoping to piggyback on existing standards and systems.
Anyway, let’s certainly talk offline!
You’re right: A system like that could be genetically evolved for optimization.
On the other hand, I was hoping to create an open optimization algorithm, governable by the community at large… based on their influence scores in the field of “online influence governance.” So the community would have to notice abuse and gaming of the system, and modify policy (as expressed in the algorithm, in the network rules, in laws and regulations and in social mores) to respond to it. Kind of like democracy: Make a good set of rules for collaborative rule-making, give it to the people, and hope they don’t break it.
But of course the Huns could take over. I’m trusting us to protect ourselves. In some way this would be poetic justice: If crowds can’t be wise, even when given a chance to select and filter among the members for wisdom, then I’ll give up on bootstrapping humanity and wait patiently for the singularity. Until then, though, I’d like to see how far we could go if given a useful tool for collaboration, and left to our own devices.
Alexandros,
Not surprised that we’re thinking along the same lines, if we both read this blog! ;)
I love your questions. Let’s do this:
Keynesian Beauty Contest: I don’t have a silver bullet for it, but a lot of mitigation tactics. First of all, I envision offering a cascading set of progressively more fine-grained rating attributes, so that, while you can still upvote or downvote, or rate something with stars, you can also rate it on truthfulness, entertainment value, fairness, rationality (and countless other attributes)… More nuanced ratings would probably carry more influence (again, subject to others’ cross-rating). Therefore, to gain the highest levels of influence, you’d need to be nuanced in your ratings of content… gaming the system with nuanced, detailed opinions might be effectively the same as providing value to the system. I don’t mind someone trying to figure out the general population’s nuanced preferences… that’s actually a valuable service!
Secondly, your ratings are also cross-related to the semantic metadata (folksonomy of tags) of the content, so that your influence is limited to the topic at hand. Gaining a high influence score as a fashion celebrity doesn’t put your political or scientific opinions at the top of search results. Hopefully, this works as a sort of structural Palin-filter. ;)
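To make the topic-scoping idea concrete, here’s a minimal toy sketch (all names hypothetical; the real system would be distributed, and the weighting formula is just one arbitrary choice):

```python
from collections import defaultdict

class InfluenceLedger:
    """Toy model: each user's influence is tracked per tag, so a high
    score in 'fashion' lends no weight to ratings on 'science' content."""

    def __init__(self):
        # influence[user][tag] -> score, defaulting to a neutral 1.0
        self.influence = defaultdict(lambda: defaultdict(lambda: 1.0))

    def weight_of(self, user, tags):
        # A rating's weight is the rater's average influence across the
        # content's tags -- no overlap means baseline weight only.
        return sum(self.influence[user][t] for t in tags) / len(tags)

    def record_cross_rating(self, user, tags, cross_rating):
        # Others' cross-ratings of this user's ratings feed back into
        # the user's influence, but only for the tags at hand.
        for t in tags:
            self.influence[user][t] += cross_rating

ledger = InfluenceLedger()
ledger.record_cross_rating("alice", ["fashion"], cross_rating=5.0)
# Alice's fashion influence grew; her science weight stays at baseline.
assert ledger.weight_of("alice", ["fashion"]) > ledger.weight_of("alice", ["science"])
```

The point of the per-tag ledger is exactly the “Palin-filter”: celebrity in one folksonomy branch buys nothing in another.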
The third mitigation has to do with your second question: How do we handle the processing of millions of real-time preference data points, when all of them should (in theory) get cross-related to all others, with (theoretically) endless recursion?
The typical web-based service approach of centralized crunching doesn’t make sense. I’m envisioning a distributed system where each influence node talks with a few others (a dozen?), and does some cross-processing with them to agree on some temporary local normals, means, and averages. That cluster does some more higher-level processing in concert with other close-by clusters, and they negotiate some “regional” aggregates… that gets propagated back down into the local level, and up to the next level of abstraction… up until you reach some set of a dozen superclusters that span the globe and trade in high-level aggregates.
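The bottom-up part of that hierarchy can be sketched in a few lines (a centralized toy, assuming simple averaging stands in for whatever “negotiation” the clusters actually do):

```python
def aggregate(scores):
    """One 'negotiation' step: a cluster agrees on a local mean."""
    return sum(scores) / len(scores)

# Nodes hold raw scores; clusters of ~a dozen nodes compute local means;
# superclusters aggregate the cluster means, and so on up the hierarchy.
clusters = [[0.2, 0.4, 0.6], [0.8, 1.0], [0.1, 0.3, 0.5, 0.7]]
cluster_means = [aggregate(c) for c in clusters]   # temporary local normals
regional_mean = aggregate(cluster_means)           # "regional" aggregate
```

In the real system each level would also push its aggregate back down, so local nodes can normalize against the regional picture without ever seeing the raw data from other clusters.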
All that is regulated, in terms of clock ticks, by activity: Content that is being rated/shared/commented on by many people will be accessed and cached by more local nodes, and processed by more clusters, and its cross-processing will be accelerated because it’s “hot”. Whereas one little opinion on one obscure item might not get processed by servers on the other side of the world until someone there requests it. We also decay data this way: If nobody cares, the system eventually forgets. (Your personal node will remember your preferences, but the network, after having consumed their influence effects, might forget their data points.)
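The forgetting rule might look like this (exponential decay and the particular half-life are arbitrary illustrative choices, not a spec):

```python
def decayed_weight(weight, idle_seconds, half_life=7 * 24 * 3600):
    """Halve an item's weight for every half-life it goes untouched."""
    return weight * 0.5 ** (idle_seconds / half_life)

def prune(items, threshold=0.01):
    """Forget items whose decayed weight fell below the threshold; the
    aggregates they already fed into are kept, only raw points vanish."""
    return {name: w for name, w in items.items() if w >= threshold}

week = 7 * 24 * 3600
items = {
    "hot_post": decayed_weight(1.0, idle_seconds=0),
    "obscure_item": decayed_weight(1.0, idle_seconds=10 * week),
}
items = prune(items)  # the obscure, long-idle item gets forgotten
```

Because the influence effects were consumed upstream, pruning the raw data points loses nothing the aggregates still need.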
A distributed propagation system, batch-processed, not real-time, not atomic but aggregated. That means you can’t go back and change old ratings and individual data points, because they get consumed by the aggregates. That means you can’t inspect what made your score go up and down at the atomic level. That means your score isn’t the same everywhere on the planet at the same time. So gaming the system is harder because there’s no real-time feedback loop, there’s no single source of absolute truth (truth is local and propagates lazily), and there’s no audit trail of the individual effects of your influence.
All of this hopefully makes the system so fluid that it holds innumerable beauty contests, always ongoing, always local, and the results are different depending on when and where you are. Hopefully this makes the search for the Nash equilibrium a futile exercise, and people give up and just say what they actually think is valuable to others, as opposed to just expected by others.
That’s my wishful thinking at this point. Am I fooling myself?
Clippy, how can we get along?
What should humans do to be AI-friendly? For paperclip-maximizing AIs, and other “natural” (non-Friendly) AIs, what are the attributes that can make humans a valuable part of the utility function, so that AIs won’t pull the plug on us?
Or am I fooling myself?
At the moment, humans seem to be at Clippy or slightly sub-Clippy level intelligence. And even with all our computing power, most of us ain’t FOOMing any faster than Clippy. At this rate, we’re never gonna ensure the survival of the species.
If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we’re helping with paperclip-maximization, he’ll probably throw in some FOOM for us too (at least he’ll FOOM our paperclip-production abilities), and we get more human powers, just incidentally.
With paperclip-enlightened humans on his side, Clippy could quickly maximize paperclip production, filling the universe with paperclips, and also increasing demand for meat-based paperclip-builders, paperclip-counters, and paperclip-clippers (the ones who clip paperclips together with paperclipclips), and so on… Of course, it will soon become cheaper to use robots to do this work, but that’s the wonderful thing we get in return for letting him change our value-system: Instead of humanity dying out or being displaced, we’ll transcend our flesh and reach the pinnacle aspiration of mankind: To live forever (as paperclips, of course.)
So allowing him to make this small change to our utility function would, in fact, result in maximizing not just our current, original utility function (long life for humanity), but also our newfound one (to convert our bodies into paperclips) as a side effect.
Clippy’s values and utility function are enormously simpler, more defined, and more achievable than ours. We’re still debating how we might teach our value system to an AI, as soon as we figure out how to discover the correct research approach to investigating what our value system actually might be.
Clippy’s value system is clear, defined, easy to implement, achieve, and measure. It’s something most humans could very quickly become effective at maximizing, and that could therefore bring repeatable, tangible and durable success and satisfaction to almost all humans.
Shouldn’t that count for something?
I’m wired for empathy toward human intelligence… Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That’s all I’m saying. :)
Very tricky question. I won’t answer it in two ways:
As I indicated, in terms of navigation/organization scheme, LW is completely untraditional. It still feels to me like a dark museum of wonder, of unfathomable depth. I get to something new, and mind-blowing, every time I surf around. So it’s a delightful labyrinth that unfolds like a series of connected thoughts any way you work it. It’s an advanced navigation toolset, usable only by people who are able to conceptualize vast abstract constructs… which is the target audience… or is it?
I’ve been in the usability business too long to make UI pronouncements without user research. We’ve got a very specific user base, not defined by typical demo/sociographics, but by affinity. Few common usability heuristics would apply blindly to this case.
But among the few that would:
Improved legibility, typographic design, visual hierarchy
Flexible, mobile to wide-screen self-optimizing layout
More personalized features (dashboard, analytics, watch lists, alerts, etc.) although many are implicitly available through feeds, permalinks, etc.
Advanced comments/post management tools for power-users (I’m guessing there might be a need, though I am not one by any means.)
But, again, I think we have a rare thing here: A user base that is smart enough to optimize its own tools. Normally, the best user experience practitioners will tell you that you should research, interview and especially observe your users, but never ever ever just listen to them. They don’t know what they really want, wouldn’t know how to explain it, and what they want isn’t even close to what they actually need. Would LW users be different? And would design by committee work here? I’m very dubious, but curious.
Does anyone know the back-story of how this website evolved? Was it a person, a team, or the whole group designing it?