The Seven Secular Sermons guy. Long-time ingroup member. Current occupation: applied AI in media. Divorced dad of three in Leipzig, Germany.
chaosmage
What cognitive biases feel like from the inside
My simple hack for increased alertness and improved cognitive functioning: very bright light
I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too
A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk
Talking to yourself: A useful thinking tool that seems understudied and underdiscussed
The biological function of love for non-kin is to gain the trust of people we cannot deceive
Thanks, but it appears we’re both wrong. Here is a nice intro article that gives proper numbers on this very subject and concludes supernovae aren’t a life-forbidding problem even in the galactic center.
But high density of stars might lead to planetary orbit perturbations, which could be one. It appears the galaxy is a bit complicated.
I formally proposed to the love of my life, and she said yes.
Caelum est Conterrens: I frankly don’t see how this is a horror story
I’m no economist, but as a former citizen of that former country, this is what I could see.
There was a divide between basic goods and services and luxury ones. Basic ones got subsidies and were sold pretty much at cost; luxury ones got taxed extra to finance those subsidies.
The (practically entirely state-owned) industries that provided the basic type of goods and services were making very little profit and had no real incentive to improve their products, except to produce them more cheaply and in greater numbers. Nobody was doing comparison shopping on those, after all. (Products from imperialist countries were expected to be better in every way, but that would often be explained away as capitalist exploitation, not taken as evidence that homemade ones could be better.) So for example, the country’s standard (and almost only) car did not see significant improvements for decades, although the manufacturer had many ideas for new models. The old model had been defined as sufficient, so improving it was considered wasteful, and all such plans were rejected by the economic planners.
The basic goods were of course popular, and due to their low price, demand was frequently not met. People would chance upon a shop that happened to have gotten a shipment of something rare and stand in line for hours to buy as much of it as they were permitted, to trade later. In the case of the (Trabant) car, you could register to buy one at a seriously discounted price via an ever-growing waiting list that, near the end, might have you wait more than 15 years. Of course many who got a car this way sold it afterwards, pocketing the premium the buyer paid for not waiting.
Arguably more importantly, money was a lot better at getting you basic goods than luxury ones. So people tended to use money mostly for basic goods and services, and would naturally compare a luxury purchase’s value against those. When a (luxury) color TV costs ten times the price of a (basic) black-and-white TV, it feels like paying nine basic TVs just to add color to the one you use. Empirically, people often simply saved their money and thus kept it out of circulation.
Housing was a mess, too. Rents were decreed to be very small. So there was no profit in renting out apartments, which again created a shortage of supply. (Private landownership was considered bourgeois and thus not subsidized.) It got so bad that many young couples decided to have a child as early as possible, because that would help their application for a flat of their own, letting them move out of their parents’ home. And of course most buildings fell into disrepair; after all, there was no incentive to invest in providing higher quality for renters. This demonstrates again that providing a basic good or service meant you would always have demand, but that demand would not benefit you much.
The production of luxury goods went better, partly because these were often exported for hard currency. The GDR had some industries that were fairly skilled at stealing capitalist innovations and producing products that incorporated them, for sale at fairly competitive prices. But artificially low prices and subsidies for certain goods pretty much ensured that most domestic consumption never benefited from that skill.
Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons.
Nick Bostrom’s TED talk on Superintelligence is now online
To put it bluntly, I think it made me smarter. Not more intelligent in the IQ sense (I remain between 125 and 130), but quicker to notice confusion, see contradictions, and avoid dead ends. So I waste a little less time on predictably fruitless endeavors, my thinking is much more consistent (after a lot of house-cleaning), and I have clear priorities that help me decide well even when pressed for time. These changes have also made me more aware of mistakes others make, and more certain in rejecting them. I have had to learn to point out other people’s mistakes more nicely and effectively, but I’m nowhere near good enough at that yet.
I learned a lot about artificial intelligence and machine learning, and am now introducing machine learning methods into my work environment.
I met a bunch of great people, especially at Secular Solstices.
I got a felt impression of how huge the smarter-than-me population actually is, and how sharply limited my abilities are. This helped me start an earnest search for the best task I can do at my level of ability. Similarly, I got an acute sense of how people at different levels of cognitive ability see the world entirely differently—independently of cultural and economic factors, just depending on the quantity and quality of interpretations and implications they’re able to draw from their perception.
I got rid of a lot of false beliefs and a couple of people who continue to hold them. This freed up lots of attentional resources, which I partly reinvested into better beliefs and better people. From the latter I learned more good skills, such as standing up for my needs and empathetic communication.
The rest of the freed-up attention largely went into a huge art project that is incredibly satisfying.
I got better at modeling rational thought processes in other people, which helps in negotiations and got me a quite comfortable salary. I’ve come to rely on this quite a bit, and like to think it makes me an effective communicator. But at the same time, people for whom this sense fails (whom I cannot model as rational agents) feel unsettling to me, and I increasingly try to avoid them.
Perhaps most of all, I appear to make fewer stupid mistakes. The absence of something is always hard to notice, but it feels like I’m paying some kind of stupidity tax all the time, and that tax rate has gone down. Not an effect you notice after a day or two, but over the years, the benefits accumulate.
Why “AI alignment” would better be renamed into “Artificial Intention research”
Functional silence: communication that minimizes change of receiver’s beliefs
These feel like stating the obvious, but maybe outside LW they wouldn’t:
Expect the judgement of evidently rational players (such as Peter Thiel, Elon Musk, and probably many others I’m unaware of) to be extra trustworthy. Do what they’re suggesting, or what becomes profitable once their suggestions have been implemented by others. (For example, Elon Musk has said fully electric supersonic VTOL jets are possible. He’s rational and knows a lot about electric propulsion and aerospace, so this heuristic says to believe him even though he hasn’t demonstrated it. So when looking at the aircraft propulsion business, favor companies that are at least looking into electric propulsion.)
Expect economic upheaval created by self-driving cars and other autonomous drones in the next ten years. Avoid investing in any business insufficiently aware of, or insufficiently prepared for, that shift. Specifically, avoid brick-and-mortar retail with (narrow ranges of) products that could be shipped via drones.
Expect the education bubble to burst at some point. Avoid investing in business that would suffer in that case (e.g. real estate near universities), and invest in companies that benefit from it (e.g. online education providers).
Maybe if you make a detailed scenario study of a world where all of these are true, you can find more indirect opportunities. All those drones should create a booming market for ultra-low power radar devices, for example. But that’s hardly an LW-specific idea. I think rationality mostly helps you reduce uncertainty about probabilities, but not necessarily in any particular direction. I suspect its main value might be that with greater certainty about how things that haven’t happened yet will eventually turn out, you can more confidently think another step ahead and take opportunities that other people aren’t sure will even arise.
A different argument against Universal Basic Income
I was interviewed about AI risk for a popular radio show. Not a highbrow forum, but I managed to insert some LW talking points in between their talk of Terminators and their jokes about having to be nice to your coffee machine.
I’m actually grateful for having heard about that Basilisk story, because it helped me see Eliezer Yudkowsky is actually human. This may seem stupid, but for quite a while, I idealized him to an unhealthy degree. Now he’s still my favorite writer in the history of ever and I trust his judgement way over my own, but I’m able (with some System 2 effort) to disagree with him on specific points.
I don’t think I’m entirely alone in this, either. With the plethora of saints and gurus who are about, it does seem evident that human (especially male) psychology has a “mindless follower switch” that just suspends all doubt about the judgement of agents who are beyond some threshold of perceived competence.
Of course such a switch makes a lot of sense from an evolutionary perspective, but it is still a fallible heuristic, and I’m glad to have become aware of it—and the Basilisk helped me get there. So thanks Roko!
I don’t think filters have to be sequential—some could be alternatives to each other, and they might interact. Consider the following.
Each supernova sterilizes everything for several light-years around it. This galaxy has three supernovas per century, and it used to have more. Earth has gone unsterilized for 3.6 billion years, i.e. each of the last (very roughly) 100 million supernovas was far enough away not to kill it.
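A quick sanity check on that count (a minimal sketch; it assumes the three-per-century rate held constant over Earth’s history, which it didn’t, since the early galaxy had more):

```python
# Rough count of galaxy-wide supernovas during Earth's unsterilized history.
# Assumes a constant rate of 3 per century, the figure quoted above.
rate_per_year = 3 / 100            # supernovas per year, galaxy-wide
unsterilized_years = 3.6e9         # years Earth has gone unsterilized

total_supernovas = rate_per_year * unsterilized_years
print(f"{total_supernovas:.2e}")   # 1.08e+08, i.e. very roughly 100 million
```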
That’s easy to do for a planet somewhere on the outer rim, but the ones out there seem to lack heavy elements. If single-celled, multi-celled, even intelligent life were easy given a couple billion years of evolution, you still couldn’t go to space on a periodic table that didn’t contain any metals.
So planets in areas with lots of supernova activity (i.e. high density of stars) could simply never have enough time between sterilizations to achieve spacefaring civilization, while planets in areas with low density of stars/supernovas haven’t accumulated enough heavy elements to build industry and spaceships. Neither effect prohibits everything, but together they’re a great filter.
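To make the interaction concrete, here’s a toy numerical sketch. All the functional forms and constants below are invented for illustration, not real astrophysics: suppose the mean quiet window between nearby sterilizations grows as you move outward from the galactic center, while metallicity drops, and a spacefaring civilization needs both a long enough quiet window and enough metals.

```python
import math

# Toy model of two interacting filters across galactocentric radius r
# (arbitrary units). Every constant here is made up for illustration.

def mean_quiet_window(r):
    """Mean time (Gyr) between nearby sterilizations; grows outward
    as stellar density, and with it supernova frequency, drops."""
    return 0.05 * math.exp(r)

def metallicity(r):
    """Heavy-element abundance (arbitrary units), dropping toward the rim."""
    return math.exp(-r)

NEEDED_WINDOW = 3.6   # quiet Gyr needed to evolve a spacefaring species
NEEDED_METALS = 0.05  # minimum metallicity for industry and spaceships

for r in [0.5 * i for i in range(1, 11)]:
    ok = (mean_quiet_window(r) >= NEEDED_WINDOW
          and metallicity(r) >= NEEDED_METALS)
    print(f"r={r:.1f}  window={mean_quiet_window(r):7.2f} Gyr  "
          f"metals={metallicity(r):.3f}  spacefaring possible: {ok}")
```

With these made-up numbers, the quiet-window condition needs r above about 4.3 while the metallicity condition needs r below about 3.0, so no radius passes both: neither filter alone prohibits everything, but together they do. Shift the constants a little and a narrow habitable annulus opens up, which is the point: the strength of the combined filter depends on how the two factors interact.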
There could be other combinations of prohibitive factors, where passing one makes passing the other more difficult. Maybe you need to be a carnivore in order to evolve theory of mind, but you also need to be a herbivore in order to evolve agriculture and an exponential food surplus. Or maybe you need tectonic plates to avoid stratification of elements, but you also need a very stable orbit around your star, and those two conditions usually rule each other out. I don’t know. It just seems that a practically linear model of sequential filters, where filters basically don’t interact with each other, is entirely too simplistic to merit confidence.
In a few years, we’ll have a much clearer picture of the chemical makeup of the closest few hundred exoplanets, and that’ll cut the number of possible explanations of Fermi’s Paradox down to a somewhat manageable size. Until then, this discussion is unlikely to lead anywhere.