FWIW I have come to similar conclusions along similar lines. I’ve said that I think human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone. Rat intelligence requires a long list of neural structures fine-tuned by natural selection, over tens of millions of years, to enable the rat to do very specific survival behaviors right out of the womb. How many individual fine-tuned behaviors? Hundreds? Thousands? Hard to say. Human intelligence, by contrast, cannot possibly be this fine-tuned, because the same machinery lets us learn and predict almost arbitrarily different* domains.
I also think that recent results in machine learning have essentially proven the conjecture that moar compute regularly and reliably leads to moar performance, all things being equal. The human neocortical algorithm probably wouldn’t work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work. In other words, the neocortex needs trillions of synapses to do what it does for much the same reason that GPT-3 can do things that GPT-2 can’t. Size matters, at least for this particular class of architectures.
*I think this is actually wrong—I don’t think we can learn arbitrary domains, not even close. Humans are not general. Yann LeCun has repeatedly said this and I’m inclined to trust him. But I think that the human intelligence architecture might be general. It’s just that natural selection stopped seeing net fitness advantage at the current brain size.
I grew up in warm climates and tend to suffer a lot in cold weather. I moved to a colder climate a few years ago and discovered scarves. Wearing scarves eliminated 90% of this suffering. Scarves are not exactly a bold and novel invention, but people from warm climates may underestimate their power.
Scaling up testing seems to be critical. With easy, fast and ubiquitous testing, huge numbers of individuals could be tested as a matter of routine, and infected people could begin self-isolating before showing symptoms. With truly adequate testing policies, the goal of true “containment” could potentially be achieved, without the need to resort to complete economic lockdown, which causes its own devastating consequences in the long term.
Cheap, fast, free testing, possibly with an incentive to get tested regularly even if you don’t feel sick, could move us beyond flattening the curve and into actual containment.
Even a test with relatively poor accuracy helps, in terms of flattening the curve, provided it is widely distributed.
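To make the "imperfect test still helps" point concrete, here's a toy back-of-the-envelope sketch. All the numbers and the `effective_r` formula are my own illustrative assumptions, not from any epidemiological source:

```python
# Toy model: even a mediocre test cuts transmission if coverage is wide.
# All parameters below are made-up, illustrative values.
def effective_r(r0, coverage, sensitivity, isolation_effect):
    """Scale r0 by the fraction of onward transmission removed when an
    infectious person gets tested (coverage), is correctly flagged
    (sensitivity), and isolates early enough to prevent
    isolation_effect of their would-be transmissions."""
    removed = coverage * sensitivity * isolation_effect
    return r0 * (1 - removed)

# A test with only 70% sensitivity, at 80% coverage, isolating in time
# to prevent half of onward transmission:
r = effective_r(r0=2.5, coverage=0.8, sensitivity=0.7, isolation_effect=0.5)
print(round(r, 2))  # 2.5 * (1 - 0.28) = 1.8
```

Under these assumed numbers the reproduction number drops from 2.5 to 1.8 despite the poor test: coverage does most of the work.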
So I might phrase this as a set of questions:
Should I get tested, if testing is available?
How do we best institute wide-scale testing?
How do we most quickly enact wide-scale testing?
As my brother pointed out to me, arguments are not won in real time. Give them information in packets and calmly deal with objections as they come up, then disengage and let them process.
Perhaps there’s some obvious way in which I’m misunderstanding, but if 10% of people contract the virus over a shortish time frame then won’t essentially everyone contract it eventually? Why would it reach a 10% penetration and then stop? Isn’t this like asking what happens if 10% of people contract influenza? Maybe in a given year your odds of getting the flu are X% but your odds of getting it once in 10 years are roughly 10*X%. Am I missing something that implies the virus will be corralled and gotten under control at a certain point?
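For what it's worth, the "roughly 10*X%" intuition is the small-probability approximation of compounding independent yearly risks. A quick sanity check with illustrative numbers:

```python
# If your chance of infection is x per independent year, the chance of
# at least one infection over n years is 1 - (1 - x)**n, which is
# close to n*x only while n*x is small.
def at_least_once(x, n):
    return 1 - (1 - x) ** n

print(at_least_once(0.01, 10))  # ~0.0956, close to 10 * 0.01 = 0.10
print(at_least_once(0.10, 10))  # ~0.651, well below 10 * 0.10 = 1.0
```

So the linear approximation holds at low yearly risk, but the cumulative probability approaches 1 rather than exceeding it, which is consistent with "essentially everyone eventually" if exposure keeps repeating.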
Fantastic post; I’m still processing.
One bite-sized thought that occurs to me is that maybe this coupling of the Player and the Character is one of the many things accomplished by dreaming. The mind-system confabulates bizarre and complex scenarios, drawn in some sense from the distribution of possible but not highly probable sensory experiences. The Player provides an emotional reaction to these scenarios—you’re naked in school, you feel horrifying levels of embarrassment in the dream, and the Character learns to avoid situations like this one without ever having to directly experience it.
I think that dreaming does this sort of thing in a general way, by simulating scenarios and using those simulations to propagate learning through the hierarchy, but in particular it would seem that viewing the mind in terms of Player/Character gives you a unique closed-loop situation that really bootstraps the ability of the Character to intuitively understand the Player’s wishes.
I would love to see an answer to or discussion of this question. The premise of the OP that large companies would be better off if split into much much smaller companies is a shocking and bold claim. If conglomeration and growth of large firms were a purely Molochian and net-negative proposition, then the world would look different than it does.
I’m reminded of the post Purchase Fuzzies and Utilons Separately.
The actual human motivation and decision system operates by something like “expected valence” where “valence” is determined by some complex and largely unconscious calculation. When you start asking questions about “meaning” it’s very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like “utility maximization”, where “utility” is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you’re lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.
Possible courses of action include:
1. Brute forcing it, just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.
2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn’t enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.
3. Giving up, basically. Determining that you’d rather just do things that don’t make you miserable, even if you’re being a bad utilitarian. This will cause ongoing low-level dissonance as you’re aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.
There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.
The fact that utilitarianism is not only impossible for humans to execute but actually a potential cause of great internal suffering to even know about is probably not talked about enough.
For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as being Exhibit A of the thing that I’m whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a badguy the subject of a discussion rather than being an act of unremarked moderator fiat basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It’s a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.
If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I’m allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can’t beg off. That’s not an allowed social move. Better not to ask, or if you’re going to ask, be aware of what you’re asking.
Sure, you can pull the “but we’re supposed to be Rationalists(tm)” card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.
I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.
If you’re looking for feedback …
On one level I appreciate this post as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as a moderately disgusting admission, considering the specific subject matter; but I’m also pretty confident that most people feel the same, deep down.) I do think there is some value in understanding the thought processes behind community moderation, but that value is mixed.
On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it’s not like there’s anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn’t even try to open up a dialogue, and usually gets away with it.
Fundamentally you stand to gain little and lose much by making posts like this, and now I’ve spent my morning indulging myself reading up on drama that has not improved my life in any way.
Maybe, but I don’t think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.
Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.
I thought folks might enjoy our podcast discussion of two of Ted Chiang’s stories, Story of Your Life and The Truth of Fact, the Truth of Feeling.
Thanks for writing this up. Do you think massage would materially help with this type of issue?
I’ve been able to help a few people (including myself) with chronic neck/shoulder pain by getting people to utilize their rhomboids rather than their trapezius for the purpose of holding their shoulders back. The rhomboids have a significant mechanical advantage for that purpose. Most people can’t even intentionally activate their rhomboids; they have no kinesthetic awareness of even possessing them. Wondered if you had a response to this, within the framework of the “main muscles of movement”.
My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts which aren’t automatically explained in other models. Examples of phenomena that contradict the IFS model would be even more useful, though I’m failing to think of what those would look like.
I’m still not sure what it would mean for humans to actually have subagents, versus to just behave exactly as if they have subagents. I don’t know what empirical finding would distinguish between those two theories.
There are some interesting things that crop up during IFS sessions that I think require explanation.
For example, I find it surprising that you can ask the Part a verbal question, and that part will answer in English, and the answer it gives can often be startling, and true. The whole process feels qualitatively different from just “asking yourself” that same question. It also feels qualitatively different from constructing fictional characters and asking them questions.
I also find that taking an IFS approach, in contrast to a pure Focusing approach, results in much more dramatic and noticeable internal/emotional shifts. The IFS framework is accessing internal levers that Focusing alone isn’t.
One thing I wanted to show with my toy model, but didn’t really succeed, was that arranging an agent architecture where certain functions belong to the “subagents” rather than the “agent” can be more elegant or parsimonious or strictly simpler. Philosophically, I would have preferred to write the code without using any for loops, because I’m pretty sure human brains never do anything that looks like a for loop. Rather, all of the subagents are running constantly, in parallel, and doing something more like message-passing according to their individual needs. The “agent” doesn’t check each subagent, sequentially, for its state; the subagents proactively inject their states into the global workspace when a certain threshold is met. This is almost certainly how the brain works, regardless of whether you wish to use the word “subagent” or “neural submodule” or some other term. In this light, at least algorithmically, it would seem that the submodules do qualify as agents, in most senses of the word.
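The architecture described above can be sketched roughly like this. This is my own minimal construction, not the commenter's actual toy model: each subagent runs on its own thread and proactively pushes a message into a shared "global workspace" queue once its internal state crosses a threshold, and the agent never polls the subagents in turn.

```python
# Sketch: subagents run in parallel and inject their state into a
# shared workspace when a threshold is met; the "agent" only consumes
# whatever surfaces, in urgency order. Names and numbers are invented.
import queue
import threading
import time

workspace = queue.Queue()  # the global workspace

class Subagent(threading.Thread):
    def __init__(self, name, threshold, growth):
        super().__init__(daemon=True)
        self.name, self.threshold, self.growth = name, threshold, growth
        self.state = 0.0

    def run(self):
        # Each subagent evolves its own state independently and
        # proactively injects it once the threshold is crossed.
        while self.state < self.threshold:
            self.state += self.growth
            time.sleep(0.001)
        workspace.put((self.name, self.state))

hunger = Subagent("hunger", threshold=1.0, growth=0.02)
fear = Subagent("fear", threshold=1.0, growth=0.5)
hunger.start()
fear.start()

# The agent never iterates over subagents; it just blocks on the
# workspace. Fear grows much faster, so it should surface first.
first, _ = workspace.get()
second, _ = workspace.get()
print(first, second)
```

The point of the design is that no central loop inspects each submodule; the ordering of what reaches the workspace emerges from the submodules' own dynamics, which is the sense in which they behave like agents.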
Unfortunately there are many prominent examples of Enlightened/Awakened/Integrated individuals who act like destructive fools and ruin their lives and reputations, often through patterns of abusive behavior. When this happens over and over, I don’t think it can be written off as “oh those people weren’t actually Enlightened.” Rather, I think there’s something in the bootstrapping dynamics of tinkering with your own psyche that predictably (sometimes) leads in this direction.
My own informed guess as to how this happens is something like this: imagine your worst impulse arising, and imagine that you’ve been so careful to take every part of yourself seriously that you take that impulse seriously rather than automatically swatting it away with the usual superegoic separate shard of self; imagine that your normal visceral aversion to following through on that terrible impulse is totally neutralized, toothless. Perhaps you see the impulse arise and you understand intellectually that it’s Bad but somehow its Badness is no longer compelling to you. I don’t know. I’m just putting together the pieces of what certain human disasters have said.
Anyway, I don’t actually think you’re wrong to think integration is an important goal. The problem is that integration is mostly neutral. You can integrate in directions that are holistically bad for you and those around you, maybe even worse than if you never attempted it in the first place.