See also Central Flows for why the magnitude is relatively constant—it also predicts it should be around .
Okay, this helps me understand where you are coming from. Basically, there are antinormative conspiracies that are bad for these institutions, but less bad for themselves, so they grow in relative power and are difficult to dislodge by uncoordinated pronormative actors [1] . I would say, sure, these conspiracies exist [2] , but the people within them would readily jump to the pronormative side if they can see greater benefits. It is possible for individual actors to defeat these conspiracies just by proposing better solutions to invested parties.
For example… after several minutes, I could not think of an example. I wanted to say, “clearly Linux will outcompete Windows, it’s a better and free OS,” but in reality the Microsoft conspiracy bribes schools to indoctrinate children into buying their OS when they grow up. Or maybe, “well don’t people leave cults when they realize they’re better off outside them?” but in reality the cult teaches people an inconsistent method of integrating value and sourcing information on consistent methods. So it’s actually rarer and harder than I thought, and even if a pronormative actor succeeds, why won’t their object-level gains just get expropriated by a new antinormative conspiracy?
I think it should be possible to protect against that. For example, honest education studies and assessments would calibrate the system. But what happens in reality is politically motivated studies that are dishonestly reported and passed off as good teaching standards. There isn’t really a mechanism to stop politics in research funding; even the theoretical physicists couldn’t figure it out.
How do you solve this problem? Would fiercer competition work, so parasitized institutions die out faster?
EDIT: The solution is already there in my previous comment! We can’t prevent antinormative conspiracies since they can just adopt the same policies as normative institutions until the right time to cash out. However, is it really an antinormative conspiracy if it perfectly mimics a normative institution? If you put in guard labor to audit the long-run institution’s history, you can force them to play ‘up’ enough that they are essentially a normative institution. Of course, someone has to watch the guards, but you can watch them in a circular pattern.
- ↩︎
Uncoordinated, because most of the time they do not even realize they are in conflict with a coordinated enemy. If they did, they could coordinate and win, which is why their opposition must be a conspiracy.
- ↩︎
Though those within the conspiracy might not label it as such; are the MIT students who lied their way into the school (this comprises the majority of the student body) part of a conspiracy to cash out on the value of MIT’s reputation? They would say no, they’re completely unaware of being in a conspiracy; we would say that if they quack like a conspiracy and act like a conspiracy, they’re part of a conspiracy.
Thank you for the reference, I’ll read it tomorrow (or skim if it’s >50 pages). By “training in game theory and information theory” I meant something akin to “training in chess”, “training in math competitions”, or perhaps most similar, “training in quantitative trading”. I say this like it is a prerequisite, because I think there are certain ways of thinking you can only automatically do after beating your head against the subject for a long time. For example, writing correct proofs came naturally to me because I had already trained in chess where you constantly check for mistakes, and writing essays came naturally to me because I had already trained via coding to plan pages of text ahead of time. I think without having similar training in game theory and information theory, the cognitive load and inferential leaps may be too much to overcome through trial and error, analogous to building a rocket without Newtonian mechanics or a computer language without formal logic.
Here are some things I learned that are obviously true in hindsight, but were not something I thought I should think about in the first place:
- Adversaries have no reason to divulge information, allies have no reason to conceal information (assuming you are true adversaries or allies).
- Talk without imposed costs only allows coordination when all parties benefit from distinguishing signals. Otherwise it is just babble. Notice how this makes resumes mostly useless (and why the equilibrium is everyone lying about too-difficult-to-prove signals like years of experience).
- A reputation’s value comes from costly signals. You can also spend reputation. For example, take an up/down dilemma [1] with a long-run (institutional) actor and a series of one-off agents that know the institution’s prior history. The long-run actor should buy reputation by playing up enough so every agent believes they will be paid more by playing up than down, while cashing in on that reputation by occasionally playing down.
That last part is where the information theory comes in. Now that I think about it, it is probably enough for the first few companies to realize they can use game/information theory in pricing recruitment and stumble their way from there. The more rigorous analysis can come later, as the market gets more competitive. I’m also curious how well this is done in the credit and insurance industries.
- ↩︎
Player A gets paid most for down/up responses and a little for up/up, while Player B gets paid most for up/up and a little for down/down.
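This buy/spend dynamic can be sketched in a quick simulation. Everything numeric here is an illustrative assumption: the payoffs (3 for the "most" outcomes, 1 for the "a little" outcomes), the agents' threshold rule, and the institution's fixed defect rate.

```python
import random

# Assumed payoffs for the footnote's up/down dilemma:
# A (long-run institution) earns most for down/up, a little for up/up;
# B (one-off agent) earns most for up/up, a little for down/down.
PAYOFF_A = {("down", "up"): 3, ("up", "up"): 1}

def agent_move(history):
    """One-off agent: plays up only if A's track record makes up profitable."""
    if not history:
        return "down"                       # no reputation yet, play safe
    p = history.count("up") / len(history)  # belief that A plays up
    # E[up] = 3p vs E[down] = 1 - p, so play up iff p > 1/4.
    return "up" if 3 * p > 1 - p else "down"

def institution_total(rounds=1000, defect_rate=0.1, seed=0):
    """Total payoff to the long-run actor over a sequence of one-off agents."""
    rng = random.Random(seed)
    history, total = [], 0
    for _ in range(rounds):
        b = agent_move(history)
        # Buy reputation by playing up; cash it in with occasional downs.
        a = "down" if rng.random() < defect_rate else "up"
        total += PAYOFF_A.get((a, b), 0)
        history.append(a)
    return total
```

With these numbers, a 10% defect rate outperforms never defecting (each down against a trusting agent pays 3 instead of 1), while a 90% defect rate collapses trust and earns almost nothing: the "spend reputation, but not too fast" logic above.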
Until a few years ago the kids went into quant. They still do, but AI capabilities labs have started recruiting them too. There’s another issue you’re missing with the US military’s R&D: a majority of the students at MIT are now foreigners or children of foreigners, up from ~10% fifty years ago, and the USA has repeatedly stated China is its greatest threat.
The talent exists, even specifically for war. MIT holds a Battlecode competition every year, and the top teams could bring drone warfare to a new paradigm. They are not interested. War is unpopular among college students in general, but it is especially unpopular at MIT. The military sends recruiters to the career fairs every year and everyone walks right past them. They get fewer conversations than startups that are semi-scams.
During WWII the USA actively recruited German scientists, but today Chinese scientists are deemed a ‘security risk’. I heard an MIT professor moved back to China due to safety concerns last year. I have not heard, but suspect that many students did not return to MIT after Trump denied Harvard visas. Amusingly, at the same time US universities have become less friendly to foreigners, China has increased scholarships for foreigners.
I think a better financial system could lead to a better recruitment system which could lead to a better education system, but I also think the problems lower down can be solved independently. Quantitative trading firms (like Jane Street) do a comparatively amazing job with recruitment, perhaps because their conflicts with each other are more frequent (-> faster evolution) and metrics more precise (-> better gradients). I imagine the current recruiting market will improve over time, and I also believe a good entrepreneur could single-handedly make large improvements. Of course, there’s always the doubt, “then why hasn’t someone yet?” but then again, very few people have the necessary training in game theory and information theory [1] .
More importantly, I think the deterioration in education is mostly removed from these other systems. Yes, the credential system has pushed more people into college; yes, it has led to administrators milking their districts; and yes, stronger economic pressures to perform due to a calibrated recruitment market would prevent this. But I think this mostly happened due to sabotage in academia by Marxists. They formed a clique which circlejerked papers affirming Marxism and denied funding to research on merit, or even research that tried to be precise about the effect of various Marxist policies. For example, there are zero papers studying how “No Child Left Behind” affected gifted students [2] and only a few studying its effect on race achievement gaps (though several hundred purporting to study this effect for funding’s sake). I think there was a political problem they needed to solve—the replication crisis of a philosophy with low birthrates—and they recognized public schools could replace familial socialization, so they targeted schooling in particular. I get that schools create bullshit jobs, which is good for the managerial class of administrators, and I definitely saw problems with that at MIT, but it was a different set of problems than the anticompetence I saw at public schools. My professors there at least knew their subject, even if they complained about stupid policies.
- ↩︎
I just know enough to say you should be able to put value on reputation and costly signals and recognize most people only have a hazy sense of what those values are. They can say Ivy League > State School, but can they say whether it is better to interview 10 Ivy League graduates at $5000/candidate or 100 State School graduates at $500/candidate?
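A back-of-envelope version of that question, where every input (hire rates, value per hire) is a made-up number chosen only to show the shape of the calculation:

```python
def net_value(n_interviews, cost_per_interview, hire_rate, value_per_hire):
    """Expected net value of an interview pipeline."""
    return n_interviews * (hire_rate * value_per_hire - cost_per_interview)

# Hypothetical inputs: an Ivy League hire adds $60k of value with a 30%
# hire rate per interview; a state-school hire adds $50k at a 15% rate.
ivy   = net_value(10,  5000, 0.30, 60_000)   # 10 * (18,000 - 5,000)
state = net_value(100,  500, 0.15, 50_000)   # 100 * (7,500 - 500)
```

Under these particular assumptions the cheaper, wider funnel wins by more than a factor of five; the point is not the answer but that almost nobody in recruiting runs even this two-line calculation.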
- ↩︎
There are a few that purport or attempt to study this, but they all use metrics that cannot pick up on effects among the top 5% of students.
I think much of the suffering could be alleviated if any of these institutions were meritocratic instead of credentocratic:
- Education
- Recruiting
- Finance
Education
I went to “good” secondary schools in my small state, but I would not consider them good. The bitrate was too slow and the other students were unmotivated to learn more than necessary. The teachers, too, were not paid at competitive rates—maybe one quarter the $200k that a good STEM graduate could make at the time—so it was rare to find a passionate and competent teacher. I did have one or two, but they should not have been the exception.
I did not actually take mathematics or science classes in secondary school; I went home and self-studied until I was old enough to enroll in college classes. As my father said, “never let schooling get in the way of your education.” This is why I am still confused about what American students learn between 4th and 10th grades. I understand up to 3rd grade they are learning the three R’s [1] , and the upper years take classes like calculus or chemistry, but in between...? What is there in between arithmetic and calculus? Algebra? And why is calculus spread over two years instead of one semester? These subjects only take a few months for the average student to learn, if they’re actually trying. My best guess is to extrapolate from my schooling in history and literature, which is to say students learn just about nothing.
What horrifies me is that schooling has gotten worse since I left. High schoolers are frequently illiterate—not functionally illiterate, but cannot-sound-out-these-words illiterate. I’m not terribly surprised, though; I remember being confused when I was younger, living inside the system as it deteriorated.
“Why is our teacher asking us to add using these number blocks? Didn’t we learn addition two years ago? And why did we only do that activity once and never again?”
“Why is this allegedly sixth-grade standardized assessment asking me to show my work adding single-digit numbers? Didn’t we learn that in first grade? And what work is there to show?”
Forget meritocracy, this is not even internally consistent. This was confusing as a student, but now I recognize it’s just politics and credentocracy. The first, because incompetents in power ought to yield that power, right? Who cares if Common Core decreases the median AMC 10 score by 20%, our committee did something! for the kids! The second, because teachers and administrators get bonuses and promotions for a piece of paper saying, “test scores went up,” so they choose the tests accordingly. It’s actually worse than just that—teacher pay is pretty strictly tied to their credentials (bachelor’s or master’s degree regardless of university + years of tenure). I had an amazing competition math coach in 8th grade who could not get hired at a public school (even for pitiful wages) and ended up working for a private school. Yes, their school now wins every state math competition, but who cares about merit? Certainly not the education system.
It does not end in secondary school. Many students go on to obtain a four-year degree, then a six-year PhD, and the pinnacle of education is spending ten more years publishing towards tenure or perishing. I tried to opt out and applied to a hundred or so jobs at the end of high school, but only got one or two callbacks. It actually still astounds me how Google sent me a recruitment email in ninth grade for stumbling my way through Foobar—before they knew my age—but when I actually knew some stuff three years later and tried applying every system autorejected me. I ended up going to university and then joining a startup, but I could not even join a startup (at least, a good startup) without the education credential. Which brings us to our next issue.
Recruiting
Recruiters mostly suck at their jobs. Part of this is mass spam from lying applicants, making it hard to sift for competency, but there is no excuse for filtering out everyone without university experience. It is ridiculously easy to find a list of the USAMO participants each year, and firms with competent recruiters (like Jane Street) do, and send out advertisements each year. Oh, and they also sponsor academic competitions, YouTubers, and sites like AoPS or Brilliant.org. Why? Because they can do an expected value calculation on the difference between the cost and value of certain types of recruitment. The optimal move is almost never paying low wages to nonexperts to sift through spam from lying applicants, and yet that’s what pretty much every company does. Nonexperts can’t tell the difference between JavaScript and Java, so they just look for the right keywords and filter for education level. At the end of high school I couldn’t be interviewed for a median-salary job, but a few months into my first year at MIT my applications suddenly became visible. And 3–4 years after I matriculated (presumably when I was nearing graduation) there was quite an uptick in recruiters asking me to apply to their jobs.
It’s really confusing behavior. You would think the market should be efficient enough to encourage companies to spend less on recruitment. Amortized, ~5% of payroll gets paid to recruiters. The issue, though, is companies seem to operate under a Benatarian hiring philosophy even as they’re putting out “help wanted” posters. Hiring someone bad creates visible numbers that HR doesn’t like, while there are no numbers written down when you don’t hire someone good. This problem only exists because HR numbers are separate from product growth numbers and no one audits them properly, at least once the organization becomes big enough. Startups are more amenable to “risky” hires (those without credentials proving they are likely not super-negative EV), and you would think that as they grow, those with better recruitment structures would grow faster, so the big organizations would be fine. But that requires an efficient market in that respect, which requires a free market. And the market is not free.
Finance
Founders and early employees want to have money, not just stock options that they can sell for money, so they often sell that stock. To whom? Whoever has enough liquid cash on hand. This is true for small businesses too. Why not retire if a private equity firm is offering you $10m for your mom-and-pop shop? It makes sense. The only issue is some groups can borrow cash cheaper than others based on their established reputation, whether or not they will run the business more efficiently. The United States’ central bank can borrow money at ~4% interest rates, while the average homebuyer pays 6%. This means a homebuyer can only outbid the central bank if the home is 50% more valuable to them. Of course, no one is bidding on homes against the central bank, but they are bidding against Blackstone. And companies with more efficient recruiting are being auctioned off to more established equity firms, which go in and align the systems to be closer to their own.
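The bidding arithmetic in that paragraph can be checked directly under an interest-only approximation (the $60k annual payment below is an arbitrary example figure):

```python
def max_principal(annual_payment, rate):
    """Largest loan a fixed annual payment can service, interest-only."""
    return annual_payment / rate

cheap_borrower  = max_principal(60_000, 0.04)  # borrows at ~4%
costly_borrower = max_principal(60_000, 0.06)  # borrows at ~6%
# The ratio is 1.5: the low-rate bidder can pay 50% more for the same
# asset, regardless of who would run it better.
```

The same payment services $1.5m of principal at 4% but only $1m at 6%, which is where the "50% more valuable" figure comes from.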
When I learned how difficult it was for Stripe to get a banking license in the UK, I was initially confused. If they have the right policies in place, have gone through the legal work, and are good for the money, what more could the government want? They wanted an established reputation. If anyone in Silicon Valley could credential themselves as a bank, they could borrow more money at lower rates and blow it on poor investments [2] . People want to lend to those worthy of credit, but it takes time to build up that reputation, which is a constraint on the free market slowing down recruiter efficiency.
I think the problem is not that “we don’t have a good mathematical definition of phenomenological consciousness,” it’s that there isn’t a definition at all! My theory is that this babble is useful to survival, because babble can still be used as a justification. On a species level, you still see religious people today saying, “it’s okay to kill and eat animals, but not humans, because humans have souls.” On an individual level, you’re going to fight harder if your brain insists there is a me. On a memetic level, proclaiming to be realer than real, “the thing that is redness”, will get more people talking about it. Doesn’t mean there’s anything there.
I was a downvoter. I think the first three sections are mysterious in a way that invalidates the rest of the post. I like how he thinks in the next few sections, but it reminds me of the urban legend of the PhD student who found a lot of interesting results about a new class of mathematical objects, only to show up to their dissertation defense and realize only the trivial case existed. It generally isn’t productive to ask a lot of “what if?” questions before pinning down what you mean when you talk about consciousness or morality. If you can’t pin it down exactly, then go with a working definition based on a few examples that should fit that category. At the very least, apply some known examples to the “what if?” that follows. I think kbear’s comment does a pretty good job of this.
Disclaimer: This comment hasn’t really been edited for clarity, cohesion, or politeness. I do think it’s useful, but it’ll definitely be spicy.
Trying to derive all of morality from physics alone – say, if someone is crazy enough to derive an entirely ethical philosophy and ideological movement based on maximizing entropy — would strike most people as deeply confused.
I think if most people consider this philosophy to be deeply confused, it is actually the case that most people are deeply confused. When I read this sentence, I was pleasantly surprised that someone else had figured it out, and even more surprised it was the leader of e/acc (unrelated to the previous surprisal).
I believe you are being serious in your post, but there’s this niggling suspicion in the back of my mind that, if I were satirizing how philosophists talk about consciousness/subjective experience/morality, this is how it would come out. Statements like,
“The world of consciousness. Subjective experience. What it feels like to see red.”
that you see exclaimed everywhere with an undertone of wonder and confusion, and no attempt to really pin down what is meant mathematically. Then a section called “pinpointing the ineffable” saying, “this probably sounds too abstract. Let’s try to make it more concrete,” without actually trying to make it more concrete (mathematically)—just make explicit the wonder and confusion.
The rest of the post builds off of this in a constructive way, so I believe you are being serious here. I just don’t get the confusion around consciousness. As someone else said, the laws of mathematics are enough to explain the phenomenon (though they qualified their statement more). It isn’t a separate world. Subjective experience? Simply a reference to a compressed copy of the self. Ontologies? They’re a little harder to figure out, but I’m pretty sure it’s the significant bits of autoencoding.
And let’s not forget the central question, what about moral goods? Here’s a question for you: is soft actor-critic maxxing energy under entropy regularization, or entropy under energy regularization? They’re the same thing! But if you dig down into the two terms, entropy definitely exists, while energy always feels like a placeholder for something else. Like, “does this policy get the results I want, so I’m going to let it stick around and further evolve?” But that’s just maxxing entropy when you consider part of the game is for the researcher to keep using the policy.
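The equivalence is just a rescaling of the maximum-entropy RL objective (writing reward where the comment says "energy"; in SAC's energy-based view the optimal policy is Boltzmann in Q, with −Q playing the role of energy):

```latex
J(\pi)
  = \mathbb{E}_{\tau \sim \pi}\!\left[\sum_t r(s_t, a_t)
      + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right]
  = \alpha\, \mathbb{E}_{\tau \sim \pi}\!\left[\sum_t
      \mathcal{H}\big(\pi(\cdot \mid s_t)\big)
      + \tfrac{1}{\alpha}\, r(s_t, a_t)\right]
```

Scaling by α > 0 leaves the argmax unchanged, so "reward regularized by entropy" and "entropy regularized by reward" select the same policy.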
If we continue to pursue the ‘decapitation’ theory of warfare, or the ‘kingpin strategy,’ then I do not believe that goes in good places. So far this hasn’t been flipped around against the leadership of democracies so much, but how long will that last?
Can you explain this further? To me it seems good from a humanitarian, utilitarian, and game-theoretic perspective.
It seems worse to kill millions of rank-and-file soldiers than hundreds of generals/political leaders.
Those leaders are usually coercing the rank-and-file to fight in the first place by threatening their life or liberty. Furthermore, those leaders are usually the ones making the decision to go to war at all.
If you have the capability, you should punish the people imposing negative externalities on you, which sure includes the rank-and-file soldiers, but I think it’s better to model them like you model natural disasters. A lot of military training is spent teaching soldiers not to think and just be a tool the higher-ups can use. The higher-ups are the real source of negative externalities here, so they are the appropriate people to punish.
I get how this kind of warfare changes the decision-making process among the generals/political leaders; for example, it is difficult to elect politicians in Mexico who promise to get rid of the drug cartels (at least, difficult to elect them for more than a few days). And maybe this leads to more stupid suffering than WWI, but it seems really hard to be worse than 10% of a generation getting conscripted and killed.
I am moderately interested in joining the Discord, at least just to see what has worked for others. I also got Long COVID ~1.5 years ago, and it’s rough.
I was talking with Joseph, and I think I like his SharkBot more because it fails more gracefully. Suppose (1) “proof” has an upper bound in computation cycles, and (2) people occasionally make mistakes in their logic. A good prover might spend more computation cycles in error correction. What happens if they do not have enough time to prove cooperation leads to cooperation, or defection leads to defection?
If Joseph’s bot stalls out on the second half of its computation, it concludes, “I couldn’t prove they would cooperate if I’m caught (provably) defecting,” and cooperates. If your bot stalls out on the second half of its computation, it concludes, “I couldn’t prove being caught (provably) defecting would lead to them also defecting,” and thinks it can get away with defection.
Another way of putting it: dumber bots are more likely to think they can get away with defection when they really can’t, and defect against bots smarter than them. If a bot is going to try to take advantage of rocks [1] , it had better make sure it is actually playing against a rock, and not just making a stupid mistake that hurts everyone.
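The two failure modes can be shown with a toy model. The "prover" here is a stand-in with a computation budget, not a real Löbian proof search; the claim encoding and budget mechanics are illustrative assumptions.

```python
def try_prove(claim, budget):
    """Stand-in prover: returns the claim's truth value if the budget
    covers its verification cost, or None if the search stalls out."""
    if budget < claim["cost"]:
        return None                # ran out of cycles: no proof either way
    return claim["true"]

def cautious_bot(punished, budget):
    # SharkBot's style: cooperate unless we positively proved we can get
    # away with defecting. Stalling out therefore means cooperate.
    return "D" if try_prove(punished, budget) is False else "C"

def greedy_bot(punished, budget):
    # The risky style: defect unless we positively proved defection gets
    # punished. Stalling out means (possibly mistakenly) defect.
    return "C" if try_prove(punished, budget) is True else "D"

# 'punished' encodes "if I provably defect, my opponent defects back".
punished = {"cost": 5, "true": True}
```

With enough budget both bots cooperate; when the proof stalls (budget below 5 here), the cautious bot still cooperates, while the greedy bot defects against an opponent that was, in fact, going to punish it.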
Also, as an aside, I think making mistakes (logical bit flips some percent of the time) naturally penalizes high-complexity policies. This is why you might expect societies to begin with mostly cooperate/defect bots, then transition to citizen/police bots, and slowly build complexity where each individual’s policy is relatively simple, but society as a whole gets more complex interactions. I think this would be an interesting area of research.
- ↩︎
From the phrase, “how do you play a Prisoner’s Dilemma against a rock?” Rocks are bots that cooperate even if it is proven you are going to defect.
“You, not all of you but most of you, should not be working on AI safety.”
So, it seems you endorse a utility function that puts more weight on others than your actual preferences. Wouldn’t you prefer to endorse a different utility function?
Well, do you care about the rest of humanity enough to send yourself to hell? Or to adopt policies where you only get sent to hell in  universes rather than ? Seems like a smart selfish egoist would send themselves to hell.
How do you determine which beings ought to be in a utilitarian’s utility function? I think it’s generally the utilitarian decides for themselves and the rest of society beats them over the head until the utilitarian includes them too.
Perhaps here is where the controversy comes in. The utilitarian comes along and says, “I want to maximize utility!” And everyone thinks, “great! she wants to help everyone out!” The selfish egoist comes along and says, “I am just going to fulfill whatever selfish desires I have!” And everyone thinks, “wow, that’s scary! what stops you from murdering people?”
I think, also, there is a sense in which utilitarians work to maximize the same utility function. This is also true for selfish egoists, but they’re both better and worse at negotiating (they are more prone to negotiate, but utilitarians make mistakes that are biased towards reaching a consensus just because they solve the problem from different directions).
American culture/philosophy/ideology is heavily influenced by Christianity and the Enlightenment (which is heavily influenced by Christianity). For example, their Declaration of Independence holds it self-evident that all men “are endowed by their Creator with certain unalienable Rights.”
Note those capital ‘Rights’ enforced by the capital ‘Creator’. This line is so common in the culture that pretty much every American has it memorized by the age of ten. And the thing is, China doesn’t have this ‘Creator’ to make ‘Rights’ unalienable. No, China is an atheist, communist, god-hating country.
This is what Americans mean when they say China has incompatible cultural values. This is even what atheist philosophers in America mean. They believe in unalienable rights, the CCP believe in the mandate of heaven. Of course, despite the existence of rights being self-evident in the American memetic programming, there is still much debate about what those rights are. However, American debates on rights skip over the practical enforcement and ask only whether people would be better off if omnipotent gods enforced particular rules. Usually the gods pass the job of enforcement to America.
In effect, Americans believe they have the mandate of heaven while not believing they believe that, while China is consciously aware of this and believes they will eventually gain it back. You’ll see China say, “we will someday take Taiwan back, through military force if necessary,” while America says, “but you can’t do that! They have rights!”
I think the fear is that if China grows more powerful than America, America would see unalienable rights suddenly being alienated. It would force the West to rethink several centuries of Enlightenment culture and philosophy, and perhaps several millennia of Christianity.
When it comes to Americans saying China is aggressive, they say that because they legitimately believe they are right simpliciter, deus vult. They do have a history of pretty pure intentions—wanting to be a good ally or police on the international stage, not just out of their own self-interest—and they recognize China is acting primarily out of self-interest. Yes, China says they want win-win cooperation, but America is virtuous enough to settle for lose-win! I think this comes from self-sacrifice being the primary tenet of Christianity, with no comparable idea in Confucianism. So Americans excuse their aggression with, “well, we are in the right, and the proof is we’re not doing it out of self-interest,” while China’s excuse, “well, it was in our self-interest,” is an admission of guilt. They are only proving that they don’t care about other countries (for some reason America finds it hard to take China at its word, “we prefer win-win cooperation”) and so it would be very bad for other countries if China grew in power.
EDIT: Those who disagree, why?