I think the problem is not that, “we don’t have a good mathematical definition of phenomenological consciousness,” it’s that there isn’t a definition at all! My theory is that this babble is useful to survival, because babble can still be used as a justification. On a species level, you still see religious people today saying, “it’s okay to kill and eat animals, but not humans, because humans have souls.” On an individual level, you’re going to fight harder if your brain insists there’s a “me” worth fighting for. On a memetic level, proclaiming to be realer than real, “the thing that is redness”, will get more people talking about it. Doesn’t mean there’s anything there.
James Camacho
I was a downvoter. I think the first three sections are mysterious in a way that invalidates the rest of the post. I like how he thinks in the next few sections, but it reminds me of the urban legend of the PhD student who found a lot of interesting results about a new class of mathematical objects, only to show up to their dissertation and realize only the trivial case existed. It generally isn’t productive to ask a lot of “what if?” questions before pinning down what you mean when you talk about consciousness or morality. If you can’t pin it down exactly, then go with a working definition based on a few examples that should fit that category. At the very least, apply some known examples to the “what if?” that follows. I think kbear’s comment does a pretty good job of this.
Disclaimer: This comment hasn’t really been edited for clarity, cohesion, or politeness. I do think it’s useful, but it’ll definitely be spicy.
Trying to derive all of morality from physics alone – say, if someone is crazy enough to derive an entire ethical philosophy and ideological movement based on maximizing entropy – would strike most people as deeply confused.
I think if most people consider this philosophy to be deeply confused, it is actually the case that most people are deeply confused. When I read this sentence, I was pleasantly surprised that someone else had figured it out, and even more surprised it was the leader of e/acc (unrelated to the previous surprisal).
I believe you are being serious in your post, but there’s this niggling suspicion in the back of my mind that, if I were satirizing how philosophers talk about consciousness/subjective experience/morality, this is how it would come out. Statements like,
“The world of consciousness. Subjective experience. What it feels like to see red.”
that you see exclaimed everywhere with an undertone of wonder and confusion, and no attempt to really pin down what is meant mathematically. Then a section called “pinpointing the ineffable” saying, “this probably sounds too abstract. Let’s try to make it more concrete,” without actually trying to make it more concrete (mathematically), just making explicit the wonder and confusion.
The rest of the post builds off of this in a constructive way, so I believe you are being serious here. I just don’t get the confusion around consciousness. As someone else said, the laws of mathematics are enough to explain the phenomenon (though they qualified their statement more). It isn’t a separate world. Subjective experience? Simply a reference to a compressed copy of the self. Ontologies? They’re a little harder to figure out, but I’m pretty sure it’s the significant bits of autoencoding.
And let’s not forget the central question, what about moral goods? Here’s a question for you: is soft actor-critic maxxing energy under entropy regularization, or entropy under energy regularization? They’re the same thing! But if you dig down into the two terms, entropy definitely exists, while energy always feels like a placeholder for something else. Like, “does this policy get the results I want, so I’m going to let it stick around and further evolve?” But that’s just maxxing entropy when you consider part of the game is for the researcher to keep using the policy.
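The symmetry is easy to see if you write the objective out. In standard notation (mine, not anything from the original post), soft actor-critic maximizes

```latex
J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right]
```

Dividing through by \(\alpha\) gives \(\mathbb{E}_{\pi}\!\left[\sum_t r(s_t,a_t)/\alpha + \mathcal{H}\right]\), which ranks policies identically: you can read it as reward maximization with entropy regularization, or entropy maximization with reward regularization. Which term counts as “the objective” is pure bookkeeping.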
If we continue to pursue the ‘decapitation’ theory of warfare, or the ‘kingpin strategy,’ then I do not believe that goes in good places. So far this hasn’t been flipped around against the leadership of democracies so much, but how long will that last?
Can you explain this further? To me it seems too good from a humanitarian, utilitarian, and game-theoretic perspective.
It seems worse to kill millions of rank-and-file soldiers than hundreds of generals/political leaders.
Those leaders are usually coercing the rank-and-file to fight in the first place by threatening their life or liberty. Furthermore, those leaders are usually the ones making the decision to go to war at all.
If you have the capability, you should punish the people imposing negative externalities on you, which sure includes the rank-and-file soldiers, but I think it’s better to model them like you model natural disasters. A lot of military training is spent teaching them to not think and just be a tool the higher-ups can use. The higher-ups are the real source of negative externalities here, so they are the appropriate people to punish.
I get how this kind of warfare changes the decision-making process among the generals/political leaders: for example, it is difficult to elect politicians in Mexico who promise to get rid of the drug cartels (at least, difficult to elect them for more than a few days). And maybe this leads to more stupid suffering than WWI, but it seems really hard to be worse than 10% of a generation getting conscripted and killed.
I am moderately interested in joining the Discord, at least just to see what has worked for others. I also got Long COVID ~1.5 years ago, and it’s rough.
I was talking with Joseph, and I think I like his SharkBot more because it fails more gracefully. Suppose (1) “proof” has an upper bound in computation cycles, and (2) people occasionally make mistakes in their logic. A good prover might spend more computation cycles in error correction. What happens if they do not have enough time to prove cooperation leads to cooperation, or defection leads to defection?
If Joseph’s bot stalls out on the second half of its computation, it concludes, “I couldn’t prove they would cooperate if I’m caught (provably) defecting,” and cooperates. If your bot stalls out on the second half of its computation, it concludes, “I couldn’t prove being caught (provably) defecting would lead to them also defecting,” and thinks it can get away with defection.
Another way of putting it: dumber bots are more likely to think they can get away with defection when they really can’t, and defect against bots smarter than them. If a bot is going to try to take advantage of rocks[1], it had better make sure it is actually playing against a rock, and not just making a stupid mistake that hurts everyone.
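A minimal sketch of the asymmetry, with the bounded proof search abstracted into a single result that can be True (proved), False (disproved), or None (stalled out of budget). The bot names and the three-valued encoding are my own illustration, not either bot’s actual code:

```python
def sharkbot(proved_exploitable):
    """Fails gracefully. `proved_exploitable` is the outcome of a bounded
    proof search for "they cooperate even if I'm caught (provably)
    defecting": True, False, or None if the search ran out of cycles."""
    if proved_exploitable is True:
        return "defect"      # proved defection goes unpunished
    return "cooperate"       # stalled or disproved: safe default

def overconfident_bot(proved_retaliation):
    """Fails deadly. `proved_retaliation` is the outcome of a bounded
    proof search for "being caught (provably) defecting leads to them
    also defecting"."""
    if proved_retaliation is True:
        return "cooperate"   # proved defection gets punished
    return "defect"          # stalled: assumes it can get away with it
```

When both searches stall (None), sharkbot cooperates, while overconfident_bot defects against an opponent that would in fact have punished it: exactly the dumber-bot-defects-against-smarter-bot failure mode.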
Also, as an aside, I think making mistakes (logical bit flips some percent of the time) naturally penalizes high-complexity policies. This is why you might expect societies to begin with mostly cooperate/defect bots, then transition to citizen/police bots, and slowly build complexity where each individual’s policy is relatively simple, but society as a whole gets more complex interactions. I think this would be an interesting area of research.
[1] From the phrase, “how do you play a Prisoner’s Dilemma against a rock?” Rocks are bots that cooperate even if it is proven you are going to defect.
“You, not all of you but most of you, should not be working on AI safety.”
So, it seems you endorse a utility function that puts more weight on others than your actual preferences. Wouldn’t you prefer to endorse a different utility function?
Well, do you care about the rest of humanity enough to send yourself to hell? Or adopt policies where you only get sent to hell in a small fraction of universes rather than a large one? Seems like a smart selfish egoist would send themselves to hell.
How do you determine which beings ought to be in a utilitarian’s utility function? I think generally the utilitarian decides for themselves, and the rest of society beats them over the head until the utilitarian includes them too.
Perhaps here is where the controversy comes in. The utilitarian comes along and says, “I want to maximize utility!” And everyone thinks, “great! she wants to help everyone out!” The selfish egoist comes along and says, “I am just going to fulfill whatever selfish desires I have!” And everyone thinks, “wow, that’s scary! what stops you from murdering people?”
I think, also, there is a sense in which utilitarians work to maximize the same utility function. This is also true for selfish egoists, but the two are both better and worse at negotiating: they are more prone to negotiate, but utilitarians make mistakes that are biased towards reaching a consensus, just because they solve the problem from different directions.
Yes.
I don’t understand what you don’t understand. I heard a remark once about a philosopher who really tried to steelman other people’s arguments, but so that they made sense according to the philosopher, not in the mental frame of the other person. It led to some pretty wacky arguments on the steelman side. I think here, you should assume when I say, “mathematically equivalent,” that’s what I mean. Like, any math you use in utilitarianism is the same as that of selfish egoism. Or, if you tried to put the two philosophies in mathematical terms, you get the exact same equations. So, it extends to logical beings or irrational beings. The words “selfish egoism” and “utilitarianism” are synonyms.
Not just egoism, selfish egoism. Every utility function people choose is a selfish one or they wouldn’t choose it. The claim isn’t, “selfish egoism is a subset of utilitarianism” but “selfish egoism is identically the same as utilitarianism.”
It should be consistent with any decision theory.
“People often submit incredibly epistemically rude and short-sighted comments on forums, but they deceive people into upvoting them by putting on a veneer of politeness. ‘John, I feel like you’ve got a nail in your head.’ they say. ‘Your conclusion is wrong so you must not have thought of this thing you explicitly mentioned in your post.’”
“The rationality scene is a little culty.”
“Utilitarianism and selfish egoism are mathematically the same [EDIT: i.e. they could be used as synonyms except for their different connotations].”
I think much of the suffering could be alleviated if any of these institutions were meritocratic instead of credentocratic:
Education
Recruiting
Finance
Education
I went to “good” secondary schools in my small state, but I would not consider them good. The bitrate was too slow and the other students were unmotivated to learn more than necessary. The teachers, too, were not paid at competitive rates—maybe one quarter the $200k that a good STEM graduate could make at the time—so it was rare to find a passionate and competent teacher. I did have one or two, but they should not have been the exception.
I did not actually take mathematics or science classes in secondary school; I went home and self-studied until I was old enough to enroll in college classes. As my father said, “never let schooling get in the way of your education.” This is why I am still confused about what American students learn between 4th and 10th grades. I understand that up to 3rd grade they are learning the three R’s[1], and the upper years take classes like calculus or chemistry, but in between...? What is there in between arithmetic and calculus? Algebra? And why is calculus spread over two years instead of one semester? These subjects only take a few months for the average student to learn, if they’re actually trying. My best guess is to extrapolate from my schooling in history and literature, which is to say students learn just about nothing.
What horrifies me is that schooling has gotten worse since I left. High schoolers are frequently illiterate—not functionally illiterate, but cannot-sound-out-these-words illiterate. I’m not terribly surprised, though. I remember being confused when I was younger, living within the system as it deteriorated.
“Why is our teacher asking us to add using these number blocks? Didn’t we learn addition two years ago? And why did we only do that activity once and never again?”
“Why is this allegedly sixth-grade standardized assessment asking me to show my work adding single-digit numbers? Didn’t we learn that in first grade? And what work is there to show?”
Forget meritocracy, this is not even internally consistent. This was confusing as a student, but now I recognize it’s just politics and credentocracy. The first, because incompetents in power ought to yield that power, right? Who cares if Common Core decreases the median AMC 10 score by 20%, our committee did something! for the kids! The second, because teachers and administrators get bonuses and promotions for a piece of paper saying, “test scores went up,” so they choose the tests accordingly. It’s actually worse than just that—teacher pay is pretty strictly tied to their credentials (bachelor’s or master’s degree regardless of university + years of tenure). I had an amazing competition math coach in 8th grade who could not get hired at a public school (even for pitiful wages) and ended up working for a private school. Yes, their school now wins every state math competition, but who cares about merit? Certainly not the education system.
It does not end in secondary school. Many students go on to obtain a four-year degree, then a six-year PhD, and the pinnacle of education is spending ten more years publishing towards tenure or perishing. I tried to opt out and applied to a hundred or so jobs at the end of high school, but only got one or two callbacks. It actually still astounds me how Google sent me a recruitment email in ninth grade for stumbling my way through Foobar—before they knew my age—but when I actually knew some stuff three years later and tried applying every system autorejected me. I ended up going to university and then joining a startup, but I could not even join a startup (at least, a good startup) without the education credential. Which brings us to our next issue.
Recruiting
Recruiters mostly suck at their jobs. Part of this is mass spam from lying applicants, making it hard to sift for competency, but there is no excuse for filtering out everyone without university experience. It is ridiculously easy to find a list of the USAMO participants each year, and firms with competent recruiters (like Jane Street) do, and send out advertisements to them. Oh, and they also sponsor academic competitions, YouTubers, and sites like AoPS or Brilliant.org. Why? Because they can do an expected-value calculation on the difference between the cost and value of certain types of recruitment. The optimal move is almost never paying low wages to nonexperts to sift through spam from lying applicants, and yet that’s what pretty much every company does. Nonexperts can’t tell the difference between JavaScript and Java, so they just look for the right keywords and filter for education level. At the end of high school I couldn’t be interviewed for a median-salary job, but a few months into my first year at MIT my applications suddenly became visible. And 3–4 years after I matriculated (presumably when I was nearing graduation) there was quite an uptick in recruiters asking me to apply to their jobs.
It’s really confusing behavior. You would think the market should be efficient enough to encourage companies to spend less on recruitment. Amortized, ~5% of payroll gets paid to recruiters. The issue, though, is that companies seem to operate under a Benetarian hiring philosophy even as they’re putting out “help needed” posters. Hiring someone bad creates visible numbers that HR doesn’t like, while there are no numbers written down when you don’t hire someone good. This problem only exists because HR numbers are separate from product-growth numbers and no one audits them properly, at least once the organization becomes big enough. Startups are more amenable to “risky” hires (those without credentials proving they are likely not super-negative EV). You would think that as startups grow, those with better recruitment structures would grow faster, so the big organizations would be fine, but that requires an efficient market in that respect, which requires a free market. And the market is not free.
Finance
Founders and early employees want to have money, not just stock options they can sell for money, so they often sell that stock. To whom? Whoever has enough liquid cash on hand. This is true for small businesses too. Why not retire if a private equity firm is offering you $10m for your mom-and-pop shop? It makes sense. The only issue is that some groups can borrow cash more cheaply than others based on their established reputation, whether or not they will run the business more efficiently. The United States’ central bank can borrow money at ~4% interest rates, while the average homebuyer pays 6%. This means a homebuyer can only outbid the central bank if the home is 50% more valuable to them. Of course, no one is bidding on homes against the central bank, but they are bidding against Blackstone. And companies with more efficient recruiting are being auctioned off to more established equity firms, which go in and align the systems to be closer to their own.
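The 50% figure falls out of a perpetuity valuation, a sketch under the simplifying assumption that a buyer bids up to the price at which annual financing cost equals the asset’s annual value (the dollar amounts here are illustrative, not from the comment):

```python
def max_bid(annual_value, rate):
    # Highest price where interest cost (price * rate) equals the
    # asset's annual value: price = annual_value / rate.
    return annual_value / rate

# Same $24k/year of value, financed at 4% vs. 6%:
cheap_money = max_bid(24_000, 0.04)  # can bid up to $600k
dear_money = max_bid(24_000, 0.06)   # can bid up to $400k
ratio = cheap_money / dear_money     # 1.5x, i.e. a 50% higher bid
```

So the borrower at 6% only wins the auction if the asset is worth at least 50% more to them than to the borrower at 4%.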
When I learned how difficult it was for Stripe to get a banking license in the UK, I was initially confused. If they have the right policies in place, have gone through the legal work, and are good for the money, what more could the government want? They wanted an established reputation. If anyone in Silicon Valley could credential themselves as a bank, they could borrow more money at lower rates and blow it on poor investments[2]. People want to lend to those worthy of credit, but it takes time to build up that reputation, which is a constraint on the free market, slowing down recruiter efficiency.
[1] Reading, ’riting, ’rithmetic
[2] Like Silicon Valley Bank did.