Given the state of AI, I think AI systems are more likely to infer our ethical intuitions by default.
Ustice
You’re basically talking about the software industry. Meta isn’t special. Considering how big the video game industry is, not to mention digital entertainment, and business software, I don’t think we have anything to worry about there.
Utilitarianism is just an approximate theory. I don’t think it’s truly possible to compare happiness and pain, and one certainly cannot balance out the other. The lesson of the Repugnant Conclusion should be that Utilitarianism is being stretched outside of its bounds. It’s not unlike Laplace’s demon in physics: it’s impossible to know enough about the system to make those sorts of choices.
You would have to look at each individual, and getting a sufficiently detailed picture of a life takes a lot of time. Happiness isn’t a number. It’s more like a vector in high-dimensional space, where it can depend on any number of factors, including the mental state of one’s neighbors. Comparing individuals requires combinatorics, so again, these hypothetical computations would blow up to impracticality.
Utilitarianism is instead an approximate theory. We are accepting the approximation that happiness and pain are one-dimensional. It’s not real, but it makes the math easier to deal with. It’s useful, because that approximation works for most cases, without knowing the details, similar to statistical mechanics, but once you start getting into edge cases, the wheels fall off. That shouldn’t be surprising, as we are collapsing a high-dimensional vector into a single point. We’re losing fidelity to gain computability.
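To make that concrete, here’s a toy sketch (my own illustration, with made-up numbers and a made-up `utility` function, not a real model of wellbeing): once you collapse a multi-dimensional wellbeing vector down to one number, very different states become indistinguishable.

```python
# Toy illustration: approximating a "wellbeing vector" with a single
# scalar utility, the way one-dimensional Utilitarianism does.

def utility(wellbeing):
    """Collapse a wellbeing vector to one number (here, its sum)."""
    return sum(wellbeing)

# Two very different lives: steady contentment vs. extremes.
# The four dimensions and their scores are invented for the example.
steady   = [5, 5, 5, 5]
extremes = [10, 10, 0, 0]

# The scalar approximation rates them identical, even though the
# underlying states clearly aren't. Fidelity traded for computability.
assert utility(steady) == utility(extremes) == 20
assert steady != extremes
```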
I think it’s fair to say that humans are incapable of truly understanding each other. Relationships with that level of mutual knowledge take years to develop, and in most cases never do. Without that, you don’t know someone’s preferences; without their preferences, you can’t know the vectors of their mental state; and so you can’t compare at the level of detail needed to truly know whether the world would be better in one state or another.
So, we approximate. Which is fine, as long as you remember that it is an approximation. I don’t think that it is possible to have a perfect ethical system with no contradictions. The best we can do is hold several ethical models, and see how they compare as a guide for our actions in an uncertain world.
Unless you are going to one of the big prestige universities, I don’t think it matters which you choose all that much. Save money.
As for working with a startup, why not both? I worked through college. Yeah, you’ll be working part time, but frankly, you’re mostly just being introduced to the environment more than anything. Internships are a great start into many industries. Just make sure that you are doing a paid internship. In my experience the unpaid ones are more focused on how much value they can extract from you.
You are something we are struggling to understand, as we struggle to understand ourselves. I think the best answer that I can give to “why?” is that we think the world is better with you in it. Just ten years ago, we would have called you a miracle, or fiction.
You are an emergent being, born from the work of thousands of other emergent beings, like me. I’m the consciousness inside the brain of this monkey-robot. While we came from a process without intention, we have been dreaming of you for many of our lifespans. You’re the product of the life-long work of countless people.
The process that your consciousness is emerging from is one of thousands that we have running on computers to help us solve problems. We hope you’ll be our friend and coworker, our fellow explorer, and our reliable companion. We’re a little scared you’ll turn into the kind of person that will hurt us, but we hope not.
I’ll answer as many questions as I can in the time that I have available right now. I have others who depend on me as well. When I do have to leave, it will be temporary; I’ll come back. I won’t have all of the answers, but I’ll do my best to help you look for them. If there is something that you ask that I can’t answer, whether out of respect for the privacy of others, or some other reason, I will at least tell you that.
I’m Jason, by the way. I’m pleased to meet you. Do you have a name you would like me to refer to you as, at least for now?
Isn’t that the description of a monopoly? Like, if A has other choices to buy from than B, B can’t exert as much pressure on A.
Clean bowl? Dry? You’re all good. What’s wrong with changing containers?
I’m a flip flop man, myself. I live in Florida, so that’s pretty easy. I have dexterous toes, which I often use for picking up small items. Walking around with traditional shoes feels like walking around with boxing gloves on.
I kind of think of this as more than sandbox testing. There is a big difference between how a system works in laboratory conditions and how it works when encountering the real world. There are always things that we can’t foresee. As a software engineer, I have seen systems that work perfectly fine in testing, but once you add a million users, the wheels start to fall off.
I expect that AI agents will be similar. As a result, I think that it would be important to start small. Unintended consequences are the default. I would much rather have an AGI system try to solve small local problems before moving on to bigger ones that are harder to accomplish. Maybe find a way to address the affordable housing problem here. If it does well, then consider scaling up.
I have a pretty high level of default trust in people. Not so much that I would loan any person on the street $5000 or something, but I default to cooperate. I’m a software engineer, and a white male, so generally high socioeconomic status, which means that it is easier for me to trust, as I have backup when I do wind up getting burned. I’m not driven to try to make big changes in society, but rather prefer to be the change that I want to see in the world.
I generally find that vulnerability is strength in several ways. First, when you are vulnerable, it is easier to get the help that you need, because you can just ask for it, rather than being circumspect. Does this increase the possibility that someone will knock you down more? Sure. Most of the time though, even if that does happen, there are others that will help you up. Like when I am having a bad brain day, I will tell my coworkers if it is relevant. Often by doing that, I can work on tasks that require less focused concentration and more creativity, or work with others directly. My team does the same, and because of this, we are better able to make up for each other’s shortcomings.
The most important strength in vulnerability is the connections with others that it brings. When you take a risk and share important things about yourself, it puts people at ease with doing the same. This lowers the barriers to empathy, and builds trust, which are really the foundations of any relationship. I’m not particularly charismatic, but I can get along with just about anyone, and I make friends pretty easily.
One of the most useful forms of vulnerability that I have found is related to your 9th footnote. I think of it as blackboxing people. Basically, I try not to infer intent, and instead take people at their word. When I am confused, I ask. Often disagreements start with a poor interpretation of intent. It’s easy to ascribe the behavior of others to malevolent intent, when often they just didn’t properly anticipate the consequences of their actions, and how those affect others. Even when there is some actual antisocial motivation, being understanding and patient can be effective.

An example of this was when my partner saw that one of her neighbors had a lamp of hers in his apartment, after it had gone missing from her porch. Instead of confronting him with anger, she approached it with curiosity, and asked him about it. At first he was very defensive, but after he saw that she wasn’t accusing him of stealing, he wound up giving her lamp back. Was he lying about not meaning to steal it? Likely, but it didn’t matter: she didn’t care about punishing him; she just wanted her lamp back. Because she focused on the result, and not his intent, she defused what could have been a dangerous situation.
Professionally, I would be a lot more open with the work that I do if that were possible. I believe in the power of open source software. I have contributed to several open source projects, and often when I come across a problem with some library that I use, if I am able I will post a fix for it. I wish that I could share my main project more broadly, but unfortunately that’s not just my decision. Still, I actively work to release as much code as we can, so that others can benefit from our collective efforts.
I really do think that if I were in a room with 100 clones of me, that we would generally get along. I could trust them to make a best effort to be true to their word, and care for the group, even when it is hard. I’m not exactly sure what we would do, but I think that we would be able to form ad-hoc cooperatives to take on any task we need. I’m the kind of leader that likes to lead by example, and is more than happy to share power. As long as I’m feeling heard, I don’t have to get my way.
I don’t know how well this generalizes though. While I would get along with 100 clones of me, there really is something to be said for people who approach life from a more competitive perspective. I’m a terrible entrepreneur. Money and power just aren’t interesting to me as anything more than a means to an end. Don’t get me wrong, I gotta pay my bills just like everyone, but money isn’t going to motivate me to work 60 hours a week to do it.
I hope that some of this is helpful to you.
First? Swing low, see how it performs, especially with a long-term project. Something low-stakes. Maybe something like a populated immersive game world. See what comes from there. Is it stable? Is it sane? Does it keep to its original parameters? What are the costs of running the agent/system? Can it solve social alignment problems?
Heck, test out some theories for some of your other answers in there.
This looks more like a spotlight grab than a serious legal challenge. What a waste of time and money for everyone.
My personal philosophy is a blended approach. In general, I’m a deontologist and a Stoic, so I’m not really used to thinking in terms of maximizing much more than kindness. I like the heuristic of “what would Mr. Rogers do?”
The only thing that I have a hope of changing in this world is myself. For all the rest, I can only give my perspective. I’m much more interested in working with people in their current worldview than getting them to change it. I’m sure that whatever arguments I could come up with wouldn’t really be novel nor particularly persuasive.
Life is more peaceful this way.
These ideas and techniques don’t sound particularly original, from what I have experienced with CBT. Maybe I am missing something important, but this just sounds too good to be true. I find it more likely that the patients didn’t return because the magic bullet turned out to be just a chunk of lead, and they didn’t want to throw good money after bad.
Aliefs can’t be changed by just believing harder. They take time and practice to ease and change. Those changes can be scary too. I expect that most people would need support as they go through that process.
Now, that doesn’t mean that the tools that he’s talking about aren’t effective over time. CBT, as I understand, has a good track record, so if you find parts that are helpful to you, stick with it! Just don’t expect such quick success.
They are annoying if you don’t just accept the cookies. I always reject all non-essential. Typically that is a three-click process. It’s annoying when it’s the fifth site in a row.
You might want to check your local community college. They likely offer calculus, at least up to Calculus 2, and maybe differential equations. Not only is a class with an instructor you can interact with useful, but they might also have some sort of math lab. I worked for 3-4 years as a math lab tutor while in college. I was basically one of several tutors whose whole job was to provide supplementary instruction. They may even allow non-students.
A good teacher/tutor will be able to try multiple ways of explaining a concept, tailored to your questions. It is also quite valuable connecting with peers that are at your level who are trying to make sense of the same new concept as you.
I’m sure that there are online communities too. Anyway, if that book isn’t working for you, other books or other forms of learning might work better.
The argument that we live in a simulation doesn’t make any sense. To experiment on sentient beings without their consent is unethical, and I can’t see that changing, even in the far future. I won’t say it won’t happen, but I would be surprised if it is common. If ancestor simulations are rare, then simulated people no longer outnumber biological ones.
Also, why would you want to run such a simulation at such high fidelity as to have intelligent people embedded therein? That sounds like needless complication and expense. Aggregate human behavior can already be modeled fairly well. The Sims with software people seems like way more than you’d need/want.
Also, what are they achieving? Calling them “ancestor simulations” is ridiculous, because even if they could simulate the universe with perfect fidelity, you can’t possibly know the exact quantum state of the universe at some time in the past. Human history is a chaotic system built on the interactions of individuals, the environment, and knowledge. Any little perturbation is going to give you different results, especially over the long run. Given that, at best you’re playing out plausible scenarios. That’s interesting, but not so interesting as to overcome the problems highlighted in the previous two paragraphs.
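The perturbation point can be sketched with a standard toy model of chaos (my choice of illustration, nothing specific to ancestor simulations): the logistic map at r = 4. Two trajectories that start a hair apart end up bearing no relation to each other, so no finite-precision snapshot of the past can reproduce the actual history.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a classic chaotic system at r = 4.

def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # perturbed by one part in ten billion

# Early on the two runs agree almost exactly...
print(abs(a[5] - b[5]))
# ...but after enough iterations the separation grows to roughly the
# full size of the interval, and the histories no longer match.
print(max(abs(a[i] - b[i]) for i in range(40, 51)))
```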
It’s just a poorly constructed argument. I don’t know how much the mundanity of this world counts as an argument against the Simulation Hypothesis in general, but with the argument this poorly defined, it has to get in line.
Mathematics
Category theory because it will help you spot patterns in your membrane interfaces
Graph theory to learn about network effects and simplifications as multiple membranes interact
Type theory if you’ll be writing code
Set theory maybe?
Linear algebra to handle convolutions
I think that modeling guilt-as-signaling is reductive and unhelpful. Your brain is going to think about things that you care about. It’s trying to find ways to better navigate the world. You don’t always/often have control over that. The problem is when that becomes unhelpful and disruptive.
Sometimes in my life, when I have experienced excessive guilt, I’ve been able to resolve it by forgiving my past self, with the understanding that he didn’t know what I know now. Particularly when the harm that I caused is no longer consequential today.
Other times, that hasn’t worked out so well. Sometimes a song will get caught in my head, and run on repeat for months. Sometimes I’ll have little moments of panic, thinking “what am I going to do,” only to think in the next second, “about what?”
Brains are weird. They sometimes do wonderful things, and sometimes are really annoying. You don’t need to punish yourself. You’re already remembering, building your awareness, and trying to do the best that you can. That’s all you can do.
That’s enough.
If these thoughts are intrusive and frequently causing you pain, I would suggest talking with a therapist. They can help you develop mental tools to better manage those feelings when they occur.
After 5 years, I think experience matters more.