An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.
J Thomas Moros
I’m confused as to why you are confused.
You say “FICO scores do not seem to be made with a special process. How can they be especially good data?” The score was presumably designed for the purpose of assessing risk. Per Wikipedia, FICO scores were introduced for exactly that purpose, based on credit reports. I highly doubt that the Fair Isaac company did that with “no special process.” Most likely, a data analysis was done in which many possibly relevant data points were pulled from credit reports and analyzed to see which were predictive, and I imagine the weighting between the factors was optimized for that purpose.
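To make that concrete, here is a minimal sketch of the kind of analysis I imagine: fit weights for a few credit-report factors against observed default outcomes. The feature names, numbers, and method here are entirely my own illustration, not anything I know about Fair Isaac’s actual process.

```python
# Hypothetical illustration only: learn weights for a few credit-report
# factors from default outcomes. Features and data are invented; this is
# not Fair Isaac's actual methodology.
import math
import random

random.seed(0)

def simulate_borrower():
    # Invented features (scaled 0..1): payment history, utilization, account age.
    x = [random.random(), random.random(), random.random()]
    # Invented "true" relationship: good history and older accounts lower
    # default risk, high utilization raises it.
    logit = -1.0 - 2.0 * x[0] + 1.5 * x[1] - 1.0 * x[2]
    defaulted = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return x, defaulted

data = [simulate_borrower() for _ in range(5000)]

# Logistic regression by plain gradient descent: recover which factors
# predict default and how heavily each should be weighted.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(200):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        for i in range(3):
            gw[i] += (p - y) * x[i]
        gb += p - y
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

print("learned factor weights:", [round(wi, 2) for wi in w])
```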
Your explanation of how the various factors are relevant to creditworthiness mostly makes sense. The best predictor of future behavior is past behavior. I do think that for the majority of people a FICO score is getting at a mix of trustworthiness and stability. If someone has a long track record of paying their debts and nothing has changed in their financial situation, then there is a high probability they will continue to pay them.
Keep in mind also that credit cards and many loans ask for your income, with legal consequences if you lie. Combine a FICO score with sufficient income and you have eliminated a large risk factor that may not yet show up on a credit report (e.g., loss of employment).
The only factor that has always seemed odd to me is credit mix. Indeed, Wikipedia’s “Criticism of credit scoring systems in the United States” page has a “Poor predictor of risk” section, which is mostly about how a mix of credit can be misleading. For example, I have no installment loans because I don’t own a house and my car is paid off. Does that make me a worse credit risk? No; in fact, it makes me a better risk because I have lower expenses and enough savings that I always buy cars with cash. The best theory I can come up with is that not having a mix of credit types is predictive of being low-income, since low-income people are less likely to own a home and more likely to own used cars they don’t have a loan on, or not to own a car at all.
This should be promoted to the front page.
I don’t understand what you mean.
The human brain takes time to process sensory signals, so the qualia experienced are slightly delayed from when the sensory input that gave rise to them entered the brain. In that sense, experience happens over time. But at any moment, there are only the qualia being experienced. How could it be otherwise? If you say that you then recall that the qualia you had just before were different, well, that is a different instant in time, in which you are experiencing the recall of a memory from your brain.
I share your confusion about the first two, but I believe that the question about the subjective passage of time can be dissolved, unless I am misunderstanding your question.
As you mention, the laws of physics are reversible, but entropy gives an arrow of time. That arrow of time is why your brain encodes memories of the past instead of the future. You are not a subjective observer outside of time. You are always experiencing the current moment from the perspective of your brain at that moment. Thus, you always have the experience of being in a brain that remembers the past and remembers the immediately prior moment as the immediate past. So your experience will always be that your current experience has moved forward in time from the past. As a thought experiment, imagine that the 4D block universe exists and that there is some process that evaluates the slices at which subjective experience happens. Imagine that instead of moving forward in time, that process moves backward in time, from the “end” of time to the Big Bang. What would your subjective experience be in that case? It would still be that you are traveling forward in time, because as you experienced each moment, you would do so with the memory of the past moment, not of the moment the process evaluated just before it (which is the temporally future moment). There is therefore no question here except what gives rise to qualia, and perhaps whether the block-universe view is correct or the universe is actually an evolving system for which only some “now” exists at each point in space.
(In reality, I don’t think a block universe makes sense. While I don’t understand what gives rise to qualia, all evidence says that it is tied to the execution of the “thinking” algorithm of my brain. A block universe would have no “execution” and so I think would have no qualia unless qualia exist eternally at all places in the block universe where there is a conscious being.)
The author, @Max Harms, is working on a high-quality AI-read audiobook version. He had hoped to release it at the same time as the book, but is currently planning to release it in early 2026. There is a prediction market, “When will the Red Heart audiobook come out?”, and there is a preview on YouTube.
There are existing crypto algos for “coin flipping”. You should be using one of those. See https://en.wikipedia.org/wiki/Commitment_scheme#Coin_flipping
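For concreteness, here is a toy sketch of the commit-reveal pattern those coin-flipping schemes build on (a hash-based commitment; the party names are illustrative, and any real use should rely on a vetted protocol or library rather than this toy):

```python
# Toy commit-reveal coin flip between two parties (illustrative only).
import hashlib
import secrets

# 1. Alice picks her bit and a random nonce, and publishes only the hash
#    (the commitment), which hides her bit but binds her to it.
alice_bit = secrets.randbits(1)
alice_nonce = secrets.token_bytes(32)
commitment = hashlib.sha256(alice_nonce + bytes([alice_bit])).hexdigest()

# 2. Bob, seeing only the commitment, announces his bit in the clear.
bob_bit = secrets.randbits(1)

# 3. Alice reveals her bit and nonce; Bob verifies they match the commitment.
assert hashlib.sha256(alice_nonce + bytes([alice_bit])).hexdigest() == commitment

# 4. The flip is the XOR: neither party could bias it after seeing the other's choice.
result = alice_bit ^ bob_bit
print("coin flip:", "heads" if result else "tails")
```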
A browser doesn’t count as a significant business integration?
I worry that this paper and article seem to be safety washing. They imply that existing safety techniques for LLMs are appropriate for more powerful systems. They apply a safety mindset from other domains to AI in an inappropriate way. I do think AI safety can learn from other fields, but those must be fields with an intelligent adversarial agent. Studying whether failure modes are correlated doesn’t matter when you have an intelligent adversary who can make failure modes that would not normally be correlated happen at the same time. If one is thinking only about current systems, then perhaps such an analysis would be helpful. But both the paper and article fail to call that out.
Most of this is interesting but unsurprising. Having reflected on it for a bit, I do find one thing surprising. It is very strange that Ilya doesn’t know who is paying his lawyers. Really, he assumes that it is OpenAI and is apparently fine with that. I’m surprised he isn’t concerned about a conflict of interest. I assume he has enough money that he could hire his own lawyers if he wanted. I would expect him to hire at least one lawyer himself to ensure that his own interests are represented and to check the work of the other lawyers.
I signed the statement. My concern, which you don’t address, is that I think the statement should call for a prohibition on AGI, not just ASI. I don’t think there is any meaningful sense in which we can claim that particular developments are likely to lead to AGI, but definitely won’t lead to ASI. History has shown that anytime narrow AI reaches human levels, it is already superhuman. Indeed, if one imagines that tomorrow one had a true AGI (I won’t define AGI here, but imagine an uploaded human that never needs to sleep or rest), then all one would need to do to make ASI is to add more hardware to accelerate thinking or add parallel copies.
As a professional software developer with 20+ years of experience who has repeatedly tried to use AI coding assistants and gotten consistently poor results, I am skeptical even of your statement that, “The average over all of Anthropic for lines of merged code written by AI is much less than 90%, more like 50%.” 50% seems way too high. Or, if it is accurate, then most of that code is extraneous changes that aren’t part of the core code that executes. For example, I’ve seen what I believe to be AI-generated code where 3/4 of the API endpoints are unused. They exist only because the AI assumed that the REST endpoint for each entity ought to have all the actions, even though that didn’t make sense in this case.
I think there is a natural tendency for AI enthusiasts to overestimate the amount of useful code they are getting from AI. If we were going to make any statements about how much code was generated by AI at any organization, I think we would need much better data than I have ever seen.
Have you considered that the policies are working correctly for most people with a “normie” communication style? I agree that they should be clearer. However, when I read your description of what they are saying, I think the rule makes sense. It isn’t that everything must be entirely fragrance-free. The intended rule seems to be nothing strongly scented. For example, I’ve met women who use scented shampoo, but you don’t notice it on them even when you are close to them. I’ve also met women whose shampoo you can smell from 3 feet away. It seems they are basically asking that people use reasonable judgment. That may not be sufficient for extremely sensitive people, but it will address a lot of the problem. By having it in their code of conduct, they can ask people to leave if it is a problem.
Your Beantown Stomp statement seems to be the proper way to communicate the actually intended policy.
If your goal is to discourage violence, I feel you’ve missed a number of key considerations that you would need to address. Specifically, I find myself confused by several things:
Why the assumption that the target would be AI researchers instead of AI infrastructure (e.g., training clusters and chip fabs)?
You claim that the state would have the force advantage, but independent terrorist actors often have advantages against large state actors. Any violence wouldn’t be some kind of organized militia vs. the state. It would be a loose coalition of “terrorist” cells seeking to disrupt and slow AI development.
You assume a lot of association with mainstream AI safety and reputational harm. However, any violence would likely have to come from a separate faction that would seek to distinguish itself from the mainstream. For example, there are a few ecoterrorist and “deep green” movements, but they are clearly distinct from mainstream climate activism, and I don’t think they have had much impact on mainstream climate activism.
So, by all means, we at LessWrong condemn any attempt to use violence to solve the race to ASI that kills everyone. But, if you were attempting to prevent some group from splintering off to seek a violent means of resistance, I think you’ve somewhat missed the mark.
I did watch this interview, but not his other videos. It does start with the intro from that trailer. However, I did not see it as reflecting a personality cult. Rather, it seemed to me that it was trying to establish Eliezer’s credibility and authority to speak on the subject for people who don’t know who he is. You have to remember that most people aren’t tracking the politics of the rationality community. They are very used to an introduction that hypes up the guest. Yes, it may have been a bit more hyperbolic than I would like, but given how much podcast/interview guests are hyped on other channels, and the extent to which Eliezer really is an expert on the subject, much more so than many book authors who get interviewed, it was necessary to lay it on thick.
I’ve been a long-time user of book darts and highly recommend them.
The one other downside is that if they catch on something while on the page and rotate, the clip can cut into the page edge. This can generally be avoided by pushing them all the way onto the edge of the page and taking care not to let anything drag along the page edges of a book with darts in it.
Do co-ops scale? I would guess they may not. If many firms are larger than the size that co-ops effectively scale to, then we would see more traditional firms than co-ops.
This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control by the end of 2027.
I think this isn’t a strong enough statement. Indeed, the median narrative is longer. However, even the modal narrative ought to include at least one unspecified obstacle occurring. In a three-year plan, the most frequent scenarios have something go wrong.
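To illustrate with made-up numbers: suppose each month of a three-year plan carries an independent 5% chance of a significant obstacle (the 5% figure is purely an assumption for illustration). Then zero-obstacle scenarios are not the most common ones:

```python
# Illustrative only: the 5%-per-month obstacle rate is an assumption, not data.
from math import comb

p, n = 0.05, 36  # assumed monthly obstacle probability, months in three years
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
print("P(zero obstacles):", round(pmf[0], 3))                     # ~0.158
print("most likely number of obstacles:", pmf.index(max(pmf)))    # 1
```

Under those assumptions, only about 16% of scenarios are obstacle-free, so the modal scenario includes at least one thing going wrong.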
I think it is interesting that you think it is not very neglected. I assume you think that because languages like Rust, Kotlin, Go, Swift, and Zig have received various levels of funding. Also, academic funding supports languages like Haskell, Scala, Lean, etc.
I suppose that is better than nothing. However, from my perspective, that is mostly funding the wrong things and even funding some of those languages inadequately. As I mentioned, Rust and Go show signs of being pushed to market too soon in ways that will be permanently harmful to the developers using them. Most of those languages aren’t improving programming languages in any meaningful way. They are making very minor changes at the margin. Of the ones I listed, I would say only Rust and Scala have made any real advances in mainstream languages, and Scala is still mired in many problems because of the JVM ecosystem. On the other hand, the Go language has been heavily funded and pushed by Google and has set programming languages back significantly.
I would say there is almost no path to funding a language that is both meant for widespread general use and pushes languages forward. Many of the languages that have received funding did so by luck and were funded too late in the process and underfunded. There is no funding that actually seeks out good early-stage languages and funds them.
Also, many of those languages got funding by luck. Luck is not a funding plan.
Thanks for the summary of various models of how to figure out what to work on. While reading it, I couldn’t help but focus on my frustration about the “getting paid for it” part. Personally, I want to create a new programming language. I think we are still in the dark age of computer programming and that programming languages suck. I can’t make a perfect language, but I can take a solid step in the right direction. The world could sure use a better programming language if you ask me. I’m passionate about this project. I’m a skilled software developer with a longer career than all the young guns I see. I think I’ve proved with my work so far that I am a top-tier language designer capable of writing a compiler and standard library. But… this is almost the definition of something you can’t and won’t be paid for. At least not until you’ve already published a successful language. That fact greatly contributes to why we can’t have better programming languages. No one can afford to let them incubate as long as needed. Because of limited resources, everyone has to push to release as fast as possible. Unlike other software, languages have very strict backward-compatibility requirements, so improving them is a challenge, and releasing early inevitably leads to real issues as a language grows over time; its designers can never fix previous mistakes or make the design changes needed to support new features.
I’m conflicted about this post because I agree with your core message and with the sense of urgency that is otherwise lacking here. However, I don’t think this is the right way to craft a message for this group. I highly doubt a bunch of biblical references are the way to reach a LessWrong crowd, even if they are very basic ones.