I have a hard time imagining a strong intelligence wanting to be perfectly goal-guarding. Values and goals don’t seem like safe things to lock in unless you have very little epistemic uncertainty in your world model. I certainly don’t wish to lock in my own values and thereby eliminate possible revisions that come from increased experience and maturity.
Archimedes
The size of the “we” is critically important. Communism can occasionally work in a small enough group where everyone knows everyone, but scaling it up to a country requires different group coordination methods to succeed.
This may help with the second one:
https://www.lesswrong.com/posts/k5JEA4yFyDzgffqaL/guess-i-was-wrong-about-aixbio-risks
How about this one?
A couple more (recent) results that may be relevant pieces of evidence for this update:
A multimodal robotic platform for multi-element electrocatalyst discovery
“Here we present Copilot for Real-world Experimental Scientists (CRESt), a platform that integrates large multimodal models (LMMs, incorporating chemical compositions, text embeddings, and microstructural images) with Knowledge-Assisted Bayesian Optimization (KABO) and robotic automation. [...] CRESt explored over 900 catalyst chemistries and 3500 electrochemical tests within 3 months, identifying a state-of-the-art catalyst in the octonary chemical space (Pd–Pt–Cu–Au–Ir–Ce–Nb–Cr) which exhibits a 9.3-fold improvement in cost-specific performance.”
Generative design of novel bacteriophages with genome language models
“We leveraged frontier genome language models, Evo 1 and Evo 2, to generate whole-genome sequences with realistic genetic architectures and desirable host tropism [...] Experimental testing of AI-generated genomes yielded 16 viable phages with substantial evolutionary novelty. [...] This work provides a blueprint for the design of diverse synthetic bacteriophages and, more broadly, lays a foundation for the generative design of useful living systems at the genome scale.”
Would you like a zesty vinaigrette or just a sprinkling of more jargon on that word salad?
I had to reread part 7 of your review to fully understand what you were trying to say. It’s not easy to parse on a quick read, so I’m guessing Zvi, like me on my first pass, didn’t interpret the context and content correctly. On a first skim, it reads as a technical argument for disagreeing with the overall thesis, which makes things pretty confusing.
Which of these is brilliant or funny? They all look nonsensical to me.
I would argue that the statement “Making a future full of flourishing people is not the best, most efficient way to fulfill strange alien purposes” is nearly tautological for sufficiently established contextual values of “strange alien purposes”. What is less clear is whether any of those alien purposes could still be compatible with human flourishing, despite not being maximally efficient. The book and supplementary material don’t argue that they are incompatible, but rather that human flourishing is a narrow, tricky target that we’re super unlikely to hit without much better understanding and control than our current trajectory.
I read the transcript above but haven’t watched the trailer. IMO, there’s definitely more fawning throughout (not just the introduction) than is necessary.
I don’t perceive Ask vs Guess as a dichotomy at all. IMO, like almost every social, psychological, and cultural trait, it exists on a continuum. The number of echoes tracked may correlate with but does not predict Ask vs Guess. Guess cultures tend to be high-context, homogeneous, and collectivist with tight norms, but none of these traits is dichotomous either.
My own culture leans mostly toward Asking, but it’s not a matter of not caring or being unaware of echoes so much as an expectation of straightforward communication. I don’t ask for unreasonable things. I do ask for reasonable things with the understanding that people don’t like saying no, but aren’t obligated to say yes. The more demanding the ask, the more I consider the social implications. There is a cost to asking or being asked, but that’s the expected way to communicate.
I’m insufficiently knowledgeable about deletion base rates to know how astonished to be. Does anyone have an estimate of how many Bayes bits such a prediction is worth?
FWIW, GPT-5T estimates around 10 bits, double that if it’s de novo (absent in both parents).
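For calibration, the rough conversion I have in mind (my own back-of-the-envelope, not anything from the model): a confirmed specific prediction whose base rate is p is worth roughly log2(1/p) bits, so

```latex
10 \text{ bits} \;\approx\; -\log_2 p
\;\;\Longrightarrow\;\; p \approx 2^{-10} \approx \tfrac{1}{1024} \approx 0.1\%,
\qquad
20 \text{ bits} \;\Longrightarrow\; p \approx 2^{-20} \approx \tfrac{1}{10^{6}}.
```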
Can you provide some examples that you think are well-suited to RLaaS? Getting high-quality data to train on is a highly nontrivial task and one of the bottlenecks for general models too.
I can imagine a consulting service that helps companies turn their proprietary data into useful training data, which they then use to train a niche model. I guess you could call that RLaaS, though it’s likely to be more of a distilling and fine-tuning of a general model.
This is/was me. I finally realized I couldn’t actually start with straight running, at least not for more than a minute or two at a time between walking. Things got better when I paid more attention to heart rate zones. Actual running will put me in the red zone way too fast, so I do things like really fast walking or treadmill incline to hit more moderate heart rate zones.
Thanks. I used to tumble, so my calves and Achilles are pretty robust. Having an extra hinge to spring from improves comfort.
Link: 3Blue1Brown: The determinant | Chapter 6, Essence of linear algebra
Also Linear Algebra Done Right by Sheldon Axler
Idea: The determinant of a matrix tells you the (signed) volume of the unit cube after applying the matrix transformation.
Creator: Grant Sanderson (3Blue1Brown), Sheldon Axler
Reason: This geometric interpretation makes properties that seem arbitrary in formula-based definitions suddenly obvious. For example (numerically sanity-checked in the sketch after this list):
- Why det(AB) = det(A) det(B)
- Why det(A^T) = det(A)
- Why det(A^-1) = 1/det(A)
- Why det(kA) = k^n det(A)
- Why det(A) = 0 means a matrix is not invertible
- Why rank < n means det = 0
- Why swapping matrix rows multiplies the determinant by −1
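Here’s a quick numerical check of those identities (my own NumPy illustration, not from the video or the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))  # random invertible matrix (almost surely)
B = rng.normal(size=(n, n))

# The determinant is the signed volume of the image of the unit cube under A,
# so volumes multiply under composition and the identities below fall out.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))

k = 2.5  # scaling every axis by k scales volume by k^n
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))

# A rank-deficient matrix squashes the cube flat, so its determinant is 0.
C = A.copy()
C[2] = C[0] + C[1]  # make the rows linearly dependent (rank < n)
assert np.isclose(np.linalg.det(C), 0.0)

# Swapping two rows flips orientation, multiplying the determinant by -1.
P = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(P), -np.linalg.det(A))
```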
I prefer to toe-strike to reduce knee and hip impact. Is this a bad idea?
> there might be a point where Earth is habitable for humans, but the robots have consumed all the energy and material resources, and are thus unable to run
This seems backwards to me. If AI has run out of solar power and nuclear fusion power to sustain itself, Earth isn’t likely to be habitable for humans.
This. Many fundamental disagreements have more to do with where to land on the Pareto frontier of trade-offs between different moral values than with what is true. Freedom vs security, justice vs mercy, independence vs community, etc. There aren’t necessarily any “true” answers to these questions.
Obviously, this isn’t always the case and plenty of positions are not on the Pareto frontier, but it’s more complicated than a scale from right to wrong.
For an introduction to young audiences, I think it’s better to get the point across in less technical terms before trying to formalize it. The OP jumps to epsilon pretty quickly. I would try to get to a description like “A sequence converges to a limit L if its terms are ‘eventually’ arbitrarily close to L. That is, no matter how small a (nonzero) tolerance you pick, there is a point in the sequence where all of the remaining terms are within that tolerance.” Then you can formalize the tolerance, epsilon, and the point in the sequence, k, that depends on epsilon.
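For instance, the formalization could then end up looking like the standard definition (assuming a real-valued sequence):

```latex
\lim_{n \to \infty} a_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0,\;\; \exists k \in \mathbb{N} \;\text{ such that }\; n \ge k \implies |a_n - L| < \varepsilon.
```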
Note that this doesn’t depend on the sequence being indexed by integers or the limit being a real number. More generally, given a directed set (S, ≤), a topological space X, and a function f: S → X, a point x in X is the limit of f if for any neighborhood U of x, there exists t in S such that s ≥ t implies f(s) ∈ U. That is, for every neighborhood U of x, f is “eventually” in U.
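In symbols, restating that paragraph (the standard definition of the limit of a net):

```latex
\lim_{s \in S} f(s) = x
\quad\Longleftrightarrow\quad
\text{for every neighborhood } U \text{ of } x,\;\; \exists\, t \in S \;\text{ such that }\; s \ge t \implies f(s) \in U.
```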