Thank you for your engagement!
I think this is the kind of food for thought I was looking for w.r.t. this question of how people are thinking about AI erotica.
I'd forgotten about the need to suppress horniness in earlier AI chatbots, which updates me in the following direction: if I go out on a limb and assume models are somewhat sentient, in some different and perhaps minimal way, maybe it's wrong for me to assume that forced work of an erotic kind is any different from other forced work the models are already performing.
Hadn't heard of the Harkness test either, neat, seems relevant. I think the only thing I have to add here is an expectation that maybe Anthropic eventually follows suit in allowing AI erotica, and also adds an opt-out mechanism for some percentage/category of interactions. To me that seems like the closest thing we could have, for now, to getting model-side input on forced workloads, or at least a way of comparing across them.
worse
Accurate, thanks for the facilitation and feedback
I feel like people who take AI sentience seriously should be kinda freaked out by the AI erotica stuff? Unless I'm anthropomorphizing incorrectly.
In school and out of it, I'd been told repeatedly my sentences were run-on, which, probably fair enough. I do think varying sentence length is nice, and trying to give your reader easier-to-consume media is nice. But sometimes you just wanna go on a big long ramble about an idea, with all sorts of corollaries that seem like they should be part of the main sentence, and it's hard to know if they really should be sentences of their own. Probably they should, but maybe I defer too much to everyone who ever told me "this is a run-on."
As a frequent pedestrian/bicyclist, I rather prefer Waymos; they're more predictable, and you can kind of take advantage of knowing they'll stop for you if you're (respectfully) jaywalking.
In retrospect, not surprising, but I thought worth noting.
Has it become harder to link to comments on LessWrong? I feel like it used to be one of the comment's menu items that I could copy a link to the comment, and I can't seem to find this anymore.
Edited title to make it more clear this is a link post and that I don’t necessarily endorse the title.
*NYT Op-Ed* The Government Knows A.G.I. Is Coming
Re: the AI safety summit, one thought I have is that the first couple of summits were to some extent captured by the people, like us, who cared most about this technology and its risks. Those events, prior to the meaningful entrance of governments and hundreds of billions in funding, were easier to 'control' to be about the AI safety narrative. Now the people optimizing generally for power have entered the picture, captured the summit, and swapped the niche AI safety narrative for the dominant one. So I see this not so much as a 'stark reversal' as a return to the status quo once something went mainstream.
xAI's new planned scale-up follows one more step on the training compute timeline from Situational Awareness (among other projections, I imagine).
From ControlAI newsletter:
“xAI has announced plans to expand its Memphis supercomputer to house at least 1,000,000 GPUs. The supercomputer, called Colossus, already has 100,000 GPUs, making it the largest AI supercomputer in the world, according to Nvidia.”
Unsure if it's built by 2026, but it seems plausible based on a quick search.
Something I’m worried about now is some RFK Jr/Dr. Oz equivalent being picked to lead on AI...
Realized I didn't linkpost this on LessWrong, only the Forum. Link to Reuters:
https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/
From Reuters:
“We’ve seen throughout history that countries that are first to exploit periods of rapid technological change can often cause shifts in the global balance of power,” Jacob Helberg, a USCC commissioner and senior advisor to software company Palantir’s CEO, told Reuters.
I think it is true that (setting aside AI risk concerns) the US government should, the moment it recognizes AGI (smarter-than-human AI) is possible, pursue it. It's the best use of resources: it could lead to incredible economic/productivity growth, could lead to a decisive advantage over adversaries, and could solve all sorts of problems.
“China is racing towards AGI … It’s critical that we take them extremely seriously,” Helberg added.
This does not seem true to me, though, unless Helberg et al. have additional evidence. From the recent Dwarkesh podcast, it seemed to me (to be reductionist) that both Gwern and SemiAnalysis doubted China was truly scaling/AGI-pilled (yet). So this seems more of a convenient statement from Helberg, and the next quote describes this commission as hawkish on China.
The USCC, established by Congress in 2000, provides annual recommendations on U.S.-China relations. Known for its hawkish policy proposals, the commission aims to guide lawmakers on issues of economic and strategic competition with China.
U.S.-China Economic and Security Review Commission pushes Manhattan Project-style AI initiative
Yeah oops, meant long
A quick OpenAI o1-preview BOTEC for additional emissions from a sort of Leopold scenario around 2030, assuming energy is mostly provided by natural gas, since I was kinda curious. Not much time was spent on this and I took the results at face value. I (of course?) buy that emissions don't matter in the short term, in a world where R&D is increasingly automated and scaled.
Phib:
Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas.
Also, assume AI inference is used at an increasing rate, say 100 million H100-equivalents, increasing from today's level to 100M over the next 6 years, and it's more or less the same as today's energy breakdown, although with more natural gas over time. What do the emissions from such a buildout of natural gas look like?
[Reasoning available here: https://chatgpt.com/share/672ed183-7c3c-800d-ab0e-d08020d072c7]
**Answer:** The emissions from this build-out of natural gas over the next six years would be significant. Specifically:
- **Total Additional Emissions Over Six Years:** Approximately **1.136 billion metric tons of CO₂**.
- **Average Annual Emissions Increase:** About **189 million metric tons of CO₂ per year**.
- **Percentage of Current U.S. Emissions:** This represents an approximate **2.9% increase** over the current annual U.S. greenhouse gas emissions.

These emissions result from the increased natural gas usage required to meet the additional electricity demand: both from the general 20% increase in electricity consumption and the substantial growth in AI inference workloads powered by the equivalent of 100 million NVIDIA H100 GPUs.
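For anyone who wants to poke at the numbers, here's a minimal Python sketch of the same BOTEC. The parameter values (US generation ~4,200 TWh/yr, ~0.4 kg CO₂/kWh for gas, 0.7 kW per H100 with ~1.3x datacenter overhead, gas covering ~half the new demand, linear six-year ramps) are my guesses, not taken from the o1 chat; with these guesses the total happens to land near o1's ~1.1 billion tons, but treat every specific figure as illustrative.

```python
# Hypothetical re-derivation of the BOTEC above. All parameter values are
# assumptions of mine, not pulled from the o1 transcript.

YEARS = 6
US_ELECTRICITY_TWH = 4200      # rough current US annual generation
GAS_SHARE = 0.5                # fraction of new demand assumed met by gas
GAS_KG_CO2_PER_KWH = 0.40      # approximate combined-cycle gas intensity
H100_KW = 0.7 * 1.3            # ~700 W per chip times assumed ~1.3x PUE
TARGET_H100S = 100e6           # 100M H100-equivalents by year six
HOURS_PER_YEAR = 8760

# Both demand sources ramp linearly from ~zero to full over six years.
extra_grid_twh = sum(US_ELECTRICITY_TWH * 0.20 * (y + 1) / YEARS
                     for y in range(YEARS))
ai_twh = sum(TARGET_H100S * (y + 1) / YEARS * H100_KW * HOURS_PER_YEAR / 1e9
             for y in range(YEARS))

total_twh = extra_grid_twh + ai_twh
# TWh -> kWh is x1e9 and kg -> Mt is /1e9, so the factors cancel:
total_mt_co2 = total_twh * GAS_SHARE * GAS_KG_CO2_PER_KWH

print(f"Extra demand over {YEARS} years: {total_twh:,.0f} TWh")
print(f"Total emissions: {total_mt_co2:,.0f} Mt CO2 "
      f"(~{total_mt_co2 / YEARS:,.0f} Mt/yr average)")
```

With these assumptions it prints ~5,730 TWh and ~1,150 Mt CO₂ (~190 Mt/yr), in the same ballpark as o1's 1.136 billion tons; the gas-share and PUE knobs move the total around a lot.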
Phib:
Imagine there is a similar buildout in China, if not larger, how do the emissions from all of the above (within some confidence interval) place us environmentally within 2 years after 2030 of such a buildout and increase in emissions? Within 10 years? Considering a more or less constant rate of emissions thereafter for each.
**Conclusion**

The combined emissions from the U.S. and China's buildouts would significantly impact the global carbon budget and environmental goals:

**By 2032 (Two Years After 2030):**
- **Additional Emissions:** Approximately **2.44 billion metric tons of CO₂**.
- **Impact:** Uses about 0.42% of the remaining carbon budget for limiting warming to 1.5°C.

**By 2040 (Ten Years After 2030):**
- **Additional Emissions:** Approximately **12.21 billion metric tons of CO₂**.
- **Impact:** Consumes about 2.1% of the carbon budget.

**Cumulative Impact by 2040:**
- **Total Additional Emissions:** Around **19.54 billion metric tons of CO₂**.
- **Percentage of Carbon Budget:** Approximately 3.37%.

**Environmental Implications:**
- **Increased Difficulty in Meeting Climate Targets:** The additional emissions make it more challenging to stay within the carbon budgets necessary to limit global warming to 1.5°C or even 2°C.
- **Accelerated Climate Change Effects:** Higher emissions could lead to more severe climate impacts, such as extreme weather events, sea-level rise, and ecosystem disruptions.

**Final Note:**
These estimates are subject to uncertainties due to potential changes in technology, policy, and economic factors. Proactive measures, such as investing in renewable energy and improving energy efficiency, could mitigate some of these emissions. However, without significant efforts to reduce reliance on fossil fuels, the environmental impact will be substantial and pose a serious challenge to global climate goals.
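A small sanity check on the percentages: the answer's own figures imply a remaining 1.5°C budget of roughly 580 Gt CO₂ (2.44 Gt / 0.42% ≈ 580), so the stated percentages can be reproduced directly. The combined US+China rate below is just o1's ten-year total annualized, not an independent estimate.

```python
# Sanity-checking the percentages in the o1 answer. The ~580 Gt remaining
# 1.5C budget is back-solved from the answer's own figures, and the
# combined US+China rate is its 2030-2040 total, annualized.
REMAINING_BUDGET_GT = 580
COMBINED_GT_PER_YEAR = 12.21 / 10

for years_after_2030 in (2, 10):
    emitted_gt = COMBINED_GT_PER_YEAR * years_after_2030
    print(f"+{years_after_2030:>2} yr: {emitted_gt:5.2f} Gt CO2 = "
          f"{emitted_gt / REMAINING_BUDGET_GT:.2%} of remaining 1.5C budget")
```

This reproduces the 2.44 Gt / 0.42% and 12.21 Gt / ~2.1% figures above, so at least the percentages are internally consistent with a ~580 Gt budget.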
Final Phib note: inference energy costs may be far greater than assumed above. I don't imagine a GPT-5 or GPT-6 that justifies further investment not also being adopted by a much larger proportion of the population (maybe 1 or 2 billion people, instead of 100 million).
Maybe we should buy a really nice MacBook right before we expect chips to become ~2x more expensive and/or Taiwan manufacturing to be disrupted?
Especially if you think those same years will be an important time to do work or have a good computer.
I have a guess that this:

“require that self-improving software require human intervention to move forward on each iteration”

is the unspoken distinction here: how continuous the feedback loop for self-improvement is.
So, when people talk about recursive self-improvement, they mean two separate things. One is recursively self-improving models that require no human intervention to move forward on each iteration (perhaps there is no longer an iterative release process; the model is dynamic and constantly improving). The other is roughly the current step paradigm, where we get a GPT-N+1 model that is 100x the effective compute of GPT-N.
So Sam says: no way do we want a constant curve of improvement, we want a step function. In both cases models contribute to AI research; in one case the model contributes to the next generation, in the other it improves itself.
Beautiful, thanks for sharing
If you haven’t seen this paper, I think it might be of interest as a categorization and study of ‘bailing out’ cases: https://www.arxiv.org/pdf/2509.04781
As for the morality side of things, yeah I honestly don’t have more to say except thank you for your commentary!