I’m currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
ozziegooen
I found this analysis refreshing and would like to see more on the GPU depreciation costs.
If better GPUs are developed, these will go down in value quickly, perhaps by 25% to 50% per year. This seems like a really tough expense and supply chain to manage.
I’d expect most of the other infrastructure costs to depreciate much more slowly, as you mention.
I’m sure there are tons of things to optimize. Overall happy to see these events, just thinking of more things to improve :)
I’m unsure about shirts, but I like the idea of more experimentation. It might be awkward to wear the same shirt for 2-3 consecutive days, and some people will want more professional options.
I liked the pins this year (there were some for “pDoom”). I could also see having hats, lanyards, or bracelets.
Information-Dense Conference Badges
It’s a possibility, but this seems to remove a ton of information to me. The Ghibli faces all look quite similar to me. I’d be very surprised if they could be de-anonymized in cases like these (people who aren’t famous) in the next 3 years, if ever.
If you’re particularly paranoid, I presume we could have a system do a few passes.
Kind of off topic, but this leads me to wonder: why do so many websites bury the lede about the services they actually provide, like this example?
I heard from a sales person that many potential customers turn away the moment they hear a list of specific words, thinking “it’s not for me”. So they try to keep it as vague as possible, learn more about the customer, then phrase things to make it seem like it’s exactly for them.
(I’m not saying I like this, just that this is what I was told)
Personally, I’m fairly committed to [talking a lot]. But I do find it incredibly difficult to do at parties. I’ve been trying to figure out why, but the success rate for me plus [talking a lot] at parties seems much lower than I would have hoped.
Quickly:
1. I imagine that strong agents should have certain responsibilities to inform certain authorities. These responsibilities should ideally be thoroughly discussed and regulated. For example, see what therapists and lawyers are asked to do.
2. “doesn’t attempt to use command-line tools” → This seems like a major mistake to me. Right now, an agent running on a person’s computer will attempt to use that computer to do several things in order to whistleblow. This seems inefficient, at the very least. The obvious strategy is just to send one overview message to some background service (for example, a support service for a specific government department), which would decide what to do with it from there (see the sketch after this list).
3. I imagine a lot of the problem now is just that these systems are pretty noisy at doing this. I’d expect a lot of false positives and negatives.
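To make point 2 concrete, here’s a minimal sketch of the “one overview message” pattern I have in mind. The endpoint, payload fields, and reporting service are all hypothetical, not any real API:

```python
import json
import urllib.request

# Hypothetical endpoint for a designated reporting service (illustrative only).
REPORT_ENDPOINT = "https://reports.example.gov/agent-overviews"

def send_overview_report(summary: str, evidence_refs: list[str], confidence: float) -> None:
    """Send a single summarized report instead of taking many local actions.

    The agent doesn't run command-line tools on the user's machine; it hands
    off one structured overview and lets the receiving service decide what,
    if anything, to do next.
    """
    payload = {
        "summary": summary,              # short plain-language description of the concern
        "evidence_refs": evidence_refs,  # pointers to the relevant conversation/files
        "confidence": confidence,        # the agent's own estimate that this is a true positive
    }
    request = urllib.request.Request(
        REPORT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # the service, not the agent, handles any follow-up
```

The point of this design is that all judgment about escalation lives with the receiving service, which would also make the false positives and negatives from point 3 easier to audit in one place.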
Part of me wants to create some automated process for this. Then part of me thinks it would be pretty great if someone could offer a free service (even paid could be fine) that has one person do this hunting work. I presume some of it can be delegated, though I realize the work probably requires more context than it first seems.
CoT monitoring seems like a great control method when available, but I think it’s reasonably likely that it won’t work on the AIs that we’d want to control, because those models will have access to some kind of “neuralese” that allows them to reason in ways we can’t observe.
Small point, but I think that “neuralese” is likely to be somewhat interpretable, still.
1. We might advance at regular LLM interpretability, in which case those lessons might apply.
2. We might pressure LLM systems to only use CoT neuralese that we can inspect.
There’s also a question of how much future LLM agents will rely on CoT vs. more regular formats for storage. For example, I believe that a lot of agents now are saving information in English into knowledge bases of different kinds. It’s far easier for software people working with complex LLM workflows to make sure a lot of the intermediate formats are in languages they can understand.
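As a small illustration of what I mean by human-readable intermediate formats, here’s a minimal sketch; the file layout and field names are my own invention, not any particular agent framework:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

KNOWLEDGE_BASE = Path("agent_knowledge_base")

def save_note(topic: str, note: str) -> Path:
    """Store an intermediate conclusion as plain English rather than opaque vectors.

    Because the note is ordinary text, developers (and monitors) can read,
    grep, and audit the agent's intermediate state directly.
    """
    KNOWLEDGE_BASE.mkdir(exist_ok=True)
    entry = {
        "topic": topic,
        "note": note,  # English summary the agent will rely on in later steps
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    path = KNOWLEDGE_BASE / f"{topic}.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return path

# Example: the agent records a takeaway it may reuse later.
save_note("vendor-research", "Vendor A's API rate limits are too low for our batch jobs.")
```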
All that said, personally, I’m excited for a multi-layered approach, especially at this point when it seems fairly early.
There are a few questions here.
1. Do Jaime’s writings state that he cares about x-risk or not?
→ I think he fairly clearly states that he cares.
2. Does all the evidence, when put together, imply that actually, Jaime doesn’t care about x-risk?
→ This is a much more speculative question. We have to assess how honest he is in his writing. I’d bet money that Jaime at least believes that he cares and is taking corresponding actions. This of course doesn’t absolve him of full responsibility—there are many people who believe they do things for good reasons, but are actually driven by selfish reasons. But now we’re getting into a particularly speculative area.
“I also think it should be our dominant prior that someone is not motivated by reducing x-risk unless they directly claim they do.” → Again, to me, I regard him as basically claiming that he does care. I’d bet money that if we ask him to clarify, he’d claim that he cares. (Happy to bet on this, if that would help)
At the same time, I doubt that this is your actual crux. I’d expect that even if he claimed (more precisely) to care, you’d still be skeptical of some aspect of this.
---
Personally, I have both positive and skeptical feelings about Epoch, as I do other evals orgs. I think they’re doing some good work, but I really wish they’d lean a lot more on [clearly useful for x-risk] work. If I had a lot of money to donate, I could picture donating some to Epoch, but only if I could get a lot of assurances on which projects it would go to.
But while I have reservations about the org, I think some of the specific attacks against them (and defenses of them) are not accurate.
I did a bit of digging, because these quotes seemed narrow to me. Here’s the original tweet from that thread.
Full state dump of my AI risk related beliefs:
- I currently think that we will see ~full automation of society by Median 2045, with already very significant benefits by 2030
- I am not very concerned about violent AI takeover. I am concerned about concentration of power and gradual disempowerment. I put the probability that ai ends up being net bad for humans at 15%.
- I support treating ai as a general purpose tech and distributed development. I oppose stuff like export controls and treating AI like military tech. My sense is that AI goes better in worlds where we gradually adopt it and it’s seen as a beneficial general purpose tech, rather than a key strategic tech only controlled by a small group of people
- I think alignment is unlikely to happen in a robust way, though companies could have a lot of sway on AI culture in the short term.
- on net I support faster development of AI, so we can benefit earlier from it. It’s a hard problem, and I respect people trying their hardest to make it go well.
Then right after:
All said, this specific chain doesn’t give us a huge amount of information. It totals something like 10-20 sentences.
> He says it so plainly that it seems as straightforwardly of a rejection of AI x-risk concerns that I’ve heard:
This seems like a major oversimplification to me. He says “I am concerned about concentration of power and gradual disempowerment. I put the probability that ai ends up being net bad for humans at 15%.” There is a cluster in the rationalist/EA community that believes that “gradual disempowerment” is an x-risk. Perhaps you wouldn’t define “concentration of power and gradual disempowerment” as technically an x-risk, but if so, that seems a bit like a technicality to me. It can clearly be a very major deal.
It sounds to me like Jaime is very concerned about some aspects of AI risk but not others.
In the quote you reference, he clearly says, “Not that it should be my place to unilaterally make such a decision anyway.” I hear him saying, “I disagree with the x-risk community about the issue of slowing down AI, specifically. However, I don’t think this disagreement is a big concern, given that I also feel like it’s not right for me to personally push for AI to be sped up, and thus I won’t do it.”
however there’s a cynical part of me that sounds like some combo of @ozziegooen and Robin Hanson which notes we have methods now (like significantly increased surveillance and auditing) which we could use for greater trust and which we don’t employ.
Quick note: I think Robin Hanson is more on the side of “we’re not doing this because we don’t actually care”. I’m more on the side of, “The technology and infrastructure just isn’t good enough.”
What I mean by that is that I think it’s possible to get many of the benefits of surveillance with minimal costs, using a combination of Structured Transparency and better institutions. This would be a software+governance challenge.
Happy to see thinking on this.
I like the idea of getting a lot of small examples of clever uses of LLMs in the wild, especially by particularly clever/experimental people.
I recently made this post to try to gather some of the techniques common around this community.
One issue I have, though, is that I’m really unsure what it looks like to promote neat ideas like these, outside of writing long papers or making semi-viral or at least [loved by a narrow community] projects.
The most obvious way is via X/Twitter. But this often requires building an X audience, which few people are good at. Occasionally particularly neat images/clips by new authors go viral, but it’s tough.
I’d also flag:
- It’s getting cheaper to make web applications.
- I think EA has seen more success in making blog posts and web apps than with things like [presenting neat ideas in videos/tweets].
- Often, [simple custom applications] are pretty crucial for actually testing out an idea. You can generate wireframes, but this only tells you a very small amount.
I guess what I’m getting at is that I think [web applications] are likely a major part of the solution—but that we should favor experimenting with many small ones, rather than going all-in on 2-4 ideas or so.
I’m curious whether you know of any examples in history where humanity purposefully and successfully steered towards a significantly less competitive [economically, militarily,...] technology that was nonetheless safer.
This sounds a lot like much of the history of environmentalism and safety regulation? As in, there’s a long history of [corporations selling X, using a net-harmful technology], then governments regulating. Often this happens after the technology is sold, but sometimes before it becomes widely popular around the world.
I’d expect that there’s similarly a lot of history of early product areas where some people realize that [popular trajectory X] will likely be bad and get regulated away, so they help further [safer version Y].
Going back to the previous quote: “steer the paradigm away from AI agents + modern generative AI paradigm to something else which is safer”
I agree it’s tough, but would expect some startups to exist in this space. Arguably there are already several claiming to be focusing on “Safe” AI. I’m not sure if people here would consider this technically part of the “modern generative AI paradigm” or not, but I’d imagine these groups would be taking some different avenues, using clear technical innovations.
There are worlds where the dangerous forms have disadvantages later on—for example, they are harder to control/oversee, or they get regulated. In those worlds, I’d expect there should/could be some efforts waiting to take advantage of that situation.
Nuanced Models for the Influence of Information
I’m sure they thought about it.
I think this is dramatically tougher than a lot of people think. I wrote more about it here.
https://www.facebook.com/ozzie.gooen/posts/pfbid0377Ga4W8eK89aPXDkEndGtKTgfR34QXxxNCtwvdPsMifSZBY8abLmhfybtMUkLd8Tl
I have a Quest 3. The setup is a fair bit better than the Quest 2, but it still has a long way to go.
I use it in waves. Recently I haven’t used it much, maybe a few hours a month or so.
Looking forward to future headsets. Right now things are progressing fairly slowly, but I’m hopeful there might be some large market moment, followed by a lot more success. Though at this point it seems possible that could happen post-TAI, so maybe it’s a bit of a lost cause.
All that said, there is a growing niche community of people working/living in VR, so it seems like it’s a good fit for some people.
Obvious point—I think a lot of this comes from the financial incentives. The more “out of the box” you go, the less sure you can be that there will be funding for your work.
Some of those that do this will be rewarded, but I suspect many won’t be.
As such, I think that funders can help more to encourage this sort of thing, if they want to.
“The missing step in the process you describe is figuring out when the research did produce surprising insights, which might be a class of novel problems (unless a general formulaic approach works and someone scaffolds that in).”
→ I feel optimistic about the ability to use prompts to get us fairly far with this. More powerful/agentic systems will help a lot to actually execute those prompts at scale, but the core technical challenge seems like it could be fairly straightforward. I’ve been experimenting with LLMs to try to detect what information they could come up with that would later surprise them. I think this is fairly measurable.
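As a rough sketch of the kind of measurement I mean (the prompts, model name, and 0-10 scale are placeholders I made up, assuming an OpenAI-style chat API):

```python
# Sketch of measuring "self-surprise": ask a model to forecast an outcome now,
# then later ask it how surprised it is by what actually happened.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def forecast(question: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Give your best-guess answer to: {question}"}],
    )
    return response.choices[0].message.content

def surprise_score(question: str, prior_answer: str, outcome: str) -> str:
    # 0 = exactly what it expected, 10 = completely unexpected.
    prompt = (
        f"Question: {question}\n"
        f"Your earlier best guess: {prior_answer}\n"
        f"What actually happened: {outcome}\n"
        "On a 0-10 scale, how surprising is the outcome relative to your guess? "
        "Reply with just the number."
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage: record forecasts now, score surprise once outcomes arrive, and track
# average surprise over time as a rough measure of information the model
# didn't anticipate.
```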
I’ve been working on an app for some parts of this. I plan to announce it more formally soon, but the basics might be simple enough. Eager to get takes. Happy to add any workflows if people have requests. (You can also play with adding “custom workflows”, or just download the code and edit it.)
Happy to discuss if that could be interesting.
https://www.roastmypost.org