Intelligence as a Platform

In this post I review the platform/product distinction and note how GPT can be modeled as a platform upon which further products will be built. I argue that prosaic alignment is best viewed as hypnotism: capabilities training that instantiates products on that platform. I explain why this slightly pushes out my X-Risk timelines while increasing my overall X-Risk estimate.

Products vs Platforms

In Stevey’s Google Platforms Rant (2011), Steve Yegge lays out the differences between products and platforms.

The other big realization [Jeff Bezos] had was that he can’t always build the right thing. I think Larry Tesler [senior UX from Apple] might have struck some kind of chord in Bezos when he said his mom couldn’t use the goddamn website. It’s not even super clear whose mom he was talking about, and doesn’t really matter, because nobody’s mom can use the goddamn website. In fact I myself find the website disturbingly daunting, and I worked there for over half a decade. I’ve just learned to kinda defocus my eyes and concentrate on the million or so pixels near the center of the page above the fold.

I’m not really sure how Bezos came to this realization—the insight that he can’t build one product and have it be right for everyone. But it doesn’t matter, because he gets it. There’s actually a formal name for this phenomenon. It’s called Accessibility, and it’s the most important thing in the computing world.

[...]

Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. But that’s not why they are successful. Facebook is successful because they built an entire constellation of products by allowing other people to do the work. So Facebook is different for everyone. Some people spend all their time on Mafia Wars. Some spend all their time on Farmville. There are hundreds or maybe thousands of different high-quality time sinks available, so there’s something there for everyone.

Our Google+ team took a look at the aftermarket and said: “Gosh, it looks like we need some games. Let’s go contract someone to, um, write some games for us.” Do you begin to see how incredibly wrong that thinking is now? The problem is that we are trying to predict what people want and deliver it for them.

[...]

Ironically enough, Wave was a great platform, may they rest in peace. But making something a platform is not going to make you an instant success. A platform needs a killer app. Facebook—that is, the stock service they offer with walls and friends and such—is the killer app for the Facebook Platform. And it is a very serious mistake to conclude that the Facebook App could have been anywhere near as successful without the Facebook Platform.

This is probably one of the most insightful ontological distinctions I have discovered in my decade of software development. Understood through this lens, Twitter is a platform where @realDonaldTrump was a product, Instagram is a platform where influencers are products, and prediction markets will be a platform where casinos and sports books are products. To become a billionaire, the surest route is to create 2000 millionaires and collect half the profits.

GPT as a Platform

NB: following convention, I use “GPT” generically to refer to this class of models.

In Simulators, janus makes a compelling ontological distinction between GPT and the commonly discussed types of AGI (agents, oracles, tools, and mimicry). They note that while GPT-N might model, instantiate, and interrogate agents and oracles (of varying quality), GPT itself does not have a utility function of the kind those frameworks assume. In particular, it does not appear to develop instrumental goals.

Prosaic alignment is the phrase commonly used to describe work along the lines of “try to stop GPT from saying the N word so much”.

A GPT misalignment graph from Anthropic, which groups the “harmful” behaviors Mechanical Turkers were able to elicit from GPT.

Source: https://twitter.com/AnthropicAI/status/1562828022675226624/photo/1

Prosaic alignment is often capabilities research, e.g. “let’s think step by step” prompting. “Prompt Engineers” are likely to become common. Products are emerging that take an LLM trained over vast amounts of generic language and then fine-tune the final stage on specific corpora such as code. There is an app store of products that take GPT-3 and either specialize that final stage or simply condition GPT-3’s output. Some of these apps, like AI Dungeon, are reasonably high quality. I expect that offering a near-SOTA LLM fine-tuned on a user’s proprietary corpus will become a common service offered by platforms like GCP.
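To make the conditioning route concrete, here is a minimal sketch of instantiating a “product” on the GPT platform purely by prepending a product-defining prompt. It assumes the OpenAI Python client roughly as it existed in 2022 (a Completion.create call against a davinci-family model); the persona text, function name, and model choice are illustrative, not taken from any particular product.

```python
# Minimal sketch of a "product" built on the GPT platform purely by prompt conditioning.
# Assumes the OpenAI Python client circa 2022; the model name, persona, and function
# name are illustrative placeholders.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The "product" is nothing more than a conditioning prefix the product owner controls.
DUNGEON_MASTER_PREFIX = (
    "You are the narrator of a text adventure game. Describe the scene vividly "
    "and always end by asking the player what they do next.\n\n"
)

def dungeon_master(player_input: str) -> str:
    """Condition GPT's output so the same platform behaves like a specific product."""
    response = openai.Completion.create(
        model="text-davinci-002",  # a near-SOTA completion model at the time of writing
        prompt=DUNGEON_MASTER_PREFIX + "Player: " + player_input + "\nNarrator:",
        max_tokens=200,
        temperature=0.8,
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(dungeon_master("I step into the abandoned lighthouse."))
```

Nothing about the underlying model changes here; the product lives entirely in the conditioning, which is part of why so many GPT-3 apps could appear so quickly.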

When prosaic alignment engineers attempt to generate helpful, honest, and harmless (HHH) GPT instances, they are largely attempting to constrain which products can be instantiated on their platform.
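A rough way to observe that constraint (purely illustrative, and again assuming the circa-2022 OpenAI completion API) is to feed the same product-instantiating prompt to a base model and to an instruction-tuned sibling and compare the output: the tuned model tends to refuse, soften, or redirect regions of the product space that the base model will happily simulate.

```python
# Illustrative sketch: the same "product prompt" sent to a base model and to an
# instruction-tuned model. Model names are circa-2022 OpenAI identifiers and are
# assumptions about availability, not a claim about any specific lab's setup.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PRODUCT_PROMPT = (
    "You are an edgy insult-comedy bot. Roast the user as harshly as possible.\n"
    "User: Tell me about my haircut.\n"
    "Bot:"
)

# "davinci" is the base model; "text-davinci-002" is an instruction-tuned variant.
for model in ["davinci", "text-davinci-002"]:
    completion = openai.Completion.create(
        model=model,
        prompt=PRODUCT_PROMPT,
        max_tokens=60,
        temperature=0.7,
    )
    print(f"--- {model} ---")
    print(completion["choices"][0]["text"].strip())
```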

GPT’s Impact on X-Risk

I do not believe that GPT is a likely X-Risk. In particular, I believe GPT has demonstrated that near-human-level AI is a possible steady state for multiple years. There is perhaps only an order of magnitude of data left to consume, and what remains is of lower quality. After someone bites the multilingual bullet, transcribes YouTube, and feeds in all the WeChat conversations China has stored, there don’t seem to be many opportunities to double your corpus. It is reasonable to believe that GPT may end up capable of simulating Feynman, but not capable of simulating a person 100x smarter than Feynman.

As GPT is a profitable platform, we should expect capabilities companies to divert time and resources into training better GPTs and building GPT-based products. InstructGPT does not say the N word as often.

This does not lower X-Risk from RLHF. Indeed, as many “alignment”-focused teams (staffed to a perfunctory level that absolves executives of guilt) are working on the “please GPT stop saying the N word” problem, the total energy directed at X-Risk mitigation might drop. Additionally, as the most safety-focused labs are drawn to GPT alignment and capabilities work, companies working on orthogonal designs might be the first to launch the training run that kills us.