Front-end developer, designer, dabbler, and avid user of the superpowered information superhighway.
Vale
Extremely quickly thrown together concept.
There is a tendency for the last 1% to take the longest time.
I wonder if that long last 1% will be before AGI, or ASI, or both.
OpenAI’s Jig May Be Up
A great collection of posts there. Plenty of useful stuff.
This prompted me to write down and keep track of my own usage:
https://vale.rocks/posts/ai-usage
Predicting AGI/ASI timelines is highly speculative and unreliable. Ultimately, there are too many unknowns and complex variables at play. Any timeline must deal with systems and consequences multiple steps out, where tiny initial errors compound dramatically. A range can be somewhat reasonable, a more specific figure less so, and accurately predicting the consequences of the final event when it comes to pass is less likely still. It is simply impractical to come up with an accurate timeline with the knowledge we currently have.
Despite this, timelines are popular – both with the general AI hype crowd and those more informed. People don’t seem to penalise incorrect timelines – as evidenced by the many predicted dates we’ve seen pass without event. Thus, there’s little downside to proposing a timeline, even an outrageous one. If it’s wrong, it’s largely forgotten. If it’s right, you’re lauded as a prophet. The nebulous definitions of “AGI” and “ASI” also offer an out. One can always argue the achieved system doesn’t meet their specific definition or point to the AI Effect.
I suppose @gwern’s fantastic work on The Scaling Hypothesis is evidence of how an accurate prediction can significantly boost credibility and personal notoriety. Proposing timelines gets attention. Anyone noteworthy with a timeline becomes the centre of discussion, especially if their proposal is on the extremes of the spectrum.
The incentives for making timeline predictions seem heavily weighted towards upside, regardless of the actual predictive power or accuracy. Plenty to gain; not much to lose.
Following news of Anthropic allowing Claude to decide to terminate conversations, I find myself thinking about when Microsoft did the same with the misaligned Sydney in Bing Chat.
If many independent actors are working on AI capabilities, even if each team has decent safety intentions within their own project, is there a fundamental coordination problem that makes the overall landscape unsafe? A case where the sum of the parts is flawed, unsafe, and/or dangerous, and thus doesn’t add up to collective safety?
We have artificial intelligence trained on decades worth of stories about misaligned, maleficent artificial intelligence that attempts violent takeover and world domination.
Vale’s Shortform
I think people downplay that when artificial intelligence companies release new models/features, they tend to do so with minimal guardrails.
I don’t think it is hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and rush to appear on top where sound guardrails are not considered a necessity. Either way, models tend to become less controversial and more presentable over time.
Recently OpenAI released their GPT-4o image generation with rather relaxed guardrails (it being able to generate political content and images of celebrities without consent). This came hot on the heels of Google’s latest Imagen model, so there was reason to rush to market and ‘one-up’ Google.
Obviously much of AI risk is centred around swift progress and companies prioritising that progress over safety, but minimising safety specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.
This triggers two main thoughts for me:
1. How far are companies willing to relax their guardrails to beat competitors to market?
2. Where is ‘the line’ for guardrails relaxed enough to spur public discussion, but not so relaxed as to cause significant damage to the company’s perception or wider societal risk?
AI Model History is Being Lost
I went along to the VRChat meetup. Was absolutely wonderful to meet people, chat rationality, discuss HPMoR, and generally nerd out for a while.
Thanks very much to everyone who organised events and helped with coordination!
Speaking as a fellow Declan, I’m wondering if an unhealthy love for peanut butter is a “Declan-thing”...
Really appreciate the pointers. I spend a lot of time lurking here but not much time posting. I’ll definitely keep all your suggestions in mind next time I post. Cheers!
if people had been playing with carbon fiber hair historically I really doubt horsehair would take off today.
I do oft wonder how many things we interact with are only the way they are because that’s the way they’ve always been. Tangentially, how much of the world is just vestigial and nobody has thought to question it?
My Experience With A Magnet Implant
Personally, I’ve had my Caps Lock key bound to Escape for quite a while (a few years now, I’d say). It has been lovely and fits perfectly with my keyboard-driven workflow. Pairs great with Colemak-DH, which is my layout of choice.
I haven’t found myself ever missing the Caps Lock key’s original functionality, and my blank laptop keycaps mean I can have it bound without mental annoyance.
The cost and time will gradually deter you and inherently create a scarcity mindset, sabotaging your creativity and playfulness and willingness to go “wouldn’t it be fun if...?”.
You’ve caught something here about my own thinking. I have limitations associated with my own static site generator that act as a simple barrier of just enough strength to deter my will and likely hinder output. I hadn’t really considered it, or thought about it in detail.
It isn’t financial, but it is a time/mental tax I must pay to create, which should be eliminated. I’ll aim to mentally emphasise it as an issue requiring a solution.
Thanks for prompting the thought.
I don’t necessarily believe or disbelieve in the final 1% taking the longest in this case – there are too many variables to make a confident prediction. However, it does tend to be a common occurrence.
It could very well be that the 1% before the final 1% takes the longest. Based on the past few years, progress in the AI space has been made fairly steadily, so it could also be that it continues at just this pace until that last 1% is hit, and then exponential takeoff occurs.
You could also have a takeoff event that carries from now until 99%, which is then followed by the final 1% taking a long period.
A typical exponential takeoff is, of course, very possible as well.