I didn’t expect this to blow up, but I guess here we are.
The standard AI fear seems possible: we see another abrupt increase in AI capabilities this year, recursive self-improvement happens before anyone is ready for it, and we get paperclipped.
I’d previously assumed that LLMs would plateau somewhere due to architectural inefficiencies, such as the fact that they run concepts through language, creating overhead compared with processing concepts directly. Mythos is an update away from that view, since AFAIK its architecture appears to be nothing more than a scaffolded LLM. Regardless of architecture, I didn’t expect Mythos-level capabilities to appear so quickly; I expected incremental improvements rather than a model that cleanly surpasses all existing ones on almost all tested benchmarks while also finding vulnerabilities in extensively audited software like ffmpeg and OpenBSD.
Regarding personal outcomes: as I currently evaluate it, essentially all of the positive utility in my life lies in the future, which makes me anxious about having my plans cut off by transformative AI. I’m stating this vaguely because my personal circumstances are rather complicated and I don’t know how much I want to disclose publicly.
Thank you for asking :)