These stories always assume that an AI would be dumb enough to not realise the difference between measuring something and the thing measured.
Every AGI is a drug addict, unaware that it’s high is a false one.
Why? Just for drama?
These stories always assume that an AI would be dumb enough to not realise the difference between measuring something and the thing measured.
Every AGI is a drug addict, unaware that it’s high is a false one.
Why? Just for drama?
Perhaps “agency” is a better term here? In the strict sense of an agent acting in an environment?
And yeah, it seems we have shifted focus away from that.
Thankfully, thanks to our natural play instincts, we have a wonderful collection of ready made training environments: I think the field needs a new challenge of an agent playing video games, only receiving instructions of what to do using natural language.
The predicted cost for GPT-N parameter improvements is for the “classical Transformer” architecture? Recent updates like the Performer should require substantially less compute and therefore cost.