As far as I know, the cyclic weakness in KataGo (the top Go AI) was addressed fairly quickly. We don’t know a weird trick to beat the current version (although adversarial training might turn up another weakness). The AIs are superhuman at Go. The fact that humans could beat them by going out of distribution doesn’t seem relevant to me.
Fejfo
Figgie may not be a good game, but it’s certainly better than poker. What game would be better than Figgie?
The Alexander technique claims your attention consists of 2 layers:
your awareness: everything you pay some attention to
your focus: the main chunk of your attention
Attention control is about choosing your focus within the space of awareness. The Alexander technique is about controlling your awareness space.
https://expandingawareness.org/blog/what-is-the-alexander-technique
Becoming aware of, for example, tension in your muscles can help improve posture.
https://www.johnnichollsat.com/2011/02/27/explaining-the-at-1/
Reading this improved my self-control overnight, strong upvote.
I’ve been mainly using it for improving posture and eating healthier.
Focusing your attention on stopping does wonders for breaking bad habits;
I can tell stopping gets easier after just one or two iterations.
The “allegory of the dragon” chapter in Replacing Guilt, about the difference between the value and the price of a life, complements this chapter well.
Can finite factored sets be used for non-discrete variables?
Under fast takeoff, maintaining the alignment curve could happen by, e.g., using AI to align more advanced AI.
But I agree this way of thinking is less useful under fast takeoff.
It’s my impression that a lot of the “promising new architectures” are indeed promising. IMO a lot of them could compete with transformers if you invested in them; it just isn’t worth the risk while the transformer gold mine is still open. Why do you disagree?