You can use LeechBlock to add time restrictions for any site.
It also has the option to add loading delays to sites, which I find useful for sites which I can’t afford to block outright.
I’ve seen some authors use ‘subjective experience’ for the former and reserve consciousness for the latter. Unfortunately consciousness is one of those words, like ‘intelligence’, that everyone wants a piece of, so maybe it would be useful to have a specific term for the latter too. ‘Reflective awareness’ sounds about right, but after some quick googling it looks like that term has already been claimed for something else.
*Uncontrolled* argues along similar lines—that the physics/chemistry model of science, where we get to generalize a compact universal theory from a number of small experiments, simply isn’t applicable to biology/psychology/sociology/economics, and that policy-makers should instead rely more on widespread, continuous experiments in real environments to generate many localized partial theories.
A prototypical argument is the paradox-of-choice jam experiment, which has since become solidified in pop psychology. But actual supermarkets run many thousands of in-situ experiments and find that it actually depends on the product, the nature of the choices, the location of the supermarket, the time of year, etc.
life is sufficiently hard as it is. We don’t need to make it any harder than it has to be.
It seems like Kierkegaard could distinguish between kinds of difficulties. It feels good to deliberately challenge yourself. It doesn’t feel good to fight to avoid snapping at your partner because you’re hangry because you forgot to go shopping.
Maybe some difficulties are challenges to overcome and some are just friction to avoid.
TAPs seem to last about a week for me without some other regular reinforcement mechanism.
For a few weeks I’ve been writing them down in a text file. I read and rehearse them every morning over coffee, and just before I go to bed I look through them and reflect on whether I missed any triggers. It fits into journal habits that I already had so the inconvenience is quite low. So far I’ve been noticing triggers at a higher rate, but it’s still in the novelty phase.
My mind is already spinning excuses on overdrive.
As a teenager I spent 7 years in military school. They adopted the army ethos that if something under your responsibility goes wrong, you get punished, regardless of whether you could have done anything about it. Trying to produce excuses usually led to being cut off with “I don’t care” followed by an increased punishment.
This had an interesting effect—if you know you are going to be punished regardless of excuses, you stop thinking about excuses and start trying to head off problems. It’s like the Karate Kid approach to teaching murphyjitsu. From “you can’t possibly blame me for the rain” to “hey, what’s our backup plan if it rains during training”.
It could have equally gone the other way into learned helplessness though, so I don’t know whether it’s a good approach. But perhaps that refocusing could be achieved in other ways? Maybe simply making a rule of never offering excuses—just apologise, make reparations / accept punishment and move on.
This post is rekindling my urge to run away and live on a boat :)
I’d propose that another aspect of the steampunk aesthetic is uniqueness—a rebellion against the era of mass production. You don’t live in a standard Mark II Apple iBoat, you live in a constantly changing hand-built ship-of-Theseus that only you could ever understand or operate.
In that aspect at least, Linux has steampunkish tendencies. You may start with a standard distro, but over time it becomes a web of shell scripts and homebuilt jury-rigged tools, until you reach the point where someone asks if they can use your laptop and you are forced to reply in all honesty “probably not”.
A large part of the reason I want to make programming more accessible to people is to give them this sense of ownership over the devices that run their lives. It may end up being messy and inefficient, but it would feel more human.
This overlaps again with ‘choice of environment’. The fact that most people live in rented houses and aren’t allowed to redecorate, let alone replace the stairs with monkeybars, is maybe kind of dehumanizing.
Agreed. ‘Rest in bed as much as possible but grudgingly take the actions needed to stay alive’ sounds a lot like depression, but there also exist non-depressed people whose behaviour the theory needs to explain.
I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is ‘take actions that minimize prediction error’ but the actual model is ‘take actions that minimize a complicated construct called free energy’. Sitting in a dark room certainly works for the former but I don’t know how to calculate it for the latter.
In the paper I linked, the free energy minimizing trolleycar does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.
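To make the dark-room point concrete for the prediction-error reading: here is a toy Python sketch where I use expected squared error as a stand-in for prediction error (my assumption—this is not the free energy construct, which is exactly the gap I’m pointing at). A constant environment trivially minimizes it; a noisy one can’t.

```python
import random

def expected_prediction_error(env_sampler, prediction, n=10000):
    """Mean squared error between a fixed prediction and sampled observations."""
    return sum((env_sampler() - prediction) ** 2 for _ in range(n)) / n

rng = random.Random(0)
dark_room = lambda: 0.0                     # constant, fully predictable observation
bright_world = lambda: rng.gauss(0.0, 1.0)  # noisy, unpredictable observation

# In each case the agent already makes the best possible prediction (the mean),
# yet the achievable error differs: zero in the dark room, ~1 (the noise
# variance) in the bright world.
print(expected_prediction_error(dark_room, 0.0))
print(expected_prediction_error(bright_world, 0.0))
```

Under this reading the dark room strictly dominates, so whatever stops a free-energy agent from choosing it has to come from the extra structure in the full free energy functional, not from prediction error alone.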
That was much more informative than most of the papers. Did you learn this by parsing the papers, or from some other, better source?
(Posting here rather than SSC because I wrote the whole comment in markdown before remembering that SSC doesn’t support it).
We had a guest lecture from Friston last year and I cornered him afterwards to try to get some enlightenment (notes here). I also spent the next few days working through the literature, using a multi-armed bandit as a concrete problem (notes here).
Very few of the papers have concrete examples. Those that do often skip important parts of the math and use inconsistent/ambiguous notation. He doesn’t seem to have released any of the code for his game-playing examples.
The various papers don’t all even implement the same model—the free energy principle seems to be more a design principle than a specific model.
The wikipedia page doesn’t explain much but at least uses consistent and reasonable notation.
*Reinforcement learning or active inference* has most of a worked model, and is the closest I’ve found to explaining how utility functions get encoded into meta-priors. It also contains:
> When friends and colleagues first come across this conclusion, they invariably respond with; “but that means I should just close my eyes or head for a dark room and stay there”. In one sense this is absolutely right; and is a nice description of going to bed. However, this can only be sustained for a limited amount of time, because the world does not support, in the language of dynamical systems, stable fixed-point attractors. At some point you will experience surprising states (e.g., dehydration or hypoglycaemia). More formally, itinerant dynamics in the environment preclude simple solutions to avoiding surprise; the best one can do is to minimise surprise in the face of stochastic and chaotic sensory perturbations. In short, a necessary condition for an agent to exist is that it adopts a policy that minimizes surprise.
I am leaning towards ‘the emperor has no clothes’. In support of this:
Friston doesn’t explain things well, but nobody else seems to have produced an accessible worked example either, even though many people claim to understand the theory and think it is important.
Nobody seems to have used this to solve any novel problems, or even to solve well-understood trivial problems.
I can’t find any good mappings/comparisons to existing models. Are there priors that cannot be represented as utility functions, or vice versa? What explore/exploit tradeoffs do free-energy models lead to, or can they encode any given tradeoff?
At this point I’m unwilling to invest any further effort into the area, but I could be re-interested if someone were to produce a python notebook or similar with a working solution for some standard problem (eg multi-armed bandit).
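For reference, this is the scale of thing I mean by a standard problem—not a free-energy implementation, just the conventional epsilon-greedy baseline I’d want one compared against (arm means and parameters are arbitrary illustration):

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy baseline for a Bernoulli multi-armed bandit.

    Returns the average reward per step; with enough steps it should
    approach the best arm's mean, minus a small exploration cost.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms     # pulls per arm
    values = [0.0] * n_arms   # running estimate of each arm's mean reward
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
        total_reward += reward
    return total_reward / steps

# With arms [0.2, 0.5, 0.8], the average reward should end up near 0.8,
# discounted slightly by the 10% random exploration.
print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))
```

A free-energy treatment of even this toy problem, with the priors and the explore/exploit behaviour made explicit, is the kind of artifact that would change my mind.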
I didn’t see the post itself, but it sounds like Unconscious Thought Theory. The experimental evidence is pretty weak, and imo the theory as it stands is just too poorly specified to really test experimentally.
There is some evidence that offline processing matters for eg motor learning or statistical learning. I haven’t looked in enough detail to know whether to trust it or not.