Gunnar_Zarncke
It is not clear to me what this post is aiming at. It reads as proposing a pragmatic model of agency, yet it sounds like it is trying to offer a general model of agency. It doesn’t discuss why coalitions are used, except as a means to reduce some problems that other implementations of agents have, and it doesn’t give details on what model of goals it uses.
I’m fine with throwing out ideas on what agency is; the post admits that this is exploratory, and we do not have a grounded model of what goals and agency are. But I wish it would go more in the direction of Agency As a Natural Abstraction and toward a formalization of what a goal is to begin with.
Well, there will be compensation; that’s the whole idea of LVT: it’s a more efficient tax, so you can reduce inefficient taxes. But I guess you mean compensation for the same person, and that is the hard-to-prove point.
I guess the only time to introduce such a tax is after a war.
I know! I own property myself. Obviously, people don’t like big sudden changes that cost them a lot of money. That’s why I asked for more incremental approaches. Though, I guess the problem is that small changes will not get a lot of support from non-property owners—because the effect is small—but will get opposition from large owners because they see the effect.
Would it work if the tax were raised very slowly, say by 10 percentage points per generation?
Would it work if the tax only took effect after the owner’s death for privately owned property? That might significantly reduce resistance from individual landowners.
Indeed, engineering readability at multiple levels may solve this.
It makes it easier, but consider this: the human brain also does this. When we conform to expectations, we make ourselves more predictable and we model ourselves. But this doesn’t prevent deception either; people still lie, and some of the deception is pushed into the subconscious.
Nobody is intending to tax land at 100%. I heard most proposals target 80%. Partly because of your reason, I guess, but more because of measurement errors.
I think it is pretty clear who is standing in the way of Georgism: landowners. They stand to lose the most, and those with a lot of property typically have a larger influence on economic policy. The problem is that there is not a single organized landowner you could paint as the bad guy. It is a distributed mass that is hard to counter.
Why did they fence off that patch of green? Isn’t that a nice place to sit and talk or something?
Nate Silver alludes to this question too.
Strictly, it only proves that something exists. I could be just a thought of the demon. Though, granted again, a simulation also exists in some meaningful sense, which has raised the moral concern for mental constructions and simulations.
OK. If you want to keep it as simple as possible, you can leave out the “Mu” button, but I suggest that you then a) randomize the order in which the experiments are presented and b) count, for each question, the fraction of respondents who left it unanswered (among those who answered at least one question). That way, you can see which questions more people refuse to answer.
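A minimal sketch of what I mean by b), assuming the raw results are stored per respondent with skipped questions simply missing; the data layout and names here are made up:

```python
# Hypothetical data layout: one dict per respondent mapping question -> answer;
# skipped questions are simply absent. Names are invented for illustration.
responses = [
    {"Deceiving Demon": "No", "Drowning Child": "Yes"},
    {"Deceiving Demon": "Yes"},  # skipped Drowning Child
    {},                          # answered nothing; excluded from the baseline
]
questions = ["Deceiving Demon", "Drowning Child"]

# Baseline: only respondents who answered at least one question.
active = [r for r in responses if r]

for q in questions:
    skipped = sum(1 for r in active if q not in r)
    print(f"{q}: {skipped}/{len(active)} active respondents left it unanswered")
```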
For example, I have gone through all questions now, but refused to answer Drowning Child (“similar” needs to be more specific), Mary’s Room (uncertainty if an adult brain can still learn to interpret the new sensations), and Blind Men and Elephant (“our” is ambiguous: all, any, or average).
On the questions I answered, I had the biggest difference for Deceiving Demon. I answered “No” because if everything (!) is created by the demon, and there is no observable difference to explanations based on physical models (physical not because it refers to physical reality, but because of the type of model), then “Deceiving Demon” is just a fancy name for the same thing—physics.
I miss the answer option: “Mu, the question doesn’t make sense, is ambiguous, or otherwise not answerable with Yes/No.” Preferably with a field to explain why.
The score seems to be 100 minus the number of answers you clicked. Answering nothing means you are completely pure.
If you got 65, then the test seems to measure something real. It’s not the test’s fault that you think you wasted your time.
It would be interesting to have a different test that asks about your knowledge of the Sequences. Maybe you can create one?
Rationalist Purity Test
This is a mix of nitpicks and more in-depth comments. I just finished reading the paper and these are my notes. I liked that it was quite technical, suggesting specific changes to existing systems and being precise enough to make testable predictions.
For some discussion of the type of consciousness at issue in the paper (phenomenal consciousness), see Dehaene’s Consciousness and the Brain, reviewed on ACX.
In the introduction, the verb “to token” is used twice without it being clear what it means:
a being is conscious in the access sense to the extent that it tokens access-conscious states
The authors analyze multiple criteria for conscious Global Workspace systems and based on plausibility checks synthesize the following criteria:
A system is phenomenally conscious just in case:
(1) It contains a set of parallel processing modules.
(2) These modules generate representations that compete for entry through an information bottleneck into a workspace module, where the outcome of this competition is influenced both by the activity of the parallel processing modules (bottom-up attention) and by the state of the workspace module (top-down attention).
(3) The workspace maintains and manipulates these representations, including in ways that improve synchronic and diachronic coherence.
(4) The workspace broadcasts the resulting representations back to sufficiently many of the system’s modules.
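To make the four conditions a bit more concrete, here is a deliberately toy sketch of such an architecture. This is my own illustration, not code from the paper, and all names and mechanics are invented: a few parallel modules propose representations, an attention-weighted bottleneck admits exactly one into the workspace, the workspace maintains it over time, and the result is broadcast back to the modules.

```python
import random

class Module:
    """A parallel processing module (criterion 1) that proposes representations."""
    def __init__(self, name):
        self.name = name
        self.received = None  # last representation broadcast back by the workspace

    def propose(self):
        # Bottom-up attention: salience encodes how strongly the module pushes its content.
        return {"source": self.name,
                "content": f"percept-{random.randint(0, 9)}",
                "salience": random.random()}

class Workspace:
    """Maintains one representation at a time and broadcasts it (criteria 3 and 4)."""
    def __init__(self):
        self.state = None

    def top_down_bias(self, proposal):
        # Top-down attention: proposals from the currently attended source get a boost.
        if self.state is not None and proposal["source"] == self.state["source"]:
            return 0.5
        return 0.0

    def admit(self, proposals):
        # Information bottleneck (criterion 2): exactly one proposal wins,
        # scored by bottom-up salience plus top-down bias.
        self.state = max(proposals, key=lambda p: p["salience"] + self.top_down_bias(p))
        return self.state

    def broadcast(self, modules):
        # Criterion 4: the winning representation is sent back to the modules.
        for m in modules:
            m.received = self.state

modules = [Module(n) for n in ("vision", "hearing", "memory")]
workspace = Workspace()
for step in range(3):
    winner = workspace.admit([m.propose() for m in modules])
    workspace.broadcast(modules)
    print(step, winner["source"], winner["content"])
```

The persistent workspace state and the top-down bias are only a crude stand-in for the “synchronic and diachronic coherence” of criterion (3).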
In the introduction, the authors write:

We take it to be uncontroversial that any artificial system meeting these conditions [for phenomenal consciousness] would be access conscious
Access conscious means the ability to report on perceptions. We can apply this statement in reverse: a system that isn’t access conscious can’t meet all of the above conditions.
To test this, we can apply the kind of thought experiment used in the paper: Young babies do not yet represent their desires, and while they do show some awareness, many people would agree that they are not conscious (yet) and are likely not aware of their own existence, specifically as independent of their caretakers. Also, late-stage dementia patients lose the ability to recognize themselves, which in this model would result from losing the ability to represent the self-concept in GWS. This indicates that something is missing.
Indeed, in section 7, the authors discuss the consequence that their four criteria could be fulfilled by very simple systems:
As we understand them, both objections [the small model objection and another one] are motivated by the idea that there may be some further necessary condition X on consciousness that is not described by GWT. The proponent of the small model objection takes X to be what is lacked by small models which prevents them from being conscious
The authors get quite close to an additional criterion in their discussion:
it has been suggested to us that X might be the capacity to represent, or the capacity to think, or the capacity for agency [...]
[...] Peter Godfrey-Smith’s [...] emphasizes the emergence of self-models in animals. In one picture, the essence of consciousness is having a point of view
But they refrain from offering one:
while we have argued that no choice of X plausibly precludes consciousness in language agents, several of the choices do help with the small model objection.
As in the thought experiments above, there are readily available examples where too-simple or impaired neural nets fail to appear conscious, and to me this suggests the following criterion:
(5) To be phenomenally conscious, the system needs to have sufficient structure or learning capabilities to represent (specific or general) observations or perceptions as concepts and determine that the concept applies to the system itself.
In terms of the Smallville system discussed, this criterion may already be fulfilled by the modeling strength of GPT-3.5 and would likely be fulfilled by later LLM versions. And, as required, it is not fulfilled by simple neural networks that can’t form a self-representation. This doesn’t rule out systems with far fewer neurons than humans, e.g., by avoiding all the complexities of sense processing and interacting purely textually with simulated worlds.
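Continuing the toy sketch from above (again my own illustration, not the paper’s; it reuses the Module, Workspace, modules, and workspace definitions from that snippet), criterion (5) would roughly amount to having a module whose representations are concepts applied to the system itself, which can then compete for the workspace like any other content:

```python
class SelfModel(Module):
    """A module whose proposals are about the system's own activity (criterion 5, crudely)."""
    def __init__(self, others):
        super().__init__("self-model")
        self.others = others

    def propose(self):
        # The content is a concept applied to the system itself: what its own modules
        # last received from the workspace.
        return {"source": self.name,
                "content": f"my modules received {[m.received for m in self.others]}",
                "salience": random.random()}

modules.append(SelfModel(list(modules)))
# Like any other content, a self-directed representation has to win the bottleneck
# to become workspace content; criterion (5) only requires that the system can form it.
winner = workspace.admit([m.propose() for m in modules])
print("workspace now holds:", winner["source"], winner["content"])
```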
And there I thought that by minimalist you meant Lisp and by maximalist Haskell or Lean.
You may want to make this a linkpost to that paper, as it can then be tagged and may be noticed more widely.
There was an excellent study on the effect of Universal Basic Income (UBI) in the US that came out recently. OpenResearch calls it their Unconditional Cash Study: $1,000 per month, unconditional and tax-free, run as a randomized controlled trial (RCT). Two papers came out shortly after @ESYudkowsky’s post:
The Employment Effects of a Guaranteed Income: Experimental Evidence from Two U.S. States by Eva Vivalt et al (this one refers to the study as OpenResearch Unconditional Income Study (ORUS), but I assume it is the same).
Does Income Affect Health? Evidence from a Randomized Controlled Trial of a Guaranteed Income by Sarah Miller et al.
Eva Vivalt has both listed on her blog.
My takeaways are:
reduction of labor by 2h per week, replaced by leisure
I think this is fine, actually. Less “by the sweat of your brow.”
reduction of (non-UBI) income by $0.20 for each $1 of UBI
I think this is fine and to be expected, at least in the “short” run of the study; at $1,000 per month of UBI, that is roughly $200 per month less earned income, for a net gain of about $800. Where else would more money come from?
no better quality jobs
this is the sad result from the study that hints at the “poverty equilibrium”—UBI didn’t help people avoid bad bosses.
a little bit more education, a little bit of precursors to entrepreneurship
no other relevant significant effects
doesn’t improve mental or physical health beyond year one.
I do wonder whether the stress in year three results from knowing that the UBI will run out.