I operate by Crocker’s rules.
niplav
That seems an odd motte-and-bailey-style explanation (and likely an odd belief; as you say, misgeneralized).
From my side or theirs?
Huh. Intuitively this doesn’t feel like it rises to the quality needed for a post, but I’ll consider it. (It’s in the rat’s tail of all the thoughts I have about subagents :-))
(Also: Did you accidentally a word?)
What then prevents humans from being more terrible to each other? If the vast majority of people are like this, and they know that the vast majority of others are also like this, up to common knowledge, I don’t see how you’d get a stable society in which people aren’t usually screwing each other over a great deal.
Prompted by this post, I think that now is a very good time to check how easy it is for someone (with access to generative AI) impersonating you to get access to your bank account.
On a Twitter Lent at the moment, but I remember this thread. There’s also a short section in an interview with David Deutsch:
So all hardware limitations on us boil down to speed and memory capacity. And both of those can be augmented to the level of any other entity that is in the universe. Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that. So that’s the hardware.
[…]
So if we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can exhibit the functionality of running any computable program and function.

and:
So the more memory and time you give it, the more closely it could simulate the whole universe. But it couldn’t ever simulate the whole universe or anything near the whole universe because it is hard for it to simulate itself. Also, the sheer size of the universe is large.
I think this happens when people encounter Deutsch’s claim that humans are universal explainers, and then misgeneralize the claim to Turing machines.
So the more interesting question is: Is there a computational class somewhere between FSAs and PDAs that is able to, given enough “resources”, execute arbitrary programs? What physical systems do these correspond to?
Related: Are there cognitive realms? (Tsvi Benson-Tilsen, 2022)
Yes, I was interested in the first statement, and not thinking about the second statement.
Not “humans are a general turing-complete processing system”, that’s clearly false
Critical rationalists often argue that this (or something very related) is true. I was not talking about whether humans are fully implementable on a Turing machine, that seems true to me, but was not the question I was interested in.
Could you explain more about being coercive towards subagents? I’m not sure I’m picking up exactly what you mean.
A (probably-fake-)framework I’m using is to imagine my mind being made up of subagents with cached heuristics about which actions are good and which aren’t. They function in a sort-of-vetocracy—if any one subagent doesn’t want to engage in an action, I don’t do it. This can be overridden, but doing so carries the cost of the subagent “losing trust” in the rest of the system and next time putting up even more resistance (this is part of how ugh fields develop).
The “right” way to solve this is to find some representation of the problem-space in which the subagent can see how its concerns are addressed, or why they aren’t relevant to the situation at hand. But sometimes there’s not enough time or mental energy to do this, so the best available solution is to override the concern.
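As a toy illustration of this (again, probably fake) framework, here’s a minimal sketch; the class names, numbers, and the “resistance doubles on override” rule are all illustrative assumptions, not claims about how minds actually work:

```python
# Toy vetocracy of subagents: any subagent can veto an action, and
# overriding a veto makes that subagent resist harder next time.

class Subagent:
    def __init__(self, name, disliked_actions):
        self.name = name
        self.disliked = set(disliked_actions)
        self.resistance = 1.0  # grows each time the subagent is overridden

    def vetoes(self, action):
        return action in self.disliked  # stand-in for a cached heuristic

def try_action(subagents, action, override_budget=0.0):
    """Do `action` if no subagent vetoes it, or if we can afford to
    override every vetoing subagent's accumulated resistance."""
    vetoers = [s for s in subagents if s.vetoes(action)]
    if not vetoers:
        return True
    if sum(s.resistance for s in vetoers) <= override_budget:
        for s in vetoers:
            s.resistance *= 2  # the cost: overridden subagents "lose trust"
        return True
    return False  # the veto stands, and hardens into an ugh field over time

agents = [Subagent("comfort", {"apply for jobs"}), Subagent("curiosity", set())]
print(try_action(agents, "apply for jobs"))                       # False: vetoed
print(try_action(agents, "apply for jobs", override_budget=2.0))  # True: overridden
print(agents[0].resistance)                                       # 2.0: more resistance next time
```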
This seems right. One thing I would say is that, kind of surprisingly, it hasn’t been the most aversive tasks where the app has made the biggest difference, but rather the larger number of moderately aversive tasks. It makes expensive commitments cheap and cheap commitments even cheaper, and for me it has turned out that cheap commitments have made up most of the value.
Maybe for me the transaction costs are still a bit too high to be using commitment mechanisms, which means I should take a look at making this smoother.
Trying to Disambiguate Different Questions about Whether Humans are Turing Machines
I often hear the sentiment that humans are Turing machines, and that this sets humans apart from other pieces of matter.
I’ve always found those statements a bit strange and confusing, so it seems worth it to tease apart what they could mean.
The question “is a human a Turing machine” is probably meant to convey “can a human mind execute arbitrary programs?”, that is, “are the languages the human brain emits at least recursively enumerable?”, as opposed to e.g. merely context-free languages.
My first reaction is that humans are definitely not Turing machines, because we lack the infinite amount of memory a Turing machine has in the form of an (idealized) tape. Indeed, in the Chomsky hierarchy humans aren’t even at the level of pushdown automata; instead we are nothing more than finite state automata. (I remember a professor pointing out to us that all physical instantiations of computers are merely finite state automata.)
Depending on one’s interpretation of quantum mechanics, one might instead argue that we’re at least nondeterministic finite automata or even Markov chains. However, every nondeterministic finite automaton can be transformed into an equivalent deterministic finite automaton, albeit with an up-to-exponential increase in the number of states, and Markov chains aren’t more computationally powerful either (e.g. they can’t recognize Dyck languages, just as DFAs can’t).
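For concreteness, here’s a minimal sketch of the standard subset construction that performs this transformation (the dictionary encoding of the NFA is my own choice for illustration); the DFA’s states are the reachable sets of NFA states, which is where the exponential blowup can come from:

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset construction. `delta` maps (state, symbol) to a set of
    successor states; returns the DFA transition table, start state,
    and accepting states, all over frozensets of NFA states."""
    dfa_start = frozenset([start])
    dfa_delta, todo, seen = {}, [dfa_start], {dfa_start}
    while todo:
        subset = todo.pop()
        for symbol in alphabet:
            # The DFA successor is the union of the NFA successors.
            succ = frozenset(chain.from_iterable(
                delta.get((q, symbol), set()) for q in subset))
            dfa_delta[(subset, symbol)] = succ
            if succ not in seen:
                seen.add(succ)
                todo.append(succ)
    return dfa_delta, dfa_start, {s for s in seen if s & accepting}

# NFA for "the second-to-last symbol is 'a'": 3 states, but the
# resulting DFA needs 4 (for "n-th from last" the blowup is 2^n).
delta = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
         ("q1", "a"): {"q2"}, ("q1", "b"): {"q2"}}
dfa, start, acc = nfa_to_dfa("ab", delta, "q0", {"q2"})
print(len({s for s, _ in dfa}))  # 4 reachable DFA states
```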
It might be that quantum finite automata are of interest here, but I don’t know enough about quantum physics to make a judgment call.
The above argument only applies if we regard humans as closed systems with clearly defined inputs and outputs. When probed, many proponents of the statement “humans are Turing machines” indeed fall back to a motte that in principle a human could execute every algorithm, given enough time, pen and paper.
This seems true to me, assuming that the matter in the universe does not have a limited amount of computation it can perform.
In a finite universe we are logically isolated from almost all computable strings (there are infinitely many of them, but a finite universe can only ever instantiate finitely many), which seems pretty relevant.
Another constraint comes from computational complexity: should we treat things that are not polynomial-time computable as basically unknowable? Humans certainly can’t solve NP-complete problems efficiently.
I’m not sure this is a very useful notion.
On the one hand, I’d argue that, by orchestrating exactly the right circumstances, a tulip could receive specific stimuli to grow in the right directions, knock the correct things over, lift other things up with its roots, create offspring that perform subcomputations &c, and thereby execute arbitrary programs. Conway’s Game of Life certainly manages to! One might object that this is set up for the tulip to succeed, but we also put the human in a room with unlimited pens and paper.
On the other hand, those circumstances would have to be very exact, much more so than with humans. But that again is a difference in degree, not in kind.
With all that said, I come down to the following conclusion: humans are certainly not Turing machines; however, there might be a (much weaker) notion of generality that humans fulfill and other physical systems don’t (or don’t to the same degree). But this notion of generality is purported to be stronger than that of life in general:
I don’t know of any formulation of such a criterion of generality, but would be interested in seeing it fleshed out.
Agree with the recommendation of using such websites.
I’ve found them to be very effective, especially for highly aversive medium-term tasks like applying for jobs or finishing a project. Five times I wanted a thing to happen but was really procrastinating on it, and five times pulling out the big guns (Beeminder) got it done.
I haven’t tried using them for long-term commitments, since my intuition is that using them is highly coercive towards subagents, which then further entrench their opposition. So I’ve used these apps sparingly. Maybe I’ll still give it a try.
I think these apps work best if really aversive tasks are set to an estimated-bare-minimum commitment, which in reality is probably the median amount of what one can get done.
There’s also the Fluidity podcast narrating Meaningness, which I mistakenly believed was narrated by you.
Ideally I would not go with the extreme. I would instead choose a relatively light ‘this is not allowed’ where in practice we mostly look the other way.
I’m not going to argue about assisted suicide here, but I am going to remark that cryonicists are sometimes in a dilemma where they know they have a neurodegenerative disease, which is destroying the information content of their personality, but they can’t (legally) go into cryopreservation before most of the damage has been done.
Obscure request: Short story by Yudkowsky, on a reddit short-fiction subreddit, about a time traveler coming back to the 19th century from the 21st. The time traveler is incredibly distraught about the red tape in the future, screaming about molasses and how it’s illegal to sell food on the street.

Nevermind, found it.
An African grey parrot costs ~$2k/parrot. A small breeding population might be ~150 individuals (fox domestication started out “with 30 male foxes and 100 vixens”). Let’s assume cages cost $1k/parrot, including perches and feeding and water bowls. The estimated price for an avian vet is $400/parrot-year.
This page also says that African greys produce feather dust, and that one therefore needs air filters (which are advisable anyway). Let’s say we need one for every 10 parrots, costing $500 each.
Let’s say the whole experiment takes 50 years, which is ~7 generations. I’ll assume the number of parrots stays constant, since they’re bred at a constant rate.
Let’s say it takes $500/parrot-year for feed and water (just a guess, I haven’t looked this up).
We also have to buy buildings to house the parrots in: 2 m²/parrot at $100/m² in rural areas, plus $200k per building, with each building housing 50 parrots (I’ve guessed those numbers). Perhaps four staff (working 8 hours/day, 360 days a year) at $60/staff-hour.
The total cost is then 150*$2k+15*$500+150*$1k+150*50*($400+$500)+3*$200k+2*150*$100+50*360*8*4*$60 ≈ $42.4 mio, dominated by staff costs.
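A quick script to check the arithmetic, using only the guessed figures above:

```python
# Back-of-the-envelope cost of the parrot-breeding experiment.
# All figures are the guesses stated above, not researched numbers.
parrots, years = 150, 50

purchase  = parrots * 2_000                # $2k per parrot
cages     = parrots * 1_000                # $1k per cage, perches & bowls included
filters   = (parrots // 10) * 500          # one $500 air filter per 10 parrots
vet_feed  = parrots * years * (400 + 500)  # $400 vet + $500 feed per parrot-year
buildings = (parrots // 50) * 200_000      # $200k per 50-parrot building
land      = parrots * 2 * 100              # 2 m²/parrot at $100/m²
staff     = years * 360 * 8 * 4 * 60       # 4 staff, 8 h/day, 360 d/yr, $60/h

total = purchase + cages + filters + vet_feed + buildings + land + staff
print(f"${total:,}")  # $42,397,500; staff costs alone are $34.56 mio
```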
I assume the number is going to be very similar for other (potentially more intelligent) birds like keas.
If some common variable C is causally upstream of both A and B, then I wouldn’t say that A causes B, or that B causes A: intervening on A can’t possibly change B, and intervening on B can’t change A (this is Pearl’s notion of causation).
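A minimal simulation of this point (the variable names and distributions are made up for illustration): C drives both A and B, so they are correlated observationally, but setting A by intervention cuts the C → A edge and leaves B’s distribution untouched:

```python
import random

random.seed(0)

def sample(do_a=None):
    """One draw from a toy confounded model: C -> A and C -> B."""
    c = random.gauss(0, 1)
    a = c + random.gauss(0, 0.1) if do_a is None else do_a  # do(A=a) severs C -> A
    b = c + random.gauss(0, 0.1)
    return a, b

n = 100_000
observed   = [sample() for _ in range(n)]
intervened = [sample(do_a=5.0) for _ in range(n)]

mean_b_obs  = sum(b for _, b in observed) / n
mean_b_intv = sum(b for _, b in intervened) / n
print(round(mean_b_obs, 2), round(mean_b_intv, 2))  # both ~0.0: do(A) doesn't move B
```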
I don’t think this is true: there’s this paper and this post from November 2023, and these two papers from April and October 2022, respectively.
(Note that I’ve only read the single-turn paper.)
(my react is me rolling my eyes)
There’s this intro series by @Alex Lawsen.