Green goo doesn’t need all that (see: Covid and other plagues). Why would grey goo? Ok, Covid isn’t transforming everything into more of itself, but it’s doing enough of that to cause serious harm.
Belief and action are different things and obey different laws. If I run for a train, the lower I think my chances, the more effort I must put in.
Interesting about Kepler, but it is surely not an example of the “metaphysical foundations” of Burtt’s title. (I have not read his book.) “Motivation” would be a more accurate word. Kepler’s laws stand on their own, independent of his sun-salutation. Newton later put a foundation under them, and Einstein a deeper one.
In the specific case of PCT, the model treats everything as closed-loop homeostasis occurring within the organism being modeled.
That is not the case. Indeed, most of the experimental work on PCT involves creatures controlling perceptions of things outside themselves, e.g. cursor-tracking experiments or ball catching, and this is where the important applications are. Homeostatic processes within the organism, such as control of deep body temperature, are well understood to be control processes, and in the case of body temperature I believe it is known where the temperature sensor is. It is for interactions with the environment that many still think in terms of stimulus-response, plan-then-execute, or sensing and compensating for disturbances, none of which are control processes, and therefore none of which can explain how organisms achieve consistent results in the face of varying environments.
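To make the distinction concrete, here is a toy simulation (my own sketch, not any published PCT model; the gain, reference, and disturbance values are made up). A closed-loop controller holds its perception at the reference even when the environment shifts partway through, while an open-loop plan-then-execute strategy ends up off by the size of the disturbance:

```python
# Toy comparison: closed-loop control vs. open-loop "plan-then-execute".
# All constants are illustrative only.

def final_perception(closed_loop, steps=100, gain=0.5, reference=10.0):
    position = 0.0                                        # the organism's output
    for t in range(steps):
        disturbance = 3.0 if t > steps // 2 else 0.0      # environment shifts midway
        perception = position + disturbance               # what the organism senses
        if closed_loop:
            position += gain * (reference - perception)   # act to reduce perceived error
        else:
            position += reference / steps                 # execute a fixed plan
    return position + 3.0                                 # final perception, disturbance present

print(final_perception(closed_loop=True))   # ~10.0: perception held at the reference
print(final_perception(closed_loop=False))  # ~13.0: the plan "succeeds" but the goal is missed
```

Note that the closed-loop version never senses or models the disturbance itself; it only acts on the error in the controlled perception, which is what distinguishes control from sensing-and-compensating.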
endless stacks of stationary
Immovable stacks, clearly. Stacks of what? Stacks of immovability.
Summary results (without derivations):
Puzzle 1: score 19. Edit: that was for score = sum of numbers used. The product score for the same solution is 198.
Puzzle 2: 1.0030 expected rolls.
Edit: Scott pointed out that the primes had to be below 2021. For this I have a solution with exactly 2 rolls.
Edit, no, I’m still wrong, there are 2022 people to choose among, not 2021.
My latest attempt gets 2.000000579 expected rolls.
Puzzle 3: 4 coins (and at most 14 flips).
Full solutions:
Puzzle 1:
scores 19.
Edit: for the problem as originally stated, i.e. the score is the sum of the numbers used. For score = product, the score is 198.
Puzzle 2:
Use a die with 2027 faces (the smallest prime above 2021). Roll to choose; if the result is above 2021, roll again. The expected number of rolls is 2027/2021 ≈ 1.0030.
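A quick check of that figure (my own sketch): the number of rolls is geometric, with per-roll success probability 2021/2027, so its mean is the reciprocal:

```python
# Rolls-until-success is geometric: success probability p per roll, mean 1/p.
from fractions import Fraction

p = Fraction(2021, 2027)   # chance that a roll lands in 1..2021
print(float(1 / p))        # 1.00297..., about 1.0030
```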
Edit: I missed the condition that the primes had to be below 2021. Since 2021 = 43 × 47, use one roll each of a 43-sided and a 47-sided die.
Edit, no, I’m still wrong, there are 2022 people to choose among, not 2021. So I don’t have a solution to puzzle 2 yet.
New attempt: I used some computational assistance in finding this solution. Roll one die of 1811 sides and one of 1907. The product of these is 3453577 = 1708 × 2022 + 1. In 3453576 out of 3453577 cases this gives you your choice; otherwise roll both again.
Expected rolls = 2 × (3453577/3453576) ≈ 2.000000579.
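Checking the arithmetic (my own verification sketch in Python):

```python
from fractions import Fraction

# Two dice give 1811 * 1907 equally likely outcomes, one more than 1708 * 2022.
assert 1811 * 1907 == 1708 * 2022 + 1 == 3453577

# Each attempt costs two rolls and succeeds with probability p, so the
# expected number of rolls is 2 / p.
p = Fraction(3453576, 3453577)
print(float(2 / p))        # 2.000000579...
```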
Puzzle 3:
Use six coins, with probabilities 1⁄2, 1⁄3, 1⁄5, 1024/2021, 729⁄997, and 243⁄268.
Flip the 1024/2021 coin to divide the people into groups of 1024 and 997. Choose from the 1024 group with 10 flips of the 1⁄2 coin.
For the 997 group, use the 729⁄997 to get groups of 729 and 268.
The 729 group can be chosen from with the 1⁄2 and 1⁄3 coins in at most 12 flips. (Use the 1⁄3 coin to cut off one third of the group; if it misses, use the 1⁄2 coin to split the remaining two thirds in half. Two flips cut the group to a third of its size; do this 6 times, since 729 = 3^6.)
For the 268 group, use the 243⁄268 to split it into groups of 243 and 25.
These can both be chosen from with the 1⁄2, 1⁄3, and 1⁄5 coins, the group of 243 with at most 10 flips, the group of 25 with at most 6.
In the worst case 14 flips are needed.
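As a sanity check (a sketch of my own, not part of the original solution), the following verifies that the splits partition exactly and that the worst case is indeed 14 flips:

```python
# Verify the six-coin scheme: the splits partition exactly, and the worst
# case is 14 flips.

def flips_for_power(n):
    """Worst-case flips to choose uniformly from n = 2^a * 3^b * 5^c people:
    a factor of 2 costs 1 flip (the 1/2 coin), a factor of 3 costs 2 flips
    (1/3 then 1/2), and a factor of 5 costs 3 flips (1/5 then two 1/2s)."""
    cost = 0
    for p, c in ((2, 1), (3, 2), (5, 3)):
        while n % p == 0:
            n //= p
            cost += c
    assert n == 1, "n must have no prime factors other than 2, 3, 5"
    return cost

# The splits partition the group exactly at each stage.
assert 1024 + 997 == 2021
assert 729 + 268 == 997
assert 243 + 25 == 268

# Worst-case flips: one flip for each splitting coin on the path, plus the
# cost of the smooth group at the end of that path.
worst = 1 + max(flips_for_power(1024),                  # 1 + 10 = 11
                1 + max(flips_for_power(729),           # 2 + 12 = 14
                        1 + max(flips_for_power(243),   # 3 + 10 = 13
                                flips_for_power(25))))  # 3 + 6  = 9
print(worst)  # 14
```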
Better solution with five coins: 2000/2021, 1⁄2, 1⁄3, 1⁄5, and 4⁄7.
Use 2000/2021 to divide the group into 2000 and 21. The 1⁄2 and 1⁄5 coins will choose from the 2000 in at most 13 flips. Use the 1⁄3 and 1⁄2 coins to divide the 21 into three groups of 7. Use the 4⁄7 coin to split 7 into 4 and 3, each of which can be chosen from with the 1⁄2 and 1⁄3 coins.
Further improvement with four coins: 2000/2021, 1⁄2, 1⁄5, and 20⁄21.
Use 2000/2021 to divide the group into 2000 and 21. 2000 is as before, using the 1⁄2 and 1⁄5. For the group of 21, use 20⁄21, then the group of 20 can be solved with the 1⁄2 and 1⁄5.
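The same style of check for the four-coin scheme (again my own sketch):

```python
# Verify the four-coin scheme (2000/2021, 1/2, 1/5, 20/21).

def flips_2_5(n):
    """Worst-case flips to choose uniformly from n = 2^a * 5^c people,
    using only the 1/2 and 1/5 coins."""
    cost = 0
    while n % 2 == 0:
        n //= 2
        cost += 1          # one 1/2 flip per factor of 2
    while n % 5 == 0:
        n //= 5
        cost += 3          # a 1/5 flip, then two 1/2 flips to split the 4/5
    assert n == 1
    return cost

assert 2000 + 21 == 2021
assert 20 + 1 == 21        # the 20/21 coin leaves a singleton, which needs 0 flips

worst = 1 + max(flips_2_5(2000),      # 1 + 13 = 14
                1 + flips_2_5(20))    # 2 + 5  = 7
print(worst)  # 14
```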
Considering the way these solutions all work, I doubt if there is one with three coins along these lines. UnexpectedValues claims to do it with just one coin, so he must be taking a completely different approach. I want to think about that before looking at his solution.
Consciousness of abstraction
The replacement indistinguishability is not transitive.
I assume that’s a typo for “is transitive”.
Regardless of how many are replaced in any order there cannot be a behavior change, even if it goes as A to B, A to C, A to D.
Why not? If you assume absolute identity of behaviour, you’re assuming the conclusion. But absolute identity is unobservable. The best you can get is indistinguishability under whatever observations you’re making, in which case it is not transitive. There is no way to make this argument work without assuming the conclusion.
In other words, hold off on proposing solutions.
One must then demonstrate that the statement between the hashtags is false. As I implied in my update, the statement between the hashtags is not necessarily true.
Then that undercuts the whole argument. That is exactly the argument by the beard. It depends on indistinguishability being a transitive property, but it is not. If A and B are, for example, two colours that you cannot tell apart, and also B and C, and also C and D, you may see a clear difference between A and D.
You cannot see grass grow from one minute to the next. But you can see it grow from one day to the next.
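To make the non-transitivity concrete, here is a toy numerical version (my own illustration; the threshold and values are made up), with “indistinguishable” meaning “differs by less than a fixed detection threshold”:

```python
# "Indistinguishable" as "differs by less than a detection threshold EPS"
# is not transitive: small, undetectable steps can add up to a visible gap.
EPS = 1.0

def indistinguishable(x, y):
    return abs(x - y) < EPS

a, b, c, d = 0.0, 0.9, 1.8, 2.7     # four "colours" on a one-dimensional scale
assert indistinguishable(a, b)
assert indistinguishable(b, c)
assert indistinguishable(c, d)
assert not indistinguishable(a, d)  # A and D differ visibly
```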
Doing them all at once doesn’t help. You are still arguing that if kN neurons make no observable difference, then neither do (k+1)N, for any k. This is not true, and the underlying binary concept that it either does, or does not, make an observable difference does not fit the situation.
Note that I’m not referring to gradual changes through time, but a single procedure occurring once that replaces N neurons in one go.
You refer to doing this k times. There is your gradual process, your argument by the beard.
If A is indistinguishable from B, and B is indistinguishable from C, it does not follow that A is indistinguishable from C.
This is the argument of the beard. You can pluck one hair from a bearded man and he still has a beard, therefore by induction you can pluck all the hairs and he still has a beard.
Or if you stipulate that replacing N neurons not merely causes no “significant” change, but absolutely no change at all, even according to observations that we don’t yet know we would need to make, then you’ve baked the conclusion into the premises.
Green ink is the stereotypical medium that cranks and crackpots write in.
Is the “[REDACTED]” in the belief as submitted?
Will you be posting the anonymous beliefs?
Here’s a discussion of someone who didn’t find working in VR particularly usable
The hyperlink is missing.
precious mentals
I like this coinage.
Eliezer covers this in the article:
Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true?
And he points out:
If the probabilities of various scenarios considered did not exactly cancel out, the AI’s action in the case of Pascal’s Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
and:
Consider the plight of the first nuclear physicists, trying to calculate whether an atomic bomb could ignite the atmosphere. Yes, they had to do this calculation! Should they have not even bothered, because it would have killed so many people that the prior probability must be very low?

The essential problem is that the universe doesn’t care one way or the other, and therefore events do not in fact have probabilities that diminish with increasing disutility.
There is also a paper, which I found and lost and found again and lost again, which may just have been a blog post somewhere, to the effect that in a certain setting, all computable unbounded utility functions must necessarily be so dominated by small probabilities of large utilities that no expected utility calculation converges. If someone can remind me of what this paper was I’d appreciate it.
ETA: Found it again, again. “Convergence of expected utilities with algorithmic probability distributions”, by Peter de Blanc.
Because they’ve always thought it was for the greater good before.