Another one on computing: The Elements of Computing Systems. This book explains how computers work by teaching you to build a computer from scratch, starting with logic gates. By the end you have a working (emulated) computer, every component of which you built. It’s great if you already know how to program and want to learn how computers work at a lower level.
interstice
An interesting post, but I don’t know if it implies that “strong AI may be near”. Indeed, the author has written another post in which he says that we are “really, really far away” from human-level intelligence: https://karpathy.github.io/2012/10/22/state-of-computer-vision/.
How about you ask the AI “if you were to ask a counterfactual version of you who lives in a world where the president died, what would it advise you to do?”. This counterfactual AI is motivated to take nice actions, so it would advise the real AI to take nice actions as well, right?
What resources would you recommend for learning advanced statistics?
I think the idea is that you’re supposed to deduce the last name and domain name from identifying details in the post.
Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest?
Dominic Cummings asks for help in aligning the incentives of political parties. Thought this might be of interest, as aligning incentives is a common topic of discussion here, and Dominic is someone with political power (he ran the Leave campaign for Brexit), so giving him suggestions might be a good opportunity to see some of the ideas here actually implemented.
A Candidate Complexity Measure
usernameneeded@gmail.com
Hope it’s not too late, but I also meant for this post (linked in the original) to be part of my entry:
https://www.lesserwrong.com/posts/ra4yAMf8NJSzR9syB/a-candidate-complexity-measure
While the concept of explicit solution can be interpreted messily, as in the quote above, there is a version of this idea that more closely cuts reality at the joints: computability. A real number is computable iff there is a Turing machine that outputs the number to any desired accuracy. This covers fractions, roots, implicit solutions, integrals, and, if you believe the Church-Turing thesis, anything else we will be able to come up with. https://en.wikipedia.org/wiki/Computable_number
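As a concrete sketch of what "outputs the number to any desired accuracy" means (a Python illustration, not part of the original comment): here the computable real is sqrt(2), approximated by interval bisection with exact rational arithmetic.

```python
from fractions import Fraction

def sqrt2_approx(eps: Fraction) -> Fraction:
    """Return a rational within eps of sqrt(2), by bisecting [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:   # mid is below sqrt(2)
            lo = mid
        else:               # mid is at or above sqrt(2)
            hi = mid
    return lo

# "Any desired accuracy": the loop halts for every eps > 0.
approx = sqrt2_approx(Fraction(1, 10**6))
```

The invariant is that lo² < 2 ≤ hi² throughout, so the returned value is always within eps below sqrt(2); the same loop structure works for any root you can test with a rational comparison.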
re: differential equation solutions, you can compute if they are within epsilon of each other for any epsilon, which I feel is “morally the same” as knowing if they are equal.
It’s true that the concepts are not identical. I feel computability is like the “limit” of the “explicit” concept, as a community of mathematicians comes to accept more and more ways of formally specifying a number. The correspondence is still not perfect, as different families of explicit formulae will have structure (e.g. algebraic structure) that general Turing machines will not.
Don’t know if this counts as a ‘daemon’, but here’s one scenario where a minimal circuit could plausibly exhibit optimization we don’t want.
Say we are trying to build a model of some complex environment containing agents, e.g. a bunch of humans in a room. The fastest circuit that predicts this environment will almost certainly devote more computational resources to certain parts of the environment, in particular the agents, and will try to skimp as much as possible on less relevant parts such as chairs, desks etc. This could lead to ‘glitches in the matrix’ where there are small discrepancies from what the agents expect.
Finding itself in such a scenario, a smart agent could reason: “I just saw something that gives me reason to believe that I’m in a small-circuit simulation. If it looks like the simulation is going to be used for an important decision, I’ll act to advance my interests in the real world; otherwise, I’ll act as though I didn’t notice anything”.
In this way, the overall simulation behavior could be very accurate on most inputs, only deviating in the cases where it is likely to be used for an important decision. In effect, the circuit is ‘colluding’ with the agents inside it to minimize its computational costs. Indeed, you could imagine extreme scenarios where the smallest circuit instantiates the agents in a blank environment with the message “you are inside a simulation; please provide outputs as you would in environment [X]”. If the agents are good at pretending, this could be quite an accurate predictor.
By “predict sufficiently well” do you mean “predict such that we can’t distinguish their output”?
Unless the noise is of a special form, can’t we distinguish $f$ and $\tilde{f}$ by how well they do on $f$’s goals? It seems like for this not to be the case, the noise would have to be of the form “occasionally do something weak which looks strong to weaker agents”. But then we could get this distribution by using a weak (or intermediate) agent directly, which would probably need less compute.
Couldn’t you say the same thing about basically any problem? “Problem X is really quite simple. It can be distilled down to these steps: 1. Solve problem X. There, wasn’t that simple?”
The weight could be something like the algorithmic probability over strings (https://en.wikipedia.org/wiki/Algorithmic_probability), in which case universes like ours with a concise description would get a fairly large chunk of the weight.
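To make the weighting concrete (a toy Python sketch, my own illustration: the "machine" below is a made-up stand-in for a universal Turing machine, not the real thing): algorithmic probability gives a string x weight roughly the sum of 2^-|p| over programs p that output x, so strings with short, lawful descriptions dominate.

```python
from itertools import product
from typing import Optional

def toy_machine(program: str) -> Optional[str]:
    """Toy 'universal' machine (illustrative only, NOT a real UTM):
    a program is '1'*k + '0' + pattern; output = pattern repeated k times."""
    if "0" not in program:
        return None                 # malformed program: no terminator
    k = program.index("0")          # unary repeat count
    pattern = program[k + 1:]
    return pattern * k

def toy_algorithmic_probability(x: str, max_len: int = 16) -> float:
    """Approximate m(x) = sum of 2^-|p| over programs p with U(p) = x,
    by brute-force enumeration of all programs up to max_len bits."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if toy_machine("".join(bits)) == x:
                total += 2.0 ** -n
    return total

# A 'lawful' (periodic) string gets far more weight than a typical random one:
print(toy_algorithmic_probability("01" * 6) > toy_algorithmic_probability("011010001101"))
```

The periodic string has several short programs generating it, while the patternless string of the same length is reachable only by quoting it verbatim, which is the analogue of a chaotic universe with no concise laws getting negligible weight.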
The idea of a universe “without preset laws” seems strange to me. Say for example that you take your universe to be a uniform distribution over strings of length n. This “universe” might be highly chaotic, but it still has an orderly short description—namely, as the uniform distribution. More generally, for us to even SPEAK about “a toy universe” coherently, we need to give some sort of description of that universe, which basically functions as the laws of that universe (probabilistic laws are still laws). So even if such universes “exist” (whatever that means), we couldn’t speak or reason about them in any way, let alone run computer simulations of them.
I largely agree with your conception. That’s sort of why I put scare quotes around exist—I was talking about universes for which there is NO finite computational description, which (I think) is what the OP was talking about. I think it would basically be impossible for us to reason about such universes, so to say that they ‘exist’ is kind of strange.
You could think of the ‘advice’ given by evolution being in the form of a short program, e.g. for a neural-net-like learning algorithm. In this case, a relatively short string of advice could result in a lot of apparent optimization.
(For the book example: imagine a species that outputs books of 20Gb containing only the letter ‘a’. This is very unlikely to be produced by random choice, yet it can be specified with only a few bits of ‘advice’)
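The gap between output size and description size can be seen directly with a compressor (a Python sketch of my own, with the sizes scaled down from the 20Gb example):

```python
import zlib
import random

# A 'species' that always outputs the letter 'a': huge output, tiny description.
lawful = b"a" * 1_000_000
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1_000_000))

# The all-'a' string compresses to a tiny fraction of its size (a few bits
# of 'advice' suffice); the random string is essentially incompressible.
print(len(zlib.compress(lawful)))
print(len(zlib.compress(noisy)))
```

Compressed size is only an upper bound on description length, but the contrast makes the point: "very unlikely under random choice" and "cheap to specify" are entirely compatible.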
How does this differ from indifference?