If we migrate, will some lesswrong.com urls become broken?
philip_b
So,
Helen::make yourself small = Impro::act as low status;
Helen::be low status = something like Impro::be seen by other people as low status (in this situation), or to deserve low Impro::status (in this situation)
Maybe you should do it with paper and a writing utensil; I can't really do math without external memory, and other people, probably including you, are bad at it too.
Can anyone provide a comparison between this book and Consciousness: An Introduction by Susan Blackmore? The latter was recommended to me, but after reading a chapter I wasn't impressed.
I haven’t, thanks.
Btw was your goal to show me the link or to learn whether I have seen it before? If the former, then I don’t need to respond. If the latter, then you want my response I guess.
How many hours did it take you to read the whole book and do all the exercises that you did? I am reading it too; so far I've spent somewhere between 12 and 22 hours and I'm at exercises 2.A. Also I recommend watching the Essence of Linear Algebra playlist at https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw/playlists to get (or remind yourself of) some geometric intuitions.
Adding resources to this thought experiment is just adding noise. If something other than life quality values matters in this model, then the model is bad.
A>B is correct under average utilitarianism and incorrect under total utilitarianism. The way to resolve this is to send average utilitarianism into the trash can, because it fails so many desiderata.
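A toy numeric illustration of how the ranking flips between the two views (the populations and welfare numbers here are made up purely for the example):

```python
# Two hypothetical populations: each list holds one welfare value per person.
A = [5] * 10    # 10 people at welfare 5
B = [4] * 100   # 100 people at welfare 4

def total(welfare):
    return sum(welfare)

def average(welfare):
    return sum(welfare) / len(welfare)

print(average(A), average(B))  # 5.0 4.0 -> average utilitarianism ranks A > B
print(total(A), total(B))      # 50 400 -> total utilitarianism ranks B > A
```

The same pair of worlds gets opposite rankings, which is the disagreement the comment above is pointing at.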
What does this have in common with https://www.lesswrong.com/posts/kK67yXhmDYwXLqXoQ/fundamentals-of-formalisation-level-1-basic-logic ?
An extremely small share of deaths is due to terrorist attacks (https://i.redd.it/5sq16d2moso01.gif, https://owenshen24.github.io/charting-death/), so this is not important, and people should care about such things less.
Can you elaborate? What is a constructive proof? Why should one care?
You say
Epistemic Status: Opinions stated without justification
but from the text it seems you believe that acting according to the described opinions is useful and that many of them are true. I don’t like this, I think you should clarify epistemic status.
Is this the first post in the sequence? It’s not clear.
What is the point of spending a section on dual maps, I wonder? Is the sole purpose to show that row rank equals column rank, I wonder? If so, then a lot of my time spent on exercises on dual maps might be wasted.
Could you please elaborate on what kind of culture fit MIRI requires?
How did you conclude that people who prepared GS are actually more likely to help than other people? Just from eyeballing 10/19 and 6/21, I can't conclude that this is enough evidence, only that it is suggestive.
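For what it's worth, the eyeballing can be made precise with a one-sided Fisher exact test on the 2x2 table (10 of 19 prepared helped vs 6 of 21 unprepared; this is my own calculation, not something from the post):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    P(cell a >= observed value) under the hypergeometric null."""
    n = a + b + c + d
    row1 = a + b          # size of the first group (prepared)
    col1 = a + c          # total number who helped
    denom = comb(n, row1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / denom
    return p

# 10/19 prepared helped vs 6/21 unprepared helped
p = fisher_one_sided(10, 9, 6, 15)
print(f"one-sided p = {p:.3f}")  # prints approximately 0.110
```

A p-value around 0.11 matches the intuition above: suggestive, but not conventionally significant.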
I find your predictions 1 through 3 not clearly defined.
Does the OpenAI bot need to defeat a pro team in unconstrained Dota 2 at least once during 2019? Or does it need to win at least one game and more than 50% of its games against pro teams in 2019?
Suppose Tesla releases video footage or a report of their car driving from one coast to the other, but with some minor or not-so-minor problems along the way. How minor would the problems have to be to count? Are humans allowed to help it recharge or anything like that?
How do you define “skilled” in SC II?
Is the point of your comment that you think people very rarely read (completely or almost completely) 3 books in one field?
(if yes, then I agree)
maybe recent machine learning topics are a point of comparative advantage
Do you mean recent ML topics related to AI safety, or just recent ML topics?
RAISE is already working on the former, it’s another course which we internally call “main track”. Right now it has the following umbrella topics: Inverse Reinforcement Learning; Iterated Distillation and Amplification; Corrigibility. See https://www.aisafety.info/online-course
I think this article is too vague: for almost all claims in it, I am not sure whether I understand the author correctly. Below I am posting my notes. If you want to help me and others clarify our understanding of this article, consider answering the questions in bold, or, if you see a mistake in my notes, correcting it. I also hope my notes help the author as a piece of feedback. I've only finished 2/3 of the article so far, but I'm posting the notes now because I might become less interested in this later.
Also it’s unfortunate that unlike in https://intelligence.org/2018/11/02/embedded-models/ version of this article we don’t have hyperlinks to explanations of various concepts here. Perhaps you could add them under the corresponding images? Or have images themselves be hyperlinks or reference links (like in academic articles) to the bottom of the document where all relevant links would be stored grouped by image number.
The post says an embedded agent can't hold an exact model of the environment in its head, can't think through the consequences of every potential course of action, and can't hold in its head every possible way the environment could be. I think this may not necessarily be true, and I am not sure what assumptions the author is using here.
It seems the whole article assumes countable probability spaces (even before the AIXI part). I wonder why and I wonder how realizability is defined for uncountable probability space.
--
Regarding relative bounded loss and what this bound is for, my best guess is as follows. Here I use non-conditional probability notation $P(A \cap B)$ instead of $P(A, B)$. Let $e$ be the elementary event that is actually true. Let "expert" $h$ be a (not necessarily elementary) event such that $P(h) > 0$. Then the loss of the expert is $-\log P(e \mid h)$. The loss of the prior is $-\log P(e)$. For their difference it holds that $-\log P(e) - (-\log P(e \mid h)) \le -\log P(h)$, because $P(e) \ge P(e \cap h) = P(e \mid h)\,P(h)$.
Initially, $P(h)$ is your initial trust in expert $h$, and in each case where it is even a little bit more correct than you, you increase your trust accordingly; the way you do this ensures you assign an expert probability 1 and hence copy it precisely before you lose more than $-\log P(h)$ compared to it.
Remember, $P(h \mid e) = P(e \mid h)\,P(h)/P(e)$. It follows that the probability of $h$ increases given evidence $e$ if and only if $P(e \mid h) > P(e)$, i.e. $h$ "is even a little bit more correct than you". But I don't understand the bit about copying the expert $h$ precisely before losing more than $-\log P(h)$. If the expert is an event, how can you "copy" it?
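Under my reading above, the bound can at least be checked numerically: a Bayesian mixture's cumulative log loss never exceeds any expert's loss by more than $-\log P(h)$. A minimal simulation, where the coin-flip "experts" and their uniform prior are invented for the example:

```python
import math
import random

random.seed(0)

# Each "expert" h predicts heads with a fixed probability; prior[i] is P(h_i).
experts = [0.2, 0.5, 0.8]
prior = [1 / 3, 1 / 3, 1 / 3]

# Observations drawn from a biased coin (True = heads).
observations = [random.random() < 0.8 for _ in range(200)]

posterior = prior[:]
mixture_loss = 0.0
expert_loss = [0.0] * len(experts)

for x in observations:
    likes = [q if x else 1 - q for q in experts]          # P(x | h) per expert
    p_mix = sum(w * l for w, l in zip(posterior, likes))  # mixture's P(x)
    mixture_loss += -math.log(p_mix)
    for i, l in enumerate(likes):
        expert_loss[i] += -math.log(l)
    # Bayes update: trust shifts toward experts that predicted better.
    posterior = [w * l / p_mix for w, l in zip(posterior, likes)]

# The mixture's loss is within -log P(h) of every expert's loss.
for loss_h, p_h in zip(expert_loss, prior):
    assert mixture_loss <= loss_h - math.log(p_h) + 1e-9
print("bound holds for all experts")
```

This doesn't resolve my question about what "copying" an event means, but it shows the loss inequality itself is the unmysterious part.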
Combining this with the previous idea about viewing Bayesian learning as a way of allocating “trust” to “experts” which meets a bounded loss condition, we can see the Solomonoff prior as a kind of ideal machine learning algorithm which can learn to act like any algorithm you might come up with, no matter how clever.
It is assuming all possible algorithms are computable, not that the world is.
I don’t understand this. Our probability space is the Cartesian product of the set of all possible UTM programs and the set of all possible initial configurations of the UTM's working tape. Or, equivalently, the set of outputs of the UTM under these conditions. Hence our whole hypothesis space includes only computable worlds. What does “can learn to act like any algorithm” mean here? “It’s getting bounded loss on its predictive accuracy as compared with any computable predictor.” Huh? Does “predictor” here mean an expert h? If yes, what does it mean for h to be computable, and why? All in all, is the author claiming it’s impossible to have a better computable predictor than AIXI with the Solomonoff prior, even if the probability space contains non-computable worlds?
probabilities may not be calibrated
identification of causal structure may not work
What do these mean? I only know informally what calibration means related to forecasting.
So, does AIXI perform well without a realizability assumption?
How is AIXI even defined without realizability, i.e. when the actual world isn’t in the probability space, or it has zero prior probability?
This is fine if the world “holds still” for us; but because the map is in the world, it may implement some function.
Is this about the world changing because of the agent just thinking? Or something else?
It should be noted, though, that there are additional barriers to getting this property in a game-theoretic setting; so in their common usage cases, “grain of truth” is technically demanding while “realizability” is a technical convenience.
...
In game theory, on the other hand, the assumption itself may be inconsistent. This is because games commonly yield paradoxes of self-reference.
From the former paragraph I don’t understand anything except that (the author claims) game theory has more problems with grain of truth / realizability, than AIXI. After the latter paragraph, my best guess is: for any game, if there is no pure strategy equilibrium in it, then we say it has no grain of truth, because for every possible outcome rational agents wouldn’t choose it.
If we put weight in both places until a proof rules one out, the beliefs just oscillate forever rather than doing anything useful.
Weights represent possible worlds, therefore they are on the scales right from the beginning (the prior); we never put new weights on the scales. My probably incorrect guess of what the author means: some agent acts like AIXI, but instead of updating on pieces of evidence as soon as it receives them, it stockpiles them, and at some point it (boundedly) searches for proofs that these pieces of evidence favor some hypothesis and performs the update only when it finds such a proof. But still, why oscillation?
Any computable beliefs about logic must have left out something, since the tree will grow larger than any container.
I interpret it as: there are infinitely many theorems, hence an agent with a finite amount of space or a finite number of computation steps can't process all of them.
Another consequence of the fact that the world is bigger than you is that you need to be able to use high-level world models: models which involve things like tables and chairs.
This is related to the classical symbol grounding problem; but since we want a formal analysis which increases our trust in some system, the kind of model which interests us is somewhat different. This also relates to transparency and informed oversight: world-models should be made out of understandable parts.
No idea what the second quoted paragraph means.
All in all, I doubt that high-level world models are necessary. And it's very unclear what is meant by “high level” or “things” here. Perhaps embedded agents can (boundedly) reason about the world in other ways, e.g. by modeling only part of the world.
https://intelligence.org/files/OntologicalCrises.pdf explains the ontological crisis idea better. Suppose our AIXI-like agent thinks the world is an elementary outcome of some parameterized probability distribution with parameter θ, where θ is either 1 or 2. We call the set of elementary outcomes with θ=1 the first ontology (e.g. possible worlds running on classical mechanics), and the set of elementary outcomes with θ=2 the second ontology (e.g. possible worlds running on superstring theory). The programmer has only programmed the agent's utility function for the θ=1 part, i.e. a function u from ontology 1 to the real numbers. The agent keeps track of which value of θ is more probable and chooses actions by considering only the current ontology. If at some point it decides that the second ontology is more useful, it switches to it. The agent should then extrapolate the utility function to the θ=2 part. How can it do that?
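One natural candidate answer is to define a correspondence from θ=2 states back to θ=1 states and pull u back through it; whether such a correspondence exists, and which one to pick, is exactly the hard part. A toy sketch, where the state names and the correspondence map are entirely hypothetical:

```python
# Ontology 1 (theta=1): states the programmer anticipated, with utilities.
u1 = {"particle_here": 1.0, "particle_there": 0.0}

# Hypothetical correspondence: each theta=2 state is mapped to the theta=1
# state it best approximates. Choosing this map is the unsolved problem;
# here it is simply asserted.
correspondence = {
    "string_mode_a": "particle_here",
    "string_mode_b": "particle_there",
}

def u2(state):
    """Utility on ontology 2, pulled back through the correspondence."""
    return u1[correspondence[state]]

print(u2("string_mode_a"))  # 1.0
print(u2("string_mode_b"))  # 0.0
```

Everything interesting is hidden inside `correspondence`: a real agent would have to construct it from the structure of the two ontologies rather than have it handed over.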
I would like to add to Vanessa Kowalski's comment that it would be useful not only to discuss academic disciplines separately, but also to look at the academia of different countries separately. Are y'all talking about academia in the US or in the whole world? I suspect the former. Is it like that in Europe too? What about China? Australia? Japan? India? Russia?