Congratulations. Now I’m going to read it.
to deceive you into believing that it is a V-maximizer.
If it is smart enough to know that it should tell you it’s a V-maximizer, then it’s smart enough to know that you wanted a V-maximizer.
I would expect people to react mainly to the part about the IMO gold medalist, even though the base rate for being an IMO gold medalist is higher than the base rate for authoring the most-reviewed Harry Potter fanfiction.
This is true, but the reaction is to the conjunction of IMO gold medalist and most-reviewed HP fanfic.
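To put rough numbers on it (these base rates are made up purely for illustration, not taken from anywhere), the conjunction is necessarily rarer than either part on its own:

```python
# Illustrative only: invented base rates, just to show why the conjunction is the surprising part.
p_imo_gold = 1e-5        # assumed probability a random person is an IMO gold medalist
p_top_hp_fanfic = 1e-7   # assumed probability of authoring the most-reviewed HP fanfic

# Even if the two were independent (they almost certainly aren't),
# the conjunction can be no more probable than the rarer of the two.
p_conjunction_upper_bound = min(p_imo_gold, p_top_hp_fanfic)
p_conjunction_if_independent = p_imo_gold * p_top_hp_fanfic

print(p_conjunction_upper_bound)     # 1e-07
print(p_conjunction_if_independent)  # 1e-12
```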
I want to upvote this again.
Well... I don’t think the process is too rigid. You can always discuss it in advance. Also, there are a few things that you know are better for you but are still unable to achieve. But yes, there is a risk. I just don’t think the risk is so great that it isn’t worth even giving this a try.
Besides, we don’t even know if this works yet!
It is. Judgment comes before.
I’m only suggesting this as a trick, once you’ve already figured out what it is that you need to do. I suppose I could offer my own feedback, but I was hoping to at least try it and see whether it worked over a larger sample.
Thanks for the input!
I’m not able to correct the hyperlink part, but I did change the name to Phil Goetz, as was due.
Make your bad habits the villains
It’s definitely a check, but not a very good one. There are too many intermediate facts in this case. It really depends on whether Q depends solely on Q′ or also on a number of other things (Q″, Q‴, …), provided of course that Q″ and Q‴ are not themselves dependent on A, B and C.
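As a rough sketch of what I mean (Q, Q′ and the rest are just stand-ins, and every probability below is made up), the strength of the check is basically a likelihood ratio, and extra causes of Q dilute it:

```python
# Case 1: Q is driven only by Q', which in turn tracks A, B, C.
# Observing Q is then fairly strong evidence.
p_q_given_qprime = 0.9
p_q_given_not_qprime = 0.1

# Case 2: Q also has other parents (Q'', Q''', ...) unrelated to A, B, C,
# so Q can easily be true even when Q' is false.
p_q_given_not_qprime_other_causes = 0.6

lr_case1 = p_q_given_qprime / p_q_given_not_qprime               # 9.0: a decent check
lr_case2 = p_q_given_qprime / p_q_given_not_qprime_other_causes  # 1.5: a weak check

print(lr_case1, lr_case2)
```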
A little obvious (to me perhaps, without adjusting for mind projection), but beautifully written.
To clarify: yes, this point has been covered in the community-aspect section of this post. I just wanted to highlight the importance of this change and raise its priority. Most importantly, work towards a litmus test. One obvious test, of course, is simply to watch the inputs coming in and check their validity, as a Bayesian would do in any case.
The problem with this is that you’ll probably already be stuck in the middle of the argument. So you’ll either have to press the point you think is correct, or nod along for the sake of avoiding a painful argument (which has more to do with being socially acceptable than with being right).
Screening for arguers is one way, but then you run the risk of interacting with a self-selecting group. This means the same ideas end up floating around, which in turn means you lose out on the biggest advantage of community: feedback from an outside perspective. That seems to me like an unacceptably high cost.
Empiricism, to me, always included experimentation; experimentation was a direct subset of it. But that’s probably just me (and maybe a few others).
The virtue I’m most concerned about is Argument. In my opinion, it can be either extremely productive (especially when people make suggestions that are nowhere near my own stream of consciousness) or extremely frustrating (for rather more obvious reasons).
One important way in which the twelve virtues can be optimized is to develop a sort of litmus test to distinguish between the two. There is a good chance that this has already been done though. Apt links will be appreciated.
In my opinion, sort of. Munroe probably left out the reasoning of the Bayesian for comic effect.
But the answer is that the Bayesian would be paying attention to the prior probability that the sun went out. He would therefore conclude that the sun didn’t actually go out, and that the detector said yes for a completely different reason (namely, both dice happened to come up six).
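A quick back-of-the-envelope version of that reasoning (the prior here is just a made-up stand-in for “astronomically unlikely”):

```python
# Rough Bayes calculation for the xkcd detector.
prior_sun_exploded = 1e-9  # assumed, purely illustrative

# The detector lies only when both dice come up six: probability 1/36.
p_yes_given_exploded = 35 / 36       # detector tells the truth
p_yes_given_not_exploded = 1 / 36    # detector lies and says yes anyway

posterior = (prior_sun_exploded * p_yes_given_exploded) / (
    prior_sun_exploded * p_yes_given_exploded
    + (1 - prior_sun_exploded) * p_yes_given_not_exploded
)
print(posterior)  # ~3.5e-08: still overwhelmingly likely the sun did not go out
```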
This is fantastic input. Thank you very much.
I am a little skeptical of the first technique of the wheel. I thought that was something I did naturally in any case. Of course, I do need to read the book to really figure out what’s happening here.
Also, I just realised that I didn’t quite answer your question. Sorry about that; I got carried away in my argument.
But the answer is no, I don’t have anything specific in mind. Also, I don’t know enough about things like what effects RL has on memory, preferences, etc. But I have a feeling I could design an experiment if I knew more about it.
Uhm, I kind of felt the pigeon experiment was a little misleading.
Yes, the pigeons did a great job of switching doors and learning through RL.
Human RL, however, seems to me to take place in a more subtle manner. While the pigeons seemed to focus on a more object-level productivity, human RL would seem to take a more complicated route.
But even that’s kind of beside the point.
In the article that Kaj posted above, with Amy Sutherland trying the LRS on her husband, it was interesting to note that the RL was happening at a rather unconscious level. In the Monty Hall problem-solving type of cognition, the brain is working at a much more conscious, active level.
So it seems more than likely to me that while RL works in humans, it gets easily overridden, if you will, by conscious, deliberate action.
One other point is also worth noting in my opinion.
Human brains come with a lot more baggage than pigeon brains. Therefore, it is more than likely that humans have learnt not to switch through years of reinforced learning, which makes it much harder to unlearn the same thing in a shorter period of time. The pigeons, having less cognitive load, may have a lot less to unlearn, which may have made it easier for them to learn the switching pattern.
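As a toy illustration of the switching dynamic (this is just a bare-bones simulated learner, not the actual pigeon experiment, and all the parameters are arbitrary), repeated reinforcement alone is enough to push a strategy towards switching:

```python
import random

# Toy sketch only: a learner that tracks how often "stay" vs "switch" has paid off
# over repeated Monty Hall trials and mostly picks whichever has the better record.

def monty_hall_trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    final = next(d for d in doors if d != pick and d != opened) if switch else pick
    return final == prize

random.seed(0)
wins = {"stay": 0, "switch": 0}
plays = {"stay": 0, "switch": 0}

for _ in range(5000):
    # Explore occasionally; otherwise exploit the strategy with the better record so far.
    if random.random() < 0.1 or plays["stay"] == 0 or plays["switch"] == 0:
        action = random.choice(["stay", "switch"])
    else:
        action = max(plays, key=lambda a: wins[a] / plays[a])
    plays[action] += 1
    wins[action] += monty_hall_trial(action == "switch")

print({a: round(wins[a] / plays[a], 2) for a in plays})  # switching ends up near 0.67
```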
I got 6 as the answer, basing it on (1) the presence of the inner circle and (2) the outer box apparently following a pattern.
But there’s a high chance I’m privileging my observations.
Uhm. Is there any known experiment with respect to RL that has been tried and failed?
In the sense: has there been an experiment where one says RL should predict X, but X did not happen? The lack of such a conclusive experiment would be some evidence in favor of RL, provided of course that the lack of such an experiment is not due to other reasons, such as an inability to design a proper test (indicating a lack of understanding of the properties of RL) or the experiment simply not happening due to real-world impracticalities (not enough attention having been paid to RL, not enough funding for a proper experiment to have been conducted, etc.).
He didn’t “fail”. You’re just talking about different things.