It is the truth, but you are explicitly saying those words so that the hearer (the patient) forms a false belief about the world. So it cannot really be truthful, because most people in that situation would, after hearing example 3, believe that they are being given something that has more effect than a placebo.
AndrewH
Taking progress in AI to mean more real-world effectiveness:
Intelligence seems to have jumps in real-world effectiveness: the brains of great apes and humans are very similar, yet the difference in effectiveness is obvious.
So concluding that we are fine, based on the state of the art not being any more effective (not making progress), would be very dangerous. Perhaps tomorrow some team of AI researchers will combine the current state-of-the-art solutions in just the right way, resulting in a massive jump in real-world effectiveness? Maybe enough to have an “oh, shit” moment?
Regardless of the time frame, if the AI community is working towards AGI rather than FAI, we will likely (eventually) have an AI go FOOM, or at the very least an “oh, shit” moment (I’m not sure if they are equivalent).
That’s teaching for you: the raw truth of the world can be difficult to understand in the context of what you already ‘know’ (Religion → Evolution), or difficult to understand in its own right (quantum physics).
This reminds me of the “Lies to Humans” of Hex, the thinking machine of Discworld, where Hex tells the Wizards the ‘truth’ of something, couched in things they understand, basically to shut them up rather than to actually tell them what is really happening.
In general, a person cannot jump from any preconceived notion of how something is to the (possibly subjective!) truth. Instead, to teach, you tell lesser and lesser lies, which in the best case may simply be more and more accurate approximations of the truth. Throughout, you, the teacher, have been as honest to the learner as you can be.
But when someone has a notion of something that is wrong enough, I can see how these steps could, in themselves, contain falsehoods that are not approximations of the truth. Is this honest? To teach a flat-Earther that the world is round, perhaps a step is to consider the world being convex, so as to explain why ships disappear over the horizon.
If your goal is to get someone’s understanding closer to the truth, it may be rational, but the steps you take, the things you teach, might not be honest.
WRT eugenics and other seemingly nasty solutions, it is as they say: sometimes it has to get worse to get better. No option that causes short-term harm that is obvious to the voting population, but long-term benefits to the population as a whole, is going to be considered by politicians who want to be elected again.
It seems to me that the science and rationality that allow more than a shot-in-the-dark probability of some social engineering project working only came about recently (in the case of eugenics, post-Darwin). By the time it was possible to do these sorts of projects, it was not possible to do them because of the national (and international) outcry that would result.
So this really cuts off a great many possible projects that could benefit humanity in the long term. Is this good or bad? It depends on how far you are looking into the future, and whether or not you think AGI is possible!
I hope those nine guys in that basement are working hard.
Ease of entry and exit is really important. I want to be able to enter the world and enter a discussion asap, but I don’t want to feel compelled to stay for long periods of time.
So I think a browser-based program would be better than Second Life.
But I think having a place such as Second Life would be a good addition compared to what we have now with LW. Having a place where people like ourselves can discuss things in practically real time would, I think, be useful in helping to create this community of Rationalists.
Mechanisms that make it feel like we really are living together, such as a detailed virtual world and even virtual houses, could help in building the community and keeping people participating in it. And of course, the added benefit is that we don’t need to be physically close to each other, but could get the benefits as if we were (given a detailed enough environment).
In many cases, I suspect that people adopt false beliefs, and the ensuing dark side, for short-term emotional gain, but in the long term the instrumental loss outweighs this.
That may be one way of adopting the first set of false beliefs. Once the base has been laid (perhaps containing many flaws to hide the falseness), then in evaluating a new belief, it doesn’t need to have short-term emotional gain to be accepted, as long as it fits in with the current network of beliefs.
When I think of this, I think of missionaries promising that having faith in God will help people through the bad times. Then, after the converts accept that, the missionaries move on to the usual discussion of Hell, and how if only you do what they say, you’ll be fine.
Not only that, it becomes a glue that binds people together: the more agreement, the stronger the binding (and the more people get bound). At least, that is the analogy I use when I look at this; we (rationalists) have no glue, they (religions) have too much.
An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts where, if you save a house full of children, you become a hero. Coupled with the fact that people extrapolate the future from the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.