The most glaring problem I see with this theory is that it would allow a potted spider plant to be considered a ‘person’, and Patrick Bateman to be considered ‘healthy’.
I don’t think knowledge of computer programming can be applied to brains through analogies involving “adding computational power” or “improving algorithms”. A computer’s processor, memory, algorithms and data are strictly conceptually separate and each can be modified without causing any change to the others. That’s not at all the case with a brain.
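To make the computer side of that contrast concrete, here's a minimal Python sketch (the data and the algorithms are arbitrary examples): the algorithm can be swapped out wholesale without touching the data or the result, a kind of modularity brains don't have.

```python
# Two interchangeable sorting algorithms over the same data: swapping the
# algorithm changes nothing about the data or the result, illustrating how
# cleanly these layers separate in a computer.

def insertion_sort(items):
    result = list(items)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

data = [5, 3, 8, 1]
assert insertion_sort(data) == sorted(data)  # different algorithm, same output
assert data == [5, 3, 8, 1]                  # and the data itself is untouched
```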
Although not directly contradictory, the idea expressed in the quote is somewhat at odds with libertarianism, which is popular on LW.
Could you tell me about ugh fields you’ve experienced, and about any instances of selective search, fake justification, etc. that you can call to mind?
If a thought with unpleasant implications comes up, I’m tempted to quickly switch to a completely different, more pleasant topic. Usually this happens in the context of putting off some unpleasant task, but I could imagine it also happening if I believed in God or had some other highly cherished belief. I can’t think of any beliefs that I actually feel that strongly about, though.
I do sometimes come up with plausible excuses or fake justifications if I’m doing something that someone might disapprove of, in case they confront me about it. I don’t remember ever doing that for my own benefit. I can’t remember doing a selective search either, but of course it’s possible I do it without being aware of it.
I just thought of another thing that might be relevant: I find moralizing less appealing than seems to be typical.
Also, what modality do you usually think in—words, images, … ?
I’m not sure how to describe it. Sort of like wordless analogies or relations between concepts, usually combined with some words and images. But also sometimes words or images by themselves.
Also, what do you do when you e.g. desire a cookie, but have previously decided to reduce cookie-consumption?
Distract myself by focusing on something else. If my thoughts keep coming back to eating cookies, I might try imagining something disgusting like eating a cookie with maggots in it.
After reading the comments here I think I might be a person who doesn’t rationalize, or my tendency to do so is well below the norm. I previously thought the Litany of Tarski was about defeating ugh fields; I do experience those. I’m willing to answer questions about it, if that would help.
Probably Matt, although he might tell you to just create a new account.
Because it isn’t settled whether harming two different people is worse than harming two identical copies of one person.
They are Latin abbreviations: i.e. stands for “id est”, meaning “that is”; e.g. stands for “exempli gratia”, meaning “for the sake of example”.
if she doesn’t, then for the sake of my health I’ll feel obliged to stay out of that room while the fire’s burning
The smoke goes out the chimney, then back inside through any openings. There’s not much reason to assume that the room with the fireplace has a higher concentration of smoke than anywhere else in the house while the fire is hot. However, if the fire is left to burn down to embers, there may not be enough heat to force all of the smoke through the chimney.
So if you’re going there anyway, you might as well enjoy the fire, at least until it starts to die down.
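For anyone who wants numbers: the draft comes from the stack effect, and a back-of-the-envelope estimate shows how much it weakens as the flue gas cools. The chimney height and temperatures below are illustrative guesses, not measurements.

```python
# Rough stack-effect estimate: draft pressure across a chimney of height h is
# dP = C * a * h * (1/T_out - 1/T_in), with C ~ 0.0342 K/m for air.

C = 0.0342      # K/m, approximately g*M/R for air
a = 101325.0    # Pa, atmospheric pressure
h = 6.0         # m, chimney height (assumed)
T_out = 273.0   # K, outside air at 0 C

def draft(T_flue):
    return C * a * h * (1.0 / T_out - 1.0 / T_flue)

print(draft(500.0))  # hot fire (~230 C flue gas): roughly 35 Pa of draft
print(draft(320.0))  # smoldering embers (~50 C): roughly 11 Pa, a third as much
```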
As far as I know, every simple rule either leaves trivial loopholes, or puts the AI on the hook for a large portion of all the energy (or entropy) in its future light cone, a huge amount which wouldn’t be meaningfully related to how much harm it can do.
If there is a way around this problem, I don’t claim to be knowledgeable or clever enough to find it, but this idea has been brought up before on LW and no one has come up with anything so far.
Since energy cannot be created or destroyed, and has a strong tendency to spread out every which way through surrounding space, you have to be really careful how you draw the boundaries around what counts as “consumed”. Solving that problem might be equivalent to solving Friendliness in general.
Evolution hasn’t caught up with the fact that calories are now easy to obtain, so there’s probably some low-hanging fruit available in subverting the brain’s energy-conserving mechanisms. (I don’t know whether tDCS is doing that.)
I don’t think that type of comment should be downvoted except when the author can’t take a hint and keeps posting the same false idea repeatedly. Downvoting false ideas won’t prevent well-intentioned people from making mistakes or failing to understand things; mostly it will just discourage them from posting at all, to whatever extent they’re bothered by the possibility of downvotes.
Only if you know exactly what you’re doing, and never use an FBI-monitored exit node or visit an FBI-controlled honeypot site. An adversary that can monitor both the encrypted traffic coming from you and the unencrypted traffic leaving an exit node or a honeypot hidden service is outside Tor’s threat model.
I don’t understand why jimrandomh is claiming that Tor is “perfect”. It certainly can be effective, but like most crypto, using it safely requires an adequate familiarity with its brittle points.
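To make the threat model concrete, here’s a toy sketch of the traffic-confirmation idea (synthetic timestamps and a naive correlation measure, nothing like a real attack tool): an adversary who sees packet timing at both ends never needs to break the encryption.

```python
import random

# Toy traffic confirmation: the adversary sees packet timestamps on a client's
# entry connection and at an exit node, and checks whether the flows line up.

random.seed(0)
client_times = sorted(random.uniform(0, 60) for _ in range(200))
# The exit sees the same flow, shifted by network latency plus jitter.
exit_times = [t + 0.3 + random.gauss(0, 0.02) for t in client_times]
# An unrelated flow, for comparison.
other_times = sorted(random.uniform(0, 60) for _ in range(200))

def bucket_counts(times, width=1.0, total=60.0):
    counts = [0] * int(total / width)
    for t in times:
        counts[min(int(t / width), len(counts) - 1)] += 1
    return counts

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

entry = bucket_counts(client_times)
print(correlation(entry, bucket_counts(exit_times)))   # close to 1: same flow
print(correlation(entry, bucket_counts(other_times)))  # near 0: unrelated flow
```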
You can prove that each transformation in a small set preserves the behavior of the subsection of the program it touches; then you can compose arbitrarily many of those transformations and still preserve the input-to-output map of the larger program.
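A toy version of that idea in Python (the expression language and the single constant-folding rule are invented for illustration): each local rewrite computes the same value as the subtree it replaces, so applying it everywhere, any number of times, preserves the whole program’s input-to-output map.

```python
# Toy expression language: nested tuples like ('add', ('mul', 2, 3), 'x').
# One local, semantics-preserving rewrite (constant folding), applied
# bottom-up; since each application preserves the value of the subtree it
# rewrites, any number of applications preserves the whole expression.

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == 'add' else x * y

def fold_constants(expr):
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = fold_constants(a), fold_constants(b)
    if isinstance(a, int) and isinstance(b, int):
        return evaluate((op, a, b), {})  # local rewrite: compute it now
    return (op, a, b)

program = ('add', ('mul', 2, 3), ('mul', 'x', ('add', 1, 1)))
folded = fold_constants(program)
print(folded)  # ('add', 6, ('mul', 'x', 2))
for x in range(-5, 6):  # spot-check: same input-to-output map
    assert evaluate(program, {'x': x}) == evaluate(folded, {'x': x})
```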
Both the home and datacenter markets seem to be shifting away from raw power and towards energy efficiency (i.e. maximizing computing power per watt), which increases battery life and decreases datacenter costs. This might actually end up propping up Moore’s law anyway: the more efficient transistors get, the more of them can be put on the same chip without overheating.
This will bottom out too, eventually, when a battery charge lasts longer than the device itself, or datacenter power and cooling costs become negligible.
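Rough arithmetic behind the “efficiency props up Moore’s law” point, using the standard dynamic-power approximation P ≈ αCV²f per transistor; all the process numbers below are made up for illustration.

```python
# Toy thermal-budget arithmetic: dynamic switching power per transistor is
# roughly P = alpha * C * V^2 * f, so at a fixed chip power budget the number
# of transistors you can switch at once scales inversely with that figure.
# Every constant here is an illustrative guess, not a real process number.

alpha = 0.1        # activity factor: fraction of transistors switching
f = 3e9            # Hz, clock frequency

def active_budget(C, V, tdp=100.0):
    per_transistor = alpha * C * V**2 * f
    return tdp / per_transistor

old = active_budget(C=1e-15, V=1.2)    # older process: 1 fF, 1.2 V
new = active_budget(C=0.5e-15, V=0.9)  # more efficient: half the capacitance, lower voltage
print(old, new, new / old)             # efficiency gain -> ~3.6x more switchable transistors
```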
If you use Firefox, you may be interested in this.
I don’t know of any studies, but there are many anecdotal reports about this.
I don’t understand your point. How would you demonstrate macroscopic decoherence without creating a coherent object which then decoheres?
Well, yes and no.