plex
I’ve read all the posts in the Basic Quantum Mechanics section, plus many of the links from it and a handful of others (working through the rest, I’m still only three days into this). Quantum mechanics is something I’d had vague explanations of from education and discussions with educated people, but it seemed extremely complicated and confusing due to almost precisely the issues the sequence points out with the normal way it’s taught. Thank you for putting down the steps needed to walk me through rewriting my basic assumptions of reality to more accurately reflect how reality likely works; it’s been very fun and interesting. I’m starting to feel like a native of the quantum universe, and.. it kinda makes sense. Definitely a whole lot more sense than my previous mangled understanding of probabilities and wave/particle duality. Having a base-level reality which works very differently from the high-level phenomena which feel more intuitive does not seem like a great surprise.
Anyway, one idea I’ve had which seems interesting to me, but which I am not yet knowledgeable enough to evaluate properly and would like thoughts on:
Would you, under the many worlds interpretation, be able to experimentally test whether a universe is infinite in time but not space?
I know that infinite time+finite space is not a favored model in cosmology currently, but it’s still interesting to me if quantum physics testably disproves a whole class of possible universes. And if by this (or similar) reasoning an infinite time/finite space universe is found to be incompatible with many worlds, then finding extremely strong evidence of an infinite time/finite space universe (highly unlikely as I understand it) would perhaps bring many worlds into question.
Possible line of reasoning:
In a universe with finite space, there is a finite configuration space (finite amount of physical space, so finite possible universal states).
Any particular blob of amplitude/branch/world will eventually evolve into a state of/near maximum entropy.
A maximum-entropy state is not entirely static even if no work can be extracted from it, so it is not a fixed point in configuration space.
A non-static point in finite configuration space left to move for infinite time will eventually visit all possible arrangements of amplitude (configurations), infinitely many times (a toy illustration of this step is sketched below, after the chain of points). This includes Configuration A, which can be any possible point in configuration space.
In both (particle left, sensor measures LEFT, human sees “LEFT”) and (particle right, sensor measures RIGHT, human sees “RIGHT”) blobs of amplitude, the universe evolves differently for a vast amount of time after the heat death of the universe, but given infinite time will at some point reach Configuration A with probability 1.
Since both blobs of amplitude will, despite diverging for an unimaginable length of time, arrive at the same configuration as each other with probability 1, they are fully coherent, allowing them to interact, and this is testable (and already falsified).
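To make the intuition in point four concrete, here’s a toy sketch of my own (not anything from the sequence, and definitely not a model of real quantum dynamics): treat the configuration space as a small finite set of states and let the system wander between them at random. For an irreducible walk like this, every state is reached in finite time with probability 1, so two runs started from different states, standing in for the two decohered blobs of amplitude, both eventually pass through the same “Configuration A”. The state count and transition rule are made up purely for illustration.

```python
import random

# Toy model: a finite "configuration space" of N states, with a random walk
# standing in for the (vastly more complicated) dynamical evolution.
# This only illustrates the recurrence intuition of point four; it is not
# a model of actual amplitude flow.
N = 20        # number of configurations (arbitrary toy value)
TARGET = 7    # stand-in for "Configuration A"

def steps_until_target(start, rng):
    """Walk randomly over the finite state space until TARGET is reached."""
    state = start
    steps = 0
    while state != TARGET:
        # Simple irreducible rule: hop to a random neighbour, wrapping around.
        state = (state + rng.choice([-1, 1])) % N
        steps += 1
    return steps

rng = random.Random(0)
# Two "branches" that decohered into different configurations...
left_branch = steps_until_target(start=3, rng=rng)
right_branch = steps_until_target(start=15, rng=rng)
# ...both still reach the same target configuration after finitely many steps.
print(left_branch, right_branch)
```

Obviously the real configuration space is unimaginably bigger (and point one is about whether it’s even finite), but the recurrence step itself only needs finiteness plus the non-static dynamics from point three.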
Points one, three, and four seem to me like the most likely weak links, but I’d be interested to know why this is not the case if it is indeed not the case. Perhaps at maximum entropy each branch gets stuck in a unique infinite loop rather than visiting the rest of configuration space?
If the chain of reasoning holds and leads to the conclusions.. a stronger version of this argument could perhaps be constructed for a universe infinite in both time and space (depending on whether indefinitely expanding thermodynamic systems will reach all possible configurations given infinite time), though I’m already feeling somewhat out of my depth dealing with the weaker argument.
hm, from what I’ve been taking from the sequence, quantum physics seems to apply fully at all levels, and the idea of it working differently or not applying is simply a matter of scale. For example, an event causing a “split” that affects significantly macro objects decoheres the branches almost entirely, but not perfectly, so there’s no kind of hard cutoff. Large systems definitely appear to work differently when you look at them on a large scale, but.. that appearance, or classical hallucination, is just an emergent property of underlying quantum effects.
Saying that quantum mechanics itself breaks down.. does not fit with the mental picture of reality I’ve taken from this: reality as entirely locally computable, with higher-level effects based entirely on the base-level substrate’s behavior. I’d like you to clarify what you mean by “break down”, and preferably how reality would choose where to draw any line between scales where quantum mechanics does and does not break down.
I have read quantum physics has issues with gravity, perhaps that is what you’re referring to? If so, I’d be interested in recommended further reading.
What I think you’re saying, correct me if I’m wrong, is that there are a few big unknowns as to how QM applies to gravity or on cosmological scales, and because of this the answer to my chain of reasoning is “we just don’t know”? That there are major unknowns is entirely reasonable/accurate, but.. I’m struggling to see exactly how the very real and important unknowns apply specifically to my reasoning.
Simply put: Where, in the line of reasoning, do you think the unknown of quantum gravity trips up the logic, and why?
It seems quite possible that in discovering the answers behind the big unknowns we’ll change some underlying assumptions and render my reasoning unworkable. But I don’t see where in the line of reasoning not knowing the vacuum energy density, or quantum gravity, causes a problem. And given that, it seems like working with the best available theory means applying certain aspects of QM at universal scale is not unreasonable, though we should expect we may need to update models once some big unknowns are resolved.
I think my use of emergence does not fall into the emergence/magic trap, since I am not attempting to explain anything about how large-scale systems behave through emergence. My statement is purely that, whatever the details of how macro systems work, the large-scale effects are caused by local physics being consistently applied, and only appear to work differently because we’re taking a larger view. Even though I used the word “emergence”, my sentence can be reworded with my intended meaning if you swap it to “emerges from”, which is specifically allowed by that post.
Also, you think my picture of reality as locally computable is “an unfortunate side-effect of EY’s tone in the QM sequence”? If that’s the case, do you dispute reality as locally computable? I’d be interested in sources which coherently argue for reality being non-locally computable.
Okay, I think I see where you’re coming from better now. I have read that link, and at least feel like I conceptually understand some of the problems with applying quantum physics to the large scale. However, I’m still very curious as to exactly how the incompatibility in theories applies to this specific argument, and curious as to whether looking at a purely quantum universe (making the assumption that there is some way to derive relativistic experimental results from QM that we’ve missed, rather than that QM needs major changes) would give the results I’m describing, or whether I’m misunderstanding something about amplitude or thermodynamics in a heat death.
Hm, how to explain clearly.. It seems like what’s being said is that QM is at odds with observation (vacuum energy density) and at odds with our other best theory, relativity (event horizons; thanks for chiming in, shminux), so QM is wrong or incomplete in some way. I accept this as a likely conclusion, though I do not understand either theory deeply enough to be able to follow the arguments for inconsistency in full.
However, dismissing a thought experiment about a widely used theory, one with some possible implications, because the underlying theory (QM) is wrong/incomplete for other reasons seems.. limiting. (If I’ve not missed anything and have understood various things better than I’d guess I have, that chain of reasoning could show a certain interpretation (MW) is incompatible with finite space+infinite time, while a different interpretation (collapse) would not be.) Even if the line of reasoning only holds meaning with the assumption that the universe is fundamentally quantum and local, and that macro effects are all explainable in principle by the laws which govern the smallest parts, I’m interested in whether or not it holds.
I’m primarily trying to refine my mental model of how decoherence works with these thoughts. What I’m after is an answer focused on whether a quantum universe would, from our current understanding of quantum physics, do as I suppose (that is, in finite space+infinite time, branches could never even slightly decohere, since with probability 1 they eventually arrive at an identical configuration), or whether I’ve made some error in my reasoning which can be explained and would allow me to improve my model of decoherence.
Done, including most bonus questions. Missed the IQ ones since I’ve never had that test, and defected before reading the comment saying the money was coming from someone’s pocket rather than lesswrong (order of preference for where money is: my pocket>lesswrong>random lesswrong survey completer). Though I’d probably still defect knowing it’s coming from Yvain.. ideally next time you could find a source of prize money who everyone wants to take money from?
Thank you for sharing your story and methods.
In that scenario lying may be better for both in the short term, but lying about being in love with someone to trick them into sleeping with you seems pretty likely to upset them more in the long term. And there are gentler ways to put it; honestly explaining that it’s mostly a physical thing could reduce the immediate negativity considerably, though by how much depends on the listener’s disposition.
I agree that it’s not necessarily unreasonable for a truth to be upsetting, but it is somewhat unreasonable to press someone for a truthful answer (especially about something important), then be upset with them specifically for being honest, especially if they have indicated discomfort with giving a direct answer and tried skirting around the subject (since this hints that it may be an uncomfortable truth they want to avoid), even if this reaction is pretty common in many circles.
I agree that in some cases, including the homophobic parents example, lying can be justified. Even in significantly milder cases, I can see lying occasionally being, on consequentialist grounds, the better course of action, even if you take into account the chance of the lie being found out, and trust being lost or other people being hurt due to being lied to.
However, correct me if I am wrong, but you seem to be arguing something much stronger than this? From my read, this article promotes at least accepting, maybe even encouraging, using white lies as a way to ease potentially uncomfortable social situations. I’d guess some of the other commenters (particularly Alicorn) have a similar read, and that’s prompting some strong reactions. While white lie culture may be common, and going against the grain (e.g. replying that you’re not particularly keen on some item of clothing when asked by an acquaintance) may go against our social instincts, refusing to say you don’t like things in many situations disallows useful opinion-giving in all similar situations. If I want to get a second opinion on something, I want to ask someone who will give me information. If, no matter their true opinion, they’ll give some mild nicety/white lie to spare my feelings, I’m not going to learn much. If every time someone asks their friends whether their new haircut suits them the friends must say yes, that person is never going to learn they have a haircut few people like, and maybe more importantly they’re going to start automatically downgrading similar praise, quite correctly, because “people saying my haircut is nice” has zero correlation with the haircut actually being nice.
I accept that many, maybe even a significant majority of, people do just look for compliments or niceties some of the time. I accept that giving them those compliments rather than honesty may be better for their self-esteem in the short term. However, I have found that so long as I present myself as direct but gentle from the start and don’t hide honesty from someone then spring it on them at a bad moment, a vast majority of even those compliment seekers at least respect gentle honesty and many of them find it refreshing. Perhaps this is in part due to my social group being unusually tolerant, and this strategy would fail elsewhere.
On the other side, I prefer people to be honest with me and attempt to self-modify towards being someone who would, in all but the most convoluted situations, prefer in the long term to be told the truth in response to all serious questions. I do this specifically so I can appear to be a person who it is better to tell the truth to in effectively every case, because I want to be able to reliably get true opinions. This is something I have never had a negative reaction to once explained, and has been the gateway to many interesting conversations.
Due to these working well for me and the large advantages of being able to communicate openly with greatly reduced fear of unintended offence provided by a general near-universal policy of honesty, I remain very skeptical of the idea that the habit of looking for reassurance at the expense of honest advice or opinions is something to be respected or encouraged (especially in rationalist circles where truth-seeking is prized).
Last note: I see telling the truth but bending the meaning to be polite as subtly signaling to someone that you don’t quite mean what you’re saying, so that if (and only if) they care about your true opinion enough to pay attention to your wording and ask a follow-up question, you’ll tell them the full story. If they were just looking for a generic nicety, they either won’t notice your slightly careful wording, or shouldn’t request information they don’t want. This is useful for people who may have reason to want your true opinion, and as a way of avoiding getting into the habit of telling white lies. It’s rarely hard to avoid the question or skip over it even if you can’t come up with a convincing not-lie, so long as you don’t get too obviously caught up in internally debating what to say or how to avoid offense first.
I agree that that’s a useful default with most people, and reliable even with those you don’t know well enough to figure out how they’d react to criticism.
I’d put a bit more emphasis on how putting a white lie into the initial encouragement can cause issues, though. If you’ve said something generally encouraging or picked out some positive, but not actually said anything you think of as untrue, then if they do explicitly ask for a critique you can give them your opinions and suggestions in full. If you used what you hoped would be a white lie, then you must either contradict your previous encouragement or withhold parts of your opinion even if the person genuinely requests it and wants feedback, both of which seem like bad options.
Consequentialist reasoning which seems to align fairly well with Alicorn’s conclusions (at least the one about it being correct in some situations to hide the truth by being selective, even when this in some sense deceives the listener, while it is less correct to directly lie) is touched on here, if that’s useful to you.
Essentially: You don’t know for sure if a person wants general encouragement/niceties or a genuine critique. One way to deal with this is to say something nice+encouraging+true which leaves room for you to switch to “okay but here is what you could do better” mode without contradicting your previous nicety if and only if they communicate clearly they want your full opinion after hearing your careful wording.
A few weeks ago a somewhat similar idea came into my mind while thinking through resources I’d have liked to have and ways to improve education (I think what started me off was the way that many concepts taught in early parts of school turn out to be incorrect simplifications?). I dumped some extremely rough notes into my ideas file (at the end of this post), and mostly concluded that it was an immense project which would require many years of focus to become really useful, and unless it got a lot of momentum would easily stall. On the other hand, this kind of resource could if built properly be amazing. On the other other hand, Khan Academy has many of the elements of the resource I imagined, and building from scratch is very likely more effort than encouraging them to add features or just helping build on the existing project.
Some comments on your suggested implementation: Human knowledge is really really big. Even just the bits taught in schools. Trying to rewrite each part in curriculum form before it becomes useful seems like it would cause a project like this to lose steam quickly. One way around this would be to collect existing high quality educational material online and link to it/include it directly from sources which you have arrangements with, allowing contributors to focus on building the dependency tree. If and when it becomes beneficial, switching to producing content may be better.
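To sketch the “link out and build the tree” idea a bit more concretely: the core data structure contributors would maintain could be as simple as a map from each topic to its prerequisites plus pointers to existing material, with a topological sort then giving a learning order with no inferential gaps skipped. A minimal illustration (the topic names and URLs are placeholders, not a real curriculum):

```python
from graphlib import TopologicalSorter

# Each topic lists its prerequisites and links out to existing material,
# rather than hosting freshly written content. Placeholder entries only.
curriculum = {
    "counting":   {"requires": [],             "resources": ["https://example.org/counting"]},
    "arithmetic": {"requires": ["counting"],   "resources": ["https://example.org/arithmetic"]},
    "algebra":    {"requires": ["arithmetic"], "resources": ["https://example.org/algebra"]},
    "calculus":   {"requires": ["algebra"],    "resources": ["https://example.org/calculus"]},
}

# The dependency tree is the part contributors build; a topological sort
# gives a valid study order where every prerequisite comes first.
graph = {topic: set(info["requires"]) for topic, info in curriculum.items()}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['counting', 'arithmetic', 'algebra', 'calculus']
```

The site itself would then mostly be rendering each topic’s links plus its place in that ordering, which is a far smaller job than writing the content.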
If I understand correctly (from the talk of deadlines, tutors, qualifications, social atmosphere, essays, classes), you would like to change the way formal education works? Building an educational resource external to the schooling system seems vastly more realistic than enacting radical change on large institutions without extremely strong evidence of the new methods of teaching working. It could possibly work to start a new school (in the UK there’s a lot of attention on “free schools” these days, which, so long as enough parents support an idea, can teach in much less orthodox ways and still get funding), but that’s also a massive project, and one which requires plenty of interest from parents otherwise you don’t have students.
Concretely, how could you get people to invest the effort required to build this? At minimum, if you’re trimming it down to just a web app and a dependency tree which primarily links out to existing resources, you need a significant amount of programmer and web dev effort, plus a good number of active, reliable people with a good understanding of each domain covered to build the web of knowledge. Paying non-technical users to write/collect is not going to work unless you have immense funding, and even then there are many pitfalls (see Knol and Encarta). Not paying users means you’ve got to have something which is very user friendly, stands out in a big way to important volunteers, has a good base of existing example content, and ideally offers something back to them like StackOverflow does with their jobs program. And even if you have those features, mainly volunteer-driven projects can flop easily.
And here are my extremely rough notes from the file of ideas which seem interesting but which I’ll probably never do much with. If someone’s interested in any of the lines I can write up my actual thoughts; these were mostly to help me remember, not to actually explain my thinking:
natural curriculum without stupid simplifications
nested knowledge
-starting from extremely basic statements a 5 year old can understand, building toward higher understanding with dependencies/inferential gaps filled.
crowdsourced, wikilike?
handling original research
degrees of acceptance in chunks of information (e.g. almost certain, very likely, useful approximation for x)
handling conflicting possible knowledge, conflict warnings,
integrated QA system
-ask questions about each chunk of knowledge, these are organized/merged into a FAQ, or answered/redirected
domain-based reputation system
-based on useful edits and ratings from related knowledge
-general showed publicly, higher definition information to those who refine the system and people with higher rating
-fully recalculated on each change to system, arranged so this is socially okay
-gamification!
-general reputation includes typo fixes, organizational stuff, as well as directly building knowledge, but this is only minimally counted towards specific domain knowledge.
This seems like it’d be good to put on the wiki so others can keep it updated and add to it, would you object to me creating a page with this as the seed content?
Have the issues around logical first movers (brought up in Ingredients of Timeless Decision Theory) been discussed/solved somewhere I’ve not managed to track down with Google? I’ve been thinking it over and have some possibly useful things to add, but that discussion is ancient and it seems likely that it’s been solved more thoroughly somewhere in the last five years. I’ve found the posts about Masquerade, which seem related but only relevant to the special case of full source code disclosure.
It seems to me that there are two different hidden questions pointed at by “Was this decision ethical?”, and depending on why you’re asking you come up with different answers.
If you’re asking “Was this the correct choice?”, you want to know, from the perspective of perfect knowledge, how close to optimal the action was, which corresponds fairly closely to the actual result (though there are complications with MWI, and possibly with some other parts of the large universe. Or maybe that goes away if you swap out perfect knowledge for something more like “from the perspective of the observer after the event”, in which case the ethical status of a decision can be literally physically undefined until some time after the decision is made?). However, a lot of the time what you’re actually asking is “How does this choice impact my assessment of a person’s ability to make correct choices?”, in which case you’re just interested in knowing whether the choice was made using a method which reliably produces correct choices (which includes things like gathering relevant information on the probabilities before remortgaging your house and blowing it on lottery tickets).
The first question is relatively easy to judge, since you have evidence on how well the decision went, though not knowing the results of the other options gives some uncertainty; but it does not provide useful information about the trustworthiness of a person in general. The second seems much more useful, since it should relate better to future behaviour, but is basically impossible to even approach quantifying in any realistically complicated situation. So.. you ask the first question, trying to get evidence about the second, which is what you usually want to know?
If, once you know whether a decision in the past was correct (with reference to whatever morals you pick), and whether the method used to make that decision generally produces correct decisions, you still feel the need to ask “but was it really ethical”, it looks like a disguised query.
I’ve created List of communities and linked to it from list of blogs. I’ve included all the ones listed in the original post plus the EA links pablo posted, though I’m fairly skeptical of rationalwiki based on my own limited browsing there and posts like these two.
Suggestions for sites to add or remove are welcome (either made here or edited directly), as are brief descriptions of sites already on the wiki.
Wouldn’t that count as learning a rule and cause the meta-level rules to change to something worse if you started using your knowledge to make it more tolerable?
It would be pretty easy to set up a little template with a bunch of parameters which generates a nicely formatted box. It could also add categories automatically, so it’d be easy to, for example, go to the category of people interested in business networking. More ambitious things which could be added include queries (e.g. searching for business networking people in the Bay Area), though those would be much easier with Semantic MediaWiki, and something which excludes a user from certain categories/searches if they have been inactive for a certain length of time (easiest would likely be a magic word extension for MediaWiki which returns a user’s last active date).
I can throw together a basic template later, then edit it based on feedback on parameters?
Basic template + auto-added categories are easy enough with the existing codebase. Queries would require an extension (the SMW package), which is addable by anyone with access in a few minutes and would allow fun things like editing the template with a form rather than raw wikitext. The part about user inactivity would need a custom extension, which would be either relatively simple or really annoying depending on how the recent activity data can be retrieved from the reddit install. Actually, thinking again, if there’s an API we can call for a user’s most recent activity it should be possible using the existing ExternalData extension and a clever template or two, with no need for custom coding.
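For the inactivity part specifically, the rule itself is trivial; the only real question is where the last-activity date comes from. Here’s a sketch of the logic in Python, with the activity data stubbed out with made-up values since I don’t know what the reddit install actually exposes; if ExternalData can pull the same date into a template, the comparison could live in wikitext instead.

```python
from datetime import datetime, timedelta

# Sketch of the inactivity rule only. How last-activity dates would actually
# be fetched from the reddit-derived install is stubbed out here.
INACTIVITY_CUTOFF = timedelta(days=180)   # arbitrary example threshold

def is_active(last_activity: datetime, now: datetime) -> bool:
    """Keep a user in the directory categories only if recently active."""
    return (now - last_activity) <= INACTIVITY_CUTOFF

# Hypothetical data standing in for whatever the activity lookup returns.
users = {
    "alice": datetime(2014, 1, 5),
    "bob": datetime(2013, 2, 1),
}
now = datetime(2014, 3, 1)
listed = [name for name, last in users.items() if is_active(last, now)]
print(listed)  # only recently active users stay listed
```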
And yea, it seems like there should be a few people with the coding skills willing to help out, though from my exploration of the code it seems to be unchanging (unless the public version control is just not updated any more).
Ask someone with MediaWiki sysop powers (gwern is a good bet) to delete the page; I expect a refresh from the wiki with an empty page should give the original behaviour.
“amplitude distribution happens t factorize.” → “amplitude distribution happens to factorize.”
Not the best first comment, but I’ve spent too much time fixing inconsequential typos to feel comfortable skipping over it. Excellent sequence, I think I’m starting to get a toehold in a.. very different view of how things are.