I once saw a blind kid on TV who had developed a way of clicking with his mouth that let him navigate sidewalks. This was pretty cool, and it made me pay attention to my own sense of hearing and wonder what it must be like to use that kind of ability. I paid close attention in situations where it might be possible to hear the location of walls and so on. Doing this for some time changed my relationship to my hearing.
I became aware of when a sound is louder because additional bounces of wave energy hit my ear, rather than only the direct line-of-sight propagation. I picked up on the threshold where I hear the primary sound and its echo as one simultaneous sound versus as two separate sounds. After paying attention to things I theoretically knew the reasons for, I could tap into new kinds of "feels" in the sound. My mind somehow clicked and connected geometric forms to the echo timing profile. I understand only discrete sounds consciously, but the prolonged, direction-changing continuous echo that a sloped wall makes I could sense intrinsically. And I found out that, for example, claps are very directional: you can cast different kinds of clapping at a wall the way you would shine a flashlight.
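The fused-versus-separate echo threshold can be sketched with some rough numbers. The speed of sound and the ~50 ms fusion threshold below are my own assumed illustration values, not measurements from the text:

```python
# Rough sketch: at what wall distance does a clap's echo separate from the clap?
# Assumed values: speed of sound ~343 m/s (air at ~20 °C) and a ~50 ms threshold
# below which the ear tends to fuse a click with its echo.
SPEED_OF_SOUND = 343.0    # m/s
FUSION_THRESHOLD = 0.050  # s

def echo_delay(wall_distance_m: float) -> float:
    """Extra travel time of the echo: out to the wall and back."""
    return 2.0 * wall_distance_m / SPEED_OF_SOUND

def heard_as_separate(wall_distance_m: float) -> bool:
    """Does the echo arrive late enough to be heard as a distinct sound?"""
    return echo_delay(wall_distance_m) > FUSION_THRESHOLD

# A wall 3 m away: echo arrives ~17 ms later, fused with the original sound.
# A wall 10 m away: echo arrives ~58 ms later, heard as a separate echo.
```

Under these assumptions the crossover sits at roughly 343 × 0.05 / 2 ≈ 8.6 m, which matches the intuition that nearby walls colour the sound while distant ones produce distinct echoes.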
All in all my sense of hearing became much more like my sense of seeing, with good 3D structure. Experiencing this new way of hearing was very interesting and cool. However, once I had settled into hearing like an echolocator, I had trouble conceptualising what it is like not to hear that way. My guess is that if you don't pay much attention, a lot of information goes unextracted. But it was a big surprise that it wasn't "obvious" how much information a given hearing experience includes. I didn't gain a better ear. The amount of information I was receiving must have stayed the same; I guess I previously just couldn't structure it properly.
And I realised that I had at least two hearing modes even before this new "3D" mode. There is a mono mode where you can decipher what kind of sound it is and recognise what is causing it, but know only that it is "nearby within hearing distance": you can't turn to face the sound and need to look visually for clues about where it is coming from. Then there is a kind of "arrow" mode where you know which direction to look toward. But it is kind of cool that in "3D" mode I can hear around a corner what kind of space is there, which I can't do in "arrow" mode.
Thinking about how sound waves work, it kind of makes sense how the perception changes between "mono" and "arrow" mode. If you are in an empty room and make a big enough noise, there is significant echo arriving from every direction. Without being able to read the timing fine structure, it feels like the sound is coming from everywhere. However, if in the same kind of room you don't make quite as much noise, the component travelling directly towards you will dominate the echoes. There is also an explanation for why the "arrow" isn't a pinpointer but a fuzzy approximation: when you try to read the texture/shape information as location information, it will give a slightly contradictory result.
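Why the direct component dominates any single echo can be sketched with free-field inverse-square falloff. The source power and path lengths below are made-up illustration values, and wall absorption and room reverberation are ignored:

```python
import math

def intensity(power_w: float, path_length_m: float) -> float:
    """Free-field intensity (W/m^2) of a point source after path_length_m."""
    return power_w / (4.0 * math.pi * path_length_m ** 2)

P = 1e-3  # W, hypothetical clap

direct = intensity(P, 2.0)   # direct path: listener 2 m from the source
echo = intensity(P, 10.0)    # echoed path: source -> wall -> listener, 10 m total

# The echo travelled 5x farther, so it arrives (10/2)^2 = 25x weaker,
# before even accounting for energy absorbed by the wall.
```

The ratio depends only on the path lengths, so the geometry of the room sets how strongly the direct component stands out against any individual reflection.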
I am using language here where I first feel a certain way, then am puzzled about why it would feel that way, and only then start theorising. I guess it's worth noting that having more theory won't give you insight into what your experience is. It was kind of mind-opening to be able to target those feelings relatively theory-free and then have the joy of finding the explanation. For example, sound propagation first felt "water-like", and only afterwards did I confirm that this makes perfect sense: the waves are not equal in strength in all directions and do dampen as they propagate.
For a while I couldn't confirm that I wasn't just reading too much into what I was supposedly experiencing, that I hadn't just pretended to experience things because I wanted to experience them that way. But after I acquired the skill, I would passively pick up sounds and have 3D impressions of them without actively trying to hear anything (and usually be startled by it), and only then turn to look at them. The expectations formed by hearing were confirmed by sight, so this was a legitimate change in perception. For example, I would ride past a post on a bike and suddenly be very aware of something square on my right, the wheel sounds giving enough echo basis that the post would pop out against the background far more than it does visually. Or riding past alleys that made a sudden echo chamber on an otherwise echoless street. I also found that glass sticks out a lot more than other materials (oh, there is a large object to my right; oh, it's just a window).
So I have discovered what it is like to be an echolocator, which I guess is supposed to be the main alien part in the bat metaphor. There is also a joke about how drugs make you "taste blue", but I have come to experience that too, and how it makes sense to "see sound". But the behavioural effects of this different kind of experiencing are not that telling or direct. I would not pass the vampire Turing test, but that misses the point; the test would need to be refined to capture this, and it is not trivial how that would be done.
The operation that made me undergo this change seems to be paying attention. It doesn't seem that I learned a new fact, although I clearly see that having a theory for why I am feeling what I am feeling did have a guiding effect. Maybe call it an imagination aid? I would say it might be a deficiency in understanding, not knowledge, that keeps people from being able to experience bats. And it is possible for humans to understand what it is like to be an echolocator. I would guess that if I had sufficiently clear descriptions of what kinds of "facets" my perceptions include, I should be able to play out how I would experience a situation if I had that kind of sense. So I think it might be possible to imagine seeing four primary colors, but it takes skill in this "pay attention to your qualia structures" thing that people are in general not very good at.
I had a hard time tracking down the referent of the abuse mentioned in the parent post.
It does seem that the concept was employed in a political context. To my brain, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument towards a political end, it becomes politically relevant. However, it would be weird if any tool so employed automatically became part of politics.
If beliefs are to pay rent, and this particular point is established / marketed in order to establish a specific other point, I could get on board with an expectation to disclose such "financial ties". Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don't fully get why it would be troublesome to reveal it.
The negation, that "popular ideas attract disproportionately good advocates", also seems worth attention. People accept sloppy thinking much more readily if they agree with the conclusion. This can be used as a dark art: you present a sloppy argument for an obvious truth or uplifting conclusion and then proceed to use the same technique to support the payload. The target is less likely to successfully deploy resistance.
It is also quite common for a result that was produced rigorously to be rederived sloppily by those who are told about it.
Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation with the label "rational" is harmful.
I know it's the typical outcome, but I don't know why it would be inevitable or obvious. A person who verbally asks for an "honest" answer but punishes honesty is not in fact asking for an honest answer. Part of the reason people add the qualifier is the belief that such answers "give you more positive affect".
If you shoot for an actually honest opinion, you have to care to differentiate it from asking for "dishonestly honest" opinions. For the kind of mindset that holds "whatever can be destroyed by the truth should be destroyed", actually honest opinions are what to shoot for. But I have bad models of what attracts people to "dishonestly honest" opinions. I suspect that mindset could benefit from a different framing ("I have your back" vs "yes", i.e. forgoing claims about the state of the world in favour of explicit social moves).
This LessWrong post might make someone seek out more "dishonest positivity" by attaching a "rejection danger" to the pursuit of "belief strengthening". I feel there is an argument to be made that when the rejection danger realises, you should just take it in the face without resisting; the failure mode prominently features resisting the rejection. And on balance, if you can't withstand a no, you have not earned the yes and should not be asking the question in the first place.
That is, on the epistemic side there is "conservation of expected evidence", and on the social side there is "adherence to received choice": you can't give someone control of an area of life conditional on how that control would be used. If you censor someone, you are not in fact giving them a choice.
“Counterspells” are supposed to be useful.
MtG's Counterspell is a card, but it is also a spell category. Spells in that category usually cost less the more specific their target restrictions are. They also all accomplish the same thing, in that ultimately nothing happens (i.e. a cancellation).
Using magic as a metaphor here might be fitting, as the point of such a move is to reveal that the machinery supposedly being employed doesn't actually do anything, i.e. that magic doesn't work and is just wishful thinking. The worry would be that by acknowledging the attempted methods you "stoop down to their level", i.e. employ magic yourself despite not believing in it.
Note that in 1, if you want to avoid the "lackluster doing" outcome, you have to be genuinely willing to not do it / take pessimism effectively into account during the group discussion. That seems to be a very distinct skill which is not at all obvious.
In 9 it's kind of weird that a Bayesian wants to increase the probability of a proposition. Someone who takes conservation of expected evidence to heart would know that too high a number would be counterproductive hubris. I guess it could mean "I want to make X happen" vs "I want to believe X will happen". I get how the reasoning works on the belief side, but on the affecting-the-world side I am unsure the logic even applies.
In posts about circular preferences, cyclical movement was assigned the role of "busy work amounting to nothing" and the highest scorer on the utility function the role of the "optimal solution". Here, however, the roles are pretty much reversed: cyclical movement is "productive work" and stable maximisation is "death".
The text also adds a lot of interpretative layering on top of the experimental setups. I would not derive the same semantics from the setups alone.
I took the line to mean that there are no "opinion leaders". In a system where people can vote but actually trust someone else's judgement, the number of votes doesn't reflect the number of judgement processes employed.
I also think that in a system that requires consensus, it becomes tempting to produce a false consensus. This effect is strong enough that in every context where people bother with the concept of consensus, there is enough basis to suspect it doesn't really form, and a significant chance that any particular consensus is false. By allowing a system to tolerate non-consensus, it becomes practical to be the first one to break a consensus, and the value of this is enough to see requiring consensus as harmful.
All the while it remains true that where opinions diverge there is real debate to be had.
There was a picture of a dress that went viral, where some people saw it as black in intense lighting and others as blue in dim lighting. Our sense of color is very far from the "apparent color". A red object in a dark room is still red; our brain calculates away lighting differences.
If there were local red torches on Chiron Beta Prime, you could have objects that are very luminous near the torches and objects that are not. Thus you could differentiate between diamonds and turquoises. But diamonds and turquoises are both grue. However, turquoises are not white. Therefore grue is not white.
Note also that red torches could be a recent innovation. Thus what is natural "in this universe" is technology-level dependent.
Computer Science: recursion
For me it helps to solidify and make things explicit. However, when you need to chase after the foggiest thoughts and manage to "see the forest for the trees", not so much.
I had a really weird conflict about whether to free my thought processes from verbal structure and learn the associated thinking skills. It seemed that verbal forms would be too "clunky", and that precise definitions would understate and overstate what I "meant" very often. And besides, a lot of important thinking will take place as non-verbal thoughts anyway; having conscious introspective access to that space was very tempting. I ended up going free-form, but I am not sure I am happy with my choice.
It seems that in some places I am using what amounts to heuristics where the role could be taken up by an algorithm. For a lot of thought processes there is no way to "check via the tedious and slow method", as the weird computations genuinely allow different kinds of operations than verbal forms would (and there are no nice mappings between them for the intermediate stages). This might introduce a lot of sloppy thinking, although there is an idea that, if really needed, following the "spirit" of the thoughts will lead to the correct details. In practice, however, I get sufficient results to base my actions on without ever attending to the details.
What I would expect with explicit verbal thinking is greater inter-mind operability. The processes you use are more likely to be supported on other minds too. However, I would expect the thought space to be somewhat smaller. Thoughts can be taken more "as is", in separate chunks, disregarding their context. One way of positively framing this is that the rate of correct thoughts per overall thoughts is high. However, multiparadigmatic and very comprehensive thoughts become relatively expensive, if not outright ruled out. A negative framing would be that the set of thoughts you can be right about is very small, or shrinks. It may also be harder to come up with thoughts where multiple parts need to be created on the fly; i.e. tweaking one concept is easy, but tweaking or creating a concept system becomes hard.
I would guess that the explication effect lets you extract more from your brain. I would, however, be pessimistic about how it affects your psyche structure and mental habits, both long-term effects. A good harvest but poor growing soil.
Even with telepathy, status games and indirect hiding might still happen. It also depends on what kind of access. If everybody feels every thought of everybody else, it could easily be overwhelming, and pure cognitive economy could favour opting out.
If anyone can choose to access anybody's mental state at will, you can adopt strategies so that people do not opt to do so at critical moments. If you build yourself a reputation for violent thoughts, people might want to avoid your mind out of discomfort, granting you a degree of privacy (similar effects for boringness etc.). You could also attach meanings to your thoughts that other people would not associate, in essence encrypting your mind so that they can't natively read it in full clarity (even with full ciphertext access) (which may or may not be synonymous with going mad).
If everybody has intimate psychological contact, your mind could form dependencies on parts of other minds without which you could not psychologically function. So people might want to opt out in order to remain individuals and not fuse with the hive mind. Even if only others formed such dependencies, it could be bothersome, because you would no longer think only for yourself but potentially for others as well.
It seems to open a can of worms that would need to be dealt with somehow, and those solutions would matter far more to the decision to opt in than whether it would be cool.
Friendliness by mathematical proof about the exact trustworthiness of future computing principles is misguided.
One formulation of the incoherency about free will is that physical laws are descriptive rather than normative. If physics suddenly behaved differently (false vacuums or anything previously undiscovered), it's the law that is in error; there is no blaming the matter for being "naughty". In the same way, when you are deciding how to act, the law itself isn't operating as a cause. These are not human laws. Your freedom is not reduced by any oppression.
from wikipedia: “Determinists generally agree that human actions affect the future but that human action is itself determined by a causal chain of prior events. Their view does not accentuate a “submission” to fate or destiny, whereas fatalists stress an acceptance of future events as inevitable.”
You have stated a reason to be a determinist, but I have not seen the argument for "submission". Either you need to explicitly state the remaining hidden beliefs that are part of the reasoning, or you are plain wrong about the implications of what you have come to accept.
One reading suggests that you believe that what will happen somehow determines or dictates what happens to you now. The future determining the past is forbidden outside of closed timelike loops. Nor is it the case that you could modify the present and keep the same future.
One way of thinking about it is that you will make exactly one choice: whatever you choose to do will be what you chose to do. You are not allowed to make zero or multiple choices. Zero would mean the universe ends there. Multiple choices would mean the past correlates with multiple futures, making factors other than what you chose determine which outcome happens. In neither case would you feel any freer. Hence you must narrow down, "determine", what your choice is.
If people are biological computers and simulation can't give rise to new consciousness, doesn't that mean a baby can't have consciousness? In a way a baby isn't meant to simulate, but I do think the internal world shouldn't be designated as illusory. That is, we don't need or ask for detailed brain-state histories of babies to designate them as conscious.
One could argue that it reduces to "know the rigorous actual semantics of human language" instead of the nominal ones. At least analytical philosophy would be solved if one attained this capability. It doesn't sound that easy. One could say that the core problem of AI is that nobody knows with sufficient accuracy what intelligence means.
Panspermia theories have volcanic activity and meteor strikes moving bacteria from world to world. It's not clear this is off limits to evolution (or one needs to do some tricky organic-world vs inorganic-world boundary drawing to get a motivated-cognition result).
There is a weird kind of reply, "I agree but disagree on the conceptualization", which would correspond here to the claim that claims are not tied to the conceptualization used.
For example: "Is blood an effective phlogiston transfer medium?" The answer is clearly yes, but I would still object that phlogiston is not an appropriate concept for handling the phenomenon in question. Some people might refuse to handle malformed questions, and for some questions the malformation of the concepts affects the answer to a degree where addressing it is unavoidable.
And guess what: if someone insists on using phlogiston terminology, I might acknowledge that I understand what they are talking about, but I am still going to actively direct them to use other terminology. And the reason is not that phlogiston is "false" or "doesn't exist". For things like the Higgs boson, the defence of why it's a productive way to address its phenomenon is stronger. But the conceptualization isn't an entirely free dimension, and the specific failures that specific conceptualizations have can be very important. The currently active conceptualization guides what we expect in areas where we have no data, and thus concepts can't be pure reformulations of measurements. Insisting that concepts are merely reformulations of data would mean you should not expect anything in parts where you have no data. Sure, anywhere you expect something to happen you could plausibly check whether that expectation bears out. But it's not reasonable to declare everything outside of already-seen data to be off limits.