Year 3 Computer Science student
find me anywhere in linktr.ee/saviomak
I don’t think your ‘bias’ usage is an applause light, even though the reversal of the statement sounds abnormal.
The reward of roaring applause is of course NOT enough to bias a speaker into pouring on more applause lights.
The reason is that this is a predictive statement, not a moral one.
You should always include a summary when recommending anything
You are the one who is interested in that thing; the other person isn’t (yet). A summary saves the other person time: they can quickly determine whether they want to learn about it or not.
Related: include a tl;dr in posts?
I find what you’re writing self-contradictory. If “observing nothing carries no information”, then you should not be able to use it to update a belief, since any belief must be updated based on new information. I would say observing nothing carries the information that the action (sabotage) which your belief predicted did not happen during the observation interval.
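As a toy sketch of that update (all numbers here are my own made-up illustration, not from the thread): if the sabotage hypothesis predicts an observable event with some probability per interval, then an interval with no event is itself evidence against the hypothesis, via a plain Bayes update:

```python
# Hypothetical numbers for illustration only.
prior = 0.5      # P(H): a saboteur is at work
p_event = 0.3    # P(observe sabotage in one interval | H)

# Likelihood of "observing nothing" under each hypothesis:
#   under H:      1 - p_event
#   under not-H:  1.0 (no saboteur, so nothing to observe)
posterior = (prior * (1 - p_event)) / (
    prior * (1 - p_event) + (1 - prior) * 1.0
)

print(posterior)  # belief in H drops below the prior
```

So “observing nothing” moves the posterior exactly because the hypothesis assigned it a likelihood below 1.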
He probably didn’t use standard notation here. I would read P(A|B) as P(A OR B) in this context.
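For reference, the standard reading of the notation is the conditional probability, which differs from the union:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}
\qquad\text{vs.}\qquad
P(A \cup B) = P(A) + P(B) - P(A \cap B)
```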
Only responding to this part.
Also, for more complicated problems such as following a distribution around in dynamic system: You also have to have a model of what the system is doing—that is also an assumption, not a certainty!
I’m sure you have multiple possible models of the system. If you have accounted for the possibility that your model is incorrect, then it is not an assumption; it is something that can be approximated into a distribution of confidence.
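A minimal sketch of what I mean by a distribution of confidence over models (a toy coin example of my own, not from the thread): instead of assuming one model, keep a probability weight on several candidate models and update all of them on each observation.

```python
# Three candidate models of a coin: P(heads) under each.
models = {"fair": 0.5, "biased_heads": 0.8, "biased_tails": 0.2}
weights = {name: 1 / 3 for name in models}  # uniform prior over models

observations = [1, 1, 0, 1, 1]  # 1 = heads, 0 = tails

for obs in observations:
    # Multiply each model's weight by the likelihood of this observation.
    for name, p_heads in models.items():
        weights[name] *= p_heads if obs == 1 else 1 - p_heads
    # Renormalize so the weights stay a probability distribution.
    total = sum(weights.values())
    weights = {name: w / total for name, w in weights.items()}

print(weights)  # "biased_heads" now carries the most weight
```

No single model is ever assumed outright; “my model might be wrong” just becomes weight on the other candidates.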
For whatever reason, it is apparent that the conscious part of our brain is not fully aware of everything that our brain does.
I believe the conscious-unconscious separation has an advantage in human-human interaction (in the game-theoretic sense). It is easier for the conscious you to lie when you know less.
I don’t have a clear answer either, but it seems like the nodes in model 1 have a shorter causal link to reality.
The subjects who were told that “the goose hangs high” means the future looks gloomy believe that the standard interpretation is “the future looks gloomy”. So no, it is not evidence that most subjects were being rational. In fact, it shows that most people are susceptible to this bias.
If we were given more information, though, such as 80% of ‘looks good’ subjects thinking the standard interpretation is ‘looks good’ while only 60% of ‘looks gloomy’ subjects think the standard interpretation is ‘looks gloomy’, then that is evidence that SOME subjects are rational.
No for either of my interpretations of your question.
If you mean “does a test for randomness exist”, I believe there isn’t one, but there are statistical tests that can catch non-random sequences.
If you mean “can a rational agent 100% believe that someone is random”, then no, because 100% certainty is impossible for anything.
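To illustrate the “catch non-random sequences” direction (a minimal sketch; the function name and example sequences are my own): a frequency test in the spirit of the monobit test flags sequences whose ones-count deviates far from the fair-coin expectation, but a small score never proves randomness.

```python
import math

def monobit_z(bits):
    """Z-score of the ones-count against the fair-coin expectation.
    A large |z| is evidence of non-randomness; a small |z| proves nothing."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / math.sqrt(n / 4)

balanced = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
all_ones = [1] * 16

print(monobit_z(balanced))  # 0.0: the test finds nothing suspicious
print(monobit_z(all_ones))  # 4.0: clearly non-random
```

This is the asymmetry in the comment: the test can only reject randomness, never confirm it.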
I don’t disagree that humans can do actions that only benefit others, and that altruism exists. I think there is a better theory than both pleasure-maximizing and “humans are intrinsically nice to others”, and that is Evolution. Also, Evolution can be understood as “gene-spread-chance maximizing”, so I think humans are still better modelled as internal-counter maximizers.
Donating to charity can be explained by Signaling: it lets others know that you have an excess of money. Pure altruism alone cannot explain donations, because we donate more when we’re being watched. (A more detailed explanation of charity can be found in The Elephant in the Brain, Chapter 12: Charity.)
I would not say that maximizing happiness is a higher goal than perceiving reality correctly.
I think maximizing happiness is a goal related to instrumental rationality, while perceiving reality correctly IS epistemic rationality. And epistemic rationality is a fundamental requirement for any instrumental goals.
But it doesn’t mean perceiving reality correctly is a lower goal than other instrumental goals, right? How do you even rank goals in the first place?
That quote doesn’t come from the passage and it is not obvious to me how it relates to the passage. What are you trying to talk about?
Just read Free Will; really disappointed.
Not many interesting insights.
A couple of posts on determinism: OK, but I already believed it.
Some unrelated stuff (causality, thinking without the notion of time…): these are actually interesting but not needed.
The moral consequence of ‘no free will’: I disregard the notion of moral responsibility.
EY having a really strong sense of morality makes everything worse.
Low-quality discussions: people keep attacking strawmen.
I would make the assumption that we are talking about communication situations where all parties want to find out the truth, not to ‘win’ an argument. Rambling that makes 0 points is worse than making 1 point, but making 2+ “two-sided” points that accurately communicates your uncertainty on the topic is better than selectively giving out only one-sided points from all the points that you have.
I agree that finding the truth and winning arguments are not disjoint by definition, but debate and finding the truth are mostly disjoint (I would not expect the optimal way to debate and the optimal way to seek truth to align much).
Also, I did not think you would mean “debate” as in “an activity where 2+ people try to find the truth together by honestly sharing all the information”; what I think “debate” means is “an activity where 2+ people form opposing teams with preassigned sides and try to use all means to win the argument”. In a debate, I expect teams to use methods that are bad for truth-seeking, such as intentionally hiding important information that supports the other side. In this sense, debate is not a good example of a truth-seeking activity.
In the end, my point is that in essentially all truth-seeking contexts, arguing one side is not optimal. I find it conceivable that some edge cases exist, but debate is not one of them, because I don’t think it is truth-seeking in the first place.
We aren’t individually sentient, not really.
We do less thinking than we imagine, but we still think. However, I still agree (to a lesser extent) that (sub)cultures fix many of the thoughts of many people.
The sad and funny thing is, we don’t even try to understand the cognition of our subcultures, when we research cognition.
I find 2 possible meanings of “we” here, but the sentence is false in both senses:
“We” = all of humanity: The “cognition of subcultures” sounds like half Anthropology and half Psychology, and I imagine it has been researched.
“We” = individuals, rationalists: If your goal is to think for yourself, it is minimally useful to understand how culture “thinks” for you. Knowing how not to let culture think for you is enough.
What possible advantages do you have in mind? I think it is just a bad, irrational thing to automatically assume attractive people to be smart or honest.
I am extremely confused by your comment, probably due to my own lack of linguistic knowledge.
(This whole reply should be seen as a call for help)
What I got is that fabricated options came from people “playing with word salad to form propositions” without fully understanding the implication of the words involved.
(I tried to generate an example of “propositions derived using syllogisms over syntactic or semantic categories”, but I am way too confused to write anything that makes sense)
Here are 2 questions: how does your model differ from/relate to johnswentworth’s model? Is john’s model a superset of yours? My understanding is that johnswentworth’s model says our algorithm relaxed some constraints, while yours specifically says that we relaxed the “true meaning” of the words (so the word “water” no longer requires a specific electron configuration, or the melting point/boiling point to be specifically 0/100; “water” now just means something that feels like water and is transparent).
Yes, but some estimates are clearly false, while your examples are estimates that may be true or may be false.
Seems like Barnum statements and applause lights are both vague, hidden tautologies.