Just because you personally don't understand doesn't mean that somebody else couldn't.
But you might also be getting at the distinction that "all sets" is a proper class, whereas the numbers can be put into a set and thus form a small class.
Within surreal-number lingo the "construction ordering" is often referred to as the "birthday", and "−5 is younger than −10" becomes something one can prove. 1⁄2 has a birthday just after 1, but 1⁄3 has birthday ω.
From the perspective that integers are sets of integers, sets of integers do not come "after". That is, if you first create the integers and then try to form {0,1,2}, that is not a new construction, as 3 has already been formed. In that perspective "last integer" and "last set" are not that different. There is the distinction of a limit ordinal: if "the natural numbers" is a set, then forming the singleton {natural numbers} can be thought of as a successor ordinal, i.e. ω+1, just as the set of natural numbers itself can be thought of as the ordinal ω. So there is a process where one can think of giving infinite time to nest sets. One can even give another round to get up to ω·2.
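The "integers as nested sets" picture above can be made concrete with a minimal sketch of the von Neumann construction (the function name is my own):

```python
# Von Neumann construction: each natural number is the set of all
# smaller naturals, so 0 = {} and n+1 = n ∪ {n}.
def von_neumann(n):
    """Return the nth von Neumann natural as nested frozensets."""
    current = frozenset()  # 0 is the empty set
    for _ in range(n):
        current = current | frozenset([current])  # successor: n ∪ {n}
    return current

# {0, 1, 2} is not a "new" construction after the integers:
# it literally IS the number 3 in this encoding.
three = von_neumann(3)
assert three == frozenset([von_neumann(0), von_neumann(1), von_neumann(2)])
```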
In the other direction one can assume as little magically given base material as possible. Having nothing but the rule for making new numbers, one can get 0 as the limit ordinal of ex nihilo.
Confidence can be pretty straightforward if there is a separate outside reality to which correspondence is straightforward. Sometimes language is polymorphic in that vastly different things get covered by the same, or same-sounding, terms.
One potential category of categorization which moral statements might have is when the target of the statement is to be communally created. If half of a movie crew thought "we are making a romantic comedy" and the other half thought "we are making a horror movie", there might be big trouble. If a creative leader has made a decision, then being informed about it can be about accuracy in the traditional sense. However, if the creative direction has never been discussed, there might need to be a determination of the direction of the movie. Before such a discussion has happened, neither statement can be true. One of the possible outcomes is that group A goes its separate way to make a romantic comedy while group B goes off to make a horror movie. If this happens, in a sense both groups were right.
If a group of people sets out to make an arrangement where people do not mutually backstab and kill each other, that can be a form of executive decision rather than an ethical discovery. If somebody then starts to "doubt", "maybe we should kill each other a little bit", that can be a form of setting up a different society, or reforming the society to a different order. In this sense, if somebody asks for your favorite color, it would be weird to be uncertain what your favorite color is. It would be really weird to read a proof to the effect of "your favorite color is actually green rather than red" and be convinced. Such things are true not because we discover them but because we stipulate them. Saying "maybe my favorite color is blue" is not knowing what your favorite color is, or refusing to have a favorite color.
Now it seems open to me whether moral questions are such stipulative executive decisions. Under this kind of conception “murder is wrong, 95% confidence” can sound a lot like “I reserve a right to let 5% of murderers off the hook, but generally punish them.”
In the standard usage of "market of ideas", what is meant is that some ideas are good or fruitful and will eventually be recognized as such, or that some ideas have stronger arguments and can win out in debates.
Here the "market of ideas" functions on more immediate gut reactions. Ideas win out because their advocates are wealthy, not because their advocates are numerous. A contest of who waves their flags the most vigorously.
I guess there is a similar juxtaposition, or sliding scale, of democracy vs demagogy.
In traditional democratic countries there is a universal and equal right to vote; everybody gets one. If you could buy additional votes if you wished, it would be significantly less egalitarian and significantly more oligarchic. Providing the winners of previous votes with more voting power makes the system cascade quickly into strong winners and losers.
Preventing autism-induced ire from others through masking sounds somewhat similar. Sometimes you want to mask to the extent that you can, but you just can't all the way. Expecting people to be weirded out, while the "deficiency" is not an internal signal to change stance.
Depersonalization when trying forcefully to fit ill-fitting social roles is also not unheard of.
If the care chooser is maximising expected life-years, i.e. favours saving the young, then he can be "inconsistent".
Also, if you had enough money you would just buy all the options. The only options that get dropped are those that interfere with options that save more.
If somebody truly considered a life to be worth some dollar amount and their budget was increased, then they would still pick the same options but end up with a bigger pile of cash. Given that this "considered worth" floats with the budget, I doubt that treating it as a dollar amount is a good idea.
The opportunity cost is still real, though. If you use a doctor to save someone, that means you can't simultaneously use them to save another. So in assigning a doctor or equipment you are simultaneously saving and endangering lives. And being too stubborn about your decision making just means the endangerment side of things grows without bound.
With effectiveness my doubt is that you miss kinds of knowledge in your definition, and that logic might be less than effective in the grander scheme of things. For example, the knowledge of how to ride a bike is hard to get into the scope of logic; in that respect logic is incomplete, i.e. it leaves a bit of knowledge out. There is the issue with Mary's room and whether color experience counts as knowledge: we can grant her all the math textbooks and science books, but we can still doubt whether we have caught all knowledge. Even in the context of "effective method", Turing suspected that mathematicians use a kind of "insight", that coming up with a proof is a different kind of process than following a proof. The universal Turing machine captures "effective method", which encompasses all of the formal mathematics a person could write down. But still doubt lingers whether that is all the interesting kinds of processes.
One could also be worried about a method of knowing that encapsulates logic. Divine revelation could be posited to give vast amounts of knowledge, maybe enough that further knowledge-production work ceases to be viable. There is also the "trivial theory of arithmetic", where we just assume all arithmetic truths as axioms. In such a system there are no theorems; there is only a check of whether or not a thing is an axiom. Such a system could be all-encompassing and avoid the use of logical inference.
A starting point is a bit undefined; an axiomatic approach is way more defined. Sure, we don't have a super certain "boot system" for how we get going. But it doesn't feature the characteristics of an axiomatic system. In the axiomatic style you can go "Assume X. Is it the case that X?" and you can definitely answer "yes, X is the case". If you tried to shoehorn sensory reliance into axiomatic terms, it would go something like "Assume X. Now it turns out that X isn't the case", which is nonsense in proof terms. Sure, there is the appeal to absurdity: "Entertain the subthought: [Assume X. X leads to contradiction]. Because the subthought is contradictory, the axiom set can't all be true at the same time. Therefore not-X." But when our sensory expectations are violated, these are not appeals to absurdity; it is more of a trial and error of "Guess X. If X, then Y is a prosperous choice. Experience of Y is very unprosperous. Regard X as a bad guess." A purely axiomatic approach will always refer back to the starting definitions to resolve issues of truth. We don't need to guess our axioms because we assume them true, which in effect we define to be true. "Assume all Xs are Y." "Well, what if I find an X that isn't Y?" "Then it is not an X; therefore you can't find an X that isn't Y."
I get that getting asteroided would be my business. But knowing what half of China is going to have for lunch tomorrow really isn't; I am fine not knowing that, I am fine that I don't have control over it, they can have their culinary autonomy. When you scan for impact asteroids you do not generally scan all things in the same way, but focus on paths and locations that could contain dangerous elements, which means giving more scrutiny to some and less to others. There is also the issue of balancing the prediction horizon over several threats. Do you want to spend time getting an additional decade of advance warning on a collider asteroid, or do you want another decade of advance warning on climate disaster? Just because you can fret about or control something doesn't mean you should. And integrating garbage can be more dangerous than acknowledging that you don't know.
One doesn't need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity and 2) it is possible to select for more prosperous influences.
The use of the concept of "effective" is a bit wonky there, and the word seems to carry a lot of the meaning. As far as I remember, "effective method" is a measure of what a computer or mathematician is able to unambiguously specify. I find it hard to imagine fairly judging a method to be ineffective.
Just because you need to have a starting point doesn’t mean that your approach needs to be axiomatic.
It is unclear why planetary consciousness would be desirable. If you admit that you can't know to a great degree what happens on the other side of the planet, you don't have to rely on unreliable data mediums. Typically your life happens here and not there. And even if "there" is relevant to your life, it usually has an intermediary through which it affects stuff "here".
I have pet peeves about weird epistemic statuses. Is talking about poop supposed to be a content warning? I don't think it fits within the scope of an epistemic status.
I am not surprised that a gold background is an undesirable trait. However, this is how we get drugs sold in stores with high side-effect rates for women, because testers prefer male over female subjects. If humans in the wild have a 20% trait rate and your sample has 1% or 0%, that is going to lead to a bad result in its own way. Having a WEIRD sample is not particularly representative.
If you have a discipline that supports multiple frameworks and recruit on resonance with a particular framework, then the result tells less about the framework's properties. For example, one could try to prove that chess is an endurance game of bothering to check enough positions, and recruit based on stamina in order to "prove" it is not a game of intellect.
I remember when balancing away dive was a talking point. Then a lot of the teams were squeamish about scrimming other strategies. If you need to redo the whole strategy stack instead of just adjusting the top layers, teams will eventually do it, but it can take a long while.
If you tell a high-rank player to push, they will know to still refrain from being mindlessly suicidal, to not push all the way through spawn, etc. If you describe something's color in grue and bleen, it helps if the receiver of the communication has existing support for those concepts. Even if there is no explicit culture sharing, the learning curve could provide a way for some fundamentals to be evident "at the onset", and then, once those are taken into account, more fine-grained concepts can make sense. But part of the point is that the incentive gradient to make the distinction doesn't exist at all stages. This can be seen as an aspect of the "smiley face maximiser" error state of the alignment problem: the definitions and concepts that humans actually use don't exist in a neat context-free way. Telling a human to go "make people smile" results in sensible action, while a literal-minded AI will tile things destructively with inappropriate patterns.
Nit on the nit: anything that is in fact used to produce science becomes by definition part of the scientific method.
And within research, the distinction of whether something is at the "anecdotes from friends" level or the "meta-analyses from decades" level is a pretty significant one.
And if we ask someone to "believe science", that is often where their personal "anecdotes from friends" level clashes with the productions of professional belief-formers.
The danger model where science gets ignored is one where everyday experience dominates, which is suspected to form bad epistemics. There is at least the idea hovering around that science is more reliable because it can get by without utilizing this kind of "dirty epistemics", that science should strive to be as dirt-free as possible in its practice.
Another aspect that such phrasing might refer to, besides the degree of support, is the method of knowing. If one believed image boards and on such evidence is pretty certain about something, that might be a high degree of support, but trust in those information and judgement sources is not shared. A health authority might feel pressure, and it can be argued that it should base its stances on things that have a societal basis and which it can stand behind. If you as an individual act on rumors, you take on the responsibility of possibly being reckless with your actions, of them being maladaptive. Is a health authority in a position to be reckless on behalf of the public? If you are going to shout fire in a crowded theater, it is one thing to note whether there is a fire or not, but weighing trampling deaths vs fire-burn deaths (and I guess vs play restarts) is another line to make the decision on.
If you see a spider and are mortally afraid, it might make sense to be empathetic about you being afraid, but firm in that death is not to be expected. That is, it is understandable to have the reaction and the reaction is there, but there is a second line of logic that suggests another course of action.
I have a significant history of being a gold player, so that makes me think I wouldn't be eligible for this thing. A/B testing between "natural learning" and "proper learning" could still be relevant.
If the ability of the good players consisted of factors that could be communicated or transferred, and people had the motivation to do so, the good players would lose their edge. Different routes might have different conveyance limits. For example, it is very hard to give verbal instructions on how to effectively ride a bike, but bike skills are still frequent, as a little experimental practice quickly acquires them. It is not a competitive market in the sense that everybody does the same thing, as there are actual barriers to entry. Some of the barriers might play larger or smaller roles, but everybody doesn't collapse to a single rating.
Picking only smart people is like a school accepting only good students and then miraculously having good grades for its students. If the point is to measure the impact of coaching, it might make sense to avoid being overly selective. However, if the focus is on the minimum time and effort to hit a highish bar, that might make sense.
With regard to "advice sink time", I could also describe that as "low cognitive autonomy", "high suggestibility" or "meta-monkeying". There is also the issue, when a communication succeeds or fails, of whether that is due to the success or failure of the transmitter or of the receiver. Concepts made by 4000s to be consumed by 4000s might be hard for others to adopt, not because of cognitive domination but because they are relevant to that style and culture. What I have seen opinion leaders do is say that some advice for pros should not be followed by low-SR people, and that some people actively hurt themselves by trying. "Under gold, just get your aim correct and don't even think about anything else."
There are probably bad memes about being "super good at reaction speed". But there are also differences within anticipation. There is at least the distinction between remembering and calculating what is going to happen: being habituated to what happens in situations like these, i.e. memory, versus extrapolating the current situation into the future. I think, for example, that high-level chess players lose the ability to articulate the particular reasons why moves are good or bad. So for games there might be situations where extrapolation counterintuitively gives a bad result and a pure associative link can get past this.
There is also a difference of being able to execute a strategy or tactic that is good in the current scene versus being able to adapt and come up with such things.
A lot of people play games to be entertained, to have fun. Some pros can gain pleasure from being good, but it seems to tend to have a "harsh practice, big win-out" structure. The problem for the casual player is that learning "properly lethal" techniques is fun-negative in the first half. This prevents people from randomly fluctuating into them. The problem is even worse if "playing crappy" produces actively more entertaining games. In a game, if you are fairly matchmade, it is always a challenge, but the entertainment gained from different styles of play might not be similar. That is, pro-like games can be more fragile in their entertainment payout than unskilled-versus-unskilled ones. This can form a phenomenon where a player learns that if they improve, they just get put into games where they have more chances to make unfun mistakes, which can effectively make learning punishing.
I am also interested in the hypothesis of how much it would help if we took randoms and artificially made them scrim and be deliberate for, say, 2 weeks, but didn't provide coaching.
"You should not expect to get anything out of this other than ~80-100 hours of fun video game coaching." vs "I cannot guarantee you will have fun." These two are contradictory. In agreeing to a group setting and a schedule, you can have experiences which would not be possible playing solo against the random wrath of matchmaking. It would make sense to me that if the participants are expected to commit to putting in the time, there could/would be a symmetric part for the coaches. Currently it seems you would bail out the second you think you are wrong. If you don't actively sabotage the fun, it can probably be expected to be net fun, but the challenge of coaching is going to be how to do stuff that is indifferent or contrary to the fun gradient.
To the extent Boltzmann brains can be understood as a classical process, I think they are, or can be viewed as, pseudorandom phenomena. For quantum I do not really know. I do not know whether the paper intends to invoke quantum to get them that property.
The claim in the paper that they are "inaccessible by construction" is very implicit, requires a lot of accompanying assumptions, and does a lot of work for the argument's turn.
Numerology analog:
Say that some strange utility function wants to find the number that contains the maximum number of codings of the string "LOL", as a kind of smiley face maximiser. Any natural number, when turned into binary and then into a string, can only contain a finite number of such codings, because there are only a finite number of 1s in the binary representation. For any rational number turned into a binary expansion there is going to be a period in the representation, and the period can only contain finitely many instances. The optimal rational number would be one where the period is exactly "lol". However, for transcendental numbers there is no period. Also, most transcendental numbers are "fair" in the sense that each digit appears approximately as often as any other, and additionally fair in that bigger combinations converge to even statistics. When the lol-maximiser tries to determine whether it likes pi or phi more as numbers, it is going to find infinitely many lols in both. However, it would be astonishing if they contained the exact same number of lols. The difference in lols is likely to be vanishingly small, i.e. infinitesimal. But even if we can't computationally check the matter, the difference exists before it is made apparent to us. The utility function of the lol-maximiser over the reals probably can't be expressed as a real function.
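The finite (natural-number) case of the claim is easy to check directly: a natural number's bit string is finite, so it can host only finitely many codings of "LOL" (a toy sketch; the 8-bit ASCII encoding is my own assumption):

```python
def count_lol_codings(n):
    """Count overlapping occurrences of ASCII 'LOL' as a bit pattern
    in the binary representation of the natural number n."""
    pattern = ''.join(format(ord(c), '08b') for c in 'LOL')  # 24 bits
    bits = format(n, 'b')
    bits = bits.zfill(-(-len(bits) // 8) * 8)  # pad to a byte boundary
    # The bit string is finite, so the count is always finite.
    return sum(1 for i in range(len(bits) - len(pattern) + 1)
               if bits[i:i + len(pattern)] == pattern)

# The number whose bytes literally spell "LOL" contains exactly one coding.
lol_number = int.from_bytes(b'LOL', 'big')
assert count_lol_codings(lol_number) == 1
```

For rationals one would instead scan a single repetition of the period; for transcendentals no such finite scan terminates, which is the point of the comparison above.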
While the difference between Boltzmann histories might be small, if we want to be exact about preference preservation then the differences need to cancel exactly. Otherwise we are discarding lexicographic differences (it is common to treat a positive amount less than any real as exactly 0). There is a difference between vanishingly different and indifferent, and distributional sameness only gets you to vanishingly different.
Well, there is an interplay between different senses of "cause".
If you think of how one controls a nuclear arsenal, buttons are totally how humans "cause" things. However, if you conditioned on "button connected to radio" vs "button not connected to radio", "radio message received by officer" vs "radio message not received by officer", "officer has key" vs "officer doesn't have key", and "silo doors open" vs "silo doors don't open", the bit about pushing the button is likely to be insignificant compared to the other bits. So there probably isn't a good statistical correlation between the bare button and nuclear winter.
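This can be sketched with a toy calculation (the probabilities are made-up illustrative numbers, not real-world estimates): if launch requires button AND radio AND officer AND doors, and the other links are rarely all in place, the bare button correlates only weakly with the outcome.

```python
from itertools import product

# Each factor is an independent Bernoulli bit; launch requires all four.
# The probabilities below are assumptions for illustration only.
probs = {'button': 0.5, 'radio': 0.1, 'officer': 0.1, 'doors': 0.1}

def correlation(factor):
    """Pearson correlation between one factor and launch = AND of all four,
    computed by exact enumeration over the 16 joint states."""
    names = list(probs)
    e_f = e_l = e_fl = 0.0
    for bits in product([0, 1], repeat=len(names)):
        p = 1.0
        for name, b in zip(names, bits):
            p *= probs[name] if b else 1 - probs[name]
        f = bits[names.index(factor)]
        launch = int(all(bits))
        e_f += p * f
        e_l += p * launch
        e_fl += p * f * launch
    return (e_fl - e_f * e_l) / ((e_f * (1 - e_f)) * (e_l * (1 - e_l))) ** 0.5

# The bare button barely correlates with launch, because the other
# links in the chain are rarely all in place at the same time.
assert correlation('button') < 0.05
assert correlation('button') < correlation('radio')
```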
Another limit case would be stock market crashes. They are not designed to crash, and mostly people don't want them to crash. Typically there is no single reason why they happen. But it would still be strange to say that they happen for no reason or that nothing caused the crash.
When you consider the button, you are likely to keep "button as part of this machinery" as the constant reference class and vary the environment. Similarly, when you are considering the butterfly, you want to consider "this butterfly" and not "any butterfly" (just as "any button" is not relevant). Part of keeping it that butterfly is to keep its environment somewhat constant: "butterfly by this lake", "butterfly now". These provide the functional structures.
In a kind of reverse, you could ask: given some assembly of machinery, can it be interpreted as a factory with a start button? For many man-built factories you indeed find these "linchpin" influencers. One could also be interested in "Death Star exhaust ports", points that have great influence despite not being designed to do so. And they would be "linchpin influencers" even before the exploit is found.
I might be a bit out of my depth, but if there is a distinction between an "actual evolution" and a "potential evolution", the "representativeness" of the potential evolution has aspects of epistemology in it. If I have a large macrostate and let a thermodynamic simulation run, then I collapse more quickly into a single mess where the start-condition lineations don't allow me to make useful distinctions. If I define my macrostates more narrowly, i.e. have more resolution in the simulation, this will take longer. For any finite horizon there should be a narrow enough accuracy in the detail of the start state that it retains usefulness, if an absolute-zero simulation is possible (as at least on paper, with assumptions, it can be).
If I just know that there is a door A and a door B, then I can't make any meaningful distinction about which door is better (I guess I could arbitrarily prefer one over the other). If I know that behind one of the doors is a donkey and behind one a car, I can make much more informed decisions. In a given situation, how detailed a model I apply depends on my knowledge and sensory organs. However, my not being able to guess the right door doesn't mean that cars cease to be valuable. In Monty Hall, switching is preferable. The point about the distributions being the same would be akin to saying that the decision procedure used to pick the door doesn't matter, as any door is as good as any other. But if there are different states behind different doors, i.e. it is not an identical superposition of car and donkey behind each door but some doors have cars and some have donkeys, then door choice does matter.
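The Monty Hall point can be checked with a quick simulation (a standard sketch, nothing specific to the discussion above):

```python
import random

def monty_hall_trial(switch):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty's reveal is correlated with the car: he never opens it.
    revealed = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != revealed)
    return pick == car

random.seed(0)
trials = 100_000
switch_wins = sum(monty_hall_trial(True) for _ in range(trials)) / trials
stay_wins = sum(monty_hall_trial(False) for _ in range(trials)) / trials
# Switching wins about 2/3 of the time; staying about 1/3.
assert switch_wins > 0.6 > 0.4 > stay_wins
```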
I kinda-maybe know that quantum mechanics has elements which are more properly random than pseudorandom. However, quantum computing is reversible, and the black hole information paradox suggests that physicists don't treat quantum effects as making states an indistinct mess; it is a distinct mess, where entanglements and other things make it tricky to keep track of stuff, but this doesn't come at the sacrifice of clockworkiness.
In particular, quantum mechanics has entanglement, which means that even if a classical mechanism is "fuzzed" by exposure to true quantum spread, that spread is often correlated; that is, entangled states are produced which have the potential to keep choices distinct. For example, if Monty chooses the valid door to reveal via a true quantum coin, the situation can still be benefited from by switching. Even if the car is in an equal superposition behind any of the doors, if Monty opens correct doors (i.e. Monty's reveal is entangled so as to never reveal a car), then the puzzle remains solvable. Just the involvement of actual randomness isn't sufficient to say that distinctions are impossible, but I lack the skill to distinguish what the requirements for that would be.
However if there was true “washing out” then the correlation between the orderly and the random should be broken. If a coin is conditional on what happens before the flip then it is not a fair coin.
The credibility link would not be associated with trust in the newspaper but with trust in the judges of the prediction market. It might be that having a single authority whose one job is to make judgements on what the "objective results" are is more efficient than current arrangements. But it is not clear that you could convince randoms that you are such a fair checker simply by using a scoring system.
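The "scoring system" idea can be sketched with a Brier score, a standard proper scoring rule (the forecaster data below is made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; a forecaster who is always certain and right scores 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]
confident_and_right = [0.9, 0.1, 0.8, 0.9, 0.2]
always_fifty_fifty = [0.5] * 5
# A well-calibrated forecaster beats one who always hedges at 50:50,
# so the score itself can carry credibility rather than the judge's name.
assert brier_score(confident_and_right, outcomes) < brier_score(always_fifty_fifty, outcomes)
```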
It seems hard for me to imagine that somebody who receives a low score would continue to believe that the giver of the low score is a good authority on others.
There are at least 3 levels of hearing: recognising the sound, recognising its arrival direction, and recognising its echoes.
The property of being thinkable as a real function really applies only to mono hearing. You can get the richer aspects by considering a real field. But phenomenologically, sound has a direction, and echolocation is a mode of hearing as well. Basic spoken language glosses over these features.
Using a time-series method of communication has, in my guess, a lot more to do with a given set of data being available to the brain at once, this data access forming the natural unit of meaning. That is, it has less to do with air's properties as a medium and more to do with understandability and the ability to nail down any recurring resemblances in the first place.
In kanji-based languages it is more common to have a wider shared written language, and the written forms have more centrality to the speakers than the spoken forms. That is, things being homophones is easily glossed over if they are heterographs. Kanji also display the property that radicals are free to associate in a non-linear way within a complex kanji, forming what would be a compound word but at the letter level (e.g. the kanji for month has the kanji for moon in it).
Even in linear language we often convey structures which don't piggyback on the linearity of language, like familial relationships such as niece and uncle. The interface is for communicating, not thinking.
The balance of the forces is not the same for all individuals and groups.
Maybe it is because of my neurodivergence, but I find that I would, and have, totally written essays pushing boundaries on what an essay is and expressed views beyond reasonable understandability.
If everyone is different on every axis, the problem isn't monotony but lack of standards. Often interoperability is achieved either by being similar enough that the architecture doesn't change from one unit to the next, or by an interface allowing the hiding of implementation details. Being wildly off means it is hard to be relevant for the operation of others; everybody is just an alien. Instead of learning a language to communicate in, you would essentially learn a new language per individual you want to interact with.
I can make sense of authority from a subjectivist viewpoint. People might be suggestible; there might be some quirks of their psychology that make them behave in certain ways, ropes you can pull to get specific results. That is, a command can be a hack attempting to exploit the other. Assuming that the other is completely reflectively consistent might grant them exploit-freeness. But most real systems do have hack vulnerabilities.
This might not be that popular a viewpoint because it can get very anti-cooperative. If you argue someone into a position that they would transition away from upon reflection, it is not a stable reflection of deeper principles.
If you truly oppose Clippy, you might be morally fine trying to confuse it and get it to act against its values to the extent that you can. But in polite company, regressing a participant in an ethical discussion can get heavily frowned upon.
In hacking terms, if you have root access you are free to do as you please, but that position might still be ill-gotten. You might not actually have any business wielding admin powers. The system doesn't condition its compliance on intents and purposes but on the command being given in the correct form and through the right channels. In this sense "authorization" doesn't actually have to do with authority.
The "stop" can also be seen as a suggestion given in the hope that it finds purchase. I think being too knowledgeable about the perpetrator's evil would make you not try, or you would know beforehand that you would be effective. Only when you don't know by which mechanism the prompt would land would you give it a blind shot. It is like spitting out a conjecture in the hope that they prove it to themselves. If you knew of a proof you would state it; if you didn't think they were intelligent enough to consider the matter, you would stay silent.
In Milgram's experiment it is not required that the experimenter and the test subject agree on ethics to a great degree. But the effect of the white coats suppressing the zealousness of the test subjects is a real thing. And I would think that the setup could be made to take the opposite "moral authority" stance, like having Amnesty flyers on the walls or engaging in ethical discussion amid the "training" questions, etc.