In posts about circular preferences, cycling was appointed the role of “busy work amounting to nothing” and the highest scorer on the utility function the role of “optimal solution”. Here, however, the roles are pretty much reversed: cyclical movement is “productive work” and stable maximisation is “death”.
The text also adds a heavy interpretative layer on top of the experimental setups. I would not derive the same semantics from the setups alone.
If your code compiles into one program, it’s literally one system.
Sometimes it’s critical that you fail, and you do not really progress before you do. In that kind of situation the question is more whether you fail early or late.
An agent committed to learning from their mistakes gains new capabilities every time they catch one. Basking in how much knowledge you already have has little to no impact on ability gain, so it’s pretty much a waste of time outside of emotional balancing.
Sometimes it’s very important to be genuine rather than accurate. Being honest means people know how to relate to you according to your true state. If you are clueless, people know to give you information. If you are on target, people might not burden you with noise.
It does seem weird that so little communication is achieved with so many words.
I might be conflicted, interpreting the message in opposite directions on different layers.
> Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
This seems like a statement that the argument “we should be pro-theist & cannot allow debate because of bad consequences” would have been an error. If it had been presented as a proposal, it would indeed have been an argument. “Cannot allow debate” would seem like a stance against being able to start arguments. It seems self-refuting, and in general amounts to wanting censorship of censorship, which I have a very tough time classifying as for or against censorship. Now, the situation would be very different if there were a silent or assumed consensus that debate could not be had; it’s quite different if a debate is held and a decision not to have the debate is made.
I lost track of how exactly it relates to this, but I realised that the “look at these guys spreading known falsehoods” kind of attitude made me not want to engage socially, probably by pattern-matching to a soul sufficiently lost that they are not reachable within a discussion’s timeframe. And I realised that the standard for sanity I was using for that comparison came from my local culture, and that the “sanity waterline” situation here might be good enough that I don’t understand other people’s need for order. The funny thing is that there is enough “sanity seeking” within religious groups that I was used to veteran religious people guiding novices away from those pitfalls. If someone prayed for a miracle for themselves, that would be punished and intervened on, and I pretty much knew the guidance even if I didn’t really feel “team religion”: asking a mysterious power for your personal benefit is magic. It’s audacious to ask for that, and it would not be good for the moral development of the one praying to grant that prayer. That is phrased in terms of virtue instead of epistemology, but it is nevertheless insanity under another conceptualization. The other avenue of argument, which focuses on prayer not working, seemed primitive by comparison. I was way too in tune with tracking how sane vs insane religion works to really believe a pitting of reason vs religion (I guess that kind of pitting is present in the parents of this comment; I think I am assuming that other people might have all, or essentially all, of their religion pool be insane, so that their opinion of religion as insane is justified (and I even came up with stories why it could be so because of history)).
I guess part of what sparked me to write in the first place was that “increasing the description length” of things that worsen overall utility seemed like making nonsense harder to understand. My impression was that the goal is to make nonsense plainly and easily recognisable as such. There is some allusion to a kind of zero-sum game going on with description lengths. But my impression was that people have a hard time processing any kind of option, and that shortening all the options is on the table.
I had some idea about how, if a decision procedure is too reflex-like, it never enters the conscious mind to be subject to critique. But negating an unreflective decision procedure is not intelligence either. What you want is to have it enter conscious thought, where you can verify its appropriateness (and where it can be selectively allowed). If you are suffering from an optical illusion, you do not close your eyes; you critically evaluate what you are seeing.
As the theory of chemistry advanced, better concepts emerged to handle what used to be handled with the concept of phlogiston.
With “truth” there is no replacement, just outright rejection.
You could have a theory of information in which certain evidence is enough to secure a conviction. An extreme tool view would then be that it’s just a method for locking people up, should you want to do so. There would be no sense in which the evidence is “reliable” or “accurate”, just damning. Or someone could be interested in what it tells us about what the law has become, and what kind of precedent it sets, if certain evidentiary standards result in certain convictions. The extreme tool view would say that no precedent is formed, there are just people in jail, and talking about the “upheld legal system” would be nonsense. It would make sense to insist that at any point in time multiple interpretations of the law are plausible, so that there is no “one true law”. But it would be weird to say that the outcomes of hypothetical cases are unconstrained. Cases form precedent and reveal something about the legal system, and the legal system is not just the sum of all cases tried. If your case doesn’t exactly match a previous case, you are not without the protection of the law. With predictions there are principles at stake, not just outcomes.
Some structures less complex than brains might be selected for “look-ahead”-like benefits. For example, the evolution of sex, or having features coded in multiple ways in DNA. Some of the DNA encodings might be selected for “evolvability”. Things like epigenetic switches and control genes in general can be seen as a modelling layer a bit more abstract than concrete features.
Panspermia theories have volcanic activity and meteor strikes moving bacteria from world to world. It’s not clear this is off-limits to evolution (or one needs to do some tricky organic-world vs inorganic-world boundary drawing to get a motivated-cognition result).
Nothing. Supercolonies do in fact happen, although they don’t transfer that much food within them.
Because this strategy relies on you having behaviours that you don’t understand, it includes periods of low introspection, which could potentially be dangerous. In general, if you encounter a behaviour you exhibit but don’t understand, you could let it grow so you can examine it, or you could cut/kill it because it doesn’t fulfill your current alignment criteria (behaviours “too stubborn to be killed” are a common failure mode, though).
Slaver ants would make the “warrior prowess” comment relevant. Even with slaver ants, it is unlikely that all nests are enslaved all the time. With too many slavers, they need to fight each other or run out of slave replenishments. With adequate distance between slavers, some nests will be enslaved late or never.
It could also be very plausible that a colony would actually burn the extra calories and benefit from them, for example in the form of extra drones and queens.
If they actually had been better off not noticing, then intentionally not noticing would be a dominant strategy. Them not being there is a different thing from them being there and going unnoticed.
Ants do in fact form supercolonies. They (at least some) choose PvE strategies over PvP mechanics.
The existence of ants has exerted a big evolutionary pressure on other species. A lot of species produce sugar that ants collect, and the ants act to defend these resource sources (which can be understood as a mutually beneficial, transaction-like mechanic). This kind of symbiosis needed to kick off from somewhere, even if there is now a feedback loop keeping it stable. One of the plausible stories is that another insect provided free food, and ants started to regard those insects as a resource rather than a piece of the background. Once they do this, they can favour more generous candymen over stingier ones.
A lot of eusocial insect colonies keep a reduced population over winter. The individuals that collect food during good times might never consume it during harsh times. Because they have a different incentive structure, it’s unlikely a worker ant would save itself at the cost of the colony; “the colony was screwed, but I survived” is an unlikely ant thought. They have behaviours like the oldest members taking on the activities furthest from the nest (guess which individuals are nearest to mortal danger).
A case more troublesome than an ineffective standard is an actively harmful one. Part of the rationalist virtue sphere is recognising your actual impact even when it goes wildly against your expectations. Political speech being a known clusterfuck should orient us towards “get it right” and not so much “apply solutions”. People who grow up into harmony (optimising for harmony in agent speech) while using epistemology as a dump stat are more effective at conversational safety. Even if rationalists have more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes means meaningful interaction is harder. We run the risk of having our models of groups such as theists advocate their interests rather than the persons themselves. Sure, we have distinct reasons why we can’t implement group interoperability the same way they can and do implement it. But if we emphasise how little we value safety versus accuracy, it doesn’t move us towards solving safety. And we are supposedly good at intentionally setting out to solve hard problems. It should also be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an awesome way to conduct business/conversation and then let that discovery benefit others, a move that makes discovery easier but sharing of the results harder might not move us much closer to the goal than naively caring only about discovery.
With “phlogiston” I can sell how much more attractive the “oxygen” story is. Being anti-realist doesn’t tell you what would be better, or how to do the “essential” (whatever that is) things better.
In particular, I am worried that there is no acknowledgement of inductiveness. Sure, it would be arrogant to know beforehand what the correct way to be inductive is, for example by privileging a particular ontology. But insisting that the parts of the models/lines with no underlying data points are unreal or a distraction is like saying that a finite number of points is as good as a line.
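A minimal sketch of that underdetermination point, in a toy setup of my own (none of this is from the original discussion): two models can agree perfectly on every observed point and still disagree everywhere else, so the off-data parts of a model are exactly where it commits to something.

```python
import numpy as np

# Three observed data points; both models below fit them exactly.
xs = np.array([0.0, 1.0, 2.0])
ys = 2 * xs + 1

def line(x):
    """The 'line' model."""
    return 2 * x + 1

def cubic(x):
    """Agrees with the line on the data only: the added term
    vanishes exactly at x = 0, 1, 2."""
    return 2 * x + 1 + x * (x - 1) * (x - 2)

assert np.allclose(line(xs), ys)
assert np.allclose(cubic(xs), ys)  # indistinguishable on the data...

print(line(3.0), cubic(3.0))  # ...but 7.0 vs 13.0 off the data
```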
Witch burnings are dangerous. Some people’s main defence against being burned is to obediently follow community norms. If some of those norms become hazy, then the strategy of adhering to them becomes harder, and it’s possible that factors other than compliance come to influence who gets attacked; thus the defence strategy revolving around compliance becomes less effective. Thus anyone who muddies the community norms is dangerous.
Now, if the community norms screw you over personally in a major way, that is very unfortunate, and it might make sense to make more complex norms to mitigate the inconvenience to some members. But this discussion really can’t take place without risking the stability of the solution for the “majority”, “usual” or “founder” case. Determining what kind of risk is acceptable for the foreseeable improvement of the situation might be very controversial.
There are some physically very able humans who currently don’t have to think about their muscles being employed against each other, thanks to very simplistic box thinking. If you take their boxes away, they might need to use more complex cognitive machinery, which would have a higher chance of malfunctioning, which could result in lower-security situations.
In general it might not be fair to require the people improving the situation for those who suffer from it the most to take into account the slightest security worries of the least impacted. But the effect is there.
Theists can have a hard time formulating the value they place on harmony and community building. Advancing “hard facts” can make them appear more like hateful ignorants, which can make it seem okay to be more socially confrontational with them, which might involve more touching. The psychology behind how racism creates dangerous situations for black people might be a good example of how you don’t need explicit representations of acknowledged dangers for things to be in fact dangerous.
I live in a culture that treats “religion-oriented people” as “whatever floats your boat privately” kind of people, not as “people zealously pushing false beliefs”. I feel that the latter kind of rhetoric makes it easier to paint them “as the enemy” and can be a contributing factor in legitimizing violence against them. Some “rationalist-inspired” work pushes harder on “truth vs falsity” than on the “irrelevance of bullshit”, which has a negative impact on near-term security, while the positive impact on security is contingent on the strategy working out. Note that the danger rationalist-inspired work can create might end up materialising in the hands of people who are far from ideally rational. Yes, some fights are worth fighting, but people also usually agree that having to fight to accomplish something is worse than accomplishing it without fighting. And if you rally people to fight for truth, you are still rallying people to fight. Even if your explicit intention was to avoid rallying, you ended up doing it anyway.
I think the use of the word “psychopathic” is distracting here. The attention to “others as machines” is relevant, but psychopathy has important meanings elsewhere which really don’t apply here. Proper prediction-based domination is more like judo, in that you are using the agent’s own power towards your needs. You never ask for something that would be opposed, except when confrontation is exactly what you want out of the system (and then it’s not true opposition but cooperation).
In general, the standard gripes about free will apply. Natural laws are descriptive, not normative; thus they can’t be oppressive. If somebody develops a theory of what ice cream flavour you like, it just means you have tastes, and it doesn’t limit your taste formation in any way. Having “free ice cream taste” would mean you had no taste at all (that is, you could not like any particular flavour). Having “free will” would mean there is no choice that could be attributed to your character: you have no “decision taste”. As processed here, there is an enormous difference between “agent X can’t predict the particular decision that agent Y will make” and “there is no function from the state of the world and the state of agent Y that outputs agent Y’s decision”.
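A minimal sketch of that last distinction, under assumptions of my own (the function and the states are made up for illustration): the decision below is a fully deterministic function of world and agent state, yet an observer without access to the agent’s state cannot attribute the decision in advance.

```python
import hashlib

def decide(world_state: str, agent_state: str) -> str:
    """Deterministic 'decision function': same inputs, same choice."""
    digest = hashlib.sha256((world_state + "|" + agent_state).encode()).hexdigest()
    return "vanilla" if int(digest, 16) % 2 == 0 else "chocolate"

# The function exists and is deterministic:
assert decide("rainy tuesday", "private inner state") == \
       decide("rainy tuesday", "private inner state")

# Yet agent X, lacking agent Y's state, cannot predict the output:
# the unpredictability is a fact about X's information, not about
# whether such a function exists.
```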
I have found that caring about one kind of distinction makes you care about others. That is, when I primarily care about technical accuracy, I can pinpoint in my thoughts which parts are and are not technically accurate. When there are clusters of non-technicalities, they often show similar symptoms of what needs to be done to them to acquire technical accuracy. That is, I start to model them as “the enemy” and “get inside their head” in order to resist them effectively. Often this leads to the insight that when you give the other motivation its due, it more easily yields the primary motivation I care about. If I am thinking about a politically charged topic and I find my thoughts are not technically accurate, I can start naming emotions, and usually the technical accuracy becomes easier to find. Undifferentiated, they would interfere; explicitly treated, their pull works only within their own “worksphere”.
There is a weird kind of reply, “I agree but disagree on the conceptualization”, which would correspond here to the claim that claims are not tied to the conceptualization used.
For example: “Is blood an effective phlogiston-transfer medium?” The answer is clearly yes, but I would still object that phlogiston is not an appropriate concept for handling the phenomenon in question. Some people might refuse to handle malformed questions, and for some questions the malformation of the concepts affects the answer to such a degree that addressing it is unavoidable.
And guess what: if someone insists on using phlogiston terminology, I might acknowledge that I understand what they are talking about, but I am still going to actively direct them to use a different kind of terminology. And the reason is not that phlogiston is “false” or “doesn’t exist”. For things like the Higgs boson, the defence of why it’s a productive way to address its phenomenon is stronger. But the conceptualization isn’t an entirely free dimension, and the specific failures that specific conceptualizations have can be very important. “The currently active conceptualization” guides expectations in areas where we have no data, and thus concepts can’t be pure reformulations of measurements. Insisting that concepts are merely reformulations of data would mean you should not expect anything in the parts where you don’t have data. Sure, anywhere you expect something to happen, you could plausibly check whether that expectation bears out. But it’s not reasonable to declare everything outside already-seen data to be off-limits.
There is a perfectly good way of treating this with numbers. Transfinite division is a thing. With X people experiencing infinitesimal discomfort and Y people experiencing finite discomfort, if X and Y are finite then torture is always worse. With X transfinite, dust specks could be worse. But conversely, if you insist that the impacts are reals, i.e. finite, then finite multiples always catch up: for any r, y ∈ ℝ with r > 0, there is an n ∈ ℕ such that nr > y (the Archimedean property).
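A worked restatement of the two regimes (notation mine; the hyperreal framing is one way to cash out “transfinite”):

```latex
% Archimedean reals: any positive per-speck harm r, multiplied by
% enough people n, eventually exceeds any fixed torture harm y.
\forall r, y \in \mathbb{R},\; r > 0 \implies \exists n \in \mathbb{N} : nr > y

% Non-Archimedean harms: an infinitesimal \varepsilon stays below y
% under every finite multiplier, but a transfinite count H can pass it.
n\varepsilon < y \quad \text{for all finite } n, \qquad
H\varepsilon > y \quad \text{for suitably infinite } H
```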
In the move, an internal link in the thread broke. There is also no hint in the original thread that a meta-level discussion spawned.
The intellectual commitment is so easy that it is hardly the centerpiece of the issue. When you are asking for such an answer, you are consenting to potentially feeling bad, i.e. you are making a kind of emotional commitment. Sure, receiving a bad answer sucks, but whether you punish its expressor is part of whether they feel safe expressing their opinion. For example, if you were to participate in boxing, acting offended when your face hurts would be unreasonable, unless it’s outside the consent given, for example by occurring outside of rounds.
Sure, to some it might appear to be emotional masochism to allow others to hurt you while removing your possibility of retaliation. But setting it up is an emotional transaction, not an intellectual one.