Thanks! This, together with gjm’s comment, is very informative.
How is the base or fundamental frequency chosen? What is special about the standard ones?
the sinking of the Muscovy
Is this some complicated socio-political ploy denying the name Moskva / Moscow and going back to the medieval state of Muscovy?
I’m a moral anti-realist; it seems to me to be a direct inescapable consequence of materialism.
I tried looking at definitions of moral relativism, and it seems more confused than moral realism vs. anti-realism. (To be sure there are even more confused stances out there, like error theory...)
Should I take it that Peterson and Harris are both moral realists and interpret their words in that light? Note that this wouldn’t be reasoning about the substance of what they’re saying; for me, it would just be a matter of interpreting their words, because people are rarely precise, and moral realists and anti-realists often use the same words to mean different things. (In part because they’re confused and are arguing over the “true” meaning of words.)
So, if they’re moral realists, then “not throwing away the concept of good” means not throwing away moral realism; I think I understand what that means in this context.
Also known as: the categories were made for man.
When Peterson argues religion is a useful cultural memeplex, he is presumably arguing for all of (Western monotheistic) religion. This includes a great variety of beliefs, rituals, practices over space and time—I don’t think any of these have really stayed constant across the major branches of Judaism, Christianity and Islam over the last two thousand years. If we discard all these incidental, mutable characteristics, what is left as “religion”?
One possible answer (I have no idea if Peterson would agree): the structure of having shared community beliefs and rituals remains, but not the specific beliefs, or the specific (claimed) reasons for holding them; the distinctions of sacred vs. profane remain, and of priests vs. laymen, and of religious law vs. freedom of action in other areas, but no specifics of what is sacred or what priests do; the idea of a single, omniscient, omnipotent God remains, but not that God’s attributes, other than being male; the idea that God judges and rewards or punishes people remains, but no particulars of what is punished or rewarded, or what punishments or rewards might be.
ETA: it occurs to me that marriage-as-a-sacrament, patriarchy, and autocracy have all been stable common features of these religions. I’m not sure whether they should count as features of the religion, or of a bigger cultural package which has conserved these and other features.
Atheists reject the second part of the package, the one that’s about a God. But they (like everyone) still have the first part: shared beliefs and rituals and heresies, shared morals and ethics, sources of authority, etc. (As an example, people sometimes say that “Science” often functions as a religion for non-scientists; I think that’s what’s meant; Science-the-religion has priests and rituals and dogmas and is entangled with law and government, but it has no God and doesn’t really judge people.)
But that’s just what I generated when faced with this prompt. What does Peterson think is the common basis of “Western religion over the last two thousand years” that functions as a memeplex and ignores the incidentals that accrue like specific religious beliefs?
They are both pro free speech and pro good where “good” is what a reasonable person would think of as “good”.
I have trouble parsing that definition. You’re defining “good” by pointing at “reasonable”. But people who disagree on what is good will not think each other reasonable.
I have no idea what actual object-level concept of “good” you meant. Can you please clarify?
For example, you go on to say:
They both agree that religion has value.
I’m not sure whether religion has (significant, positive) value. Does that make me unreasonable?
Amazon using an (unknown, secret) algorithm to hire or fire Flex drivers is not an instance of “AI”, not even in the buzzword sense of AI = ML. For all we know it’s doing something trivially simple, like combining a few measured properties (how often they’re on time, etc.) with a few manually assigned weights and thresholds. Even if it’s using ML, it’s going to be something much more like a bog-standard Random Forest model trained on 100k rows with no tuning than a scary, powerful language model with a runaway growth trend.
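To make “trivially simple” concrete, here’s a sketch of the two kinds of system I have in mind (the feature names, weights, thresholds, and data are all invented for illustration; I’m not claiming Amazon does anything like this specifically):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Version 1: no ML at all -- a hand-tuned weighted sum with a cutoff.
# (Feature names and weights are made up.)
def simple_score(on_time_rate, completion_rate, customer_rating):
    score = 0.5 * on_time_rate + 0.3 * completion_rate + 0.2 * (customer_rating / 5.0)
    return "deactivate" if score < 0.7 else "keep"

# Version 2: "bog-standard ML" -- an untuned random forest on ~100k rows
# of tabular driver metrics (here, random synthetic data).
rng = np.random.default_rng(0)
X = rng.random((100_000, 3))                              # made-up driver metrics in [0, 1]
y = (X @ np.array([0.5, 0.3, 0.2]) < 0.7).astype(int)     # made-up "deactivate" labels
model = RandomForestClassifier().fit(X, y)                # default settings, no tuning

print(simple_score(0.9, 0.95, 4.8))                       # -> "keep"
print(model.predict([[0.9, 0.95, 0.96]]))                 # -> almost certainly [0], i.e. "keep"
```

Neither version is anywhere near the kind of system AI-risk arguments are about.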
Even if some laws are passed about this, they’d be expandable in two directions: "Bezos is literally an evil overlord [which is a quote from the linked article], our readers/voters love to hate him, we should hurt him some more"; and "we already have laws establishing protected characteristics in hiring/firing/housing/etc.; if black-box ML models can’t prove they’re not violating those laws, then they’re not allowed". The latter has a very narrow domain of applicability, so it would not affect AI risk.
What possible law or regulation, now or in the future, would differentially impede dangerous AI (on the research path leading to AGI) and all other software, or even all other ML? A law that equally impedes all ML would never get enough support to pass; a law that could be passed would have to use some narrow discriminating wording that programmers could work around most of the time, and so accomplish very little.
Epistemic status: wild guessing.
If the US has submarine locators (or even a theory or a work-in-progress), it has to keep them secret. The DoD or Navy might not want to reveal them to any Representatives. This would prevent them from explaining to those Representatives why submarine budgets should be lowered in favor of something else.
A submarine locator doesn’t stop submarines by itself; you still presumably need to bring ships and/or planes to where the submarines are. If you do this ahead of time and just keep following the enemy subs around, they are likely to notice, and you will lose strategic surprise. The US has a lot of fleet elements and air bases around the world (and allies), so it plausibly has an advantage over its rivals in terms of being able to take out widely dispersed enemy submarines all at once.
Even if others also secretly have submarine locators, there may be an additional anti-sub-locator technology or strategy that the US has developed and hopes its rivals have not, which would keep US submarines relevant. Building a sub-locator might be necessary but not sufficient for building an anti-sub-locator.
Now write the scene where Draco attempts to convince his father to accept Quirrel points in repayment of the debt.
“You see, Father, Professor Quirrel has promised to grant any school-related wish within his power to whoever has the most Quirrel points. If Harry gives his points to me, I will have the most points by far. Then I can get Quirrel to teach students that blood purism is correct, or that it would be rational to follow the Dark Lord if he returns, or to make me the undisputed leader of House Slytherin. That is worth far more than six thousand galleons!”
Lord Malfoy looked unconvinced. “If Quirrel is as smart as you say, why would he promise to grant such an open-ended wish? He warned you that Quirrel points were worth only one-tenth of House points, a popularity contest designed to distract fools from true politics and dominated by Quidditch seekers. For every plot you buy from Quirrel with your points, he will hatch a greater counter-plot to achieve what he himself truly wants. You must learn, my son, not to rely overmuch on those greater than yourself to serve as your willing agents; the power loaned by them is never free, and it is not truly yours in the end.”
I don’t see an advantage
A potential advantage of an inactivated-virus vaccine is that it can raise antibodies against all viral proteins and not just the spike protein, which would make it harder for future strains to evade the immunity. I think this is also the model implicitly behind this claim that natural immunity (from being infected with the real virus) is stronger than the immunity gained from spike-only (e.g. mRNA) vaccines. (I make no claim that that study is reliable, and just on priors it probably should be ignored.)
direct sources are more and more available to the public… But simultaneously get less and less trustworthy.
The former helps cause the latter. Sources that aren’t available to the public, or are not widely read by the public for whatever reason, don’t face the pressure to propagandize, whether to influence the public or to be seen as ideologically correct by it.
Of course, influencing the public is only one of several drives to distort or ignore the truth, and less public fora are not automatically trustworthy.
Suppose that TV experience does influence dreams—or the memories or self-reporting of dreams. Why would it affect specifically and only color?
Should we expect people who watch old TV to dream in low resolution and non-surround sound? Do people have poor reception and visual static in their black-and-white dreams? Would people who grew up with mostly over-the-border transmissions dream in foreign languages, or have their dreams subtitled or overdubbed? Would people who grew up with VCRs have pause and rewind controls in their dreams?
Some of these effects are plausible. Anecdotally, I watched a lot of anime, and I had some dreams in pseudo-Japanese (I don’t speak Japanese). I don’t remember ever dreaming subtitles though.
Does either explanation of the black-and-white effect make predictions about which other effects should be present, and why?
Epistemic status: anecdote.
Most of the dreams I’ve ever had (and remembered in the morning) were not about any kind of received story (media, stories told to me, etc.). They were all modified versions of my own experiences, like school, army, or work, sometimes fantastically distorted, but recognizably about my experiences. A minority of my dreams have been about stories (e.g. a book I read), usually from a first-person point of view (e.g. a self-insert into the book).
So for me, dreams are stories about myself. And I wonder: if these people had their dreams influenced by the form of media, were they influenced by the content as well? Or did they dream about their own lives in black and white? The latter would be quite odd.
He’s saying that it’s extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it’s similarly unobvious whether we should worry about the suffering of edge detectors.
Being concerned implies that 1) something has experiences, 2) those experiences can be negative or disliked in a meaningful way, and 3) we morally care about that.
I’d like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?
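(For concreteness, here is roughly what a complete “simple edge detector program made from code” looks like: a standard Sobel filter, nothing specific to any theory of experience. The point is just how little machinery it takes.)

```python
import numpy as np
from scipy.ndimage import convolve

def detect_edges(image: np.ndarray) -> np.ndarray:
    """Return per-pixel edge strength for a 2D grayscale image (Sobel filter)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient kernel
    ky = kx.T                                            # vertical gradient kernel
    gx = convolve(image.astype(float), kx)
    gy = convolve(image.astype(float), ky)
    return np.hypot(gx, gy)                              # large where intensity changes sharply

edges = detect_edges(np.eye(32) * 255)  # a diagonal line shows up as strong edges
```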
Could the brain be logically divided in N different ways, such that we’d worry about the experience of a certain sub-network under division A, but not worry about a different sub-network under division B, even though the two are composed mostly of the same neurons and we just model them differently?
We talk about edge detectors mostly because they’re simple and “stand-alone” enough that we located and modeled them in the brain. There are many more complex and less isolated parts of the brain we haven’t isolated and modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?
Finally, if very high-level parts of my brain (“I”) have a good experience, while a theory leads us to think that lots of edge detectors inside my brain are having bad experiences (“I can’t decide if that’s an edge or not, help!”), what might a moral theory look like that resolves or trades off these experiences against each other?
This is a question similar to “am I a butterfly dreaming that I am a man?”. Neither belief can be connected to any other empirical or logical belief, or to any prediction about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I’m curious whether this is a theorem in some formalized belief structure.)
For example, there’s an argument about Boltzmann brains (B-brains) that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for only a brief moment and will then chaotically dissolve, in a kind of time-reverse of their fluctuating into existence.
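(The usual back-of-the-envelope version of “simple fluctuations are vastly more likely”, as a heuristic not tied to any particular cosmological model: the probability of a thermal fluctuation that lowers entropy by $\Delta S$ scales roughly as $e^{-\Delta S / k_B}$, so

$$\frac{P(\text{lone brain fluctuates into existence})}{P(\text{whole low-entropy universe fluctuates into existence})} \sim e^{(\Delta S_{\text{universe}} - \Delta S_{\text{brain}})/k_B} \gg 1,$$

because the entropy decrease needed for an entire universe is vastly larger than the one needed for a single brain.)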
Should a B-brain expect a chaotic dissolution in its near future? No, because the very concepts of physics and thermodynamics that lead it to make such predictions are themselves the results of random fluctuations. It remembers reading arguments and seeing evidence for Boltzmann’s entropy theorem, but those memories are false, the result of random fluctuations.
So a B-brain shouldn’t expect anything at all (conditioning on its own subjective probability of being a B-brain). That means a belief in being a B-brain isn’t something that can be tied to other beliefs and questioned.
Title typo: cvoid.
Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary
Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big, powerful modules that aggregate sub-modules? Or is it just that others posit only a few large modules, whereas Kurzban thinks we must model both small and large agents at once?
We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be most human experience is as mysterious to our conscious minds as bat experiences.
If “human experience” includes the experience of an edge detector, I have to ask for a definition of “human experience”. Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?
Finding the percentage of “immigrants” is misleading, since it’s immigrants from Mexico and Central America who are politically controversial, not generic “immigrants” averaged over all sources.
I’m no expert on American immigration issues, but I presume this is because most immigrants come in through the (huge) southern land border and are much harder for the government to control than those coming in by air or sea.
However, I expect immigrants from any other country outside the Americas would be just as politically controversial if large numbers of them started arriving, and an open borders policy with Europe or Asia or Africa would be just as unacceptable to most Americans.
Are Americans much more accepting of immigrants from outside Central and South America?
immigrants are barely different from natives in their political views, and they adopt a lot of the cultural values of their destination country.
The US is famous for being culturally and politically polarized. What does it even mean for immigrants to be “barely different from natives” politically? Do they have the same (polarized) spread of positions? Do they all fit into one of the existing political camps without creating a new one? Do they all fit into the in-group camp for Caplan’s target audience?
And again:
[Caplan] finds that immigrants are a tiny bit more left-wing than the general population but that their kids and grandkids regress to the political mainstream.
If the US electorate is polarized left-right, does being a bit more left-wing mean a slightly higher percentage of immigrants than of natives are left-wing, but immigrants are still as polarized as the natives?
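As a toy illustration of the distinction I’m asking about (all numbers invented, nothing from Caplan’s data):

```python
import numpy as np

# Political position coded -1 (left), 0 (center), +1 (right); counts per 100 people.
natives    = np.array([-1] * 48 + [0] * 4 + [1] * 48)   # 48% left, 48% right
immigrants = np.array([-1] * 53 + [0] * 4 + [1] * 43)   # "a tiny bit more left-wing"

for name, group in [("natives", natives), ("immigrants", immigrants)]:
    print(name, round(group.mean(), 2), round(group.std(), 2))
# natives     0.0   0.98
# immigrants -0.1   0.97   <- the mean shifts slightly left, but the spread (polarization)
#                             barely changes
```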
Thanks for pointing this out!
A few corollaries and alternative conclusions to the same premises:
There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
Since the learning is transferable to other domains, it’s not language-specific. Large language models are just where we happened to build good enough models first. You quote discussion of the special properties of natural language statistics, but, by assumption, there are similar statistical properties in other domains. The more a property is specific to language, or necessary because of the special properties of language, the less likely it is to be a universal property that transfers to other domains.
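A sketch of the kind of experiment this points to (the “frozen pretrained transformer” style of setup; the model choice, dimensions, and task here are my own illustrative assumptions, not a description of the work quoted above):

```python
import torch
from transformers import GPT2Model

backbone = GPT2Model.from_pretrained("gpt2")       # transformer pretrained on language
for p in backbone.parameters():
    p.requires_grad = False                        # freeze the language-trained weights

d_in, n_classes = 16, 2                            # assumed non-language input size / task
embed_in = torch.nn.Linear(d_in, backbone.config.n_embd)    # map the new domain into the model
head_out = torch.nn.Linear(backbone.config.n_embd, n_classes)

def classify(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, seq_len, d_in) sequences from some non-language domain."""
    h = backbone(inputs_embeds=embed_in(x)).last_hidden_state
    return head_out(h[:, -1])                      # only embed_in and head_out get trained

logits = classify(torch.randn(4, 32, d_in))        # if this transfers well, the "cross-domain
                                                   # property" lives in the frozen weights
```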