:D mfw OUR CORE VALUES are
other people’s values other people’s values
other people’s values other people’s values
(ಠ_ಠ) mfw jenn attributes the development of civil discourse to the woke
I regret writing that as if I were confident of my opinion. Hinduism is the major world religion that I know the least about, and I knew less about it 17 years ago.
(I’ve read more of the Koran since posting this, and retract my statement about Islam being like Christianity. I now see Islam as a political movement masquerading as a religion, more like the Iroquois Confederacy than like Christianity.)
I didn’t say Hinduism doesn’t make moral claims, or impose duties. I don’t consider “vegetarianism is virtuous” to be a fact about the world. I don’t remember why I said Hinduism isn’t about making claims about the world. It makes philosophical claims about the nature of existence, which you might say is making the ultimate claims about the world.
Probably I meant that to be a Hindu, you don’t have a long list of very specific and theoretically falsifiable facts which you must believe, such as that Jesus was born of a virgin, suffered under Pontius Pilate, was crucified, and rose again on the 3rd day, etc. Christianity has a lot of claims that specific events happened at specific times and places, for specific purposes; and if you fail to believe any one of them in fact happened, for the reason given, some people say you aren’t a Christian. Catholicism has a LOOONG list of things you’re required to believe, called the Catechism, and many of them are impossible for most Catholics to understand. There were many centuries in which you could get burned to death for publicly denying any one of thousands of points of dogma which were officially stamped as dogma in a variety of ways (scripture, church council, certain Papal declarations, being in the Catechism, for instance). The Bible doesn’t say that the Sun moves around the Earth, and I don’t think it was even official dogma; but Galileo would have been burned if he kept on denying it.
Hinduism makes claims about the world, and has epics containing lots of events which maybe you are supposed to believe actually happened. I’ve read that some Hindus would get upset at someone who denied Rama was a real person. But I don’t think anybody has ever been burned to death for it.
Re. attitude towards the gods, what I’ve read about Hinduism said that Hindu theologians or philosophers usually see the gods as manifestations or symbols rather than as people who live one timeline, being at just one place at one time, while the masses see them as people living within time. I have the possibly bad habit of using the name of a religion to denote the rigorous theology rather than the folk practice. Probably I do this because I was raised Christian, and the theological Christians, like Catholics and Calvinists, must continually distinguish between “real” Christians who adhere to all the dozens or thousands of points of doctrine, and “phony” Christians who just go to Church on Sunday.
I was with you until this:
“Ultimately, the civilization systematically destroys the ability of its unreasonable men to compete for the slots in the society where rationality is required to maintain the society’s energy, and the society loses the ability to respond coherently to threats and collapses.”
The problem isn’t that the unreasonable rationalists can’t compete for the slots of power. The problem is that they can; and they take over all those slots, and have no conservatives around to tell them to try to understand the purpose of fences before tearing them down.
Re. “If I didn’t know the history of connectionism, and I didn’t know scientific history in general—if I had needed to guess without benefit of hindsight how long it ought to take to go from Perceptrons to backpropagation—then I would probably say something like: “Maybe a couple of hours? Lower bound, five minutes—upper bound, three days.” :
Using gradient descent was figured out quickly. The problem was that it wasn’t useful to build multiple layers using backprop because, as you mentioned, perceptrons used linear activation functions, so each layer of the network was doing a matrix multiplication. If it’s matrix multiplication all the way down, you can’t solve categorization problems which aren’t linearly separable, which is approximately all of them.
What was hard was thinking non-linearly. This is even more surprising to me, since Minsky’s Perceptrons explicitly pointed out that linearity was the problem! An entire field just rolled over and died because someone said “You can’t do that with linear functions”, and nobody thought of using a nonlinear function!
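The two claims above can be checked in a few lines of numpy: stacking linear layers collapses to a single matrix multiplication, while one hidden layer with any nonlinearity (a step function here, with hand-picked illustrative weights) computes XOR, the classic non-linearly-separable problem.

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

# 1. Stacked linear layers collapse into one matrix multiplication.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(4, 4))
W3 = rng.normal(size=(1, 4))
x = rng.normal(size=(2,))
deep_linear = W3 @ (W2 @ (W1 @ x))
collapsed = (W3 @ W2 @ W1) @ x        # one equivalent "layer"
assert np.allclose(deep_linear, collapsed)

# 2. XOR is not linearly separable, but a single nonlinear hidden
#    layer solves it (weights chosen by hand for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Wh = np.array([[1.0, 1.0],            # OR-like hidden unit
               [1.0, 1.0]])           # AND-like hidden unit
bh = np.array([-0.5, -1.5])
wo = np.array([1.0, -1.0])            # OR and not AND
bo = -0.5
hidden = step(X @ Wh.T + bh)
out = step(hidden @ wo + bo)
print(out)                            # [0. 1. 1. 0.] — XOR
```

No amount of re-chaining the linear layers in part 1 escapes the collapse; only the step function in part 2 adds expressive power.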
But there is something extremely difficult for humans in thinking non-linearly. For instance, all of the models used to “disprove” group selection assume that the contribution of an allele to a group’s reproduction rate is linearly proportional to the fraction of group members with that allele. (You’d have known this over a decade ago if you’d read my wiki post on group selection before deleting it.) This even though group selection under that linearity constraint is mathematically the same as kin selection, and even though Ed Wilson said loudly for years that eusociality in insects doesn’t correlate with haplodiploidy; it correlates with having a communal defensive structure. Such a structure is useless unless it is finished and large enough to protect the colony, so it provides very little value unless the number of individuals working on it is above some threshold. (Though I worked out some of the math in 2013 or so, and it is not trivial to satisfy the conditions needed for group selection; you need a specific kind of nonlinearity. It would not be the default condition.)
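A toy two-group model (with made-up numbers, not taken from any published analysis) shows the kind of nonlinearity the paragraph above describes: when the group benefit is linear in the altruist fraction, altruism declines, but a threshold benefit, like a defensive structure that only works once enough individuals build it, can make altruism spread despite altruists doing worse inside every group.

```python
def next_altruist_freq(groups, benefit, base=1.0, c=0.3):
    """Global altruist frequency after one generation of
    fitness-proportional reproduction.

    groups  -- list of (altruist_count, group_size) pairs
    benefit -- maps altruist fraction f to a benefit shared
               by every member of the group
    c       -- fitness cost paid only by altruists
    """
    alt_offspring = total_offspring = 0.0
    for k, n in groups:
        B = benefit(k / n)
        alt_offspring += k * (base - c + B)          # altruists pay c
        total_offspring += (k * (base - c + B)
                            + (n - k) * (base + B))  # free-riders don't
    return alt_offspring / total_offspring

groups = [(6, 10), (4, 10)]                # global frequency starts at 0.5
linear = lambda f: 2.0 * f
threshold = lambda f: 2.0 if f >= 0.5 else 0.0

print(next_altruist_freq(groups, linear))     # ~0.470: altruism declines
print(next_altruist_freq(groups, threshold))  # ~0.514: altruism spreads
```

The benefit budget and cost are identical in both runs; only the shape of the benefit function differs, which is the point: the linear case reduces to ordinary kin-selection arithmetic, while the threshold case rewards groups that cross the cooperation threshold.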
This post admits that EFA sometimes works, that the examples we see in college usually work, and that “Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen, or at least manage to grapple with most of the probability mass.”
People who make this argument think they have managed to grapple with most of the probability mass. So this article justifies the method it is calling out as especially bad.
Even the most moral people—in fact, especially the “most moral” people—do not incorporate the benefits to others, especially future others, into their utility functions.
I clicked on the “I changed my mind” icon to indicate that I changed my mind about this old view of mine, not that my old view changed my mind today. Oops. I can’t delete it.
“Someone with a cleft palate has 25 times as great a chance of having a child with a cleft palate, as someone without a cleft palate does.”
I know many people whose lives were radically changed by The Lord of the Rings, The Narnia Chronicles, Star Wars, or Ender’s Game.
The first three spawned a vast juvenile fantasy genre which convinces people that they’re in a war between pure good and pure evil, in which the moral thing to do is always blindingly obvious. (Star Wars at least had a redemption arc, and didn’t divide good and evil along racial lines. In LotR and Narnia, as in Marxism and Nazism, the only possible solution is to kill or expel every member of the evil races/classes.) I know people on both sides of today’s culture war who I believe were radicalized by Lord of the Rings.
Today’s readers don’t even know fantasy wasn’t that way before Tolkien and Lewis! It was adult literature, not wish-fulfilment. Read Gormenghast, A Voyage to Arcturus, The Worm Ouroboros, or The King of Elfland’s Daughter. It often had a nihilistic or tragic worldview, but never the pablum of Lewis or Tolkien.
Ender’s Game convinces people that they are super-geniuses who can turn the course of history single-handedly. Usually this turns out badly, though it seems to have worked for Eliezer.
Calling cancer a disease is like calling aging a disease. We definitely want to call it a disease, because otherwise it couldn’t get federal funding. But a doctor is unlikely to see two cancer cases in her lifetime which have exactly the same causes. Cancerous cells appear to typically have about 100 mutations, about 10 of which are likely to have collectively caused the cancer, based on analysis of the gene networks they affect. Some of the genes mutated are mutated in many cancers (eg BRCA1, p53); some are not.
The gene networks disrupted in cancer are generally related to the regulation of the cell cycle, DNA repair, or apoptosis. Any set of mutations that damages these networks sufficiently may cause cancer, but the specific way cancer develops will depend on the precise mutations. So when we ask “what causes cancer”, we’re not asking a question that has a specific answer, like “what causes AIDS”; we’re asking a question which is more like asking “what causes my car to stop running”. DNA damage may cause cancer, just like shooting enough bullets at your car may cause it to stop running.
Today we can distinguish cancers with about the level of resolution that we might say, “This car stopped running because its tires deflated”, “This car stopped because its oil leaked out”, “This car stopped because its radiator fluid leaked out.” To fix the car, you’d really like to know exactly which of many hoses, fuses, or linkages were destroyed, which is analogous to knowing exactly which genes were mutated. (My analogy loses accuracy here because car-part networks can be more-easily disrupted, while gene networks can be more-easily pushed back into a healthy attractor by a generic up-regulation or down-regulation caused by some drug. Also, you can’t fix a car by removing all the damaged parts.)
It’s been obvious for many years that curing cancer requires personalized medicine of the kind mentioned in this post, in which what the FDA approves is an algorithm to find a custom cure for any individual, not a specific chemical or treatment. I’m very glad to hear the FDA has taken this step.
I expect a generic algorithm to cure cancer will require cell simulation, and probably tissue and biofilm simulation to get the drugs, siRNAs, plasmids, or whatever into the right cells.
This also sounds like the stereotypical literary / genre fiction distinction.
And it sounds like the Romantic craft / art distinction. The concepts of human creativity, and of visual art as something creative or original rather than as craftsmanship or expertise, were both invented in France and England around 1800. Before then, for most of history in most places, there was no art/craft distinction. A medieval court artist might paint portraits or build chairs. As far as I’ve been able to determine, no one in the Western world but madmen and children ever drew a picture of an original story, which they made up themselves, before William Blake—and everybody knows he was mad.
This distinction was inverted with the modern art revolution. The history of modern art that you’ll find in books and museums today is largely bunk. It was not a reaction to WW1 (modern art was already well-developed by 1914). It was a violent, revolutionary, Platonist spiritualist movement, and its foundational belief was the rejection of the Romantic conception of originality and creativity as the invention of new stories, to be replaced by a return to the Platonist and post-modernist belief that there was no such thing as creativity, only divine inspiration granting the Artist direct access to Platonic forms. Hence the devaluation of representational art, with its elevation of the creation of new narratives and new ideas, to be replaced by the elevation of new styles and new media; and also the acceptance of the revolutionary Hegelian doctrine that you don’t need to have a plan to have a revolution, because construction of something new is impossible. In Hegel, all that is possible, and all that is needed, to improve art or society, is to destroy it. This is evident in eg Ezra Pound’s BLAST! and the Dada Manifesto. Modern artists weren’t reacting to WW1; they helped start it.
References for these claims are in
Modernist Manifestos & WW1: We Didn’t Start the Fire—Oh, Wait, we Totally Did
Some chickens will be coming home to roost now that the only part of art that AI isn’t good at—that of creating new ideas and new stories that aren’t just remixes of the old—is that part which modern art explicitly rejected.
That’s an old game. My first PhD advisor did nothing with my thesis chapters but mark grammatical errors in red pen and hand them back. If your advisor isn’t doing anything else for you now, he certainly won’t do anything for you after you’ve graduated. You may need to get a new advisor.
I realize that I ignored most of the post in my comment above. I’m going to write a sloppy explanation here of why I ignored most of it, which I mean as an excuse for my omissions, rather than as a trustworthy or well-thought-out rebuttal of it.
To me, the post sounds like it was written based on reading Hubert Dreyfus’ What Computers Can’t Do, plus the continental philosophy that was based on, rather than on materialism, computationalism, and familiarity with LLMs. There are parts of it that I did not understand, which for all I know may overcome some of my objections.
I don’t buy the vitalist assertion that there aren’t live mental elements underlying the LLM text, nor the non-computationalist claim that there’s no mind that is carrying out investigations. These are metaphysical claims.
I very much don’t buy that LLM text is not influenced by local-contextual demands from “the thought” back to the more-global contexts. I would say that is precisely what deep neural networks were invented to do, and what 3-layer backprop networks don’t do.
Just give someone the prompt? It wouldn’t work, because LLMs are non-deterministic.
I might not be able to access that LLM. It might have been updated. I don’t want to take the time to do it. I just want to read the text.
“If the LLM text contains surprising stuff, and you DID thoroughly investigate for yourself, then you obviously can write something much better and more interesting.”
This is not obvious, and certainly not always efficient. Editing the LLM’s text, and saying you did so, is perfectly acceptable.
This would be plagiarism. Attribute the LLM’s ideas to the LLM. The fact that an LLM came up with a novel idea is an interesting fact.
The most-interesting thing about many LLM texts is the dialogue itself—ironically, for the same reasons Tsvi gives that it’s helpful to be able to have a dialogue with a human. I’ve read many transcripts of LLM dialogues which were so surprising and revelatory that I would not have believed them if I were just given summaries of them, or which were so complicated that I could not have understood them without the full dialogue. Also, it’s crucial to read a surprising dialogue yourself, verbatim, to get a feel for how much of the outcome was due to leading questions and obsequiousness.
But I don’t buy the argument that we shouldn’t quote LLMs because we can’t interrogate them, because
it also implies that we shouldn’t quote people or books, or anything except our own thoughts
it’s similar to the arguments Plato already made against writing, which have proved unconvincing for over 2000 years
we can interrogate LLMs, at least more-easily than we can interrogate books, famous people, or dead people
We care centrally about the thought process behind words—the mental states of the mind and agency that produced the words. If you publish LLM-generated text as though it were written by someone, then you’re making me interact with nothing.
This implies that ad hominem attacks are good epistemology. But I don’t care centrally about the thought process. I care about the meaning of the words. Caring about the process instead of the content is what philosophers do; they study a philosopher instead of a topic. That’s a large part of why they make no progress on any topic.
“Why LLM it up? Just give me the prompt.” Another reason not to do that is that LLMs are non-deterministic. A third reason is that I would have to track down that exact model of LLM, which I probably don’t have a license for. A fourth is that text storage on LessWrong.com is cheap, and my time is valuable. A fifth is that some LLMs are updated or altered daily. I see no reason to give someone the prompt instead of the text. That is strictly inferior in every way.
I think that referring to LLMs at all in this post is a red herring. The post should simply say, “Don’t cite dubious sources without checking them out.” The end. Doesn’t matter whether the sources are humans or LLMs. I consider most recent LLMs more-reliable than most people. Not because they’re reliable; because human reliability is a very low bar to clear.
The main point of my 1998 post “Believable Stupidity” was that the worst failure modes of AI dialogue are also failure modes of human dialogue. This is even more true today. I think humans still produce more hallucinatory dialogue than LLMs. Some I dealt with last month:
the millionaire white male Ivy-league grad who accused me of disagreeing with his revolutionary anti-capitalist politics because I’m privileged and well-off, even though he knows I’ve been unemployed for years, while he just got his third start-up funded and was about to buy a $600K house
friends claiming that protestors who, on video, attacked a man from several sides before he turned on them, did not attack him, but were minding their own business when he attacked them
my fundamentalist Christian mother, who knows I think Christianity is completely false, yet keeps quoting the Psalms to me and is always surprised when I don’t call them beautiful and wise
These are the same sort of hallucinations as those produced by LLMs when some keyword or over-trained belief spawns a train of thought which goes completely off the rails of reality.
Consider the notion of “performativity”, usually attributed to the Nazi activist Heidegger. This is the idea that the purpose of much speech is not to communicate information, but to perform an action, and especially to enact an identity such as a gender role or a political affiliation.
In 1930s Germany, this manifested as a set of political questions, each paired with a proper verbal response, which the populace was trained in behavioristically, via reward and punishment. Today in the US, this manifests as two opposing political programs, each consisting of a set of questions paired with their proper verbal responses, which are taught via reward and punishment.
One of these groups learned performativity from the Nazis via the feminist Judith Butler. The other had already learned it at the First Council of Nicaea in 325 AD, in which the orthodox Church declared that salvation (and not being exiled or beheaded) depended on using the word homoousios instead of homoiousios, even though no one could explain the difference between them. The purpose in all four cases was not to make an assertion which fit into a larger argument; it was to teach people to agree without thinking by punishing them if they failed to mouth logical absurdities.
So to say “We have to listen to each other’s utterances as assertions” is a very Aspie thing to say today. The things people argue about the most are not actually arguments, but are what the post-modern philosophers Derrida and Barthes called “the discourse”, and claimed was necessarily hallucinatory in exactly the same way LLMs are today (being nothing but mash-ups of earlier texts). Take a stand against hallucination as normative, but don’t point to LLMs when you do it.
Yeah, probably. Sorry.
I didn’t paste LLM output directly. I had a much longer interaction with 2 different LLMs, and extracted the relevant output from different sections, combined them, and condensed it into the very short text posted. I checked the accuracy of the main points about the timeline, but I didn’t chase down all of the claims as thoroughly as I should have when they agreed with my pre-existing but not authoritative opinion, and I even let bogus citations slip by. (Both LLMs usually get the author names right, but often hallucinate later parts of a citation.)
I rewrote the text, keeping only claims that I’ve verified, or that are my opinions or speculations. Then I realized that the difficult, error-laden, and more-speculative section I spent 90% of my time on wasn’t really important, and deleted it.
Me too! I believe that evolution DID fix it—apes don’t have this problem—and that the scrotum devolved after humans started wearing clothes. ’Coz there’s no way naked men could run through the bush without castrating themselves.
Don’t start with obsidian! It’s expensive, and the stone you’re most-likely to cut yourself on. It’s vicious. Wear leather gloves and put a piece of leather in your lap.
An old flint-knapping joke:
Q. What does obsidian taste like?
A. Blood.
As a failed flintknapper, I say that the most-surprising thing about stone tools is how intellectually demanding it is to make them well. I’ve spent at least 30 hours, spread out across one year, with 3 different instructors, trying to knap arrowheads from flint, chert, obsidian, and glass (not counting time spent making or buying tools and gathering or buying flint); and all I ever made was roughly triangular flakes and rock dust. You need to study the rock, guess where the fracture lines run inside it, and then make a recursive plan to produce your desired final shape. By “recursive” I mean that you plan backwards from the final blow, envisioning which section of the rock will be the final product, and what shape it should have one blow before to make the final blow possible, and then what shape it should have one blow before that to make the penultimate blow possible, and so on back to the beginning, although that plan will change as you proceed. It’s like playing chess with a rock, trying to predict its responses to your blows 4 to 8 moves ahead.
So if I were to speculate on what abilities humans might have evolved on account of stone tool-making, I would think of cognitive ones, not reflexes or manual dexterity.
(I might be tempted to speculate on how the evolution of knapping skills interacted with the evolution of sex or gender roles. But the consensus on to what degree stone knapping was sexed is in such a state of flux that such speculation would probably be futile at present.)
There’s already a lot of experimental archaeology asking what the development of stone tool technology over time tells us about the evolution of human cognition. I haven’t noticed anyone ask whether tech development drives cognitive evolution, in a cyclical process; the default assumption seems to be that causation is one-way, with evolution driving technology, but not vice-versa.
Caveat: I’ve only done a fly-by over this literature myself.
Learning to think: using experimental flintknapping to interpret prehistoric cognition. https://core.tdar.org/document/395518/learning-to-think-using-experimental-flintknapping-to-interpret-prehistoric-cognition [Abstract of a conference talk. You can find references to her later work on this topic at https://www.researchgate.net/profile/Nada-Khreisheh]
Dietrich Stout, 2011. Stone Toolmaking and the Evolution of Human Culture and Cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1567):1050–1059. Analyzes different lithic technologies into action hierarchies to compare their complexity; also graphs the slow polynomial or exponential increase in the number of techniques needed by each lithic technology over 3 million years. Only covers the Oldowan, Acheulean, and Levallois periods.
Antoine Muller, Chris Clarkson, Ceri Shipton, 2017. Measuring behavioural and cognitive complexity in lithic technology throughout human evolution. Journal of Anthropological Archaeology 48:166–180.
Antoine Muller, Ceri Shipton, Chris Clarkson, 2022. Stone toolmaking difficulty and the evolution of hominin technological skills. Scientific Reports 12, 5883. This study analysed video footage and lithic material from a series of replicative knapping experiments to quantify deliberation time (strike time), precision (platform area), intricacy (flake size relative to core size), and success (relative blank length).
“Multiple organizations working on parts of the same problem can achieve more collectively than one big charity alone.”
Do you really mean “working on multiple parts of the same problem can achieve more than working on just one part of the problem”, or do you really believe you can achieve more collectively just by breaking up one big organization into many little organizations?