If you want to discuss or debate an issue to resolution/conclusion with me, explicitly ask for that. I’m open, by request, to putting major effort into resolving disagreements.
Elliot Temple
But you can’t expect any given context to supply you with a set of decisive criteria that narrow your options to one.
Most goals have many solutions which we should be ~indifferent between – they all work and it’s not worth our time to optimize more.
In the cases where optimization is worthwhile and there are multiple solutions, we can narrow it down further by considering more ambitious goals.
As a simple approximation, looking only at viable solutions you want to optimize between, you may maximize one factor. Maximizing a single factor doesn’t require combining factors, dimension conversion, rank ordering or weighting, and keeps the method non-compensatory (a problem with one factor can’t be outweighed by some other factors being good). The problems with non-linear value functions are often quite manageable when dealing with only one non-binary factor. If you model decision making as multiplying many binary factors, you can also multiply in one non-binary factor without the problems that come from multiple non-binary factors. This gives you a simple answer which I don’t consider ideal but it’s mostly OK and doesn’t require reading essays to get a more complicated answer.
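The approach above can be sketched in code. This is a hedged illustration, not a definitive implementation: the candidate data, factor names, and thresholds are all made up for the example.

```python
# Sketch of the method described above: multiply many binary (pass/fail)
# factors together with at most one non-binary factor. Non-compensatory:
# a failure on any binary factor zeroes the result, so it can't be
# outweighed by other factors being good.

def evaluate(candidate, binary_checks, nonbinary_score):
    score = 1
    for check in binary_checks:
        score *= 1 if check(candidate) else 0   # binary factor: 0 or 1
    return score * nonbinary_score(candidate)   # the one non-binary factor

# Hypothetical example: choosing a laptop.
checks = [
    lambda c: c["price"] <= 1000,   # within budget?
    lambda c: c["ram_gb"] >= 16,    # enough RAM for the goal?
]
battery_hours = lambda c: c["battery_hours"]  # single factor to maximize

laptops = [
    {"price": 900, "ram_gb": 16, "battery_hours": 10},
    {"price": 800, "ram_gb": 8,  "battery_hours": 20},  # fails the RAM check
]
best = max(laptops, key=lambda c: evaluate(c, checks, battery_hours))
# 'best' is the first laptop: the second's longer battery life can't
# compensate for failing a binary factor.
```

Note how the second laptop's superior battery life is irrelevant once it fails a binary check, which is the non-compensatory property described above.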
It uses an arbitrary threshold of decisiveness.
Budgets, or more generally goals, aren’t arbitrary and have breakpoints/thresholds inherent in them, which we should look for. The most generic threshold is “enough (or a low enough amount for negative factors) for goal success”.
If so, how can they do that? How would they or their intuition determine what numbers roughly feel right?
They can do that. Asking how they do it doesn’t mean it’s impossible.
My claim is it can’t be done other than via conjectures and refutations, CF, the stuff I’m advocating. I’m claiming that other methods don’t work. If people do it but you don’t know how, that is compatible with my claim, since they may be using the things I’m saying do work. This isn’t counter-evidence against me.
There are many different versions of induction.
Which is why it is difficult to show none of them could possibly work.
They have common themes, so it can be done using abstract arguments as long as people agree in broad strokes on what sorts of things are and aren’t induction. If you start loosening up the definition of “induction” to include C&R, that’s way too broad, and it’s no longer the same thing that Popper or I said doesn’t work, and it no longer fits the historical tradition/meaning of induction (unless we’re missing something, which you’d have to show).
If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
I have picked probabilistic prediction, which can be shown to work directly, without needing a theoretical justification.
My primary concern with literature isn’t the justification but just the specification of how it works. You haven’t provided a well-defined non-moving target for my criticism, as both CR and CF provide to you. Usually, even when highly abstract discussion is pretty effective (as is needed to cover induction generically), it’s still best to go over at least one more specific example, so if you could specify one version of induction in detail (preferably via cite) we could use it as an example.
You know the “aaaaa” pattern is simpler than the others. It’s no great mystery.
I have an answer in that easy case that I believe I got via C&R. If you don’t give the math, then you aren’t showing that some non-C&R method can evaluate simplicity. And just because I have an answer in a few easy cases doesn’t mean that you or I have a good answer in harder cases.
People here like Kolmogorov complexity. That isn’t some unanswerable question.
Kolmogorov complexity is uncomputable and machine-dependent, right? So it’s not a usable approach. That people like it anyway is evidence about how hard the question is and how poor the known answers are.
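To make the machine-dependence point concrete, here is a common computable stand-in for Kolmogorov complexity: compressed length under a fixed compressor. This is an illustrative assumption, not a real complexity measure, and it inherits the machine-dependence problem, since the number you get depends entirely on which compressor you fix.

```python
# Crude proxy for the uncomputable Kolmogorov complexity: length of the
# string after compression by a fixed compressor (zlib here). Different
# compressors give different numbers, mirroring the machine-dependence
# of Kolmogorov complexity under different universal machines.
import zlib

def compressed_len(s: str) -> int:
    return len(zlib.compress(s.encode()))

repetitive = "a" * 100
irregular = "qzj3k!m9x2vw84btp0y5ncr7eh1sgf6d" * 3 + "lou"
# The repetitive string compresses far better, matching the intuition
# that "aaaaa..." is a simpler pattern than an irregular one.
print(compressed_len(repetitive) < compressed_len(irregular))  # True
```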
You don’t need much intelligence to do simple induction, since simple organisms can do it.
I deny that humans can do induction. I also deny that simple organisms can do it. I doubt this is a good sub-topic to go into right now.
Then decisiveness isn’t an objective criterion … it’s a question of setting up a threshold, saying that 80% or 90% or 99% likelihood counts as decisiveness. Decisiveness is disguised weighting, if it isn’t infallibility.
Per my article, decisiveness, like other idea evaluation, depends on the goal and context. “It costs $100” is decisive criticism for a $20 budget goal but not a $200 budget goal.
But this doesn’t use likelihoods or weights. It uses qualitative differences or breakpoints for quantities (which are the points where the difference in quantity makes a qualitative difference). The generic breakpoint is “good enough for success at my goal or not?”
Decisive + indecisive criteria is better than decisive alone, because it enables more fine-grained decision making.
You can do fine-grained decision making, without limitation, using decisive reasoning alone. And convenience comparisons or marginal benefits are irrelevant given my claim (which is currently an open issue under discussion) that indecisive reasoning doesn’t work at all.
If you are only trying to satisfy your own values, then the weighting is just how much you value things in relation to each other. Presumably, your objection is the lack of objective criteria … but if you are making a personal decision, why would that matter?
Epistemology should be general purpose and cover impersonal issues like scientific controversies, and allow for productive debate rather than being subjective or arbitrary.
By no objective criteria do you mean people can and should just subjectively/intuitively make up the numbers with no math? If so, how can they do that? How would they or their intuition determine what numbers roughly feel right? By using intelligence via some other full general-purpose epistemology which has been used as a premise/prerequisite of this approach? My understanding is that for this kind of weighted factor math stuff to be a first epistemology – a first solution to how people think intelligently, as I believe its claimed to be – then the math has to work objectively and you can’t just rely on people somehow intelligently coming up with numbers that are in the right ballpark. If you rely on intelligence then it’s only a secondary method which leaves all the primary questions in epistemology open.
Also if the numbers are being made up non-objectively so they feel about right, why not just make up a conclusion that feels about right directly? What good is the intermediate step of making up the numbers?
“But that’s Conjecture and Refutation!” Maybe it is! If you want to say induction cannot possibly work, and maintain that C&R does work, you need to show that induction isn’t a form of C&R. (And also that it’s failing at something that is actually claimed for it by inductionists.)
There are many different versions of induction. If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other.
Yes. But I can still choose the simplest that fits the data I currently have, i.e. I can do induction in a good-enough way.
Which patterns are simplest? What’s the rule to judge that? Does applying the rule require intelligence as a prerequisite?
Elliot Temple’s Shortform
Lol I was talking to Claude about If Anyone Builds It, Everyone Dies and I hit “safety filters”:
Chat paused
Opus 4.7’s safety filters flagged this chat. Due to its advanced capabilities, Opus 4.7 has additional safety measures that occasionally pause normal, safe chats. We’re working to improve this. Continue your chat with Sonnet 4, send feedback, or learn more.
Retry with Sonnet 4
Thanks for engaging again.
Decisive: I think this is the best issue to resolve first and I’m hopeful we’ll be able to succeed here.
The ordinary meaning of “decisive” is “settling an issue; producing a definite result”. I don’t see where it says infallibly, permanently, without the possibility of later revision, or anything like that. We can reach a definite result (a conclusion) based on our currently available evidence and ideas.
People often talk about strong and weak arguments. All weak or moderate arguments, and many strong arguments, are indecisive. When shopping for a house, you might note nice kitchen countertops (indecisive, weak argument), a pool (indecisive, strong argument), painted a pretty color (indecisive, weak argument), large yard (indecisive, moderate argument), and many more things. Or you might figure out your goal specifically enough to enable a decisive argument like “I want a commute under 15 minutes and 4+ bedrooms; this house has 3 bedrooms so I won’t buy it”. Both styles of argument are fallible. But they do have a clear, significant difference. I think “decisive” is a good fit for this difference: 3 bedrooms being too few settles the issue and produces a definite result, whereas the large yard didn’t. Logically, on the assumptions or premises that the house has 3 bedrooms and the goal is 4+, we can reach a conclusion. But if we know it has a large yard and our goal is a good house, we cannot reach a conclusion: that’s compatible with picking or not picking this house.
Nothing about this is infallible. I could have misunderstood logic, or counting, or my goal, or what a house is, or all sorts of other things. While any of my conclusions are open to potential revision, it’s also realistic that they aren’t revised anytime soon, so despite fallibilism there is a significant difference between issues where I reached a conclusion and issues where I didn’t.
Also, are you familiar with Elimination by Aspects (EBA) or Satisficing? They have similarities/overlap with CF which could help clarify this part.
If you’re familiar with MCDM/MCDA literature, that could help too. There’s a concept of compensatory and non-compensatory approaches. Compensatory approaches mean that a weak score on some factors can be compensated for by a strong score on other factors. Compensatory approaches use factors indecisively, while non-compensatory approaches use factors decisively. In EBA, if a theory fails at one of the criteria then it’s eliminated with no way to un-eliminate it within the current decision making process (you have to go outside the process and invoke fallibility, new information, etc., to revise the conclusion).
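The EBA process described above can be sketched briefly. This is a hedged illustration using hypothetical house data and the criteria from the house-shopping example earlier; it is not from the MCDM literature itself.

```python
# Elimination by Aspects, as described: apply criteria in order,
# eliminating any option that fails one. Within the process there is
# no way to un-eliminate an option, i.e. it's non-compensatory.

def eliminate_by_aspects(options, aspects):
    survivors = list(options)
    for passes in aspects:
        survivors = [o for o in survivors if passes(o)]
    return survivors

houses = [
    {"name": "A", "bedrooms": 3, "commute_min": 10},
    {"name": "B", "bedrooms": 4, "commute_min": 12},
    {"name": "C", "bedrooms": 5, "commute_min": 40},
]
aspects = [
    lambda h: h["bedrooms"] >= 4,      # decisive: 4+ bedrooms
    lambda h: h["commute_min"] <= 15,  # decisive: commute under 15 min
]
print([h["name"] for h in eliminate_by_aspects(houses, aspects)])  # ['B']
```

House A’s short commute can’t rescue it from having too few bedrooms; that’s the non-compensatory property in action.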
Hempel’s Paradox: Relevant. Part of the issue.
Asymmetry: When you see a white raven, that doesn’t provide certainty. You could have misidentified the bird species. But on the premise that you saw a white raven, then logic enables you to conclude that “all ravens are black” is false. Asymmetrically, on the premise that you really did see a black raven, or a million of them, you cannot conclude that “all ravens are black” is true. With some arguments, if you assume your premises and background knowledge are true, then logic dictates a conclusion, while with other arguments even if your premises and background knowledge are correct that still wouldn’t be enough to reach the conclusion. Some arguments are decisive (settle issues, produce definite results) when assuming their premises and your background knowledge, while others still aren’t. This difference is compatible with fallibility (your premises and background knowledge could be doubted, revised, etc.).
Simplest pattern:
The simplest pattern is “what happened before will happen again”. Simple organisms can implement that.
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other. Do you disagree? If you agree, then this simple pattern idea doesn’t guide which patterns to induce/use, right? So I don’t see how this claim helps. Examples: https://xkcd.com/1122/
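The point that many contradictory patterns fit the same past data perfectly can be shown with a tiny worked example. The two functions below are my illustrative choices, not from the original discussion.

```python
# Two of the infinitely many patterns that fit the same past data
# perfectly yet contradict each other about the future.

def f(n):
    return n                                # "the sequence is 1, 2, 3, ..."

def g(n):
    return n + n*(n - 1)*(n - 2)*(n - 3)    # agrees with f on n = 1, 2, 3

past = [1, 2, 3]
assert all(f(n) == g(n) for n in past)      # both fit the past data exactly
print(f(4), g(4))  # 4 28 -- they diverge on the very next data point
```

Adding more past data points doesn’t help: for any finite data set, the same construction yields infinitely many perfect fits that disagree about the future.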
Rule induction: Do any of these claim to offer a general purpose thinking method (including capable of doing philosophy debates, like we are now) which solves the which pattern(s) problem?
Cannot work for induction: “patterns are likely to continue in the future” approaches cannot possibly work in the context of infinitely many patterns that don’t continue and no viable solution for choosing between patterns.
Cannot work for weighted factors: Dimension conversion to generic goodness only works approximately and only in special cases. Other dimension conversions are also special cases, though some aren’t approximate (like E=mc^2). Relying on dimension conversion cannot possibly work for a general purpose thinking system because it’s not generally available. Also, the concept of factor weights relies on the importance of the factor being approximately the same for different values of the factor, which is often false (both due to failure breakpoints and due to diminishing marginal utility).
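The failure-breakpoint objection to weighted factors can be shown numerically. The weights and scores below are hypothetical, chosen only to illustrate the compensation problem.

```python
# Illustration of the breakpoint problem with weighted sums: a fatal
# failure on one factor gets compensated by strong scores elsewhere,
# so the weighted sum ranks the broken option first.

def weighted_sum(option, weights):
    return sum(weights[k] * option[k] for k in weights)

weights = {"speed": 0.5, "price": 0.3, "safety": 0.2}
options = {
    "ok":     {"speed": 6,  "price": 7,  "safety": 8},
    "broken": {"speed": 10, "price": 10, "safety": 0},  # fails a breakpoint
}
scores = {name: weighted_sum(o, weights) for name, o in options.items()}
# 'broken' scores 8.0 vs 'ok' at 6.7 -- the safety failure is outweighed,
# even though a safety score of 0 should be decisive against the option.
print(scores)
```

A non-compensatory method would eliminate the “broken” option outright at the safety breakpoint instead of letting its other scores compensate.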
Certainty: I’ve been trying to discuss fallibilist versions of CF, weighted factors and induction. Critiquing infallibilists wasn’t my focus. One of my last discussions with David Deutsch was actually about this, back in ~2013. From memory, he basically claimed that all justificationists (advocates of any kind of positive/supporting arguments) are infallibilists, which I denied. I brought up LessWrong people in general as an example, since they tend to be non-Popperian fallibilists. He claimed that they’re only fallibilists by contradicting themselves, which doesn’t really count or help. I was unable to find out from him what the alleged contradiction is between 1) fallibilism 2) positive/supporting/justifying arguments.
Duhem-Quine:
I wasn’t accusing Popper of naive Popperism.
ok great. I don’t know who you were accusing, but generally speaking there are plenty of Popperians who I’m unimpressed by, so we might agree, idk.
I don’t think you would reply like this if I wrote a post about how Bayesian arguments are better than frequentist arguments.
Thanks for engaging.
So, to know that a criticism is decisive, you have to know that no one could possibly come up with a counter criticism.
I think you didn’t take into account the definition that I used: “A decisive argument (or group of arguments) contradicts the negation of its conclusion, so both can’t be true.” Excluding the possibility of counter criticism is unnecessary for this definition to be met. The point is that if A and B could both be true – if they’re compatible – then it’s problematic to view B as a criticism of A.
Neither form of perfection is available.
The goal is basically logical relevance, not perfection.
You are right that induction is dumb, but it still sometimes works … especially if taken probabilistically.
For induction to work, it’d have to define steps a person can follow to induce a theory. It’d have to specify what constitutes inducing a theory. The main issue with induction isn’t the quality of the results, but actually defining a specific method that produces any results. Over the years, I’ve never been able to get an answer to this along with a worked example and answers to basic questions like which of the infinitely many patterns fitting the data should be induced and which shouldn’t and why those.
Weighting is needed to see which is false.
When two things contradict and you’re deciding what side to take, weighting them and choosing the higher weighted side is one approach. But it’s certainly not the only approach. Since you’re just choosing between two things, quantitative evaluation seems less relevant or appealing than in many other scenarios.
I could go into more detail here and it’s an interesting topic but I think I’ve written enough for an initial reply so I’ll leave it at saying I don’t see what aspect of contradiction-resolution makes quantitative approaches mandatory. My best guess is you think they’re always mandatory for everything, which might be better approached from another angle, not via this sub-problem.
Weighting isn’t adding apples and oranges, it’s adding value_of(n apples) and value_of(m oranges). Everything gets converted to the same type first.
My link discusses dimension conversion (like from apples to value) being problematic. That’s covered.
All our arguments are fallible
Then none are decisive!
Do you think fallibilism prohibits reaching conclusions? Decisive basically means conclusive, aka adequate to tentatively, fallibly reach a conclusion, as against arguments that don’t provide that much (where accepting the truth of the argument, as a premise, would still be inadequate to reach a conclusion).
Well … it’s not as simple as naive CR makes out. A single observation can be erroneous (e.g. Martian canals, cold fusion).
Popper knew that and wrote about it.
Indecisive arguments don’t have to be logically flawed … they can be reframed as valid probabilistic arguments.
Do you have an example? If it’s actually valid, I might tell you it’s decisive. As above, decisive is an easier standard than you interpreted it as. I’m not sure what sort of probabilistic argument you have in mind though.
Are you trying to say we should use worse forms of argument on purpose because of epistemic learned helplessness? I don’t see how that would help and you haven’t given any analysis about that. Epistemic learned helplessness is a separate issue from what I was talking about: when using arguments, which types are impersonally best, just looking at the subject matter and arguments themselves? I wasn’t talking about human behavior or psychology.
Isn’t your point about all arguments, not just decisive arguments? What does it have to do with my discussion of which types of arguments are logically and epistemically better than other types of arguments?
What do you mean?
Here’s my best guess, but this is low confidence. I presented a non-mainstream view of epistemology. You are not an expert on epistemology. So you will defer to people you see as experts on this topic without engaging with my arguments (like your link talks about using history as an example). If that’s what you mean, that’s fine, but I think this site is a reasonable place to find people to engage with about epistemology.
Thank you for answering my question.
I think it’s very clearly wrong according to standard English grammar rules, but I also think that Eliezer knows that
How did you reach that conclusion? The large number of comma errors in the essay (along with semi-colon errors and others) suggest to me that he doesn’t know. I don’t think they’re all deliberate stylistic choices. Many of the broken rules are widely followed, uncontroversial, and infrequently broken on purpose.
judging by the number of upvotes on Eliezer’s post (and all the rest of his posts, for that matter), it seems like most people on LessWrong don’t find this writing style difficult or annoying
Yes, on balance, people at LessWrong like his posts. I wouldn’t have finished reading RAZ, HPMOR and IE if his writing didn’t have virtues. That doesn’t mean there isn’t room for improvement. My suggestion was intended to primarily help with less receptive audiences, not LessWrong members.
If the reader understands what you are trying to say, you wrote “correctly”. There is no “wrong” beyond that.
(This is also the position of most linguists [...])
Most linguists are descriptivists. There’s a common misconception that descriptivists don’t believe in wrong answers. Actually, they scientifically observe real communities and describe their use of language. Each of those communities has rules (often unwritten and inexplicit) for what is correct or incorrect. Children commonly make incorrect but understandable statements and are corrected. Descriptivism says every English dialect is valid instead of privileging some communities over others. Written English is a somewhat different matter. Punctuation isn’t spoken and its rules aren’t reducible to aspects of spoken English.
Sources:
What Descriptivism Is and Isn’t (“Even the most anti-prescriptivist linguist still believes in rules”)
Why Descriptivists Are Usage Liberals (“[descriptivists] make observations about what the language is rather than state opinions about how we’d like it to be.” and “But no matter how many times we insist that “descriptivism isn’t ‘anything goes’”, people continue to believe that we’re all grammatical anarchists and linguistic relativists, declaring everything correct and saying that there’s no such thing as a grammatical error.”)
Descriptivism isn’t “anything goes” (Says “I goed to the store” is incorrect.)
Stephen Dodson of languagehat commenting on “The New Yorker vs. the descriptivist specter” by Ben Zimmer (“descriptivism in the linguistic sense has nothing to do with spelling or style (in the “do commas go inside or outside quotes?” sense); those things are arbitrary/conventional and are decided by reference to dictionaries and style guides, respectively. [...] That issue has nothing to do with grammar and spoken usage, which is what descriptivism addresses, and it’s a disservice to clear thinking and honest discussion to pretend it does.”)
The Linguistics of Punctuation (Argues that punctuation is its own system, not a derivative system corresponding to intonation or pauses.)
The Cambridge Grammar of the English Language (“we do not find social variation between standard and non-standard [punctuation] such as we have in grammar: [...] [no] repertory of variants that are used in a consistent way by one social group but not by another. Moreover, the style contrast between formal and informal is of relatively limited relevance to punctuation.” Gives punctuation rules including “a strong prohibition on punctuation separating subject and verb”.)
EDIT:
It is plausible to me that some linguists are descriptivists about spoken language but not about written language, but that seems very rare.
You missed my point, so I’ll say it more plainly: You’re objectively, factually wrong about the position of most linguists, including about spoken language. I provided sources and quotes. The specific misconception you have is a well known source of frustration to linguists which they have repeatedly complained about.
Arguments Should Be Decisive Criticisms
The utter extermination of humanity, would be bad!
I hope you’re open to unexpected blunt criticism. This comma is wrong. This post has ten comma errors including repeated subject-verb splits.
Studying the craft of writing more, including comma rules, would materially help with your efforts to persuade people about AI risk.
I am absolutely not joking or trying to be a pedantic jerk. I wrote philosophy essays for over 15 years before I studied grammar. I wish I’d studied it earlier. Besides improving my writing, it ended up helping with text analysis, debate, and organizing my thoughts.
EDIT: To habryka, or anyone else who thinks the example comma in the quotation is correct: Why do you think that? Do you have a source for a rule which permits or requires it? It splits the subject (extermination) from the finite verb (would), similar to writing “Extermination, is bad.”
So both pro- and anti-capitalist people seem to underestimate how much big companies break the law? Pro-capitalists, because they want to defend all companies (they don’t realize how much an essential part of capitalism is that bad companies fail). Anti-capitalists, because they see the problem with companies per se, or market per se, so they don’t care much about details.
Yeah. I’ve run into that not-caring-about-the-details-of-things-you-dislike thing before in other contexts. For example, Ayn Rand fans generally dislike Karl Popper (while not knowing accurate criticisms or summaries of his work). I tried posting Popper criticisms on a Rand forum and got negative reactions: people thought it was boring and pointless since they already thought they knew he was bad. I was hoping to show that I thought critically about Popper, and knew more than them about Popper, before bringing up some of Popper’s good ideas, but it didn’t work.
Also anti-capitalists tend to be pro-government. There’s a pro-company, anti-government tribe against an anti-company, pro-government tribe. Liking government gets in the way of seeing the government as enforcing laws poorly and being ~half of the problem. My view (that the companies and government are both bad) doesn’t fit with either tribe.
Yeah, I would expect that big companies win unfairly by lobbying and changing the laws in their favor, not by simply breaking the laws. But it makes sense that if you can bribe the legislative part of the government, you can probably bribe the judicial part, too. So breaking the law and not getting punished is easier than waiting for the law to be changed in your favor, and gives you more of an advantage against competitors.
I see more systemic non-enforcement of old laws than direct bribes or law changes. Fraud was illegal before the US was a country, but a common reaction to new types of fraud is to think we need a new law to make them illegal.
Also, when companies get caught doing fraud (and various other awful things) and it’s acknowledged as illegal, they often pay fines that are far too small to disincentivize bad behavior. I think most elite businessmen and politicians are part of the same social hierarchy that tends to protect their own without consciously realizing they’re doing something wrong.
I am not familiar with the American justice system, so I can’t comment on it. Here in Slovakia, the justice system is utterly corrupt.
I’m American. I think most court cases are biased not corrupt, but we do have corruption too. I think our politicians take more bribes than our judges do. What’s tricky is that systemic bias overlaps with systemic corruption. For example, for-profit prisons lobby politicians and make friends in high places. They seek a greater supply of profitable inmates, then as a downstream consequence the average judge is more biased and worse laws are passed. Then more black and brown people are put in jail. The cause and effect is often indirect without a bribe or kickback for the judge. Direct corruption happens sometimes, and it’s hard to know how often, but at least it’s a scandal once it gets into newspapers, e.g. https://en.wikipedia.org/wiki/Kids_for_cash_scandal
I think the most elegant solution, here, is to say that fraud represents a form of violence, in the same way that taking something important that belongs to someone else and then sprinting off with it represents a form of violence.
FYI, this has already been said by relevant thought leaders. Rand calls fraud “indirect use of [physical] force” and a “[violent] crime”. Mises calls fraud a form of “aggression” and views it as one of the main things to protect the free market against. Fraud is often explicitly prohibited by the NAP.
The Virtue of Selfishness, ch. 14, The Nature of Government, by Ayn Rand:
A unilateral breach of contract involves an indirect use of physical force: it consists, in essence, of one man receiving the material values, goods or services of another, then refusing to pay for them and thus keeping them by force (by mere physical possession), not by right—i.e., keeping them without the consent of their owner. Fraud involves a similarly indirect use of force: it consists of obtaining material values without their owner’s consent, under false pretenses or false promises. Extortion is another variant of an indirect use of force: it consists of obtaining material values, not in exchange for values, but by the threat of force, violence or injury.
Return of the Primitive: The Anti-Industrial Revolution in “Political” Crimes, by Ayn Rand:
A crime is a violation of the right(s) of other men by force (or fraud). It is only the initiation of physical force against others—i.e., the recourse to violence—that can be classified as a crime in a free society (as distinguished from a civil wrong).
Human Action: A Treatise on Economics by Ludwig von Mises:
Beyond the sphere of private property and the market lies the sphere of compulsion and coercion; here are the dams which organized society has built for the protection of private property and the market against violence, malice, and fraud.
Socialism: An Economic and Sociological Analysis by Ludwig von Mises:
Men must choose between the market economy and socialism. The state can preserve the market economy in protecting life, health and private property against violent or fraudulent aggression; [...]
I’ve talked with many people who do consider fraud a type of aggression, indirect physical violence or initiation of force, but who still generally don’t apply that to companies like Amazon or Wells Fargo.
I’m not convinced that manipulation should be handled by law (but I am concerned about it – I have several articles about creative adversaries who don’t initiate force particularly when they have big budgets behind the manipulation). But I don’t think that’s the right debate to start with anyway.
I think there is a ton of stuff which isn’t fuzzy, isn’t near a gray area, in which the company clearly violates both the letter and spirit of the law or contract. So we could start policing that and see how it goes. I suggest focusing on the easier cases first. Also in general ambiguous contracts are (rightly) adjudicated in favor of the person who didn’t write them or have any lawyers or power, so we could also tackle those without getting into the trickier cases. (Unfortunately, even in these clearer cases there is still a lot of resistance to policing companies. But I still think they are easier cases to address than mere manipulation.)
I could give examples if people dispute that there’s a lot of blatant fraud, law breaking and contract breaking in the world.
This is also my answer to Eli Tyre who said that fine print can address legitimate edge cases in a sibling comment at https://www.lesswrong.com/posts/beDDK8MNxkkB7yfQY/i-changed-my-mind-about-error-correcting-debate-misogyny-and?commentId=MqKthhrMBHhXopwht
This asymmetry makes me think that many libertarians are probably quite okay with fraud and manipulation;
Yeah. I now think most people with similar views wouldn’t change their mind when presented with the same sort of evidence that changed my mind.
This also applies to pro-capitalist Republicans, who are more numerous than libertarians.
And I’ve noticed something sort of similar applies to a ton of anti-capitalist left wing people: a lot of them think that big companies use lawyers to find holes in the law to get away with being evil while not actually breaking the law. They think we need to pass new laws to prevent the bad behavior of companies instead of believing that companies are routinely violating existing laws. For example, I was debating a vegan from Effective Altruism who hates factory farms but said one of the reasons it’s hard to fight them is they’re careful to follow the law. He changed his mind after I sent him a report from a lefty pro-animal charity investigating and documenting tons of law violations at factory farms. But his default belief was that the big companies that he hates are law abiding. When a lot of the people who are biased against the companies believe the companies largely follow the law, it makes more sense that it’s hard to get people who are biased in favor of the companies to see them as frequently breaking laws.
When I said that, I was using standard definitions that excluded C&R.
“better than chance prediction” with the predictions done by what method? What is the math algorithm or flowchart? You’re still not providing specifics.
This is jumping ahead. Humans do something. Whether or not it’s induction depends on what induction is, which is a current conversation topic.
I’m not going to debate ChatGPT, and this is unhelpful when I’ve already read many versions of induction and don’t need an introductory summary. Is there no literature you can cite that you think writes down correct details of induction? The issue isn’t my familiarity with induction, it’s you picking a specific claim. Even if you’re unsure and think one of many inductivist positions may be right, you could still pick a single one for us to discuss in more detail. I can’t pick that for you but it needs to be picked for me to give more specific criticism.
What are your intentions with this discussion? I’d be open to trying to actually work through these issues and reach a conclusion. I’d be open to mutually agreeing to put in some effort. Right now, every time I reply, I don’t know if you’re ever going to reply again. I don’t think we’re going to resolve these issues quickly but I think the topics are important and I’m interested in trying seriously.