Music video maker and self-professed “Fashion Victim” who is hoping to apply Rationality to problems and decisions in my life and career, probably by reevaluating, and likely building anew, the set of beliefs that underpins them.
CstineSublime
There’s nothing vague about the sentence.
I strongly disagree. “Describing the fundamental concepts of reality” is unhelpfully vague: what are these fundamental concepts? I don’t know, and can’t guess what they are from that sentence, which is ironic considering it is an Ontological framework.
human writing is evidence of human thinking. If you try writing something you don’t understand well, it becomes immediately apparent; you end up writing a mess, and it stays a mess until you sort out the underlying idea.
Can you elaborate on this? It feels like quite the opposite to me—the more I’ve thought about something, the messier it comes out, and the harder it is to unknot the spider-web of thoughts into a linear rhetorical structure which is readily comprehensible to a virgin reader, particularly topics I have a tendency to ‘geek out’ on. Does this mean that I don’t truly understand them, or that they lack a unifying underlying idea? Am I perhaps confusing passion and knowledge for understanding?
Or is it only evidence of thinking about the writing—the words on the page/screen the reader is looking at right now? And can one have a personal understanding of something which is clear in one’s own head (or perhaps even readily conveyed to others with similar domain knowledge—like that XKCD comic), but which is not readily translatable to the page?
I have never heard of this before, let alone understood it; can you recommend any good primers? All the resources I can find speak in an annoyingly vague and abstract sense, like “a top-level ontology that provides a common framework for describing the fundamental concepts of reality” or “realist approach… based on science, independent of our linguistic conceptual, theoretical, cultural representations”.
Not so much “misread” as “not familiar with”.
What is an example of “perfect” glamorization in everyday conversation, and could you please contrast it with an imperfect glamorization?
How does glamorization differ from exaggeration?
e.g. “My son is a really good guitar player”, versus the exaggeration “my son is one of the best guitarists I’ve ever heard”. Is the exaggeration also glamorization? What would be an exaggeration of the positive qualities of something that isn’t glamorization?
What exactly is the evidence that the Secret uses to claim that thoughts are “atomic”?
I can’t reconcile that with the common writing advice that a sentence should contain only a single thought.
“A sentence should contain a complete thought.”[1]
“One thought per sentence. Readers only process one thought at a time.”[2]
“A sentence is a complete thought”[3]
“The point of a sentence is to communicate a thought—that’s basically what a sentence is, a complete thought.”[4]
Some even suggest that only one thought should be expressed in an entire paragraph.
Even looking at a simple sentence like “The Cat is Sleeping”, I’m not sure how this could be encoded in a single atom in the mind—because it requires knowledge of what a cat is, what sleeping is, and how to apply the Categories denoted by “the” and the copula. Most thoughts are more complex.
What, then, exactly constitutes a thought? Not in the materialist sense, but in the phenomenological sense. At what point would a sentence contain two thoughts rather than one?
I was not aware of lasers as a weapon
U.S. intelligence reported on the danger of Serbian- and French-manufactured laser devices in the former Yugoslavia. Reports from Japan indicated that the cult, Aum Supreme Truth, allegedly planned to attack the Metropolitan Police Department’s main building in Kasumigaseki, Tokyo, with a vehicle equipped with some type of laser weapon before the March 20, 1995 sarin nerve gas subway attack. During the Gulf War, British ground forces were issued protective goggles because they were concerned about Russian-made lasers believed to be in service with the Iraqis. German pilots flying over the Iraqi no-fly zone were also issued laser protective goggles. The U.S. Armed Forces Medical Intelligence Center has reported, “It is highly probable that laser eye injuries occurred in the Iran/Iraq war, based on numerous reports of such injuries and the known purchases of lasers for the implied purpose of weaponization.”
Source: https://www.hrw.org/reports/1995/General1.htm
I wonder why that ban has held?
The LEO missiles one is feasible I believe, and I imagine would be hard to detect before being used (so maybe in fact some countries do have the tech ready for deployment in extreme scenarios).
Feasible as in cheap and effective, or feasible as in merely possible? It says it in the Wikipedia article—“Its nuclear payload was drastically reduced relative to that of an ICBM due to the high level of energy needed to get the weapon into orbit.” I suspect it has less to do with a ban, and more to do with there being more viable alternatives available to nuclear-armed nations.
In crime shows and books they often talk about Means, Motive, and Opportunity… I suspect at least one is missing from each example on your list.
Military Moon Bases: the opportunity requires a well-established space program with regular, or at least imminent, lunar visits. The means requires tremendous amounts of resources, which diminishes the motive—since the higher the opportunity cost, the higher the returns need to be: what is cheaper to do on the Moon than on Earth, to the point where it becomes a profitable venture?
How many of these bans have held after the technology or means to do them have become extremely viable or profitable?
I imagine it would be very easy to have a successful ban on destroying the Pyramids of Giza, because even demolishing one of the smaller pyramids is a difficult and thankless task, and hasn’t been attempted in over 800 years. If I may be terribly facetious: it would be incredibly easy to ban a group of typical 15-year-old boys from using a rotary phone… if they can’t find one; stopping the same group of boys from using scatological humor is likely impossible.
I must admit a poverty of imagination; I can’t see how it can be automated. That would be amazing if it could be.
However, the circumstances of each problem or LLM request are always so unique that, outside of certain vague guardrails that apply to all problem-solving/advice-giving (in my experience these take the form of the questions: What have you tried already? Why did you try it that way / what did you expect to happen? What happened instead?), I see the ritual as attempting to explain why this situation is really unique and different – which seems to me to be the antithesis of automation.
However, if the situation isn’t unique, then maybe that can be automated: realizing “oh, this is analogous or really similar to this other thing I did”.
Let’s take two examples of situations I’m likely to ask an LLM for help with – “how do I hear the voices in this video stream better?”, and “what is the word to describe the way professions or taxonomies are divided in Platonic Dialogues? I keep getting the name of the two dots on a vowel[1]”
In the sound example, to avoid boilerplate answers like “check if your audio drivers are working” or “turn up the volume”, I need to think about what I’m actually expecting here, and what kind of help I actually want – and realize that what I want is for it to tell me how to route the video stream through OBS so that I can use a compressor in the chain to boost it… and if I know how to do that, maybe I don’t need the LLM to even reply to me: I’ve written a prompt and answered it myself. This is what I mean that just the ritual of writing an LLM prompt sometimes clarifies things.
However, with the Platonic word example, any harnesses about OBS, signal chains, or software are not going to be relevant, and hence can’t be automated.
When you ask it to be brief, do you actually instruct it to answer in a “concise single sentence”? I find that generally works. Even if the answer is one you expect to be longer than one sentence, it tends to cut down the waffling-on.
Note, I append this to the prompt itself, not the system prompt. So something like “[my question] please answer in a single concise sentence.”
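To make the mechanics concrete, here is a minimal sketch of what I mean, assuming the OpenAI Python client; the model name is just a placeholder, and the only point it illustrates is where the brevity instruction gets attached (the user message itself, not the system prompt):

```python
# A sketch only: assumes the OpenAI Python client and a placeholder model name.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_briefly(question: str) -> str:
    # The brevity instruction is appended to the question itself...
    user_prompt = f"{question} Please answer in a single concise sentence."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you normally query
        messages=[
            # ...rather than being placed in the system prompt.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```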
This is why I find even going through the ritual of writing an LLM prompt can clarify my own processes and goals, as it forces me to explicate a whole complex of hidden assumptions, or even exposes points where I realize I simply don’t have the knowledge or information yet.
Yes, because that implies they will not be silent about it privately (which means it’s not “just silence”), since there is a circumstance in the future in which they will talk, just behind closed doors.
But I’m not sure if not-in-public is explicit enough to be considered a “strategy”.
Does it have to be explicit? I’m thinking about the friend who is constantly griping about something, and their friends have a vague notion that there is some inevitable threshold where they will “have to talk about this if it continues” but it’s not explicit or specific what that conversation will entail.
“I wish Alan would stop going on about his ex-girlfriend”
“They only broke up a week ago, just let him grieve, but if it gets any worse we’ll have to say something”
“What will we say?”
“I don’t know”
“I don’t know either, but if he doesn’t stop, I won’t be able to bite my tongue”
In the second sentence of my original post I wrote:
I’m principally interested in recording good ideas, tactics, or facts that help me do and finish tasks well.
I do not understand how “interesting or amusing factoids” help me tactically, or help me do and finish tasks well. Therefore I think they are entirely unconnected from the point of my notetaking, and my original post. Nor do I think an Anki system solves the core problem of how to convert notes into actions, or better, into more efficient behaviors or completion of tasks.
It feels to me that you wouldn’t write something down if you thought it was rubbish. There was something about it that appealed to you, either because it seemed important or you had some other affection for the idea, but there is a mismatch between the sense of importance you felt writing it and later reading it.
Something can be appealing but still utterly useless, and therefore rubbish. Most of my notes are therefore rubbish because they do not help me operate tactically or help me do and finish tasks well.
For example, I may write things that appear to have some relevance to online content creation, perhaps with the vague idea that “this will help me promote my videography business”, and then never figure out how to usefully integrate them into promotional content or an advertising strategy; therefore the note was rubbish, useless. Maybe I’m missing something here, but how does Anki reviewing or better recall help with integration? If the note is useless and rubbish, it doesn’t magically become useful just because I can remember it as an “amusing factoid” without application or utility.
but suppose I understood the mechanisms of some human’s mind well enough to predict that human’s actions with the same accuracy? Would it be right to suggest that humans do not make choices since the choices were determined by the mechanisms by which humans choose?
I don’t think we need to suppose… I’d guess you probably do, frequently. You have family members, friends, and/or lovers: people of whom you have intimate knowledge and whose behavior you have an extremely good track record of predicting?
If an ASI understood humans sufficiently well, would that ASI be justified in claiming that humans do not have preferences? I’m much more comfortable admitting any system that affects outcomes has preferences than denying the preferences of any sufficiently well understood system.
I don’t think it would be any more justified in claiming that humans don’t have preferences than I would be in claiming that anybody I know really well doesn’t have preferences. If you can predict which newspaper or soft drink your father buys from the store, that doesn’t mean he had no choice in the matter. If there are no other newspapers in stock, or only one brand of soft drink—then he has no choice. But, realistically, you can’t choose alternatives you’re not aware of.
A simple test of whether something is a choice or not is to ask: “if the agent believed something else or had very different desires, would the outcome be very different?” If, no matter what the agent desires or believes, the outcome would always be the same, then that’s not a choice.
If someone goes up to the fridge at a store where there’s an orange drink and a strawberry drink, and you know they love orange flavor, so they buy the orange, that’s still a choice. But imagine you knew they HATED orange, or that they loved strawberry instead—hypothetically they would then choose the strawberry. Therefore it was a choice.
Conversely, imagine a spectator high up on an embankment at a motor race. They are in a sea of people, a mere speck as seen from the track, so they have no earthly way of affecting the result of the race. There are twenty racers. It doesn’t matter who this single spectator desires or wishes to win—the result is hypothetically always the same. This is not a choice.
I am not familiar with any credible model where a ball can “desire” to go up and, contingent on that alone, it does. This is why it is best represented by the “physical” stance in Dennett’s typology.
The word “aware” implies that it is a boolean thing, like “either some system is aware or it is not”, but I think that’s wrong. I think “awareness” varies in amount and kind.
Abstractly, I agree with this, and I think there’s a spectrum of awareness in ways that do influence choices. But I’m struggling for examples right now… the best that comes to mind is when a couple are deciding where to go to dinner, and one of them says “let’s have Italian”, knowing there is an Italian restaurant. They aren’t strictly aware of the menu, which could include Ragù, Calzone, Osso Buco or dozens of other choices—but they are aware of at least one restaurant nearby, in their price range, that does “Italian”.
Likewise, preferences themselves often exist in parallel. If orange isn’t available, maybe they go for banana, or cherry. And likewise, the choices made are often prompted by complex decision-making models operating on dozens of different dimensions or factors—even something as simple as buying a shirt: is it comfortable? Do I like the pattern or the colour? Is the material breathable? What are the washing instructions? Etc., etc.
Let’s take Copernicus: I would assume the “hard problem” he solved was the modelling of planets and astronomical objects, right? A heliocentric model simplified the calculations needed, right? Also, was it seen at the time as a problem? As I understand it, the Ptolemaic model, while needlessly complicated, did do a good job of modelling astronomical objects. I’m not familiar enough with the history to know.
How can I apply this to my own problem-solving, on an everyday level?
Would love to see some examples of a “hard problem” where the representation was wrong. But maybe I’m not mathy enough to know the examples.
Not wrong, if used metaphorically, but I think that “preference”, which implies an agent that is aware of and capable of making choices, maybe muddies whatever you’re trying to express. In the case of the ball and the hill, that is not the case. Preference, in ordinary parlance, suggests choosing one option over others. Often the options are qualitative: “I prefer Chocolate Ice-Cream to Strawberry”. In Economics it’s about “optimal choice”, which—again—do the hill and the ball have the capacity to take alternatives? Is there some utility they are maximizing?
Spinoza says that if a stone which has been projected through the air, had consciousness, it would believe that it was moving of its own free will. I add this only, that the stone would be right. The impulse given it is for the stone what the motive is for me, and what in the case of the stone appears as cohesion, gravitation, rigidity, is in its inner nature the same as that which I recognise in myself as will, and what the stone also, if knowledge were given to it, would recognise as will. - Arthur Schopenhauer
If your objective is to describe the most probable or likely outcome of a system that is better modeled using Daniel Dennett’s Physical Stance than his Intentional Stance, then I’d avoid using “preference”. In the example you’ve given, there’s nothing to suggest the ball will be anywhere else, and nothing to suggest it has “options”; therefore there are no preferences to speak of.
Preference implies alternative outcomes.
Why is that? Well, your brain has native hardware that understands cause-effects models on its own. You just need reality to shove the relationship in your face hard enough, and your brain will go “ok, seems legit. let’s add it to our world-model”.
What about the opposite? Coincidences that happen with enough regularity that a superstition or inaccurate causal model forms. At a train station, I once saw a three-year-old swiping their palm on the glass of an advertisement which had a paper loop that rotated on a timer. The child thought there was a causal connection between the palm gesture, which they probably learned on a tablet like an iPad, and the movement of the paper. Because they kept swiping, and the advertisement rotated pretty quickly, at least for a while they thought they were controlling it. They weren’t. That is not understanding, but I don’t see how it is different from your door example: the sensory data of reality matches expectations or patterns.
Those who knew Miasma Theory would have been said to ‘understand’ the causes of disease. From a modern perspective, they didn’t.
What is interesting is that we can understand things we know to be false. We can understand a fantasy story. If Carl is jokingly raising a middle finger to Andy, who is standing next to Blane, Blane may mistakenly think Carl is being rude to him. But Andy may “understand” how Blane made that mistake. We can understand how Luke Skywalker blasting womp rats gave him the confidence to bring down the Death Star. There is no Luke Skywalker or Death Star. They are not “true” in the sense that they are not “real”. But the story can be understood.
I’m not sure how to ask this question—but can writing cultivate understanding, even in the absence of new data about the theme or topic? And when I, or anyone, go straight to an LLM to clarify an undercooked idea, or theory, or network of thoughts, are we not only outsourcing the work of expressing it verbally, but also missing out on an opportunity to think and understand? As per the cliché “writing is thinking”.
You have no idea how many times I’ve tried to redraft this question, all while resisting the urge to get an LLM to rephrase it for public consumption.