Music video maker and self-professed “Fashion Victim” who is hoping to apply Rationality to problems and decisions in my life and career, probably by reevaluating, and likely rebuilding, the set of beliefs that underpins them.
CstineSublime
The main issue is light bleed and internal reflection, which would severely compromise the image quality.
I haven’t noticed any degradation in Errol Morris’s cinematic closeups of talking heads like this one, which suggests that this can be done without light bleed and internal reflection.
to involve the use of a dichroic beam splitter, you sacrifice the ability to detect red light,
Is that so? Documentary filmmaker Errol Morris uses a similar system in his documentaries, and since he uses it for talking-head closeups, which, being mostly human skin, inherently contain a lot of color information in the red channel, I would expect any such problems to be obvious. I am not aware of any.
If you know a ton about a topic but can’t explain it clearly to a novice, you have a lot of knowledge of the details but not something we might call understanding, or knowledge of how it all fits together and why someone might/should care about any of it.
How do you know whether it’s simply unrealistic to get a novice up to speed on the topic, or whether you don’t actually understand it? Are there tell-tale signs?
What is understanding, and in what obvious or immediately apparent ways does a mind that understands a topic differ from one that has merely retained a large body of knowledge without “understanding”?
Ah, now I know how to phrase my question, it’s really two questions:
1. What distinguishes understanding from knowledge (or even passion about a topic)?
2. How can I write for the express purpose of understanding better? Presumably, not all manners of writing and journalling are equally conducive to promoting understanding. As such it’s not enough merely to write, or to avoid outsourcing to an LLM; there must be a particular method or way of thinking and composing text which will improve the results.
On the first point: there are plenty of things I can geek out about and wax lyrical on, but it comes out as a mess, impossible to compose into a linear structure suitable for a virgin reader. Does this mean I don’t understand?
On the second point: I haven’t seen or enjoyed the benefits that others get from journalling or other forms of writing as aids to understanding. I gain a lot more from dialogue (see how I finally figured out what my question was above), and from FAFO: just doing the thing. I presume this means I’m doing writing wrong.
And if you go straight to an LLM to “clarify this” you accidentally tend to throw out that hypothesis.
I’m not sure how to ask this question, but can writing cultivate understanding, even in the absence of new data about the theme or topic? And when I, or anyone, go straight to an LLM to clarify an undercooked idea, or theory, or network of thoughts, are we not only outsourcing the work of expressing it verbally, but also missing out on an opportunity to think and understand? As per the cliché “writing is thinking”.
You have no idea how many times I’ve tried to redraft this question, all while resisting the urge to get an LLM to rephrase it for public consumption.
There’s nothing vague about the sentence.
I strongly disagree. “describing the fundamental concepts of reality” is unhelpfully vague, what are these fundamental concepts? I don’t know and can’t guess what it is from that sentence, which is ironic considering it is an Ontological framework.
human writing is evidence of human thinking. If you try writing something you don’t understand well, it becomes immediately apparent; you end up writing a mess, and it stays a mess until you sort out the underlying idea.
Can you elaborate on this? It feels like quite the opposite to me: the more I’ve thought about something, the messier it comes out, and the harder it is to unknot the spider-web of thoughts into a linear rhetorical structure readily comprehensible to a virgin reader. Particularly for topics I have a tendency to ‘geek out’ on. Does this mean that I don’t truly understand them, or that they lack a unifying underlying idea? Am I perhaps confusing passion and knowledge for understanding?
Or is it only evidence of thinking about the writing—the words on the page/screen the reader is looking at right now? And one can have a personal understanding of something which is clear in their own head (or perhaps even readily conveyed to others with similar domain knowledge—like that XKCD comic), but not readily translatable to the page?
I have never heard of this before, let alone understand it; can you recommend any good primers? All the resources I can find speak in annoyingly vague and abstract terms, like “a top-level ontology that provides a common framework for describing the fundamental concepts of reality.” or “realist approach… based on science, independent of our linguistic conceptual, theoretical, cultural representations”.
Not so much “misread” as “not familiar with”.
What is an example of “perfect” glamorization in everyday conversation, and could you please contrast it with an imperfect glamorization?
How does glamorization differ from exaggeration?
i.e. “My son is a really good guitar player”, versus the exaggeration “my son is one of the best guitarists I’ve ever heard”. Is the exaggeration also glamorization? What would be an exaggeration of the positive qualities of something that isn’t glamorization?
What exactly is the evidence that the Secret uses to claim that thoughts are “atomic”?
I can’t reconcile that with the common writing advice that a sentence should contain only a single thought:
“A sentence should contain a complete thought.”[1]
“One thought per sentence. Readers only process one thought at a time.”[2]
“A sentence is a complete thought”[3]
“The point of a sentence is to communicate a thought—that’s basically what a sentence is, a complete thought.”[4]
Some even suggest that only one thought should be expressed in an entire paragraph.
Even looking at a simple sentence like “The Cat is Sleeping”, I’m not sure how this could be encoded in a single atom in the mind, because it requires knowledge of what a cat is, what sleeping is, and how to perform the categories denoted by “the” and the copula. Most thoughts are more complex.
What, then, exactly constitutes a thought? Not in the materialist sense, but in the phenomenological sense. At what point would a sentence contain two thoughts rather than one?
I was not aware of lasers as a weapon
U.S. intelligence reported on the danger of Serbian- and French-manufactured laser devices in the former Yugoslavia. Reports from Japan indicated that the cult, Aum Supreme Truth, allegedly planned to attack the Metropolitan Police Department’s main building in Kasumigaseki, Tokyo, with a vehicle equipped with some type of laser weapon before the March 20, 1995 sarin nerve gas subway attack. During the Gulf War, British ground forces were issued protective goggles because they were concerned about Russian-made lasers believed to be in service with the Iraqis. German pilots flying over the Iraqi no-fly zone were also issued laser protective goggles. The U.S. Armed Forces Medical Intelligence Center has reported, “It is highly probable that laser eye injuries occurred in the Iran/Iraq war, based on numerous reports of such injuries and the known purchases of lasers for the implied purpose of weaponization.”
Source: https://www.hrw.org/reports/1995/General1.htm
I wonder why that ban has held?
The LEO missiles one is feasible I believe, and I imagine would be hard to detect before being used (so maybe in fact some countries do have the tech ready for deployment in extreme scenarios).
Feasible as in cheap and effective, or feasible as in merely possible? It says it in the Wikipedia article: “Its nuclear payload was drastically reduced relative to that of an ICBM due to the high level of energy needed to get the weapon into orbit”. I suspect it has less to do with a ban, and more to do with the fact that nuclear-armed nations have more viable alternatives available.
In crime shows and books they often talk about Means, Motive, and Opportunity… I suspect at least one is missing from each example on your list.
Military Moon Bases. The opportunity requires a well-established space program with regular, or at least imminent, Lunar visits. The means is tremendous amounts of resources, which diminishes the motive: the higher the opportunity cost, the higher the returns need to be. What is cheaper to do on the Moon than on Earth, to the point where it becomes a profitable venture?
How many of these bans have held after the technology or means to do them have become extremely viable or profitable?
I imagine it would be very easy to have a successful ban on destroying the Pyramids of Giza, because even demolishing one of the smaller Pyramids is a difficult and thankless task that hasn’t been attempted in over 800 years. If I may be terribly facetious: it would be incredibly easy to ban a group of typical 15-year-old boys from using a rotary phone, if they can’t find one; stopping the same group of boys from using scatological humor, likely impossible.
I must admit a poverty of imagination; I can’t see how it could be automated. It would be amazing if it could be.
However, the circumstances of each problem or LLM request are always so unique that, outside of certain vague guardrails which apply to all problem-solving and advice-giving (in my experience these take the form of the questions: What have you tried already? Why did you try it that way / what did you expect to happen? What happened instead?), I see the ritual as attempting to explain why this situation is really unique and different, which seems to me to be the antithesis of automation. However, if the situation isn’t unique, then maybe that part can be automated: realizing “oh, this is analogous or really similar to this other thing I did”.
Let’s take two examples of situations I’m likely to ask an LLM for help with – “how do I hear the voices in this video stream better?”, and “what is the word to describe the way professions or taxonomies are divided in Platonic Dialogues? I keep getting the name of the two dots on a vowel[1]”
In the sound example, to avoid boilerplate answers like “check if your audio drivers are working” or “turn up the volume”, I need to think about what I’m actually expecting here, and what kind of help I actually want, and realize that what I want it to do is tell me how to route the video stream through OBS so that I can use a compressor in the chain to boost it… and if I know how to do that, maybe I don’t need the LLM to even reply to me: I’ve written a prompt and answered it myself. This is what I mean when I say that just the ritual of writing an LLM prompt sometimes clarifies things.
However, with the Platonic word example, any harnesses about OBS, signal chains, or software are not going to be relevant, hence can’t be automated.
When you ask it to be brief, do you actually instruct it to answer in a “concise single sentence”? I find that generally works. Even if the answer is one you expect to be longer than one sentence, it tends to cut down the waffling-on.
Note, I append this to the prompt itself, not the system prompt. So something like “[my question] please answer in a single concise sentence.”
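As a minimal sketch of what I mean (the function name and exact wording are my own, purely illustrative), the instruction gets appended to the user prompt itself rather than placed in a system prompt:

```python
def concise(question: str) -> str:
    """Append the brevity instruction to the user prompt itself,
    not to the system prompt, before sending it to the model."""
    return f"{question} Please answer in a single concise sentence."

# Hypothetical usage: this wrapped string is what gets sent as the prompt.
prompt = concise("What distinguishes understanding from knowledge?")
```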
This is why I find even going through the ritual of writing an LLM prompt can clarify my own processes and goal—as it is forcing me to explicate a whole complex of hidden assumptions or even expose points where I realize I simply don’t have the knowledge or information yet.
Yes, because that implies they will not be silent about it privately (which means it’s not “just silence”), since there is a circumstance in the future in which they will talk, just behind closed doors.
But I’m not sure if not-in-public is explicit enough to be considered a “strategy”.
Does it have to be explicit? I’m thinking about the friend who is constantly griping about something, and their friends have a vague notion that there is some inevitable threshold where they will “have to talk about this if it continues” but it’s not explicit or specific what that conversation will entail.
“I wish Alan would stop going on about his ex-girlfriend”
“They only broke up a week ago, just let him grieve, but if it gets any worse we’ll have to say something”
“What will we say?”
“I don’t know”
“I don’t know either, but if he doesn’t stop, I won’t be able to bite my tongue”
In the second sentence of my original post I wrote:
I’m principally interested in recording good ideas, tactics, or facts that help me do and finish tasks well.
I do not understand how “interesting or amusing factoids” helps me tactically, or do and finish tasks well. Therefore I think it is entirely unconnected from the point of my notetaking, and my original post. Nor do I think an Anki system solves the core problem of how to convert notes into actions, or better, more efficient behaviors or completion of tasks.
It feels to me that you wouldn’t write something down if you thought it was rubbish. There was something about it that appealed to you, either because it seemed important or you had some other affection for the idea, but there is a mismatch between the sense of importance you felt writing it and later reading it.
Something can be appealing but still utterly useless, and therefore rubbish. Most of my notes are therefore rubbish because they do not help me operate tactically or help me do and finish tasks well.
For example, I may write things that appear to have some relevance to online content creation, perhaps with the vague idea that “this will help me promote my videography business”, and then never figure out how to usefully integrate them into promotional content or an advertising strategy; therefore the note was rubbish, useless. Maybe I’m missing something here, but how does Anki reviewing or better recall help with integration? If the note is useless and rubbish, it doesn’t magically become useful just because I can remember it as an ‘amusing factoid’ without application or utility.
Okay, I see the confusion—no, you wouldn’t: reverse what you’re seeing. What you’re seeing in the trailer is the camera’s point of view—those closeups of Donald Rumsfeld talking. Imagine for a second that’s what your robot was seeing through its camera-eyes; Mr. Rumsfeld, for his part, was seeing a projection of Errol Morris’ face OVER the camera lens. This technique is called the Interrotron. What I’m proposing is that instead of projecting an interviewer’s face on a beam-splitter in front of the lens, you project your glowing anger lights. A similar technique is used on almost all news broadcasts, with text instead of video. As you can see from the trailer, there’s no ghosting or second face over Donald Rumsfeld’s, which would mean your red light wouldn’t bleed into the robot’s vision but would be visible to anyone looking at the robot.