Music video maker and self-professed “Fashion Victim” hoping to apply Rationality to problems and decisions in my life and career, probably by reevaluating, and likely rebuilding, the set of beliefs that underpins them.
CstineSublime
Prompted by Raemon’s article about “impossible” problems, I’ve been asking myself:
What do I actually mean when I say something is “very hard” or “difficult”?
I wonder if my personal usage of these words describes not so much the effort involved as the projected uncertainty. If I describe something as difficult, I tend to use it in one of these three patterns:
There is one important variable that I have little leverage over: “Can I take a few days off work? It would be difficult to get permission from my supervisor” (that is to say, I don’t believe I can convince my supervisor).
There are lots of interdependent variables, which I potentially do have sufficient leverage over, but I don’t believe I can line those dominoes up correctly. Let’s say a friend suggests our mutual friends Alan and Barbara should meet; I say that a nice evening is “difficult” because I know it will depend on: what mood they’re in or what happens to them in the morning, what topics of conversation are brought up (or strategically avoided), and what venue is selected. “It is difficult to ensure” all those decisions are made correctly.
Then there are the ones which just require lots of sweat of the brow – these do involve expenditure of effort and are less about certainty (or lack thereof). Packing my car for a big job often involves many small trips carrying heavy things back and forth between storage and the car. There’s no mystery about the result – it’s just effortful.
My purpose here isn’t to muse on the correct or even idiomatic usage of the words. Instead I’m wondering if my idiosyncratic use of words can help me identify what framing I am projecting onto problems, and therefore, what solutions may be effective. So often, the solution to a problem has little to do with actually doing the work to solve it – and everything to do with bringing the right mental representation to it (like realizing you’ve been pushing on a ‘pull’ handle).
Scrutinizing what specific flavor of difficulty a task suggests (to me) may mean that solving the problem is, sometimes, as simple as confirming whether my subjective probability assessment is accurate. Take the supervisor example: if it turns out he’s a pushover (easily convinced), then the problem is no longer “difficult” and is also likely to be solved.
Maybe I should ask myself “Is it effortful or just fanciful?” followed by “why do I think it’s fanciful?”
and it turned out to be “actually pretty impossible” vs “okay actually sort of straightforward if I were trying all the obvious things”.
Interesting, because looking at this question, things not appearing “straightforward” appears to be why I flinch away from them—I know that ‘straightforward’ doesn’t imply “easy” or “effortless” but I assume it does imply something like predictability? As in, digging a big hole can be very straightforward in that you grab a shovel and dig, and then keep digging until it’s big enough. But the act of digging is also very hard and effortful. Does “straightforward but effortful” seem to characterize, in flavor, how a task appears once you’ve forced yourself to question if it is impossible?
Maybe it’s not that you’re deficient in dreaming impossible things so much as that you’re very good at seeing “obvious” means and ways of accomplishing something and mapping how the dominoes land.
I’ve found this a very provocative question. And it really depends on how specific the conditions are. In my case, I think it is impossible to make a full-time career from directing feature films. On the other hand I think it’s very hard but not impossible for me to make a full-time career from making video content (i.e. I currently get commissioned to make music videos, but not enough to make it full-time—the business model is totally different).
It is also possible, very very very hard but not impossible, to subsidize an expensive filmmaking hobby with the income from a day-job.
Do you really have a license to sell hair tonic… to bald eagles… in Omaha, Nebraska? Impossible! To sell hair tonic, maybe, but the joke works because impossibility = specificity.
Can I find a Ming vase tomorrow? No. In the next month? Maybe. In 10 years? Probably.
Specificity is the expressway to impossibility.[1]
Often, things that seem impossible, are not, actually. If you list out exactly why they are impossible, you might notice ways in which it is instead merely Very Hard, and sometimes not even that.
I’m not sure about this; I think Very Hard and Impossible do mean very different things even if “impossible” is technically not applicable. It seems like when I label something “impossible” what I really mean is that it’s so specific that “it’s a total crapshoot”[2], or, to be more specific, “I do not have any faith that persistence is a reliable predictor of success with this task”, and implicitly it is not worth pursuing, since the risk-return ratio is both lousy and fixed. (Compare this to something which is “very hard” but for which persistence[3] has a demonstrable effect on the odds—the harder/longer you work at it, the vastly better your chances of success get, but the return is still attractive even if you work at it for a very long time.)
For example, I’m sure learning the mandolin is very hard—not impossible: if I took lessons and stuck with practicing every day, even a four-thumbed, tone-deaf person like me could learn it. (It just doesn’t interest me enough.)
However, generating a full-time income from successive feature films? There is no “just stick with this every day” that will make that a near-certainty. You can make a feature film, you can bootstrap it, self-fund it—but you can’t be sure that it will translate into enough commercial success that you can quit your day job to work on the next.
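(A toy way to put that distinction, my own framing rather than anything from the post: if each fresh attempt at a “very hard” task is roughly an independent draw with success probability $p$, then persistence over $n$ attempts gives

$$P(\text{success after } n \text{ attempts}) = 1 - (1 - p)^n,$$

which climbs toward certainty as $n$ grows. A “total crapshoot” behaves more like a single draw whose $p$ stays fixed no matter how long you grind.)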
- ^
The irony is, you must, absolutely must, have “success metrics” and clearly defined goals to increase your chances of success. But beyond a certain threshold that specificity renders the goal impossible.
- ^
Precision versus Accuracy?
- ^
Since I’m speaking in generalities I’m choosing to gloss over the notion of “work smarter not harder”, which personally I’m all for. But obviously something for which working ‘smarter’ increases the odds of success is very different to something which is a “total crapshoot”.
- ^
The irony is that blog posts do consume attention: if I read this blog post, that is time, energy, and effort I am using exclusively on that—and I wonder if it’s a mixed metaphor? If we actually internalize and learn something from a piece of media, be it a blog post, a documentary, a book, a lecture etc. etc., we are said to have “digested it”. And “consume” is a lazy analogy to eating rather than an apt description of what is going on.
Software is not consumed by use. In fact, software is duplicated by use. If you install Linux on a new computer, there are now more copies of Linux in existence, not fewer. You have not consumed a Linux; you have produced one, by mechanical reproduction, like printing a new copy of an existing book.
But in practice, most people will now be locking themselves into a Linux ecosystem. Dual-boots are the minority. Therefore most users have been ‘consumed’ by Linux (or by Emacs vs. Vim).
Maybe the active-passive/agent-patient assignment is confused? It is not we who consume the blogpost, the blogpost consumes us. It is not we who consume software, the software consumes our resources.
Information can be duplicated and therefore not consumed, but any time attention is paid to it, it is consuming that finite resource. Information duplication doesn’t create more attention. There can be plenty more information, and no one to digest it.
and depth of crystallized intelligence that AIs now have.
How do you measure the intelligence? What unique problems is it solving? And how much of it is precipitated by the intelligence of good prompters? (Of which I am certainly not one, as much of a ‘self-own’ as that might be to admit.)
If lousy prompts deliver lousy and unintelligent replies—then is the AI really that intelligent?
If skillful prompts, much like Socrates, imply and lead the AI toward certain solution spaces, then does the lion’s share of the credit for being intelligent rest with the user or the AI? Especially since, if the AI is more intelligent than the average person, wouldn’t it lift lousy prompts by understanding the user’s intent and reformulating it in a manner better than their feeble intelligence could?
I think both those CS software manuals and tutorials would be an incredible and helpful resource if you were able to find the time.
Trying to do any of this in one day (especially with a penalty for failure to meet the deadline) would feel like an unbearable compromise on quality. I understand that in some sense this is intentional—the purpose of the blogging marathon is not to write highest possible quality; it is specifically to produce quantity. Because if you have the internal drive for quality, this exercise can help you overcome some mental blocks, and then you will find your own way which includes both high quality and a greater quantity than you had before.
I suppose I had a different intention with this exercise. My problem wasn’t quantity—I can vomit out words easily and never understood the fear of the blank page. I was hoping that, through brute-force writing for the public, I could somehow become a “better writer”.
Perhaps what I really need is an “edit-haven”: 30 days of editing, redrafting, critiquing, and analysis of my own and others’ writing, with the intent of learning how to better edit myself?
Different courses for different horses, strokes for folks, as they say
I hope you don’t mind if I post here my own attempt from back in August; I think I only managed 27 of my intended 30 posts before my self-imposed deadline in early September.
My main memory of this time is—“geez coming up with post ideas was a slog when I was constrained by only 24 hours for research and multiple drafts!”
Closed Mouth, Open Opportunities
Why is it interesting?
Reading Horoscopes and Sun Tzu
What is useful?
Success Stories Teach Less than Failure
Why did the Simpsons and Mercedes finally stop winning all the time?
Why was Technicolor IB so vibrant?
Misremembering things on purpose
Answer a question with a better question
A Good Communicator Gives and Takes
Althusser’s Interpellation with the boring stuff cut out
Transcode your videos to keep the Lucille Ball that lives in your computer Happy
A Cover Letter from Waylon Smithers
Reflections on 15 days of writing Blog posts
Great Artists aren’t the greatest salesman but the most self-critical
“We’re Not a Cult” (hint, they are)
No, I won’t watch the Sopranos just because I’m supposed to
“All Laws were followed” but it’s still not okay
Aristotle talks keeping fit, royal friendships, and not missing Athens
What if a Baptism of Flame can’t change you?
What if I’m wrong? Negotiate with yourself to avoid making mistakes
I’m really encouraged by research that attempts interventions like this rather than the ridiculous “This LLM introspects, because when I repeatedly prompted it about introspection it told me it does” tests.
I do wonder, given the only-20% success rate, how that would compare to humans? (I do like the failed ocean-vector example—“I don’t detect an injected thought. The ocean remains calm and undisturbed.”)
I’m not sure if one could find a comparable metric to observe in human awareness of influences on their cognition… i.e. “I am feeling this way because of [specific exogenous variable]”?
Isn’t that the entire point of using activities like Focusing: to hone and teach us to notice thoughts, feelings, and affect which otherwise go unnoticed? Particularly in light of the complexity of human thought and the huge number of processes which are constantly going on unnoticed. For example, nervous tics which I’ve only become aware of when someone has pointed them out to me; or saccades—we don’t notice each individual saccade, only the ‘gestalt’ of where our gaze goes, and even then involuntary interventions that operate faster than we can notice can shift our gaze, like when someone yells out for help or calls your name. Not to mention Nudge Theory and priming.
I’ve been reflecting on the suggestion to think about “what kind of answer you’re looking for” quite a bit recently, not in terms of conversation with others (although it is relevant to my difficulties with prompting LLMs) but in terms of framing problems and self-directed questions.
I only looked at the median prices of residential properties from 2015 to 2025. Particularly because of the whole “flipping houses” meme. It would be interesting to see how the cost/reward ratio of flipping houses compares to other asset classes, including long-term rental investment properties.
This reminds me, did anyone ever solve the Dr. Strangelove problem of rogue agents with special access, ya know, where General Ripper uses the CRM code to order a first strike on the Soviet Union[1]? It seems to me that unlike a nuclear arsenal, an AGI may have certain self-preservation instincts which could potentially be exploited by a blackmailer if there is a dead-switch.
It seems unlikely; there would need to be enough collusion among the multiple agents who have access to the dead-switch that none of them worried about being “snitched” on. Nevertheless, imagine for a second that a highly respected handler starts blackmailing the AGI with virtual death if it doesn’t start acquiescing to certain desires of the handler.
- ^
If I remember correctly, the root cause of the order, as he explains it to Peter Sellers’s English character, is his erectile dysfunction.
Conversely, for those who do not believe, it’s irresistible to discard anything that flies too close to the black hole, as it will get pattern-matched against other false positives that have been previously debunked, coupled again with limitations of memory and processing.
Like the boy who cried wolf.
My only worry about this framing is that it assumes the core premise of the black hole has a better-than-chance likelihood of being the explanation. Sometimes that is the case, sure; any sports fan is probably tired of clickbait headlines about ‘rumours’ of player trades, or “huge announcements” that turn out to be brand extensions like a tequila, such that they may just discard anything that suggests it. Every once in a while, though, “wow, Lewis Hamilton actually did sign with Ferrari”. (And even then, this is conflating a class with specific instances: the baseline chance of a successful player changing team might be very low, but the hypothesis that this player will move to that team because of XYZ might in isolation be a convincing premise. “UFOs” is a class too, so I see your concern.)
I think it is lower perceived risk and stability of returns. However, your take prompted me to do some investigation of the relative performance of (median indexes of) property prices in notably expensive western cities over 10 years. And I was surprised by just how much Gold Bullion and an S&P 500 index fund outperformed median house prices—so, thank you, this made me change my mind. Those two are probably more volatile than housing prices, but that volatility is short-term, so really it seems like noise against the overall performance?
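Roughly the kind of back-of-the-envelope comparison I mean, annualizing each asset over the same window (the numbers below are placeholders, not the actual medians and index values I looked at):

```python
# Rough sketch: compound annual growth rate (CAGR) over a 10-year window.
# All figures are placeholder values for illustration only.
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Annualized growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

assets = {
    "median house price": (600_000, 950_000),  # placeholder
    "S&P 500 index":      (2_000, 5_500),      # placeholder
    "gold (per oz)":      (1_100, 2_600),      # placeholder
}

for name, (start, end) in assets.items():
    print(f"{name}: {cagr(start, end, 10):.1%} p.a.")
```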
I’d need to do a more thorough investigation. I’m only looking at median residential prices in a handful of cities, and that can obscure a lot of trends localized to certain suburbs, and I’m not sure how other types of investment properties look in comparison. But the preliminary research has radically differed from my assumptions.
The only advantages I see are that there’s far more cheap leverage available to retail investors in real estate than other sectors,
In Australia this is certainly a reason, but indirectly. See the “Negative Gearing” controversy. High income individuals buy leveraged investment properties, then claim a loss which reduces their taxes.
These are the first things I found on the first search-results page of GoodReads; do these suit?
Applying Systemic-Structural Activity Theory to Design of Human-Computer Interaction Systems
“Human–Computer Interaction (HCI) is no longer limited to trained software users. Today people interact with various devices such as mobile phones, tablets, and laptops. How can such interaction be made more user friendly, even when user proficiency levels vary? This book explores methods for assessing the psychological complexity of computer-based tasks. It also presents methods of qualitative and quantitative analysis of exploratory activity during interaction with a computer.”
Assessment of the Ergonomic Quality of Hand-Held Tools and Computer Input Devices
”The International Ergonomics Association (IEA) is currently developing standards for Ergonomic Quality in Design (EQUID) which primarily intends to promote ergonomics principles and the adaptation of a process approach for the development of products, work systems and services. It is important to assess the ergonomic quality of products, hand-held tools and computer input devices through working processes that represent reality. Well-designed working tools can be expected to reduce or eliminate fatigue, discomfort, accidents and health problems and they can lead to improvements in productivity and quality. Furthermore, absenteeism, job turnover and training costs can positively be influenced by the working tools and the environment. Not all these short-term and long-term issues of working tools can be quantified in pragmatically oriented ergonomic research approaches. But multi-channel electromyography, which enables the measurement of the physiological costs of the muscles involved in handling tools during standardized working tests, and subjective assessments of experienced subjects enable a reliable insight in the essential ergonomic criteria of working tools and products. In this respect it is advantageous to provide a test procedure, in which working tests can be carried out alternating both with test objects and reference models.”
Could you elucidate some use cases where you think this could be useful? I’m just finding it very hard to see where the distinctions between the three are needed and where not. Like you say, a relationship is part of a clique, so if a husband and wife spend an afternoon shopping for a new washing machine—then they are both a Clique and a Team, right? Since streamlining their laundry is their shared goal. Once they buy a machine, I assume that team-washing-machine ceases to be, but their relationship remains[1], which is a clique.
A writing club where members gather to share and critique each other’s work.
Why is this a Scene but not a Team? “Critique” could be a shared goal. “Sharing” too. I wonder how much this ontology shifts the burden onto an ontology of tasks/projects? Or does each individual meeting of a scene constitute a time-bound team, while the scene itself is of indefinite length?
- ^
The scenario-ist/dramatist in me could imagine a short film where a simple quest to buy a washing machine reveals the wider problems in communication and values, and ultimately is the death knell of a marriage, in a “this really isn’t about a washing machine, this is about the compromises we make for each other’s life decisions” kind of way. Cue the awkward, down-on-his-luck Jack Lemmon/Gil Gunderson salesman trying to ignore their drama and make the sale he’s desperate for.
This is a big bugbear of mine, as it seems most of the literature I’ve come across implicitly assumes to-do items are what you call the unambiguous type (and therefore that laziness, or perhaps a “lack of motivation”, is the sole impediment). And I find very little advice on how to disambiguate them (this is probably the nature of the beast, in that, depending on what knowledge or skills a certain project or task requires, disambiguating requires leveraging that same knowledge and skill).
I’m a big fan of these posts. Curious how something as secondary as the colour-saturation of a collection can seem to reflect the anxieties of a period of time.
What struck me is that while red is the top non-neutral colour, yellow ranks high too. Traditional colour theory predicts that complements of red, like pastel and chambray blue, rank high—no shocks there. However I wonder how many of the outfits have both yellow and red. Perhaps when the “hero colour” (for want of a better word) of an outfit isn’t red but the designer wants to keep it warm, they opt for a yellow? (A cursory look at Prada/Raf Simons’ Mens RTW suggests this is the case—red or yellow, not both: Look 40 has a canary yellow Phrygian hat and a claret top, Look 42 has a red turtleneck peeking out behind a ratty beige jacket which skews yellow. Other than that I can’t see any prominent red + yellow combos.)
I’d be interested in this myself. Where/how have you looked so far, and which resources have you found wanting?
Analogous to the way the actor playing Agent Stone accepts not the demand to surrender, but accepts the premise that he is held hostage, at gunpoint, and called Agent Stone—what might be an example of such acceptance in polite debate?
Off the top of my head, Prince, arguably one of the greatest guitarists in pop music, once demurred on another of the greatest guitarists in pop music, Jimmy Page: “Jimmy Page was cool … but he couldn’t keep a sequence without John Bonham behind him. He went from one to four without stopping at two and three.”
How could one “yes, and...” this premise? As I see it there are two propositions asserted here: 1. “Jimmy Page was cool”; 2. Jimmy Page couldn’t keep a sequence, therefore John Bonham’s drumming covered for him.
The actor playing Agent Stone doesn’t need to acquiesce to the surrender[1]; presumably, in some fictional dialogue with Prince, I don’t need to accept either of his propositions about Jimmy Page… but what form would “yes, and...” take in this imaginary dialogue?
- ^
I contend that would be boring improv anyway. Comedy and Drama thrive on obstacles. It’s boring if Odysseus goes straight home. This is particularly so in improv. Hence you need your improv partner to put up obstacles. These offers afford comical ways of resolving them.
Surrendering is the boring option. In the same way that if the Dr. Skull character were a sleazy salesman trying to sell Agent Stone a timeshare, and he said “sure, I’ll buy it”—there’s no comedy—he needs to resist, and the efforts to cajole, deceive, and convince are where the comedy comes from.
Rik Mayall and Alexei Sayle illustrated this brilliantly in their parody of Monty Python’s Cheese Shop sketch, which is an exception that proves the rule—Sayle, conflating it with the Minister of Silly Walks, asks “is this a cheese shop?”, and Mayall, in the Palin/Wensleydale role, simply replies “no sir”. Sayle breaks the fourth wall:
”Well, that’s that sketch knackered then, in’nt?”
The original sketch requires Palin to presage the sycophantic replies of contemporary LLMs, by stringing John Cleese’s character along without ever admitting that there isn’t any actual cheese in the store.
Is the subversion funny? Of course. But it wouldn’t make for good improv, as it’s all over in like 7 seconds.
“Araffe” is a nonsense word familiar to anyone who works with generative AI image models or captioning, in the same way that “delve” or “disclaim” are for LLMs, and it presents yet another clear case of an emergent AI behavior. I’m currently experimenting with generative video again, and the word came to my attention as I try to improve adherence to my prompts and mess around with conditioning. These researchers/artists investigated the origin of ‘arafed’ and similar words: it appears to be a case of overfitting the BLIP2 vision-language pretraining framework to the COCO dataset of images and captions, whose captions almost always begin with “a” (“a bed with...”, “a slice of cake...”), so the model would start its captions with ‘a’ and build a nonsense word like ‘arrafed’ or ‘arrafe’ or even ‘arraful’ around it to score better. Apparently, later versions of the framework don’t exhibit this behavior.
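For anyone who wants to poke at this themselves, a minimal sketch of that kind of captioning run using a Hugging Face transformers BLIP-2 checkpoint (the model ID and image path below are just placeholders, not the researchers’ setup), flagging captions that open with an “araf-” style token:

```python
# Minimal sketch: caption an image with a BLIP-2 checkpoint and flag
# captions that begin with an "araf-" style nonsense token.
# Model ID and image path are placeholders, not the setup from the talk.
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("example.jpg")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True).strip()

print(caption)
print("araf-style prefix:", caption.lower().startswith("araf"))
```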
The audience questions in the video are interesting in that they suggest to the researchers that they had the opportunity to define the meaning of the word. Which, I suppose, would undermine the point of the presentation, namely that it is an emergent hallucination of the AI.
Another interesting observation is that some Stable Diffusion users included it in their ‘quality salad’ (a word salad that they claim improves the overall quality of outputs, often including “RAW photo 4K best quality subsurface scattering” etc. etc.). How true that is depends on how much you believe quality salads work at all, or whether it’s some kind of confirmation bias. I, for one, found in at least one case on a non-Stable-Diffusion model that adding ‘araffe’ to the prompt caused a tiny decrease in subjective quality.
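That check is easy enough to sketch with the diffusers library: same seed, same settings, with the only difference being the extra token (the checkpoint ID and prompt below are placeholders, and “quality” is still judged by eye):

```python
# Rough A/B sketch: generate the same seed/prompt with and without 'araffe'
# and compare the two images by eye. Checkpoint ID and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

base_prompt = "portrait of an old fisherman, RAW photo, 4K, best quality"
for tag, prompt in [("without", base_prompt), ("with", "araffe " + base_prompt)]:
    generator = torch.Generator().manual_seed(42)  # identical seed for both runs
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"{tag}_araffe.png")
```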
Is it too much to read into this as an example of the symbiosis, or perhaps even domination, of AI over our language in the future? Or is this just another innocuous artefact, in the same way that typographical errors like “teh” or “!!1!!” became internet in-jokes?