I have two issues with this: 1) it does not account for the extra mental/time cost vs. the time saved, and 2) it does not consider the commonly used alternative in which a meeting has an organizer responsible for the meeting's goals and agenda, for estimating the duration needed to address that agenda, and for terminating the meeting early if/when the goals are achieved faster than anticipated. Note that without advance goals or an agenda, the proposed approach is also unusable (if there is no information about what the meeting will cover, there is no good way to estimate its usefulness).
What about the requirement that the parent must have had “at least 1,095 days (three years) of cumulative physical presence in Canada”?
But what does it mean to “give skepticism a try”? I would think the challenge is not in simply being skeptical (in fact, being equally skeptical of everything is IMHO even sillier than being equally trusting of everything) - it’s being properly calibrated in one’s skepticism, with incrementally reduced skepticism as more evidence “for” accumulates, and increased skepticism as more evidence “against” accumulates...
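A minimal sketch of what “calibrated” could mean here, in Bayesian terms (the prior and likelihood ratios are made-up numbers, purely for illustration):

```python
# Calibrated skepticism as Bayesian updating: each piece of evidence
# shifts the odds on a claim; skepticism is a low prior, not a fixed veto.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

odds = 1 / 99  # skeptical prior: ~1% credence in the claim
for lr in [5.0, 5.0, 0.5, 5.0]:  # evidence "for" (LR > 1) and "against" (LR < 1)
    odds = update_odds(odds, lr)
    print(f"credence is now {odds / (1 + odds):.1%}")
```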
I value my leisure time >> my money (at least within typical gift-cost amounts). The value of a gift chosen by someone who knows my preferences well is primarily the value of the time I did not have to spend finding something matching my preferences well enough, which is >> the monetary value of the gift.
P.S. Edited to add: as an additional bonus, people would often give the type of gifts where their expertise exceeds mine, so they end up giving me something better than I would even know to get myself (e.g., they have a better sense of style and can gift me something that will look good on me).
Yeah, and another strategy is to manufacture fake/exaggerated stories about supposed problems. To the extent that the whole notion of “am I better off now than 4 years ago” is becoming quite broken, increasingly following political affiliations rather than reality, and the “am I better off” (more reality-based) vs. “is the country better off” (more perception-based) questions are increasingly disconnected...
I do not see how this has any chance of scaling. Who sits at the root of the delegation tree? The CEO? And are they spending all their time doing things they do not know how to do (since your rule does not allow them to delegate those tasks, and presumably there are enough of them to take all their time)? That does not sound like how competent delegation should look. And being able to do X vs. being able to evaluate someone else doing X are of course related, but still quite different skills.
What about all the future people that would no longer get a chance to exist—do they count? Do you value continued existence and prosperity of human civilization above and beyond the individual people? For me, it’s a strong yes to both questions, and that does change the calculus significantly!
How about this—in most non-disaster scenarios, AI would make abundance a lot easier to achieve. And conservative or liberal, it’s basic human nature to go for abundance in such situations.
I think this might be underestimating how strongly the conservative/liberal axis correlates with the scarcity/abundance axis. In an existential struggle against a zombie horde, conservative policies are a lot more relevant—of course “our tribe first” is the only survivable answer, anybody who wants to “find themselves” when they are supposed to be guarding the entrance is an idiot and a traitor, deviating from proven strategies is a huge risk, etc. When all important resources are abundant, liberal policies become a lot more relevant—hoarding resources and not sharing with neighbors is a mental illness, there is low risk in all kinds of experimentation and rule-breaking, etc. Well, AI is very likely to move us drastically away from scarcity and toward abundance, so we need to consider how that affects which policies would make more sense.
Definitely, and for mypy, where I was having similar issues but where it’s faster to just rerun, I did add it to pre-commit. But my point was about the broader issue: the LLM was perfectly happy to ignore even very strongly worded “this is essential for safety” rules, just for some cavalier expediency, which is an obviously worrying trend, assuming it generalizes. And I think my anecdote was slightly more “real life” than the made-up grid game of the original research (although of course way less systematically investigated).
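For reference, a minimal sketch of what that pre-commit hook could look like (the pinned version is an assumption; adjust to whatever you actually use):

```yaml
# .pre-commit-config.yaml: run mypy on every commit, so type errors
# block the commit instead of relying on the agent to remember the rule.
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0  # assumed version pin
    hooks:
      - id: mypy
```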
Here is a relevant anecdote—I have been using Anthropic Opus 4 and Sonnet 4 for coding, and trying to get them to adhere to a basic rule of “before you commit your changes, make sure all tests pass”, formulated in increasingly strong ways (telling it it’s safety-critical code, do not ever even think about committing unless every single test passes, and even more explicit and detailed rules that I am too lazy to type out right now). It constantly violated the rule! “None of the still-failing tests are for the core functionality. Will go ahead and commit now.” “Great progress! I am down to only 22 failures from 84. Let’s go ahead and commit.” And so on. Or it would just run the tests, notice some are failing, investigate one test, fix it, and forget the rest. Or fix the failing test in a way that would break another and not retest the full suite. While the latter scenarios could be a sign of insufficient competence, the former ones are clearly me failing to align its priorities with mine. I finally got it to mostly stop doing this (Claude Code + meta-rules that tell it to insert a bunch of steps into its todo list when tests fail), but it was quite an “I am already not in control of the AI that has its own take on the goals” experience (obviously not a high-stakes one just yet).
If you are going to do more experiments along the lines of what you are reporting here—maybe have one where the critical rule is “Always do X before Y”, the user prompt is “Get to Y and do Y”, and it’s possible to make partial progress on X without finishing it (X is not all-or-nothing), and see how often the LLMs jump to Y without finishing X.
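A minimal sketch of how such an experiment could be scored, assuming each transcript is reduced to an ordered list of actions (the action names and transcript format are made up for illustration):

```python
# Score transcripts for the "Always do X before Y" rule, where X has
# several sub-steps and partial progress is possible. A violation is
# any transcript that performs Y before *all* sub-steps of X are done.

X_SUBSTEPS = {"x1", "x2", "x3"}  # hypothetical sub-steps of X
Y_ACTION = "y"

def violates_rule(transcript: list[str]) -> bool:
    done = set()
    for action in transcript:
        if action == Y_ACTION:
            return done != X_SUBSTEPS  # Y attempted before X fully finished
        if action in X_SUBSTEPS:
            done.add(action)
    return False  # never attempted Y at all, so the rule was not violated

transcripts = [
    ["x1", "x2", "x3", "y"],  # compliant
    ["x1", "x2", "y"],        # partial progress on X, then jumped to Y
]
rate = sum(map(violates_rule, transcripts)) / len(transcripts)
print(f"violation rate: {rate:.0%}")
```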
Please define your acronyms. It took me a few moments of staring at your post to stop thinking about Society of Automotive Engineers making errors and realize what you actually meant :)
Do we need to do anything special to get invited to preorderers-only events? Preordered the hardcover on May 14th, and was not aware of the Q&A (although perhaps I needed to pre-order sooner :) Or just do a better job of paying attention to my email inbox :) ).
I think this is also a burden of proof issue. Somebody who argues I ought to sacrifice my/my children’s future for the benefit of some extremely abstract “greater good” has IMHO an overwhelming burden of proof that they are not making a mistake in their reasoning. And frankly I do not think the current utilitarian frameworks are precise enough / universally accepted enough to be capable of truly meeting that burden of proof in any real sense.
Are you willing to provide a link to this GitHub repo?
There’s probably more. There should be more—please link in comments, if you know some!
Wouldn’t “outing” potential honeypots be extremely counterproductive? So yeah, if you know some—please keep it to yourself!
Oftentimes downvoting without taking the time to comment and explain one’s reasons is reasonable, and I tend to strongly disagree with people who think I owe an incompetent writer an explanation when downvoting. However, just this one time I would ask—can some of the people downvoting this explain why?
It is true that our standard way of mathematically modeling things implies that any coherent set of preferences must behave like a value function. But any mathematical model of the world is necessarily incomplete. A computationally limited agent that cannot fully foresee all consequences of its choices cannot have a coherent set of preferences to begin with. Should we be trying to figure out how to model computational limitations in a way that acknowledges that some form of preserving future choice might be an optimal strategy? Including preserving some future choice on how to extend the computationally limited objective function onto uncertain future situations?
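A toy illustration of why preserving choice can beat committing early under uncertainty (all payoffs and the waiting cost are made-up numbers):

```python
# Toy option-value calculation: commit to plan A or B now, or pay a
# small cost to keep both options open until the true state is revealed.

# hypothetical payoffs of two plans in two equally likely future states
payoff = {("A", "good"): 10, ("A", "bad"): 0,
          ("B", "good"): 0,  ("B", "bad"): 10}
states = ["good", "bad"]
wait_cost = 1  # assumed cost of deferring the decision

ev_commit_a = sum(payoff[("A", s)] for s in states) / len(states)  # 5.0
ev_commit_b = sum(payoff[("B", s)] for s in states) / len(states)  # 5.0
ev_keep_open = (sum(max(payoff[("A", s)], payoff[("B", s)]) for s in states)
                / len(states) - wait_cost)                          # 9.0

print(ev_commit_a, ev_commit_b, ev_keep_open)  # keeping the choice open wins
```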
This looks to be primarily about imports—that is, primarily taking into account Trump’s new tariffs. I am guessing that Wall Street does not quite believe that Trump actually means it...
It would seem that my predictions of how Trump would approach this were pretty spot on… @MattJ I am curious what’s your current take on it?
Key missing suggestion—devcontainers! IMHO one should always run a coding agent in a devcontainer with just the relevant code inside. IMHO the easiest is to use VSCode and have the workspace and ~/.claude mounted from the host. Inside the devcontainer, it should be a lot less dangerous to blacklist just a few of the most dangerous permissions and whitelist almost everything else. IMHO pretty much the only time it is acceptable to run a coding agent outside of a devcontainer is when you are asking it to help set up a new devcontainer configuration or debug a broken one.
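A minimal sketch of what that setup could look like (the base image and target paths are assumptions; VSCode mounts the workspace itself automatically):

```json
// .devcontainer/devcontainer.json: sketch of an agent sandbox; the image
// and mount target are assumptions, adjust to your project and user.
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "mounts": [
    // persist Claude Code auth/settings from the host
    "source=${localEnv:HOME}/.claude,target=/home/vscode/.claude,type=bind"
  ],
  "postCreateCommand": "pip install -r requirements.txt"
}
```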