Just this guy, you know?
Dagon
I notice I’m confused. I can’t tell if I just disagree or if I’ve missed a big part of the debate. This doesn’t seem to address causality at all, only low-information trades, which I think have very little controversy.
“acausal” means “no causal channel”, in all the writing I’ve noticed about acausal trade and acausal blackmail. That is, no direct NOR indirect information or influence on outcomes, only pure-logic a priori modeling.
I think your model misses the core of a normal trade—it has nothing to do with initiation or verification. It has to do with TRADE. Party A takes an action that causes party B to take a reciprocal action, which neither of them would take without the other. There’s usually agreement and verification in it, but the core is the quid pro quo.
Your examples all include some causality (you give an apple, a charity gets some money). But then your objections are about information, not about causality.
upvoted but disagree. (at least) two likely directions that could solve this:
-
if it’s cheap to build, it’s cheap to rebuild. Just start over every few years, using all the data you’ve gained during the previous iteration. Vibecode the data migration/simplification, vibecode the API compatibility during cutover, and you’ll have way better tools when it comes time to rebuild. In fact, perhaps you should be shorting the companies who are still spending a ton of money on artisanal code. Or worse, outsourcing/contracting to non-rockstar human coders, who almost certainly use LLMs without telling you.
-
even if context windows hit a wall, the engineering around it (RAG, hierarchical agents, old-school separation of concerns) has a lot of headroom. Similar to how human programmers end up with structure and hierarchy in large systems, LLMs will too. There’s still a ways to go before “software architecture” is vibe-primary, but it’s not impossible.
A vibecoding company is therefore a company I would short. The more vibey it is, the shorter the position I would take.
Can you give a few examples of vibecoding companies? Are these companies selling vibecoding or related tooling? I wouldn’t short those yet (though the big labs might kill them almost accidentally). Or are these companies with a specific business model that happen to vibecode most of their software? I’d evaluate them on their business idea and execution, not on their vibecoding.
-
I don’t know how deeply “in the circle” I am. I suspect that many of my coworkers are even less so than I am (but haven’t really asked). There’s wide agreement in that group that AGI is coming relatively soon. There’s no agreement on ASI, either on definition or timeline or impact. The most common belief is that some aspects will surpass human capabilities, but uncertain when (or if) the infrastructure for continuous learning/adaptation and long-term integrated preferences will appear.
It’s a bad investment for the same reason a lot of small-scale investing in commodities which don’t have strong market infrastructure is—the overhead of transactions strongly outweighs the risk/reward. Anyone who’s going to pay you (as opposed to paying their ISP or aggregator) for a small block of IP addresses will instead buy/hoard their own.
It’s also subject to a fair bit of elasticity—when it gets expensive, there are technological options which are less ideal, but well worth it when the cost savings justify it. cf. fracking for oil. IPv6 and CGNAT are two options you mention, and others could be invented if needed.
I’d argue that IPv6 is already well-established enough in the roaming/mobile/cell world that it’ll put a cap on IPv4 prices. The expense/hassle of switching to IPv6 is real, but not worth $hundreds to most people, or $thousands to most small businesses.
Well, you can’t simulate it because the mechanism of prediction is unspecified, as is the mechanism of free will that makes the decision. You just don’t know if, in the thought experiment universe, you actually have an open option to choose.
You can very easily simulate the trivial case (ignore causality and decision theory, assume Omega cheats by changing the values after you decide but before the result is revealed), which leads to one-boxing. Or the scenario-rejecting trivial case, the CDT assumption that your choice has literally no impact on the boxes, which leads to two-boxing.
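Both degenerate readings can be sketched in a few lines of Python (the payoff values are the standard illustrative ones; “Omega cheats” here means box B’s contents are set after seeing your actual choice):

```python
# Newcomb's problem, two trivial readings (illustrative payoffs).
# Box A (transparent) always holds $1,000; box B (opaque) holds
# $1,000,000 iff Omega predicted one-boxing.

def payoff_omega_cheats(choice: str) -> int:
    """Omega 'cheats': box B is filled after seeing your actual choice,
    so the 'prediction' is perfect by construction."""
    box_a = 1_000
    box_b = 1_000_000 if choice == "one-box" else 0
    return box_b if choice == "one-box" else box_a + box_b

def payoff_cdt_fixed_boxes(choice: str, predicted_one_box: bool) -> int:
    """CDT reading: the boxes were filled before you chose, and your
    choice has no influence on their contents."""
    box_a = 1_000
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if choice == "one-box" else box_a + box_b

# Omega-cheats reading: one-boxing dominates.
assert payoff_omega_cheats("one-box") > payoff_omega_cheats("two-box")

# Fixed-boxes reading: two-boxing dominates whatever the prediction was.
for pred in (True, False):
    assert payoff_cdt_fixed_boxes("two-box", pred) > payoff_cdt_fixed_boxes("one-box", pred)
```

Each trivial case has an unambiguous dominant strategy; the interesting part of the thought experiment is exactly the unspecified middle ground between them.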
Thank you for posting strong meta-research that did NOT result in anything counterintuitive or contrarian-attractive.
I’d be curious to see a more complete threat analysis (or at least a sketch) covering the whole data chain, including all human actors. Franklin’s advice “Three can keep a secret, if two are dead” is perhaps true at this level (directed state-level actors trying hard to stop you). I’d think that the larger risk is counterparty compromise, which applies to any transmission mechanism.
Post-compromise (when they know about and are looking to catch/capture you), then dead drops are much riskier, as you have to physically be there.
Maybe I’m misunderstanding, but it seems like Moloch is a name we give to this type of selection pressure, but Themis is an actual conscious God, with goals and preferences for its subjects/victims.
They don’t seem comparable or selectable, even in metaphor.
I’m not sure why “imperfect” is there only for glomarizing. In real humans, it applies to all three.
This overstates it a bit, but has a LOT of explanatory power: https://www.econlib.org/archives/2009/11/price_discrimin_2.html Price Discrimination Explains Everything.
There’s a whole lot of things where marginal cost is very low, even though average cost is somewhat high (due to startup and fixed costs). For these things, selling “extra” stuff at low prices in markets that don’t leak back to interfere with the primary revenue sources is incremental profit without downside.
This plus the differential in labor costs, which are often significant for last-mile delivery (getting things into consumers’ hands), makes it pretty understandable why the law of one price (the idea that if transport and transaction costs are tiny, things are priced identically) doesn’t apply for many things.
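The marginal-vs-average-cost point can be made concrete with toy numbers (all figures below are hypothetical, purely to show the arithmetic):

```python
# Toy price-discrimination arithmetic: high fixed cost, low marginal cost.
fixed_cost = 100_000     # startup/development cost
marginal_cost = 2        # cost to produce one more unit
primary_price = 50       # price in the primary market
primary_units = 5_000

primary_profit = primary_units * (primary_price - marginal_cost) - fixed_cost

# Average cost per unit is much higher than marginal cost.
average_cost = fixed_cost / primary_units + marginal_cost  # 22 per unit

# Selling "extra" units in a segmented market, well below average cost,
# is still incremental profit, as long as price exceeds marginal cost
# and the cheap units don't leak back into the primary market.
discount_price = 10
discount_units = 3_000
incremental_profit = discount_units * (discount_price - marginal_cost)

total_profit = primary_profit + incremental_profit
```

Here the discount units sell at less than half of average cost, yet every one of them adds profit, which is the whole mechanism behind “incremental profit without downside” in segmented markets.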
there are stable suboptimal equilibria
Are there, though? We have no idea how to think about stability NOR optimality on scales that include “end of time” or “certainty of physics”. My intuition (not enough evidence to call it a belief) is that all equilibria are dynamic and unstable. Separately, I suspect “optimum” is undefined for a lot of the interactions we talk about—there really is no bridge between is and ought.
I don’t think the causality is as clear or direct as you frame it, and I don’t think Dijkstra would agree with your framing either. I strongly expect it’s not about losing skills, but about never having the opportunity/requirement to gain the skills and knowledge (and never having to really internalize the lessons of the less-abstracted view).
I’ve been a professional programmer for a long time—starting well before there was the Internet. It was noticeably true in the late 80s that people who were incapable of writing and debugging assembly were not top-tier coders. It was true in the mid-90s that assembly was not the requirement, but C/C++ intricacy was a very good proxy for the mindset and attention to detail that made for good software. It wasn’t true by the mid-aughts—there were very good programmers in Java (but that sometimes included JVM bytecode debugging), and front-ends were getting complicated enough that it took real skill to be good at UI and apps. Throughout this “top tier coder” is doing a lot of work—there’s a HUGE amount of value from middle-tier coders, and that has increased over time as the abstractions have gotten better. This has led to a branching and specialization of what it means to be a “programmer”.
That branching matters a lot to this discussion. Once systems became fast enough and software infrastructure common/resilient enough, it made sense to have systems programmers be distinct from application developers, and then systems split into OS, compiler, platform, database, and application, and then further into middle-tier (“backend”) and user-flow (“frontend”), and has since specialized further.
It’s just impossible to say that one size fits all, even in terms of “how good a programmer” someone is. There are a bunch of different skillsets that matter at different layers, and it’s probably not possible for any one human to be good at all things. However, it does remain true that abstractions leak—to be great at any one thing requires a pretty deep knowledge and honed-through-experience instincts about the adjacent layers, and a shallower-but-still-real understanding of the layers beyond those.
Vibe Coding compresses this quite a bit—there’s a lot of layers that the controlling developer just doesn’t see, and that’s great until it breaks. It’s still the case that being able to actually line-level step through and debug things is necessary sometimes, and people who have ONLY vibe coded can’t do this (at least not efficiently/well—you need to do it hundreds of times before it’s natural). People who have done years of hand-coding CAN do this, even though it’s no longer very much of their energy (because vibe coding is so much more effective for 80% of things).
I’m not sure I have a point, other than there’s multiple dimensions here, and “able to do” is distinct from “does always”.
It’s a thing I changed my mind on, based on your comments and my re-reading it more critically (and really, reading it thoroughly at all). It’s a perfect reminder to me of one of the main failure modes of LLM assistance—it’s good enough at first glance that it’s easy to forget to apply the same level of self-critique and thought one does for direct writing.
I don’t have a good way to detect this failure mode in myself, let alone others, but it’s very apparent when I look, and is probably common enough that “is it substantially AI” is an ok proxy for “is it low-quality”. This is a reversal of my previous position, though I still suspect it won’t last for long.
I agree with both you and Raemon—the AI portion is hugely worse than the hand-written comment. And I suspect it generalizes—AI writing can be as good as or better than human writing with somewhat less effort, but it’s not often that sufficient effort is taken.
I still suspect that identifying AI won’t be sufficient, but I fully concede the point that the vast majority of AI writing is less useful than the majority of human writing.
I do use AI for most coding, where “good enough” is in fact good enough, but I see that I’ve got further to go in figuring out how to guide and correct it for writing.
[ edit: I have substantially changed my beliefs stated here, based on how bad the AI version is, on closer inspection. It’s not durable, but just AI-identification is probably helpful in the medium term ]
I worry a lot that the binary “AI-written” filter is a completely different dimension from what we actually want: a quality indicator for things that haven’t gotten many votes yet. Let’s consider how and why you want junior MATS-scholar contribution (with massive AI assistance in writing) and don’t want an outside contribution (with massive AI assistance in writing). I suspect we’re going to need to get to a point where the site grades (and maybe categorizes as to likely favorable audiences) posts using AI, rather than trying to segment.
As you say, nothing useful is going to be AI-free for very long. I’m embarrassed at the time I’ve spent on this comment, and I suspect a brief interview with Opus 4.6 would have produced one more concise and useful.
Actually, yes—here’s what I should have done, in about 1/10 the time (Opus 4.6, with input of your comment and 3-4 fragments of points I want to make, followed by a request to make it more concise):

The “AI-written block” assumes a stable boundary between AI and human content that’s already gone. My thinking, framing, and editing are all AI-assisted. Where does the block go?
The coding analogy undermines the proposal: we don’t flag which lines Copilot wrote — the question is whether the output is correct and useful. Same here.
The actual signal you need is epistemic quality: original judgment vs. vocabulary pattern-matching. AI-block markup doesn’t measure that — it’s compliance theater that honest users follow and bad actors ignore. More tractable approaches: reputation systems, structured epistemic standards, or ironically, AI-assisted grading of submissions against LW’s actual quality criteria.
The runtime/data-plane APIs are not the moat. There already exist compatible APIs for at least some of AWS services (S3, DynamoDB), and many others use standard/open APIs (SQL), or very simple APIs (SNS, SQS, Firehose, even Lambda and ECS).
It’s the very deep auth/RBAC mechanisms, the automation of control plane/setup, and the integration of the services to use together which are the operational barrier to competition. And the history of durability and availability, and clear guidance as to design considerations for reliability, which are the trust barriers to competition. Oh, and there are economies of scale even for datacenters—learning to design, build, and operate them has a pretty steep curve.
The easy part is getting easier. The hard part isn’t (well, it is, because AWS provides an example and because LLMs make everything faster; but they make AWS better too, and AWS has the people/institutional knowledge to get excellent use of LLMs on these topics).
I’m not saying AWS is immune to competition on core services, only that it won’t be a swarm of startups; it’ll be a gradual change of equilibrium with other large providers. That said, for newer services, there’s a lot of room for competition from startups built on AWS, which do the new functions better than AWS does because they make different tradeoffs, like not being fully compatible with AWS auth/setup/billing/management interfaces, which are by necessity rather complex. Even there, the risk is interesting and probably different from recent history. Previously, small competitors to AWS in areas that AWS wanted to get good at just got acquired and became part of AWS. Now it may be more feasible for AWS to rapidly compete with them and implement AWS-style services that make the startup far less attractive to customers.
[ disclaimer: I have worked for companies related to this topic, and this opinion is not based on anything but my speculation and outside knowledge ]
We will get a miraculous quantum leap in understanding consciousness before we build conscious minds.
We will build conscious minds before we understand consciousness.
We will not be able to build artificial conscious minds unless we understand consciousness.
(random thoughts—not sure I believe this any more than any other model, but I don’t see it talked about much)
Or it could be 4i. Consciousness isn’t a thing in the way it’s being talked about. Humans (even or especially moral philosophers) are simply wrong when they say there’s causality in how we treat each other and how we assume/measure consciousness in others.
The respect/negotiation/care we have for other humans is an evolved set of behaviors based on power and mutual dependency, which has coalesced in some of our brains into a religious model that there’s some spark or unmeasurable property that “deserves” this style of interaction.
Saying we treat others well because they’re conscious is a rationalization unrelated to any real, measurable thing. In fact, we treat others well because that mostly works in a lot of equilibria, and has been the case long enough that it’s metastasized into a default belief for many/most people.
(reminder: I am not proposing or defending this view, but it does seem even harder to falsify than most of what I’ve read/seen about consciousness as a basis for respect/rights)
The claim isn’t that minds are safe and nice by default. It’s that they’re not sociopaths.
I thought one of the tenets of this debate is that there’s no in-between. Either safe and nice (aligned) or everybody dies (not aligned). Humans are a good example—most are not pure psychopaths, and yet they do a ton of harm to each other all the time, and have threatened to destroy the species for decades. A set of much more powerful minds with even that level of misalignment would be disaster, and if they’re slightly worse than humans, so much the worse.
Great exploration, and it highlights that we don’t have any good general theory of aggregation of individual behavior/utility/preference. Let alone sub-individual → individual → pair → family → Dunbar-sized groups → supergroups → universe.
It’s pretty clear that market behavior is not a separate atomic thing; it’s just one way of adding up interactions between many individuals. It’s a fine metaphor to talk about it as if it had agency, but really that’s just an acknowledgement that there’s enough similarity in the human participants that the statistical sum of those measured interactions is a bit predictable. Sometimes.
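One concrete reason a clean general theory of aggregation is hard: even the simplest summing-up of individual preferences can fail to produce a coherent group preference. A minimal Condorcet-cycle sketch (hypothetical voters and options, purely illustrative):

```python
# Three voters, three options; each voter ranks options best-to-worst.
voters = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

# Every individual ranking is perfectly transitive, but the pairwise
# majorities form a cycle: A beats B, B beats C, C beats A.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Each individual here is internally consistent; the incoherence only appears at the aggregate level, which is the kind of thing any individual → group → supergroup bridge has to contend with.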
Is this just a long way of answering the title question with “because morality isn’t objectively real”? It’s a question of consensus, not of prediction of future measurements.