Am I the baddie?
I am a software engineer. I work for a company that makes software for road construction. Last Monday we were in a bad crunch, and we were told to start using agentic workflows. We had about 50 tickets to close by the following Tuesday. I’ve been experimenting with AI development for years now, but this was different. I had access to Opus/Sonnet 4.6 and GPT5.4—the latest models.
Suddenly, they understood. I could talk about abstract concepts and analogies, and they got them. On the first day I was working through tickets in hours that would have taken me days. But we still had a ton of work and not enough time, and I was still bound to a single thread of work at a time. So, like any problem, I hacked around it. I started with a git worktree, which basically creates a whole other working copy of the project, and that meant multiple threads.
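For anyone unfamiliar with worktrees, here is a rough sketch of the idea: each ticket gets its own working copy on its own branch, so several agent sessions can run side by side without stepping on each other. The repo and ticket names below are made up; the demo builds a throwaway repo so the commands run anywhere.

```shell
# Throwaway demo repo; in practice you'd run `git worktree add` from your real checkout.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/main"
cd "$tmp/main"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

# One extra working copy per parallel ticket, each on its own branch:
git worktree add -q ../ticket-123 -b ticket-123
git worktree add -q ../ticket-124 -b ticket-124

git worktree list  # main checkout plus two independent copies sharing one object store
```

All the worktrees share a single `.git` object store, so they are cheap to create and branches stay visible across all of them.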
Still, I was limited to my single service, and the system I work on has around 20 services. Wednesday comes, and I’m still cranking the tickets out, when I realized what I could do: create a repo with submodules for every service. The agent works best when it can find the context it needs without being overloaded.
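The umbrella-repo idea can be sketched like this: one superproject whose submodules pin every service, so an agent working from its root can see the whole system. The service names are invented, and the demo uses throwaway local repos so it is self-contained (newer Git disables the `file://` protocol for submodules by default, so the demo re-enables it explicitly).

```shell
# Throwaway demo: two stand-in "service" repos plus an umbrella repo; names are made up.
set -e
tmp=$(mktemp -d)
for svc in billing routing; do
  git init -q "$tmp/$svc"
  git -C "$tmp/$svc" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m init
done

git init -q "$tmp/umbrella"
cd "$tmp/umbrella"
# Allow the local file protocol just for this demo; real submodules would use remote URLs.
for svc in billing routing; do
  git -c protocol.file.allow=always submodule add -q "$tmp/$svc" "$svc"
done

git submodule status  # one pinned commit per service
```

Each submodule is pinned to a specific commit, which also means the umbrella repo doubles as a snapshot of exactly which service versions the agent planned against.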
Thursday comes, and we’re not going to make it. I’ve already put in about 40 hours. They said to lean in, so I did. After setting up MCP servers for our ticketing, documentation, communication, and calendar systems, I told the agent to pull ALL of the tickets for the big feature we were working on, then go through our documentation and communications looking for mentions of the feature, and turn that into design requirements. After a Q&A session, we made a plan to implement all the open tickets. My idea was that with the full context, it would be better able to perform.
It worked, or at least it seemed to. I was almost embarrassed about it. I was talking to our systems architect about how everything is different, and I mentioned this branch of code. He said, “You know what? Let’s try it.” We brought it to the team, and they figured let’s give it a shot. I hadn’t actually run the code outside of tests, so our QA team dug into it live. The first one worked. The second. The third, and on and on. We went from not going to finish on time to mostly done. We found a few small bugs, but such is the way of software, especially in something as complex as this.
My side project expanded. I created a CLI, an extension for my IDE to manage local dev environments that could all run independently, and a dashboard that pulls all of my tickets and gives me a button that spins up an agent with special instructions. It pulls the ticket details, writes the code, and pushes it up for me to review. After that I added another button that fixes any issues that come up in review.
My workflow became:
Push button
Code review
Maybe push another button
My boss said I had gone plaid. Hahahaha. My dashboard became sophisticated, and my process lean. Now I had a way to interact with the whole system, and I had it solve big problems: ones that would have taken months, solved in a day, two with QA.
I had a system to unify our teams, and to allow business analysts to contribute code.
Today, a week after I started the project, I talked to two directors and blew their socks off. We’re talking about doing something like this for the entire company, and I talked about automating the two buttons. It was a big win. I know I have a big raise coming. It’s likely not enough considering my impact.
I went out with friends, and AI came up. They’re pretty sure it’s going to lead to disaster. My general P(Doom) is about 60%. As I was leaving, I had the thought: am I profiting off of human suffering? I’m proliferating these systems in more places, and my project will mean we are over-staffed at work. It kind of overwhelmed me.
Am I the baddie?
Did the roads get constructed?
I think it might be more accurate to say you’re an efficient component in Moloch’s machine.
But if you care about how things like gradual disempowerment play out, then I think the “baddie / goodie / pawn-of-Moloch” framing is probably not very useful. It might be worth instead thinking more concretely about things like:
How much are my actions contributing to speeding up human disempowerment? [1]
How could I keep my job (or whatever) while contributing as little as possible to various bad things?
Who are the relevant actors I would need to coordinate with, in order to slow things down? What, concretely, is stopping me from coordinating with them, and how could I fix that?
What other important considerations are there, besides “speeding up adoption of AI / replacement of humans”?
What could I do to offset harms I cause?
[1] Accounting for the other actors in the Molochian race.
You’re not, in my opinion. The opposite. We should have more people like you.
One of the big problems with AI is slowness of diffusion. It leads to a society-wide lack of situational awareness: “AI is just talk and hype, no big deal, we can’t even see it in the GDP/employment numbers!”
Yes, it would be better if we could get that society-wide situational awareness by anticipating what AI is likely to bring, instead of a “fuck around and find out” approach. That won’t happen. Therefore, we need diffusion to be as fast as possible.
Well, if you use AI to write closed-source software, you don’t know how much of the AI-written code is open-source code with the license washed off. So to that extent, yeah, you might be the baddie, and the only excuse is that a lot of people are doing the same. In my projects there are zero lines of AI-written code.
Strong-upvoted for asking the right questions.
The transition will happen with or without your help.
But speeding it up locally is going to cost people jobs a little sooner.
I think things would go a little better if people in your position sandbagged a bit. But it won’t make much difference.
Yeah. I think that is where I’m landing. I’ve been going hard for the last two weeks, with not enough sleep. It just hit me once I had time to slow down.