I agree with your point. But what I find interesting about legal work is not whether it could be automated, or whether AI usage could be detected. I think lawyers will see the job automation coming and take legal action to protect themselves, so that AI is not legally allowed to be used for certain key legal tasks and they ~all keep their jobs.
It seems likely to me that (at least some) lawyers will have the foresight to see AI getting better and better, and to realise that AI automation won’t just stop at the grunt work but will eventually come for the more high-profile jobs.
thus making it less valuable to hire juniors; thus making it harder for juniors to gain job experience.
Yes, this seems very likely. I don’t see why it would be limited to SWEs.
Nobody wants grandma to get scammed. But I feel this is a false comparison; the real comparison would be against the grandmas who are today paying for multiple $20/month subscriptions because they got signed up and can’t figure out how to unsubscribe.
I agree with Brendan Long below, and while there are always horror stories, I don’t think most banks want the bad press of bankrupting grandma.
It might feel like an extra cost in the moment, but I doubt it would end up being more expensive, since you’re more able to “fine-tune” what you’re paying for.
I think most humans prefer a subscription to not have a marginal cost to use what they enjoy / find useful
I disagree here, I feel like I regularly see people online complaining about needing a subscription for everything nowadays, and also about the price of those subscriptions quickly adding up to large amounts.
Oh this looks cool, thanks for the link! Interesting to see something similar and how that worked out.
Jamba! had drawn criticism for allegedly misleading customers in its service advertisements. In general, Jamba! services were sold as a subscription, despite advertising that seemed to imply that customers were buying a one-off phone ringtone.
I couldn’t find anything about purchases/subscriptions on the WAP Wikipedia page?
From a business perspective, there’s always some price the business can charge that would make running adverts comparatively unprofitable. This price might be very high, but it’s not infinite. I’ll agree that many existing “subscription” services also run adverts despite you paying the subscription, which is just frustrating.
I’ll agree that browser support wouldn’t be required, but I’ve got a feeling that browser support would reduce the friction past some threshold and make this “enabled by default”. The number of people with a Stripe login is strictly less than the number of people with a browser, so requiring a Stripe login would be some amount of extra friction. These feelings are weakly held though.
But if I spend money on a single article and then it’s uninteresting, it feels like I wasted money
I feel like this might end up being a good thing. If you consider a subscription as a low-frequency high-risk high-reward bet (you could lose the value of the subscription, or gain the value of multiple articles), and many one-off payments as high-quantity low-risk low-reward bets (at worst you lose the value of one article, at best you gain the value of one article), then having multiple bets will give you more information about the underlying distribution. Practically, I imagine that I’d discover whether or not I like a publication faster if I can purchase a couple of low-risk articles rather than having to spend the full subscription fee.
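As a rough sketch of that intuition (all prices, names, and numbers below are made up for illustration, not from the original discussion), a quick simulation of “buy a few cheap one-off articles before committing to a subscription” might look like this:

```python
import random

# Illustrative sketch with made-up numbers: each article from a publication
# is "worth it" with some unknown probability p_good. A handful of cheap
# one-off purchases lets you estimate p_good before committing, whereas a
# subscription makes you pay the full price up front to learn the same thing.

ARTICLE_PRICE = 0.50       # hypothetical one-off price
SUBSCRIPTION_PRICE = 10.0  # hypothetical monthly price
TRIAL_ARTICLES = 5         # number of cheap "bets" before deciding


def trial_says_subscribe(p_good: float, threshold: float = 0.6) -> bool:
    """Buy a few one-off articles and subscribe only if enough were worth it."""
    hits = sum(random.random() < p_good for _ in range(TRIAL_ARTICLES))
    return hits / TRIAL_ARTICLES >= threshold


random.seed(0)
# A publication that's only good 30% of the time: the trial costs
# 5 * $0.50 = $2.50 and usually tells you to skip the $10 subscription.
skipped = sum(not trial_says_subscribe(0.3) for _ in range(1_000))
print(f"Trial said 'skip' in {skipped}/1000 runs, "
      f"each trial costing ${ARTICLE_PRICE * TRIAL_ARTICLES:.2f} "
      f"vs ${SUBSCRIPTION_PRICE:.2f} for a blind subscription")
```

The point isn’t the specific numbers, just that many small bets let you learn whether a publication suits you at a fraction of the cost of the one big bet.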
You certainly can spend money on an article and later regret it, but this argument applies equally to subscriptions. Except with subscriptions, you’ve wasted significantly more money.
This would feel particularly bad if I get charged automatically as soon as I click a link
Agreed, having read your case I now think automatic charging should be off-by-default, so you only enable it for websites you’ve got high confidence in. Note the parallel with subscriptions essentially being on-by-default.
I’d also be curious to hear your thoughts on Inkhaven as a program, and not just your thoughts about writing every day and what that taught you. I’ve also been writing every day in November, so I’m curious what the “Inkhaven experience” was like, since I’ve already got a good idea what the writing-every-day aspect has been like.
Another perspective: I don’t feel sad about not personally solving the problems of the past (e.g. figuring out calculus, steam engines, nuclear power, etc) and am extremely happy that I live in a world where these problems are already solved.
I think there’s a small chance that having all the big problems be solved by AI will feel similar. Instead of “historical persons” solving the majority of the problems, it’s a historical AI that solves all the problems.
This is a fair point. An assumption I had (but forgot to include in the post) was that most English-speaking schools spend several years teaching children a non-English second language, and very few kids come out of it speaking that language. So sign language could be a better default second language to teach kids.
I’m not super convinced that things would be better if everyone spent those years learning sign language, nor do I think that learning sign language would be magically more likely to stick than a spoken language. But I do think that sign language has some interesting possibilities due to the different medium of communication, and I rarely (if ever) see this brought up.
I feel my point still stands, but I have been struggling to articulate why. I’ll make my case; please let me know if my logic is flawed. I’ll admit that the post was a little hot-headed. That’s my fault. But having thought about it for a few days, I still believe there’s something important here.
In the post I’m arguing that survivorship bias due to existential risks means we have a biased view of how risky they are, and that we should take this into account when thinking about existential risks.
Your position (please correct me if I’m wrong) is that the examples I give are extremely unlikely to lead to human extinction, and therefore that these examples don’t support my argument.
To counter, I think that 1. given that it’s never happened, it’s difficult to say with confidence what the outcome of a nuclear war or a global pathogen would be, and 2. even if complete extinction is very unlikely, the argument I made still applies to 90% extinction, 50% extinction, 10% extinction, etc. If there are X% fewer people in a world that undergoes a global catastrophe, that’s still X% fewer people who observe that world, which leads to a survivorship bias as argued in the post.
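To make that concrete with made-up numbers (an illustrative calculation, not from the original post): suppose half of all possible worlds suffer a catastrophe that kills 90% of a population of $N$. The fraction of observers who find a catastrophe in their world’s history is then

$$\frac{0.5 \times 0.1N}{0.5 \times 0.1N + 0.5 \times N} = \frac{0.1}{1.1} \approx 9\%,$$

even though catastrophes occurred in 50% of worlds, so a typical observer underestimates how common they are.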
This is similar to the argument that we should not be surprised to be alive on a hospitable planet where we can breathe the air and eat the things around us. There’s a survivorship bias that selects for worlds on which we can live, and we’re not around to observe the worlds on which we can’t survive.
Claiming that literally no nuclear incident nor biological risk could have “particularly reduced it’s population” seems like a very strong claim to make. Especially given that your argument only holds if you’re correct (e.g., if one of these had ended humanity, we wouldn’t be having this conversation).
For what it’s worth, I didn’t interpret the comment as a “zinger”. While I thought I explained what I meant by “significant business integration”, at least 2 other people upvoted the original comment, so I can only assume I didn’t explain my point maximally well.
I’m happy the comment was made, and think that sometimes a short question can get to the point better than a long one.
I’ll completely grant that this is only a first-order approximation, and that for most (all?) technical work, getting more specific timelines matters a lot. I wanted to make this point because I see many laypeople not quite buying the “AGI in X years” timelines because X years seems like a short time (for most values of X, for most laypeople), but the moment I switch the phrasing to “computers are ~40 years old; given this rate of progress we’ll almost certainly have AGI in your lifetime”, they become convinced that AI safety is a problem worth worrying about.
It’s non-zero business integration, but I’m thinking about something closer to fine-tuning on manipulating Excel spreadsheets, collating information from Teams/Slack/Outlook/etc., and creating PowerPoint slides from existing sources in the company. If you compare the AI-powered tools for SW engineers with the AI-powered tools for non-SW white-collar workers, the difference is night and day IMO. Many of the existing business integrations are little better than copy-pasting the context into your favourite LLM, whereas (for example) Cursor is significantly better than that, has custom-trained models for Tab autocomplete, etc.
Ooh thanks for the link, I hadn’t seen that before, going to read that tonight.
It’s worth noting that there are domains where there are no experts
If I were to write a follow-up talking more about this expert-novice divide, I’d focus on defining expert/novice not based on their absolute level of knowledge/experience in the field, but relative to some specific person. E.g. someone might be an expert in AI safety compared to their grandma, but a novice in AIS compared to Yudkowsky.
I think defining experts/novices as relative labels is more informative than saying there are domains where there are no experts. I agree that there are domains where everyone has barely an elementary understanding of the field, but within that narrow range of expertise I think it’s still useful to have a term for those at the upper end of the range compared to those at the lower end (although this is mostly semantics).
A class of people who think of themselves as experts but don’t really have a clue is the most dangerous when it comes to trapping themselves in traps of their own making.
I wouldn’t define these people as experts. 100% agree that they’re the most dangerous, especially if they’ve learnt to disguise themselves as experts without actually having the required expertise. In the essay I mostly ignored people who are dishonest and try to disguise themselves as experts/novices when they’re not. Maybe that’s a post for another time.
On second reading, you might be referring to people who honestly believe themselves to be experts but actually have no clue what’s going on? That’s something I didn’t consider. Again, I agree that they’re dangerous. It feels like the sort of thing LessWrong would have written about before? I’m curious how you could discover that you’re in this category of honestly-but-fake experts.
Strongly agree. However, I believe lawyers to be adept at navigating the legal system, so they’d likely bundle job protections for lawyers alongside job protections for other, more empathetic jobs such as teachers or 911 call agents. In general, I predict that lawyers will see AI job automation as a valid threat, that they will take action against this threat, and also that they are much more competent at legal manoeuvring and politics than I am, so they would come up with competent ways to achieve their goals.