I agree with the microservice points except for these:
Performance degradation due to network overhead outweighing RAM savings
The network penalty is real but can be optimized. Not an absolute blocker.
Cloud-native services rely on microservices and scale well despite network overhead.
Event-driven architectures (Kafka) can mitigate excessive synchronous network calls.
Optimized serialization greatly reduces the cost of network calls.
Example: Netflix operates at scale with microservices and optimizes around network overhead successfully.
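To make the serialization point concrete, here is a minimal sketch (with a made-up record; the field names are illustrative, not from any real API) comparing a JSON payload against a fixed-schema binary encoding using only the Python standard library:

```python
import json
import struct

# A hypothetical per-request record one service might send to another.
record = {"user_id": 123456, "score": 0.87, "active": True}

# Text serialization: human-readable but verbose.
json_bytes = json.dumps(record).encode("utf-8")

# Binary serialization with a fixed schema: an 8-byte int,
# an 8-byte double, and a 1-byte bool -> 17 bytes total.
bin_bytes = struct.pack("<qd?", record["user_id"], record["score"], record["active"])

print(len(json_bytes), len(bin_bytes))  # the binary payload is several times smaller
```

Schema-based formats like Protocol Buffers or Avro apply the same idea with versioning and tooling on top; the win per call is small, but it compounds across millions of inter-service requests.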
More moving parts = lower reliability
Poorly designed microservices can indeed degrade reliability through cascading failures; well-designed ones improve it.
Failure domains are smaller in microservices, meaning a single failure doesn’t bring down the entire system.
Service meshes and circuit breakers improve resilience.
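The circuit-breaker pattern mentioned above can be sketched in a few lines. This is an illustrative toy (thresholds and class names are my own, not from any specific library such as resilience4j or a service mesh): after enough consecutive failures, the breaker opens and callers fail fast instead of queuing up behind a dead dependency.

```python
# Minimal circuit-breaker sketch (illustrative, not production code).
class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            # Fail fast: don't let callers pile up on a known-bad dependency.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            self.failures = 0  # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True  # trip the breaker
            raise

def flaky():
    raise ConnectionError("downstream service unavailable")

breaker = CircuitBreaker(failure_threshold=3)
for _ in range(3):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

print(breaker.open)  # True: subsequent calls fail fast
```

Real implementations add a half-open state with a timeout so the breaker can probe the dependency and close again once it recovers.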
It’s often easier to scale a monolith horizontally than rewrite it as microservices
Monoliths scale well up to a point; microservices help at extreme scales.
Monoliths are easier to scale initially, but eventually hit limits (e.g., database bottlenecks, CI/CD slowdowns).
Microservices allow independent scaling per service.
Example: Twitter and LinkedIn refactored monoliths into microservices due to scaling limits.
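A back-of-envelope calculation shows why independent scaling matters at the limit. All numbers here are made up for illustration: a monolith replica carries every module, so scaling for the hottest path replicates the cold ones too, while microservices replicate only what is loaded.

```python
# Hypothetical RAM footprint per module (GB) and replicas needed for load.
modules_gb = {"auth": 1, "search": 2, "billing": 1}
replicas_needed = {"auth": 2, "search": 10, "billing": 2}

# Monolith: every replica carries all modules, and the replica count
# must be sized for the hottest module ("search" here).
monolith_gb = sum(modules_gb.values()) * max(replicas_needed.values())

# Microservices: each service scales independently to its own load.
micro_gb = sum(modules_gb[m] * replicas_needed[m] for m in modules_gb)

print(monolith_gb, micro_gb)  # 40 vs 24
```

The gap widens as load becomes more uneven across modules, which is exactly the regime where companies like Twitter and LinkedIn hit their monoliths' limits.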
So I agree with everything you wrote. Microservices can be extremely reliable and performant, and at hyperscale are often the only choice.
But these things require a lot of design effort and hardening; they don’t happen by default. If you take your monolith, convert it to microservices, and deploy it as-is, chances are your performance will significantly decrease (for the same compute cost), not increase.
I know I sounded very harsh on microservices, but I have nothing against them. It’s just that people jump straight to microservices without really understanding the tradeoffs.
Very much agree. And you can often get the maintainability benefits of modularisation without the performance overhead through good old refactoring within the monolith.