Your comment makes it sound a bit like there is no need for performance, but taking servers or REST services as an example, most programmers care about throughput, and almost all care about latency, both of which are measured with e.g. Prometheus. When your website takes one more second to load you lose clients, and if your code is slow it shows up on the cloud provider’s bill. Even if you are IO-bound, you can batch requests, go async, or do less IO.
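To make "batch requests" concrete, here is a minimal sketch (in TypeScript, with a made-up `MicroBatcher` class and a stand-in `batchFetch` callback, not any particular library's API) of coalescing lookups issued in the same tick into a single backend round trip:

```typescript
// Sketch: collect individual load(id) calls made in the same microtask turn
// and issue them as one batched backend call, trading a tiny scheduling
// delay for fewer round trips. batchFetch stands in for whatever bulk
// endpoint or query the backend already exposes.

type Resolver = (value: string) => void;

class MicroBatcher {
  private pending = new Map<number, Resolver[]>();
  private scheduled = false;

  constructor(
    private batchFetch: (ids: number[]) => Promise<Map<number, string>>
  ) {}

  load(id: number): Promise<string> {
    return new Promise((resolve) => {
      const waiters = this.pending.get(id) ?? [];
      waiters.push(resolve);
      this.pending.set(id, waiters);
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush on the next microtask, so all calls from this tick batch up.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = new Map();
    this.scheduled = false;
    const results = await this.batchFetch([...batch.keys()]);
    for (const [id, waiters] of batch) {
      for (const resolve of waiters) resolve(results.get(id) ?? "");
    }
  }
}
```

With this, three `load` calls in one tick turn into one backend call; libraries like DataLoader implement the same idea with more care around caching and errors.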
The reason people don’t bother hand-optimizing code is that the hardware is really fast, and that a handful of programmers put a lot of effort into writing optimizing compilers and optimized frameworks, so the average output is good enough for the average workload.
I’m not saying that there is no need to optimize for performance for REST-like servers; rather, I’m saying that it’s very dependent on the specific use case, which is difficult to predict. Often it can be more economical to scale up when there isn’t sufficient throughput, and to focus engineering optimization efforts on only those queries that perform poorly. Even then, there are typically optimizations to be made long before one reaches for assembly. Optimizing SQL queries and increasing parallelization are often sufficient.
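The most common SQL-side win is eliminating N+1 access patterns. A minimal sketch, using a hypothetical `queryByIds` stand-in (in a real codebase this might be e.g. a Prisma `findMany` with an `in` filter) so the round-trip count is observable:

```typescript
// Sketch: the N+1 pattern issues one query per id; the batched version
// fetches all ids in a single round trip. queryByIds is a stand-in for
// a real database call, and queriesIssued counts simulated round trips.

interface User {
  id: number;
  name: string;
}

let queriesIssued = 0;

async function queryByIds(ids: number[]): Promise<User[]> {
  queriesIssued++; // each call models one database round trip
  return ids.map((id) => ({ id, name: `user-${id}` }));
}

// N+1: one round trip per id — latency grows linearly with the result set.
async function loadUsersNPlusOne(ids: number[]): Promise<User[]> {
  const users: User[] = [];
  for (const id of ids) {
    users.push((await queryByIds([id]))[0]);
  }
  return users;
}

// Batched: a single round trip regardless of how many ids are requested.
async function loadUsersBatched(ids: number[]): Promise<User[]> {
  return queryByIds(ids);
}
```

Same results either way; the difference only shows up in round-trip count, which is exactly what dominates latency against a remote database.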
For instance, the server that I work on is a GraphQL API written in TypeScript. It has a few million users, and it runs without problems. When I have had slow queries, I typically needed to optimize the SQL/Prisma queries; twice I needed to improve parallelization. We’re not particularly compute-bound, so I haven’t yet needed to offload processing to a compiled language. Node.js is simply fast enough.
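The parallelization fixes mentioned above usually amount to one change: independent awaits run sequentially by default, and `Promise.all` overlaps them. A sketch with simulated queries (`fetchProfile`/`fetchOrders` are hypothetical stand-ins for independent resolver calls):

```typescript
// Sketch: two independent IO calls. Awaiting them one after the other
// serializes the latency (~100ms); Promise.all overlaps it (~50ms).

const delay = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function fetchProfile(userId: number): Promise<string> {
  await delay(50); // simulate an independent database query
  return `profile-${userId}`;
}

async function fetchOrders(userId: number): Promise<string[]> {
  await delay(50); // simulate another, unrelated query
  return [`order-a-${userId}`];
}

// Before: sequential awaits, total latency is the sum of both calls.
async function loadPageSequential(userId: number) {
  const profile = await fetchProfile(userId);
  const orders = await fetchOrders(userId);
  return { profile, orders };
}

// After: the queries don't depend on each other, so start both at once;
// total latency is the maximum of the two instead of the sum.
async function loadPageParallel(userId: number) {
  const [profile, orders] = await Promise.all([
    fetchProfile(userId),
    fetchOrders(userId),
  ]);
  return { profile, orders };
}
```

The results are identical; only the wall-clock latency changes, which is why this kind of fix shows up in traces rather than in tests.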