All the rage. All the fame. All the talk. All the buzzwords. What a shame that microservices in general are an abomination.
I'll talk about why monoliths are in general superior to microservices and why this garbage is being promoted as salvation right now.
I've worked in a tiny Java shop with 20 people. As I mentioned in the databases article, people seemed to have little understanding, and everyone seemed to blindly follow whatever trend a mediocre CTO dictated. And, of course, like all low quality people who cannot yet think for themselves, who cannot weigh and judge with wisdom and understanding, they follow a trend. It does not matter if it's a good trend or a bad trend, arguments don't matter, logic and reasoning don't matter; someone yells at the top of their lungs that "microservices saved our company" and people just march forward.
I've worked on one project, which I mentioned in the databases article, that was completely demolished by the use of microservices; that was the main contributing factor. Our app was a mobile payment app. We did not even do the mobile part, we did not even do the actual banking part; all we had to do was:
- Accept an HTTP request, forward it somewhere else, and return the (possibly remapped) response to the client
- Sometimes update some records in the database
- Issue HTTPS certificates to mobile clients
What I've listed here does not seem hard. Nor is it actually hard. We do not do hard math, we do not do machine learning, we do not compute vectors in a 3D world, we do not use Vulkan for GPU computing, we do not embed a scripting language in the project, we do not spend time solving performance problems at 100k requests per second, we do not do AI. Just a tiny integration app that doesn't really do anything. Nor should it take much code.
But no, unfortunately, Java developers, being of very low quality in general, overengineer everything so far beyond recognition that the original tasks, which were simple to begin with, get lost. FizzBuzz Enterprise Edition is real, my friend.
Just a slight taste of how much joy microservices brought into this pathetic excuse of a project:
- I think it was about 5 microservices for these tiny tasks
- Now they needed integration tests for all the calls between microservices
- The idiot CTO decided that linkerd ought to connect everything, so HTTP requests no longer even travel directly
- More than 10 machines were needed for the final deployment (last I heard)
- Huge overall memory usage from the JVM, multiplied per process
- Ten Docker containers to deploy this and to develop on a local machine
- Arguments about whether to reuse a function in two microservices by copy-pasting it or by exposing it over the network with yet more code
We did not finish the project in time; it took about two years (!!!), we were late, and competitors shipped a working alternative.
It's amazing that extremely complex triple-A video games fit on a consumer grade laptop, yet this engineering clown town had to use ten machines.
How the microservices abomination is defended
The proponents of microservices just love worthless strawman arguments, like:
Oh, monoliths with huge memory/CPU usage in one machine would just be so much better, wouldn't they?
This argument can only come from an ignorant fool who has spent his entire life developing with Java and the Spring framework. I've heard the lunchtime horror stories: someone makes a change to Java insurance software, it starts crashing because it needs more memory, it is given twice the RAM, and all of a sudden it is okay. And it did not even serve many clients, maybe 100 at once.
Such things can be introduced by:
- Overheads of a framework
- Abominable engineering
Both of which are introduced by low quality people who neither understand nor care how the low level, or even the framework they use, works. The average Java developer knows very little about the low level: how objects are laid out in memory, what the performance penalties are, and so on.
So people with this experience of high memory usage Java monoliths put up a strawman, that a monolith uses more memory and CPU, and then knock it down by saying "it could be better split into microservices". Yet they don't consider the monoliths they use every day and are happy with: Firefox, Chrome, the monolithic Linux kernel, their favourite text editor, their mail client. If you strongly believe in microservices and their lower memory and CPU usage, I suggest you send a letter to some triple-A game studio proposing that they compute their vectors through a REST framework running locally inside the laptop. We'll see how that goes.
There is plenty of computing power to run Battlefield 3 on a single machine, is it not?
Try writing a monolith in OCaml and behold memory/CPU usage an order of magnitude lower than a JVM app's.
Yeah, just run our entire app on a single pod in Kubernetes (sarcastic and insecure laugh)
First of all, if you had only one app to deploy, you wouldn't need Kubernetes at all. And even if you run your app beside other apps in Kubernetes, deploying one container is still a huge advantage: it gets rid of worries about versioning and potential breakage between deployments. And since a monolithic app can have much lower memory and CPU requirements than microservices proponents want you to believe, you conserve resources this way too.
Errors are more easily isolated to a single point
I don't know, are they? The third microservice in a chain might receive bad data it was never meant to get and throw a null pointer exception, when the actual problem happened much earlier. Second, problems are easier to reproduce on a single machine with the same data, and you can even attach a debugger and see what's up. Not so with microservices. Third, the full stacktrace is shown (at least in Java), so you know every stack frame that led up to this point. Is that better with microservices? Probably not. The only argument that actually holds water is that when one part fails you don't need to redeploy the entire app, only the failing part. But that's it.
I want to ask, though: is it really worth throwing out all the strong type safety guarantees provided by a high level, strongly typed language like OCaml just to get this one benefit? I'll talk about that at the end.
With microservices we can scale just a single part of the app!
This is absolute insanity. Of course, these are the same people who think all monoliths use a lot of memory. But I want to give a simple example:
- Function A and Function B are both in a monolith
- Function A and Function B are in separate microservices
If your app is stateless, how much more memory and CPU should an idle monolith consume, one that serves both the route for Function A and the route for Function B, versus two apps, App A serving only Function A's route and App B serving only Function B's route?
The answer is: unless your framework is written by a victim of severe brain damage, no extra memory and CPU usage at all. When do memory and CPU usage rise in an app? When it gets requests.
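To make this concrete, here's a minimal sketch in OCaml. The toy "router" is hypothetical, just an association list of paths to handlers standing in for a real framework; an idle route costs one closure and one list entry, nothing more:

```ocaml
(* Handlers for the two functions; nothing runs until a request arrives. *)
let handle_a (_req : string) = "response from function A"
let handle_b (_req : string) = "response from function B"

(* Monolith: both routes in one app. *)
let monolith = [ ("/function-a", handle_a); ("/function-b", handle_b) ]

(* "Microservices": each app carries one route. *)
let app_a = [ ("/function-a", handle_a) ]
let app_b = [ ("/function-b", handle_b) ]

(* Dispatch only spends CPU when a request actually comes in. *)
let dispatch routes path req =
  match List.assoc_opt path routes with
  | Some handler -> handler req
  | None -> "404"

let () =
  (* Identical behaviour; the monolith's extra idle cost is one more
     entry in a list of closures. *)
  assert (dispatch monolith "/function-a" "" = dispatch app_a "/function-a" "");
  assert (dispatch monolith "/function-b" "" = dispatch app_b "/function-b" "");
  assert (dispatch app_a "/function-b" "" = "404")
```
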
So the question is: is there any extra cost to scaling a monolith 100x behind a load balancer, with all the routes and functionality in it, versus scaling a single small instance 100x?
And the answer is that the monolith is no worse in this position. Microservices are actually worse here, because now you have to manually tune the instance count for every single part of the infrastructure. With a monolith you just load-balance across every instance; the load distributes evenly (assuming the same random sample of traffic goes to each instance), you get uniform CPU and memory usage across all instances, and whenever you need more you just add more, at no cost whatsoever.
I wonder what people were smoking when they believed that microservices have anything to do with scalability and using hardware to its fullest potential.
There's a tight coupling in the monolith!
There you go, another worthless term, "tight coupling", which means nothing but is used by people who don't know what they're talking about to sound smart and influential.
Let's consider the real world. Let's consider, say, the design of a car. Are the parts in a car loosely coupled or tightly coupled? Only certain engines go into certain cars; only certain wheels fit certain cars. Some parts are even more coupled: doors and other body panels usually fit only one model. A specific oil has to go into the engine. Just about everything in the engine is a specific part; there are bazillions of parts with the exact same function, yet each is specific to one car's engine. So, are car parts specific and tightly coupled? Yes they are. What is wrong with such a design choice? Nothing. Consider creation and eyes: so many creatures have eyes, and they differ between kinds of animals, yet they all perform the same function. Same with ears, noses, legs, fingers, hearts, livers and so on. So car parts are tightly coupled and animal parts are tightly coupled; what's wrong with tight coupling in software? All OCaml codebases are tightly coupled, because the strong type system doesn't allow the coupling to break, and when it does break you have to fix it, or you cannot ship broken code.
You know what I could do? I could write a function in OCaml that accepts a string of JSON, parses it, and uses the result as the function's arguments. That would be much looser, more generic coupling, wouldn't it? I still retain my sanity, so, no thanks.
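Here's that contrast sketched in OCaml. The names and the ad-hoc argument format are made up for illustration (a real "loose" version would use a JSON library), but the point stands: the loose version trades compile-time certainty for runtime failure modes:

```ocaml
(* Tightly coupled: the compiler guarantees the caller passes an int
   width and an int height. A wrong call does not compile. *)
let area ~width ~height = width * height

(* "Loosely coupled": the same function taking a string of arguments.
   Hand-rolled parsing, hypothetical format "w,h"; every call can now
   fail at runtime instead. *)
let area_loose (args : string) : (int, string) result =
  match String.split_on_char ',' args with
  | [ w; h ] ->
      (match int_of_string_opt (String.trim w),
             int_of_string_opt (String.trim h) with
       | Some w, Some h -> Ok (w * h)
       | _ -> Error "arguments are not integers")
  | _ -> Error "expected exactly two arguments"

let () =
  assert (area ~width:3 ~height:4 = 12);
  assert (area_loose "3, 4" = Ok 12);
  assert (area_loose "3" = Error "expected exactly two arguments")
  (* area ~width:"3" ~height:4   <- would not even compile *)
```
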
The obvious disadvantages of microservices
You were calling a function in a single process, which you can do tens of millions of times per second in a single instance. Now you're supposed to call the same function over HTTP, which tops out at maybe 100k per second on pristine network and hardware. Nuff said.
If you call a function in OCaml and it compiles, you're 100% sure your function will be called; no new error mode can be introduced. This doesn't need to be tested, it comes right out of the box with a strongly typed language. If you call another microservice, you have no idea what will happen, and it can break at any moment with the next deployment. Unless you have a lot of tests, which brings me to the next point...
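A small OCaml sketch of the difference. The `remote_result` variant is hypothetical; it just enumerates the failure modes a network hop forces the caller to handle, failure modes which an in-process call simply does not have:

```ocaml
(* In-process: if this compiles, the call happens. Full stop. *)
let discount (price : int) : int = price * 9 / 10

(* Over the network, the very same "call" acquires failure modes the
   type system cannot rule out. A hypothetical sketch of what the
   caller must now handle: *)
type 'a remote_result =
  | Ok of 'a
  | Timeout                  (* was the request even processed? *)
  | Connection_refused       (* service down mid-deploy *)
  | Bad_response of string   (* other side deployed a new version *)

let handle (r : int remote_result) : string =
  match r with
  | Ok v -> Printf.sprintf "discounted price: %d" v
  | Timeout -> "retry? give up? was the write applied?"
  | Connection_refused -> "service unavailable"
  | Bad_response body -> "schema drifted: " ^ body

let () =
  assert (discount 100 = 90);
  assert (handle (Ok (discount 100)) = "discounted price: 90")
```
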
Need to write and maintain integration test infrastructure
A bunch of code that never needed to be written in the first place with a monolith now becomes a necessity with microservices. Isn't that a blast: a pile of extra code just to check that you call functions correctly, when the compiler could be doing that instead?
Greater chance of unexpected distributed error modes
Say one microservice talks to the database and writes its thing, then calls another microservice, which writes its thing. How do you ensure both writes succeed or fail together, without fancy manual hand tricks? Much easier in a monolith: BEGIN; call all the functions you want; COMMIT;. And there are many more ways every single network call can go wrong (you're now at the mercy of the network). Why think about any of this when, with a monolith, you don't have to think about it at all?
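A minimal sketch of the monolith's answer, in OCaml. `exec` is a made-up stand-in for a real database driver call (in practice you'd use a library such as caqti); here it just journals the SQL so the BEGIN/COMMIT bracketing is visible:

```ocaml
(* Journal of executed statements, standing in for a real database. *)
let journal : string list ref = ref []
let exec (sql : string) : unit = journal := sql :: !journal

(* Wrap any amount of work in a single transaction: all or nothing. *)
let in_transaction (work : unit -> unit) : unit =
  exec "BEGIN";
  (try work () with e -> exec "ROLLBACK"; raise e);
  exec "COMMIT"

let () =
  in_transaction (fun () ->
    exec "UPDATE accounts SET balance = balance - 10 WHERE id = 1";
    exec "INSERT INTO payments (account_id, amount) VALUES (1, 10)");
  (* Both writes are bracketed by BEGIN/COMMIT: either both are
     applied or neither is. No cross-service coordination needed. *)
  assert (List.rev !journal =
    [ "BEGIN";
      "UPDATE accounts SET balance = balance - 10 WHERE id = 1";
      "INSERT INTO payments (account_id, amount) VALUES (1, 10)";
      "COMMIT" ])
```
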
Deploying one executable is easier than deploying 10 and being sure they work again. Nuff said.
Set up a local database with Docker, run a single executable, and you have a debugger at your fingertips too. Nuff said.
Are these disadvantages worth incurring? For most businesses, startups and new projects, no, but I'll speak about this at the end.
The root of this evil
The root of this evil is that software developers suck. Especially the Java and OOP crowd. The microservices supporters I knew personally, I do not take seriously. But in my experience, these people had the following in common:
- Java background
- Spring/dependency injection background
- Worked on many projects that turned into spaghetti, unmaintainable code in the blink of an eye
- Hold a grudge against monoliths because, thanks to poor engineering practices, they cannot scale codebases in size
- See microservices as salvation because now it's harder to make spaghetti (so they no longer need to learn how to scale code)
Scaling codebases is hard, but many projects have done it successfully:
- Elasticsearch
This is just a sample of the extremely complex software that people use every day. People still work on these projects decades on, refactoring and adding features daily. Not so with the tiny Java shops, which write overengineered code that becomes obsolete in a few years and needs to be rewritten. But how to scale codebases at large is a separate topic, which I may touch on in the future.
And one note about Elasticsearch. While working in that same tiny Java shop, I heard a manager ask a microservices proponent why Elasticsearch is a monolith and not a bunch of microservices. He answered "well, Elasticsearch has to cache a lot of stuff in memory, so...". I burst into anger in my mind and sarcastically replied from the side, "Oh, it would be so much better if Elasticsearch were distributed among many tiny microservices!". He had no answer. What an insane fool: he thought Elasticsearch would be better off as microservices if not for the in-memory, in-process speed!
When should you actually use microservices? Never at the start. Twitter started as a monolith. GitHub is a monolith to this day. The Discourse forum platform is a monolith to this day. It's much easier to develop a monolith, and if you do it in a strongly typed language like OCaml you can deliver rock solid software. You get so much speed and productivity from a monolith and a database that you will move much faster than teams who start with microservices, and you will beat them. Then, once you're an established and successful monolith with many clients, you can start breaking parts out; by leaving this for later you gain the great advantage of knowing what to move out and when.
Hope you learned a thing or two, Abner.