When you’re working on a monolithic system and automate tests for it — which you’re supposed to — the time it takes to run the whole suite grows in proportion to the size of the system. In a sufficiently large system, a full run can take many hours or even a few days to complete.
You don’t start building a house without an architectural plan, yet when it comes to software development, most projects don’t have any well-defined architecture.
Many people attribute the difference in approaches to the fact that it’s hard to change a house that’s half built, while software is malleable and hence can supposedly be changed easily at any moment.
But that’s not true. The further along a software project is, the harder and more expensive it is to change its architecture. It’s still doable, but it costs far more than it would have if some time had been spent on the architecture up front.
There are multiple reasons companies hire cheap programmers. Some of those reasons are valid, but one is not only invalid but also dangerous: hiring programmers just because they’re cheap.
During my career, I have personally inherited and taken over a few codebases written by cheap, mediocre programmers. The codebases were so bad they hurt the productivity of their new teams for years.
I covered unit tests in the previous post in the series. If you haven’t read it, you should start there. Let’s cover component tests now.
There’s a lot of confusion when it comes to test automation. One reason for the confusion is that most people don’t realize there are different types of tests, and that each type has its own approach and its own advantages and disadvantages.
From time to time, I meet people who claim they do microservices. Since I’m interested in hearing real-world experiences, I dig deeper by asking technical questions. Pretty often I learn that even though they have multiple microservices, they all share the same database.
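A minimal sketch of why a shared database defeats the purpose — all service and table names here are hypothetical. Two “services” query the same table directly, so if one of them changes the schema, the other breaks at runtime even though no API contract changed:

```python
import sqlite3

# Hypothetical shared database used by two "independent" microservices.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE users (id INTEGER, name TEXT)")
shared.execute("INSERT INTO users VALUES (1, 'Ada')")

# Service A reads the users table directly.
def service_a_user_name(db, user_id):
    row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0]

# Service B also reads the users table directly. If service A renames
# or drops the "name" column, service B fails at runtime; the two
# services are coupled through the schema, not through an API.
def service_b_greeting(db, user_id):
    row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return f"Hello, {row[0]}"
```

With a database per service, that coupling would have to go through each service’s public API instead, where it can be versioned and tested.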
I see too much code that solves problems and mostly works but is written in a way that makes it unnecessarily hard for other programmers to understand.
One would think it’s a matter of experience, and hence that programmers with 5+ years of experience would naturally get better at writing easy-to-understand code, but based on what I run into, that’s far from the truth. I’ve seen many developers with over a decade of experience still failing at this.
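A contrived before/after sketch of what I mean — both functions do the same thing, but only one of them tells the reader what that thing is:

```python
# Hard to follow: cryptic names, a magic number, and no hint of intent.
def f(d):
    r = []
    for k, v in d.items():
        if v[0] > 18 and v[1]:
            r.append(k)
    return r

# Easier to follow: the names carry the meaning, the constant is explained.
ADULT_AGE = 18

def active_adult_names(users):
    """users maps a name to an (age, is_active) pair."""
    return [name for name, (age, is_active) in users.items()
            if age > ADULT_AGE and is_active]
```

Same behavior, same line count in any meaningful sense — the only difference is how much effort the next programmer has to spend.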
I see many programmers comparing languages based on their syntax alone. Something along the lines of “Java sucks because it’s much more verbose than my favorite language X and it also requires a semicolon at the end of each statement which is so 20 years ago and is a total no-go”. Okay. But who the fuck cares?
In a monolithic system, it’s too easy to integrate with code from another part of the system. It simplifies initial development but leads to a big ball of mud that’s very hard to maintain.
When you have a monolithic system of any significant size, it’s likely that different parts of it need different types of resources to function optimally. Some parts are CPU intensive, while others are RAM intensive. There may well be at least one part that would greatly benefit from using GPUs for computationally intensive processing.