Once you’re measuring the right thing, it will be a lot easier to tell which improvements are worthwhile. Nonetheless, it’s worth spending some time thinking about what your goals are, and often whether the effort is worth putting in at all.
Before I talk about coding patterns or designs that might help with this efficiency, I want to look at some common challenges, so that you can consider every aspect of whether you SHOULD optimise the code or not. Often, given the challenges below, tweaking your code further is simply not worth the effort, so understanding the different elements to look at should help you and your team make the right choice.
Understanding the level of gain
You need to know not only how fast (or memory-intensive, etc.) the system is, but also how much marginal gain you’ll get from improvements. Do you save your company money? Do you save your users time? If it’s a script that runs once a week that nobody is dependent on, even savings of an entire minute (basically forever in computer time) might not be worth adding complexity. But if it’s a function run a million times per second across a fleet of thousands of servers, savings of microseconds could save a lot of money.
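The million-calls-a-second scenario above is easy to turn into a back-of-the-envelope calculation. The sketch below does exactly that; every number in it (call rate, fleet size, saving per call, cost per core-hour) is a made-up assumption for illustration, not a figure from this article.

```python
# Back-of-the-envelope estimate of what a micro-optimisation is worth.
# All constants below are hypothetical assumptions for illustration only.

CALLS_PER_SECOND = 1_000_000   # function invocations per server, per second
SERVERS = 2_000                # fleet size
SAVING_PER_CALL_US = 5         # microseconds shaved off each call
COST_PER_CORE_HOUR = 0.04      # assumed cloud cost in dollars

# CPU-seconds saved per server, per hour of operation
cpu_seconds_saved = CALLS_PER_SECOND * 3600 * SAVING_PER_CALL_US / 1_000_000

# Convert to core-hours, scale across the fleet, and price it
core_hours_saved = cpu_seconds_saved / 3600
fleet_hourly_saving = core_hours_saved * SERVERS * COST_PER_CORE_HOUR
annual_saving = fleet_hourly_saving * 24 * 365

print(f"Estimated annual saving: ${annual_saving:,.0f}")
```

Under these made-up numbers a 5-microsecond improvement is worth millions a year; run the same arithmetic for a weekly script and the saving rounds to nothing, which is the whole point of the section above.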
If you understand what your performance goals are before beginning your work, you can make the right call on performance/complexity trade-offs later on. If you’re being honest with yourself, you’ll often see that you should scrap marginal gains and focus on major wins.
Finding the balance
Fast code is not necessarily the most maintainable or easiest-to-read code. While it is tempting to always opt for the most optimal route, in a bigger organisation you need to fit into the bigger picture and find the balance between speed and simplicity. Much of this decision rests on how important speed is to the piece of code you are working on, and on how often you expect the code to be maintained.
Keeping the rest of the system in mind
Often your code doesn’t sit in isolation and needs to interact with other parts of the system. With back-end API code, for instance, it can be easy to tweak things for performance, but if there is integration with a UI element or a database, you aren’t just testing your code in isolation, but in conjunction with how those parts have been designed. It sounds obvious, but code optimised for the user experience is not necessarily the fastest solution for throughput, and the overall aim and objective of what you’re trying to achieve needs to be considered.
Third-Party Integration
For many developers and companies, systems don't sit in isolation and need to interact with a third-party solution whose code you can't optimise. In situations like this, you need to factor in the performance of the third-party system and what it requires to run optimally, rather than your own solution or preferred methods. This becomes particularly tricky with data or memory management, where the third party may not fit the way you would like things to work, and you will need to change your side to ensure the whole can operate optimally in future. The ideal scenario is to design all of your own solutions, but in big connected systems that is just not possible, so you have to make do with some level of third-party interaction.
Dealing with the data
More often than not, I've found the biggest performance issues come from the way we handle data. It's not that our code or even our SQL isn't optimised, but simply that processing the needed data on the current infrastructure takes too long. There are many other solutions to look at, like changing your database, archiving your data or reducing the data a database holds, optimising the DB structure, or introducing parallelisation, where you distribute your data and run queries across multiple servers, thereby increasing throughput. Ultimately there are a lot of ways to optimise databases, which I will cover in my next article along with optimising code in general.
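To make the parallelisation idea concrete, here is a minimal sketch of fanning one aggregate query out across several "shards" and combining the partial results. The shards are stand-in in-memory SQLite databases and the `orders` table is hypothetical; a real sharded setup needs a proper routing and aggregation layer, but the scatter-gather shape is the same.

```python
# Sketch: run the same aggregate on every shard in parallel, then combine.
# Shard contents and the "orders" schema are made up for illustration.
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def make_shard(rows):
    """Build an in-memory SQLite database standing in for one shard."""
    conn = sqlite3.connect(":memory:", check_same_thread=False)
    conn.execute("CREATE TABLE orders (amount INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(r,) for r in rows])
    return conn

shards = [make_shard([10, 20]), make_shard([30]), make_shard([40, 50])]

def shard_total(conn):
    # Each shard computes its partial aggregate independently...
    return conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

with ThreadPoolExecutor(max_workers=len(shards)) as pool:
    partials = list(pool.map(shard_total, shards))

total = sum(partials)  # ...and the coordinator combines the partial results
print(total)
```

Because each shard scans only its own slice of the data, the slowest shard, not the sum of all shards, bounds the query time, which is where the throughput gain comes from.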
Understanding how your compiler works
In learning how to optimise code, you also need to understand the language and its compiler. Sometimes your performance constraints are a result of the compiler, and you need to write code in a way that allows the compiler to optimise most efficiently. The way Java works is different from Python or C++, so understanding the small nuances of the underlying compiler helps in identifying ways your code can run quicker. Some compilers handle these optimisations for you; some leave them entirely up to the developer. Be aware of these differences, because they determine how much of the optimisation you can control, and at times even whether you are using the right programming language for the efficiency you require. If you want absolute performance, go as low as possible and program in machine or assembly code, though this would be considerable overkill for most products.
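One practical consequence of these runtime differences is that you should measure rather than assume. The sketch below times two equivalent ways of building a string in Python; which one wins, and by how much, depends on the interpreter (CPython special-cases some string operations, and other implementations behave differently), which is exactly why the same habit would give different answers in Java or C++.

```python
# Sketch: measure two equivalent implementations instead of assuming which
# one the runtime favours. Sizes and repeat counts are arbitrary choices.
import timeit

def concat_loop(n):
    s = ""
    for _ in range(n):
        s += "x"          # repeated concatenation; cost depends on the runtime
    return s

def concat_join(n):
    return "".join("x" for _ in range(n))  # bulk build in one pass

loop_time = timeit.timeit(lambda: concat_loop(10_000), number=100)
join_time = timeit.timeit(lambda: concat_join(10_000), number=100)
print(f"loop: {loop_time:.4f}s  join: {join_time:.4f}s")
```

The point is not which variant is faster on your machine today, but that the answer is a property of the compiler or interpreter, so it has to be re-measured when the language, runtime version, or platform changes.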
Throwing better hardware at the problem
While it is very tempting as a way to resolve issues quickly, throwing more hardware at a problem is not a permanent solution for inefficient code. Yes, it might alleviate current constraints or mask them for a short while, but poorly performing code will likely continue to affect you as you scale or tax your system more, only exacerbating the problem later. So if you need to quickly resolve a production issue, sure, throw more hardware at it for now, but get the code optimised before you forget about it.
Once you’ve gone through all of these challenges and come to the conclusion that your code or solution still needs to be optimised, then my next article will be just for you, as we look at some tips to contemplate in optimising your software for improved performance.