Craig Risi

Maintaining Quality


So, as a follow-up to my previous article on Designing for Quality, I think it is important to talk about maintaining quality. Yes, we can design software for high quality, which by definition means it should be easy to maintain too. But with software constantly evolving as we enhance it or reshape it for purpose, good designs and intentions can easily go to waste if we don't adopt the right processes to maintain those quality systems correctly.


Choose quality over delivery

Let quality, not delivery, drive your output; this is a culture you want to adopt early on as you scale and build your teams. This doesn't mean that as a company you will become slow, bureaucratic and less agile, which is often seen as a problem in a rapid-moving development world. In the long term it will actually allow you to focus on proper feature development as opposed to fixing issues and wasteful development practices.


It means, though, that in any decision-making around software design, software releases, tools, infrastructure or even the structure of an organisation, we need to consider the impact on quality and let this guide our decisions. Quality may not be the only factor in your decision-making, but if it isn't at least a very important one, then you can't realistically expect to build and maintain software that delivers quality.


Keep on top of your technical debt

The truth is that as your software evolves to better meet changing needs, there will be lots of refactoring and maintenance, and this often gets side-tracked for the sake of building new functionality. The problem is that this adds risk to the stability and security of your software, with poor-quality code potentially putting the rest of your system at risk, even if your system is relatively modular in design. It's important that you prioritise keeping your repos up to date and that you put aside time every sprint for refactoring and maintaining your code, along with the relevant test automation changes that go along with it. Yes, your delivery will slow down initially, but the reduction in future maintenance and support required to keep your software operational pays off long term, and ultimately your teams are given the time to maintain the level of quality you have planned for.


Have testing pipelines that match Prod

A big shortcut that many companies take is having test environments that are inferior to what is in production. I can understand some of this, considering the expense of reproducing a production environment, whether in the cloud or on-premises. However, doing so creates variables between your test environment and the final production state that cannot easily be tested around (depending on complexity) and may lead to many configuration or environment issues affecting your overall quality.
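One lightweight way to catch those variables early is to diff the configuration of your test environment against production and flag anything that differs. The sketch below is a minimal illustration of that idea; the config keys and values are invented for the example, and in practice these dictionaries would be loaded from whatever config store you actually use.

```python
# Hypothetical sketch: flag configuration drift between a test environment
# and production so differences are fixed before they surprise you at release.

def config_drift(test_cfg: dict, prod_cfg: dict) -> dict:
    """Return keys whose values differ between environments,
    or that exist in only one of them."""
    drift = {}
    for key in test_cfg.keys() | prod_cfg.keys():
        test_val = test_cfg.get(key, "<missing>")
        prod_val = prod_cfg.get(key, "<missing>")
        if test_val != prod_val:
            drift[key] = {"test": test_val, "prod": prod_val}
    return drift
```

Run as part of a pipeline, a non-empty result can fail the build or simply be reported, so the team decides consciously which differences are acceptable.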


Now, I'm not saying that every pipeline should be like this. The majority of your pipelines should easily be able to run in small containers that can be scaled up when jobs run and back down when they finish, with most of your test coverage achieved at this level. However, there is a need for some lightweight end-to-end sanity and deployment testing in an environment that resembles your production setup as closely as possible, to ensure that your code will deploy and work as expected there.


Furthermore, it's not just about config: from an underlying performance perspective, you also want the servers this environment runs on to be as production-like as possible (even in the cloud), so that you can run performance and load tests against it.


Again, while you can execute performance tests throughout the pipeline, the best picture of system performance comes when everything is the same, and it will help you better understand how your software will perform in production. I understand this may not be possible for every system or application, but you want to get as close to this as you can, or at least have it in place for your critical systems. These environments can still be scaled up and down as needed but should replicate your production environment as closely as possible to give the best results.
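When you do run those load tests, the useful output is usually a percentile summary you can compare between the prod-like environment and production itself. As a minimal sketch (the sample latencies are made up for illustration), a nearest-rank percentile over recorded response times is enough to start with:

```python
# Illustrative sketch: summarise load-test latencies into percentiles that
# can be compared between a prod-like test environment and production.

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples (e.g. milliseconds)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]
```

Comparing, say, the p95 latency from both environments on each run gives an early signal when the test environment has drifted too far from production to be trusted.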


Responsiveness to alerts

Much like in the previous article, where I spoke about the importance of having monitoring in place before you take your functionality live, it's what you do with your monitoring that makes the difference. The biggest problem when we put monitoring in place for our production systems is that we often place alerting or emphasis on the wrong things, responding only to system outages rather than being preventative and looking at things like performance degradation, unusual usage patterns or rising error rates, which allow teams to respond to issues before they become problematic.
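A simple way to picture that preventative style of alerting is a sliding-window error-rate check: rather than waiting for the service to go down, alert when the recent error rate creeps past a threshold. This is a minimal sketch, not a replacement for a real monitoring stack; the window size and threshold are assumptions you would tune for your own traffic.

```python
from collections import deque

class ErrorRateMonitor:
    """Alert when the error rate over the last `window` requests
    exceeds `threshold`, so teams can react to degradation
    before it becomes an outage."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.window = window
        self.threshold = threshold
        self.results = deque(maxlen=window)  # True = request errored

    def record(self, is_error: bool) -> None:
        self.results.append(is_error)

    @property
    def error_rate(self) -> float:
        if not self.results:
            return 0.0
        return sum(self.results) / len(self.results)

    def should_alert(self) -> bool:
        # Only alert once the window holds enough samples to be meaningful.
        return len(self.results) == self.window and self.error_rate > self.threshold
```

The same pattern extends to latency or unusual usage counts: track a rolling view of a signal and page on the trend, not just the outage.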


Yes, you plan to design systems that are robust and scalable, but things are always unpredictable, so it's important that we not just respond to the unexpected but preferably try to mitigate it too.


I do understand that some of the above measures may sound like they will add to the time spent on support and possibly take time away from forward-looking development. But if we're doing this right and working through proper root cause analysis on issues to prevent them from recurring, it should lead to less effort spent supporting software. You just need to invest the initial support effort to make those gains in the future.


Create platforms for user feedback

Just because systems are working as we expect doesn't mean they meet the needs of our customers, and one of the key components of quality is customer satisfaction. As a company, you need to create platforms through which you can better understand your clients' needs and gather that information to keep modifying your systems to meet them. This can range from soliciting direct feedback on the design to simply identifying regular usage patterns, so you can extrapolate and better understand how customers use your systems and, preferably, why.


Making quality visible

I don't believe in relying too much on metrics, especially when evaluating performance. But I do feel that metrics tell part of the story, even if not the whole thing, and you need to treat them with importance. Many companies use varying degrees of metrics that focus on performance, agility and customer usage stats, but not many focus enough on metrics that measure the effectiveness of a team's quality. It's important that you make these metrics visible, not to create a measurement system that can be gamed, but to truly understand how you are doing in the different areas of quality and identify where you can improve. Making quality visible helps people take it seriously and want to do better, and we shouldn't hide these sorts of metrics from our teams, but openly talk about them instead.
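One concrete quality metric worth surfacing on a team dashboard is the defect escape rate: of all the defects found, what share only surfaced in production? The sketch below is a minimal, assumed formulation of that metric (teams define it in different ways); the point is that it is trivial to compute and honest to display.

```python
# Hypothetical sketch of one quality metric: the defect escape rate,
# i.e. the share of all known defects that were only found in production.
# Lower is better; a rising trend suggests pre-release testing is missing things.

def defect_escape_rate(found_in_prod: int, found_before_release: int) -> float:
    """Fraction of total defects that escaped to production."""
    total = found_in_prod + found_before_release
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_in_prod / total
```

Plotted per release alongside the usual delivery stats, a metric like this makes the quality conversation concrete instead of anecdotal.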


I understand that every company is in a different space and that some of these things might not be easily feasible for you, especially if you are walking a tightrope trying to deliver new functionality while tight on budget and personnel. Which is why, as with everything, it's best to find the right balance for your team without compromising your quality too much. Fortunately, if you've done the important part of building your software with quality in mind, it may buy you some time to work on these areas as you scale and refine your product further.
