The Test Automation Best Practices This Engineering Manager Swears By
Engineering teams can do so much more with test automation.
The practice means teams don’t have to spend as many hours on manual testing (saving time), and that there are fewer bugs and more certainty (saving money).
But the upfront costs can deter some teams from adopting test automation, and there’s ambiguity around the “right” way to automate a test environment. What tests should I automate and what should I test manually? What tools are best for my workflow?
At DASH Systems, they’re building software and hardware that enables precision airdrop deliveries, which is to say there’s little room for mistakes. For Filip Dziwulski, the director of software engineering at the company, automating every test is a must.
“We automate testing from low-level unit tests all the way up to high-level simulation of air-drop scenarios,” Dziwulski said. “It’s important to test throughout our tech stack. Software is inherently modular and deals with levels of abstraction.”
As such, Dziwulski and co. have to ensure their automation best practices result in real-world success. Below, the director of software engineering explained DASH’s best practices, and which tests and tools they use to ensure automation goes off without a hitch.
Briefly describe your top three test automation best practices.
At DASH, our top three best practices in test automation are:
Developing test automation that runs seamlessly as part of our development process. Safety and reliability are core to our products, and so testing is core to development, both in terms of testing new functionality and making sure old features didn’t break.
Thinking about testing early. Design and development of new features includes test design and development from the beginning. When we consider adding a new feature to our products, how we will prove that it works is central to deciding whether to build it.
Striving for modularity in testing. We mirror the modularity of our products in our test development to ensure that what we learn in the field can easily be refined into validated product improvements. It keeps us moving fast and minimizes the turnaround time between new learnings and product improvements.
What kind of tests does your team automate, and why?
We automate testing from low-level unit tests all the way up to high-level simulation of air-drop scenarios. It’s important to test throughout our tech stack. Software is inherently modular and deals with levels of abstraction. Modules are often reused.
If, for example, I use a low-level module in one feature and only test the feature at a high level and it works — but I never validated the low-level module — I may see unintended behavior or failures in the system if I use that module in a different feature that exercises it in a slightly different way. That will be expensive and time-consuming to track down, and in the worst case could create potentially unsafe scenarios. It’s important to think about each piece individually as well as how all of the pieces come together.
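Dziwulski’s point about validating low-level modules directly, not just through the features that use them, can be sketched with a toy Python example. The `clamp` module and `descent_rate_limit` feature below are hypothetical illustrations, not DASH code:

```python
# Hypothetical low-level module reused by multiple features. If it were only
# tested indirectly through one feature, a second feature calling it with
# different arguments could hit untested behavior.

def clamp(value, low, high):
    """Low-level module: constrain a value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def descent_rate_limit(rate):
    """Feature-level code: limit a descent rate to an assumed safe band."""
    return clamp(rate, 0.0, 8.0)

# Direct unit tests of the low-level module, independent of any feature:
assert clamp(5, 0, 10) == 5
assert clamp(-1, 0, 10) == 0    # below-range input is pinned to the floor
assert clamp(99, 0, 10) == 10   # above-range input is pinned to the ceiling

# A separate feature-level test:
assert descent_rate_limit(12.0) == 8.0
```

Testing `clamp` on its own pins down the edge cases (out-of-range and inverted bounds) once, so any new feature that reuses it inherits that confidence instead of rediscovering failures in the field.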
What are your team’s favorite test automation tools, and how do they match up with your existing tech stack?
We use a mix of different technologies. We like GitHub Actions and Jenkins for CI/CD. We develop in C++ and Python and have a mix of tests we run across our systems. For our hardware-in-the-loop (HIL) testing, we use NI test hardware, which has made component and simulation testing at an early stage feasible. For software-in-the-loop (SIL) testing, we use various ROS tools and Python scripting.
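To give a flavor of the kind of Python scripting a SIL-style scenario check can involve, here is a minimal sketch. The drag-free kinematics, the 150-meter altitude, and the tolerance band are assumptions made up for the example, not DASH’s actual simulation:

```python
import math

# Toy SIL-style check: simulate a drag-free free fall and assert that the
# predicted fall time lands inside an expected band. Real airdrop simulation
# would model drag, wind, and guidance; this is illustrative only.

def fall_time(altitude_m, g=9.81):
    """Time (seconds) for a drag-free fall from altitude_m meters."""
    return math.sqrt(2.0 * altitude_m / g)

def test_drop_from_150m():
    t = fall_time(150.0)
    # sqrt(2 * 150 / 9.81) is roughly 5.5 s; allow a coarse tolerance band.
    assert 5.0 < t < 6.0

test_drop_from_150m()
```

A check like this can run in CI on every commit, which is how scenario-level tests stay part of the development loop rather than a separate manual step.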