Test automation is an essential part of modern software development. The industry has moved steadily toward Continuous Integration and Continuous Delivery (CI/CD) to produce, test, and deploy updates that meet customers' needs quickly.
For those building complex devices like broadband gateways, Wi-Fi routers, or other networking products, the pressure to move toward this type of development presents unique challenges. Network operating systems for these devices require significant regression and performance testing to guarantee quality. For these products, test automation spans the entire development cycle, requiring both software and hardware validation. Moreover, many different people and teams, often from multiple organizations, are involved in producing these products from start to finish. Getting results to the development team quickly and efficiently lets them address issues before moving to the next phase and avoids losing valuable time.
These circumstances make it wise to develop an overall test automation strategy for your product development lifecycle. Having a solid test automation strategy helps you and your partners build better products, in less time, with fewer headaches.
Test automation uses software to control the execution of tests, the processing of results, and the feedback into development and project management tools. A successful test automation strategy centers around finding the right tools and building standard practices for each of these key areas. Considerations include test frequency, test duration, test coverage, and how results are reported, tracked, and escalated.
These considerations require different practices that are dependent on where you are in the product development process. Here we’ll explore these considerations and give some examples of practices that can shape your overall test automation strategy.
Product development teams build the underlying network stack and the applications and features that use network connectivity. They are most closely aligned with software development processes and the tools that enable the CI/CD process.
Product development teams should test frequently, creating pipelines that run automated unit tests on a per-commit basis and/or run a larger set of tests via nightly branch testing. Since they are looking primarily for bugs, tests that perform protocol validation and give immediate pass/fail results, collected in packages that run in shorter times, will give developers the feedback they need to fix bugs and create new builds as quickly as possible. Running tests in an order that allows for parallel build testing increases this efficiency.
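As an illustration, a fast per-commit unit test for protocol behavior can be a simple pass/fail check that runs in milliseconds. The sketch below uses a toy DHCP option parser as a hypothetical stand-in for real product code; the function and its test are assumptions for illustration, written in the pytest style:

```python
def parse_dhcp_options(buf: bytes) -> dict[int, bytes]:
    """Parse DHCP option TLVs (RFC 2132 style) into {code: value}."""
    opts = {}
    i = 0
    while i < len(buf):
        code = buf[i]
        if code == 255:          # End option terminates the list
            break
        if code == 0:            # Pad option has no length byte
            i += 1
            continue
        length = buf[i + 1]
        opts[code] = buf[i + 2:i + 2 + length]
        i += 2 + length
    return opts


def test_message_type_option():
    # Option 53 (DHCP message type) = 5 (ACK), followed by End
    assert parse_dhcp_options(b"\x35\x01\x05\xff") == {53: b"\x05"}
```

Hundreds of checks at this granularity can run on every commit, giving developers an immediate, unambiguous verdict.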
Automation tools for product development should report feedback into the CI/CD system in a way that aligns tests and results with specific issues to track progress and automate the developers’ workflow around issue tickets. Developers should also consider how they are notified about test results. Are they getting automated notifications via email or Slack? Or do they rely on a dashboard or other “pull” mechanism for results?
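For example, a "push" notification step might summarize results and post them to a Slack incoming webhook. This is a minimal sketch: the result format and webhook URL are assumptions, though Slack incoming webhooks do accept a JSON body with a "text" field.

```python
import json
import urllib.request


def build_summary(results: dict[str, bool]) -> str:
    """Condense {test_name: passed} into a one-line summary for chat."""
    failed = [name for name, passed in results.items() if not passed]
    if not failed:
        return f"All {len(results)} tests passed."
    return f"{len(failed)}/{len(results)} tests failed: " + ", ".join(failed)


def notify_slack(webhook_url: str, results: dict[str, bool]) -> None:
    # Slack incoming webhooks accept a JSON payload with a "text" field
    payload = json.dumps({"text": build_summary(results)}).encode()
    req = urllib.request.Request(webhook_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The same summary function could feed an email notification or a dashboard; the point is that the notification path is automated, not that Slack specifically is used.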
It’s important to mention here that testing and feedback at this stage are much faster than when dealing with hardware integration. Fixing issues during product development with fully automated unit testing will save time for everyone later.
Test frequency: Per commit or per build
Test duration: Short test times for fast, specific results
Test coverage: Unit testing for protocol behavior and feature performance
Other considerations: Incorporating with CI/CD tools, parallel testing, notification and issue tracking
Quality Assurance teams are built around testing. It’s their job to take a higher-level view of the overall product and test it rigorously for issues that may have been missed, test for interoperability, and overall performance benchmarks.
Whereas developers focus on testing their code, QA focuses on qualifying entire release builds before they are released. This puts their test cycle frequency on the order of nightly or weekly test runs. This also allows them to schedule testing in larger time blocks, allowing them to address more complete (and more critical) test coverage.
QA testing is quite literally designed to validate product quality. This means they should add performance, interoperability, and security testing to their mix of test coverage. In addition, regression testing of protocol and feature functionality compared to previous builds can show issues that may have resurfaced or that came from the new code changes.
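Regression comparison against a previous build can be automated as a simple diff of verdicts. This sketch assumes results are available as {test_id: passed} mappings; the shape of the data is illustrative:

```python
def regressions(previous: dict[str, bool],
                current: dict[str, bool]) -> list[str]:
    """Tests that passed on the previous build but fail on this one."""
    return sorted(t for t, passed in current.items()
                  if not passed and previous.get(t, False))
```

Anything this function returns is a strong signal that new code changes broke something that used to work, or that an old issue has resurfaced.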
While QA teams need to automate these longer test runs, they also need to narrow failures down to the specific set of test cases, sometimes even individual ones, that should be checked and re-tested.
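One hedged sketch of this narrowing step: re-run only the failed cases and split them into likely-flaky failures versus confirmed ones worth escalating. The re-run hook is injected so the triage logic stays independent of any particular test tool; the names here are illustrative, not a real API.

```python
from typing import Callable


def triage_failures(results: dict[str, bool],
                    rerun: Callable[[str], bool]) -> dict[str, list[str]]:
    """Re-run only failed cases; classify them as flaky or confirmed."""
    triage = {"flaky": [], "confirmed": []}
    for test_id, passed in sorted(results.items()):
        if passed:
            continue
        if rerun(test_id):           # passes on retry -> likely flaky
            triage["flaky"].append(test_id)
        else:                        # fails again -> escalate to dev
            triage["confirmed"].append(test_id)
    return triage
```

Only the "confirmed" bucket needs a developer's attention, which keeps the escalation channel focused.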
Consequently, QA needs effective mechanisms for escalating issues to product development. Using the same testing system as the dev team is a great start, but it’s also important to build a strategy around logging and reporting. QA teams need to access, annotate, and save full logs and capture files to communicate with developers and have regular meetings that coordinate with product development’s code cycles.
Test frequency: Nightly or weekly builds
Test duration: Longer time blocks, more complete test coverage
Test coverage: Performance and feature validation
Other considerations: Narrowing down to specific issues, logging and sharing results with product development
System integrators have the difficult task of integrating device software with the underlying hardware and interface drivers. It's also often the case that the SI process is done by a secondary vendor or a separate team in your organization. Test automation is essential at this stage to make sure the integration is successful and meets performance and quality standards.
System integration testing should focus on getting the product ready for deployment. System integration should run comprehensive tests for feature validation and performance and include additional tests focused on validating the operation of hardware components. This is where things like rigorous Wi-Fi testing can be significant.
System integration must also examine the system’s long-term stability in the presence of regular protocol operation and user behavior. Running client scalability tests alongside multi-client performance over very long periods of time can reveal severe underlying memory leaks or fragmentation problems that can impact performance or render a system unusable.
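A simple way to automate that kind of long-term check is to sample free memory periodically during a soak test and flag a sustained downward trend. This is a minimal sketch; the sampling source, units, and threshold are assumptions to be tuned per device.

```python
def trend_slope(samples: list[float]) -> float:
    """Least-squares slope of samples versus sample index."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(range(n), samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den


def leak_suspected(free_kib: list[float], threshold: float = -1.0) -> bool:
    """Flag a run when free memory declines faster than `threshold`
    KiB per sample over the whole soak test (hypothetical units)."""
    return trend_slope(free_kib) < threshold
```

A steady negative slope across hours of multi-client traffic is exactly the kind of signal that distinguishes a slow leak from ordinary allocation noise.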
Lastly, since different teams often do SI, automated test reports should include details about the hardware environment and logs to aid investigation and ease the escalation of issues back to product development.
Test frequency: Complete deployment builds
Test duration: Longer time blocks, more complete test coverage
Test coverage: Performance, hardware dependencies (i.e., Wi-Fi), scalability, and long-term stability
Other considerations: Including hardware logs, testing performance and scalability in the presence of protocol/user behavior
While security testing should be a part of your entire product development process, it’s inevitable that some flaws or exploits will be discovered in the field. We’ve talked extensively about designing secure broadband gateways and Wi-Fi routers, and it’s worth mentioning as part of your overall test automation strategy.
Security test automation, whether before deployment or after a flaw is discovered, should include both active and passive testing. Automating active security testing involves running concurrent port scanning or fingerprinting tools against your device, such as the industry-standard Nmap scanner. This will reveal how your device appears to malicious attackers and which exploitable services are open and running on it. Passive scanning of your device's behavior with signature-based IDS rules during testing can also reveal less-than-ideal behavior.
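Scan output can feed the automation pipeline directly. The sketch below parses one line of Nmap's grepable output (`-oG`), assuming the usual `port/state/protocol//service///` entry format, and extracts open ports for a pass/fail check; the exact field layout should be verified against your Nmap version.

```python
def open_ports(grepable_line: str) -> list[tuple[int, str]]:
    """Extract (port, service) pairs marked 'open' from one line of
    nmap grepable (-oG) output."""
    if "Ports:" not in grepable_line:
        return []
    ports_field = grepable_line.split("Ports:", 1)[1]
    found = []
    for entry in ports_field.split(","):
        fields = entry.strip().split("/")
        # Assumed layout: port/state/protocol/owner/service/...
        if len(fields) >= 5 and fields[1] == "open":
            found.append((int(fields[0]), fields[4]))
    return found
```

A CI step could then fail the build whenever a port outside an approved allow-list shows up as open.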
Security testers should escalate any red flags, such as open ports, easily identified OS details, and communication with unexpected or unsecured web services, to the product development team. This may even include monitoring software components (such as open-source projects) that are part of the device's code base for security patches and ensuring they are incorporated into the product.
Test frequency: In sync with QA, regression testing
Test duration: Short term active testing and long-term stability
Test coverage: Active scanning and passive monitoring
Other considerations: Discovering attack surfaces, regression testing of previous patches, escalating to product development
The beauty of CI/CD tools like Jenkins, Bamboo, GitLab, etc., is that they allow developers to incorporate scripts or external API calls into a "pipeline" that runs after each commit or merge. These tools can automatically kick off testing when integrated with test automation tools that expose robust web APIs, then collect pass/fail results and apply them to the result of a build. This "full-scale" level of automation for your entire development environment, coupled with a solid test automation strategy, will have you releasing products and product updates faster, with higher quality, keeping customers and partners satisfied.
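The glue between a pipeline and a test tool's web API often reduces to "start a run, poll until it finishes, map the outcome to an exit code." This is a generic sketch, not any particular tool's API: the poll callable, status fields, and exit-code convention are all assumptions.

```python
import time
from typing import Callable


def wait_for_run(poll: Callable[[], dict], timeout_s: float = 3600,
                 interval_s: float = 0.0) -> int:
    """Poll a test-automation web API until the run finishes.

    Returns a CI-friendly exit code: 0 = all passed, 1 = failures,
    2 = timed out. `poll` might wrap e.g. GET /runs/<id> (hypothetical).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = poll()
        if status.get("state") == "finished":
            return 0 if status.get("failures", 0) == 0 else 1
        time.sleep(interval_s)
    return 2
```

Because CI tools treat a non-zero exit code as a failed pipeline stage, this one function is enough to make test results gate a build.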
QA Cafe can help you build your test automation strategy. Contact us to find out more!