Test Automation Strategies with Microsoft Technologies
An organization’s first application life-cycle management (ALM) and DevOps story is typically automated build and release management – continuous integration. With continuous integration in place, along with the right people and processes, we have taken a significant first step toward tightening the feedback loop. Now, in the afternoon, the product owner can review and provide feedback on what the development team implemented that morning. Not only that, but build and release management automation is foundational to realizing other significant DevOps tenets, which ultimately reduce lead time.
Now that we have this first victory, how do we leverage the momentum to take the next step? And what is the next step? After build and release management, automating software validation is typically the next significant move toward reduced lead time and increased confidence in release quality. Unfortunately, this is where so many teams lose momentum. They may go on to automate a full build and release management pipeline all the way to production, but software validation remains a manual, and often overlooked, aspect of software development. Lead times aren’t improved, sprint integrity is still compromised, and buggy software erodes the end user’s confidence in the organization’s capabilities, costing it market share and business confidence.
Until recently, even if there was a healthy appreciation for the value of automated software validation, the .NET ecosystem provided few tools which could easily build and automate mature testing strategies, like end-to-end integration and performance testing. However, over the past several years, Microsoft has made significant investments in sophisticated, generally available, and easy-to-use tools which support mature SDLC processes. With Team Foundation Server (TFS) and Visual Studio Team Services (VSTS), test automation can be quickly added to the ALM and DevOps pipeline. With Azure, Cloud-based Load Testing uses virtual machines to generate geographically disparate load for countless virtual users. Combine these technologies with the testing paradigms available in Visual Studio today, and a development team can easily create, maintain, integrate, and automate unit, performance, integration, functional, and even security testing at scale with limited investment in time and resources.
The comparatively new Coded UI Tests are particularly compelling. Using a browser plug-in (or Microsoft Test Manager), a user can record their interactions with a target application to generate a C# script which, because it is simply code, can be highly customized in Visual Studio. Any data source – from a CSV file to a SQL Server table or even an Access database – can be used to drive test variables. Test assertions can be applied against form elements, the query string, and even asynchronous communication data. Of course, comprehensive metrics and measurements are gathered which can later be summarized and reviewed through charts, mins and maxes, and raw data. All of this can be integrated into the ALM / DevOps pipeline using TFS or VSTS, with Azure generating load. Pipeline gates can interpret test results to automate feedback to the development team and even abort the release. Because Coded UI Tests target web applications, any web technology may be targeted (not just ASP.NET).
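To make the data-driven idea concrete, here is a minimal sketch of what such a generated-then-customized test might look like. The class name, CSV file, column names, and the recorded UIMap actions (`SearchForOrder`, `ResultsGrid`) are illustrative assumptions, not part of any real application; the attributes and `TestContext` pattern are the standard MSTest / Coded UI conventions.

```csharp
// Hypothetical data-driven Coded UI test. Names such as OrderSearchTests,
// testdata.csv, and the UIMap members are assumptions for illustration.
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class OrderSearchTests
{
    // MSTest injects the current data row through TestContext.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\testdata.csv", "testdata#csv",
                DataAccessMethod.Sequential)]
    public void SearchReturnsExpectedStatus()
    {
        // Pull test variables from the current CSV row.
        string orderId  = TestContext.DataRow["OrderId"].ToString();
        string expected = TestContext.DataRow["ExpectedStatus"].ToString();

        // Replay the recorded interaction with the parameterized value.
        var map = new UIMap();
        map.SearchForOrderParams.OrderIdText = orderId;
        map.SearchForOrder();

        // Assert against a form element captured during recording.
        Assert.AreEqual(expected, map.ResultsGrid.StatusCell.InnerText);
    }
}
```

Because the data source is just an attribute, swapping the CSV for a SQL Server table is a one-line change – the test body never needs to know where its rows come from.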
Beyond the automation of standard unit tests, which may run every hour during the development day, Coded UI Tests can automate the execution of additional testing facets at regular intervals, including performance and integration tests. Using Azure and Azure Resource Manager (ARM) templates (an incarnation of infrastructure as code), the creation and disposal of specialized environments for supporting different types of tests can be easily automated.
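As a sketch of that infrastructure-as-code idea, an ARM template is just a declarative JSON document. The fragment below – with an assumed `environmentName` parameter and an App Service plan as a stand-in test resource – shows the shape; a real test environment would list whatever resources the test run needs.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environmentName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2016-09-01",
      "name": "[concat(parameters('environmentName'), '-plan')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "S1", "capacity": 1 }
    }
  ]
}
```

A release pipeline step deploys the template into a dedicated resource group before the test run and deletes the whole group afterward, so the environment exists only as long as the tests do.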
I realize I’m probably overusing the word “easily” in my description of the technologies above. Don’t get me wrong: over the last two years, with the heavy investment Microsoft has made in its software development ecosystem, the technology piece is readily accessible and easy. But when setting out to alter the way an organization thinks about and executes an ALM and DevOps pipeline, we all know that technology is only one piece. Successfully introducing change in this space will impact people, process, culture, and technology across multiple groups in the organization, and the fastest way to fail is to try to tackle it all at once. So, the best way to successfully effect change across all four of these areas is through quick and decisive victories. Many times, these victories are small but allow the organization to tell a clear and concise story about how they solved specific challenges or pain points.
For any organization that realizes the benefits of automated software validation and is interested in applying these techniques, use the proof of concept (PoC) approach. Identify a new green-field development opportunity, or a small isolated component of an existing system, where these strategies can be quickly and easily implemented. Use this PoC to test your theories and tweak your approach. Be sure to overlay even a rudimentary value chain analysis to identify and surface measurements which can help tell your success story and remind you of why you’re putting in this effort. Whether you have clear buy-in or this is a grass-roots effort, be sure to include input and support from the various groups spanning development and operations. Finally, as you begin to see the benefits, share your story and help other teams capitalize on your experience. When you can leverage the momentum to implement more sophisticated and comprehensive strategies, you’re well on your way to higher quality, shorter lead times, and increased business value.