December 17, 2015
A couple of months ago, we delivered a mobile app along with a middle-tier API component, used by a fleet of drivers in direct delivery of goods. The app integrated with the back-end sales and routing system, provided GPS location and driving directions, and added features to help the drivers and their dispatchers efficiently manage their day.
I have been in this business for longer than I care to reveal. When you roll out something like this, the norm is bugs, feature changes, training – a substantial amount of somewhat costly work and rework. Right now, I am somewhat in a state of disbelief: I have not heard of a single thing that needs changing or doesn’t work as expected. No bug reports. No training. Nothing that needed changing once deployed. What just happened? My first thought was “are they even using it?” Yes, they are indeed.
Is this your experience developing and rolling out enterprise software? Would you like it to be? How did we get there?
Before we delve into how you can make this kind of experience reproducible, let me say a few words on why you would want to. Tomes have been written on the cost of fixing defects after software has made it to production, for instance this. Just Google “cost defect lifecycle” and you can read for yourself, or find a host of images like this:
The numbers range from 5 to 10 to as much as 100 times the original cost for a defect that survives into maintenance. In other words, what you could fix in an hour today could take days or weeks of work further down the road. The same goes for features: if what you release is not what the users ultimately need, then the cost of fixing it is on par with fixing defects. The wrong functionality is, in effect, a defect.
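To make those multipliers concrete, here is a back-of-the-envelope calculation using the 5x/10x/100x figures cited above; the phase labels are my own rough mapping, not taken from any one study.

```python
# Back-of-the-envelope defect-cost curve: the same one-hour fix,
# scaled by the multipliers cited in the studies linked above.
fix_cost_hours = 1  # a defect caught during development: one hour to fix

multipliers = {
    "caught in development": 1,
    "caught in testing": 5,
    "caught at release": 10,
    "caught in maintenance": 100,
}

for phase, factor in multipliers.items():
    print(f"{phase}: {fix_cost_hours * factor} hour(s) of work")
```

One hour now, or roughly two and a half work weeks later – that is the whole argument for quality by design in four lines of arithmetic.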
So why would you not focus on quality? To me, it boils down to education. When quoting a system, if you add extensive testing and quality processes, the client sees it as overhead against the bottom line, and cuts it out to “save money” on “phase 1”. Big mistake. A penny saved in phase 1 is 10 dollars spent later. At Trailhead, a comprehensive QA plan is part of our standard estimate template, and our processes factor it into every project we do. We strongly encourage our clients to focus on QA. We know it will be cheaper for them in the long run, and we are more interested in happy clients with successful projects for future referrals and repeat business than in undercutting a competitor that is going to skimp on quality to offer the lowest price. However, with access to cost-efficient cloud-based tools and processes, and affordable testers, the perceived overhead is actually not even that high.
To get something done well, and done repeatedly and reproducibly, you need good processes, and good people. On the process side, here are some of the things we do to create quality by design:
If you haven’t heard of Agile Development by now, it might be high time to understand the principles. To hear it from the horse’s mouth, go here. In essence, this process focuses on delivering small chunks of well tested and shippable functionality frequently. At Trailhead we use the Scrum methodology and are leaning towards Kanban, another variant (picture from Mountain Goat Software, an excellent Scrum resource).
I have been leading Scrum teams for three years, and have seen huge benefits, even to the point where we use this for our other non-software related tasks (if you use Trello, you are already doing something similar!). We do one-week sprints, meaning every week, we deliver a piece of working software to our clients. This is software they can download or log in to, try out, and give us feedback on. Every week.
As part of Scrum, we focus on a lot of communication, and on always improving our own process. Every morning, each team gets together for 10 minutes and plans the day, discusses impediments, and gives a status update. On Fridays, after our delivery, we look back at the week internally and figure out what went well, and what could be improved next week. Then we plan our next week of work.
When we plan our work a week ahead of time, we don’t waste time on detailed designs for things that may or may not end up being developed. With agile, the idea is you can pivot fast, and inexpensively. We discuss an idea, we deliver it, you like it, great. Or you don’t like it, even though it was what you asked for, so we tweak it next week and get it back on track. Or, more commonly, you have a light bulb moment, and take off in a different direction after seeing what the software does… the direction you didn’t know on Day One that you wanted to go in!
We manage our back log planning, weekly sprints, tasks and defects in Visual Studio Team Services, online, hosted in the Microsoft Cloud. It looks like this:
At the end of each week, before we deliver the software, we walk our clients through it in a weekly demo. We show finished software, and get immediate feedback. Each developer shares his or her deliverables. The client brings stakeholders to the table, not just the project or product managers involved on a daily basis, but also end users, sales and marketing, and others that will benefit from knowing what is in the pipeline. This takes 30-60 minutes. When we are done, the software is made available to the client in their own environment.
How does packaging and delivering software on a weekly basis not create a ton of overhead and waste? The secret is in the automation of this process. We actually do it several times a day, completely hands off:
With continuous integration, or CI, every time a developer checks in some code, or a tester checks in a test, a build system automatically pulls the changes, compiles them, and runs tests. If you break something with your code, you will know within minutes, and hopefully can fix it before it affects other team members.
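Conceptually, a CI gate is just a sequence of steps where the first failure stops the build and flags the check-in. Here is a minimal sketch of that loop in Python, with placeholder `echo` commands standing in for the real fetch/compile/test steps (our actual builds run inside Visual Studio Team Services, not a script like this):

```python
# Sketch of a CI gate: run each step in order, fail fast on the first error.
# The commands are placeholders, not our actual build configuration.
import subprocess

STEPS = [
    ("fetch", ["echo", "pulling latest changes"]),    # stand-in for 'git pull'
    ("build", ["echo", "compiling the solution"]),    # stand-in for the compiler
    ("test",  ["echo", "running regression suite"]),  # stand-in for the test runner
]

def run_pipeline(steps):
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:       # any failing step breaks the build...
            print(f"CI: step '{name}' FAILED")
            return False                 # ...and the team knows within minutes
        print(f"CI: {name} ok ({result.stdout.strip()})")
    return True

if __name__ == "__main__":
    ok = run_pipeline(STEPS)
    print("CI: build green" if ok else "CI: build broken")
```

The fail-fast behavior is the important part: a broken check-in never propagates to the rest of the team, because the pipeline stops and reports at the first red step.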
We use Visual Studio Team Services for source code management (Git), and we use its build services to run our builds. These builds can be easily customized to build AND deploy the different parts of a modern software system, from the database to the application server/middle tier/API, the Web front end, and native or Xamarin mobile iOS and Android apps. For instance, this build deploys an Angular web app:
And this one builds a Xamarin Android app, and pushes it out to Hockey App for deployment:
Setting up these build templates is a little tricky the first time, but for every new project we just copy and tailor an existing one, and it takes no time.
Once this system is in place, all stakeholders, including developers, testers, managers and clients have access to the latest software minutes after it is written! They can log on to the web and try it out, or download the mobile app and play with it.
Once the build and deployment process has been automated, the natural next step is to automate testing, so that when software is checked in, a full set of regression tests is run (meaning we check that the changes didn’t break something that used to work). This saves an enormous amount of time and money, since you don’t need users to manually click through everything on every build, and it also saves you from finding surprises later on in parts you haven’t looked at in a while.
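To illustrate what a regression test actually is, here is a minimal sketch (our real suites are written in C#; Python shown for brevity). The discount function and its rules are invented for the example – the point is that once a bug is fixed, a test pins the correct behavior so a later change cannot silently reintroduce it.

```python
# A minimal regression suite: each test pins a behavior so future
# check-ins cannot silently break it. The discount rule is a made-up
# example, not a feature of the delivery app described in this post.
import unittest

def apply_discount(price, percent):
    """Return price reduced by `percent`, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_full_discount_is_free(self):
        # Regression guard: imagine an earlier version returned a
        # tiny negative value here; this test keeps that bug dead.
        self.assertEqual(apply_discount(99.99, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Run on every check-in by the build server, a suite like this turns “did we break anything?” from an afternoon of manual clicking into a few seconds of machine time.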
For mobile apps, we automate using Appium, which allows us to use C# and Visual Studio to download the latest build from Hockey App, send keystrokes and gestures to a real or simulated mobile device, and compare the results to what is expected.
For web apps, we automate the testing using Selenium/WebDriver, which allows us to write tests in C# and Visual Studio, and then drive a browser through scripts, to perform the actions a user would perform, and verify the results are as expected.
Here you can see a suite of tests for a project search feature running in Chrome:
Our processes and deployment environments are hosted in Microsoft Azure (for some clients we do Amazon Web Services), and we create the infrastructure for each project on demand and retire it when done. We can spin up Virtual Machines in different geographical regions, create web apps and API servers and deploy regular SQL Server databases or Azure databases, all in minutes. We pay for only what we use. We have the footprint of a large IT department, for a minimal overhead and cost.
To get the best results, you also need the best tools. We have standardized on Visual Studio for most of our development, WebStorm for web development, and in some cases Xamarin Studio for mobile development. Tools like ReSharper help us find errors in our code and suggest ways to improve it.
For communication, we use Slack for most of our internal conversations, augmented with Skype and JoinMe or GotoMeeting. Slack has some great integrations, so we get posts from ‘bots’ when builds are completed, code is checked in, tests are written, etc., giving us all the pulse of what is happening.
The other pillar of successful QA is people. By now, it is pretty common practice for developers to write unit tests for their own code, and it is part of their responsibility to test their own work. But a seasoned QA engineer is a lot more devious, and will think outside the developer box and come up with useful techniques and edge cases to probe the robustness of the software. This is a craft, but of course it benefits from tools as well.
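As a small illustration of that difference, compare the happy-path check a developer typically writes with the boundary and near-miss cases a QA engineer adds. The stop-count rule below is invented for the example, not from the delivery app.

```python
# A developer's happy-path test vs. the edge cases a devious QA
# engineer adds. The business rule is made up for illustration.
def valid_route_stop_count(n):
    """A delivery route must have between 1 and 50 stops (invented rule)."""
    # bool is a subclass of int in Python, so exclude it explicitly --
    # exactly the kind of quirk a QA engineer discovers and pins down.
    return isinstance(n, int) and not isinstance(n, bool) and 1 <= n <= 50

# The happy-path check a developer typically writes:
assert valid_route_stop_count(10)

# The boundaries and near-misses a QA engineer probes:
assert valid_route_stop_count(1)         # exactly at the lower boundary
assert valid_route_stop_count(50)        # exactly at the upper boundary
assert not valid_route_stop_count(0)     # one below the boundary
assert not valid_route_stop_count(51)    # one above the boundary
assert not valid_route_stop_count(-3)    # negative input
assert not valid_route_stop_count(2.5)   # wrong type sneaking through
assert not valid_route_stop_count(True)  # the bool-as-int trap
print("all edge cases covered")
```

None of these extra cases occur to someone focused on making the feature work; they occur to someone focused on making it break. That is the craft.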
Our QA engineers perform a mix of manual and automated testing. On the manual side, they use tools like Postman and Toad or SQL Server Management Studio to dig deep into the APIs (which are self-documenting with Swagger) and database tables, to find or create good test data. Test plans are written for each sprint using Microsoft Test Manager, as well as regression test plans:
The plans are also available in Visual Studio Team Services, and contain detailed steps on what to test manually, as well as a record of test runs and devices tested on. Having the plans in VSTS allows us to link them to the corresponding work items in sprint planning, and lets new team members be onboarded with immediate access to the aggregate testing knowledge of the project.
Each project has one or more QA engineers, who do a mixture of manual and automated testing, on a variety of devices, testing both what is written during the current sprint, and regression testing previous work where necessary.
To wrap up this rather lengthy exposition on how a focus on quality drives down costs and accelerates time to market, and how you achieve it by design: as you can see, it takes careful thought and planning, the right tools and processes, and the right crew to do the work.
At Trailhead we believe we have this nailed, and I think the opening real-life scenario proves it. Do you routinely achieve that level of success with your quality? Are you happy with your current process? Do you think it might be worth investing a little up front for the big savings downstream? We can help you get there!