Living in a Data-Driven World
Advancements in technology often create more complexity, which underscores the pivotal role that metrics play in shaping our understanding of what is happening in these complex systems. It should be no surprise, then, that companies of all sizes try to create metrics for everything and, using these numbers, attempt to predict the future, avoid problems, and make improvements. Good metrics can be a key driver of successful projects, and that is likely to only increase in the future.
In this post, I will introduce you to a popular statistic from the world of sports and show you how it can be applied to our software projects to make them more successful.
Let’s take a quick detour into a recent sports trend: “expected event” statistics. This statistical approach can be found in all major sports, but I’ll focus on soccer, my favorite. One way it is used in soccer is the concept of “expected goals” (or xG for short), a statistical measure that assesses the quality of goal-scoring chances in a soccer match.
xG is often used to evaluate a team or player’s performance beyond the actual goals scored. The basic idea behind expected goals is to assign a probability value to each goal-scoring opportunity based on various factors such as the distance from the goal, the angle of the shot, the type of play, and historical data on similar chances. These factors are used to calculate the likelihood of a shot resulting in a goal, whether it did or not.
By calculating xG, you are able to gauge a team or player’s performance beyond the actual goals they scored.
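As a toy illustration, a team’s xG for a match is simply the sum of the per-shot scoring probabilities. The probabilities below are invented for the example; real xG models derive them from large historical datasets using the factors described above.

```python
# Toy xG calculation: each shot carries a model-estimated probability
# of becoming a goal; a team's match xG is the sum of those values.
# (These probabilities are made up for illustration only.)
shot_probabilities = [0.76, 0.12, 0.05, 0.31, 0.08]  # one entry per shot

x_g = sum(shot_probabilities)
print(f"Expected goals (xG): {x_g:.2f}")
```

A team that scores two goals from these chances outperformed its xG of 1.32; a team that scores none underperformed it.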
From xG to xB
As interesting as that may be, you might be wondering what soccer statistics have to do with software development. There are of course some obvious common factors between the two pursuits, such as teamwork, working for results, following a plan, etc. However, I want to look specifically at how statistics and a data-based approach can be used in both soccer and software projects to catch trends, predict the future, and improve results.
In soccer, expected goals (xG) are a way of assessing the quality of goal-scoring opportunities. For QA, we might use the number of expected bugs (xB) to assess the quality of our development and testing processes. Below, I go into some detail about several ways in which a statistic like expected goals could be applied to the field of software testing. I think you’ll be surprised, as I was, by how well the analogy works.
If, in soccer, the quality of a goal-scoring opportunity is assessed using factors like distance from the goal and angle of the shot, then for QA, the factors that influence the number of expected bugs could include the number of requirements, how clearly they are written, and how detailed they are before the project starts.
For example, when assessing the requirements for a project, if we discover that they are vague, poorly formatted, or missing key elements, we could predict a large number of expected bugs later in the project, and with them a drop in quality.
Beyond assessing quality, xG in soccer also estimates the probability of scoring from a given chance, calculated from historical data and various predictive factors. The same could work for software development: an expected-bugs metric would help predict the number of issues and problems we will encounter on a project. A number of factors could influence that number, such as the complexity of the project, the planned testing hours, and the number of engineers on the QA team.
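To make this concrete, here is a minimal sketch of what an xB estimate might look like as a weighted combination of the factors above. The factor names, scales, and weights are all illustrative assumptions on my part; a real model would be calibrated against a team’s own historical project data.

```python
# Hypothetical "expected bugs" (xB) model. All weights below are
# invented for illustration -- a real model would be fit to a team's
# historical bug and project data.
def expected_bugs(requirements_count, requirement_clarity,
                  complexity, testing_hours, qa_engineers):
    """Return a rough xB estimate for a project.

    requirement_clarity and complexity are scored from 1 (best/simplest)
    to 5 (worst/most complex); the other inputs are raw counts/hours.
    """
    base = 0.4 * requirements_count               # more requirements, more bugs
    base *= 1 + 0.25 * (requirement_clarity - 1)  # vague specs inflate xB
    base *= 1 + 0.30 * (complexity - 1)           # complexity inflates xB
    # Testing capacity deflates xB (capped so it never reaches zero).
    coverage = min(testing_hours * qa_engineers / 200, 0.5)
    return base * (1 - coverage)

xb = expected_bugs(requirements_count=50, requirement_clarity=3,
                   complexity=4, testing_hours=80, qa_engineers=2)
print(f"Expected bugs (xB): {xb:.1f}")
```

Even a crude model like this forces the team to score its requirements and staffing honestly before the project starts, which is half the value of the metric.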
Soccer teams can either overperform or underperform their expected goals. The same is true of software teams: they may encounter more or fewer bugs than expected based on the anticipated level of quality.
After a project is finished, we could analyze whether we encountered more bugs than expected, in which case the team underperformed and we need to understand why. The opposite is also possible: we encountered fewer bugs than expected, in which case we should study what this team did well and carry it into future projects.
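That post-project check can be sketched as a simple ratio of actual to expected bugs. The 15% tolerance and the wording of the verdicts are assumptions chosen for illustration; each team would pick thresholds that suit its own variance.

```python
# Post-project check: compare the actual bug count against the xB
# estimate. The tolerance value is an illustrative assumption.
def assess_performance(actual_bugs, expected_bugs, tolerance=0.15):
    """Classify a finished project relative to its xB prediction."""
    ratio = actual_bugs / expected_bugs
    if ratio > 1 + tolerance:
        return "underperformed: more bugs than expected, investigate causes"
    if ratio < 1 - tolerance:
        return "overperformed: fewer bugs than expected, capture what worked"
    return "on target: bug count matched expectations"

print(assess_performance(actual_bugs=41, expected_bugs=28.5))
```

Run over several finished projects, a report like this turns the retrospective from a vague discussion into a comparison against a stated prediction.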
Soccer coaches, recruiters, managers, and analysts all make decisions based on the xG metric, understanding its importance. The same could apply to leadership on development teams: after assessing the expected bugs, team leads could change the strategy and approach, allocate more resources to the project, ask QA to perform additional testing, or even push back release dates.
In soccer, the experience of the players and the team overall plays a significant part in assessing and predicting xG. The same applies to development processes. For example, teams with experience in a particular technology, type of project, or client are more likely to get a new job right on the first try.
An experienced team will also have a better chance to do the job more cleanly and with fewer bugs than a less experienced team. With that in mind, the experience of team members and familiarity with an industry, software project, or client could all play an important role in calculating a more predictive “expected bugs” metric.
As you can see, there are several interesting connections between soccer’s “expected goals” and my proposed “expected bugs” metrics. Both types of teams involve a combination of strategy, skill, continuous improvement, and adaptation to challenges for optimal performance. Both types of teams would also benefit from tracking these metrics.
By incorporating the concept of expected bugs, I believe that software teams can move beyond a simple bug count to focus on the many different aspects of the software development process that impact quality. Doing so will lead to a more proactive and data-driven approach to software quality management.