Artificial Intelligence (AI) and Large Language Models (LLMs) have impacted every aspect of technology work, giving software engineers (programmers, QA, designers, and leadership) a useful new tool to enhance their daily work. As it relates to the software quality process, I believe AI offers help in three basic activities:
- User story analysis
- Test case creation
- Documentation
In this blog, I’ll take you through each of these common software quality tasks and explain how I recommend using AI to improve them.
Two Notes
For each of these activities, we must keep in mind two key aspects:
- It’s critical that you build your prompts according to your specific needs and refine them as many times as necessary to ensure that the outcome is accurate, complete, and consistent (always using the same format and content type).
- It is also important to note that, while AI can assist in these areas, it should be seen as a complement to human expertise and not a replacement. We must always be critical of its output and make manual improvements as needed.
User Story Analysis
When analyzing a user story for the first time, team members can often miss certain aspects or scenarios. This is even more likely if the new user story is related to old functionality that has not been reviewed for a long time. Also, the ideal description format (As a… I want… so that…) is rarely used; more informal phrasing usually takes its place.
At the beginning of the software lifecycle, during grooming or planning sessions, QAs (and any other team members) can be assisted by AI in clarifying and refining user story descriptions, generating acceptance criteria, and even assigning story points once the process is mature enough and the prompt framework is well proven.
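To make this analysis repeatable, the prompt itself can be assembled programmatically. The sketch below is a hypothetical Python helper (the role and instruction wording are my own assumptions, not a fixed recipe); it only builds the prompt string you would then send to your LLM of choice:

```python
def build_analysis_prompt(user_story: str, ask_questions: bool = False) -> str:
    """Assemble a reusable analysis prompt for a user story.

    The role and instructions here are illustrative defaults; refine them
    until the output is accurate, complete, and consistent for your team.
    """
    lines = [
        "You are a QA analyst reviewing a user story before a grooming session.",
        f"User story: {user_story}",
        "Analyze the story and add comments regarding possible failures.",
    ]
    if ask_questions:
        # Asking for questions surfaces scenarios you may not have considered.
        lines.append("Also ask clarifying questions about anything ambiguous.")
    return "\n".join(lines)


prompt = build_analysis_prompt(
    "As a user, I want to reset my password so that I can regain access."
)
```

Because the template lives in code, every story gets the same structure, and refinements apply everywhere at once.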
Test Case Creation
Once the description and acceptance criteria are clearly defined, we can generate a set of test cases (or a checklist) to cover all the possible scenarios, validations, and dependencies. These tests should have a consistent structure and display format, using consistent titles (e.g., user, action, object), showing steps and desired outcomes. Having a good prompt framework to use with your LLM is crucial for this.
The generated test cases can be attached to the user story as an additional guideline for the rest of the team; this way, we can prevent bugs while the functionality is being coded. The ‘official’ test cases can be refined and extended later.
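One way to enforce that consistent structure is to generate the prompt from a template. This is a minimal sketch, assuming Python; the title convention and grid instruction are placeholders to adapt to your own framework:

```python
from typing import Optional


def build_test_case_prompt(acceptance_criteria: str,
                           test_type: str = "functional",
                           user_role: Optional[str] = None) -> str:
    """Build a test-case generation prompt with a consistent output format."""
    lines = [
        f"Create {test_type} test cases for this acceptance criteria: "
        f"{acceptance_criteria}.",
        # Consistent titles: user, action, object.
        "Title each test case as <user>, <action>, <object>.",
        "Display in a grid with steps and the desired outcome.",
    ]
    if user_role:
        # Extra context (user role) makes the generated tests more accurate.
        lines.append(f"User has a {user_role} role.")
    return "\n".join(lines)
```

The `test_type` and `user_role` parameters are assumptions; swap in whatever dimensions your team refines on.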
Documentation
Finally, after development is complete, test cases are passed, and the user story is approved, we can create documentation for future development and maintenance. Again, these documents should be consistent in structure and format.
Another useful practice is to document (in this case manually) all the prompts used in the user story. When a new related user story has to be developed, we can reuse those prompts for analysis, starting the cycle again and refining them when necessary.
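As a sketch, that manual prompt log could be as simple as a dictionary keyed by story ID. The `log_prompt` and `reuse_prompt` names below are hypothetical, and the `[user story]` placeholder follows the convention used in the example prompts:

```python
# A minimal prompt "library": refined prompts stored next to the story ID.
prompt_log: dict = {}


def log_prompt(story_id: str, stage: str, prompt: str) -> None:
    """Record the refined prompt used for a given story and stage."""
    prompt_log.setdefault(story_id, {})[stage] = prompt


def reuse_prompt(story_id: str, stage: str, new_story: str) -> str:
    """Start a related story from a previously refined prompt template."""
    template = prompt_log[story_id][stage]
    return template.replace("[user story]", new_story)
```

A wiki page or a field on the story itself works just as well; the point is that refined prompts survive beyond the story they were written for.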
Some Examples
Example prompts for analysis.
“Analyze this user story: [user story]”
A very simple and effective way to analyze the overall functionality.
“Analyze this user story [user story] and add comments regarding possible failures.”
Adding specific instructions to the prompt will give us detail and awareness of potential issues.
“Ask questions regarding this user story: [user story]”
Simply make the AI ask questions. This will surface scenarios or concerns that you had not thought of before.
“Create acceptance criteria for this user story: [user story] using a numbered list with user roles and desired outcome.”
After refinement, this is an effective way to set the final acceptance criteria.
Analysis prompts can range from quite simple to extraordinarily complex. You will need to refine them, adding more instructions, specifying roles, setting the desired format, and so on. The more complex the story is, the more refined the prompt should be.
Example prompts for test cases.
“Create test cases for this acceptance criteria: [acceptance criteria]”
Simple set of test cases.
“Create test cases for this acceptance criteria: [acceptance criteria]. Display in a grid with steps and final verification.”
It creates the test cases and displays them in a specific format.
“Create happy path test cases for this acceptance criteria: [acceptance criteria]”
Here we define the type of test we want to cover. Choose the type that best covers your needs, e.g., field validations, boundaries, negative tests, etc.
“Create functional test cases for this acceptance criteria: [acceptance criteria]. User has a read-only role, display in grid.”
In this case, we added context (user role) for the test to be more accurate.
As you can see, there are thousands of ways to set up the prompt; finding the best one for your needs depends on your knowledge of the system and your time spent with AI. This is an iterative process in which you will refine the desired prompt. Some key aspects to keep in mind: format, test types, user role, UI components (specify which components are used: dropdowns, text fields, etc.), and grammar.
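Those key aspects can be layered onto a base prompt mechanically. A minimal sketch, assuming Python; the aspect labels are illustrative and the `refine_prompt` helper is hypothetical:

```python
def refine_prompt(base_prompt: str, **aspects: str) -> str:
    """Append refinement instructions (format, test type, user role,
    UI components, grammar/style) to a base prompt, one per line."""
    labels = {
        "format": "Output format",
        "test_type": "Test type",
        "user_role": "User role",
        "ui_components": "UI components used",
        "style": "Grammar and style",
    }
    extra = [f"{labels.get(key, key)}: {value}." for key, value in aspects.items()]
    return "\n".join([base_prompt] + extra)
```

Each iteration of refinement then becomes one more keyword argument instead of a hand-edited prompt.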
Example prompts for documents.
“Write a document explaining this user story: [user story]”
As in the analysis stage, this is a good way to start the prompt refinement, just a straightforward task to create the first document.
“Write a two-paragraph document for this user story [user story].”
Adding format will start giving us a better document structure. Depending on the complexity of the user story, this format will change.
“Write a technical document for this user story: [user story], highlighting the most important aspects of it.”
Include the type of document you need. Sometimes a more technical explanation can lead to a better understanding but sometimes it can be quite the opposite. Again, it depends on the user story. Backend, endpoints, and systems integration functionalities may be good candidates for technical documentation.
“Write a testing document for this user story: [user story], include a brief explanation of the functionality. Exclude redundant information. Write it as a non-technical person.”
Add more context, types, grammatical structure, format, and even style. At the end of this process, you will have a consistent document generator.
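Pulling these refinements together, that “consistent document generator” could be as simple as a template function. This is a sketch with assumed parameter names, not a finished tool:

```python
def build_doc_prompt(user_story: str,
                     doc_type: str = "technical",
                     audience: str = "non-technical reader",
                     paragraphs: int = 0) -> str:
    """Build a documentation prompt with consistent type, length, and style."""
    lines = [f"Write a {doc_type} document for this user story: {user_story}."]
    if paragraphs:
        # Length constraints keep the structure consistent across stories.
        lines.append(f"Limit the document to {paragraphs} paragraphs.")
    lines.append(f"Write it for a {audience}. Exclude redundant information.")
    return "\n".join(lines)
```

The defaults here are assumptions; the value is that every story's documentation prompt comes out of the same function.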
Conclusion
Addressing basic activities in your daily workflow is the best way to start integrating AI. Although it can be time-consuming at the beginning, as with any new tool, the time and effort invested will be well rewarded in the end.
Your prompt refinement process is the key to success. There are many examples on the web, but it is up to you to set the best one for your project. Always remember to be critical of the outcome; AI will not replace your knowledge and expertise.