
The Future of IBM i+ CI/CD: The Impact of AI on Testing

Rebecca Dilthey

August 8, 2023

This is the fourth post in a series titled The Future of IBM i+ CI/CD. (The first, second, and third posts are great pre-reading for this one.)

With a technology as transformative as AI, it is difficult to predict accurately how AI’s influence on DevOps will evolve more than a few years out. However, we make the following prediction about AI’s role in testing with high confidence.

[Image: The evolutionary journey of testing, from manual and automated testing to guided testing, predictive testing, quality assistance, and intelligent code fix]

Below is an explanation of each step in the evolutionary journey of testing. Manual and automated testing are self-explanatory; let’s dive into the rest.

Guided Testing

While understanding how an application is developed is important, the key is how it’s used. An application can be developed for customer relationship management (CRM), for example, with standard database fields and typical workflows for a CRM application like “create new customer profile,” “update profile,” and so on. But what businesses really need to understand is:

  • Who uses the application? Is it just the contact center? What about accounting? Sales? Legal?
  • What are they doing in the application? Are they jumping to screen 12 and updating one field 90% of the time? Do they start at the first screen, but the CRM team goes through workflow 1 while accounting goes through workflow 2?
  • How do these teams use it? Are they in the application eight hours a day? How long do they spend creating a new profile or making an update? Perhaps accounting is only in the application 10% of the time compared to the CRM team, who is always in there, but the workflow accounting walks through is critical to managing the business’s finances.

When it comes to testing, if you know what you must test, you can do so more often and build more comprehensive testing scripts that can be reused. Integration, unit, and regression testing can all benefit from test scripts built from process discovery insights. If you build a repository of tests, you can theoretically turn two months of testing into two days or even two hours. The repository becomes a knowledge base for the team that is continuously updated, getting better over time.
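As a rough illustration, here is a minimal sketch (in Python) of what a test repository keyed to process discovery insights might look like, prioritizing scripts for the most heavily used workflows. The workflow names, usage shares, and script paths are hypothetical stand-ins, not output from any particular discovery tool.

```python
# Minimal sketch of a reusable test repository keyed by discovered workflows.
# Workflow names, usage shares, and script paths are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class DiscoveredWorkflow:
    name: str            # e.g. "update customer profile"
    usage_share: float   # fraction of observed sessions that exercise this workflow
    test_scripts: list[str] = field(default_factory=list)  # reusable scripts for it

class TestRepository:
    """Knowledge base of reusable tests, continuously updated from process discovery."""
    def __init__(self) -> None:
        self._workflows: dict[str, DiscoveredWorkflow] = {}

    def register(self, workflow: DiscoveredWorkflow) -> None:
        self._workflows[workflow.name] = workflow

    def prioritized_scripts(self) -> list[str]:
        # Run the scripts covering the most heavily used workflows first, so the
        # riskiest paths are always exercised even in a short test window.
        ordered = sorted(self._workflows.values(), key=lambda w: w.usage_share, reverse=True)
        return [script for wf in ordered for script in wf.test_scripts]

repo = TestRepository()
repo.register(DiscoveredWorkflow("update profile (screen 12)", 0.90, ["regression/update_field.py"]))
repo.register(DiscoveredWorkflow("accounting month-end workflow", 0.10, ["integration/month_end.py"]))
print(repo.prioritized_scripts())
```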

Predictive Testing

Once you have the data from process discovery and are heading toward “shift left” testing, you can add AI to your testing process. Predictive testing knows what to test as soon as code starts changing: the AI can trigger pass/fail testing before QA even knows a test is needed, lessening the burden on the testing team.

The AI will need the rules by which it tests your code. While the AI vendor will likely have a standard set of rules preprogrammed into the model, you will need to customize the model based on what your processes look like.

Often, there is already AI in other parts of the DevOps process—for instance, there may be a model that is trained to compile code in the same way it was compiled before. However, you could build nuances into the model or change it directly to compile the code differently. For example, when code is ready to test, you could tell the AI model to pull a particular library from the test repository, depending on the workflow that is being updated.
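For illustration only, the sketch below shows what such customized rules might look like: a simple mapping (with hypothetical RPG member and test library names) from changed source members to the test library that should be pulled from the repository when those members change.

```python
# Minimal sketch of predictive-testing rules: when a source change arrives, look up
# which test library to pull from the test repository and queue it before QA is
# involved. Member and library names are hypothetical, not a vendor's rule set.

# Customizable rules: map source members (or the workflow they implement)
# to the test library that should be pulled when they change.
RULES = {
    "CUSTUPD.RPGLE": "TESTLIB_PROFILE_UPDATE",
    "MONTHEND.RPGLE": "TESTLIB_ACCOUNTING",
}

def on_code_change(changed_members: list[str]) -> list[str]:
    """Return the test libraries to queue for a set of changed members."""
    queued = []
    for member in changed_members:
        library = RULES.get(member)
        if library and library not in queued:
            queued.append(library)
    return queued

# A commit touching the profile-update workflow triggers its tests automatically.
print(on_code_change(["CUSTUPD.RPGLE"]))   # -> ['TESTLIB_PROFILE_UPDATE']
```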

AI can also be built to establish relationships between tests already run and changes that are made. This means functional testing can be focused on specific areas with minimal human intervention.
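One established way to do this is test-impact analysis. The sketch below, using hypothetical member and test names, shows the basic idea: record which tests have exercised which source members, then select only the related tests when a change arrives.

```python
# Minimal sketch of relating past test runs to code changes, then using that
# relationship to focus functional testing. File and test names are hypothetical.

from collections import defaultdict

class TestImpactMap:
    """Tracks which tests have historically exercised which source members."""
    def __init__(self) -> None:
        self._impact: dict[str, set[str]] = defaultdict(set)

    def record_run(self, test_name: str, members_exercised: list[str]) -> None:
        for member in members_exercised:
            self._impact[member].add(test_name)

    def tests_for_change(self, changed_members: list[str]) -> set[str]:
        # Only the tests historically tied to the changed members need to run.
        selected: set[str] = set()
        for member in changed_members:
            selected |= self._impact.get(member, set())
        return selected

impact = TestImpactMap()
impact.record_run("test_update_profile", ["CUSTUPD.RPGLE", "CUSTDB.SQL"])
impact.record_run("test_month_end", ["MONTHEND.RPGLE"])
print(impact.tests_for_change(["CUSTUPD.RPGLE"]))  # -> {'test_update_profile'}
```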

Quality Assistance

Think of this stage as an Amazon recommendation that suggests other things to buy based on what is currently in your cart, and what other customers bought in addition to that thing you’re about to buy. As quality assistance, AI can make recommendations about how to update code based on a failed test an AI model ran. For example, an AI model could raise a flag that says, “You changed this part here, but you either haven’t tested it or haven’t tested it as comprehensively as you did before—are you sure you don’t want to?” Or it could prompt you to make sure you want to test code from a workflow that is not recorded in the process discovery, which could mean that the process is not fully understood.
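As a minimal sketch of that kind of check (workflow names, coverage figures, and the discovered-workflow list are hypothetical), quality assistance might surface its flags as recommendations rather than hard gates:

```python
# Minimal sketch of a quality-assistance check that raises the flags described
# above. Workflow names, coverage numbers, and thresholds are placeholders.

DISCOVERED_WORKFLOWS = {"update profile", "month-end close"}   # from process discovery

def review_change(changed_workflow: str, coverage_now: float, coverage_before: float) -> list[str]:
    """Return human-readable recommendations rather than blocking the pipeline."""
    flags = []
    if coverage_now == 0.0:
        flags.append(f"You changed '{changed_workflow}' but haven't tested it. Are you sure?")
    elif coverage_now < coverage_before:
        flags.append(
            f"'{changed_workflow}' is tested less comprehensively than before "
            f"({coverage_now:.0%} vs {coverage_before:.0%})."
        )
    if changed_workflow not in DISCOVERED_WORKFLOWS:
        flags.append(
            f"'{changed_workflow}' does not appear in process discovery; "
            "the process may not be fully understood."
        )
    return flags

print(review_change("ad-hoc bulk import", coverage_now=0.4, coverage_before=0.7))
```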

Intelligent Code Fix

At this stage in the evolution of testing, we expect machine learning to step in, taking the intelligence built into the AI model and learning to fine-tune the rules to reality, perhaps even improving on the intelligence itself. For example, sometimes you want to change how something is compiled, but only in particular situations. AI can recognize whether that is the case and update the rules based on that information. There are three ways this could be incorporated into testing:

  • Machine learning models can notice when you omit a particular test that is typically run, telling you it will run it because the scenario is similar to when you did so in the past.
  • There may be rules to test something in a certain way, but based on process discovery data, users seem to engage that workflow differently. Machine learning models can recommend a different approach to testing based on that data. They can also review code that has been moved forward from development to make sure that testing is appropriate and comprehensive, adding a test to the queue if one is already available or pushing a request to either QA or the developer to create a new test.
  • In a fully advanced scenario, the machine learning model could create the test, run it, and apply a code fix without any human interaction, as sketched below.
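To make that last scenario concrete, here is a purely conceptual sketch. Every function below is a hypothetical stub standing in for a trained model, not a real vendor API, and the confidence threshold is an assumption about where a human would still be pulled in.

```python
# Conceptual sketch of the fully advanced scenario: generate a test, run it, and
# attempt a fix, escalating to QA or the developer when confidence is low.
# All functions are hypothetical placeholders, not a real model or vendor API.

def generate_test(workflow: str) -> str:
    # In practice a trained model would emit a test script; here it is stubbed.
    return f"test_{workflow.replace(' ', '_')}"

def run_test(test_name: str) -> bool:
    # Placeholder: assume the generated test fails so the fix path is exercised.
    return False

def propose_fix(test_name: str) -> tuple[str, float]:
    # Placeholder returning a candidate patch and the model's confidence in it.
    return ("patch: correct field mapping on screen 12", 0.55)

def intelligent_code_fix(workflow: str, confidence_threshold: float = 0.8) -> str:
    test = generate_test(workflow)
    if run_test(test):
        return "tests pass; no fix needed"
    patch, confidence = propose_fix(test)
    if confidence >= confidence_threshold:
        return f"applied automatically: {patch}"
    return f"low confidence ({confidence:.0%}); routed to QA/developer: {patch}"

print(intelligent_code_fix("update profile"))
```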

Intelligent Testing is the Way Forward

Organizations will likely see an explosion of innovation enabled by QA as they move toward intelligent testing. The QA team can build more comprehensive test cases based on process discovery and share them with development, including scripts for unit and integration testing. This empowers development to fix errors more quickly and build higher-quality code from the beginning. While some functional fixes will always come from user feedback, pushing higher-quality code to production means a greater share of feedback reflects real, in situ user experience, speeding up the CI/CD cycle. This sets up the organization to be more competitive, uncover more opportunities to improve productivity, and align more quickly and closely with what the market is looking for.

Intelligent code fixing brings up the question of “how does AI change an organization’s approach to RPG?” It’s possible for AI to take over both the development and testing of RPG code, minimizing the resource challenges for IBM i systems. In fact, it’s possible for AI to be the IT admin, developer, and QA team for RPG, all in one.

But we’re likely years away from that last scenario becoming commonplace. In fact, it might never happen. In the next post in this series, we’ll discuss the concerns and limitations of AI.

Learn more about the future of IBM i+ CI/CD and read our whitepaper or talk to an expert today.