
The Future of IBM i+ CI/CD: AI Concerns and Limitations

Rebecca Dilthey

August 15, 2023

This is the fifth post in a series titled The Future of IBM i+ CI/CD. (The first, second, third and fourth posts make great pre-reading for this post.)

There have already been situations where the limitations of AI are concerning. In the spring of 2023, an open letter signed by hundreds of leading engineers and scientists, including Apple co-founder Steve Wozniak, called for a six-month pause in the development of advanced AI models. The goal was to give the industry time to review and recommend policies and procedures that address the serious ethical challenges AI introduces to the world economy, not the least of which is human displacement from the workforce. Governments are also starting to restrict the use of ChatGPT: Italy announced a temporary halt to the “...processing [of] Italian users’ data amid a probe into a suspected breach of Europe’s strict privacy regulations.”

Other challenges that have already surfaced stem from unconscious bias being built into AI models. Because an AI’s behavior is directly shaped by the data used to train it, there have been instances where biased training data influenced a model’s output in ways that were labeled racist and sexist. Governance around what data is used to train AI, and how that data is managed, is paramount if organizations don’t want a PR disaster on their hands.

Many of the concerns around AI are driven by the limitations of the technology, specifically:

  • AI has no common sense: It cannot understand context and meaning, which can lead to errors and misinterpretations.
  • The high cost of training and managing AI: Not only do you need a large amount of accurate, high-quality historical data for AI to work well, but you also need to keep feeding it quality data so the models can be fine-tuned to work better over time. The probability of biasing the AI is high if one isn’t careful.
  • Code ownership: AI doesn’t create anything new; it generates answers based on what it has seen in the data it’s given. Who owns the code if a development team uses AI to build it and the AI’s training data includes third-party code? That creates potential liability for businesses, particularly if the AI-built code is used externally.
  • Transparency: AI algorithms can be complex and difficult to understand. We know the inputs and the outputs, but sometimes there is a “black box” of decision-making happening within the AI. This complexity will only grow over time, and so will the risk the black box poses, as AI evolves from making decisions to executing them without manual review.

Every organization will need to decide what its risk tolerance is for AI. DevOps teams might decide to apply permissions to AI models in much the same way they do for human workers; a simple sketch of that idea follows below. How large a “black box” organizations are willing to accept around critical data and applications will also be a factor. Still, one can see the potential of this technology to revolutionize not just testing, but DevOps overall, especially within IBM i environments.
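To make that permissions idea concrete, here is a minimal sketch (all names are hypothetical, not a Rocket product API) that treats an AI agent as just another principal whose permissions are checked before it can act on a pipeline, exactly as they would be for a human team member:

```python
# Minimal sketch, hypothetical names: an AI agent is just another principal
# whose permissions are checked the same way a human developer's would be.
from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    kind: str                              # "human" or "ai_agent"
    permissions: set[str] = field(default_factory=set)

def can_perform(principal: Principal, action: str) -> bool:
    """Same permission check regardless of whether the actor is a person or an AI."""
    return action in principal.permissions

# A human developer may deploy to production; the AI agent may only propose tests.
dev = Principal("jsmith", "human", {"read_source", "run_tests", "deploy_prod"})
ai_bot = Principal("test-gen-bot", "ai_agent", {"read_source", "suggest_tests"})

for actor in (dev, ai_bot):
    for action in ("read_source", "deploy_prod"):
        verdict = "allowed" if can_perform(actor, action) else "denied"
        print(f"{actor.name} -> {action}: {verdict}")
```

In practice this logic would live in whatever identity and access-management layer already governs the pipeline; the point is simply that an AI agent gets a scoped, auditable set of permissions rather than blanket access.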

In the next and final post in the series, we will share recommendations on how to think about AI, how to decide whether it will be a good tool for your company, and how to set yourself up to take advantage of the technology in a way that minimizes risk to the business.

Learn more about the future of IBM i+ CI/CD and read our whitepaper or talk to an expert today.