How AI Is Changing Software Testing 

While technologies like cloud-native solutions and low-code/no-code platforms effectively accelerate the development process, software testing remains a significant bottleneck, often slowing down the entire application development cycle.

These delays keep products from reaching end users quickly, hurting the overall time to market.

This challenge presents an opportunity to integrate artificial intelligence into software testing.

When you look at innovations in AI software testing, you find tools like Katalon Studio, TestCraft, and Functionize offering AI-powered automated testing from end to end. And from a distance, it seems like they have already shaken up the software testing domain and changed how things are done. 

However, I wanted to dive deeper and understand the sentiment of SQA engineers towards AI testing tools, where they think their industry is headed, and whether, and at what scale, they are already using AI in their processes. 

I interviewed several SQA engineers working at Codup and discussed the current state of software testing, the innovations in automated testing tools, the challenges of using AI in software testing, and future expectations. 

AI in Software Testing: Current State

Currently, AI in software testing is applied in limited, isolated ways that don’t take full advantage of its potential. 

According to Katalon’s State of Software Quality Report 2024, AI is mostly applied to test case generation, in both manual testing (50% of respondents) and automated testing (37%). Test data generation is another area where AI shines, with 36% of respondents reporting using it.
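To make the test data generation use case concrete, much of it boils down to producing varied but reproducible records for a form or API under test. Here is a minimal sketch in plain Python; the function name, fields, and value ranges are my own invention for illustration, not taken from any particular tool:

```python
import random
import string

def generate_test_users(count, seed=None):
    """Generate randomized user records for form or API testing."""
    rng = random.Random(seed)  # seedable, so failing runs can be reproduced
    users = []
    for _ in range(count):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "username": name,
            "email": f"{name}@example.com",
            "age": rng.randint(18, 90),
        })
    return users

# Three reproducible records for a hypothetical signup form
sample = generate_test_users(3, seed=42)
```

In practice, AI tools add value on top of this kind of scaffolding by suggesting realistic distributions and edge values, but the seedable-generator pattern stays the same.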

Software testers often use AI for low-stakes, isolated tasks, such as drafting free-text test designs or writing test scripts for automated testing tools like Selenium, mostly via ChatGPT, because it is quick and easy to implement. 
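As an example of this workflow, a tester might paste a free-text design such as "usernames must be 3 to 20 alphanumeric characters" into ChatGPT and get back a small script like the one below. The validator and its rules here are hypothetical, purely to show the shape of what gets generated:

```python
import re

def is_valid_username(username):
    """Hypothetical rule from a free-text test design:
    3 to 20 alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,20}", username))

# The kind of case table an LLM typically drafts from the design text
cases = [
    ("alice99", True),    # typical username
    ("ab", False),        # too short
    ("bob!", False),      # special character not allowed
    ("a" * 21, False),    # too long
]

for username, expected in cases:
    assert is_valid_username(username) == expected, username
```

Testers then fold drafts like this into their Selenium or other automation suites after a manual review, precisely because, as the engineers quoted below note, the generated logic can be wrong.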

But why aren’t more SQA engineers using the AI tools mentioned above and implementing end-to-end AI-powered automated testing? 

The whole conversation brought an interesting perspective into the spotlight: should we even use AI for software testing? 

Hasham Khalid, the lead software engineer working on several products at Codup, offered an interesting perspective on whether AI should be used in software development. 

He pointed out that professional developers frequently use AI-powered tools to write code; research conducted by Gartner concludes that 80% of developers today use AI for writing code. Hasham believes that because these tools often generate false logic, software testers are needed to validate and correct the code before it goes into production. This is where you want testers to use their human eye to check for potential risks and shortfalls and to flag unoptimized, clunky code. Hasham’s verdict: if you are using AI to write code, you shouldn’t rely too heavily on it for testing. 

“As of now, AI can write crappy code because it learns from crappy code. It needs validation checks, so human testers are essential for verifying logic. Right now, the human-to-AI intervention ratio is about 80 to 20. This might improve to 60 to 40 in the next decade, but that’s a stretch. In simple terms, AI won’t be reliable for testing anytime soon.” Hasham Khalid. 

Abdullah Khan, the Associate Product Manager at Codup, on the other hand, believes there is some potential for AI to speed up the application development process and make it more efficient. 

“Currently, AI can automate about 50% of routine tasks by predicting user scenarios based on existing data. I don’t believe, however, that a fully autonomous software testing tool is to come by even in the next decade. AI may improve and streamline the process though.”

That immediately brought an image of Iron Man and Jarvis to my mind. Jarvis was the closest AI “friend” of Tony Stark who assisted Iron Man in every situation. Similarly, AI for software testers is like a behind-the-scenes “friend” who does all the legwork but the final task needs to be done by the software testers themselves. 

AI in Software Testing: Challenges 

With so much hype and potential, what’s preventing testers from adopting AI to improve testing capability? Some of the major barriers are as follows. 

  • Not 100% Reliable

The major sentiment across the board is that AI isn’t accurate. It is fed data from which it learns, understands context, and generates output. 

Some AI models “hallucinate” or make up false information to fill gaps in knowledge, context, or understanding. SQA Engineers believe testing needs consistent human judgment and involvement as AI will never be able to replicate creative problem-solving skills and empathy. 

When it comes to edge cases and complex user scenarios, Madiha Sheikh, an SQA engineer at Codup, believes AI has serious limitations. “It is impractical to depend completely on AI for testing as it cannot yet address highly complex user scenarios and has failed in identifying edge cases.”

And it’s not just about being inaccurate. Ahmed Razzak, the SQA Lead at Codup, points out the harmful implications of using AI in software testing: “Test applications sometimes involve sensitive or personal data. Safeguarding this data during testing processes and preventing leaks is critical to maintaining user trust.”

  • Insufficient Domain Expertise 

Ahmed further shares his skepticism about AI testing tools concerning their insufficient domain knowledge: “AI testing used solely for comprehensive test coverage is a tough nut to crack due to the sheer variability and diversity of application inputs. Feeding domain knowledge to AI systems could be challenging.” 

LLMs rely on vast amounts of public data, which underpins how well they function. SQA engineers believe it is difficult to explain business logic to AI, or to curate training data that fairly reflects the variety of scenarios and the complexity of the software under test. 

In addition, AI models may struggle to adjust to software environments that change quickly, requiring frequent updates and retraining. Human testers, by contrast, can promptly judge and adapt to novel circumstances and apply their expertise to handle unexpected scenarios. 

AI in Software Testing: What Does The Future Hold? 

Looking ahead, will AI work autonomously, freeing testers to make more valuable contributions? 

Here are some key areas of innovation in the software testing domain and how AI may transform that in the future:

  • Using machine learning to analyze historical data and predict defects and risks. This approach will allow proactive problem resolutions, improving software quality and reliability. 
  • More sophisticated algorithms capable of automatically designing and adapting tests as software evolves, ensuring a dynamic and responsive testing process. 
  • AI will revolutionize performance testing by accurately predicting and simulating real-world scenarios leading to precise testing under various conditions. This will ensure software robustness and scalability.
  • AI will help analyze user behavior to improve user experience testing, helping create more user-centric software. 
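To make the first point above concrete, "using machine learning on historical data to predict defects" can start as simply as scoring each module by its past defect density and flagging the riskiest for extra testing. A minimal pure-Python sketch, with module names and figures invented for illustration:

```python
def defect_risk_scores(history):
    """Score modules by historical defect density.

    history: list of (module, changes, defects) tuples from past releases.
    Returns {module: defects_per_change}, a crude proxy for risk.
    """
    scores = {}
    for module, changes, defects in history:
        scores[module] = defects / changes if changes else 0.0
    return scores

def riskiest(history, top_n=1):
    """Modules to prioritize for testing, highest risk first."""
    scores = defect_risk_scores(history)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical past-release data: (module, code changes, defects found)
history = [
    ("checkout", 120, 30),
    ("search", 200, 10),
    ("profile", 50, 2),
]
```

Here `riskiest(history)` would surface "checkout" (30 defects over 120 changes). Real predictive-testing tools replace this frequency count with trained models over richer features, but the idea of steering test effort toward historically risky code is the same.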

It is, however, important to understand that AI is here to make things faster, not to take away what we do. Whenever AI comes up, we naturally adopt a ‘fight response’, becoming defensive about its use and comparing ourselves with artificial intelligence.  

That could be one of the reasons for the slow adoption of AI in this domain. 

To truly benefit from this technology, we need to look beyond the basics. 

Instead of asking “Will AI replace software testers?”, we should be asking “How can AI help software testers?”

Maria Ilyas

A writer by profession, Maria Ilyas is an eCommerce and digital marketing enthusiast and is always digging into the latest marketing trends, best practices, and growth strategies.
