A Practical Guide to Generating Squish Test Scripts with AI Assistants

After our last article on using Squish via the Model Context Protocol (MCP) (see here), many customers requested more guidance on generating test scripts. This post focuses on one possible setup for using AI assistants to create Squish test scripts.

While this post is especially relevant for QA developers and engineers looking for test automation tools, the conclusions explain why we believe the described approach makes sense in the rapidly evolving field of generative AI. It should also be of interest to generalists who want to understand how test script authoring with foundation models is developing.

Note that we use Cursor in this article, but a similar setup can be achieved with all the major AI code assistants, including Windsurf, Claude Code, GitHub Copilot, and others. Naturally, this setup works on all platforms supported by Squish.

 

New to Squish? Squish is a professional GUI test automation framework for apps on Android (including Android Automotive), macOS, iOS, Linux, and Windows. It also supports native and cross-platform toolkits for embedded targets. See our product page or product documentation for more information, or visit this page for Qt developers.

The Setup

The setup we use for generating Squish test scripts with AI assistants looks like this:

[Diagram: the Squish with AI assistants setup]

 

The components in the image: 

  • Squish Rules is a set of generic guidelines that act as a 'style guide' for the AI assistant, ensuring generated scripts follow your team's conventions, maintainability standards, and Squish best practices. We are keen to develop these general-purpose Squish guidelines further, so pull requests and suggestions are welcome.
  • Squish Docs We use Cursor as our AI assistant and Cursor's built-in functionality to index the Squish developer documentation. See the instructions here and adapt them for your AI assistant of choice. Indexing the documentation ensures the AI assistant has the most accurate and up-to-date context for generating valid Squish commands and following best practices.
  • Product Spec Cursor can see the full source code of the Address Book application (the Application Under Test, AUT). This matters because we used Cursor to generate the “Product Spec”, since we did not already have one for this demo application. The document enables the AI assistant to understand the application's functionality, UI elements, and potential test scenarios, leading to more relevant and accurate test script generation. We keep it up to date as we add new functionality to the application and create new test cases. Adding your existing product documentation or product requirements document (PRD), including user guides, to the “Product Spec” is worth exploring, as it gives the foundation model more context. Similarly, retrieval-augmented generation (RAG) can be used in large-scale software projects to keep product requirements and other material available to the model.
  • Stub Script & Object Map is created by recording an initial Stub Script with the Squish IDE.  This gives the AI assistant a basic understanding of the mapping between the UI elements in the application and objects. The Object Map needs to be kept up to date as your application evolves. Utilizing the AI assistant to generate the object map directly and augmenting the object map from your UI layer code is a possibility as well. 
  • Squish Runner & Feedback loops The AI assistant receives test script invocation results and error logs from the AUT either with the Squish MCP server example or direct invocation of the squishrunner CLI. With these results, the AI assistant is able to iteratively improve the generated Squish Test Scripts and solve the problems it encounters during re-execution of the test scripts creating a feedback loop.  

The above setup aims to give the model the context it needs to improve its accuracy and consistency, working around the probabilistic nature of foundation models.
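As a sketch of the feedback loop, a thin wrapper (or the AI assistant itself, via its terminal access) can invoke the squishrunner CLI and read the results back. The suite and test case names below are assumptions for this demo, and squishserver must already be running:

    # run_and_report.py: minimal sketch of the runner side of the feedback
    # loop (suite/case names are assumptions; squishserver must be running)
    import subprocess

    def run_test(suite, case):
        """Run one Squish test case; return exit code and combined log."""
        result = subprocess.run(
            ["squishrunner", "--testsuite", suite, "--testcase", case],
            capture_output=True, text=True,
        )
        return result.returncode, result.stdout + result.stderr

    code, log = run_test("suite_addressbook_py", "tst_add_and_delete_contacts")
    if code != 0:
        # Hand the log back to the AI assistant so it can revise the
        # generated script and re-run it, closing the loop.
        print(log)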

All files mentioned above are available in the accompanying repository. 

Generating an example test script  

In the following video we use the setup outlined above to create a test script that opens the Address Book application, adds new contacts, deletes some of them, and verifies that the contacts were deleted. The application is included in the Squish for Windows package. As mentioned, if you are using another target platform, this process and setup work just as well.

 

 

The generated test script is available in the repository. Depending on how much control you want over the creation of the test scripts, you can also let the AI assistant automatically run and verify the scripts it creates.
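For orientation, a Squish test script for this scenario looks roughly like the sketch below. It is illustrative only: the real generated script is in the repository, and the symbolic names are assumptions that would come from your Object Map. Squish's global script functions (startApplication, waitForObject, type, test.compare, and so on) are available when the script runs under squishrunner:

    # test.py: illustrative sketch of a script for this scenario; the
    # symbolic names are assumptions that would come from the Object Map
    import names

    def main():
        startApplication("addressbook")
        # Create a new address book and add one contact
        activateItem(waitForObjectItem(names.address_Book_QMenuBar, "File"))
        activateItem(waitForObjectItem(names.file_QMenu, "New"))
        clickButton(waitForObject(names.address_Book_Add_QToolButton))
        type(waitForObject(names.forename_LineEdit), "Jane")
        type(waitForObject(names.surname_LineEdit), "Doe")
        clickButton(waitForObject(names.add_OK_QPushButton))
        table = waitForObject(names.address_Book_QTableWidget)
        test.compare(table.rowCount, 1, "one contact after adding")
        # Select the contact, remove it, and verify the deletion
        mouseClick(waitForObjectItem(names.address_Book_QTableWidget, "0/0"))
        activateItem(waitForObjectItem(names.address_Book_QMenuBar, "Edit"))
        activateItem(waitForObjectItem(names.edit_QMenu, "Remove"))
        test.compare(table.rowCount, 0, "contact removed after deletion")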

Conclusions

We have outlined our setup for generating Squish test scripts with the help of AI assistants and presented one possible approach for leveraging foundation models in test script generation. We believe that owning your test scripts and using a similar setup for authoring them makes sense for the following reasons:

  1. Foundation models are constantly improving Choosing a setup where you can use the latest models lets you benefit fully from the fast pace of model development, without being locked into a single provider. Compare this to relying on a test service provider that updates its models infrequently and does not disclose model details.
  2. Context for the model As discussed, our setup aims to provide the foundation model with context to improve its accuracy and consistency. We expect this to be key to generating usable test scripts.
  3. Intense competition among AI code assistants New AI assistant functionality becomes available to your team immediately, and you are free to switch assistants or go with open-source alternatives.
  4. Test scripts remain valuable Squish has been around for over 20 years. With the help of foundation models, you can translate test scripts from another test framework to Squish if needed. This will likely require some manual work, but having the option means you can be confident that the assets you have built stay valuable in the future. This is not always possible with service solutions that do not allow exporting test scripts.
  5. The combination of tools Composing the tools the AI assistant uses yourself, rather than relying on a centralized service, allows for greater flexibility. Your setup can include RAG for product documentation context and other MCP servers as needed.

In our setup we used Cursor, but, as mentioned, Claude Code and other AI assistants support similar workflows. Start here for Claude Code and learn how to author Squish test scripts with it.

We invite you to try out the setup described here for generating test scripts for your applications. We are committed to developing generative AI and AI enablers for all our testing tools and we look forward to working with our customers to improve their testing workflows. 

Need more information?   

Please contact the author Otso Virtanen at @qt.io - Otso is the product lead for the Generative AI/AI initiatives for Qt Group's Quality Assurance tools. For Squish, see our product page or product documentation for more information. 

Acknowledgments: Marius Schmidt for the setup and test runs and Slobodan Vrkacevic for review comments. 

 

