Escape the Grind: Let AI Agents Handle the Drudgery, You Handle the Strategy
Traditional QA is a bottleneck. This session unveils a revolutionary, agent-based framework that creates a truly autonomous testing ecosystem with human intelligence at its core. See how specialized AI agents manage the entire testing lifecycle, from requirements analysis through script-less test execution and reporting, under human supervision. Escape the drudgery of repetitive manual work and brittle tests. It's time to elevate your role from execution to strategy and let AI handle the grind.
Target Audience: Testers, Developers, Test Managers, Project Managers
Prerequisites: Familiarity with the Software Testing Life Cycle (STLC), as well as basic concepts of test automation and AI
Level: Basic
Extended Abstract:
In today's fast-paced agile environments, where AI is accelerating development, traditional quality assurance has become a significant bottleneck, hampering release cycles and increasing risk. This challenge extends beyond mere test execution to encompass the entire testing lifecycle. This session introduces a paradigm shift from automating isolated phases to creating a proactive, intelligent, and autonomous QA ecosystem where humans remain the core decision-makers.
We will present the 'Agentic QA Framework,' an open-source, agent-based system designed to revolutionize software quality assurance. The framework's architecture is built for flexibility and extensibility, centered around an Orchestrator that manages specialized AI agents and allows for a 'plug-and-play' approach to adding new capabilities.
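To make the architecture concrete, here is a minimal sketch of the Orchestrator pattern described above. All names (`Agent`, `RequirementsReviewAgent`, the event shapes) are illustrative assumptions, not the framework's actual API: agents implement a common interface, register with the Orchestrator for the event types they handle, and new capabilities plug in without changing the Orchestrator itself.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Common interface every specialized agent implements."""
    @abstractmethod
    def handle(self, event: dict) -> dict: ...

class RequirementsReviewAgent(Agent):
    """Hypothetical agent: flags requirements with no measurable criterion."""
    def handle(self, event: dict) -> dict:
        text = event["payload"]
        issues = [] if "shall" in text else ["no measurable acceptance criterion"]
        return {"type": "requirements.reviewed", "issues": issues}

class Orchestrator:
    """Routes events to the agents registered for each event type."""
    def __init__(self):
        self._registry: dict[str, list[Agent]] = {}

    def register(self, event_type: str, agent: Agent) -> None:
        # 'Plug-and-play': adding a capability is one register() call.
        self._registry.setdefault(event_type, []).append(agent)

    def dispatch(self, event: dict) -> list[dict]:
        agents = self._registry.get(event["type"], [])
        return [agent.handle(event) for agent in agents]

orch = Orchestrator()
orch.register("requirement.created", RequirementsReviewAgent())
results = orch.dispatch({"type": "requirement.created",
                         "payload": "The system shall respond within 2 seconds."})
print(results[0]["issues"])  # prints []
```

The design choice that matters here is the registry: the Orchestrator knows only the `Agent` interface, so a test-generation or reporting agent can be added later with a single `register()` call.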
The session will detail the framework's end-to-end, event-driven workflow, which automates critical processes while keeping humans in the loop for supervision and final approval. We will cover how agents autonomously handle:
- Requirements Review: Providing early, 'shift-left' feedback on the clarity, consistency, and testability of requirements directly within systems like Jira.
- Test Case Generation, Review & Classification: Automatically creating and classifying comprehensive test cases (e.g., UI, API) with appropriate test data in a test management system like Zephyr. The agents also review these test cases, providing feedback on their correctness, relevance, completeness, and test data quality.
- Test Execution: Showcasing a dedicated UI test execution agent that operates without traditional scripts. It uses multimodal AI models and computer vision to understand the UI, simulate human interaction, and perform robust visual verifications, making it resilient to common test automation issues like selector changes.
- Test Reporting: Collecting results and generating detailed reports upon completion of the test execution cycle.
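The event-driven workflow behind these stages can be sketched as a simple event loop with a human-in-the-loop gate. The stage names and the `human_approves` hook below are assumptions for illustration; in a real deployment the approval step would wait on a reviewer's decision (for example, a comment in Jira) rather than returning immediately.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    type: str
    payload: dict = field(default_factory=dict)

def human_approves(event: Event) -> bool:
    # Placeholder for human supervision; a real system would block here
    # until a reviewer signs off on the generated test cases.
    return True

# Hypothetical pipeline: each completed stage emits the event
# that triggers the next one.
NEXT_STAGE = {
    "requirement.created": "testcases.generated",
    "testcases.generated": "tests.executed",
    "tests.executed": "report.published",
}

def run_pipeline(first: Event) -> list[str]:
    queue = [first]
    trace = []
    while queue:
        event = queue.pop(0)
        trace.append(event.type)
        if event.type == "testcases.generated" and not human_approves(event):
            break  # supervisor rejected the test cases: stop before execution
        nxt = NEXT_STAGE.get(event.type)
        if nxt:
            queue.append(Event(nxt, event.payload))
    return trace

print(run_pipeline(Event("requirement.created")))
# prints ['requirement.created', 'testcases.generated',
#         'tests.executed', 'report.published']
```

Because every stage is just an event, inserting an extra step (such as the test-case review described above) means adding one entry to the stage map, not rewriting the loop.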
This talk will demonstrate how a robust automation framework transforms QA from a labor-intensive process into a strategic, autonomous discipline. By automating manual and repetitive work, it empowers QA professionals to evolve from task executors into quality strategists. Their focus shifts toward higher-value activities, such as developing long-term testing strategies and contributing enhancements to the automation framework itself. This elevation of the QA role is critical for delivering quality at the speed of modern development and fully leveraging human ingenuity.
Test Automation Architect
Taras Paruta is a Test Automation and AI enthusiast with 13 years of experience. He specializes in the strategic development and implementation of innovative automation solutions for software testing, leveraging agent-based AI frameworks and open-source tool stacks. A native of Ukraine, Taras holds a Master's degree in Banking and a Bachelor's degree in Computer Science. He has lived in Vienna since 2018 and enjoys sports and computer games in his free time.
