Principal QA Engineer - AI & Cloud Services
📍 Job Overview
- Job Title: Principal QA Engineer - AI & Cloud Services
- Company: AVEVA
- Location: Cambridge, England, United Kingdom
- Job Type: Regular, Full-Time (Hybrid)
- Category: Quality Assurance, AI & Cloud Services
- Date Posted: 2025-08-09
- Experience Level: 8+ years
- Remote Status: Hybrid (3 days office-based)
🚀 Role Summary
AVEVA is looking for a Principal QA Engineer to own quality for the Core AI Services behind AVEVA Connect: validating distributed, cloud-native services, public APIs, and AI pipelines on Microsoft Azure, and defining the testing strategies, frameworks, and best practices around them.
Key Responsibilities:
- Validate distributed, cloud-native services and public APIs for AVEVA Connect's Core AI Services.
- Design and implement automated test suites for APIs, service components, and AI pipelines.
- Collaborate with developers and data scientists to establish service-level quality metrics and observability hooks.
- Ensure AI systems' accuracy, consistency, safety, and compliance with AI regulatory frameworks.
- Mentor junior testers and foster a culture of continuous learning and innovation.
Key Skills:
- Proven experience in software testing or QA for cloud-native applications, including 2+ years working on AI/ML systems.
- Proficiency in designing automated testing frameworks and experience with Azure DevOps, CI/CD pipelines, and containerized test environments.
- Strong understanding of API testing, performance profiling, and security testing (including the OWASP Top 10).
- Excellent problem-solving skills and the ability to work in Agile teams across global R&D locations.
💻 Primary Responsibilities
AI & Cloud Expertise:
- Familiarity with LLM evaluation techniques, output scoring, and validation frameworks (a minimal scoring sketch follows this list).
- Understanding of key AI concepts such as prompt engineering, RAG, model orchestration, and hallucination detection.
- Experience in testing for accuracy, relevance, and consistency of AI model predictions/generations.
- Defining performance metrics for AI services and testing against them.
- Awareness of AI safety, bias detection, and explainability techniques.
- Experience ensuring compliance with AI regulations and standards (e.g., NIST AI RMF, EU AI Act).
- Strong belief in ethical AI practices, transparency, and end-user trust.
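📝 Enhancement Note: To make "output scoring" and "hallucination detection" concrete, below is a minimal, illustrative Python sketch of a groundedness check for a RAG-style answer. It is not an AVEVA framework; the helper names, test data, and 0.8 threshold are hypothetical.

```python
"""Illustrative sketch: naive groundedness check for a RAG-style answer.

Not an AVEVA framework; names, test data, and the 0.8 threshold are
hypothetical, shown only to make output scoring and hallucination
detection concrete.
"""
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def groundedness_score(answer: str, context: str) -> float:
    """Fraction of answer tokens also present in the retrieved context.

    A low score suggests the answer contains content the context does not
    support (a crude hallucination signal).
    """
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & tokenize(context)) / len(answer_tokens)


def test_answer_is_grounded_in_context():
    context = (
        "Pump P-101 tripped at 14:32 due to high bearing temperature. "
        "Maintenance replaced the bearing on 12 March."
    )
    answer = "P-101 tripped because of high bearing temperature at 14:32."
    assert groundedness_score(answer, context) >= 0.8  # hypothetical threshold


if __name__ == "__main__":
    test_answer_is_grounded_in_context()
    print("groundedness check passed")
```

Production evaluation would typically layer richer techniques (semantic similarity, LLM-as-judge scoring, curated reference datasets) on top of simple lexical checks like this one.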
Core Responsibilities:
- Perform functional, performance, and security testing on cloud-native services deployed on Microsoft Azure (see the test sketch after this list).
- Collaborate with developers and data scientists to establish service-level quality metrics and observability hooks.
- Validate services against AI regulatory frameworks and ensure traceability, fairness, and robustness in outcomes.
- Participate in threat modelling and security validation of exposed APIs and AI services.
- Provide feedback early in the lifecycle to reduce defects and improve design.
- Mentor junior testers and encourage continuous learning.
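📝 Enhancement Note: For a flavour of the functional and security checks involved, here is a hedged pytest-style sketch against a hypothetical API. The base URL, routes, and expected headers are placeholders, not AVEVA's actual service contract.

```python
"""Illustrative sketch: functional and security smoke checks for a cloud API.

The endpoint, routes, and expectations are placeholders; real tests would
target the service's actual contract and run from the CI/CD pipeline.
Requires: pip install pytest requests
"""
import requests

BASE_URL = "https://api.example.com"  # hypothetical service URL
TIMEOUT = 10  # seconds


def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=TIMEOUT)
    assert resp.status_code == 200


def test_unauthenticated_request_is_rejected():
    # A protected resource should not be readable without credentials.
    resp = requests.get(f"{BASE_URL}/v1/models", timeout=TIMEOUT)
    assert resp.status_code in (401, 403)


def test_security_headers_present():
    # Spot-check a few common hardening headers (aligned with OWASP guidance).
    resp = requests.get(f"{BASE_URL}/health", timeout=TIMEOUT)
    assert "Strict-Transport-Security" in resp.headers
    assert resp.headers.get("X-Content-Type-Options", "").lower() == "nosniff"
```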
🎓 Skills & Qualifications
Education: Bachelor's degree in Computer Science, Software Engineering, or a related field. Relevant experience may be considered in lieu of a degree.
Experience: 8+ years of experience in software testing or QA for cloud-native applications, including 2+ years working on AI/ML systems or services.
Required Skills:
- Proficient in designing automated testing frameworks.
- Hands-on experience with Azure DevOps, CI/CD pipelines, and containerized test environments.
- Strong understanding of API testing, performance profiling, and security testing, including the OWASP Top 10 (a crude profiling sketch follows this list).
- Excellent problem-solving skills and the ability to work in Agile teams across global R&D locations.
- Demonstrated ability to mentor junior team members and foster a culture of continuous learning and innovation.
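📝 Enhancement Note: Performance profiling at this level usually means dedicated tooling (JMeter, Locust, Azure Load Testing), but the sketch below shows the underlying idea of measuring latency percentiles against a budget. The endpoint, sample count, and 500 ms budget are hypothetical.

```python
"""Illustrative sketch: crude latency profiling of an API endpoint.

The URL, sample count, and p95 budget are hypothetical; production load
tests would use a dedicated tool driven from the CI/CD pipeline.
Requires: pip install requests
"""
import statistics
import time

import requests

URL = "https://api.example.com/health"  # hypothetical endpoint
SAMPLES = 50


def measure_latencies(url: str, samples: int) -> list[float]:
    """Issue sequential GETs and record wall-clock latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies


if __name__ == "__main__":
    ms = sorted(measure_latencies(URL, SAMPLES))
    p50 = statistics.median(ms)
    p95 = ms[int(0.95 * (len(ms) - 1))]
    print(f"p50={p50:.1f} ms  p95={p95:.1f} ms")
    assert p95 < 500, "hypothetical p95 budget exceeded"
```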
Preferred Skills:
- Experience with AI-specific evaluation tools and techniques.
- Familiarity with AI regulatory frameworks and standards (e.g., NIST AI RMF, EU AI Act).
- Knowledge of AI ethics, fairness, and transparency principles.
📊 Portfolio & Project Requirements
Portfolio Essentials:
- A comprehensive portfolio showcasing a variety of AI and cloud-native testing projects.
- Examples of automated test suites, performance profiles, and security test cases.
- Case studies demonstrating collaboration with developers, data scientists, and architects to establish service-level quality metrics and observability hooks.
- Documentation of AI-specific evaluation techniques, output scoring, and validation frameworks.
Technical Documentation:
- Detailed test cases and test plans outlining functional, performance, and security testing strategies.
- Documentation of AI regulatory compliance and ethical considerations.
- Records of mentoring and knowledge-sharing activities with junior testers.
💵 Compensation & Benefits
Salary Range: £70,000 - £90,000 per annum (approx. $85,000 - $110,000 USD at current exchange rates)
Benefits:
- Flexible benefits fund.
- Emergency leave days.
- Adoption leave.
- 28 days annual leave (plus bank holidays).
- Pension.
- Life cover.
- Private medical insurance.
- Parental leave.
- Education assistance program.
Working Hours: Full-time, typically 40 hours per week, with flexible working hours under AVEVA's hybrid model (three days per week in the office).
🎯 Team & Company Context
Company Culture:
- Industry: Software and IT services.
- Company Size: Medium to large (6,500+ employees globally).
- Founded: 1967 (as CADCentre, renamed AVEVA in 2001), with a rich history in industrial automation and engineering software.
- Team Structure: The AI & Cloud Services team is part of AVEVA's global R&D organization, working on cutting-edge AI and cloud technologies to transform industrial operations.
Development Methodology:
- Agile development methodologies, including Scrum and Kanban.
- Collaborative development environments, with a focus on continuous integration, delivery, and deployment.
- Regular code reviews, testing, and quality assurance practices.
- Deployment strategies, CI/CD pipelines, and cloud infrastructure management on Microsoft Azure.
Company Website: https://aveva.com/
📝 Enhancement Note: AVEVA's culture emphasizes learning, collaboration, and inclusivity, with a strong focus on innovation and sustainability. The company holds over 150 patents and works on a diverse portfolio of over 75 industrial automation and engineering products.
Career & Growth Analysis:
- Career Level: Principal QA Engineer, responsible for defining and driving AI testing strategies, frameworks, and best practices across AVEVA's Core AI Services.
- Reporting Structure: Reports directly to the R&D Senior Manager - AI Core Services Development, with a matrix reporting line to the relevant product teams.
- Technical Impact: Significant influence on the quality, performance, and security of AVEVA's Core AI Services, ensuring they meet the highest standards of accuracy, consistency, and user trust.
Growth Opportunities:
- Technical Growth: Expand expertise in AI testing techniques, evaluation frameworks, and regulatory compliance.
- Leadership Development: Mentor junior testers and contribute to the development of AI testing best practices and standards.
- Architecture Decisions: Collaborate with architects and developers to design scalable, secure, and ethical AI services.
🌐 Work Environment
Office Type: Modern, collaborative office spaces designed to facilitate cross-functional teamwork and innovation.
Office Location(s): Cambridge, United Kingdom (one of AVEVA's global R&D locations; the company operates in over 40 countries).
Workspace Context:
- Collaborative workspaces with dedicated testing environments, multiple monitors, and testing devices available.
- Access to cutting-edge AI and cloud technologies, tools, and resources.
- Opportunities for cross-functional collaboration with developers, data scientists, and other stakeholders.
Work Schedule: Flexible working hours, with a focus on delivering high-quality work and maintaining a healthy work-life balance.
📝 Enhancement Note: AVEVA offers a hybrid working model, with employees expected to be in the office three days a week. Some positions may be fully office-based or remote, depending on the role's requirements.
📄 Application & Technical Interview Process
Interview Process:
- Technical Phone Screen: A brief phone call to assess your understanding of AI testing principles, cloud-native services, and API testing.
- Technical Deep Dive: A comprehensive technical interview focused on your experience with AI testing, evaluation frameworks, and regulatory compliance. You'll be asked to discuss your approach to designing automated testing frameworks, performance profiling, and security testing.
- Behavioral & Cultural Fit: An interview to assess your cultural fit with AVEVA, focusing on your collaboration skills, problem-solving abilities, and commitment to ethical AI practices.
- Final Decision: A final interview with the hiring manager to discuss your fit for the role and make a hiring decision.
Portfolio Review Tips:
- Highlight your experience with AI testing, evaluation frameworks, and regulatory compliance.
- Showcase your ability to design automated testing frameworks and collaborate with developers and data scientists.
- Demonstrate your understanding of AI ethics, fairness, and transparency principles.
Technical Challenge Preparation:
- Brush up on your knowledge of AI testing principles, cloud-native services, and API testing.
- Familiarize yourself with AVEVA's products, services, and AI testing best practices.
- Prepare examples of your work demonstrating your ability to design automated testing frameworks, performance profiling, and security testing.
ATS Keywords:
- Software Testing, QA, Cloud-Native Applications, AI/ML Systems, Automated Testing Frameworks, Azure DevOps, CI/CD Pipelines, API Testing, Performance Profiling, Security Testing, Problem-Solving, Agile Teams, Mentoring, AI Ethics, AI Safety, AI Regulations, AI Compliance.
📝 Enhancement Note: AVEVA uses Workday as its Applicant Tracking System (ATS). Familiarize yourself with the system's functionality and optimize your resume and application materials accordingly.
🛠 Technology Stack & Web Infrastructure
Testing Tools:
- Azure DevOps: For CI/CD pipelines, automated testing, and project management.
- Postman: For API testing and documentation.
- JMeter, LoadRunner, or other performance testing tools.
- OWASP ZAP: For security testing and vulnerability assessment (see the sketch below).
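📝 Enhancement Note: The sketch below shows one way to drive a ZAP scan from Python, based on the publicly documented python-owasp-zap-v2.4 client. The target URL, API key, polling intervals, and the "fail on high-risk alerts" gate are placeholders; a real pipeline would also select scan policies and export a report.

```python
"""Illustrative sketch: driving an OWASP ZAP scan from Python.

Based on the publicly documented python-owasp-zap-v2.4 client; the target,
API key, polling intervals, and failure gate are placeholders.
Requires a running ZAP instance and: pip install python-owasp-zap-v2.4
"""
import time

from zapv2 import ZAPv2

TARGET = "https://api.example.com"  # hypothetical target
zap = ZAPv2(apikey="changeme")      # placeholder API key

# Crawl the target, then actively scan what the spider found.
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Gate the build on high-risk findings.
high_risk = [a for a in zap.core.alerts(baseurl=TARGET) if a.get("risk") == "High"]
assert not high_risk, f"{len(high_risk)} high-risk ZAP alerts found"
```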
AI & Cloud Technologies:
- Microsoft Azure: The primary cloud platform for AVEVA's services and AI workloads.
- AI Services: AVEVA's Core AI Services, including natural language processing, computer vision, and machine learning capabilities.
- AI Evaluation Frameworks: Custom frameworks for evaluating AI model outputs, accuracy, and consistency.
AI Regulations & Compliance:
- NIST AI Risk Management Framework (RMF): A framework for managing AI risks and ensuring ethical AI practices.
- EU AI Act: The European Union's regulatory framework for AI, which entered into force in 2024, focusing on transparency, accountability, and governance.
AI Ethics & Fairness:
- AVEVA's AI Ethics Framework: A set of principles and guidelines for developing and deploying ethical AI systems.
- AI Fairness Toolkits: Libraries and tools for detecting and mitigating bias in AI models and datasets.
📝 Enhancement Note: AVEVA's technology stack is subject to change and evolve as the company continues to innovate and adapt to emerging technologies and industry trends.
👥 Team Culture & Values
AI Testing Values:
- Rigorous Evaluation: A commitment to thorough and comprehensive testing of AI systems to ensure accuracy, consistency, and safety.
- Continuous Learning: A dedication to staying up-to-date with the latest AI testing techniques, evaluation frameworks, and regulatory compliance standards.
- Collaborative Development: A focus on working closely with developers, data scientists, and other stakeholders to establish service-level quality metrics and observability hooks.
- Ethical AI Practices: A strong belief in the importance of ethical AI practices, transparency, and end-user trust.
Collaboration Style:
- Cross-Functional Integration: Close collaboration between AI testing teams, developers, data scientists, and other stakeholders to ensure the quality, performance, and security of AVEVA's Core AI Services.
- Code Review Culture: Regular code reviews and knowledge-sharing sessions to ensure high-quality testing frameworks and best practices.
- Mentoring & Knowledge Sharing: A commitment to mentoring junior testers and fostering a culture of continuous learning and innovation.
⚡ Challenges & Growth Opportunities
Technical Challenges:
- AI Model Evaluation: Developing and implementing evaluation frameworks for AI models, ensuring accuracy, consistency, and safety in AI outputs (a consistency-check sketch follows this list).
- Regulatory Compliance: Staying up-to-date with AI regulations and standards (e.g., NIST AI RMF, EU AI Act) and ensuring AVEVA's AI systems comply with relevant legal and ethical requirements.
- Performance Optimization: Designing and implementing automated testing frameworks that optimize AI system performance and scalability.
- AI Security: Identifying and mitigating security vulnerabilities in AI systems and exposed APIs.
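📝 Enhancement Note: One simple angle on the consistency challenge is to call the same service repeatedly with the same prompt and compare the outputs. The sketch below is illustrative only: the model call is stubbed, and the Jaccard-similarity threshold is a hypothetical quality bar.

```python
"""Illustrative sketch: consistency check across repeated AI generations.

The model call is stubbed; in practice it would call the service under
test. The 0.7 similarity threshold is a hypothetical quality bar.
"""
import itertools
import re


def fake_model(prompt: str) -> str:
    """Stand-in for the AI service under test (hypothetical)."""
    return "Pump P-101 tripped due to high bearing temperature."


def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two generations."""
    ta = set(re.findall(r"[a-z0-9]+", a.lower()))
    tb = set(re.findall(r"[a-z0-9]+", b.lower()))
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0


def test_repeated_generations_are_consistent():
    prompt = "Why did pump P-101 trip?"
    outputs = [fake_model(prompt) for _ in range(5)]
    scores = [jaccard(a, b) for a, b in itertools.combinations(outputs, 2)]
    assert min(scores) >= 0.7  # hypothetical consistency threshold


if __name__ == "__main__":
    test_repeated_generations_are_consistent()
    print("consistency check passed")
```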
Learning & Development Opportunities:
- AI Testing Specialization: Deepening your expertise in AI testing techniques, evaluation frameworks, and regulatory compliance.
- Emerging Technologies: Exploring and integrating emerging AI and cloud technologies into AVEVA's Core AI Services.
- Technical Leadership: Mentoring junior testers and contributing to the development of AI testing best practices and standards.
- Architecture Decisions: Collaborating with architects and developers to design scalable, secure, and ethical AI services.
📝 Enhancement Note: AVEVA's culture encourages continuous learning, innovation, and professional development. The company offers various training programs, conference attendance, and certification opportunities to support the growth and advancement of its employees.
💡 Interview Preparation
Technical Questions:
- AI Testing Principles: Explain your understanding of AI testing principles, cloud-native services, and API testing. Provide examples of your experience with AI testing, evaluation frameworks, and regulatory compliance.
- AI Testing Frameworks: Describe your approach to designing automated testing frameworks for AI systems. Discuss your experience with tools such as Azure DevOps, Postman, and other testing platforms.
- AI Evaluation Techniques: Demonstrate your knowledge of AI evaluation techniques, output scoring, and validation frameworks. Explain how you've applied these techniques in previous roles.
Company & Culture Questions:
- AI Ethics & Fairness: Discuss your understanding of AI ethics, fairness, and transparency principles. Explain how you've ensured ethical AI practices in your previous roles.
- AI Regulations & Compliance: Describe your experience with AI regulations and standards (e.g., NIST AI RMF, EU AI Act). Explain how you've ensured compliance with relevant legal and ethical requirements in your previous roles.
- Collaboration & Teamwork: Explain your approach to collaborating with developers, data scientists, and other stakeholders. Provide examples of successful cross-functional projects and initiatives.
Portfolio Presentation Strategy:
- AI Testing Portfolio: Highlight your experience with AI testing, evaluation frameworks, and regulatory compliance. Showcase your ability to design automated testing frameworks and collaborate with developers and data scientists.
- AI Testing Case Studies: Present case studies demonstrating your approach to AI testing, evaluation, and regulatory compliance. Explain the challenges you faced and the solutions you implemented.
- AI Testing Best Practices: Share your insights on AI testing best practices, ethical AI practices, and regulatory compliance. Discuss how you've contributed to the development of AI testing standards and guidelines in your previous roles.
📝 Enhancement Note: AVEVA's interview process focuses on assessing your technical expertise, cultural fit, and commitment to ethical AI practices. Be prepared to discuss your experience with AI testing, evaluation frameworks, and regulatory compliance in detail.
📌 Application Steps
To apply for the Principal QA Engineer - AI & Cloud Services position at AVEVA:
- Submit Your Application: Visit the AVEVA careers website (https://aveva.com/en/about/careers/) and search for the job title "Principal QA Engineer - AI & Cloud Services." Click on the job listing and follow the instructions to submit your cover letter and resume.
- Prepare Your Portfolio: Tailor your portfolio to showcase your experience with AI testing, evaluation frameworks, and regulatory compliance. Highlight your ability to design automated testing frameworks and collaborate with developers and data scientists.
- Optimize Your Resume: Ensure your resume is optimized for AVEVA's Applicant Tracking System (ATS) by including relevant keywords and formatting your document appropriately. Tailor your resume to emphasize your experience with AI testing, evaluation frameworks, and regulatory compliance.
- Prepare for Technical Interviews: Familiarize yourself with AVEVA's products, services, and AI testing best practices. Brush up on your knowledge of AI testing principles, cloud-native services, and API testing. Prepare examples of your work demonstrating your ability to design automated testing frameworks, performance profiling, and security testing.
- Research AVEVA: Learn about AVEVA's products, services, and AI testing best practices. Understand the company's culture, values, and commitment to ethical AI practices. Prepare questions to ask during your interviews to demonstrate your interest in the role and the company.
⚠️ Important Notice: This enhanced job description includes AI-generated insights and industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.
Application Requirements
Candidates should have 8+ years of experience in software testing or QA for cloud-native applications, including 2+ years working on AI/ML systems. Proficiency in designing automated testing frameworks and experience with Azure DevOps and CI/CD pipelines are essential.