AI Platform Engineer
📍 Job Overview
- Job Title: AI Platform Engineer
- Company: Dexory
- Location: Hybrid - Wallingford, Oxfordshire, United Kingdom
- Job Type: Full-Time
- Category: AI & Machine Learning Engineer
- Date Posted: June 24, 2025
- Experience Level: Mid-Senior Level (3-5+ years)
- Remote Status: Hybrid (1 office day per week)
🚀 Role Summary
- Build and scale data pipelines for retrieval-augmented generation, telemetry, and synthetic data to support next-generation AI systems.
- Deploy and monitor ML models using modern MLOps stacks to ensure performance and reliability.
- Own vector stores and CI/CD for both data and models, ensuring efficient and automated processes.
- Support annotation, data versioning, and metadata tooling for applied AI teams to enhance collaboration and data management.
- Work cross-functionally with ML, robotics, and product teams to integrate AI solutions into real-world applications.
📝 Enhancement Note: This role combines aspects of ML Infrastructure and Data Engineering, focusing on building and maintaining data pipelines and ML model deployment for AI-driven applications in a logistics context.
💻 Primary Responsibilities
- Pipeline Development & Scaling: Design, implement, and scale data pipelines for retrieval-augmented generation, vector databases, and synthetic data to support AI systems (a minimal pipeline sketch follows this section).
- ML Model Deployment & Monitoring: Deploy and monitor ML models using modern MLOps stacks, ensuring optimal performance and reliability.
- CI/CD & Infrastructure Management: Own vector stores and CI/CD processes for both data and models, automating workflows and improving efficiency.
- Tooling & Collaboration: Support annotation, data versioning, and metadata tooling for applied AI teams, fostering collaboration and data management best practices.
- Cross-Functional Collaboration: Work closely with ML, robotics, and product teams to integrate AI solutions into real-world logistics applications.
📝 Enhancement Note: The primary responsibilities emphasize the dual role of the AI Platform Engineer in managing data pipelines and ML model deployment, with a strong focus on collaboration and automation.
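As a rough illustration of the pipeline work described above, the sketch below shows one minimal RAG ingestion step: split a document into chunks, embed each chunk, and upsert the results into a vector store. The `Chunk` type, the `embed` and `upsert` callables, and the fixed-size splitting strategy are illustrative assumptions, not details of Dexory's actual stack.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    doc_id: str
    text: str
    embedding: list[float] | None = None

def chunk_document(doc_id: str, text: str, size: int = 500) -> list[Chunk]:
    """Split a document into fixed-size character chunks (a deliberately naive strategy)."""
    return [Chunk(doc_id, text[i:i + size]) for i in range(0, len(text), size)]

def ingest(docs: dict[str, str],
           embed: Callable[[str], list[float]],
           upsert: Callable[[list[Chunk]], None]) -> int:
    """Chunk, embed, and upsert documents; returns the number of chunks written.

    `embed` and `upsert` are injected so the same step can target any embedding
    model or vector database; both are hypothetical interfaces here.
    """
    chunks: list[Chunk] = []
    for doc_id, text in docs.items():
        for chunk in chunk_document(doc_id, text):
            chunk.embedding = embed(chunk.text)
            chunks.append(chunk)
    upsert(chunks)
    return len(chunks)

# Toy usage with stand-in callables: a constant "embedding" and an in-memory store.
store: list[Chunk] = []
ingest({"doc-1": "telemetry " * 200}, embed=lambda t: [0.0, 1.0], upsert=store.extend)
```

Injecting `embed` and `upsert` keeps the step easy to test and lets the same code target any embedding model or vector database.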
🎓 Skills & Qualifications
Education: Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. A relevant Master's degree or Ph.D. is a plus.
Experience: 3-5+ years of experience in ML Infrastructure, Data Engineering, or MLOps roles, with a strong focus on data pipelines, ML model deployment, and collaboration with cross-functional teams.
Required Skills:
- Proficiency in Python and cloud services (AWS, GCP, Azure)
- Experience with ML orchestration tools (Airflow, MLflow, etc.)
- Familiarity with LLMs, vector databases, and ETL pipelines
- Strong problem-solving skills and ability to work independently
Preferred Skills:
- Experience with Unity or Blender for synthetic data generation
- Familiarity with observability tooling and robotics
- Knowledge of logistics or warehouse management systems
📝 Enhancement Note: The required and preferred skills highlight the need for a well-rounded AI Platform Engineer with expertise in data pipelines, ML model deployment, and collaboration tools, as well as an understanding of logistics or related domains.
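Because the required skills call out ML orchestration tools such as Airflow, a minimal Airflow 2.x DAG like the sketch below is the kind of artefact worth being able to discuss: three placeholder ETL tasks wired into a daily schedule. The DAG name, schedule, and task bodies are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # placeholder: pull raw telemetry from object storage

def transform():
    ...  # placeholder: clean records and compute embeddings

def load():
    ...  # placeholder: upsert embeddings into the vector store

with DAG(
    dag_id="telemetry_etl",           # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword; older releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```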
📊 Portfolio & Project Requirements
Portfolio Essentials:
- Demonstrate experience in building and scaling data pipelines using retrieval-augmented generation, vector databases, and synthetic data.
- Showcase ML model deployment and monitoring using modern MLOps stacks, with an emphasis on performance and reliability.
- Highlight collaboration and automation aspects, such as CI/CD processes and tooling for data and models.
Technical Documentation:
- Show adherence to code quality, commenting, and documentation standards for data pipelines and ML models.
- Explain version control, deployment processes, and server configuration for data pipelines and ML models.
- Describe testing methodologies, performance metrics, and optimization techniques for data pipelines and ML models.
📝 Enhancement Note: The portfolio requirements focus on data pipelines, ML model deployment, and collaboration tooling, with performance, reliability, and automation as recurring themes.
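For the testing-methodology point above, even a small pytest module is a useful portfolio artefact. The sketch below assumes a hypothetical `pipeline.chunk_document` helper (mirroring the ingestion sketch earlier) and checks two basic data-quality properties plus an edge case.

```python
from pipeline import chunk_document  # hypothetical module mirroring the earlier ingestion sketch

def test_chunks_cover_whole_document():
    text = "x" * 1234
    chunks = chunk_document("doc-1", text, size=500)
    # No data lost or duplicated when the chunks are reassembled.
    assert "".join(c.text for c in chunks) == text

def test_chunk_size_is_respected():
    chunks = chunk_document("doc-1", "x" * 1234, size=500)
    assert all(len(c.text) <= 500 for c in chunks)

def test_empty_document_produces_no_chunks():
    assert chunk_document("doc-1", "") == []
```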
💵 Compensation & Benefits
Salary Range: £60,000 - £80,000 per annum (Based on market research for AI & Machine Learning Engineers in the United Kingdom with 3-5+ years of experience)
Benefits:
- Share options
- Private healthcare via Bupa with 24/7 medical helpline
- Life insurance
- Income protection
- Pension: 4% employee with option to opt into salary exchange, 5% employer
- Employee Assistance Programme - mental wellbeing, financial and legal advice/support
- 25 days of holiday per year
- Full meals onsite in Wallingford
- Fun team events on and offsite, snacks of all kinds in the office
Working Hours: Full-time, with a hybrid work arrangement requiring 1 office day per week.
📝 Enhancement Note: The salary range and benefits are estimated based on market research for AI & Machine Learning Engineers in the United Kingdom with 3-5+ years of experience, considering the hybrid work arrangement and regional cost of living.
🎯 Team & Company Context
Company Culture:
- Industry: Logistics and Robotics
- Company Size: Medium (100-250 employees)
- Founded: 2020
- Team Structure: The AI Platform Engineer will work within the AI & Machine Learning team, collaborating cross-functionally with ML, robotics, and product teams.
- Development Methodology: Agile, with a focus on collaboration, iteration, and continuous improvement.
Company Website: Dexory
📝 Enhancement Note: The company culture section highlights the logistics and robotics industry focus, medium company size, and agile development methodology, with a strong emphasis on collaboration and cross-functional teamwork.
📈 Career & Growth Analysis
AI Platform Engineer Career Level: The AI Platform Engineer role is a mid-senior level position, focusing on building and maintaining data pipelines and ML model deployment for AI-driven applications in a logistics context.
Reporting Structure: The AI Platform Engineer will report directly to the AI & Machine Learning Team Lead and collaborate closely with ML, robotics, and product teams.
Technical Impact: The AI Platform Engineer will have a significant impact on the performance, reliability, and scalability of AI-driven logistics applications by managing data pipelines and ML model deployment.
Growth Opportunities:
- Technical Growth: Advance expertise in ML Infrastructure, Data Engineering, and MLOps, with opportunities to specialize in specific domains or take on leadership roles.
- Cross-Functional Collaboration: Expand knowledge of logistics, robotics, and product development by working closely with cross-functional teams.
- Career Progression: Transition into senior or leadership roles within the AI & Machine Learning team or explore opportunities in related domains, such as robotics or product management.
📝 Enhancement Note: The career and growth analysis section emphasizes the technical impact of the AI Platform Engineer role and highlights potential growth opportunities in technical expertise, cross-functional collaboration, and career progression.
🌐 Work Environment
Office Type: Hybrid, with 1 office day per week required in Wallingford, Oxfordshire.
Office Location(s): Wallingford, Oxfordshire, United Kingdom
Workspace Context:
- Collaboration: Work in an open, collaborative workspace with easy access to cross-functional teams, fostering knowledge sharing and innovation.
- Tools & Equipment: Access to modern development tools, multiple monitors, and testing devices to ensure optimal productivity and performance.
- Team Interaction: Engage in regular team meetings, code reviews, and pair programming sessions to maintain high coding standards and foster continuous learning.
Work Schedule: Full-time, with a hybrid work arrangement requiring 1 office day per week, with flexibility for deployment windows, maintenance, and project deadlines.
📝 Enhancement Note: The work environment section highlights the hybrid work arrangement, collaborative workspace, and flexible work schedule, with an emphasis on knowledge sharing, innovation, and continuous learning.
📄 Application & Technical Interview Process
Interview Process:
- Technical Preparation: Brush up on Python, cloud services (AWS, GCP, Azure), ML orchestration tools (Airflow, MLflow), LLMs, vector databases, ETL pipelines, and collaboration tools.
- Technical Assessment: Participate in a technical assessment focused on data pipeline design, ML model deployment, and collaboration tools, with an emphasis on performance, reliability, and automation.
- Cross-Functional Discussion: Engage in discussions with ML, robotics, and product teams to demonstrate your ability to collaborate and integrate AI solutions into real-world applications.
- Final Evaluation: Showcase your problem-solving skills, technical expertise, and cultural fit in the final evaluation stage.
Portfolio Review Tips:
- Data Pipeline Demonstration: Highlight your experience in building and scaling data pipelines using retrieval-augmented generation, vector databases, and synthetic data.
- ML Model Deployment: Demonstrate your ability to deploy and monitor ML models using modern MLOps stacks, with an emphasis on performance and reliability.
- Collaboration & Automation: Showcase your experience with collaboration tools and automation processes, such as CI/CD for data and models.
- Logistics Context: Highlight any relevant experience or understanding of logistics or warehouse management systems.
Technical Challenge Preparation:
- Data Pipeline Design: Practice designing and optimizing data pipelines using retrieval-augmented generation, vector databases, and synthetic data.
- ML Model Deployment: Familiarize yourself with modern MLOps stacks and practice deploying and monitoring ML models.
- Collaboration & Automation: Brush up on your collaboration skills and prepare for discussions on automation processes, such as CI/CD for data and models.
- Logistics Context: Research the logistics and robotics industry to gain a better understanding of the context in which you'll be working.
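For the model deployment item in the preparation list above, it helps to be able to walk through a basic MLflow tracking run: train a toy model, log parameters and metrics, and log the model artefact so it can later be served or registered. The dataset, experiment name, and model below are placeholders (MLflow 2.x API assumed), not anything specific to Dexory.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data stands in for real features; nothing here reflects Dexory's models.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("interview-prep-demo")  # hypothetical experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Logging the model artefact makes it servable later, e.g. via `mlflow models serve`.
    mlflow.sklearn.log_model(model, artifact_path="model")
```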
ATS Keywords: Python, cloud services, ML orchestration, LLMs, vector databases, ETL pipelines, collaboration tools, logistics, robotics, AI & Machine Learning, data engineering, MLOps.
📝 Enhancement Note: The application and technical interview process section emphasizes the need for strong technical preparation, with a focus on data pipeline design, ML model deployment, and collaboration tools, as well as an understanding of the logistics context.
🛠 Technology Stack & Infrastructure
Frontend Technologies: Not applicable for this role.
Backend & Server Technologies:
- Python: The primary programming language for data pipeline development, ML model deployment, and collaboration tools.
- Cloud Services (AWS, GCP, Azure): Utilize cloud services for data storage, processing, and deployment of ML models.
- ML Orchestration Tools (Airflow, MLflow): Employ ML orchestration tools for automating data pipeline workflows and ML model deployment.
- LLMs, Vector Databases, ETL Pipelines: Leverage LLMs, vector databases, and ETL tooling to build and scale data pipelines, with an emphasis on performance, reliability, and automation.
Development & DevOps Tools:
- Version Control: Use version control systems, such as Git, to manage data pipeline and ML model codebases.
- CI/CD Pipelines: Implement CI/CD pipelines for automating data pipeline and ML model deployment processes.
- Monitoring Tools: Utilize monitoring tools to track data pipeline and ML model performance and reliability.
📝 Enhancement Note: The technology stack and infrastructure section highlights the backend and platform technologies relevant to the AI Platform Engineer role, with an emphasis on data pipeline development, ML model deployment, and collaboration tooling.
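As one hedged example of the monitoring tooling mentioned above, the sketch below instruments a toy batch-processing loop with Prometheus client metrics (counters for processed and failed records, a histogram for batch latency) and exposes them over HTTP for scraping. The metric names, port, and workload are illustrative assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; real naming would follow team conventions.
RECORDS_PROCESSED = Counter("pipeline_records_processed_total",
                            "Records successfully processed by the pipeline")
RECORD_FAILURES = Counter("pipeline_record_failures_total",
                          "Records that failed processing")
BATCH_LATENCY = Histogram("pipeline_batch_seconds",
                          "Wall-clock seconds spent per processed batch")

def process_batch(batch):
    with BATCH_LATENCY.time():        # records the batch duration
        for _record in batch:
            try:
                time.sleep(0.001)     # stand-in for real transform work
                RECORDS_PROCESSED.inc()
            except Exception:
                RECORD_FAILURES.inc()

if __name__ == "__main__":
    start_http_server(8000)           # exposes /metrics for Prometheus to scrape
    for _ in range(10):
        process_batch([random.random() for _ in range(100)])
    time.sleep(60)                    # keep the process alive briefly so metrics can be scraped
```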
👥 Team Culture & Values
AI & Machine Learning Values:
- Performance: Prioritize high standards and outstanding results in data pipeline development, ML model deployment, and collaboration tools.
- Impact: Focus on big challenges and bigger results by integrating AI solutions into real-world logistics applications.
- Commitment: Demonstrate all-in, every-time dedication to building and maintaining data pipelines and ML model deployment for AI-driven logistics applications.
- One Team: Foster a shared mission and shared success mindset by collaborating closely with ML, robotics, and product teams.
Collaboration Style:
- Cross-Functional Integration: Work closely with ML, robotics, and product teams to integrate AI solutions into real-world logistics applications.
- Code Review Culture: Participate in regular code reviews to maintain high coding standards and foster continuous learning.
- Knowledge Sharing: Share expertise and learn from colleagues to enhance data pipeline development, ML model deployment, and collaboration tools.
📝 Enhancement Note: The team culture and values section emphasizes the importance of performance, impact, commitment, and collaboration in the AI & Machine Learning team, with a strong focus on cross-functional integration and knowledge sharing.
⚡ Challenges & Growth Opportunities
Technical Challenges:
- Data Pipeline Complexity: Design and optimize data pipelines using retrieval-augmented generation, vector databases, and synthetic data to support AI-driven logistics applications.
- ML Model Performance: Deploy and monitor ML models using modern MLOps stacks to ensure optimal performance and reliability in real-world logistics applications.
- Scalability & Automation: Own vector stores and CI/CD processes for both data and models, ensuring efficient and automated workflows that can scale with the growing demands of AI-driven logistics applications.
- Logistics Context: Understand the logistics and robotics industry to integrate AI solutions into real-world applications effectively.
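To make the vector-store challenge above concrete, the sketch below shows the core retrieval operation a vector store accelerates: ranking stored embeddings against a query embedding by cosine similarity. A production system would swap the brute-force scan for an approximate-nearest-neighbour index, but the interface is the same; the dimensions and data here are synthetic.

```python
import numpy as np

def top_k(query: np.ndarray, index: np.ndarray, k: int = 5) -> list[int]:
    """Return row indices of the k stored vectors most similar to the query.

    `index` is an (n, d) matrix of stored embeddings; similarity is cosine.
    """
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = index_norm @ query_norm
    return np.argsort(-scores)[:k].tolist()

# Synthetic usage: 1,000 random 384-dimensional "embeddings" and one random query.
rng = np.random.default_rng(0)
stored = rng.normal(size=(1000, 384))
print(top_k(rng.normal(size=384), stored, k=5))
```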
Learning & Development Opportunities:
- Technical Skill Development: Advance expertise in ML Infrastructure, Data Engineering, and MLOps, with opportunities to specialize in specific domains or take on leadership roles.
- Cross-Functional Collaboration: Expand knowledge of logistics, robotics, and product development by working closely with cross-functional teams.
- Career Progression: Transition into senior or leadership roles within the AI & Machine Learning team or explore opportunities in related domains, such as robotics or product management.
📝 Enhancement Note: The challenges and growth opportunities section emphasizes the technical challenges and learning opportunities for AI Platform Engineers, with a focus on data pipeline complexity, ML model performance, scalability, and automation, as well as the logistics context.
💡 Interview Preparation
Technical Questions:
- Data Pipeline Design: Describe your experience designing and optimizing data pipelines using retrieval-augmented generation, vector databases, and synthetic data.
- ML Model Deployment: Explain your approach to deploying and monitoring ML models using modern MLOps stacks, with an emphasis on performance and reliability.
- Collaboration & Automation: Discuss your experience with collaboration tools and automation processes, such as CI/CD for data and models.
- Logistics Context: Demonstrate your understanding of the logistics and robotics industry and how it applies to AI-driven logistics applications.
Company & Culture Questions:
- AI & Machine Learning Values: Explain how you embody the AI & Machine Learning values of performance, impact, commitment, and collaboration in your work.
- Cross-Functional Collaboration: Describe your experience working with cross-functional teams, such as ML, robotics, and product teams, to integrate AI solutions into real-world applications.
- Technical Challenges: Discuss how you approach technical challenges, such as data pipeline complexity, ML model performance, scalability, and automation, in the context of AI-driven logistics applications.
Portfolio Presentation Strategy:
- Data Pipeline Demonstration: Highlight your experience in building and scaling data pipelines using retrieval-augmented generation, vector databases, and synthetic data.
- ML Model Deployment: Demonstrate your ability to deploy and monitor ML models using modern MLOps stacks, with an emphasis on performance and reliability.
- Collaboration & Automation: Showcase your experience with collaboration tools and automation processes, such as CI/CD for data and models.
- Logistics Context: Emphasize your understanding of the logistics and robotics industry and how it applies to AI-driven logistics applications.
📝 Enhancement Note: The interview preparation section emphasizes the need for strong technical preparation, with a focus on data pipeline design, ML model deployment, collaboration tools, and the logistics context, as well as an understanding of the AI & Machine Learning values and company culture.
📌 Application Steps
To apply for this AI Platform Engineer position:
- Portfolio Customization: Tailor your portfolio to highlight your experience in building and scaling data pipelines, deploying and monitoring ML models, and collaborating with cross-functional teams in the logistics context.
- Resume Optimization: Optimize your resume for AI & Machine Learning Engineer roles, emphasizing your technical skills, experience, and achievements in data pipeline development, ML model deployment, and collaboration tools.
- Technical Interview Preparation: Brush up on your technical skills, prepare for data pipeline design, ML model deployment, and collaboration tool discussions, and research the logistics context.
- Company Research: Research Dexory's mission, values, and culture to demonstrate your fit and enthusiasm for the role and company.
⚠️ Important Notice: This enhanced job description includes AI-generated insights and AI/ML infrastructure industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.
Application Requirements
Candidates should have 3-5+ years of experience in ML infrastructure, data engineering, or MLOps, with proficiency in Python and cloud services. Experience with LLMs, vector databases, and ETL pipelines is essential, with additional skills in Unity or Blender considered a bonus.