Staff Platform Engineer - Product Analytics (f/m/d)
📍 Job Overview
- Job Title: Staff Platform Engineer - Product Analytics (f/m/d)
- Company: LeanIX
- Location: Dresden, Germany
- Job Type: Hybrid
- Category: DevOps Engineer
- Date Posted: June 19, 2025
- Experience Level: Mid-Senior Level (5-10 years of experience)
- Remote Status: Hybrid (3 days per week in the office)
🚀 Role Summary
- Platform Focus: Design, build, and maintain the global data analytics infrastructure, centered on Azure Databricks and DataOps practices.
- Core Expertise: Expert-level knowledge of Azure Databricks, plus proven experience with Terraform and Apache Spark.
- Engineering Background: A strong data engineering foundation and experience with CI/CD pipelines for data and infrastructure.
- Mindset: A passion for automation, governance, and enabling others through robust infrastructure.
📝 Enhancement Note: The role is highly technical and requires a deep understanding of Azure Databricks, Terraform, and Apache Spark. The ideal candidate will have a strong data engineering background and experience with CI/CD pipelines.
💻 Primary Responsibilities
- Architect and own the global Databricks platform: Design and implement the compute and runtime environment using Terraform, with dbt as the core component for data transformations in product analytics pipelines.
- Champion a "DataOps" culture: Implement and evangelize DataOps principles to create automated, repeatable, and reliable processes for managing data, infrastructure, and analytics workflows.
- Implement and maintain a federated data architecture: Build the technical foundations of the data platform in Databricks so that domain teams can own their data products while central governance and discoverability are maintained through Unity Catalog.
- Drive governance and security: Establish and enforce best practices for data governance, security, and compliance.
- Optimize for performance and cost: Proactively monitor and tune Spark jobs, define optimal cluster policies, implement cost-management strategies, and guide teams on writing efficient queries and data transformations (a short tuning sketch follows this list).
- Empower customers: Create frictionless onboarding paths and self-service capabilities for data engineers, analysts, and data scientists, acting as the subject matter expert and go-to person for all things Databricks.
- Automate everything: Develop robust CI/CD pipelines using GitHub Actions to manage the entire lifecycle of Databricks assets, from infrastructure and clusters to the deployment and testing of dbt transformation workloads.
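To illustrate the kind of performance and cost tuning the fifth responsibility describes, here is a minimal PySpark sketch that enables adaptive query execution and compacts a Delta table. The table and column names (`analytics.events`, `account_id`) are hypothetical placeholders, not LeanIX assets.

```python
from pyspark.sql import SparkSession

# On Databricks a preconfigured `spark` session already exists;
# building one here keeps the sketch self-contained.
spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# Adaptive Query Execution lets Spark re-plan shuffle partition counts
# and join strategies at runtime based on observed statistics.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

# Compact small files and co-locate rows that are frequently filtered
# together. `analytics.events` and `account_id` are placeholder names.
spark.sql("OPTIMIZE analytics.events ZORDER BY (account_id)")

# Drop data files no longer referenced by the Delta transaction log
# (subject to the default 7-day retention window).
spark.sql("VACUUM analytics.events")
```

On a mature platform, maintenance jobs like these would typically be scheduled as workflows rather than run ad hoc.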
🎓 Skills & Qualifications
Education: A Bachelor's degree in Computer Science, Engineering, or a related field. A Master's degree would be an asset.
Experience: 5-10 years of experience in data platform engineering, infrastructure, or SRE roles, including at least four years of deep, hands-on experience with Databricks.
Required Skills:
- Expert-level knowledge of Azure Databricks administration, including workspace setup, cluster management, and security best practices.
- Proven, hands-on mastery of Terraform, with the ability to manage a complex, multi-region Databricks environment entirely as code.
- Strong data engineering fundamentals, including deep experience with Apache Spark performance tuning, Delta Lake optimization, and a solid understanding of data processing patterns.
- Practical experience designing or implementing platforms aligned with federated data management principles, with a deep understanding of Unity Catalog for central governance and data discovery.
- Solid experience building CI/CD pipelines for data and infrastructure (DataOps), preferably using GitHub Actions, to manage the lifecycle of Databricks assets.
- Deep understanding of core Azure services and how they integrate with Databricks.
- Experience with data quality frameworks, including unit testing, data validation, expectation-based testing (e.g., dbt-expectations), dbt tests, and scalable automated testing pipelines (a minimal sketch follows this list).
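As a minimal sketch of what expectation-based testing can look like outside of dbt's YAML tests, the following PySpark check fails fast when basic expectations are violated; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# `analytics.events` is a hypothetical table used for illustration.
df = spark.table("analytics.events")

# Expectation 1: the primary key must never be null.
null_keys = df.filter(F.col("event_id").isNull()).count()

# Expectation 2: event timestamps must not lie in the future.
future_rows = df.filter(F.col("event_ts") > F.current_timestamp()).count()

# Fail fast so a CI step can surface the violation immediately.
assert null_keys == 0, f"{null_keys} rows with null event_id"
assert future_rows == 0, f"{future_rows} rows with future event_ts"
```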
Preferred Skills:
- Experience managing large-scale, multi-region cloud environments.
- Open communication style and fluent written and spoken English skills.
📝 Enhancement Note: The required skills section is comprehensive and highlights the need for expert-level knowledge in Azure Databricks, Terraform, and Apache Spark. The preferred skills section emphasizes the importance of experience in managing large-scale cloud environments and strong communication skills.
📊 Portfolio & Project Requirements
Portfolio Essentials:
- A portfolio showcasing your expertise in Azure Databricks, Terraform, and Apache Spark, with a focus on data platform engineering projects.
- Examples of your experience with CI/CD pipelines and DataOps principles, demonstrating your ability to automate and optimize data workflows.
- Case studies highlighting your ability to design and implement federated data architectures, with a focus on data governance, security, and compliance.
Technical Documentation:
- Detailed documentation of your Azure Databricks, Terraform, and Apache Spark projects, including code quality, commenting, and documentation standards.
- Version control, deployment processes, and server configuration documentation, demonstrating your ability to manage complex data infrastructure projects.
- Testing methodologies, performance metrics, and optimization techniques documentation, showcasing your commitment to quality and performance optimization.
📝 Enhancement Note: The portfolio and project requirements emphasize the need for a strong focus on Azure Databricks, Terraform, and Apache Spark projects, with a particular emphasis on data platform engineering, CI/CD pipelines, and DataOps principles.
💵 Compensation & Benefits
Salary Range: The salary range for this role is €80,000 - €120,000 per year, depending on experience and qualifications. This estimate is based on regional market data for similar roles in the data engineering and DevOps space in Germany.
Benefits:
- A hybrid remote work model, combining the benefits of in-person collaboration with the flexibility to work from home.
- The opportunity to work in one of the company's hubs in Bonn, Berlin, or Dresden, with the expectation of coming to the office three days per week.
- A comprehensive benefits package, including health insurance, retirement plans, and employee discounts.
Working Hours: The role requires a standard 40-hour workweek, with flexibility for deployment windows, maintenance, and project deadlines.
📝 Enhancement Note: The salary range and benefits section provide a competitive salary range and highlight the benefits of working in a hybrid remote work environment, with the opportunity to work in one of the company's hubs in Bonn, Berlin, or Dresden.
🎯 Team & Company Context
🏢 Company Culture
Industry: LeanIX operates in the enterprise software industry; this role sits within its product analytics function, whose mission is to provide a world-class data platform.
Company Size: LeanIX is a mid-sized company with a strong growth trajectory, making it an attractive option for experienced data platform engineers looking to join a dynamic and growing team.
Founded: LeanIX was founded in 2012 and has since grown to become a leading provider of enterprise software solutions.
Team Structure:
- The Engineering teams at LeanIX are cross-functional and develop scalable and secure microservices and APIs for various domains of the product.
- The DevX (Developer Experience) tribe is dedicated to empowering these teams with the best tools, platforms, and processes.
- The Product Analytics team's mission is to provide and operate a world-class data platform, enabling development teams and product managers to draw insights from product data and make data-informed decisions on new features and product performance evaluation.
Development Methodology:
- LeanIX Engineering teams use Agile methodologies, with a focus on sprint planning, code review, testing, and quality assurance practices.
- The company prioritizes continuous improvement, collaboration, and knowledge sharing, constantly iterating on engineering practices and skills to build a software architecture that supports long-term success and fun at work.
Company Website: LeanIX Website
📝 Enhancement Note: The company culture section provides an overview of LeanIX's industry, company size, and development methodologies, highlighting the company's commitment to continuous improvement and collaboration.
📈 Career & Growth Analysis
Career Level: The Staff Platform Engineer role is a senior individual contributor role with significant technical leadership responsibilities. It is well-suited for experienced data platform engineers looking to take the next step in their careers and make a significant impact on the company's data analytics infrastructure.
Reporting Structure: The Staff Platform Engineer reports directly to the Head of Engineering and works closely with cross-functional teams, including data engineers, analysts, and data scientists.
Technical Impact: The Staff Platform Engineer will have a significant impact on the company's data analytics infrastructure, driving governance, security, and performance optimization for the data platform. This role will enable hundreds of internal users to leverage data effortlessly and securely, empowering them to make data-informed decisions and evaluate the performance of their products in the real world.
Growth Opportunities:
- Growth Opportunity 1: The role offers the opportunity to grow as a technical leader, taking ownership of the entire global data analytics infrastructure and driving the company's DataOps practice.
- Growth Opportunity 2: The role provides the chance to develop technical leadership skills, working closely with cross-functional teams and driving the adoption of DataOps principles across the organization.
- Growth Opportunity 3: The role offers the potential to grow into a technical architecture or management role, with the opportunity to shape the company's data analytics infrastructure and drive its long-term success.
📝 Enhancement Note: The career and growth analysis section highlights the significant technical leadership responsibilities of the Staff Platform Engineer role and the potential for growth and development in the position.
🌐 Work Environment
Office Type: LeanIX operates a hybrid work environment, combining the benefits of in-person collaboration with the flexibility to work from home.
Office Location(s): LeanIX has hubs in Bonn, Berlin, and Dresden, with the expectation that employees come to the office multiple times per week.
Workspace Context:
- The hybrid work environment at LeanIX provides a collaborative engineering workspace with multiple monitors and modern development tools available.
- The company prioritizes a collaborative and inclusive work culture, with a focus on knowledge sharing, technical mentoring, and continuous learning.
- The work environment is designed to support fun at work and long-term success, with a focus on work-life balance and personal development.
Work Schedule: The role requires a standard 40-hour workweek, with flexibility for deployment windows, maintenance, and project deadlines.
📝 Enhancement Note: The work environment section emphasizes the benefits of LeanIX's hybrid work environment, highlighting the company's commitment to collaboration, knowledge sharing, and work-life balance.
📄 Application & Technical Interview Process
Interview Process:
- Technical Assessment: A hands-on assessment of Azure Databricks, Terraform, and Apache Spark skills, centered on data platform engineering and DataOps principles.
- Behavioral Interview: A behavioral interview focused on communication skills, problem-solving, and cultural fit.
- Final Evaluation: A final evaluation focusing on the candidate's technical skills, cultural fit, and alignment with the role's requirements.
Portfolio Review Tips:
- Highlight your expertise in Azure Databricks, Terraform, and Apache Spark, with a focus on data platform engineering projects.
- Include case studies demonstrating your ability to design and implement federated data architectures, with a focus on data governance, security, and compliance.
- Showcase your experience with CI/CD pipelines and DataOps principles, emphasizing your ability to automate and optimize data workflows.
Technical Challenge Preparation:
- Brush up on your Azure Databricks, Terraform, and Apache Spark skills, with a focus on data platform engineering and DataOps principles.
- Familiarize yourself with LeanIX's development methodologies, including Agile practices and code review processes.
- Prepare for behavioral interview questions focused on communication skills, problem-solving, and cultural fit.
ATS Keywords: Azure Databricks, Terraform, Apache Spark, DataOps, Data Governance, CI/CD, Data Engineering, Delta Lake, Unity Catalog, GitHub Actions, Cloud Infrastructure, Kubernetes, Docker, REST, GraphQL, Monitoring, Alerting
📝 Enhancement Note: The application and technical interview process section provides a comprehensive overview of the interview process, highlighting the importance of technical skills, behavioral interviews, and portfolio review for the Staff Platform Engineer role.
🛠 Technology Stack & Infrastructure
Frontend Technologies: Not applicable for this role.
Backend & Server Technologies:
- Azure Databricks: The primary data processing and analytics platform used at LeanIX, with a focus on Apache Spark and dbt for data transformations.
- Terraform: The infrastructure as code (IaC) tool used to manage the company's Azure cloud environment, including the global Databricks platform.
- Apache Spark: The open-source data processing engine used for data transformations and analytics workloads in Azure Databricks.
- Delta Lake: The open-source storage layer used for lakehouse tables in Azure Databricks, providing ACID transactions, optimized performance, and scalability.
- Unity Catalog: The central governance and data discovery layer for Azure Databricks, enabling federated data management and data product ownership (a short governance sketch follows this list).
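To make the governance model concrete, here is a minimal sketch of Unity Catalog's three-level namespace (`catalog.schema.table`) in action: granting a consumer group read access and browsing a schema. The catalog, schema, and group names are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Unity Catalog addresses tables with a three-level namespace:
# catalog.schema.table. All names below are illustrative placeholders.
spark.sql("""
    GRANT SELECT ON TABLE product_analytics.funnel.events
    TO `data-analysts`
""")

# Discoverability: consumers can browse what a schema exposes.
spark.sql("SHOW TABLES IN product_analytics.funnel").show()
```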
Development & DevOps Tools:
- GitHub Actions: The CI/CD pipeline tool used to manage the lifecycle of Databricks assets, from infrastructure and clusters to the deployment and testing of dbt transformation workloads (a post-deployment check is sketched after this list).
- Databricks CLI/APIs and Databricks Asset Bundles: The command-line interface (CLI) and APIs used to interact with Azure Databricks, along with the asset bundle tool for packaging and deploying Databricks assets.
- Azure Data Lake Storage (ADLS) Gen2: The cloud storage service used for data lakes and data warehouses in Azure, providing high-performance and scalable storage for data analytics workloads.
- Azure Key Vault: The cloud-based secrets management service used to securely store and access secrets and credentials for Azure resources, including Azure Databricks.
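As a hedged illustration of how a CI step might verify a deployment, the following Python sketch uses the `databricks-sdk` package to confirm an expected job exists in the workspace. The job name is a hypothetical placeholder, and authentication is assumed to come from the standard DATABRICKS_HOST and DATABRICKS_TOKEN environment variables.

```python
from databricks.sdk import WorkspaceClient

# The SDK reads DATABRICKS_HOST and DATABRICKS_TOKEN from the
# environment, which is how a GitHub Actions runner would authenticate.
w = WorkspaceClient()

# Post-deployment sanity check: confirm the expected job landed.
# "product-analytics-dbt" is a hypothetical job name.
job_names = [job.settings.name for job in w.jobs.list() if job.settings]
assert "product-analytics-dbt" in job_names, "expected job is missing"

print(f"Workspace has {len(job_names)} jobs; deployment check passed.")
```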
📝 Enhancement Note: The technology stack section highlights the key technologies used at LeanIX, with a focus on Azure Databricks, Terraform, and Apache Spark for data platform engineering and DataOps.
👥 Team Culture & Values
Engineering Values:
- User Focus: LeanIX prioritizes user experience and performance, giving internal users frictionless, self-service access to data.
- Code Quality: The company emphasizes code quality and collaborative development practices, with a focus on peer programming, code reviews, and automated testing.
- Innovation: LeanIX values innovation and continuous learning, with a focus on emerging technologies and best practices in data analytics and platform engineering.
- Collaboration: The company prioritizes collaboration and knowledge sharing, with a focus on cross-functional teamwork and technical mentoring.
Collaboration Style:
- Cross-functional integration between engineers, product managers, and stakeholders, built on Agile methodologies and sprint planning.
- A code review culture and peer programming practices that spread knowledge across teams.
- Knowledge sharing, technical mentoring, and continuous learning in pursuit of technical excellence.
📝 Enhancement Note: The team culture and values section highlights LeanIX's commitment to user experience, code quality, innovation, collaboration, and knowledge sharing, all in service of technical excellence in data analytics.
💡 Challenges & Growth Opportunities
Technical Challenges:
- Designing and implementing a scalable and secure global Databricks platform, optimized for performance and cost.
- Building and maintaining a federated data architecture that lets domain teams own their data products without sacrificing central governance and discoverability.
- Establishing and enforcing best practices for data governance, security, and compliance across the organization.
- Proactively monitoring and tuning Spark jobs, defining optimal cluster policies, and implementing cost-management strategies (a minimal policy sketch follows this list).
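To make the cluster-policy lever concrete, here is a minimal sketch that creates a cost-guardrail policy through the `databricks-sdk`; the policy name and limits are illustrative assumptions, not LeanIX's actual configuration.

```python
import json

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# A cluster policy constrains what users may configure. These rules cap
# idle auto-termination and autoscaling size; the specific limits and
# the policy name are illustrative placeholders.
definition = {
    "autotermination_minutes": {"type": "range", "maxValue": 60, "defaultValue": 30},
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
}

w.cluster_policies.create(
    name="cost-guardrails",
    definition=json.dumps(definition),
)
```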
Learning & Development Opportunities:
- Deepen expertise in Azure Databricks, Terraform, and Apache Spark, with a focus on data platform engineering and DataOps principles.
- Gain experience managing a large-scale, multi-region cloud environment for data analytics workloads.
- Grow as a technical leader by driving the adoption of DataOps principles and shaping the company's data analytics infrastructure.
📝 Enhancement Note: The challenges and growth opportunities section highlights the significant technical challenges and learning opportunities associated with the Staff Platform Engineer role, with a focus on data platform engineering, DataOps principles, and technical leadership development.
💡 Interview Preparation
Technical Questions:
- Describe your experience with Azure Databricks, Terraform, and Apache Spark in the context of data platform engineering and DataOps.
- How have you designed and implemented federated data architectures while maintaining data governance, security, and compliance?
- How have you optimized for performance and cost in Azure Databricks, for example through Spark job tuning, cluster policies, and cost-management strategies? (A small tuning sketch follows this list.)
- How have you driven governance and security in Azure Databricks, with a focus on best practices, compliance, and data protection?
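A concrete talking point for the performance question above: broadcast joins are a standard first lever when a large fact table is joined to a small dimension table. The sketch below uses hypothetical table names.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Hypothetical tables: a large fact table and a small dimension table.
events = spark.table("analytics.events")
accounts = spark.table("analytics.accounts")

# Broadcasting the small side avoids shuffling the large table,
# a standard first lever for join-heavy Spark jobs.
enriched = events.join(broadcast(accounts), "account_id")

enriched.explain()  # the plan should show a BroadcastHashJoin
```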
Company & Culture Questions:
- How do you approach cross-functional collaboration and knowledge sharing in a hybrid work environment?
- How do you balance user experience and performance optimization when building data analytics tooling?
- How do you drive innovation and continuous learning in a dynamic, growing team?
Portfolio Presentation Strategy:
- Structure your walkthrough around the same themes as the portfolio review tips above: deep Azure Databricks, Terraform, and Apache Spark expertise; federated data architecture case studies covering governance, security, and compliance; and CI/CD and DataOps automation of data workflows.
📝 Enhancement Note: The interview preparation section provides a comprehensive overview of the technical and company culture questions candidates can expect during the interview process, with a focus on Azure Databricks, Terraform, Apache Spark, and data platform engineering principles.
📌 Application Steps
To apply for this Staff Platform Engineer - Product Analytics (f/m/d) position at LeanIX:
- Prepare Your Portfolio: Showcase data platform engineering case studies that demonstrate your Azure Databricks, Terraform, and Apache Spark expertise, your federated data architecture designs (including governance, security, and compliance), and your use of CI/CD pipelines and DataOps principles to automate and optimize data workflows.
- Tailor Your Resume: Highlight your relevant experience and skills in Azure Databricks, Terraform, and Apache Spark, with a focus on data platform engineering and DataOps principles. Include specific examples of your experience with CI/CD pipelines, data governance, and performance optimization.
- Prepare for Technical Interview: Brush up on your Azure Databricks, Terraform, and Apache Spark skills, with a focus on data platform engineering and DataOps principles. Familiarize yourself with LeanIX's development methodologies, including Agile practices and code review processes. Prepare for behavioral interview questions focused on communication skills, problem-solving, and cultural fit.
- Research the Company: Learn about LeanIX's industry, company size, development methodologies, and team structure. Understand the company's commitment to continuous improvement, collaboration, and knowledge sharing, and how these factors contribute to the company's success and growth.
⚠️ Important Notice: This enhanced job description includes AI-generated insights and platform engineering industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.
Application Requirements
The ideal candidate should have expert-level knowledge of Azure Databricks and proven experience with Terraform and Apache Spark. A strong background in data engineering and experience with CI/CD pipelines for data and infrastructure is essential.