Cloud Data Engineer (Azure & Microsoft Fabric)
📍 Job Overview
- Job Title: Cloud Data Engineer (Azure & Microsoft Fabric)
- Company: Capgemini
- Location: Poznań, Wielkopolskie, Poland
- Job Type: Hybrid
- Category: Data Engineering
- Date Posted: 2025-06-27
- Experience Level: Mid-Senior level (2-5 years)
🚀 Role Summary
- Design and develop scalable data solutions in Azure using Microsoft Fabric
- Build and optimize data pipelines using Azure Data Factory, Synapse, and other relevant services
- Collaborate with AI/ML teams to implement GenAI and NLP models in production environments
- Ensure data quality, security, and compliance while working end-to-end on data projects
📝 Enhancement Note: This role requires a strong focus on Azure data services and Microsoft Fabric, with a collaborative approach to working with AI/ML teams. The end-to-end project scope offers an opportunity to gain experience in various aspects of data engineering.
💻 Primary Responsibilities
Data Solution Design & Development:
- Design and develop scalable data solutions in Azure using Microsoft Fabric
- Create and manage data models and data warehouses
Data Pipeline Development & Optimization:
- Build and optimize data pipelines using Azure Data Factory, Synapse, Dataflows, and Pipelines (see the PySpark sketch after this list)
- Integrate data from various structured and unstructured sources
AI/ML Collaboration:
- Collaborate with AI/ML teams to implement GenAI and NLP models in production environments
- Contribute to CI/CD and DevOps practices for data environments
Data Quality & Compliance:
- Ensure data quality, security, and compliance throughout the data processing workflows
- Document data architectures and processing workflows
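As an illustration of the pipeline work described above (not taken from Capgemini's codebase), the following is a minimal PySpark sketch of a typical curation step as it might appear in a Fabric or Synapse notebook. All paths, table names, and columns (`raw_sales`, `order_id`, `curated_sales_daily`) are hypothetical.

```python
# Minimal, illustrative PySpark curation step; assumes a Fabric/Synapse lakehouse
# environment where Delta tables are available. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-curation").getOrCreate()

# Ingest raw data (in Fabric/Synapse notebooks a SparkSession is usually provided as `spark`).
raw = spark.read.format("delta").load("Tables/raw_sales")  # hypothetical lakehouse path

# Basic cleansing and transformation: drop duplicates, enforce types, derive a partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_timestamp"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull())
)

# Write to a curated Delta table, partitioned by date for downstream consumption.
(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated_sales_daily"))
```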
🎓 Skills & Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field
Experience: Proven experience (2-5 years) in data engineering, with a focus on Azure data services and Microsoft Fabric
Required Skills:
- Hands-on experience with Azure data services, including Microsoft Fabric (Data Factory, Synapse, OneLake, Notebooks, Pipelines)
- Strong skills in SQL and Python for data processing and transformation
- Solid understanding of data modeling, warehousing, and big data processing
- Familiarity with Azure AI and Cognitive Services
- Ability to work closely with development and analytics teams
Preferred Skills:
- Exposure to GenAI, LLMs, and Retrieval-Augmented Generation (RAG); see the retrieval sketch after this list
- Knowledge of TypeScript/JavaScript
- Experience with DevOps and CI/CD tools (e.g., Azure DevOps, GitHub Actions)
- Understanding of data governance and data security best practices
- Azure certifications (e.g., DP-203, AI-102)
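For the GenAI/RAG exposure listed under preferred skills, the sketch below illustrates only the retrieval half of a RAG flow, using TF-IDF as a lightweight stand-in for a real embedding model (for example, an Azure OpenAI embeddings deployment). The documents and query are invented for illustration.

```python
# Illustrative only: retrieve the most relevant snippets for a question, which a RAG
# system would then pass to an LLM as grounding context. TF-IDF stands in for embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # hypothetical knowledge-base snippets
    "Azure Data Factory orchestrates data movement and transformation pipelines.",
    "Microsoft Fabric OneLake is a single, unified data lake for the whole organisation.",
    "Synapse pipelines support both batch and near-real-time data integration.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

print(retrieve("What does OneLake provide?"))
```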
📊 Portfolio & Project Requirements
Portfolio Essentials:
- Case studies demonstrating end-to-end data projects, including data solution design, pipeline development, and AI/ML integration
- Examples of data models, warehouses, and big data processing solutions
- Documentation of data architectures and processing workflows
Technical Documentation:
- Code quality, commenting, and documentation standards for data processing workflows
- Version control, deployment processes, and server configuration for data environments
- Testing methodologies, performance metrics, and optimization techniques for data pipelines (see the example test below)
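As a hypothetical example of the testing documentation mentioned above, the snippet below shows a pandas/pytest-style unit test for a small transformation. The function and data are illustrative, not part of the role's actual codebase.

```python
# Illustrative pipeline unit test: verify that deduplication keeps the latest record.
import pandas as pd

def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent record per order_id."""
    return (df.sort_values("updated_at")
              .drop_duplicates(subset="order_id", keep="last")
              .reset_index(drop=True))

def test_deduplicate_orders_keeps_latest_record():
    df = pd.DataFrame({
        "order_id": [1, 1, 2],
        "amount": [10.0, 12.5, 7.0],
        "updated_at": pd.to_datetime(["2025-01-01", "2025-01-02", "2025-01-01"]),
    })
    result = deduplicate_orders(df)
    assert len(result) == 2                                            # one row per order_id
    assert result.loc[result.order_id == 1, "amount"].item() == 12.5   # latest record wins
```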
📝 Enhancement Note: As this role involves end-to-end data projects, applicants should highlight their ability to work on complex data solutions, from design to deployment, with a focus on Azure data services and Microsoft Fabric.
💵 Compensation & Benefits
Salary Range: The salary range for this role in Poznań, Poland, is approximately 12,000 - 18,000 PLN gross per month, based on market research and regional adjustments.
Benefits:
- Private medical care with Medicover
- Life insurance
- Access to over 70 training tracks with certification opportunities on the NEXT platform
- Free access to language learning platforms (Education First, Pluralsight, TED Talks, Coursera, Udemy Business)
- Practical benefits, including Netflix, Spotify, and Multisport options on the NAIS benefit platform
- Hybrid working model, with a modern office and ergonomic home office package
Working Hours: Full-time (40 hours/week), with flexibility for deployment and maintenance windows as needed
📝 Enhancement Note: The salary range provided is an estimate based on market research and regional adjustments. Actual salary offers may vary depending on the candidate's experience and skills.
🎯 Team & Company Context
🏢 Company Culture
Industry: Capgemini operates in the technology consulting and digital transformation industry, with a strong focus on data and AI services.
Company Size: Capgemini has over 360,000 team members in more than 50 countries. The company is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology, offering a large and diverse team environment.
Founded: Capgemini was founded in 1967 and has since grown into a responsible and diverse organization, guided by its purpose of unleashing human energy through technology for an inclusive and sustainable future.
Team Structure:
- The Insights & Data team focuses on delivering cutting-edge, cloud-native data solutions for clients across various industries
- The team works end-to-end, from designing data architectures and building ETL/ELT pipelines to integrating with downstream systems and visualizing data
- Collaboration with AI and Data Science teams is essential for bringing GenAI, NLP, and RAG solutions into production environments
Development Methodology:
- Capgemini follows Agile methodologies, with a focus on iterative development and continuous improvement
- The company encourages a culture of knowledge sharing, technical mentoring, and continuous learning
Company Website: https://www.capgemini.com/
📝 Enhancement Note: Capgemini's focus on data and AI services, coupled with its global presence and diverse team, offers opportunities for professional growth and exposure to various industries and technologies.
📈 Career & Growth Analysis
Career Level: This role is at the mid-senior level, requiring a strong foundation in data engineering, with a focus on Azure data services and Microsoft Fabric. Applicants should have 2-5 years of relevant experience and be prepared to work on complex, end-to-end data projects.
Reporting Structure: The Cloud Data Engineer reports to the Data Engineering Manager within the Insights & Data team. The team works closely with AI/ML, data science, and other relevant teams to deliver comprehensive data solutions.
Technical Impact: The Cloud Data Engineer plays a crucial role in designing and developing scalable data solutions, optimizing data pipelines, and integrating AI/ML models into production environments. Their work directly impacts the quality, performance, and security of data processing workflows, ultimately influencing business decisions and user experiences.
Growth Opportunities:
- Technical Progression: Deepen expertise in Azure data services, Microsoft Fabric, and emerging data technologies
- Team Leadership: Develop leadership skills by mentoring junior team members and contributing to team architecture decisions
- Architecture & Design: Gain experience in designing and implementing complex data architectures and data warehousing solutions
📝 Enhancement Note: The mid-senior level of this role provides a strong foundation for technical progression and team leadership within a global, diverse organization.
🌐 Work Environment
Office Type: Capgemini offers a hybrid working model, with a modern office and ergonomic home office package, including a laptop, monitor, and chair.
Office Location(s): The primary office location for this role is Poznań, Wielkopolskie, Poland. Additional offices may be available for hybrid work arrangements.
Workspace Context:
- Collaboration: The hybrid work environment encourages collaboration with team members, both in the office and remotely
- Development Tools: Capgemini provides access to modern development tools, including Azure DevOps, GitHub, and relevant data processing and analysis tools
- Cross-Functional Interaction: The Insights & Data team works closely with other teams, such as AI/ML, data science, and design, fostering a cross-functional and collaborative work environment
Work Schedule: The hybrid work arrangement offers flexibility for deployment windows, maintenance, and project deadlines, with core hours typically between 9:00 AM and 5:00 PM.
📝 Enhancement Note: Capgemini's hybrid work environment, coupled with its focus on collaboration and cross-functional interaction, offers a flexible and engaging work experience for data engineers.
📄 Application & Technical Interview Process
Interview Process:
- Technical Assessment: A hands-on technical assessment focused on Azure data services, Microsoft Fabric, and data processing skills
- Behavioral Interview: An interview focused on problem-solving, collaboration, and communication skills, with a focus on data engineering scenarios
- Final Evaluation: A final evaluation based on the technical assessment and behavioral interview, with a focus on cultural fit and long-term potential
Portfolio Review Tips:
- Highlight end-to-end data projects, demonstrating data solution design, pipeline development, and AI/ML integration
- Include examples of data models, warehouses, and big data processing solutions
- Showcase documentation of data architectures and processing workflows
Technical Challenge Preparation:
- Brush up on Azure data services, Microsoft Fabric, and relevant data processing and transformation techniques
- Practice designing and developing data solutions, optimizing data pipelines, and integrating AI/ML models (a short practice exercise follows this list)
- Familiarize yourself with Capgemini's development methodologies and data engineering best practices
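As a warm-up of the kind such assessments often resemble (an assumption, not a description of Capgemini's actual exercise), the snippet below computes a daily aggregate with Spark SQL over a small in-memory dataset, exercising both SQL and PySpark.

```python
# Hypothetical practice exercise: daily revenue aggregation with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("interview-practice").getOrCreate()

orders = spark.createDataFrame(
    [(1, "2025-06-01", 120.0), (2, "2025-06-01", 80.0), (3, "2025-06-02", 200.0)],
    ["order_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue, COUNT(*) AS order_count
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily_revenue.show()
```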
ATS Keywords: Azure, Microsoft Fabric, Data Factory, Synapse, SQL, Python, Data Modeling, Data Warehousing, Big Data Processing, AI, NLP, GenAI, RAG, CI/CD, DevOps, Data Governance, Data Security, Data Engineering, Cloud Computing, Hybrid Work, Agile Methodologies
📝 Enhancement Note: The interview process for this role focuses on technical assessment, behavioral interview, and final evaluation, with a strong emphasis on data engineering skills and cultural fit.
🛠 Technology Stack & Infrastructure
Frontend Technologies: N/A (This role focuses on data engineering and does not involve frontend technologies)
Backend & Server Technologies:
- Azure Data Services: Microsoft Fabric (Data Factory, Synapse, OneLake, Notebooks, Pipelines), Azure AI, and Cognitive Services
- Database Integration: SQL, with a focus on Azure SQL Database and Azure Synapse Analytics
- Infrastructure Tools: Azure DevOps, GitHub, and other relevant CI/CD and deployment tools
Development & DevOps Tools:
- Version Control: Git, Azure DevOps, and GitHub
- CI/CD Pipelines: Azure DevOps, GitHub Actions, and other relevant CI/CD tools (see the SDK sketch after this list)
- Monitoring Tools: Azure Monitor, Application Insights, and other relevant monitoring and logging tools
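To illustrate how a CI/CD step might interact with these services, here is a hedged sketch using the public azure-identity and azure-mgmt-datafactory SDKs to trigger and poll a Data Factory pipeline run. The subscription, resource group, factory, and pipeline names are placeholders.

```python
# Sketch only: trigger an Azure Data Factory pipeline run and check its status,
# e.g. from a CI/CD job authenticated via a service principal. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"    # placeholder
resource_group = "<resource-group>"      # placeholder
factory_name = "<data-factory-name>"     # placeholder

credential = DefaultAzureCredential()    # picks up service-principal or managed identity
adf_client = DataFactoryManagementClient(credential, subscription_id)

# Trigger the pipeline, then poll the run status.
run = adf_client.pipelines.create_run(
    resource_group, factory_name, "IngestSalesPipeline", parameters={}
)
status = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id).status
print(f"Pipeline run {run.run_id} is {status}")
```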
📝 Enhancement Note: This role focuses on Azure data services and Microsoft Fabric, with a strong emphasis on data engineering and backend technologies. Familiarity with the Azure ecosystem and relevant data processing and transformation techniques is essential.
👥 Team Culture & Values
Data Engineering Values:
- User-Centric Design: Focus on designing data solutions that meet user needs and drive business value
- Quality & Performance: Prioritize data quality, performance, and security in all data processing workflows
- Collaboration & Knowledge Sharing: Encourage a culture of collaboration, knowledge sharing, and continuous learning
- Innovation & Adaptability: Foster a culture of innovation and adaptability, embracing emerging technologies and best practices
Collaboration Style:
- Cross-Functional Integration: Work closely with AI/ML, data science, design, and other relevant teams to deliver comprehensive data solutions
- Code Review & Peer Programming: Encourage code review and peer programming practices to ensure data quality and best practices
- Knowledge Sharing & Mentoring: Foster a culture of knowledge sharing and mentoring, with a focus on technical skill development and career progression
📝 Enhancement Note: Capgemini's focus on data and AI services, coupled with its emphasis on collaboration and knowledge sharing, offers a dynamic and engaging work environment for data engineers.
⚡ Challenges & Growth Opportunities
Technical Challenges:
- Data Solution Design: Design and develop scalable data solutions in Azure using Microsoft Fabric, with a focus on performance, security, and user experience
- Data Pipeline Optimization: Optimize data pipelines using Azure Data Factory, Synapse, and other relevant services, with a focus on efficiency, scalability, and fault tolerance
- AI/ML Integration: Collaborate with AI/ML teams to implement GenAI and NLP models in production environments, with a focus on data quality, performance, and user experience
- Data Governance & Security: Ensure data quality, security, and compliance throughout data processing workflows, with a focus on data governance best practices and emerging regulations
Learning & Development Opportunities:
- Technical Skill Development: Deepen expertise in Azure data services, Microsoft Fabric, and emerging data technologies, with a focus on continuous learning and skill development
- Conference Attendance & Certification: Attend industry conferences, obtain relevant certifications (e.g., Azure DP-203, AI-102), and engage with professional communities to stay up-to-date with emerging trends and best practices
- Technical Mentorship & Leadership: Seek mentorship opportunities to develop leadership skills and contribute to team architecture decisions, with a focus on technical skill development and career progression
📝 Enhancement Note: Capgemini's focus on data and AI services, coupled with its emphasis on collaboration and knowledge sharing, offers numerous technical challenges and growth opportunities for data engineers.
💡 Interview Preparation
Technical Questions:
- Azure Data Services: Describe your experience with Azure data services, with a focus on Microsoft Fabric, Data Factory, Synapse, and other relevant services
- Data Processing & Transformation: Explain your approach to data processing and transformation, with a focus on SQL, Python, and other relevant technologies
- AI/ML Integration: Discuss your experience integrating AI/ML models into production environments, with a focus on GenAI, NLP, and RAG
- Data Governance & Security: Describe your understanding of data governance and security best practices, with a focus on emerging regulations and industry standards
Company & Culture Questions:
- Capgemini Culture: Explain what you understand about Capgemini's culture, with a focus on data and AI services, collaboration, and knowledge sharing
- Team Dynamics: Describe your experience working in cross-functional teams, with a focus on data engineering, AI/ML, and data science
- Project Management: Discuss your experience with project management, with a focus on Agile methodologies, CI/CD pipelines, and data engineering best practices
Portfolio Presentation Strategy:
- End-to-End Projects: Highlight end-to-end data projects, demonstrating data solution design, pipeline development, and AI/ML integration
- Data Models & Warehouses: Showcase examples of data models, warehouses, and big data processing solutions
- Documentation: Include documentation of data architectures and processing workflows, with a focus on data quality, security, and compliance
📝 Enhancement Note: Applicants should be prepared to discuss their experience with Azure data services, data processing and transformation, AI/ML integration, and data governance and security.
📌 Application Steps
To apply for this Cloud Data Engineer (Azure & Microsoft Fabric) position at Capgemini:
- Tailor Your Portfolio: Highlight end-to-end data projects, data models, warehouses, and big data processing solutions, with a focus on Azure data services and Microsoft Fabric
- Optimize Your Resume: Emphasize your experience with Azure data services, data processing and transformation, AI/ML integration, and data governance and security, with a focus on relevant keywords and industry standards
- Prepare for Technical Interview: Brush up on your Azure data services, Microsoft Fabric, and data processing and transformation skills, with a focus on practical exercises and coding challenges
- Research Capgemini: Familiarize yourself with Capgemini's data and AI services focus, collaboration culture, and hybrid work environment
⚠️ Important Notice: This enhanced job description includes AI-generated insights and data engineering industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.
Application Requirements
Hands-on experience with Azure data services and strong skills in SQL and Python are essential. Familiarity with AI/ML and data governance practices is also important.