Big Data & Cloud Infrastructure Engineer

Cepal Hellas Financial Services S.A.
Full-time • Athens, Greece

πŸ“ Job Overview

  • Job Title: Big Data & Cloud Infrastructure Engineer
  • Company: Cepal Hellas Financial Services S.A.
  • Location: Athens, Attikí, Greece
  • Job Type: On-site
  • Category: DevOps Engineer, System Administrator, Web Infrastructure
  • Date Posted: 2025-06-16
  • Experience Level: Mid-Senior level (2-5 years)

🚀 Role Summary

  • πŸ“ Enhancement Note: This role focuses on managing and optimizing big data infrastructure on AWS and Databricks, ensuring high availability and performance, and supporting continuous integration and deployment processes.

💻 Primary Responsibilities

  • πŸ“ Enhancement Note: The primary responsibilities revolve around platform support, monitoring, tuning, and governance, with a strong emphasis on AWS cloud infrastructure and data processing technologies.

  • 💡 Support, monitor, and troubleshoot Production/QA/DEV environments and workloads.

    • Ensure high platform availability and resolve operational issues quickly.
    • Conduct platform health checks, monitor logs, and perform root cause analysis.
  • 💡 Administer, maintain, and tune AWS infrastructure, Databricks, and Spark clusters for performance and cost efficiency.

    • Manage workspace configuration, compute resources, and user access.
    • Coordinate release management, ensuring smooth and secure deployments across environments.
  • 💡 Support AWS networking and cloud infrastructure, including VPCs, subnets, route tables, and security groups.

    • Work with infrastructure teams to ensure platform scalability and connectivity across systems.
    • Implement and manage access controls using IAM, Unity Catalog, and other governance tools.
  • 💡 Monitor data access logs and perform regular security reviews to ensure compliance with organizational security policies and regulatory standards (a minimal audit-query sketch follows this list).
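
To make the audit-review bullet concrete, here is a minimal PySpark sketch of the kind of query involved. It assumes a Databricks workspace with Unity Catalog system tables enabled (system.access.audit); the seven-day window and the 403 filter for denied requests are illustrative choices, not requirements from the listing.

```python
# Sketch only: review recent denied data-access events via the Unity Catalog
# audit system table. Column names follow Databricks' documented schema but
# should be verified against your workspace.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

recent_denials = spark.sql("""
    SELECT event_time,
           user_identity.email AS user_email,
           action_name,
           request_params
    FROM system.access.audit
    WHERE event_date >= date_sub(current_date(), 7)
      AND response.status_code = 403  -- denied requests only
    ORDER BY event_time DESC
    LIMIT 100
""")
recent_denials.show(truncate=False)
```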

🎓 Skills & Qualifications

Education

  • BSc or MSc degree in Computer Science, Data Engineering, or a related field.

Experience

  • At least 2 years of experience in big data operations, platform support, or cloud data infrastructure.

Required Skills

  • 💡 Proficiency with AWS, Databricks, Delta Lake, and Apache Spark in production environments (a short PySpark/Delta sketch follows this list).
  • 💡 Strong skills in Python and SQL.
  • 💡 Experience with CI/CD, release management, and modern SDLC practices.
  • 💡 Solid understanding of AWS networking and cloud infrastructure fundamentals.
  • 💡 Experience with platform-level monitoring, troubleshooting, and performance tuning.
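
For illustration, a minimal PySpark sketch combining the Spark, Delta Lake, and Python/SQL skills above. The bucket paths, table layout, and column names are hypothetical, not taken from the job listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw events from a hypothetical Delta path.
events = spark.read.format("delta").load("s3://example-bucket/raw/loan_events")

# Aggregate to one row per day and portfolio.
daily = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("event_date", "portfolio_id")
    .agg(F.count("*").alias("event_count"),
         F.sum("amount").alias("total_amount"))
)

# Write back as a partitioned Delta table.
(daily.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-bucket/curated/daily_loan_summary"))
```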

Preferred Skills

  • 💡 Databricks certifications (e.g., Data Engineer Associate/Professional).
  • 💡 Experience with Airflow or other orchestration and IaC tools (a toy DAG sketch follows this list).
  • 💡 Knowledge of Unity Catalog or other data governance platforms.
  • 💡 Exposure to regulated industries and enterprise security compliance.
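
As a sketch of the orchestration experience mentioned above, here is a toy Airflow 2.x DAG with two placeholder tasks. The DAG id, schedule, and commands are assumptions for illustration; in practice the tasks might trigger Databricks jobs through a provider operator.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_refresh",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Placeholder tasks; real pipelines would do actual extract/load work.
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    load = BashOperator(task_id="load", bash_command="echo load")

    extract >> load
```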

📊 Portfolio & Project Requirements

  • πŸ“ Enhancement Note: While not explicitly stated, demonstrating experience with big data platforms, cloud infrastructure, and data processing workflows in your portfolio will be crucial for this role.

  • πŸ’‘ Include projects showcasing your ability to manage, optimize, and troubleshoot big data infrastructure on AWS and Databricks.

  • πŸ’‘ Highlight your experience with CI/CD pipelines, release management, and platform governance.

  • πŸ’‘ Demonstrate your understanding of AWS networking and cloud infrastructure by showcasing relevant projects or case studies.

💵 Compensation & Benefits

📝 Enhancement Note: Salary and benefits information is not provided in the job listing. According to Glassdoor, the average salary for a Big Data Engineer in Athens, Greece, is around €45,000 - €60,000 per year. However, this can vary depending on factors such as experience, skills, and the specific company.

💡 Salary Range: €45,000 - €60,000 per year (Estimated based on market research and Glassdoor data)

💡 Benefits:

  • Competitive benefits package (Not specified in the job listing)
  • Opportunity to work in a dynamic and growing financial services company

💡 Working Hours:

  • Full-time position with standard working hours (Monday-Friday, 9:00 AM - 5:00 PM, with a 1-hour lunch break)
  • Occasional on-call duties may be required to provide 24/7 support for critical systems

🎯 Team & Company Context

🏢 Company Culture

💡 Industry: Financial Services

  • Cepal Hellas Financial Services S.A. is a leading financial services provider in Greece, offering a wide range of banking, investment, and insurance products.

💡 Company Size: Medium-sized company (Approximately 500-1,000 employees)

  • This company size allows for a structured yet flexible work environment, with opportunities for growth and collaboration.

💡 Founded: 1998

  • Cepal Hellas has a well-established presence in the Greek financial market, with a strong reputation for innovation and customer focus.

💡 Team Structure:

  • The team consists of experienced professionals in big data, cloud infrastructure, and data engineering, working collaboratively to ensure the smooth operation of data platforms and analytics workloads.

💡 Development Methodology:

  • Agile/Scrum methodologies are employed for project management and development, with regular sprint planning, code reviews, and testing practices.
  • CI/CD pipelines are used to automate deployment and ensure continuous integration and delivery of data workflows.

💡 Company Website: Cepal Hellas Financial Services S.A.

📈 Career & Growth Analysis

💡 Career Level: Mid-Senior level (2-5 years)

  • This role requires a solid understanding of big data technologies, cloud infrastructure, and data processing workflows, with a proven track record in managing and optimizing data platforms.

💡 Reporting Structure:

  • The role reports directly to the Head of Data Engineering or a similar position, with close collaboration with data engineering, data science, and IT teams.

💡 Technical Impact:

  • The Big Data & Cloud Infrastructure Engineer plays a critical role in maintaining the performance, security, and reliability of data platforms, ensuring seamless operation of analytics and data engineering workloads.
  • This role has a significant impact on the company's ability to derive insights from data, make data-driven decisions, and provide innovative financial services to customers.

💡 Growth Opportunities:

  • 💡 Technical Skill Development: Expand your expertise in big data technologies, cloud infrastructure, and data governance, with opportunities to work on cutting-edge projects and collaborate with experienced professionals.
  • 💡 Technical Leadership: Demonstrate strong performance and leadership skills to take on more responsibilities, mentor junior team members, and contribute to the development of data engineering best practices within the organization.
  • 💡 Career Progression: Proven success in this role can lead to career advancement opportunities, such as senior roles in data engineering, data architecture, or data management.

🌐 Work Environment

💡 Office Type: Modern, collaborative office space with state-of-the-art technology and amenities

  • The office is designed to foster collaboration and innovation, with open-plan workspaces, meeting rooms, and breakout areas.

💡 Office Location(s): Athens, Greece

  • The office is conveniently located in the heart of Athens, with easy access to public transportation and nearby amenities.

💡 Workspace Context:

  • 💡 Collaborative workspace: The office encourages cross-functional collaboration between teams, with regular meetings, workshops, and knowledge-sharing sessions.
  • 💡 State-of-the-art technology: The workplace is equipped with the latest hardware and software tools to ensure optimal performance and productivity.
  • 💡 Flexible work arrangements: While the role is on-site, the company offers flexible work arrangements, such as remote work options and flexible hours, to support work-life balance.

💡 Work Schedule:

  • Standard working hours: Monday-Friday, 9:00 AM - 5:00 PM, with a 1-hour lunch break
  • Occasional on-call duties may be required to provide 24/7 support for critical systems
  • Flexible working hours and remote work options may be available, depending on the team's needs and individual arrangements

📄 Application & Technical Interview Process

💡 Interview Process:

  • 💡 Technical Assessment: A hands-on technical assessment to evaluate your skills in big data technologies, cloud infrastructure, and data processing workflows.
  • 💡 Behavioral Interview: A structured interview to assess your problem-solving skills, communication, and cultural fit.
  • 💡 Final Interview: A final interview with senior leadership to discuss your career aspirations, technical vision, and fit within the organization.

💡 Portfolio Review Tips:

  • 💡 Project Case Studies: Highlight your experience with big data platforms, cloud infrastructure, and data processing workflows through comprehensive project case studies.
  • 💡 Technical Documentation: Include clear and concise technical documentation, demonstrating your ability to manage, optimize, and troubleshoot big data infrastructure.
  • 💡 Performance Optimization: Showcase your understanding of performance optimization techniques, with examples of how you've improved the efficiency and scalability of data platforms.

💡 Technical Challenge Preparation:

  • 💡 AWS and Databricks: Brush up on your knowledge of AWS services, Databricks, and Apache Spark, focusing on platform management, optimization, and governance.
  • 💡 Problem-Solving: Practice problem-solving exercises related to big data infrastructure, cloud networking, and data processing workflows.
  • 💡 Communication: Prepare for questions about your communication skills, teamwork, and ability to collaborate with stakeholders.

💡 ATS Keywords:

  • Big Data, AWS, Databricks, Delta Lake, Apache Spark, Python, SQL, CI/CD, Release Management, SDLC, AWS Networking, Performance Tuning, Troubleshooting, Security Compliance, Data Governance, Monitoring

🛠 Technology Stack & Infrastructure

💡 Frontend Technologies: (Not applicable for this role)

💡 Backend & Server Technologies:

  • 💡 AWS: Proficiency in AWS services, including EC2, S3, RDS, and IAM, is essential for this role.
  • 💡 Databricks: Experience with Databricks, including cluster management, workspace configuration, and user access, is crucial (a REST-API sketch follows this list).
  • 💡 Apache Spark: Familiarity with Apache Spark, including data processing, transformation, and analysis, is required.
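
As a sketch of routine workspace administration, here is a minimal call to the Databricks REST API's clusters/list endpoint. The workspace URL and token are placeholders read from environment variables, and the cost-review heuristic in the loop is illustrative only.

```python
import os
import requests

# Hypothetical configuration, e.g. https://<workspace>.cloud.databricks.com;
# in practice the token would come from a secrets manager.
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Surface running clusters as candidates for a cost review.
for cluster in resp.json().get("clusters", []):
    if cluster.get("state") == "RUNNING":
        print(cluster["cluster_id"], cluster.get("cluster_name"))
```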

💡 Development & DevOps Tools:

  • 💡 CI/CD: Experience with CI/CD pipelines, such as Jenkins or GitLab CI, is essential for managing and automating data workflows.
  • 💡 Infrastructure as Code (IaC): Familiarity with IaC tools, such as Terraform or CloudFormation, is beneficial for managing cloud infrastructure.
  • 💡 Monitoring Tools: Experience with monitoring tools, such as Prometheus, Grafana, or AWS CloudWatch, is crucial for ensuring the performance and availability of data platforms (a boto3 alarm sketch follows this list).
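
To ground the monitoring bullet, a minimal boto3 sketch that creates a CloudWatch alarm on EC2 CPU utilization. The alarm name, region, instance ID, thresholds, and SNS topic are hypothetical values, not details from the listing.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")

# Alarm if average CPU on a (hypothetical) instance stays above 85%
# for three consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="example-spark-driver-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-central-1:123456789012:example-alerts"],
)
```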

👥 Team Culture & Values

💡 Team Values:

  • 💡 Customer Focus: Cepal Hellas places a strong emphasis on customer satisfaction and data-driven decision-making to provide innovative financial services.
  • 💡 Collaboration: The company fosters a collaborative work environment, encouraging cross-functional teamwork and knowledge-sharing.
  • 💡 Innovation: Cepal Hellas values innovation and continuous improvement, with a strong focus on staying ahead of industry trends and adopting cutting-edge technologies.

💡 Collaboration Style:

  • 💡 Cross-Functional Integration: The team works closely with data engineering, data science, and IT teams to ensure the smooth operation of data platforms and analytics workloads.
  • 💡 Code Review Culture: Regular code reviews and pair programming sessions are employed to ensure the quality and maintainability of data processing workflows.
  • 💡 Knowledge Sharing: The company encourages knowledge-sharing and continuous learning, with regular training sessions, workshops, and mentoring opportunities.

⚑ Challenges & Growth Opportunities

💡 Technical Challenges:

  • 💡 Platform Optimization: Continuously optimize data platforms for performance, scalability, and cost efficiency, with a focus on AWS and Databricks.
  • 💡 Security and Compliance: Ensure the security and compliance of data platforms, with a strong focus on data governance, access controls, and regulatory standards.
  • 💡 Troubleshooting: Develop your troubleshooting skills to quickly identify, diagnose, and resolve operational issues and performance bottlenecks.
  • 💡 Emerging Technologies: Stay up-to-date with emerging big data technologies, cloud infrastructure trends, and data processing workflows to drive innovation and continuous improvement.

💡 Learning & Development Opportunities:

  • 💡 Technical Skill Development: Expand your expertise in big data technologies, cloud infrastructure, and data governance through training, workshops, and hands-on projects.
  • 💡 Conference Attendance: Attend industry conferences, webinars, and online events to stay informed about the latest trends and best practices in big data and cloud infrastructure.
  • 💡 Technical Mentorship: Seek mentorship opportunities from experienced professionals within the organization to accelerate your learning and career growth.

💡 Interview Preparation

💡 Technical Questions:

  • 💡 AWS and Databricks: As in the challenge preparation above, expect questions on AWS services, Databricks, and Apache Spark, with an emphasis on platform management, optimization, and governance.
  • 💡 Problem-Solving: Be ready to walk through problem-solving scenarios involving big data infrastructure, cloud networking, and data processing workflows.
  • 💡 Security and Compliance: Review AWS security best practices, data governance, and regulatory standards to ensure the security and compliance of data platforms.

💡 Company & Culture Questions:

  • 💡 Company Values: Research Cepal Hellas' values and mission, and be prepared to discuss how your skills and experience align with the company's goals.
  • 💡 Team Dynamics: Familiarize yourself with the team's structure, dynamics, and collaboration style to ensure a good cultural fit.
  • 💡 Career Growth: Prepare for questions about your long-term career goals and how this role can support your professional development.

💡 Portfolio Presentation Strategy:

  • 💡 Project Case Studies: Walk interviewers through the case studies noted in the portfolio review tips above, focusing on big data platforms, cloud infrastructure, and data processing workflows.
  • 💡 Technical Documentation: Use your documentation to demonstrate how you manage, optimize, and troubleshoot big data infrastructure.
  • 💡 Performance Optimization: Quantify how your tuning work improved the efficiency and scalability of data platforms.

📌 Application Steps

To apply for this Big Data & Cloud Infrastructure Engineer position:

  • 💡 Submit your application through the application link provided in the job listing.
  • 💡 Customize your resume and portfolio to highlight your relevant skills and experience with big data technologies, cloud infrastructure, and data processing workflows.
  • 💡 Prepare for technical assessments and interviews by brushing up on your knowledge of AWS, Databricks, and Apache Spark, and practicing problem-solving exercises.
  • 💡 Research Cepal Hellas' company culture, values, and mission to ensure a strong cultural fit and alignment with your career goals.

⚠️ Important Notice: This enhanced job description includes AI-generated insights and big data industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.


Application Requirements

BSc or MSc degree in Computer Science, Data Engineering, or a related field with at least 2 years of experience in big data operations. Proficient with AWS, Databricks, Delta Lake, and Apache Spark in production environments.