Especialista em Engenharia de Dados Cloud – AWS & Databricks

NEORIS
Full-time · São Paulo, Brazil

📍 Job Overview

  • Job Title: Especialista em Engenharia de Dados Cloud – AWS & Databricks
  • Company: NEORIS
  • Location: São Paulo, São Paulo, Brazil
  • Job Type: Full-time
  • Category: Data Engineer
  • Date Posted: 2025-06-17
  • Experience Level: Mid-Senior level (5-10 years)
  • Remote Status: Hybrid (3 days per week in the office)

🚀 Role Summary

  • Design and implement efficient, secure, and scalable data pipelines in AWS, focusing on AWS services like S3, Glue, Lambda, Redshift, and Athena.
  • Develop data processing solutions in Databricks, leveraging PySpark, notebooks, and Delta Lake, with a focus on performance and reusability.
  • Collaborate with data science and analytics teams to ensure the availability of reliable and well-documented data.
  • Work with business areas and architecture teams to ensure adherence to corporate data guidelines.

📝 Enhancement Note: This role requires a strong background in AWS and Databricks, as well as experience in data modeling, large-scale data ingestion, and DataOps practices. Familiarity with data catalogs and versioning of pipelines is also essential.

💻 Primary Responsibilities

  • Data Pipeline Design and Implementation: Design and implement efficient, secure, and scalable data pipelines in AWS, leveraging services like S3, Glue, Lambda, Redshift, and Athena.
  • Data Processing in Databricks: Develop data processing solutions in Databricks using PySpark, notebooks, and Delta Lake, focusing on performance and reusability (see the PySpark sketch after this list).
  • Data Integration: Integrate data from various structured and unstructured data sources.
  • Data Governance: Structure and maintain data catalogs with organized metadata, aligned with best governance practices.
  • Collaboration: Work with data science and analytics teams to ensure the availability of reliable and well-documented data.
  • Stakeholder Communication: Collaborate with business areas and architecture teams to ensure adherence to corporate data guidelines.
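
As an illustrative (not prescriptive) sketch of the Databricks responsibility above, the snippet below reads raw JSON from S3 with PySpark, applies basic cleansing, and writes a partitioned Delta table. Bucket names, paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical raw zone: JSON events landed in S3 by an upstream ingestion job.
raw_df = spark.read.json("s3://example-raw-bucket/events/")

# Basic cleansing: drop duplicates, parse timestamps, keep only valid records.
clean_df = (
    raw_df.dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .withColumn("event_date", F.to_date("event_ts"))
          .filter(F.col("event_id").isNotNull())
)

# Write a partitioned Delta table in the curated zone for downstream analytics.
(clean_df.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .save("s3://example-curated-bucket/events_delta/"))
```

In a production pipeline the write would usually be incremental (for example with a Delta MERGE) rather than a full overwrite; an incremental-load sketch appears later in this description.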

📝 Enhancement Note: This role requires a strong focus on data pipeline design, data processing in Databricks, and collaboration with various teams to ensure data reliability and governance.

🎓 Skills & Qualifications

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

Experience: Proven experience (5-10 years) in data engineering, with a focus on AWS and Databricks.

Required Skills:

  • Proven experience with AWS (S3, Glue, Lambda, Redshift, Athena, etc.)
  • Strong knowledge of Databricks, including PySpark, notebooks, and Delta Lake
  • Experience with data catalogs like AWS Glue Data Catalog, Alation, Collibra, or similar
  • Solid knowledge of data modeling, large-scale data ingestion, and data processing
  • Familiarity with DataOps and pipeline versioning
  • Proficiency in Python and SQL

Preferred Skills:

  • Experience working in a hybrid environment
  • Knowledge of AWS certification programs
  • Familiarity with data governance best practices

📝 Enhancement Note: This role requires a strong background in AWS and Databricks, as well as experience in data modeling, large-scale data ingestion, and DataOps practices. Familiarity with data catalogs and pipeline versioning is also essential.

📊 Portfolio & Project Requirements

Portfolio Essentials:

  • Data Pipeline Projects: Showcase your experience in designing and implementing efficient, secure, and scalable data pipelines in AWS, highlighting the AWS services you've used.
  • Databricks Projects: Demonstrate your data processing skills in Databricks using PySpark, notebooks, and Delta Lake, focusing on performance and reusability.
  • Data Integration Projects: Highlight your experience integrating data from various structured and unstructured data sources.
  • Data Governance Projects: Showcase your ability to structure and maintain data catalogs with organized metadata, aligned with best governance practices.

Technical Documentation:

  • Code Quality: Demonstrate your commitment to code quality, commenting, and documentation standards.
  • Version Control: Showcase your experience with version control systems and deployment processes.
  • Testing Methodologies: Highlight your familiarity with testing methodologies, performance metrics, and optimization techniques (a minimal example follows this list).
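
To make the testing expectation concrete, here is a minimal pytest sketch for a PySpark transformation. The function name, schema, and values are hypothetical and stand in for whatever logic your pipelines actually apply.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def add_event_date(df):
    # Hypothetical transformation under test: derive a date column from a timestamp.
    return df.withColumn("event_date", F.to_date("event_ts"))

@pytest.fixture(scope="session")
def spark():
    # A local SparkSession is enough for unit-testing transformation logic.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_event_date(spark):
    df = spark.createDataFrame([("e1", "2025-06-17 10:00:00")], ["event_id", "event_ts"])
    result = add_event_date(df.withColumn("event_ts", F.to_timestamp("event_ts")))
    assert result.select("event_date").first()[0].isoformat() == "2025-06-17"
```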

📝 Enhancement Note: This role requires a strong focus on data pipeline design, data processing in Databricks, and data integration. Be prepared to showcase your experience in these areas during the interview process.

💵 Compensation & Benefits

Salary Range: The salary range for this role in São Paulo, Brazil, is approximately R$12,000 - R$18,000 per month, based on experience and qualifications. This estimate is based on regional market data and industry standards for data engineering roles.

Benefits:

  • Vale Refeição and Vale Alimentação: Monthly benefits for food expenses, both inside and outside the home.
  • Previdência Privada: Supplementary retirement planning.
  • Plano de Saúde: High-quality medical coverage for you and your legal dependents.
  • Plano Odontológico: Comprehensive dental care with a wide network of providers.
  • Seguro de Vida: Life insurance for you and your loved ones.
  • Aulas de Inglês/Espanhol: Language learning opportunities.
  • Wellhub: Benefits and discounts for gym memberships and wellness programs.
  • Auxílio Creche: Financial support for childcare expenses.
  • Neoris Global Campus: Continuous learning and professional growth opportunities.

Working Hours: The standard workweek is 40 hours, with flexible hours for deployment windows, maintenance, and project deadlines.

📝 Enhancement Note: The salary range provided is an estimate based on regional market data and industry standards for data engineering roles. Actual salary may vary based on experience and qualifications.

🎯 Team & Company Context

🏢 Company Culture

Industry: Digital transformation and IT consulting. NEORIS is a digital accelerator that helps businesses enter the future, with over 20 years of experience as a digital partner for some of the world's leading companies.

Company Size: NEORIS has over 4,000 professionals across 11 countries, creating a multicultural startup-like environment that fosters innovation and continuous learning.

Founded: NEORIS was founded in 2001, with a mission to help businesses enter the future by providing digital transformation services.

Team Structure:

  • The data engineering team consists of specialists in AWS and Databricks, working together to design and implement efficient, secure, and scalable data pipelines.
  • The team follows an Agile/Scrum methodology, with regular sprint planning and collaboration sessions.
  • Cross-functional collaboration with design, marketing, and business teams is essential for ensuring data reliability and governance.

Development Methodology:

  • NEORIS follows Agile/Scrum methodologies for software development, with regular sprint planning and collaboration sessions.
  • Code reviews, testing, and quality assurance practices are essential for maintaining high code quality and performance.
  • Deployment strategies, CI/CD pipelines, and server management are crucial for ensuring efficient and reliable data processing.

Company Website: NEORIS Website

📝 Enhancement Note: NEORIS is a digital accelerator with a strong focus on data engineering and digital transformation. The company culture emphasizes innovation, continuous learning, and collaboration.

📈 Career & Growth Analysis

Data Engineer Career Level: This role is at the mid-senior level, requiring a strong background in AWS and Databricks, as well as experience in data modeling, large-scale data ingestion, and DataOps practices. The primary responsibilities involve designing and implementing efficient, secure, and scalable data pipelines, as well as developing data processing solutions in Databricks.

Reporting Structure: This role reports directly to the Data Engineering Manager, collaborating with data science and analytics teams, as well as business areas and architecture teams to ensure data reliability and governance.

Technical Impact: The technical impact of this role is significant, as it involves designing and implementing data pipelines that support the company's digital transformation initiatives. The data engineer will work closely with various teams to ensure data reliability, performance, and governance.

Growth Opportunities:

  • Technical Growth: As a mid-senior level data engineer, there are ample opportunities for technical growth, including AWS certification programs, Databricks certifications, and continuous learning through the Neoris Global Campus.
  • Leadership Potential: With experience and strong performance, there is potential for growth into technical leadership roles, such as Data Engineering Manager or Senior Data Engineer.
  • Architecture Decisions: As the company continues to grow and evolve, there will be opportunities to make architecture decisions that impact the overall data landscape.

📝 Enhancement Note: This role offers significant growth opportunities for technical professionals looking to expand their skills in AWS and Databricks, as well as those interested in pursuing leadership and architecture roles.

🌐 Work Environment

Office Type: NEORIS has a hybrid work environment, with employees working from the office three days a week and remotely for the remaining two days.

Office Location(s): The primary office location for this role is in São Paulo, with the option to work remotely for two days per week.

Workspace Context:

  • Collaborative Workspace: The hybrid work environment encourages collaboration and interaction with team members, fostering a culture of knowledge sharing and continuous learning.
  • Development Tools: NEORIS provides the necessary tools for data engineering, including access to AWS services, Databricks, and other relevant software.
  • Cross-Functional Collaboration: The data engineering team works closely with other teams, such as data science, analytics, design, marketing, and business, to ensure data reliability and governance.

Work Schedule: The standard workweek is 40 hours, with flexible hours for deployment windows, maintenance, and project deadlines. The hybrid work arrangement allows for some flexibility in scheduling.

📝 Enhancement Note: NEORIS offers a hybrid work environment that balances collaboration and flexibility, with a focus on knowledge sharing and continuous learning.

📄 Application & Technical Interview Process

Interview Process:

  1. Technical Assessment: The first step in the interview process is a technical assessment, focusing on AWS and Databricks skills, as well as data modeling, large-scale data ingestion, and DataOps practices.
  2. Cultural Fit Assessment: The second step involves an assessment of cultural fit, focusing on NEORIS's values and work environment.
  3. Final Evaluation: The final step is a comprehensive evaluation of the candidate's skills, experience, and cultural fit.

Portfolio Review Tips:

  • Data Pipeline Projects: Highlight your experience in designing and implementing efficient, secure, and scalable data pipelines in AWS, focusing on the AWS services you've used.
  • Databricks Projects: Showcase your data processing skills in Databricks using PySpark, notebooks, and Delta Lake, focusing on performance and reusability.
  • Data Integration Projects: Demonstrate your ability to integrate data from various structured and unstructured data sources.
  • Data Governance Projects: Showcase your experience in structuring and maintaining data catalogs with organized metadata, aligned with best governance practices.

Technical Challenge Preparation:

  • AWS Challenges: Brush up on your AWS skills, focusing on services like S3, Glue, Lambda, Redshift, and Athena.
  • Databricks Challenges: Familiarize yourself with Databricks, including PySpark, notebooks, and Delta Lake, focusing on performance and reusability.
  • Data Modeling Challenges: Review your data modeling skills, focusing on large-scale data ingestion and processing (see the incremental-load sketch after this list).
  • DataOps Challenges: Prepare for challenges related to pipeline versioning, deployment processes, and server management.
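
A common pattern worth rehearsing for these challenges is an incremental upsert into a Delta table with MERGE. The sketch below assumes the delta-spark package (bundled with the Databricks runtime); table paths and keys are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical daily increment landed by an upstream AWS Glue or Lambda job.
updates = spark.read.parquet("s3://example-staging-bucket/customers/2025-06-17/")

# Existing curated Delta table that the increment should be merged into.
target = DeltaTable.forPath(spark, "s3://example-curated-bucket/customers_delta/")

# Upsert: update existing customers by key, insert new ones (safe to re-run).
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```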

ATS Keywords: [AWS, Databricks, PySpark, Data Catalogs, Data Modeling, Data Ingestion, Data Processing, DataOps, Python, SQL, Agile, Scrum, Hybrid Work Environment, Data Governance, Data Pipeline, Data Integration, Data Reliability, Data Performance]

📝 Enhancement Note: The interview process for this role focuses on technical skills, cultural fit, and a comprehensive evaluation of the candidate's qualifications. Be prepared to showcase your experience in data engineering, with a strong focus on AWS and Databricks.

🛠 Technology Stack & Web Infrastructure

AWS Services:

  • Storage: Amazon S3, Amazon Glacier
  • Data Processing: AWS Glue, AWS Lambda, Amazon Redshift, Amazon Athena (a Lambda-to-Glue sketch follows this list)
  • Data Integration: AWS AppSync, Amazon API Gateway, Amazon EventBridge
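
As a hedged illustration of how these services are often wired together, the Lambda handler below starts an AWS Glue job whenever a new object lands in S3 (via an S3 event notification). The job name, bucket, and argument names are hypothetical.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event notification: extract the object key that triggered this run.
    record = event["Records"][0]["s3"]
    key = record["object"]["key"]

    # Kick off a (hypothetical) Glue ETL job, passing the new object as an argument.
    response = glue.start_job_run(
        JobName="example-curation-job",
        Arguments={"--input_key": key},
    )
    return {"JobRunId": response["JobRunId"]}
```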

Databricks:

  • Data Processing: PySpark, notebooks, Delta Lake
  • Data Integration: Databricks Connect, Spark SQL
  • Data Governance: Unity Catalog for cataloging, lineage, and access control; Delta Live Tables expectations for data quality checks

Development & DevOps Tools:

  • Version Control: Git, GitHub, GitLab
  • CI/CD Pipelines: Jenkins, CircleCI, GitHub Actions
  • Infrastructure as Code: AWS CloudFormation, Terraform, Ansible (a CloudFormation deployment sketch follows this list)
  • Monitoring: AWS CloudWatch, Databricks Monitoring, Prometheus, Grafana
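
As one illustrative option among the tools above, the snippet below deploys a CloudFormation stack from Python with boto3. The stack and template names are hypothetical, and a Terraform or CDK definition would serve the same purpose.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Hypothetical template describing the pipeline's buckets, Glue jobs, and IAM roles.
with open("data-pipeline-stack.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="data-pipeline-dev",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required when the template creates IAM roles
)

# Block until the stack finishes creating (raises if creation fails).
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="data-pipeline-dev")
```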

📝 Enhancement Note: This role requires a strong background in AWS and Databricks, as well as experience with relevant development and DevOps tools.

👥 Team Culture & Values

NEORIS Values:

  • Innovation: NEORIS fosters a culture of innovation, encouraging employees to think outside the box and challenge the status quo.
  • Collaboration: NEORIS values collaboration, both within the data engineering team and with other teams, such as data science, analytics, design, marketing, and business.
  • Continuous Learning: NEORIS emphasizes continuous learning, providing opportunities for professional development and growth.
  • Customer Focus: NEORIS is committed to delivering high-quality solutions that meet the needs of its clients.

Collaboration Style:

  • Cross-Functional Integration: The data engineering team works closely with other teams, such as data science, analytics, design, marketing, and business, to ensure data reliability and governance.
  • Code Review Culture: NEORIS emphasizes code reviews and peer programming to maintain high code quality and performance.
  • Knowledge Sharing: NEORIS fosters a culture of knowledge sharing, with regular team meetings, workshops, and training sessions.

📝 Enhancement Note: NEORIS values innovation, collaboration, continuous learning, and customer focus, with a strong emphasis on knowledge sharing and cross-functional collaboration.

⚡ Challenges & Growth Opportunities

Technical Challenges:

  • AWS Challenges: Design and implement efficient, secure, and scalable data pipelines in AWS, focusing on AWS services like S3, Glue, Lambda, Redshift, and Athena.
  • Databricks Challenges: Develop data processing solutions in Databricks using PySpark, notebooks, and Delta Lake, focusing on performance and reusability.
  • Data Integration Challenges: Integrate data from various structured and unstructured data sources, ensuring data reliability and governance.
  • Data Governance Challenges: Structure and maintain data catalogs with organized metadata, aligned with best governance practices.

Learning & Development Opportunities:

  • AWS Certification: Pursue AWS certification programs to enhance your skills and knowledge in AWS services.
  • Databricks Certification: Obtain Databricks certifications to demonstrate your expertise in Databricks and related technologies.
  • Neoris Global Campus: Take advantage of NEORIS's learning and development platform, which offers continuous learning opportunities and professional growth.

📝 Enhancement Note: This role offers significant technical challenges and learning opportunities for data engineers looking to expand their skills in AWS and Databricks, as well as those interested in pursuing professional development and growth.

💡 Interview Preparation

Technical Questions:

  • AWS Questions: Be prepared to answer questions about AWS services like S3, Glue, Lambda, Redshift, and Athena, as well as data modeling, large-scale data ingestion, and DataOps practices (see the Athena sketch after this list).
  • Databricks Questions: Brush up on your Databricks skills, including PySpark, notebooks, Delta Lake, and data processing techniques.
  • Data Integration Questions: Prepare for questions about integrating data from various structured and unstructured data sources, as well as data governance best practices.
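
For the AWS questions above, it helps to have run queries programmatically as well as in the console. The sketch below submits an Athena query with boto3 against a hypothetical Glue Data Catalog database; the database, table, and output bucket names are assumptions.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical ad-hoc validation query against a Glue Data Catalog table.
response = athena.start_query_execution(
    QueryString=(
        "SELECT event_date, COUNT(*) AS events "
        "FROM curated.events GROUP BY event_date"
    ),
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution with this id for status
```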

Company & Culture Questions:

  • NEORIS Values: Familiarize yourself with NEORIS's values, including innovation, collaboration, continuous learning, and customer focus.
  • Work Environment: Prepare for questions about NEORIS's hybrid work environment and the balance between collaboration and flexibility.
  • Team Dynamics: Understand the dynamics of the data engineering team and its collaboration with other teams, such as data science, analytics, design, marketing, and business.

Portfolio Presentation Strategy:

  • Data Pipeline Projects: Highlight your experience in designing and implementing efficient, secure, and scalable data pipelines in AWS, focusing on the AWS services you've used.
  • Databricks Projects: Showcase your data processing skills in Databricks using PySpark, notebooks, and Delta Lake, focusing on performance and reusability.
  • Data Integration Projects: Demonstrate your ability to integrate data from various structured and unstructured data sources, as well as your experience in data governance best practices.

📝 Enhancement Note: The interview process for this role focuses on technical skills, cultural fit, and a comprehensive evaluation of the candidate's qualifications. Be prepared to showcase your experience in data engineering, with a strong focus on AWS and Databricks.

📌 Application Steps

To apply for this data engineering role at NEORIS:

  1. Submit Your Application: Click the application link provided in the job description to submit your resume and portfolio.
  2. Tailor Your Portfolio: Customize your portfolio to highlight your experience in data pipeline design, data processing in Databricks, data integration, and data governance.
  3. Optimize Your Resume: Optimize your resume for data engineering roles, focusing on project highlights and technical skills in AWS, Databricks, Python, and SQL.
  4. Prepare for Technical Challenges: Brush up on your AWS and Databricks skills, focusing on data modeling, large-scale data ingestion, and DataOps practices.
  5. Research NEORIS: Familiarize yourself with NEORIS's values, work environment, and team dynamics to ensure a strong cultural fit.

⚠️ Important Notice: This enhanced job description includes AI-generated insights and data engineering industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.

Application Requirements

Proven experience with AWS and Databricks is required. Strong knowledge of data modeling, large-scale data ingestion, and DataOps practices is essential.