Specialist DevOps Engineer - Databricks & Terraform
📍 Job Overview
- Job Title: Specialist DevOps Engineer - Databricks & Terraform
- Company: Nasdaq
- Location: Bangalore, Karnataka, India
- Job Type: Full-Time (Hybrid)
- Category: DevOps Engineer
- Date Posted: June 11, 2025
- Experience Level: Mid-Senior level (5-10 years)
🚀 Role Summary
- Lead DevOps processes to support Data & Insights initiatives, focusing on Databricks, Microsoft Power BI, Microsoft SQL Server, Informatica, Azure, AWS, and Windows Server environments.
- Collaborate with multi-functional teams to optimize DevOps processes, infrastructure management, and data platform operations.
- Ensure data governance, access control, and security compliance for all data environments, both on-premises and in the cloud.
📝 Enhancement Note: This role requires a strong background in DevOps with a focus on Databricks and Terraform, as well as experience with cloud platforms (Azure and AWS) and hybrid cloud deployments. Familiarity with Profisee MDM is a plus.
💻 Primary Responsibilities
- CI/CD Pipeline and Configuration Management:
- Design, implement, and manage CI/CD pipelines using GitHub/GitLab and JIRA to support development, testing, and deployment.
- Ensure consistent and reliable deployment processes through configuration management and automation frameworks.
- Data Platform and Infrastructure Management:
- Manage and optimize infrastructure for Microsoft Power BI, SQL Server, Databricks, and Informatica to ensure scalability and performance.
- Collaborate with data engineering and analytics teams to deploy and maintain robust data pipelines and analytics solutions.
- Administer Windows Server environments, ensuring high availability, security, and stability.
- Cloud and Hybrid Infrastructure:
- Lead deployments on both Azure and AWS platforms, focusing on hybrid cloud solutions and best practices for scalability and cost optimization.
- Monitoring and Incident Response:
- Establish and maintain monitoring frameworks to ensure system health, performance, and reliability across all platforms.
- Initiate incident response and root cause analysis, developing solutions to prevent future occurrences and reduce downtime.
- Teamwork and Process Improvement:
- Work closely with multi-functional teams to support and optimize DevOps processes, infrastructure management, and data platform operations.
- Identify and implement best practices across DevOps, infrastructure management, and data platform operations.
- Governance and Compliance:
- Ensure data governance, access control, and security compliance for all data environments, both on-premises and in the cloud.
- Work with the Data Governance team on data quality, lineage, and master data management (preferably with Profisee MDM).
🎓 Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Relevant certifications in DevOps, cloud platforms (Azure, AWS), or related technologies are a plus.
Experience: 6+ years of experience in DevOps with a strong focus on Databricks, Microsoft Power BI, Microsoft SQL Server, and Informatica. Hands-on experience with Databricks and Terraform is a must.
Required Skills:
- Proficiency with CI/CD tools such as GitHub, GitLab, JIRA, and related automation tools.
- Hands-on experience with cloud platforms, specifically Microsoft Azure and AWS, and expertise in hybrid cloud deployments.
- Familiarity with monitoring and alerting tools and frameworks to supervise performance and system health.
- Familiarity with scripting in Python and with notebook environments (e.g., Databricks notebooks).
- Strong understanding of Windows Server management and administration.
- Knowledge of SQL & database concepts.
Preferred Skills:
- Experience with Master Data Management (MDM) platforms, ideally Profisee.
- Certifications such as Azure DevOps Engineer, AWS Certified DevOps Engineer, or similar.
Soft Skills:
- Proven track record of leading, mentoring, and developing high-performing teams.
- Excellent communication and interpersonal skills to work effectively with both technical and non-technical collaborators.
- Strong analytical skills, problem-solving abilities, and attention to detail.
- Proven experience in incident management, root cause analysis, and implementing preventive measures.
📊 Portfolio & Project Requirements
Portfolio Essentials:
- Demonstrate experience with Databricks, Terraform, and other relevant technologies through live projects or case studies.
- Showcase proficiency in CI/CD pipeline management, cloud deployments, and infrastructure optimization.
- Highlight problem-solving skills and incident management experiences with relevant examples and outcomes.
Technical Documentation:
- Provide detailed project documentation that demonstrates your code quality, commenting, and documentation standards.
- Demonstrate version control, deployment processes, and server configuration management skills.
- Showcase testing methodologies, performance metrics, and optimization techniques used in your projects.
📝 Enhancement Note: As this role requires a strong focus on Databricks and Terraform, ensure that your portfolio highlights these technologies and demonstrates your proficiency in managing data platforms and infrastructure.
💵 Compensation & Benefits
Salary Range: INR 1,200,000 - 1,800,000 per annum (Estimated based on market standards for a mid-senior level DevOps role in Bangalore)
Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- Collaborative and innovative work environment.
- Flexible work arrangements (Hybrid).
Working Hours: Full-time (40 hours/week) with flexible deployment windows and maintenance schedules.
📝 Enhancement Note: The salary range provided is an estimate based on market research for a mid-senior level DevOps role in Bangalore. The actual salary may vary depending on the candidate's experience and skills.
🎯 Team & Company Context
Company Culture:
- Industry: Financial technology and market infrastructure.
- Company Size: Large (10,000+ employees).
- Founded: 1971.
Team Structure:
- The DevOps team consists of specialists focusing on various technologies, including Databricks, Terraform, and cloud platforms (Azure and AWS).
- The team follows an Agile/Scrum methodology for development processes, with regular code reviews, testing, and quality assurance practices.
- Cross-functional collaboration with data engineering, application development, and business analytics teams is essential for supporting and optimizing DevOps processes.
Development Methodology:
- Agile/Scrum methodologies for development processes.
- Code review, testing, and quality assurance practices.
- Deployment strategies, CI/CD pipelines, and server management practices to ensure high availability, security, and stability.
Company Website: https://www.nasdaq.com/
📝 Enhancement Note: Nasdaq is a large, established company with a strong focus on innovation and technology. The DevOps team plays a crucial role in supporting data and analytics initiatives, collaborating with various teams to optimize processes and infrastructure.
📈 Career & Growth Analysis
Career Level: Mid-Senior level DevOps Engineer, focusing on data and analytics infrastructure, cloud platforms, and hybrid deployments.
Reporting Structure: This role reports directly to the DevOps Manager and collaborates with data engineering, application development, and business analytics teams.
Technical Impact: The Specialist DevOps Engineer will have a significant impact on data and analytics infrastructure, ensuring scalability, performance, and reliability across various platforms. They will also contribute to incident response, root cause analysis, and preventive measures to minimize downtime.
Growth Opportunities:
- Technical Skill Development: Enhance skills in Databricks, Terraform, and emerging cloud technologies to advance in the DevOps career path.
- Technical Leadership: Develop leadership skills through mentoring, team management, and architecture decision-making opportunities.
- Career Progression: Grow into senior DevOps roles, focusing on data and analytics infrastructure, or explore other technical leadership paths within Nasdaq.
📝 Enhancement Note: This role offers significant growth opportunities for technical skill development and leadership within the DevOps team, focusing on data and analytics infrastructure.
🌐 Work Environment
Office Type: Hybrid work environment, with a combination of on-site and remote work arrangements.
Office Location(s): Bangalore, India.
Workspace Context:
- Collaborative workspace with dedicated areas for development, testing, and team meetings.
- Access to multiple monitors, testing devices, and development tools for optimal productivity.
- Cross-functional collaboration with data engineering, application development, and business analytics stakeholders to ensure user-focused solutions.
Work Schedule: Full-time (40 hours/week) with flexible deployment windows, maintenance schedules, and project deadlines.
📝 Enhancement Note: Nasdaq's hybrid work environment encourages collaboration and innovation, with a focus on user-centric solutions and continuous learning.
📄 Application & Technical Interview Process
Interview Process:
- Technical Assessment: A hands-on technical assessment focusing on Databricks, Terraform, and cloud platform (Azure and AWS) proficiency, as well as CI/CD pipeline management and infrastructure optimization.
- System Design Discussion: A system design discussion to evaluate your understanding of architecture, scalability, and performance optimization for data and analytics infrastructure.
- Team Fit Assessment: A team fit assessment to evaluate your communication skills, cultural alignment, and collaboration potential with the DevOps team and other stakeholders.
- Final Evaluation: A final evaluation to assess your overall fit for the role, considering technical skills, problem-solving abilities, and growth potential.
Portfolio Review Tips:
- Portfolio Structure: Organize your portfolio to showcase your experience with Databricks, Terraform, and other relevant technologies, highlighting your proficiency in managing data platforms and infrastructure.
- Project Case Studies: Present detailed case studies demonstrating your problem-solving skills, incident management experiences, and the positive impact you've made on data and analytics infrastructure projects.
- Code Quality Demonstration: Highlight your commitment to code quality, commenting, and documentation standards, showcasing your ability to maintain and optimize data platforms and infrastructure.
Technical Challenge Preparation:
- Challenge Format: Familiarize yourself with common DevOps challenges and exercises, focusing on Databricks, Terraform, and cloud platform (Azure and AWS) proficiency.
- Time Management: Practice time management skills to complete challenges within the given timeframe, ensuring efficient problem-solving and solution architecture.
- Communication: Develop clear and concise communication skills to articulate technical concepts and explain your approach to solving challenges.
ATS Keywords:
- Programming Languages: Python, Bash, SQL, PowerShell.
- Infrastructure & Data Platforms: Terraform, Databricks.
- Server Technologies: Microsoft Azure, AWS, Windows Server, Microsoft SQL Server, Informatica, Power BI.
- Databases: SQL Server, Azure SQL Database, AWS RDS, PostgreSQL.
- Tools: GitHub, GitLab, JIRA, Jenkins, Ansible, Puppet, Chef, Docker, Kubernetes, AWS CloudFormation, Azure Resource Manager (ARM), Terraform Cloud, Databricks Notebooks.
- Methodologies: Agile, Scrum, CI/CD, ITIL, DevOps.
- Soft Skills: Problem-solving, incident management, root cause analysis, preventive measures, teamwork, communication, leadership, mentoring, process improvement.
📝 Enhancement Note: Tailor your resume and portfolio to highlight the relevant ATS keywords for this DevOps role, focusing on Databricks, Terraform, and cloud platform (Azure and AWS) proficiency.
🛠 Technology Stack & Web Infrastructure
Frontend Technologies: Not applicable for this role.
Backend & Server Technologies:
- Databricks: Proficiency in Databricks is required for managing data processing, analytics, and machine learning workloads (see the sketch after this list).
- Terraform: Proficiency in Terraform is required for infrastructure as code (IaC), ensuring consistent and reliable deployments across cloud platforms (Azure and AWS).
- Cloud Platforms: Proficiency in Microsoft Azure and AWS is required for managing hybrid cloud deployments, ensuring scalability, performance, and cost optimization.
- Windows Server: Strong understanding of Windows Server management and administration is required for handling on-premises infrastructure and ensuring high availability, security, and stability.
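For illustration only, here is a minimal Python sketch of the kind of Databricks workspace automation the bullet above implies. It assumes the databricks-sdk package and token-based authentication via environment variables; the team's actual tooling, workspaces, and health checks may differ.

```python
"""Minimal sketch: report Databricks clusters that are not running.

Assumptions (not stated in the job posting): the databricks-sdk package
is installed and DATABRICKS_HOST / DATABRICKS_TOKEN are available in the
environment or via a configuration profile.
"""
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # reads workspace URL and credentials from the environment

# List all clusters in the workspace and flag any that are not RUNNING.
for cluster in w.clusters.list():
    state = cluster.state.value if cluster.state else "UNKNOWN"
    if state != "RUNNING":
        print(f"{cluster.cluster_name}: {state}")
```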
Development & DevOps Tools:
- CI/CD Tools: Proficiency in GitHub, GitLab, JIRA, Jenkins, and related automation tools is required for managing CI/CD pipelines and ensuring consistent deployment processes.
- Infrastructure as Code (IaC) Tools: Proficiency in Terraform, plus familiarity with configuration management tools such as Ansible, Puppet, or Chef, for managing infrastructure configurations and ensuring consistent deployments across cloud platforms (see the sketch after this list).
- Monitoring Tools: Familiarity with monitoring and alerting tools and frameworks is required for ensuring system health, performance, and reliability across all platforms.
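For illustration only, a minimal Python sketch of a CI pipeline step that drives Terraform, as referenced in the IaC bullet above. It assumes the Terraform CLI is installed and cloud credentials are supplied through the environment; the `infra/` directory and step layout are hypothetical.

```python
"""Minimal sketch: run Terraform init/plan/apply as a CI pipeline step.

Assumptions (not stated in the job posting): the terraform binary is on
PATH, provider credentials come from environment variables, and "infra/"
is a hypothetical working directory for the Terraform configuration.
"""
import subprocess
import sys

TF_DIR = "infra/"  # hypothetical path to the Terraform configuration


def run_terraform(args: list[str]) -> None:
    """Run one Terraform command and fail the step on a non-zero exit code."""
    result = subprocess.run(["terraform", *args], cwd=TF_DIR)
    if result.returncode != 0:
        sys.exit(result.returncode)


if __name__ == "__main__":
    run_terraform(["init", "-input=false"])                 # fetch providers and modules
    run_terraform(["plan", "-input=false", "-out=tfplan"])  # record the planned changes
    run_terraform(["apply", "-input=false", "tfplan"])      # apply the saved plan
```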
📝 Enhancement Note: This role requires a strong focus on Databricks, Terraform, and cloud platforms (Azure and AWS), with proficiency in managing data platforms and infrastructure.
👥 Team Culture & Values
DevOps Values:
- Innovation: Embrace continuous learning and improvement to drive technological advancements in data and analytics infrastructure.
- Effectiveness: Focus on delivering high-quality, scalable, and reliable solutions that meet business objectives and user needs.
- Collaboration: Work closely with multi-functional teams to optimize DevOps processes, infrastructure management, and data platform operations.
- Quality: Maintain high coding standards, thorough testing, and quality assurance practices to ensure the reliability and performance of data and analytics infrastructure.
Collaboration Style:
- Cross-functional Integration: Collaborate with data engineering, application development, and business analytics teams to support and optimize DevOps processes, infrastructure management, and data platform operations.
- Code Review Culture: Participate in regular code reviews to ensure code quality, knowledge sharing, and continuous learning.
- Knowledge Sharing: Contribute to a culture of knowledge sharing, technical mentoring, and continuous learning to enhance the skills and expertise of the DevOps team.
📝 Enhancement Note: Nasdaq's DevOps team values innovation, effectiveness, collaboration, and quality, with a strong focus on driving technological advancements in data and analytics infrastructure.
⚡ Challenges & Growth Opportunities
Technical Challenges:
- Data Platform Optimization: Optimize data platforms and infrastructure for Microsoft Power BI, SQL Server, Databricks, and Informatica to ensure scalability and performance.
- Cloud Migration: Lead deployments on both Azure and AWS platforms, focusing on hybrid cloud solutions and best practices for scalability and cost optimization.
- Incident Management: Initiate incident response and root cause analysis, developing solutions to prevent future occurrences and reduce downtime.
- Emerging Technologies: Stay up-to-date with emerging cloud technologies and trends, continuously enhancing your skills and expertise in Databricks, Terraform, and other relevant tools.
Learning & Development Opportunities:
- Technical Skill Development: Enhance your skills in Databricks, Terraform, and emerging cloud technologies through training, workshops, and online resources.
- Certification Programs: Pursue relevant certifications in DevOps, cloud platforms (Azure, AWS), or related technologies to demonstrate your expertise and commitment to professional development.
- Technical Mentorship: Seek mentorship opportunities from senior DevOps engineers and other technical experts to gain insights, guidance, and career growth advice.
📝 Enhancement Note: This role presents significant technical challenges and growth opportunities for enhancing your skills in Databricks, Terraform, and cloud platforms (Azure and AWS), with a focus on data and analytics infrastructure.
💡 Interview Preparation
Technical Questions:
- Databricks & Terraform: Demonstrate your proficiency in Databricks and Terraform through hands-on examples, architecture decisions, and optimization techniques.
- Cloud Platforms (Azure & AWS): Showcase your expertise in managing hybrid cloud deployments, ensuring scalability, performance, and cost optimization.
- CI/CD Pipeline Management: Explain your approach to managing CI/CD pipelines, infrastructure as code (IaC), and deployment processes.
Company & Culture Questions:
- Nasdaq's DevOps Culture: Research Nasdaq's DevOps culture, focusing on innovation, effectiveness, collaboration, and quality, and discuss how your skills and experiences align with these values.
- Data Governance & Compliance: Demonstrate your understanding of data governance, access control, and security compliance, highlighting your experience with Profisee MDM or similar tools.
- User Impact & Performance Optimization: Explain your approach to measuring user impact and optimizing performance for data and analytics infrastructure, focusing on user-centric design and continuous improvement.
Portfolio Presentation Strategy:
- Live Demonstration: Present live demonstrations of your projects, showcasing your proficiency in Databricks, Terraform, and other relevant technologies.
- Code Walkthrough: Provide detailed code walkthroughs, highlighting your commitment to code quality, commenting, and documentation standards.
- Architecture Decision Reasoning: Explain your architecture decisions, emphasizing your understanding of scalability, performance, and cost optimization for data and analytics infrastructure.
📝 Enhancement Note: Prepare for technical questions focusing on Databricks, Terraform, and cloud platforms (Azure and AWS), as well as company and culture questions related to Nasdaq's DevOps culture, data governance, and user impact.
📌 Application Steps
To apply for this Specialist DevOps Engineer - Databricks & Terraform position:
- Customize Your Portfolio: Tailor your portfolio to showcase your experience with Databricks, Terraform, and other relevant technologies, highlighting your proficiency in managing data platforms and infrastructure.
- Optimize Your Resume: Highlight your relevant experience, skills, and achievements in DevOps, Databricks, Terraform, and cloud platforms (Azure and AWS) to optimize your resume for this role.
- Prepare for Technical Challenges: Familiarize yourself with common DevOps challenges and exercises, focusing on Databricks, Terraform, and cloud platform (Azure and AWS) proficiency.
- Research Nasdaq: Learn about Nasdaq's business, industry, and culture to ensure a strong understanding of the company and its values.
⚠️ Important Notice: This enhanced job description includes AI-generated insights and DevOps/server administration industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.
Application Requirements
Candidates should have 6+ years of experience in DevOps with a strong focus on Databricks and Terraform. Proficiency in CI/CD tools, cloud platforms, and scripting languages is essential, along with strong analytical and communication skills.