Design, develop, and maintain scalable and efficient data pipelines, ETL processes, and data integration solutions.
Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
Implement and maintain CI/CD pipelines for automating the deployment and testing of data solutions.
Optimize and tune data workflows and processes to ensure high performance and reliability.
Monitor and troubleshoot data-related issues, perform root cause analysis, and implement corrective actions.
Maintain documentation of data infrastructure, processes, and workflows.
Stay up-to-date with industry trends and emerging technologies in data engineering and cloud computing.
Apply a strong understanding of the Agile/DevOps operating model, working in a fast-paced Agile environment, delivering features in short timeframes, and using automation wherever possible.
Manage virtual server clusters and containers (Docker, Kubernetes, Ansible).
Deploy, automate, manage, and maintain cloud-based production systems.
Support and optimize the infrastructure and toolchain, including the build systems that support DevOps CI/CD.
Optimize the support process and lead the implementation of automation tools to reduce manual support tasks.
Your Profile
Proficiency in at least one of the following cloud platforms: Azure, AWS, or GCP.
Minimum of 4 years of relevant experience in data engineering, including strong experience with SQL, Python, and PySpark.
Proven experience in DevOps for cloud solutions, CI/CD, and automation.
Experience with a job scheduling tool (e.g., Control-M, AutoSys, Airflow, or Luigi) would be a big plus.
Experience with complex ETL mappings, CI/CD pipelines, DevOps, and deployment tools (e.g., Docker, Jenkins) would be a big plus.
Minimum of 3 years of experience working in multi-client environments, including hands-on work with AWS Lambda and platforms such as ServiceNow, Zendesk, or other CRM platforms; additional experience with cloud platforms is a plus.