Experience: 6+ years
Start Date: As soon as possible

Join us to design and optimize data workflows leveraging Databricks, PySpark, and modern cloud technologies.

Your Role
As a Senior Data Engineer, you will work on building and managing scalable data pipelines, orchestrating jobs, and configuring clusters in Databricks. You'll collaborate with cross-functional teams to ensure efficient ETL processes, implement GitOps practices, and contribute to automation and CI/CD for ML workflows.

In this role, you will:
- Configure and manage Databricks clusters, pipelines, and job orchestration.
- Develop ETL and data transformation workflows using PySpark.
- Implement GitOps principles for version control and deployment.
- Collaborate with teams to integrate data solutions into ML workflows.
- Optimize performance and ensure reliability of data processes.

Your Profile
- 6+ years of experience in data engineering.
- Strong hands-on experience with Databricks (cluster setup, pipelines, orchestration).
- Proficiency in PySpark for ETL and data transformations.
- Understanding of GitOps practices.

Nice to have:
- Experience building CI/CD pipelines for ML workflows.
- Working knowledge of Azure ML services (model registry, jobs, batch endpoints).
- Familiarity with infrastructure automation using Bicep or CloudFormation.

Key Skills
Databricks | PySpark | ETL | GitOps | CI/CD | Azure ML | Infrastructure Automation (Bicep/CloudFormation)