Key Responsibilities
- CI/CD Pipeline Management: Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and Azure DevOps to automate software deployment and integration processes.
- Infrastructure Automation: Work with Kubernetes to manage containerized applications and deploy them across various environments.
- Azure Integration: Utilize Azure Data Factory for data orchestration and integration tasks, ensuring seamless data flow and management across systems.
- Cloud Management: Deploy, monitor, and maintain cloud services on Azure, ensuring the scalability, security, and high availability of applications.
- Collaboration: Collaborate with development teams to streamline development and deployment processes, ensuring faster and more efficient release cycles.
- Scripting & Automation: Write Python scripts to automate routine tasks and use PySpark to process large datasets and perform complex data transformations (see the sketch after this list).
- Data Management: Ensure proper data integration and management strategies using SQL, Azure Data Factory, and Databricks to handle data pipelines and analytics workflows.
- Monitoring & Troubleshooting: Monitor system health and application performance, and quickly troubleshoot issues across CI/CD pipelines, containerized environments, and cloud infrastructure.
- Documentation: Maintain clear and up-to-date documentation for deployed systems, pipelines, and infrastructure processes.
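For illustration only, the Python/PySpark automation work described above might resemble the minimal sketch below. The file paths, column names, and aggregation logic are hypothetical placeholders and are not part of the role description.

```python
# Minimal PySpark sketch: read raw CSV data, apply a simple transformation,
# and write the result as Parquet. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-rollup").getOrCreate()

# Load raw data (schema inferred for brevity; a fixed schema is safer in production).
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Example transformation: total order value per customer per day.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist the result for downstream analytics workflows.
daily_totals.write.mode("overwrite").parquet("/data/curated/daily_order_totals")

spark.stop()
```

In practice, a job like this would typically be scheduled or triggered through Azure Data Factory or a Databricks workflow rather than run by hand.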
Required Skills & Qualifications
- Experience: 2-3 years of hands-on experience working as a DevOps Engineer with a focus on Azure.
- CI/CD Pipeline: Experience with Jenkins, GitHub Actions, or Azure DevOps for setting up and maintaining continuous integration and delivery pipelines.
- Kubernetes: Strong hands-on experience with Kubernetes for deploying and managing containerized applications.
- Azure Data Factory: Experience with Azure Data Factory for data integration, orchestration, and ETL processes.
- Programming Languages: Proficiency in Python, including scripting and automation, as well as experience with PySpark for big data processing.
- SQL: Solid understanding and practical experience with SQL, including querying and managing relational databases.
- Databricks: Familiarity with Databricks for data engineering and analytics workflows in cloud environments.
- Version Control: Experience working with Git and GitHub for version control and collaborative development.
- Cloud Platforms: Practical experience in managing and deploying applications on Microsoft Azure.
- Problem-Solving: Strong troubleshooting and problem-solving skills with the ability to handle production issues in a fast-paced environment.
Skills: CI/CD pipeline management, Jenkins, GitHub Actions, Azure DevOps, Kubernetes, Python, PySpark, SQL, Databricks, Azure Data Factory, Azure, Git, GitHub, DevOps