• Design, develop, and optimize big data pipelines using Hadoop, Hive, Spark, and Iceberg.
• Implement and maintain containerized applications using Kubernetes, Rancher, and Helm.
• Work on data lake architectures and ensure efficient data storage, retrieval, and processing.
• Manage and monitor Kubernetes clusters to ensure high availability and performance.
• Collaborate with cross-functional teams to integrate big data solutions with cloud platforms.
• Implement best practices for CI/CD automation, security, and performance tuning.
• Troubleshoot big data processing issues and optimize job execution performance.
• Document processes, architectures, and workflows for internal knowledge sharing.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration skills.
• A good team player with a proactive approach to learning and knowledge sharing.
• Experience in big data and cloud technologies.
• Hands-on experience with Hadoop, Hive, Spark, and Iceberg.
• Strong expertise in Kubernetes, Rancher, and Helm for container orchestration.
• Experience working with cloud platforms (AWS, Azure, GCP) is a plus.
• Proficiency in scripting and automation using Python, Shell, or similar languages.
• Familiarity with CI/CD pipelines and DevOps practices.
• Exposure to security best practices in cloud and big data environments.