In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform.
You’ll also be:
To take on this role, you’ll need a background in software engineering, software design and architecture, and an understanding of how your area of expertise supports our customers. You’ll need at least six years of experience in PySpark and data engineering, with strong expertise in ETL design, data quality testing, cleansing, monitoring, sourcing and exploration.
You'll need experience working with AWS services such as Glue, Lambda, EMR (or Spark) and S3. You'll also need programming proficiency in Python, Spark and SQL, as well as a working knowledge of Large Language Models (LLMs) and AI-assisted development tools such as GitHub Copilot and GitLab Duo, along with practical prompt engineering skills.
You’ll also need: