Principal Data Engineer (AWS / Airflow / Spark)

  • 24 Sep 2025
  • Data Engineering, Cloud Data Platforms & DevOps
  • Sydney
  • Contract or Temp
  • Initial 3-month contract, likely to extend
  • Work on a co-innovation project with major cloud provider
  • Fully remote in Australia, flexible work
Principal Data Engineer (AWS / Airflow / Spark)
Australia Remote / Hybrid
Initial 3-month contract (immediate start)
$1000-1200 per day inc. super, depending on experience
  
A leading global tech company is seeking a hands-on Principal Data Engineer to optimise large-scale batch and streaming data processing systems that handle billions of records daily. You’ll join a small, highly skilled team collaborating on a co-innovation project with a major cloud provider, aimed at reducing cost and usage data latency for faster decision-making and greater efficiency.
  
Responsibilities for the Principal Data Engineer (AWS / Airflow / Spark) include:
  • Design, scope, and deliver scalable data solutions end-to-end.
  • Enhance data quality, optimise queries, and improve existing data models.
  • Build and maintain production-level data pipelines using Airflow, dbt, Apache Spark, Flink, Hive, Kafka, and Databricks.
  • Work with AWS services (S3, Kinesis, Glue, CloudFormation, SNS) to operate large-scale distributed systems.
  • Write performant Python and SQL code with software engineering best practices (Agile, CI/CD, TDD).
  • Collaborate with senior engineers to drive innovation and coach peers through code reviews.
  
Required experience for the Principal Data Engineer (AWS / Airflow / Spark) includes:
  • 8+ years as a Senior Data Engineer or similar, with 10+ years of progressive experience in building scalable datasets and reliable pipelines.
  • Strong expertise in AWS, data processing frameworks, and modern data warehousing.
  • Skilled in Python, SQL, and optimising data models for scale.
  • Excellent communicator and problem-solver with a pragmatic approach.
  • Bachelor’s in Computer Science or related field (or equivalent experience).
  
Nice-to-have experience for the Principal Data Engineer (AWS / Airflow / Spark) includes:
  • Experience building autonomous cost anomaly systems or FinOps tools.
  • Familiarity with event-driven architectures and real-time analytics.
  
Why Join?
  • Work on a high-impact, co-innovation project with a major cloud provider.
  • Collaborate with a world-class engineering team in a supportive, agile environment.
  • Contribute to cutting-edge FinOps initiatives and scalable data solutions.
  
Keywords: SQL / Python / AWS / Kafka / Spark / dbt / Airflow / Flink / Data Engineering / Databricks / CI/CD / Software Engineering

Olivia Ferris

Senior Consultant