Must-Have
* Strong Data Engineering skills using Azure and PySpark (or Databricks, or Hadoop/Spark with Java/Scala)
* Experience in Azure Data Factory and other Azure services
* Experience loading and transforming data using Spark or other big data technologies (Hive, Kafka, HBase, Storm); see the sketch after this list
* Strong SQL knowledge
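
For context on the load-and-transform work mentioned above, below is a minimal PySpark sketch of that kind of job. The file paths, column names (customer_id, amount), and the aggregation are illustrative assumptions, not details taken from this role description.

```python
# Illustrative only: a minimal PySpark load-and-transform job of the kind
# this role involves. Paths, columns, and the aggregation are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sample-etl").getOrCreate()

# Load raw data (here a CSV with a header; in practice the source might be
# Azure Data Lake Storage, Hive, or a Kafka stream).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, drop bad rows, aggregate per customer.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Write the curated result as Parquet for downstream consumption.
orders.write.mode("overwrite").parquet("/data/curated/customer_totals")

spark.stop()
```

The same pattern carries over to Databricks or an Azure Data Factory mapping data flow; only the orchestration and storage endpoints change.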
Good-to-Have
* ETL process experience on any cloud or on-premises big data platform
Desirable Experience and Details
* Desired Experience Range: 5 – 10 years
* Location: Perth, Australia