Melbourne-based permanent or contract position.
- Hybrid working: a maximum of 3 days in the office.
- Great organisation to work for.
- Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics).
- Ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
- Hands-on experience developing SQL scripts for automation.
- Responsible for estimating cluster size, and for monitoring and troubleshooting Spark clusters on Databricks.
- Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors and tasks, deployment modes, the execution hierarchy, fault tolerance, and collection.
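The responsibilities above follow the classic extract-transform-load shape. As a rough illustration of that shape only (all names and data here are invented; in this role the work would be done with Azure Data Factory pipelines and PySpark jobs on Databricks, not plain Python):

```python
# Minimal ETL sketch in plain Python. Everything here is illustrative:
# in practice, "extract" would read from a source system via Azure Data
# Factory, "transform" would be Spark SQL / DataFrame operations on a
# Databricks cluster, and "load" would write to Azure SQL or Data Lake.
import csv
import io


def extract(raw_csv: str) -> list:
    """Extract: read rows from a source system (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list) -> list:
    """Transform: cleanse and reshape (drop rows with missing amounts)."""
    return [
        {"id": int(r["id"]), "amount": round(float(r["amount"]), 2)}
        for r in rows
        if r["amount"]
    ]


def load(rows: list, sink: list) -> None:
    """Load: append to the target store (a list standing in for the sink)."""
    sink.extend(rows)


source = "id,amount\n1,10.50\n2,\n3,7.2\n"
sink = []
load(transform(extract(source)), sink)
print(sink)  # → [{'id': 1, 'amount': 10.5}, {'id': 3, 'amount': 7.2}]
```

The same three-stage structure holds regardless of scale; Spark's value is running the transform stage in parallel across executors.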
**Please note: you must have full Australian working rights and be willing to work from the Melbourne office.**