Key Data Engineering Responsibilities
Primary Objectives:
* Design scalable data architectures using Azure Data Factory and PySpark/Databricks.
* Load, transform, and process large datasets with Spark/Hadoop technologies, ensuring high-quality results.
* Develop and maintain efficient data pipelines that meet performance standards.
Required Expertise:
* 5-10 years of professional experience in data engineering roles.
* Strong understanding of SQL principles and big data technologies.
* Experience with cloud-based platforms (Azure) and on-premises big data systems is highly desirable.
Preferred Skills:
* Proficiency in ETL processes across cloud and on-premises big data environments.
* Knowledge of Hive, Kafka, HBase, Spark, or Storm is a plus.
Key Qualifications:
* Excellent problem-solving skills.
* Ability to work both independently and as part of a team.