Job Summary
Our organisation is seeking a seasoned data engineer to lead the development of scalable data pipelines that process large volumes of customer and transactional data reliably and efficiently.
We build resilient, efficient data ecosystems using modern technologies including PySpark, Delta Lake, Databricks, and Apache Airflow.
About This Opportunity:
* Lead the development of complex data pipelines that meet business needs, building strong relationships with cross-functional teams.
* Apply deep expertise in modern data engineering tools, including AWS, PySpark, SQL, Delta Lake, and Databricks, within distributed environments.
Key Responsibilities:
1. Design and implement robust data architectures, prioritising scalability, security, and performance.
2. Liaise with stakeholders to gather requirements, ensure seamless integration, and provide actionable insights.
3. Maintain high standards of data quality, adhering to organisational compliance regulations and best practices.
We Are Looking For:
* An experienced data engineer with 3+ years in industry and a proven track record of delivering high-quality data solutions.
* Strong hands-on expertise in AWS, PySpark, SQL, and Delta Lake, complemented by experience working in Databricks and distributed environments.
This is an excellent opportunity to join our team and contribute to the development of innovative data solutions that drive business growth and success.