Job Overview:
We are seeking a skilled professional to join our organization as a Senior Data Engineer. This role involves designing and implementing large-scale data pipelines on Azure, ensuring efficient data transformation and loading, and building scalable, well-structured data solutions.
Key Responsibilities:
* Data Pipeline Development: Develop, optimize, and maintain data pipelines using Python and SQL within Azure Databricks Notebooks.
* ETL/ELT Workflows: Design and implement ETL/ELT workflows in Azure Data Factory to ensure efficient data transformation and loading.
* Data Modelling: Apply Kimball dimensional modelling and Medallion architecture best practices for scalable and structured data solutions.
* Collaboration: Collaborate with team members and stakeholders to understand data requirements and translate them into technical solutions.
* CI/CD Pipelines: Implement and maintain CI/CD pipelines using Azure DevOps, ensuring automated deployments and version control with Git.
* Job Monitoring and Troubleshooting: Monitor, troubleshoot, and optimize Databricks jobs and queries for performance and efficiency.
* Dataset Provision: Deliver well-structured, high-quality datasets for reporting and analytics.
* Compliance: Ensure compliance with data governance, security, and privacy best practices.
Skill Set:
The ideal candidate will have:
* Strong proficiency in Python and advanced SQL skills
* Hands-on experience with Azure Databricks
* Expertise in Azure Data Factory
* Knowledge of Kimball dimensional modelling and Medallion architecture