Big Data Architect
We are seeking a highly skilled Big Data Architect to design and develop scalable data pipelines using Azure Databricks, Delta Lake, and PySpark.
* Build and optimize ETL/ELT workflows across Azure Data Lake Storage (ADLS) and other data sources.
* Implement Delta Lake for ACID transactions, versioning, and high-performance data processing.
* Integrate Databricks with Azure services such as Azure Data Factory, Azure Synapse, ADLS, Key Vault, and Event Hubs.
* Develop and maintain PySpark notebooks, jobs, and workflows for batch and streaming data.
* Ensure data quality, reliability, and governance, including schema enforcement and validation.
* Monitor and optimize Databricks clusters for cost efficiency and performance.
* Implement CI/CD pipelines for Databricks workflows using Azure DevOps or GitHub Actions.
* Collaborate with data scientists, analysts, and business teams to deliver consumable datasets.
* Stay current with the Azure analytics ecosystem, new Databricks features, and best practices.
Requirements
* Key Skills: Azure Databricks, Delta Lake, PySpark, Azure Data Factory, Azure Synapse, ADLS, Key Vault, Event Hubs, CI/CD pipelines.
* Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
* Work Experience: At least 3 years of experience designing and developing big data pipelines.
What We Offer
* Career Growth Opportunities: Work with a dynamic team to design and implement innovative solutions.
* Professional Development: Stay updated on the latest industry trends and best practices through training and workshops.
* Competitive Salary: A fair compensation package that reflects your skills and experience.
About Us
We are a leading technology company dedicated to delivering cutting-edge solutions to our clients. Our team is passionate about leveraging big data analytics to drive business growth and success.