Software/Data Engineer with experience working in Databricks
Core Responsibilities
* Build, enhance, and maintain data pipelines in Databricks notebooks using Python and SQL (an illustrative sketch follows this list).
* Work with Delta Lake for structured and semi-structured data.
* Develop and automate Databricks Jobs / Workflows for scheduled processing.
* Transform and clean data across bronze/silver/gold layers, using dbt (data build tool) to model, test, and document datasets.
* Write modular, reusable SQL using Jinja templating in dbt.
* Use GitHub Copilot in VS Code to help scaffold code, write boilerplate, and speed up routine tasks.
* Monitor, troubleshoot, and improve job reliability, performance, and cost efficiency.
* Collaborate with analysts and data scientists to deliver clean, accessible data.
* Follow version control best practices using Git and Databricks Repos.
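For illustration only, a bronze-to-silver step of the kind described above might look roughly like the Databricks SQL sketch below. The table and column names (raw_events_bronze, events_silver, event_id, event_ts, payload) are hypothetical placeholders, not part of this role description.

```sql
-- Sketch: upsert cleaned bronze records into a silver Delta table.
-- All table/column names are hypothetical placeholders.
CREATE TABLE IF NOT EXISTS events_silver (
  event_id STRING,
  event_ts TIMESTAMP,
  payload  STRING
) USING DELTA;

MERGE INTO events_silver AS s
USING (
  -- Deduplicate on event_id, keeping the most recent record, and cast the timestamp.
  SELECT event_id, event_ts, payload
  FROM (
    SELECT
      event_id,
      CAST(event_ts AS TIMESTAMP) AS event_ts,
      payload,
      ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY event_ts DESC) AS rn
    FROM raw_events_bronze
    WHERE event_id IS NOT NULL
  ) deduped
  WHERE deduped.rn = 1
) AS b
ON s.event_id = b.event_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

Using MERGE rather than a full overwrite keeps the step idempotent, which matters when it runs on a schedule via Databricks Workflows.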
Required Skills
* 2–4 years in data engineering or analytics engineering roles.
* Strong Python (pandas, PySpark basics) and SQL.
* Hands-on experience with Databricks — notebooks, clusters, jobs, and Delta tables.
* Experience with dbt for transformations, including writing Jinja-based SQL macros (a brief sketch follows this list).
* Comfortable working in VS Code (or similar IDE) for development and version control.
* Experience using GitHub Copilot to support coding productivity, especially for boilerplate or repetitive tasks.
* Understanding of data lakehouse architecture (bronze / silver / gold layers).
* Familiarity with orchestration tools like Databricks Workflows, Azure Data Factory (ADF), or Airflow.
* Experience in a cloud environment (Azure, AWS, or GCP).
* Exposure to Unity Catalog or Delta Live Tables is a plus.
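As a rough illustration of the dbt/Jinja skills listed above, the sketch below shows a single silver-layer model. The model, source, and column names (stg_orders, bronze_orders, order_id, and so on) are hypothetical; config, ref, and the for loop are standard dbt Jinja constructs.

```sql
-- Sketch of a dbt model, e.g. models/silver/stg_orders.sql.
-- Table and column names are hypothetical placeholders.
{{ config(materialized='table') }}

select
    order_id,
    customer_id,
    cast(order_ts as timestamp) as order_ts,
    -- Jinja loop: emit one cleaned column per entry in the list.
    {% for col in ['channel', 'country'] %}
    lower(trim({{ col }})) as {{ col }}{{ "," if not loop.last }}
    {% endfor %}
from {{ ref('bronze_orders') }}
where order_id is not null
```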