Job Title: Data Engineer
About the Role
We are seeking a skilled Data Engineer to join our team. In this role, you will design, develop, and maintain data pipelines using dbt, Kafka, and Snowflake.
Key Responsibilities
* Data Pipeline Development: Develop robust ETL/ELT pipelines using dbt, Kafka, and change data capture (CDC) mechanisms (a brief illustrative sketch follows this list).
* Data Warehouse Management: Design and evolve our Snowflake-based data warehouse with a strong focus on dimensional modeling and performance optimization.
* Automation and Deployment: Automate infrastructure and deployments using Terraform and manage workflows in a cloud-native environment.
* Data Integration: Integrate data from multiple sources (internal databases, APIs, and third-party tools) with best practices in scalability and security.
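To give candidates a concrete feel for the pipeline work described above, here is a minimal sketch of a CDC-style Kafka consumer in Python. The topic name, broker address, consumer group, and Debezium-style event envelope are all hypothetical placeholders, not our actual setup; in production the rows would be staged into Snowflake rather than printed.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker; real values depend on the deployment.
consumer = KafkaConsumer(
    "orders.cdc.events",                   # hypothetical CDC topic name
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="warehouse-loader",           # hypothetical consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A Debezium-style envelope carries the new row state under "after";
    # here we simply print it instead of loading it into the warehouse.
    row = event.get("after", event)
    print(f"offset={message.offset} row={row}")
```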
Requirements
* Experience: 2–3 years of experience in data engineering or backend development roles with strong data pipeline exposure.
* Technical Skills: Advanced SQL skills with proven experience in dimensional modeling and analytical database performance tuning.
* Tools and Technologies: Experience with Snowflake (or other columnar data warehouses) and data transformation tools like dbt.
* Data Streaming: Solid understanding of data streaming concepts and tools such as Apache Kafka and Flink.
Nice to Have
* Event-Driven Architecture: Experience working with event-driven architectures and CDC tools like Debezium.
* Orchestration Tools: Familiarity with orchestration tools (Airflow, Dagster) or alternative job schedulers (a minimal scheduling sketch follows this list).
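As a rough illustration of the orchestration work mentioned above, below is a minimal Airflow DAG sketch. The DAG id, schedule, and dbt project path are hypothetical placeholders; the point is simply the shape of a scheduled run-then-test pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal daily pipeline: build dbt models, then run dbt tests.
# The dbt project directory below is a hypothetical placeholder.
with DAG(
    dag_id="daily_warehouse_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/warehouse",
    )
    dbt_run >> dbt_test  # build models before testing them
```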