Job Description:
We are seeking a highly skilled and experienced Data Engineer to join our organization. The successful candidate will build real-time data pipelines with Kafka, develop ETL/ELT workflows in Python, and work with SQL in our Snowflake data warehouse, while driving the design and maintenance of our AWS architecture, fostering a data-driven culture, and ensuring compliance with relevant security governance, policies, and procedures.
Key Responsibilities:
* Work closely with key stakeholders to comprehend and fulfill complex data integration needs.
* Architect, design, develop, test, and implement scalable, robust, and efficient real-time data pipelines using Kafka.
* Develop and manage ETL/ELT processes and workflows using Python to ensure optimal data quality and efficiency.
* Craft and execute SQL queries, stored procedures, and views in the Snowflake data warehouse.
* Drive the design, development, and maintenance of our AWS architecture.
* Foster a data-driven culture and advocate for robust, scalable, and reliable data infrastructure.
The ideal candidate will have a Bachelor's degree in Computer Science, Mathematics, or a related field, or equivalent demonstrated experience in a similar role.
They should have 5+ years of experience in a data-centric role and proven experience with Kafka, Terraform, Buildkite, and AWS or similar services.
Strong Python coding skills, experience building and maintaining data pipelines, an understanding of data architecture and infrastructure-as-code (IaC) principles, and the ability to prioritize, multitask, and work to deadlines are essential.
Required Skills and Qualifications:
A strong background in data engineering, with experience building and maintaining large-scale data pipelines, is required.
Candidates should be proficient in programming languages such as Python, Java, and Scala, and have experience with cloud-based platforms such as AWS and Azure.
Knowledge of data storage solutions, including NoSQL databases and data warehouses, is also essential.
Benefits:
Competitive salary and benefits package, including health insurance, retirement plan, and paid time off.
Opportunities for professional growth and development, including training and mentorship programs.
Collaborative and dynamic work environment with a team of experienced professionals.
Other Requirements:
Experience working with DevOps tools and practices is a plus.
Ability to communicate effectively with technical and non-technical stakeholders.
Adaptability and flexibility in a fast-paced environment.