Big Data Engineer Role
We seek an experienced engineer to build and maintain large-scale data processing systems using Spark and Kafka.
Your Key Responsibilities:
* Develop and maintain efficient data processing pipelines.
* Ingest tables from Kafka into our data lake using a variety of ingestion techniques.
* Run jobs for ingesting, curating, and extracting data.
* Update schemas and configurations to meet business requirements and ensure data quality.
* Collaborate with cross-functional teams to achieve successful data integration.
* Troubleshoot and resolve bugs in the data processing pipelines.
* Monitor performance and troubleshoot issues using logging and monitoring tools.
* Document best practices and develop standards for data processing pipelines.
* Stay up-to-date with emerging technologies and industry trends in big data processing and analytics.
Requirements:
* Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
* 3+ years of experience in data engineering, software development, or a related field.
* Strong programming skills in Scala, Python, or Java.
* Expertise in Spark, Kafka, and big data processing concepts.
* Experience with data storage solutions and data serialization formats.
Benefits:
We offer a competitive compensation package, comprehensive benefits, opportunities for growth and advancement within the company, challenging projects, ongoing skill development, and the chance to make a real impact on our business.
Your Essential Attributes:
* Strong communication, problem-solving, and collaboration abilities.
* Ability to work independently and in a team-oriented environment.