Job Requirements
1. 5+ years' experience working in Data Engineering and Warehousing.
2. 3-5 years’ experience integrating data into analytical platforms.
3. Experience in ingestion technologies (e.g. Sqoop, Flume), processing technologies (Spark/Scala), and storage (e.g. HDFS, HBase, Hive).
4. Experience in data profiling, source-target mappings, ETL development, SQL optimization, testing, and implementation.
5. Expertise in streaming frameworks (Kafka/Spark Streaming/Storm) is essential.
6. Experience in building applications based on Spark SQL and the Spark DataFrame API.
7. Experience managing structured and unstructured data types.
8. Experience in requirements engineering, solution architecture, design, and development/deployment.
9. Experience in creating big data or analytics IT solutions.
10. Track record of implementing databases, data-access middleware, and high-volume batch and (near) real-time processing.