Description:
This position works with the business team to understand data requirements and collaborates with source systems. Responsibilities include extracting data from source systems, loading it into Hadoop using Spark, investigating and resolving data and system issues, and providing ongoing support.

Qualifications and Experience:
- Proven experience designing and building data lakes on Hadoop using the latest technologies.
- Experience building integration layers to consolidate data from multiple systems into Hadoop.
- Experience in architecture and solution design for analytics on Hadoop.
- Experience building data pipelines for data ingestion into Hadoop.
- Working experience with data in multiple formats (CSV, TXT, XML, JSON).
- 10+ years of experience developing DW/ETL applications.
- 5+ years of experience within the Hadoop ecosystem.
- Hands-on experience with Hive, Spark, Python, and Hadoop libraries.
- Experience establishing data lakes in Hadoop is essential.
- Experience with ETL processes and data aggregation techniques.
- Proficiency with Sqoop, Flume, and Spark.
- Excellent SQL skills.
- Experience extracting data from RDBMS and SAS applications using APIs.

Must-Have Skills:
- 10+ years in DW/ETL application development.
- 5+ years in the Hadoop ecosystem.
- Knowledge of Hive, Spark, Python, and Hadoop libraries.
- Experience building data lakes in Hadoop.
- Experience with Sqoop, Flume, and Spark.
- Excellent SQL skills.

Nice-to-Have Skills:
- Visualization skills and Power BI experience.

Roles and Responsibilities:
- Collaborate with source systems to design and develop ETL processes for data loading.
- Troubleshoot issues and work with support teams to resolve them.

Ethereum Technologies LLC is an equal opportunity employer committed to diversity and inclusion and complies with all applicable employment laws.