Job Title: Hadoop / Spark Data Engineer
Location: Austin, TX
- Should have experience working with Spark and Hadoop clusters with thousands of nodes.
- Design and develop Spark and Hadoop ETL pipelines to process terabytes of data in various In
- Develop automated scripts to check data quality.
- Good SQL skills.
- Develop various transformations for stored procedures.
- Develop Spark applications with Cassandra lookups to optimize joins.
- Optimize and tune Spark queries, Hive queries, and MapReduce jobs.
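The automated data-quality checks mentioned above can be sketched as a small script. This is a minimal illustration in plain Python over rows represented as dicts; in a real pipeline the same checks would run as Spark DataFrame aggregations over the full dataset, and the column names (`id`, `amount`) are hypothetical.

```python
# Minimal sketch of an automated data-quality check (hypothetical columns).
# In production these checks would run as Spark DataFrame operations rather
# than over an in-memory list of rows.

def check_quality(rows, required_cols, key_col):
    """Report null counts per required column and duplicate key values."""
    null_counts = {col: 0 for col in required_cols}
    seen, duplicates = set(), set()
    for row in rows:
        for col in required_cols:
            if row.get(col) is None:
                null_counts[col] += 1
        key = row.get(key_col)
        if key in seen:
            duplicates.add(key)
        seen.add(key)
    return {"null_counts": null_counts, "duplicate_keys": sorted(duplicates)}

# Example usage with toy rows: one null amount, one duplicated id.
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": 5.0},
]
report = check_quality(rows, required_cols=["id", "amount"], key_col="id")
print(report)
```

A Spark version would express the same logic with `count`/`isNull` aggregations and a `groupBy(key).count()` to surface duplicates, so the checks scale past memory.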