The candidate should have strong working experience in big data technologies, including Spark, Hadoop, and Hive. They will also need experience working with Google Cloud Platform (GCP) and be able to build and deploy data pipelines on GCP.

Requirements:
- Experience in big data engineering
- Strong understanding of Spark
- GCP certified
- Experience with data modelling and data visualization
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills

Responsibilities:
- Design, develop, and deploy data pipelines on GCP
- Work with data analysts to build and maintain data models
- Automate data processing tasks
- Troubleshoot and optimize data pipelines
- Provide technical support to other members of the data engineering team