REQUIRED SKILLSET
- Development and support of performant data pipelines
- Hands-on experience with the AWS serverless stack (Lambda, S3, API Gateway, SQS/SNS, CloudWatch)
- Consolidation of different data sources (APIs, SQL databases, CSV, S3 and FTP files, etc.) into a centralized data store
- Agile development of Data Lake / Data Warehouse and ETL / EMR solutions (Glue, Hadoop, Spark, YARN, HUE, REST APIs, Hive)
- Redshift and BigQuery as data warehouse systems
- Experience with RDBMS (MySQL, Oracle) and SQL/DDL, and with NoSQL (DynamoDB)
- Development of message-queue-driven systems (Amazon SQS, SNS, and Lambda-based functions); a minimal sketch of this pattern appears at the end of this section
- Development of streaming systems (e.g., Kinesis Data Streams, Kinesis Data Firehose)
- Python and PySpark development
- Lambda development in Python and Node.js
- Ability to perform code reviews and technical design reviews
- Alteryx knowledge is mandatory
- AWS knowledge is mandatory

PRIMARY RESPONSIBILITIES
- Evaluate and recommend key technology elements for the Analytics platform
- Collaborate with Solution Architects to ensure alignment with the technical direction
- Develop big data pipelines
- Work in a team with other data engineers to release deliverables for the business
- Participate in an Agile squad (Scrum team)
- Perform proofs of concept with new technologies
- Provide on-call production support as needed

QUALIFICATIONS
- 6+ years of overall software development experience, with 3+ years on the AWS cloud platform
- Previous experience as a Software Engineer / Big Data Engineer
- Familiarity with JIRA and Confluence or similar tracking and project management tools
- Excellent organizational, verbal, and written communication skills, with the ability to present information in a clear, concise, and complete manner
- Self-starter with a creative, enthusiastic, innovative, and collaborative attitude
- Ability to prioritize tasks based on urgency and accuracy
- Strong team player with good communication and interpersonal skills
- Bachelor's degree in Computer Science, or equivalent education and experience, required
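For illustration only, the following is a minimal sketch of the SQS-driven Lambda pattern referenced under Required Skillset: a Python handler that persists each incoming message body to S3. The bucket name, key layout, and JSON message format are assumptions for the example, not part of this role's actual stack.

```python
# Hypothetical illustration of an SQS-triggered Lambda; names and formats are assumed.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Persist each SQS message body to S3 as a JSON object."""
    bucket = "example-data-lake-raw"  # assumed bucket name
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])  # assumes JSON message bodies
        key = f"ingest/{record['messageId']}.json"
        s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(body).encode("utf-8"))
    return {"processed": len(records)}
```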