AWS Data Engineer Developer

Job description

Syntax is a leading Managed Cloud Provider for Mission Critical Enterprise Applications and has been providing comprehensive technology solutions to businesses of all sizes since 1972. Syntax has undisputed strength to implement and manage ERP deployments (Oracle, SAP) in a secure and resilient private, public or hybrid cloud. With strong technical and functional consulting services, and world-class monitoring and automation, Syntax serves some of North America’s largest corporations across a diverse range of industries. Syntax has offices worldwide, and partners with Oracle, SAP, AWS, Microsoft, IBM and other global technology leaders.

Position Summary

We’re looking for a passionate data engineer who is well versed in AWS cloud technologies for ETL modeling, data warehouse and data lake design and building, and data movement to join our Data Analytics team! This individual will be responsible for developing repeatable AWS data lake and analytics solutions for deployment to customers. The position can be remote or in the Atlanta, GA area. A minimal amount of travel may be required to attend conferences, team gatherings, or occasional customer visits.

Applicants for this role must be proficient data engineers already experienced in AWS cloud technologies such as Python, Spark/PySpark, Glue, Redshift, S3, RDS, and data movement (DMS/Kinesis). Data visualization skills in Tableau, Power BI, or AWS QuickSight are a plus, but not required. Additionally, experience with AWS ML technologies such as SageMaker or Amazon Forecast is preferred. This team works primarily with AWS data analytics technologies on existing projects and customer support, but will also need to keep up with the ever-evolving landscape of AWS analytics services.

Responsibilities

  • Serve as a primary architect in developing our data lake and analytics solutions
  • Work with our Analytics team to ensure our solutions are scalable and continuously improving
  • Develop and implement ML solutions using Amazon Forecast and SageMaker

Qualifications

  • Minimum of three years of hands-on AWS analytics experience
  • Proficiency with Python, Spark/PySpark, Glue, Redshift, S3, RDS, and data movement (DMS/Kinesis)
  • ML experience with AWS SageMaker and/or Amazon Forecast
  • Well versed in the best practices of the AWS Well-Architected Framework, including the Analytics Lens
  • AWS Certified Cloud Practitioner certification
  • AWS Certified Data Analytics – Specialty certification
  • Strong troubleshooting skills and the ability to systematically break down problems to resolve issues and reach solutions
  • Prolific documentation author
  • Accountable for driving deliverables to completion
  • Exposure to cloud-based and SaaS data warehouse and data lake solutions
  • Experience with JD Edwards and SAP as data sources is a plus, but not required
  • An ethos of always striving for improvement and growth, and a desire to flourish in an engaging, creative, hard-working, fun-loving corporate culture
  • Willingness to continually learn new technologies and approaches, push beyond their current skill set, experiment, and not be afraid to make mistakes
  • The most successful candidates will regularly employ the RTFM, GIYF, and JFGI approaches to learning and problem solving; willingness to teach colleagues and share knowledge is mandatory in our work environment
You must be legally entitled to work in Canada and/or the U.S. We are unable to sponsor visas at this time.

