Senior Cloud Data Engineer

Job description

Job ID: 2200503

Location: RESTON, VA, US

Date Posted: 2022-01-10

Category: Information Technology

Subcategory: Big Data Engineer

Schedule: Full-time

Shift: Day Job

Travel: Yes, 10% of the time

Minimum Clearance Required: None

Clearance Level Must Be Able to Obtain: None

Potential for Remote Work: Yes


SAIC is seeking a Senior Cloud Data Engineer to work remotely anywhere within the U.S.


  • Lead key data engineering initiatives, including end-to-end design and deployment of cloud-native data products
  • Apply strong qualifications in Data Management, Data Engineering, Data Architecture, Data Governance, and Data Science
  • Communicate effectively among developers, architects, managers, and end users
  • Create a repository/library of data engineering capabilities
  • Experiment with cloud-native, open-source tools and advise on new tools to determine the optimal solution given the requirements dictated by the use case
  • Develop and implement patterns that leverage cloud-computing resources to deploy and optimize data engineering solutions on digital assets
  • Develop and implement DevOps and DataOps pipelines in AWS, Azure, and GCP to enable near-real-time delivery of data analytics to end consumers
  • Use advanced programming skills in Scala, Python, or any of the major languages to build software solutions and services, including robust data pipelines and dynamic systems
  • Write complex ETL (extract/transform/load) processes, preferably in Python; design database systems; and develop tools for real-time and offline analytic processing
  • Develop frameworks, standards, and reference materials for design and associated products
  • Mentor junior team members, provide technical advice, and consult and advise on additional efforts across multiple domains spanning broader product development
  • Collaborate with cloud engineering and data science teams to transform data and integrate algorithms and models into highly available production systems
  • Use in-depth knowledge of Hadoop architecture, Spark, HDFS, Hive, and PySpark, along with experience designing and optimizing queries, to build scalable, modular, and efficient data pipelines



Qualifications:

  • Bachelor's degree in engineering and nine (9) or more years of related experience, or Master's degree in engineering and seven (7) or more years of related experience; in lieu of a degree, equivalent experience will be considered
  • Five (5) or more years of experience as a Computer Programmer or Application Developer
  • Minimum of four (4) years of experience related to DevOps and/or DataOps, plus Data Engineering, Data Management, Analytics, and/or Machine Learning certificates
  • At least four (4) years of experience working with cloud-based solutions, various open-source frameworks, and analytic tools, and recognizing their weaknesses

Required Skills:

  • Strong experience developing DataOps pipelines using any one of the following:
    • Azure, using Azure Data Factory, Databricks, and Event Hubs
    • AWS, using EMR, Glue/Data Pipeline, Amazon Athena, and Kinesis
  • Deep knowledge of and experience utilizing Spark DataFrames, performance fine-tuning, and handling of large datasets
  • Experience with a variety of Database technologies: RDBMS, NoSQL, Search, Data Lakes, Time Series, etc.
  • Experience with cloud environments (AWS, Azure, GCP), Linux, automation tools (DevOps, Jenkins, GitLab CI/CD), and container management platforms
  • ETL/ELT or data warehousing experience working on the design, development, and documentation of large-scale data objects from disparate data sources
  • Must also have experience working with Bash shell scripts and automation using ARM templates, CloudFormation, and Terraform
  • Must have two (2) years of experience with the following: manipulating HDFS data using Python and PySpark; automating DataOps pipelines; manipulating relational data using Spark SQL, HiveQL, and other SQL dialects; and operating in an agile environment
  • Strong experience utilizing streaming data, specifically using Kafka

Desired Skills and Certifications:

  • Experience using Apache NiFi, Apache Accumulo, and Apache Airflow
  • AWS/GCP/Azure Data Engineer Certification

COVID Policy: Prospective and/or new employees are required to adhere to SAIC's vaccination policy. All SAIC employees must be fully vaccinated and must submit proof of vaccination on their first day of employment. Prospective or new employees may seek an exemption to the vaccination requirement at Contact Us and must have an approved exemption prior to the start of their employment. Where work is performed strictly at a customer site, customer-site vaccination requirements preempt SAIC's vaccination policy.

Target salary range: $165,001 - $175,000. The estimate displayed represents the typical salary range for this position based on experience and other factors.
SAIC® is a premier Fortune 500® technology integrator driving our nation's technology transformation. Our robust portfolio of offerings across the defense, space, civilian, and intelligence markets includes secure high-end solutions in engineering, digital, artificial intelligence, and mission solutions. Using our expertise and understanding of existing and emerging technologies, we integrate the best components from our own portfolio and our partner ecosystem to deliver innovative, effective, and efficient solutions that are critical to achieving our customers' missions.

We are more than 26,500 strong; driven by mission, united by purpose, and inspired by opportunities. SAIC is an Equal Opportunity Employer, fostering a respectful work culture based on diversity, equity, and inclusion that values all contributors. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $7.1 billion. For more information, visit
