Description

This person will create and optimize dynamic monitoring with the goal of automation, and will need to hit the ground running with Terraform skills.

After the initial project (target release is end of year), the person will pivot to a more traditional data engineer-type role with ETL work. The team is willing to let the person learn and upskill as necessary.

Overview

Join our dynamic and innovative team as an Observability and Data Engineer, where you'll play a crucial role in enhancing the observability of our customer-facing portals' APIs. If you are passionate about creating efficient data pipelines, optimizing API performance, and utilizing cutting-edge technologies, this is the perfect opportunity for you. You'll collaborate with cross-functional teams to ensure our systems are finely tuned for optimal user experiences.

Job Responsibilities
  • Drive the enhancement of observability for our customer-facing portals' APIs, employing tools like Terraform to establish robust infra-as-code practices.
  • Develop and maintain large-scale data streams, enabling efficient data collection and analysis to identify performance bottlenecks and areas for improvement.
  • Leverage automation techniques, particularly Terraform, to orchestrate the creation of dynamic monitors in platforms like Datadog, ensuring comprehensive monitoring coverage.
  • Collaborate with colleagues to architect and implement ETL pipelines, utilizing Python, Spark, and SQL, to extract, transform, and load incoming API data for in-depth analysis.
  • Contribute to the optimization of API performance, working closely with development teams to implement data-driven improvements.
  • Stay up-to-date with industry best practices and emerging technologies, applying your expertise to continuously innovate our observability and data analysis processes.
  • Participate in code reviews, knowledge sharing, and mentorship to foster a collaborative and growth-oriented team environment.
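The ETL responsibilities above can be sketched as a minimal extract-transform-load pipeline. This is an illustrative example only, not the team's actual pipeline: the telemetry fields, table name, and aggregation are hypothetical, SQLite stands in for the warehouse, and plain Python stands in for Spark.

```python
import sqlite3

# Hypothetical raw API telemetry records; in practice these would be
# extracted from a data stream or object store such as S3.
RAW_RECORDS = [
    {"endpoint": "/orders", "status": 200, "latency_ms": 120},
    {"endpoint": "/orders", "status": 500, "latency_ms": 950},
    {"endpoint": "/users", "status": 200, "latency_ms": 45},
]

def extract():
    """Extract: fetch raw records (stand-in for reading an API data feed)."""
    return list(RAW_RECORDS)

def transform(records):
    """Transform: aggregate per-endpoint request counts, mean latency, errors."""
    stats = {}
    for r in records:
        s = stats.setdefault(
            r["endpoint"], {"count": 0, "latency_total": 0, "errors": 0}
        )
        s["count"] += 1
        s["latency_total"] += r["latency_ms"]
        s["errors"] += 1 if r["status"] >= 500 else 0
    return [
        (ep, s["count"], s["latency_total"] / s["count"], s["errors"])
        for ep, s in stats.items()
    ]

def load(rows, conn):
    """Load: write aggregates into a warehouse table (SQLite as a stand-in)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS api_stats "
        "(endpoint TEXT, requests INTEGER, avg_latency_ms REAL, errors INTEGER)"
    )
    conn.executemany("INSERT INTO api_stats VALUES (?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
result = {
    ep: (n, avg, err)
    for ep, n, avg, err in conn.execute("SELECT * FROM api_stats")
}
```

In a production setting the transform step would run in Spark (e.g., a PySpark `groupBy` with aggregations) and the load step would target a warehouse such as Redshift, but the extract/transform/load separation shown here is the same.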

Skills

Terraform, Datadog, data, API, Cloud Apps, Cloud DE, Infrastructure as Code, Observability and Monitoring, AWS, data engineer, data analysis, ETL pipeline, data warehouse, Python, SQL, Spark, Big Data, API optimization, DevOps, PySpark, Scala, Glue, Redshift, ETL development, Splunk, Grafana, GraphQL

Additional Skills & Qualifications

Extensive exposure to APIs and functional API knowledge.
  • Data Engineering and Analysis: Familiarity with data engineering concepts and tools. Experience with ETL pipelines, data transformation, and data warehousing using technologies like Python, Spark, and SQL will be beneficial for contributing to data analysis initiatives.
  • Programming Skills (Python): Solid programming skills in Python, with the ability to manipulate and analyze data efficiently. Experience in scripting and data processing within a Python environment will be advantageous.
  • Large-Scale Data Processing (e.g., Apache Spark): Understanding of large-scale data processing frameworks such as Apache Spark. Exposure to managing and analyzing data streams using these technologies would be a valuable asset.
  • SQL Proficiency: Familiarity with SQL for data querying and manipulation. Being able to extract insights from data through effective SQL queries will enhance your contribution to the team's analytical efforts.
  • API Performance Optimization: Prior experience in optimizing API performance and collaborating with development teams to implement improvements. Knowledge of performance tuning techniques will be helpful in this aspect.
  • Data Warehousing Concepts: Exposure to data warehousing principles and practices. Understanding how to structure and manage data for efficient analysis and reporting will be advantageous.
  • Problem-Solving Skills: Excellent problem-solving abilities, with a proactive approach to identifying and addressing technical challenges. A strong analytical mindset will contribute to the success of this role.
  • Communication and Collaboration: Strong communication skills to effectively work with cross-functional teams. The ability to convey technical concepts clearly and collaborate with colleagues from diverse backgrounds is important.

Terraform: 2-6 Years

Observability and Monitoring Tools (e.g., Datadog): 2-4 Years

Python, Spark, SQL, AWS tools (S3, Glue, Redshift): 0-2 Years

About TEKsystems

We're partners in transformation. We help clients activate ideas and solutions to take advantage of a new world of opportunity. We are a team of 80,000 strong, working with over 6,000 clients, including 80% of the Fortune 500, across North America, Europe and Asia. As an industry leader in Full-Stack Technology Services, Talent Services, and real-world application, we work with progressive leaders to drive change. That's the power of true partnership. TEKsystems is an Allegis Group company.

The company is an equal opportunity employer and will consider all applications without regards to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information or any characteristic protected by law.
