Braintrust

Senior Data Engineer (Contract-to-hire preferred)

Job description

  • JOB TYPE: Freelance, Contract Position (no agencies/C2C - see notes below)
  • LOCATION: Remote - United States and Canada only
  • HOURLY RANGE: Our client is looking to pay $110-$140/hr
  • ESTIMATED DURATION: 40h/week - long-term, ongoing project

THE OPPORTUNITY

What you'll do:

  • Own the technical architecture, design, and implementation of big data platforms and business analytics solutions that empower stakeholders to meet their data-driven analytics and reporting needs

  • Develop a robust, sustainable plan for the data area going forward, including projecting storage requirements, procuring technology, and partnering with engineering on improvements to the data platform (experience with 100TB+ datasets highly desired).

  • Act as a subject matter expert to leadership for technical guidance, solution design, and best practices within the Sales and Service Organization

  • Build, schedule, and manage data movement from application origin through batch and streaming systems to make it available for key business decisions.

  • Mentor junior engineers and expose the team to new opportunities; keep current on big data and data visualization technology trends, evaluate new technologies, build proofs of concept, and make recommendations based on their merit

Who you are:

  • A true expert on big data, comfortable working with datasets of varying latency and size across disparate platforms.

  • Excited about unlocking the valuable data hidden in inaccessible raw tables and logs.

  • Attentive to detail, with a relentless focus on accuracy.

  • Excited to collaborate with partners in business reporting and engineering to determine the source of truth of key business metrics.

  • Familiar with distributed data storage systems and the tradeoffs inherent in each one.

What you’ll need:

  • 10+ years of relevant experience

  • Data modeling expertise, extensive experience with SQL and Python, and exposure to cloud computing (AWS, Azure, or Google Cloud; Google Cloud preferred).

  • Experience with one or more higher-level JVM-based data processing tools such as Beam, Dataflow, Spark, or Flink.

  • Experience designing and implementing different data warehousing technologies and approaches (e.g., RDBMS vs. NoSQL, Kimball vs. Inmon) and knowing when to apply each.

  • Experience scheduling, structuring, and owning data transformation jobs that span multiple systems and have high requirements for volume handled, duration, or timing.

  • Prior projects optimizing storage and access of high-volume heterogeneous data on distributed systems such as Hadoop, including familiarity with various data storage media and the tradeoffs of each

  • Prior data infrastructure experience in support of a service-driven organization is a plus but not essential.

  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Analytics, Mathematics, Statistics, Information Systems, Economics, Management, or another quantitative discipline, with a strong academic record

Top Skills

  • GCP (Google Cloud Platform)
  • MS SQL
  • Python

Other Skills

  • Azure
  • AWS

Apply Now!

