Fuel Cycle is currently seeking a Cloud DBA – Data Engineer to join our Research and Development team. As a Cloud DBA – Data Engineer, you will be a vital member of our enterprise data analytics team, responsible for managing, optimizing, and monitoring data retrieval, storage, and distribution throughout our organization. Our ideal candidate is a Database Administrator who has continued to grow into a Data Engineer!
Fuel Cycle is the leading all-in-one experience management cloud platform. We help the world’s most customer-centric brands, such as Google, Hulu, Facebook, and Amazon, engage directly with their consumers and capture actionable insights through our online communities. We were recently recognized on the Inc. 5000 list of Fastest-Growing Private Companies, ranked #6 Best Technology Provider in the 2021 GRIT report, and named one of LA's Best Places to Work for 2021 by BuiltinLA. At Fuel Cycle we foster a culture of customer-obsessed individuals who empower each other to work passionately and deliver exceptional impact for our customers.
- Collaborate with the DevOps and Operations teams to design and build scalable, robust data management systems and implement disaster recovery procedures using AWS Cloud services.
- Maintain a comprehensive understanding of cloud infrastructure and related deployment processes.
- Work with application engineers and provide guidance on choosing database technologies and patterns that are optimal for the application and business needs.
- Review, design and develop data models in conjunction with the application development teams.
- Establish performance benchmarks, monitor and analyze system bottlenecks, and propose solutions to eliminate them.
- Monitor database performance and optimize via query tuning/indexing or disk I/O analysis.
- Design, build, and launch highly efficient and reliable data pipelines to move data across multiple platforms, including the Data Warehouse, online caches, and real-time systems.
- Create visualizations, reports, and dashboards based on business requirements.
- Advise on redesigning data structures as our company expands into new business opportunities.
- Design and build data transformations efficiently and reliably for different purposes (e.g., reporting, growth analysis, multi-dimensional analysis).
- Understand business requirements and convert them into technical solutions.
- Design, implement, and build pipelines to deliver data with a focus on data quality.
- Work with our Product Managers and Engineers to deliver data sets that are trusted and well understood.
- Build the tools and APIs necessary to support data-driven features throughout the product.
- Participate in Business Requirements and Functional Requirements meetings, identify gaps in requirements and drive discussion around appropriate solutions.
- Our ideal candidate has been a Database Administrator and grown into a Data Engineer.
- Minimum 3 years of Database Administration experience, including design, monitoring, performance tuning, and implementing backup and disaster recovery.
- Minimum 3 years of experience with data modeling, query design and optimization, stored procedures, functions, and scripting.
- Minimum 2 years of experience as a Data Engineer or in a similar role.
- Minimum 2 years of experience with data warehouse technical architectures, ETL/ELT, reporting/analytic tools, and scripting.
- Expertise in SQL and one or more programming/scripting languages, such as Java or Python.
- Experience working with SQL and NoSQL databases.
- Experience with data pipelines and data lakes/warehouses.
- Experience with AWS data services such as Athena, Redshift, Aurora, RDS, and Elasticsearch.
- Experience with BI and analytics tools such as Amazon QuickSight, Tableau, Looker, and Sisense.
- Experience designing and implementing reporting, dashboards, and visualizations for structured and unstructured data sets.
- Experience with big data technologies such as Hadoop, Spark, and Hive – a plus.
- Experience building real-time data pipelines using a streaming technology such as Kinesis or Kafka – a plus.
- Experience with third-party integration tooling such as Segment – a plus.
- Bachelor's degree in Computer Science, Computer Engineering, Applied Mathematics, Physics, Statistics, or equivalent work experience.