This is a remote position.
Fluid is a peer-to-peer truck-sharing platform that offers businesses and individuals a better way to rent vehicles. Fluid is rapidly scaling across the US with a proven technology platform that connects businesses and consumers with trucks more efficiently.
Fluid allows people and businesses to connect their vehicles to our platform via Fluid Connect, a hardware device that enables Fluid's mobile app to lock and unlock doors, track fuel and mileage, and immobilize the engine, making any vehicle easily rentable by other businesses and individuals. We enable businesses to scale up and down dynamically without taking on more overhead, and we enable owners to generate cash from their existing assets and from vehicles purchased as investments. The result is more efficient utilization of vehicles around the country.
Lead Data Engineer/Solutions Architect
We are looking for an experienced Data Engineer to help develop and manage data resources, implement new technologies and tooling that further enable reporting and analytics, and drive scalable data-sharing practices. You will own data environments, integrate new technologies, and oversee the development of new processes that support teams across the organization. In this role you will work on high-visibility, high-impact business insights projects used by executives. You will gather requirements through direct interaction with business, operations, and software development teams. You will track the performance of our resources and related capabilities, constantly evolving our offering to scale with the growth of the business and the needs of our customers.
The ideal candidate will have outstanding communication skills, proven data infrastructure design and implementation capabilities, strong business acumen, and an innate drive to deliver results. You will be a self-starter who is comfortable with ambiguity, enjoys working in a fast-paced, dynamic environment, and is often called on to solve ad-hoc, unstructured problems.
Requirements:
- A desire to work in a collaborative, intellectually curious environment
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field
- 5+ years of data warehouse experience with BigQuery, Redshift, PostgreSQL, etc.
- Demonstrated strength in SQL, data modeling, ETL development, and data warehousing
- Experience in maintaining data warehouse systems and working on large scale data transformation using Dataproc, Dataflow, EMR, or equivalent Big Data technologies
- Strong knowledge of database performance concepts like indices, segmentation, projections, and partitions
- Experience working in a data warehouse environment with diverse data sources and visualization tools like Looker, Tableau, or Einstein Analytics
- Coding proficiency in at least one modern programming language (Python, Ruby, Scala, Java, etc.)
- Experience mentoring and managing other Data Engineers, ensuring data engineering best practices are being followed
- Self-directed/motivated with excellent organizational skills
- Must be comfortable with changing requirements and priorities
- Resourceful, with a strong work ethic and a willingness to go the extra mile to get work done; results-oriented and able to move forward without complete information
- Strong desire to explore and learn data science-related technologies; proactive in identifying use cases and solutions
- Experience with hardware provisioning, forecasting hardware usage, and managing to a budget
- Proven track record of sharing outcomes through written communication, including an ability to effectively communicate with both business and technical teams
- Experience with security standards like symmetric and asymmetric encryption, virtual private clouds, IP whitelisting, LDAP authentication, and other methods
- Extensive experience working with GCP or AWS, with a strong understanding of BigQuery or Redshift, Dataflow, Data Fusion, Cloud Functions, EMR, Athena, Aurora, DynamoDB, Kinesis, Lambda, S3, EC2, etc.
- Extensive experience with Big Data technologies (Airflow, Hadoop, Hive, HBase, Pig, Spark, etc.)
- Experience with streaming/real-time data integration using Pub/Sub, Kafka, or equivalent
- Strong interpersonal skills and the ability to communicate complex technology solutions to senior leadership, gain alignment, and drive progress
Benefits:
- Excellent benefits coverage for the employee, fully paid by the employer
- Excellent benefits coverage for the employee's spouse and family, paid in part by the employer
- PTO and work flexibility
- Stock Options