Senior Data Engineer

Job description

About the Role

As the Senior Data Engineer, you will work with modern cloud tooling (Kubernetes, serverless stacks, Snowflake, Airflow, RDS, Kafka, EMR with Python/Spark) in a dual-cloud (AWS and Azure) deployment across multiple agile development teams. Acting in both an engineering and an administrative capacity, you will implement and evangelize best practices for data management, maintaining and optimizing the core datasets and pipelines that drive Brivo's vision and advance the organization's data strategy.

You will be a key member of the team responsible for driving the company’s Data Strategy through ownership of the underlying data platform supporting the company’s SaaS IoT Security products, with a strong focus on maintaining reliability and resilience. In this role you will also be expected to:

  • Create, manage, monitor, and maintain data pipelines across multiple applications and environments.
  • Proactively monitor, tune, and report on the performance of the platform: the databases, tooling, and infrastructure that comprise it.
  • Build expert-level knowledge of the applications and their underlying data in order to support production database environments (24/7 team operating model), ensuring the highest standards of availability, resilience, integrity, security, and performance required by our business systems.
  • Collaborate with Development and QA teams to provide best practices, guidance, and insight into data management and operations, and to support the development, testing, tuning, and deployment of applications.
  • Strategize and implement the next generation of the data platform with a focus on building analytical datasets to facilitate BI and ML applications.
  • Improve existing processes by finding opportunities for automation and implementing it wherever possible.
  • Manage, maintain, and monitor disaster recovery strategies and security controls in accordance with company policies, procedures, and processes.
  • Design, develop, and maintain appropriate documentation for the data platform and its processes and procedures.

About You

We are seeking a self-starting, ambitious, collaborative, and seasoned data professional with 4+ years of experience developing solutions that leverage: an RDBMS (preferably Postgres); NoSQL databases such as DynamoDB or Cassandra; data pipelines in Python and Spark; cloud warehousing tools such as Snowflake; and data streaming technologies such as Kafka and Kinesis. You should have deep knowledge of, and a passion for, data management and data engineering best practices, with a particular focus on reliability, resilience, and scalability, and a drive to understand systems holistically.

  • Deep understanding of RDBMS technologies, specifically as they pertain to monitoring and tuning performance, including the use of logging and monitoring tools such as CloudWatch, Datadog, ELK, and Splunk.
  • An informed point of view on the challenges of scaling big data systems to support multi-national implementations.
  • Experience with data warehousing concepts, including dimensional (fact and dimension) modeling, and strategies for building data lakes.
  • Familiarity with the core AWS building blocks (e.g., RDS, Lambda, EMR, S3, IAM) and how to use them to properly build and secure data for SaaS applications.
  • Experience building and orchestrating data/ETL pipelines, preferably using a mix of tooling including Airflow, Python, Spark, and data replication tools.
  • Ability to analyze, diagnose, and tune database and query performance at all associated layers (database, network, server, disk).
  • Working experience managing database schema versioning with automated tools such as MyBatis Migrations, Flyway, or Liquibase.
  • Knowledge of development best practices.
  • If you read this far, add the word pepper to your resume when you submit.
  • 3+ years of development experience writing, maintaining, and documenting code in shell, Python, or other scripting languages, plus familiarity with common programming languages (esp. Java, JavaScript, Python, or PHP) and their associated ORM engines (e.g., Hibernate, SQLAlchemy, Mongoose).
  • Knowledge of or experience with DataOps is a plus!

About Us

Brivo is the global leader in mobile, cloud-based access control for commercial real estate, multifamily residential, and large distributed enterprises. Our comprehensive product ecosystem and open API provide businesses with powerful digital tools to increase security automation, elevate employee and tenant experience, and improve the safety of all people and assets in the built environment. Having created the category over twenty years ago, our building access platform is now the digital foundation for the largest collection of customer facilities in the world, trusted by more than 25 million users occupying over 300M square feet of secured space in 42 countries.

Our dedication to simply better security means providing the best technology and support to property owners, managers, and tenants as they look for more from buildings where they live, work, and play. Our comprehensive product suite includes access control, smart readers, touchless mobile credentials, visitor management, occupancy monitoring, health and safety features, and integrated video surveillance, smart locks, and intercoms. Valued for its simple installation, high-reliability backbone, and rich API partner network, Brivo also has the longest track record of cybersecurity audits and privacy protections in the industry.

Brivo is privately held and headquartered in Bethesda, Maryland. Learn more at

Brivo is an Equal Opportunity/Affirmative Action Employer
