About the Company
Insticator is the global leader in increasing engagement for Publishers through interactive content and community-building. Our suite of engagement products empowers Publishers and Users alike to amplify their voices and express their opinions in safe, interactive environments. From our human-moderated Commenting Unit that facilitates healthy, respectful discourse, to our Content Engagement Unit that enables audiences to share their opinions and interact with content that speaks directly to them, Insticator reaches over 350 million consumers monthly across its vast network of premium publishing partners including Ancestry, WebMD, Fox Sports, RealClear Media Group, Newsmax, and more.
About the Role
This is a remote position that can be based anywhere; however, the right candidate must be able to work US Eastern business hours. The Senior Data Engineer will report to our CTO and be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure optimal data delivery architecture is consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or re-designing our company’s data architecture to support data scientists in building our next generation of industry-leading products.
Responsibilities and Duties
- Build the infrastructure required for optimal Extraction, Transformation, and Loading (ETL) of data from a wide variety of data sources, using a SQL-centric ETL paradigm
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Support development of analytics tools that utilize the data pipeline to provide actionable insights into conversion, customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Create system standards to maintain the integrity and security of all databases.
- Monitor, track, and identify data quality issues and gaps, and apply the necessary action plans to remediate them.
Qualifications
- 5+ years of hands-on experience creating and maintaining optimal data pipeline architectures and data warehouses, and assembling large, complex data sets that meet business requirements
- Advanced SQL knowledge, including query authoring and working familiarity with a variety of relational databases.
- Experience with NoSQL databases, including MongoDB and Elasticsearch.
- Experience building and optimizing ‘big data’ pipelines, architectures, and datasets (Apache Hadoop, MapReduce, Hive, Spark, Kafka, etc.) is a plus.
- Ability to build processes supporting data transformation, data structures, metadata, schema design, and dependency and workload management for structured and unstructured datasets
- Proven track record of manipulating, processing, and extracting value from large, disconnected datasets
- Hands-on experience with the end-to-end data product development and implementation cycle, and with working on cross-functional teams in a dynamic environment.
- Experience building pipelines for data visualization tools (Looker, Tableau, Sisense, etc.)
- Experience building systems using Python, R, TensorFlow, and Linux/Unix Bash scripts, with an emphasis on fault-tolerant code
- Strong experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with Snowflake
- Strong communication skills, with a proven ability to discuss data, infrastructure, and analytics with technical and non-technical audiences across the organization
- Working knowledge of AdTech and audience Data Management Platforms (DMPs)
- Understanding of the principles of ad serving, analytics, programmatic advertising, and RTB / DSPs / SSPs / DMPs
- Experience with data pipeline and workflow management tools: Matillion, Azkaban, Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Excellent social skills with proven ability to overcome objections and form trusting relationships with external clients and internal stakeholders
- Creative confidence
- Collaborative mindset and great teamwork skills
- Skilled at receiving feedback, as well as providing it
- Entrepreneurial & adaptable; great learning skills
- Transparent & communicative, patient
- Curious, research-minded, data-informed
We offer a comprehensive package and the chance to grow financially with the company, including:
- Competitive Salary
- Health, Dental and Vision Insurance (location dependent)
- Annual Performance Bonus
- Paid Time Off
- Stock Options (so you have ownership in the company and benefit as it grows)
- Flexible work schedule
- 401(k) (US only)
- H-1B visa and Green Card sponsorship
The Insticator Values
We recruit, promote, and reward based on our four core values:
- Sleeves Up - At Insticator we provide the autonomy and creativity needed to own your role, iterate where needed and drive impact on a massive scale.
- 100% Viewability - Insticator is passionate about open feedback at all levels of the company. This allows us to fail fast, create in real time and build an open company culture.
- Be Defiantly Great - We are defiant; we accomplish what other people think is impossible. Challenging the status quo is in our lifeblood.
- Unconditional Empathy - Our customers are real people with real business needs, and we are here to listen and respond accordingly. If we care for and respect each other, there is no challenge we can’t overcome.