At Impact, our culture is our soul. We are passionate about our people and our technology, and we are obsessed with customer success. Working together enables us to grow rapidly, win, and serve the largest brands in the world. We use cutting-edge technology to solve real-world problems for our clients and continue to pull ahead of the pack as the leading SaaS platform for businesses to automate their partnerships and grow their revenue like never before. We have an entrepreneurial spirit and a culture where ambition and curiosity are rewarded. If you are looking to join a team where your opinion is valued and your contributions are noticed, and you enjoy working with fun and talented people from all over the world, then this is the place for you.
Impact is the global leader in Partnership Automation. We work with enterprise and innovative brands like Ticketmaster, Levi's, Microsoft, Airbnb, and Uber to help them manage all types of partnerships: social influencers, B2B, strategic partners, publishers, and traditional affiliates. Our combined suite of products covers the full partnership lifecycle, including partner onboarding, ad tracking, partner payments, partner recruitment, data and marketing intelligence, and fraud protection. Founded in 2008 by the same team that founded Commission Junction, Impact has grown to over 500 employees and ten offices across the United States, Europe, Africa, and Asia.
Why this role is exciting:
As a Big Data Engineer II, you will focus on delivering stories for the squad, monitoring production environments, and managing deployments to production. This role assumes you can use the latest features of a language and can independently select and implement the right design pattern to solve problems.
You will have experience implementing integration tests, be comfortable working with CI, and confidently reuse existing frameworks.
At this level you are expected to understand the business requirements of all stories in the sprint, implement stories on existing cloud infrastructure and services, and independently implement the agreed design to spec. You should feel comfortable escalating appropriately.
You are also expected to help team members with implementation and troubleshooting.
- Collaborate with the team to fulfill the department's quarterly objectives
- Design and implement features, and write tests, on the Impact Data Platform using our Big Data tech stack
- Perform releases, maintain the continuous integration pipeline, and merge code
- Apply intermediate knowledge of Hadoop, Spark, SQL, NoSQL, and streaming technologies
- Serve on-call for monitoring and alerting, and communicate to the team/company as needed
- Analyze job failures, log tickets in JIRA, and deliver an analysis, a code fix, and, where needed, a data fix
- Communicate across squads via Slack, email, JIRA, and Zoom
- Create and maintain proper documentation
- Approve and merge pull requests
- Tune the performance of systems, pipeline flows, applications, and datastores
- Assist the systems group with database and other infrastructure upgrades (sometimes off-hours/weekends)
- Gain and maintain enough understanding of the business to deliver effective solutions
- Perform data quality analysis and introduce monitors with proper alerting for the team
- Take part in interviewing new candidates
- Regularly share technical approaches with the team
- Mentor Associate Engineers and share knowledge within the team and the broader Engineering department
- Identify potential new technologies for our stack
Does this sound like you?
- Personal development
- Completed B.S. in Computer Science or a related field, or equivalent professional experience
- Open source contributions are strongly desired
- Desire to work with Big Data and surrounding technologies
- 3+ years of experience with numerous ETL / streaming pipeline technologies
- 4+ years of software development experience
- Experience with Agile/iterative processes (Kanban/Scrum)
- Experience working in a start-up or Internet business is valuable
- Customer focus
- Experience working with large data volumes (terabytes to petabytes) - required
- Experience working with Big Data technologies - Spark, Kafka, Google Pub/Sub, HBase
- Exposure to and experience with any Google Cloud technology highly desired
- Knowledge of digital marketing or web analytics is a big plus
- Experience with continuous integration/delivery methods, tooling, and integrations
- Ability to implement Ralph Kimball's core dimensional-modeling principles (star schemas, facts, dimensions, etc.)
- Ability to tune the many types of systems and applications in a data pipeline
- Experience with relational databases, table design, and SQL
- Exposure to and knowledge of scheduling frameworks; Azkaban a plus
- Experience writing enterprise-level application code in a JVM language (Scala preferred)
- Experience writing enterprise-level application code in Python is highly advantageous
What we offer:
- Unlimited PTO policy
- Training & Development
- Medical Aid and Provident Fund
- Stock Options
- Internet Allowance
- Flexible work hours
- Casual work environment
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.