Chipton-Ross is seeking a Data Engineer Developer for a remote opening in the US.
RESPONSIBILITIES:
- Create automated data pipelines to discover, ingest, model, curate, secure, and store unorganized data in databases, and/or virtualize data in a data virtualization platform for use by data consumers
- Exhibit creativity, proactiveness, and resourcefulness when collecting and processing data for analysis while working on an Agile team
- Maintain data systems performance by identifying and resolving production problems and supporting new integrations, upgrades, and releases during normal and off-hours
- Develop solutions for interoperability between current and emerging technologies while executing technical debt reduction initiatives
- Use coding/scripting languages and DevOps methodology to automate pipelines for self-service data strategies, reduce testing and deployment time, and minimize errors
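To illustrate the kind of automated pipeline this role involves, here is a minimal extract-transform-load sketch using only the Python standard library. The feed, table, and field names are hypothetical, chosen purely for illustration:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: unorganized CSV with inconsistent casing,
# stray whitespace, and a duplicate row.
RAW_FEED = """id, name , region
1, Alice ,WEST
2,bob, east
2,bob, east
"""

def extract(raw: str) -> list[dict]:
    """Read raw CSV rows into dictionaries, stripping stray whitespace."""
    reader = csv.DictReader(io.StringIO(raw), skipinitialspace=True)
    return [{k.strip(): v.strip() for k, v in row.items()} for row in reader]

def transform(rows: list[dict]) -> list[tuple]:
    """Cleanse: normalize casing and drop rows with duplicate ids."""
    seen, out = set(), []
    for row in rows:
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        out.append((int(row["id"]), row["name"].title(), row["region"].lower()))
    return out

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Store the curated records in a database table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(id INTEGER PRIMARY KEY, name TEXT, region TEXT)"
    )
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", records)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_FEED)), conn)
print(conn.execute("SELECT * FROM customers ORDER BY id").fetchall())
# → [(1, 'Alice', 'west'), (2, 'Bob', 'east')]
```

In practice the same extract/transform/load stages would be orchestrated and scheduled by a pipeline tool rather than run inline, but the stage boundaries stay the same.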
REQUIREMENTS:
- Experience in the design or development of enterprise data solutions, applications, and integrations
- Knowledge of modern enterprise data architectures, design patterns, and data toolsets, and the ability to apply them
- Knowledge of local, distributed, and cloud-based technologies and of security measures to protect data
- Software engineering experience, including building automations
- Experience with GitLab or Jenkins
- Experience with Neo4j graph databases
- Experience building data models
- Experience building ETL workflows to cleanse, transform, and store data
- Experience building Tableau dashboard visualizations
- Experience with HANA, Oracle, and MySQL
- Strong understanding of data virtualization
- Strong problem-solving, conceptualization, and communication skills
- Willingness to learn; self-starter; dependable
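For the Neo4j and data-modeling requirements above, the underlying concept is the property graph: labeled nodes with properties, connected by typed relationships. The sketch below models that structure in plain Python (this is not the Neo4j driver API, just an illustration of the model; node names and relationship types are invented):

```python
# Illustrative property-graph model: labeled nodes with properties,
# linked by typed, directed relationships (the model Neo4j uses).
nodes = {
    "p1": {"labels": ["Person"], "props": {"name": "Ada"}},
    "c1": {"labels": ["Company"], "props": {"name": "Acme"}},
}
relationships = [
    {"start": "p1", "type": "WORKS_AT", "end": "c1", "props": {"since": 2020}},
]

def neighbors(node_id: str, rel_type: str) -> list[str]:
    """Follow outgoing relationships of a given type from a node."""
    return [r["end"] for r in relationships
            if r["start"] == node_id and r["type"] == rel_type]

# The equivalent traversal in Cypher would be:
#   MATCH (p:Person {name: 'Ada'})-[:WORKS_AT]->(c:Company) RETURN c
print(neighbors("p1", "WORKS_AT"))  # → ['c1']
```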
DESIRED SKILLS:
- Experience working on an Agile team under frameworks such as Scrum, SAFe, or Kanban
- Experience applying data security markings and access controls
- ETL tools: SAP Data Services, Informatica, Tableau Prep, Alteryx, Pentaho Data Integration (PDI) Kettle/Spoon
- Data modeling (logical and physical): Erwin, HANA modeling
- Database systems (SQL and NoSQL): HANA, Oracle, SQL Server, MySQL, Neo4j, Big Data (DB2)
- Data virtualization: TIBCO DV
- Data visualization: Tableau
- Languages: Cypher, Angular, Kotlin, HTML/CSS, GraphQL, JSON, JavaScript, SQL, Python
- Containers/cloud computing: AWS, OpenShift, OpenStack
- CI/CD: Jenkins, Bitbucket, GitLab
- Other: Logstash, Spring Boot, Gradle, Node.js, npm, data and REST APIs, Linux commands, Postman, HANA Web IDE, XS classic development
- Experience with HANA REST API implementation
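The "data security markings and access controls" item above typically means filtering data by classification before it reaches a consumer. Here is a minimal row-level sketch; the marking names, their ordering, and the record fields are all hypothetical:

```python
# Hypothetical security markings, ordered from least to most restrictive.
MARKING_LEVELS = {"public": 0, "internal": 1, "restricted": 2}

RECORDS = [
    {"id": 1, "payload": "press release", "marking": "public"},
    {"id": 2, "payload": "sales figures", "marking": "internal"},
    {"id": 3, "payload": "merger plans", "marking": "restricted"},
]

def visible_to(records: list[dict], clearance: str) -> list[dict]:
    """Return only the rows the given clearance level may see."""
    limit = MARKING_LEVELS[clearance]
    return [r for r in records if MARKING_LEVELS[r["marking"]] <= limit]

print([r["id"] for r in visible_to(RECORDS, "internal")])  # → [1, 2]
```

In a real platform this check would live in the database or virtualization layer (row-level security policies) rather than in application code, so that every access path enforces it.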
EDUCATION:
Bachelor's degree in Computer Science, Systems Engineering, or a related field, with at least 9 years of professional experience in the skills above.
SHIFT:
4/10