PK, the experience engineering firm, is looking for a Data Engineer to join our Analytics Practice.
- Provide senior technical consulting, developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions.
- Create and implement conceptual and physical data structures, applying best-practice design principles for modeling relational, dimensional, data storage, and NoSQL databases.
- Create and implement ETL/data pipeline architectures, data flow diagrams, source-to-target mappings, and solution designs, applying best practices for populating data structures.
- Implement ETL/data pipeline designs using ETL tools, native database programming in stored procedures, cloud platforms, and big data ecosystem tools.
- Provide expert technical advice and guidance to project team members, client stakeholders, and our staff.
- Contribute positively to our team culture and the practice.
- 5+ years of experience designing and developing data storage and ingestion architectures, data processing, and analytical pipelines for big data, relational database, NoSQL, data lake, and data warehouse solutions.
- Extensive hands-on experience implementing data migration and data processing using platforms/tooling such as:
- Azure services (or equivalent AWS or GCP services): ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, HDInsight, Databricks, Azure Data Catalog, Cosmos DB, ML Studio, AI/ML, etc.
- Big data technologies such as PowerShell, C#, Java, Node.js, Python, SQL, ADLS/Blob, Spark/Spark SQL, Databricks, and Hive, and streaming technologies such as Kafka, Event Hubs, NiFi, etc.
- RDBMS platforms (SQL Server, Oracle, DB2, Netezza, others).
- ETL development using ETL tools (SSIS, Informatica, DataStage, Other) and native database programming languages (T-SQL, PL/SQL, others).
- Experience with OLAP sources (MSAS, Oracle OLAP, Cognos TM1, others).
- Experience with data modeling tools (ERwin, IBM InfoSphere Data Architect, others).
- Experience using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Services, GitLab, Jenkins, etc.
- Well versed in DevOps and CI/CD deployments.
- Familiarity with concepts and technology stacks for metadata management, data governance, data quality, MDM, lineage, data catalogs, etc.
- Knowledge of cloud migration methodologies and experience with private and public cloud architectures, their pros and cons, and migration considerations.
- Strong tenacity and ability to overcome barriers due to technology, process, or people.
- Strong customer and user experience focus with the ability to manage complexity and ambiguity.
- Bachelor’s degree in Computer Science, Math, Statistics, Business, Finance, Accounting, or another relevant program is required.
In order to provide equal employment and advancement opportunities to all individuals, employment decisions at PK are based exclusively on merit. PK does not discriminate in employment opportunities or practices on the basis of race, color, religion, sex (including gender identity and gender expression), national origin, age, or any other characteristic protected by law.
PK is open to remote locations, excluding Colorado.