13 Oct 2021
Our client is expanding its Technology Consulting practice and specifically building its Digital and Emerging Technology team. Due to this expansion, we are seeking to recruit high-achieving individuals to be part of the growing business.
We are recruiting Data Engineers primarily at Senior Consultant or Manager levels for the Technology Consulting team.
We are seeking individuals with significant client-side experience, or individuals who have gained project and technology delivery experience within large, recognised organisations.
Your key responsibilities
Help to build valued relationships with external clients and internal peers.
Participate in presentations and proposals for moderately complex projects, or for elements of highly complex projects.
Create high-quality work products and act as a subject matter resource in a particular area, leveraging knowledge and experience to tailor services to client problems.
Understand all our service offerings and help to identify opportunities to better serve clients.
Build internal relationships within Technology Consulting.
Develop others by effectively supervising, coaching, and mentoring more junior members of staff.
Take responsibility for your own development, and maintain an educational programme to continually build your personal skills.
Understand and follow workplace policies and procedures.
Skills and attributes for success
Strong system scripting skills in languages such as Bash and Python.
Strong understanding of the Hadoop ecosystem and wider big data architecture (Cloudera Distribution).
Hands-on experience working with the Big Data ecosystem, including tools such as Hadoop, Spark, MapReduce, Sqoop, HBase, Hive, and Impala.
Data engineering and data wrangling experience.
Curious and personable with strong teaming skills.
Business and requirements analysis skills.
Strong communication and client-interaction skills.
Appreciation of project management and Agile methodologies.
To qualify for the role, you must have
Proficient understanding of distributed computing principles.
Experience with data lake and data hub implementations.
Knowledge of Hadoop cluster management and administration best practices, such as FIFO and fair scheduling.
Proficiency with Hadoop v2, MapReduce, HDFS.
Good knowledge of Big Data warehousing and querying tools such as Hive and Impala.
Experience with integration of data from multiple data sources using tools like Pig and Sqoop.
Knowledge of various ETL techniques and frameworks in the big data ecosystem.
Experience and knowledge working with Kafka, Spark Streaming, Sqoop, Oozie, or NiFi.
Experience with Cloudera/Hortonworks.
Strong data programming and development skills in one or more data programming languages, such as R, Python, Julia, or Spark (Scala/R/Python).
Ideally, you’ll also have
Academic background in a related field.
A solid track record of data management showing your flawless execution and attention to detail.
Deep knowledge of data mining, machine learning, natural language processing, or information retrieval.
Experience with NoSQL databases such as HBase, Cassandra, MongoDB.
Experience and knowledge working with Airflow and Control-M.
Experience with Hive tuning, bucketing, partitioning, and UDFs/UDAFs.
Experience with Big Data ML toolkits, such as Mahout, Spark ML, or H2O.
Experience with MapR.
Excellent presentation skills.
Understanding of cloud architectures.
Any cloud-based or big data certifications.
What we look for
We are seeking individuals with strong Big Data engineering experience and a real passion for data.
For a confidential and discreet conversation to understand more about this Technology job, please contact John Howe on +353 1 592 7868 or email