IoT Data Engineer
  • Python
  • Spark
  • PySpark
  • Databricks
  • SQL
  • Java
  • Linux
  • Big Data
  • Excel
  • Database
  • ETL
  • Hadoop
  • Cassandra
  • Scala
  • Kafka
  • NoSQL
  • C#
  • Power BI
  • Azure
Parker Hannifin Corporation
Cleveland, OH 44124
119 days ago

Org Marketing Statement

With annual sales of $14.3 billion in fiscal year 2019, Parker Hannifin is the world's leading diversified manufacturer of motion and control technologies and systems, providing precision-engineered solutions for a wide variety of mobile, industrial and aerospace markets. The company has operations in 50 countries around the world. Parker has increased its annual dividends paid to shareholders for 63 consecutive fiscal years, among the top five longest-running dividend-increase records in the S&P 500 index.

Essential Functions

The ideal candidate will be responsible for developing and delivering Azure cloud solutions to meet today's high demand in areas such as AI/ML, IoT, advanced analytics, open source, enterprise collaboration, microservices, and serverless. The Data Engineer is a high-performing engineer responsible for delivering cloud-based big data and analytical solutions for IoT. Responsibilities include evangelizing data-on-cloud solutions with customers, leading business and IT stakeholders through the design of robust, secure, and optimized Azure architectures, and delivering the target solution hands-on. This role will work with customers and lead internal engineering teams in delivering big data solutions on the cloud. Using Azure public cloud technologies, our Data Engineer professionals implement state-of-the-art, scalable, high-performance data-on-cloud solutions that meet the needs of today's corporate and emerging digital applications.

Responsibilities

• Provide subject matter expertise and hands-on delivery of data capture, curation, and consumption pipelines on Azure.
• Build cloud data solutions and provide domain perspective on storage, big data platform services, serverless architectures, Databricks, the Hadoop ecosystem, vendor products, RDBMS, DW/DM, NoSQL databases, and security.
• Participate in deep architectural discussions to build confidence and ensure IoT solution success when building new solutions and migrating existing data applications to the Azure platform.
• Build the full technology stack of required services, including PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), SaaS (Software-as-a-Service), operations, management, and automation.
• Deliver data-sharing capabilities through Power BI, API systems, and cloud-to-cloud transfer.
• Stay educated on new and emerging technologies, patterns, methodologies, and market offerings in the industry.
• Adapt existing methods and procedures to create alternative solutions to moderately complex problems.
• Understand the strategic direction set by senior management as it relates to team goals.
• Use considerable judgment to define solutions, and seek guidance on complex problems.
• Manage small teams of delivery engineers (either within Parker or through an outsourced organization), successfully delivering work efforts.

Qualifications

• At least 5 years of consulting or client service delivery experience on Azure/AWS.
• At least 5 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions.
• Experience providing practical direction within Azure/AWS-native and Hadoop environments.
• Minimum of 5 years of hands-on experience with Azure/AWS and big data technologies such as PowerShell, C#, Java, Node.js, Python, SQL, ADLS/Blob, Spark/Spark SQL, Hive/MR, Pig, and Oozie, and streaming technologies such as Kafka, Event Hubs, NiFi, etc.
• Extensive hands-on experience implementing data migration and data processing using Azure services: networking, Windows/Linux virtual machines, containers, storage, load balancing, autoscaling, Azure Functions, serverless architecture, Azure SQL DB/DW, Data Factory, Azure Stream Analytics, Azure Analysis Services, HDInsight, Databricks, Azure Data Catalog, Cosmos DB, ML Studio, AI/ML, Power BI, Grafana, etc.
• Cloud migration methodologies and processes, including tools such as AWS/Azure Data Factory, Event Hubs, etc.
• 5+ years of hands-on experience in programming languages such as Java, C#, Node.js, Python, PySpark, Spark, SQL, etc.
• Minimum of 5 years of RDBMS experience.
• Experience with developer tools such as Visual Studio, GitLab, Jenkins, etc.
• Experience with private and public cloud architectures, their pros/cons, and migration considerations.
• Bachelor's or higher degree in Computer Science or a related discipline.

Recommended Skills

• DevOps on an Azure/AWS platform.
• Experience developing and deploying ETL solutions on Azure/AWS.
• IoT, event-driven, microservices, and containers/Kubernetes in the cloud.
• Familiarity with the technology stack available in the industry for metadata management: data governance, data quality, MDM, lineage, data catalog, etc.
• Familiarity with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
• Multi-cloud experience (Azure, AWS, Google) a plus.

Professional Skill Requirements

• Proven ability to build, manage, and foster a team-oriented environment.
• Proven ability to work creatively and analytically in a problem-solving environment.
• Desire to work in an information systems environment.
• Excellent communication (written and oral) and interpersonal skills.
• Excellent leadership and management skills.
• Excellent organizational, multitasking, and time-management skills.
• Proven ability to work independently.

Equal Employment Opportunity

Parker is an Equal Opportunity and Affirmative Action Employer. Parker is committed to ensuring equal employment opportunities for all job applicants and employees. Employment decisions are based upon job related reasons regardless of race, ethnicity, color, religion, sex, sexual orientation, age, national origin, disability, gender identity, genetic information, veteran status, or any other status protected by law. U.S. Citizenship/Permanent Resident is required for most positions. (“Minority/Female/Disability/Veteran/VEVRAA Federal Contractor”) If you would like more information about Equal Employment Opportunity as an applicant under the law, please go to http://www.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf and http://www1.eeoc.gov/employers/upload/eeoc_gina_supplement.pdf

Location

Elk Grove Village, IL, USA
