Job description

Seargin is looking for a Data Engineer

  • Position: Data Engineer
  • Technologies: Data Engineering, SQL, Cloud
  • Location: Remote
  • Country: Switzerland
  • Area: Project
  • Form of employment: B2B
  • Experience level: Senior

The main tasks for the Data Engineer will be:

  • Designing, creating and maintaining optimal data pipeline architecture
  • Assembling large, complex data sets that meet functional and non-functional business requirements.
  • Identifying, designing, and implementing internal process improvements:
    • Automating manual processes
    • Optimizing data delivery
    • Re-designing infrastructure for greater scalability
  • Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Building analytics tools that use the data pipeline to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Working with stakeholders including:
    • Executive
    • Product
    • Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keeping data separated and secure across boundaries through multiple data centres and AWS regions.
  • Creating data tools for analytics and data scientist team members that assist them in building and optimizing the product into an innovative industry leader.
  • Working with data and analytics experts to strive for greater functionality in data systems.

The Candidate should have:

  • Strong working knowledge of SQL and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience building and optimizing
    • ‘big data’ data pipelines
    • architectures
    • data sets
  • Strong analytic skills related to working with unstructured datasets, including:
    • building processes supporting data collection
    • cleansing and transformation
    • data structures
    • metadata
    • dependency and workload management
  • A proven history of manipulating, processing and extracting value from large disconnected datasets.
  • A working knowledge of:
    • message queuing
    • stream processing
    • highly scalable ‘big data’ data stores
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • 5 or more years of experience in a Data Engineer role
  • A graduate degree in:
    • Computer Science
    • Statistics
    • Informatics
    • Information Systems or another quantitative field
  • Should also have past experience using the following software/tools:
    • Hadoop
    • Spark
    • Kafka
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with major cloud data pipeline services such as:
    • AWS
    • GCP
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Experience with industrial data protocols such as OPC DA / OPC UA
  • Experience with database API usage/customisation
  • Team-oriented, detail-oriented, efficient, and solution-oriented attitude
  • Superb analytical and problem-solving skills
  • Excellent communication and interpersonal skills
  • Flexibility and ability to work independently and in a team
  • Excellent English skills (written and spoken)

The Candidate can expect:

  • B2B Contract
  • Challenging job in an international and multilingual environment
  • Professional development
  • Attractive and competitive compensation

If you meet the requirements described above, please send your application in English (.doc) to [email protected], stating the name of the position in the subject line, and/or call +(48) 662 399 001.
