Who we are
CommutAir is a regional airline operating flights on behalf of United Airlines as United Express. With our fleet of Embraer 145 aircraft, we operate up to 195 daily flights, connecting people and communities to the world via United's global network. Headquartered in Cleveland, we have hubs in Denver, Houston, Washington Dulles, and Newark, with a maintenance base in Albany, New York. We are looking for individuals to join our 1,200 diverse professionals who work together to solve each day's unique challenges.
What we do
Our purpose is to connect people and communities to the world. Our work has an impact on people’s lives, and we do what it takes to get the job done. We are guided and grounded by our core4 values: safe, caring, dependable, and efficient. From the flight deck to the finance team, and everyone in between, it takes a coordinated effort to keep our operation running smoothly.
Why work with us?
When you join the CommutAir family, you unlock a whole suite of perks, including:
- Flight benefits for you and your family
- Monthly performance bonus
- Medical, dental, and vision insurance
- Paid sick and vacation time
- 401(k) with company match
- Support when you need it most. CommutAir Cares, our non-profit organization, provides emergency financial assistance to employees during extreme hardship
What the position is
The Data Engineer will work with data sets to make raw data more useful to the company and the management team. Responsibilities include:
- Performing data preprocessing, including data cleaning and data transformation.
- Using machine learning tools to forecast and classify patterns in the data.
- Improving the performance and accuracy of machine learning algorithms through fine-tuning and further optimization.
- Understanding the company's requirements and formulating the questions that need to be addressed.
- Working with the management team to understand business requirements.
- Creating data lakes by efficiently extracting data from various repositories.
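As a feel for the day-to-day preprocessing work described above, here is a minimal sketch of cleaning and transforming a CSV feed into JSON with the Python standard library. The column names (`flight`, `delay_minutes`) are hypothetical examples, not an actual CommutAir schema.

```python
import csv
import io
import json

def clean_records(csv_text):
    """Parse CSV text, drop rows with missing fields, and cast numeric columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if not all(row.values()):
            continue  # cleaning: skip incomplete rows
        row["delay_minutes"] = int(row["delay_minutes"])  # transformation: cast type
        rows.append(row)
    return rows

raw = "flight,delay_minutes\nUA4511,12\nUA4512,\nUA4513,0\n"
records = clean_records(raw)
print(json.dumps(records))  # the row with a missing value is dropped
```

In practice this kind of logic would run at much larger scale in Spark or an ETL pipeline, but the shape of the work - validate, clean, reshape, and hand off in a queryable format - is the same.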
General requirements & qualifications:
- Demonstrable expertise with R, Python, and Spark, including wrangling of various data formats (CSV, XML, JSON)
- Well-versed in data visualization tools
- Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
- Demonstrable expertise with AWS cloud computing
- Experience with Agile methodology and test-driven development
- Excellent command of written and spoken English
- Self-driven problem solver
- Experience with the following technologies is highly desirable: Apache NiFi, Apache Kafka, Kibana, Node.js, and Elasticsearch
- Well-versed in various machine learning algorithms
- Ability to develop scalable ETL packages
- Well-versed in SQL as well as NoSQL technologies such as DynamoDB and MongoDB
- Skilled at building APIs over databases that enable BI analysts to query data
- Bachelor’s degree in Computer Science or a related field and at least 2 years of professional experience
- Keen knowledge of database architecture and data warehouse and big data technologies like Hadoop, Hive, Pig, and Spark
- 3 years' experience working with programming tools including R, SQL, Python, and Java