At DISH, our technology teams challenge the status quo and reimagine capabilities across industries. From product development to software design to Big Data and beyond, our people play vital roles in connecting consumers with the products and platforms of tomorrow.
DISH is transforming the future of connectivity. We’re doing it by building the country’s first virtualized, standalone 5G wireless network from scratch. The foundation of a connected world, it’s a network free of the limitations of the past, and flexible enough to satisfy all the social, economic and transformative needs of the changing world.
Are you looking to grow your career as a Big Data Engineer in the Denver area, working with a company that constantly disrupts its industry? As part of our team, you will deliver creative solutions for our new Wireless venture! We’re looking for talented, driven, passionate Big Data Engineers to join the DISH family and drive our business forward by delivering world-class software to our customers, partners, and employees.
Opportunity is here. We are DISH.
Job Duties and Responsibilities
You will help build the technologies that enable millions of Americans to connect and converse with information and one another. You will work in a complex, fast-paced and highly elastic environment that provides opportunities to navigate across different teams and projects.
In this role, you will:
- Evangelize data engineering functions leveraging a Big Data processing framework;
- Create and optimize data engineering pipelines for analytics projects;
- Support data and cloud transformation initiatives;
- Support our software engineers and data scientists;
- Contribute to our cloud strategy based on prior experience;
- Understand the latest technologies in a rapidly evolving marketplace;
- Independently work with all stakeholders across the organization to deliver enhanced functionality.
Skills, Experience, and Requirements
A successful Big Data Engineer will have the following:
- 4+ years of Data Warehouse / Big Data experience;
- BA/BS in a technical or business discipline (Information Systems, Engineering, Computer Science, Finance, Business Administration, or Accounting);
- Experience gathering and understanding data requirements, collaborating with the team to achieve high-quality data ingestion, and building systems that process and transform data;
- Advanced experience with the Apache Spark processing framework and Spark programming languages such as Scala, Python, or Java, with sound knowledge of shell scripting;
- Experience with Core Spark, Spark Streaming, the DataFrame API, the Dataset API, the RDD API, and Spark SQL, processing terabytes of data; specifically, this experience must include developing data engineering jobs for large-scale data integration in AWS;
- Experience in writing Spark streaming jobs integrating with streaming frameworks such as Apache Kafka or AWS Kinesis;
- Advanced SQL experience using the Hive/Impala framework, including SQL performance tuning;
- Experience creating and maintaining automated ETL processes with special focus on data flow, error recovery, and exception handling and reporting;
- Knowledge of using, setting up, and tuning resource management frameworks such as Spark Standalone, YARN, or Mesos;
- Experience in physical table design in a Big Data environment;
- Experience working with external job schedulers and workflow tools such as Autosys, AWS Data Pipeline, Apache Airflow, and Apache NiFi;
- Experience working with key/value data stores such as HBase;
- Experience with AWS services such as EMR, Glue (serverless architecture), S3, Athena, IAM, Lambda, and CloudWatch.
Compensation: $103,300.00/Yr. - $137,100.00/Yr.
From versatile health perks to new career opportunities, check out our benefits on our careers website.
Candidates need to successfully complete a pre-employment screen, which may include a drug test.