Technology, Data & Information Management Centre of Expertise (TDIM CoE) (Bangalore)

What this job involves:
The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL's data strategy. We are seeking data professionals to work with our colleagues at JLL around the globe, providing solutions, developing new products, and building enterprise reporting & analytics capabilities to reshape the business of Commercial Real Estate using the power of data — and we are just getting started on that journey!
We are looking for a Staff Data Engineer who is a self-starter able to work in a diverse and fast-paced environment to join our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global level, utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering experience.
About the role
Design, architect, and develop solutions leveraging cloud big data technologies to ingest, process, and analyze large, disparate data sets to exceed business requirements
Develop systems that ingest, cleanse, and normalize diverse datasets, build data pipelines from various internal and external sources, and bring structure to previously unstructured data
Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development
Develop a good understanding of how data flows and is stored across the organization's applications, such as CRM, broker & sales tools, finance, HR, etc.
Unify, enrich, and analyze a variety of data to derive insights and opportunities
Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities
Develop POCs with platform architects, product managers, and software engineers to validate solution proposals and support migration
Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help colleagues migrate to the modern technology platform
Contribute to and adhere to CI/CD processes and development best practices, and strengthen discipline within the Data Engineering organization
Mentor other members of the team and organization and contribute to the organization's growth
Develop and maintain scalable data pipelines, and build out new integrations to support continuing increases in data volume and complexity
Collaborate with business teams
Implement processes and systems to ensure production data is always accurate and available for the key stakeholders and business processes that depend on it
Write unit/integration tests, contribute to data integrity, and document work
Perform the data analysis required to troubleshoot data-related issues and assist in their resolution
Work closely with a team of frontend and backend engineers
Contribute reusable code that can be used across systems
Support ongoing production jobs as and when required

Sounds like you? To apply you need to be:
Experience & Education
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or another quantitative discipline in science, business, or social science
- Minimum of 5 years of experience as a data developer using Python, PySpark, Spark SQL, SQL Server, and ETL concepts
- Experience with the Azure cloud platform and Databricks
- Experience with Azure Storage
- Excellent technical, analytical and organizational skills.
- Effective written and verbal communication skills, including technical writing.
Technical Skills & Competencies
Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues
Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code
Hands-on experience with PySpark, Databricks, and Spark SQL
Knowledge of JSON, Parquet, and other file formats, and the ability to work effectively with them
Knowledge of NoSQL databases such as HBase, MongoDB, Cosmos DB, etc.
Cloud experience on Azure or AWS preferred
Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
Team player — a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously in a fast-paced environment while working with cross-functional teams
What we can do for you
At JLL, we make sure that you become the best version of yourself by helping you realise your full potential in an entrepreneurial and inclusive work environment. If you have a passion for learning and adopting new technologies, JLL will continuously provide you with platforms to enrich your technical domains. We will empower your ambitions through our dedicated Total Rewards Program and competitive pay and benefits package. It's no surprise that JLL has been recognized by the Ethisphere Institute as one of the 2019 World's Most Ethical Companies for the 12th consecutive year.