Title:
Big Data Lead Engineer
Job Description:
Quest Global is an organization at the forefront of innovation and one of the world's fastest-growing engineering services firms, with deep domain knowledge and recognized expertise with top OEMs across seven industries. We are a twenty-five-year-old company on a journey to becoming a centenary one, driven by aspiration, hunger, and humility.
We are looking for humble geniuses who believe that engineering has the potential to make the impossible possible; innovators who are not only inspired by technology and innovation but are also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers.
As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes that when you win, we all win, and when you fail, we all learn, then we're eager to hear from you.
The achievers and courageous challenge-crushers we seek have the following characteristics and skills:
Roles & Responsibilities:
Develop high-quality, secure, and scalable data pipelines using Spark with Scala/Python/Java on Hadoop or object storage (a minimal sketch follows this list).
Design and architect data-flow schemes in the Hadoop environment that are scalable and repeatable and that eliminate time-consuming steps.
Drive automation and efficiency in data ingestion, data movement, and data access workflows through innovation and collaboration.
Understand, implement, and enforce software development standards and engineering principles in the Big Data space.
Work closely and collaboratively with business stakeholders and the engineering teams embedded within business units to help them build scalable products quickly.
Leverage new technologies and approaches to innovate with increasingly large data sets.
Work with the project team to meet scheduled due dates, while identifying emerging issues and recommending solutions.
Perform assigned tasks and handle production incidents independently.
Contribute ideas to help ensure that required standards and processes are in place and actively look for opportunities to enhance standards and improve process efficiency.
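For illustration only, here is a minimal sketch in Scala of the kind of Spark batch pipeline described above. The application name, input/output paths, and column names are hypothetical assumptions, not drawn from any actual Quest Global project:

    // Minimal illustrative Spark batch pipeline: read raw CSV from HDFS or
    // object storage, cleanse it, and write partitioned Parquet.
    // All paths and column names below are hypothetical.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object OrdersPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-pipeline")
          .getOrCreate()

        // Read raw records (hypothetical HDFS path).
        val raw = spark.read
          .option("header", "true")
          .csv("hdfs:///data/raw/orders")

        // Basic cleansing: drop malformed rows, derive a partition column.
        val cleaned = raw
          .filter(col("order_id").isNotNull)
          .withColumn("order_date", to_date(col("order_ts")))

        // Write partitioned Parquet for downstream consumers.
        cleaned.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("hdfs:///data/curated/orders")

        spark.stop()
      }
    }

Partitioning the output by a date column is one common way to make downstream reads repeatable and to avoid the time-consuming full scans the responsibilities above call out.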
Required Skills (Technical Competency):
10-16 years of experience in data warehouse-related projects in a product- or service-based organization
Expertise in data engineering, with multiple end-to-end DW projects implemented in a Big Data Hadoop environment
Experience building data pipelines with Spark using Scala/Python/Java on Hadoop or object storage
Experience working with databases such as Oracle and Hadoop, and strong SQL knowledge
Experience working on real-time data-flow systems; NiFi and Kafka will be an added advantage (see the streaming sketch after this list)
Experience automating data-flow processes in a Big Data environment
Experience working in Agile teams
Strong analytical skills for debugging production issues, providing root-cause analysis, and implementing mitigation plans
Strong communication skills, both verbal and written, along with strong relationship-building, collaboration, and organizational skills
Ability to multitask across multiple projects, interface with external/internal resources, and provide technical leadership to junior team members
High energy, attention to detail, and proactivity, with the ability to function under pressure independently, along with a high degree of initiative and self-motivation to drive results
Ability to quickly learn and implement new technologies and to perform POCs to explore the best solution for a given problem statement
Flexibility to work as a member of matrixed, diverse, and geographically distributed project teams
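As a companion to the real-time data-flow item above, here is a minimal sketch of a Spark Structured Streaming job that consumes from Kafka. The broker address, topic name, and storage paths are hypothetical assumptions for illustration:

    // Minimal illustrative streaming job: subscribe to a Kafka topic and
    // land the events as Parquet. Broker, topic, and paths are hypothetical.
    import org.apache.spark.sql.SparkSession

    object ClickstreamStream {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clickstream-stream")
          .getOrCreate()

        // Subscribe to a Kafka topic (hypothetical broker and topic).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

        // Write the stream as Parquet, with a checkpoint for fault tolerance.
        val query = events.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/streams/clickstream")
          .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
          .start()

        query.awaitTermination()
      }
    }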
Auto req ID:
56319BR
Job Type:
Full Time-Regular
Assignment Country:
India
Total Years of Exp:
10 - 15 Yrs
Assignment State:
Maharashtra
Assignment Location:
Pune
Experience Level:
Mid Level