Site Name: London The Stanley Building, Seattle Sixth Ave, USA - California - San Francisco, USA - Massachusetts - Cambridge

Posted Date: Dec 15 2023

At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines.

We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:

- Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
- Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
- Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time

We are looking for a skilled Data Operations Engineer I to join our growing team. The DataOps team accelerates biomedical and scientific data product development and ensures consistent, professional-grade operations for the Data Science and Engineering organization by building templated projects (code repository plus DevOps pipelines) for various Data Science / Data Engineering architecture patterns in the challenging biomedical data space.

A Data Operations Engineer I knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion. A Data Operations Engineer I is a technical contributor who can take a well-defined specification for a function, pipeline, service, or other sort of component, along with a technical approach to building it, and deliver it at a high level. They are aware of, and adhere to, best practice for software development in general (and their specialization in particular), including code quality, documentation, DevOps practices, and testing. They ensure robustness of our services and serve as an escalation point in the operation of existing services, pipelines, and workflows.

A Data Operations Engineer I should have awareness of the most common tools (languages, libraries, etc.) within their specialization.
They should be constantly seeking feedback and guidance to further develop their technical skills and expertise, and should take feedback well from all sources in the name of development.

In this role you will

- Be a technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering
- Develop and support delivery of high-performing, high-impact data ops products and services, from a loosely defined data engineering problem or requirement
- Build modular code / libraries / services / etc. using tools appropriate to your area of specialization
- Produce well-engineered software, including appropriate automated test suites and technical documentation
- Ensure consistent application of platform abstractions to ensure quality and consistency with respect to logging and lineage
- Adhere to the QMS framework and CI/CD best practices
- Stay up to date with developments in the open source community around DevOps, data engineering, data science, and similar tooling
- Provide L3 support to existing tools / services / pipelines

Why you?

Qualifications & Skills:

We are looking for professionals with these required skills to achieve our goals:

- Bachelor's degree or equivalent in Computer Science, Software Engineering, or a related discipline
- Job-related relevant Data Engineering experience building and testing software components