Do you want to be at the forefront of engineering big data solutions that take analytics at Amazon Pharmacy to the next level? Do you want to be part of the future of healthcare? Do you have strong analytical thinking and metrics-driven decision making, are you savvy with creating scalable data ingestion tools and pipelines over petabyte-scale data, and do you want to solve problems with solutions that will meet the growing needs of the healthcare space? We are looking for a top-notch Sr. Data Engineer to join our data warehousing and analytics team. We are building near-real-time analytics platforms using big data tools and cutting-edge data lake technologies on AWS. The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics. They are an expert in data modeling with unstructured data, ingestion pipelines, ETL design, and business intelligence tools, and partner with the business to identify strategic opportunities where improvements in data infrastructure create outsized business impact. They are a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), mentor other engineers on the team in high-quality engineering, and enjoy working in a fast-paced team.
We're excited to talk to those up to the challenge! #everydaybetter

We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA.

- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Knowledge of batch and streaming data architectures such as Kafka, Kinesis, Flink, Storm, and Beam
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence
- Knowledge of distributed systems as they pertain to data storage and computing
- Knowledge of writing efficient object-oriented code in a preferred language (Python/Java/Scala), big data processing pipelines, cloud infrastructure deployment, and infrastructure as a service
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Knowledge of big data processing pipelines and AWS technologies such as SQS, SNS, Lambda, Kinesis, Glue, DynamoDB, Hadoop, EMR, and Spark is a plus

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $105,700/year in our lowest geographic market up to $205,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits.
For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. Applicants should apply via our internal or external career site.

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.