Security is the most critical priority for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Do you have experience in trust and safety? Are you interested in Artificial Intelligence (AI) and excited about technology like GPT-4? Do you want to find responsible AI failures in Microsoft’s largest AI systems impacting millions of users? Join Microsoft’s AI Red Team, where you'll work alongside security experts to cause trust and safety failures in Microsoft’s largest AI systems. We are an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft’s big bet AI systems. You will red team AI models across Microsoft’s AI portfolio, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/
We are looking for a Principal Offensive Security Engineer with trust and safety experience to help make AI security better and help our customers expand their use of our AI systems. We have multiple openings and are open to remote work.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
Discover and exploit Responsible AI vulnerabilities end-to-end in order to assess the safety of AI systems
Develop methodologies and techniques to scale and accelerate responsible AI Red Teaming
Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems
Research new and emerging threats to inform the organization
Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations
Qualifications
Required Qualifications:
7+ years of experience in identifying security vulnerabilities, the software development lifecycle, large-scale computing, modeling, cybersecurity, and anomaly detection.
5+ years of work experience in the trust and safety space, preferably with a background in content moderation
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check:
This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
Familiarity with CBRN (Chemical, Biological, Radiological, Nuclear) weapons or broader national security issues
Experience with influence operations, cybercrime, nation-state attackers, misinformation, child safety, hate speech, or human exploitation
Penetration Testing IC5 - The typical base pay range for this role across the U.S. is USD $133,600 - $256,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay Area and New York City metropolitan area, and the base pay range for this role in those locations is USD $173,200 - $282,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
#MSFTSecurity #airedteam #MSECAI
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations (https://careers.microsoft.com/v2/global/en/accessibility.html) .