跳到内容

通用信息

地点:Orlando, Florida, United States of America 
  • 地点: Orlando
  • 州:
  • 国家/地区: United States of America

  • 地点: Vancouver
  • 州:
  • 国家/地区: Canada

  • 地点: Austin
  • 州:
  • 国家/地区: United States of America

  • 地点: Kirkland
  • 州:
  • 国家/地区: United States of America

  • 地点: Redwood City
  • 州:
  • 国家/地区: United States of America


角色 ID
211803
工作人员类型
Regular Employee
工作室/部门
CT - Security
弹性工作安排
Hybrid

描述和要求

Electronic Arts 打造更高层次的娱乐体验,激励世界各地的玩家和粉丝。在这里,每个人都是故事的主角。活跃社群,畅联全球。这里充满创造力,鼓励新观点,注重好创意。这是一支人人都能让游戏成为现实的团队。

Security Engineer, AI Security

EA Security is seeking an offensive-minded Security Engineer to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA’s games, services, and enterprise platforms. This role focuses on identifying real-world security risks in both commercial and internally developed AI platforms, and on building scalable testing, automation, and AI-driven security agents that extend the team’s impact.

You will work closely with Application Security and Red Team engineers, applying an attacker’s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk. This role is hands-on, technical, and impact-driven, with an emphasis on practical exploitation, adversarial testing, and scalable security outcomes.

This role is ideal for security engineers who enjoy breaking complex systems, reasoning about abuse paths, and turning deep technical findings into scalable and durable AI security improvements.

This position reports into the Application Security and Red Teaming organization.

Responsibilities

  • Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis

  • Identify and validate vulnerabilities in LLM-based systems such as data leakage, insecure tool use, authentication gaps, and abuse paths

  • Evaluate AI systems for prompt injection (direct, indirect, conditional, and persistent), including risks introduced through retrieval-augmented generation and agentic workflows

  • Conduct adversarial testing of commercial AI platforms such as Microsoft Copilot, Google AgentSpace, and OpenAI ChatGPT, as well as internally developed AI systems

  • Assess agentic and multi-agent workflows for privilege escalation, unsafe action chaining, cross-agent abuse, and unintended side effects

  • Design, build, and operate AI-driven security agents and automation, including multi-agent workflows, that scale application security, red teaming, and AI security efforts

  • Develop tooling, test harnesses, and repeatable validation frameworks to expand AI security coverage across teams

  • Partner with application engineers to translate findings into actionable mitigations, secure design patterns, and engineering guidance

  • Collaborate with Red Team and AppSec engineers to integrate AI attack techniques and agent-based testing into broader offensive security activities

  • Contribute reusable insights, documentation, and guardrails that help teams adopt AI securely and reduce future systemic risk

Required Qualifications

  • Strong background in application security, offensive security, or a combination of both

  • Hands-on experience identifying and exploiting security weaknesses in modern applications and services

  • Experience testing or securing AI-enabled systems, LLM integrations, or agent-based workflows

  • Ability to reason about attacker misuse, abuse scenarios, and emergent behavior beyond traditional vulnerability classes

  • Experience building automation, tooling, or security agents using languages such as Python, Go, JavaScript, or similar

  • Familiarity with source code review and security tooling such as CodeQL, Semgrep, or equivalent

  • Strong collaboration and communication skills, with the ability to work directly with engineers and security partners

Preferred Qualifications

  • Experience assessing commercial AI platforms or enterprise AI services

  • Familiarity with agent orchestration, tool calling, function execution, or multi-agent systems

  • Experience with traditional red team tooling or adversary simulation techniques

  • Exposure to detection engineering, incident response, or threat intelligence workflows

  • Experience turning novel AI security findings into scalable guidance rather than one-off fixes



Electronic Arts
我们拥有全面的游戏组合和丰富的体验,在世界各地设有分支机构,而且在整个 EA 提供大量机会。我们非常重视适应能力、韧性、创造力和好奇心。我们提供领导岗位让您发挥潜力,为学习和尝试提供空间,赋能您出色地完成工作并寻求成长的机会。

我们对福利计划采用整体方法,强调身体、情感、财务、职业和社区健康,以支持平衡的生活。我们的套餐专为满足当地需求而量身定做,可能包括医疗保险、心理健康支持、退休储蓄、带薪休假、家事休假、免费游戏等。我们营造和谐的环境,让各个团队始终都能尽展所能。

Electronic Arts 是一个注重机会平等的雇主。在聘用员工时不会考虑其种族、肤色、国籍、血统、生理性别、社会性别、性别认同或表达、性取向、年龄、遗传信息、宗教、身心障碍、医疗状况、怀孕状况、婚姻状况、家庭状况或兵役状况,或任何受法律保护的其他特征。我们也会遵守相关法律,考虑招聘有过犯罪记录的合格应聘者。EA 还会根据适用法律的要求,为合资格的残障人士提供工作场所的便利。