- 장소: Orlando
- 시/도:
- 국가: United States of America
- 장소: Vancouver
- 시/도:
- 국가: Canada
- 장소: Austin
- 시/도:
- 국가: United States of America
- 장소: Kirkland
- 시/도:
- 국가: United States of America
- 장소: Redwood City
- 시/도:
- 국가: United States of America
EA Security is seeking an offensive-minded Security Engineer to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA’s games, services, and enterprise platforms. This role focuses on identifying real-world security risks in both commercial and internally developed AI platforms, and on building scalable testing, automation, and AI-driven security agents that extend the team’s impact.
You will work closely with Application Security and Red Team engineers, applying an attacker’s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk. This role is hands-on, technical, and impact-driven, with an emphasis on practical exploitation, adversarial testing, and scalable security outcomes.
This role is ideal for security engineers who enjoy breaking complex systems, reasoning about abuse paths, and turning deep technical findings into scalable and durable AI security improvements.
This position reports into the Application Security and Red Teaming organization.
Responsibilities
Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis
Identify and validate vulnerabilities in LLM-based systems such as data leakage, insecure tool use, authentication gaps, and abuse paths
Evaluate AI systems for prompt injection (direct, indirect, conditional, and persistent), including risks introduced through retrieval-augmented generation and agentic workflows
Conduct adversarial testing of commercial AI platforms such as Microsoft Copilot, Google AgentSpace, and OpenAI ChatGPT, as well as internally developed AI systems
Assess agentic and multi-agent workflows for privilege escalation, unsafe action chaining, cross-agent abuse, and unintended side effects
Design, build, and operate AI-driven security agents and automation, including multi-agent workflows, that scale application security, red teaming, and AI security efforts
Develop tooling, test harnesses, and repeatable validation frameworks to expand AI security coverage across teams
Partner with application engineers to translate findings into actionable mitigations, secure design patterns, and engineering guidance
Collaborate with Red Team and AppSec engineers to integrate AI attack techniques and agent-based testing into broader offensive security activities
Contribute reusable insights, documentation, and guardrails that help teams adopt AI securely and reduce future systemic risk
Required Qualifications
Strong background in application security, offensive security, or a combination of both
Hands-on experience identifying and exploiting security weaknesses in modern applications and services
Experience testing or securing AI-enabled systems, LLM integrations, or agent-based workflows
Ability to reason about attacker misuse, abuse scenarios, and emergent behavior beyond traditional vulnerability classes
Experience building automation, tooling, or security agents using languages such as Python, Go, JavaScript, or similar
Familiarity with source code review and security tooling such as CodeQL, Semgrep, or equivalent
Strong collaboration and communication skills, with the ability to work directly with engineers and security partners
Preferred Qualifications
Experience assessing commercial AI platforms or enterprise AI services
Familiarity with agent orchestration, tool calling, function execution, or multi-agent systems
Experience with traditional red team tooling or adversary simulation techniques
Exposure to detection engineering, incident response, or threat intelligence workflows
Experience turning novel AI security findings into scalable guidance rather than one-off fixes