In most digital products, the security model has traditionally been built around the user’s interaction with the interface. Authentication confirms identity, and then the system assumes that all subsequent actions within the session are legitimate. This approach is logical for UI-oriented applications, where every meaningful operation is directly initiated by a human and has a clearly defined context.
The shift toward agent-oriented AI applications fundamentally changes this logic. Users increasingly stop controlling individual actions and instead delegate authority to an autonomous AI agent that interprets intent, plans a sequence of steps, and initiates actions on their behalf. In this model, threats are displaced: the primary attack surface is no longer the account itself, but the decision-making logic—delegation conditions, execution context, and the rules governing additional access verification.
Accordingly, the nature of attacks also changes. An attacker no longer needs to steal credentials or access tokens. It is often far more effective to influence how the system makes decisions: manipulating input data, altering the context in which the agent operates, or bypassing step-up checks through chains of formally permitted actions. As a result, all checks may appear valid while the system performs operations that the user did not explicitly authorize.
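The "chain of formally permitted actions" failure mode can be made concrete with a small sketch. Assume a hypothetical policy where any single transfer above a threshold triggers step-up verification; an attacker who splits one large transfer into several sub-threshold ones passes every per-action check. Tracking the cumulative effect of a session closes that gap. All names and the threshold here are illustrative, not a real product's API:

```python
from dataclasses import dataclass

STEP_UP_THRESHOLD = 100.0  # hypothetical per-session limit that triggers re-verification


@dataclass
class SessionLedger:
    """Tracks the cumulative effect of an agent's actions within one delegation session."""
    total_transferred: float = 0.0

    def requires_step_up(self, amount: float) -> bool:
        # A naive check would compare only the individual action to the limit:
        #     return amount > STEP_UP_THRESHOLD
        # That is exactly what a chain of small, individually permitted transfers
        # bypasses. Checking the combined effect catches the chain as well.
        return (amount > STEP_UP_THRESHOLD
                or self.total_transferred + amount > STEP_UP_THRESHOLD)

    def record(self, amount: float) -> None:
        self.total_transferred += amount
```

With this ledger, two transfers of 60 each are treated like one transfer of 120: the first is allowed, but the second crosses the cumulative limit and forces step-up.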
The readiness of modern AI applications for such targeted attacks remains limited. Most authentication mechanisms effectively protect user identity but provide little control over agents' autonomous behavior over time. Long-lived access and context accumulation pose a particular risk: an agent's "memory" can gradually drift or be manipulated without any clear incident. In such cases, compromise manifests not as a breach, but as a series of formally correct decisions with dangerous outcomes.
Under these conditions, assessing compromise risk in agent-oriented systems cannot be reduced to how well an account is protected. Other factors become critical: the scope and duration of delegated authority, the set of actions available without re-verification, approaches to forming and constraining agent memory, and the system’s ability to detect and stop undesirable autonomous behavior in time.
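The factors above (bounded scope, limited duration, re-verification for anything outside the grant) can be combined into a single authorization decision per agent action. The following is a hedged sketch of such a gate; the `Delegation` structure and the action names are assumptions for illustration, not an existing API:

```python
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class Delegation:
    """Hypothetical delegation grant: what the agent may do, and until when."""
    allowed_actions: frozenset[str]
    expires_at: float


def authorize(delegation: Delegation, action: str, now: float | None = None) -> str:
    """Return 'allow' or 'step_up' for a proposed agent action."""
    now = time.time() if now is None else now
    if now >= delegation.expires_at:
        return "step_up"   # grant expired: the human must re-verify
    if action in delegation.allowed_actions:
        return "allow"
    return "step_up"       # out-of-scope action: require explicit approval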
Ultimately, the key challenge for authentication in AI applications lies not only in correctly identifying the user, but in controlling the autonomous decisions the system makes on their behalf. This area remains the least formalized—and at the same time the most risky—aspect of modern AI security.

Olga is a recognized expert in IT and information security with 19 years of experience. Among other things, she specializes in information security systems design and implementation. Her profound knowledge of IT technologies and principles of building IT infrastructure put her in the position of the Chairperson of the Committee on IT and Cyber Security of the German-Ukrainian Chamber of Industry and Commerce. Olga is also the CEO of the Ukrainian IT company Silvery LLC.
