Experts continue to debate when the active phase of the war in Ukraine will end. But the reality we should focus on is that many of the most serious hybrid threats will only begin to emerge in the post-conflict period.
Following the peak of fighting in 2014, Russia shifted its strategy toward the information domain. As early as 2015, some European analysts were already describing the conflict as an ‘information war,’ and the EU established the East StratCom Task Force specifically to counter a sharp rise in disinformation campaigns. According to EUvsDisinfo, over 6,000 cases of pro-Kremlin disinformation targeting Ukraine have been documented since 2014.
These campaigns have grown significantly in scale and complexity since 2022. Annual EEAS reports document hundreds of coordinated incidents across dozens of platforms, and according to the Centre for Strategic Communications Spravdi, the figure now stands at 4,000–6,000 cases per day. Nor are these operations geographically confined: evidence shows that FIMI (foreign information manipulation and interference) campaigns affected up to 90 countries in 2024.
Of course, the post-2014 phase is not a perfect analogue for what comes after the full-scale war ends. But we can safely infer that pressure will not dissipate. It will intensify, driven not only by the reduction or cessation of kinetic activity (which typically shifts competition into more sophisticated forms of influence), but also by the use of advanced technologies such as AI. It will likely concentrate on a limited set of critical processes where outcomes are decisive. This pressure will not be confined to Ukraine; it is already visible across Central and Eastern Europe, in countries such as Poland, Romania, and Moldova.
Ukraine’s election will be the single highest-stakes systems event of the post-war period. No national vote has taken place since 2020, meaning the next election will occur after a prolonged disruption, under sustained information pressure, within a highly digital environment, and against a long-term push from an adversarial state. This creates structural vulnerabilities. Large segments of the population (veterans, displaced persons, and communities affected by trauma or occupation) will re-enter the process simultaneously, often through fragmented and high-noise information channels. In this context, the challenge is system integrity. Elections will be exposed to scalable influence operations enabled by AI-generated content, automated amplification, and cross-platform targeting. Similar patterns are already visible across Central and Eastern Europe and post-Soviet states, including Moldova, Hungary, Georgia, and Armenia.
A particularly relevant pattern is what is currently unfolding in Armenia. As the country’s geopolitical positioning shifts, coordinated narratives have emerged across digital platforms that frame alignment choices as pathways to instability or conflict.
From a systems perspective, this reflects a repeatable dynamic: narrative clusters are seeded, amplified, and synchronized across channels to shape perception at scale. According to DFRLab analysis, between January and September 2025, keywords tied to these narratives appeared in 1,339 Telegram posts that together drew 7 million views. The implication is technical as much as informational: detecting these patterns early, mapping their propagation, and identifying amplification nodes are critical capabilities for building resilience in similar environments. These capabilities already exist. Ukrainian startups have been building and stress-testing them in real-world conditions, and are now scaling to the regional level. The gap is not tooling, but integration. Instead of fragmented, state-by-state monitoring platforms, the priority should be interoperability, shared data layers, and coordinated deployment across Central and Eastern Europe.
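To make the detection logic concrete, here is a minimal sketch of one heuristic for flagging amplification nodes: channels that repost material from multiple distinct origin channels within a tight time window, a common proxy for coordinated behavior. All channel names, the event data, and the thresholds are illustrative assumptions, not real monitoring output or any specific vendor's method.

```python
from collections import defaultdict

# Hypothetical repost events: (origin_channel, amplifier_channel, hours_after_seeding).
# Data is invented for illustration only.
events = [
    ("seed_a", "relay_1", 0), ("seed_a", "relay_2", 0),
    ("seed_a", "relay_3", 1), ("seed_b", "relay_1", 0),
    ("seed_b", "relay_2", 1), ("seed_b", "relay_4", 5),
]

def amplification_nodes(events, window_hours=2, min_shared_origins=2):
    """Flag channels that repost several distinct origins shortly after seeding:
    a simple proxy for coordinated amplification."""
    origins_by_amplifier = defaultdict(set)
    for origin, amplifier, hour in events:
        if hour <= window_hours:
            origins_by_amplifier[amplifier].add(origin)
    return sorted(a for a, origins in origins_by_amplifier.items()
                  if len(origins) >= min_shared_origins)

print(amplification_nodes(events))  # ['relay_1', 'relay_2']
```

A production system would of course work over graphs of millions of posts and combine timing, content similarity, and network centrality, but the underlying question is the same: which nodes synchronize across narratives?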
A trilateral format between Moldova, Ukraine, and Romania points to a workable model: joint development of digital security capabilities through shared platforms and coordinated workflows. The Cyber Alliance for Regional Resilience, formalized in February 2026, is an early step in this direction, but it remains experimental and will require iteration in governance, data-sharing standards, and operational integration to become effective.
The broader opportunity is technical. Rather than building parallel national systems, the priority should be interoperable infrastructure for sharing signals, models, and response protocols across countries under similar pressure, including the Baltic states and the Caucasus. Establishing a common layer for exporting and exchanging detection methods, datasets, and analytical workflows is critical to scaling collective resilience. Teams in Ukraine bring operational experience from high-intensity environments that has yet to be fully translated into regional systems. The task now is to integrate, standardize, and deploy that knowledge across a wider network.
The same dynamics extend beyond the immediate CEE cluster into the broader EU information environment. The challenge is not uniform (exposure, response capacity, and public signal interpretation vary significantly across countries), but the underlying pattern is consistent: influence operations scale faster than the systems designed to detect and coordinate responses to them.
From a systems perspective, the core weakness is not a lack of tools, but fragmentation in implementation. The EU already has regulatory frameworks and digital capabilities in place. What is missing is harmonized application and a shared analytical baseline that allows signals to be interpreted consistently across jurisdictions.
This is where standardization becomes critical. Frameworks such as DISARM provide a common language for identifying, classifying, and responding to influence operations. Embedding such approaches into national workflows and EU-level coordination mechanisms would enable interoperability across systems and reduce gaps created by inconsistent attribution and response models.
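The practical payoff of a shared taxonomy is that incidents logged by different national teams become directly comparable. The sketch below assumes two teams tag incidents with DISARM-style technique identifiers; the IDs, incident records, and field names here are illustrative placeholders, not an authoritative DISARM mapping.

```python
def shared_techniques(incident_a, incident_b):
    """Overlap in technique tags marks two incidents as candidates
    for cross-border correlation."""
    return sorted(set(incident_a["techniques"]) & set(incident_b["techniques"]))

# Hypothetical incident records from two national workflows.
reported_in_pl = {"id": "PL-2025-041", "techniques": ["T0001", "T0017", "T0049"]}
reported_in_ro = {"id": "RO-2025-112", "techniques": ["T0017", "T0049", "T0068"]}

print(shared_techniques(reported_in_pl, reported_in_ro))  # ['T0017', 'T0049']
```

Without the common vocabulary, the same comparison requires manual translation between incompatible national classification schemes, which is exactly the gap the text describes.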
The same logic applies to response infrastructure. Moving from reactive to structured approaches requires that detection outputs (verified narratives, attribution signals, propagation patterns) feed directly into operational and legal workflows. This includes regulatory action, platform coordination, and, where relevant, accountability mechanisms.
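One way to picture detection outputs feeding directly into operational workflows is a typed record plus a routing rule. This is a sketch under stated assumptions: the field names, confidence thresholds, and workflow labels are invented for illustration and would need calibration in any real deployment.

```python
from dataclasses import dataclass

@dataclass
class DetectionOutput:
    """One verified detection result (illustrative schema)."""
    narrative_id: str
    verified: bool
    attribution_confidence: float  # 0.0-1.0
    propagation_pattern: str       # e.g. "cross-platform", "single-channel"

def route(output: DetectionOutput) -> str:
    """Map a detection output to a downstream workflow.
    Thresholds are placeholders, not policy."""
    if not output.verified:
        return "analyst-review"
    if output.attribution_confidence >= 0.8:
        return "accountability-mechanism"
    if output.propagation_pattern == "cross-platform":
        return "platform-coordination"
    return "regulatory-monitoring"

print(route(DetectionOutput("N-17", True, 0.85, "cross-platform")))
# accountability-mechanism
```

The point of the structure is that a verified narrative never dead-ends in a report: every record has a defined next step, whether regulatory, platform-facing, or legal.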
The limiting factor remains execution speed. Detection, attribution, and verification infrastructure is not scaling at the pace required, largely due to procurement cycles and institutional inertia.
This creates a clear opportunity. The next phase of resilience will not be built through isolated national platforms, but through shared technical infrastructure developed in partnership with the private sector.

Andrii Olenin is an Intelligence Analyst at Mantis Analytics. His primary focus is the analysis of Russian information campaigns, influence tools, and long-term dynamics across post-Soviet space and Eastern Europe. He has a background in journalism and has been working systematically on FIMI threats since 2018.
