Intelligence Is Having a Startup Moment… and a Dashboard Slop Problem by Maksym Tereshchenko


The most basic lesson in any intelligence textbook is the distinction between information and intelligence. It does not matter whether the signal comes from HUMINT, SIGINT, or OSINT. Raw signals are not intelligence. Intelligence begins when signals are interpreted within context, weighed against competing explanations, and translated into judgments that can guide decisions.

Yet much of today’s OSINT ecosystem speaks the language of intelligence while operating at the level of information.

You can see this in the growing ecosystem of conflict-monitoring dashboards circulating among analysts, journalists, and researchers. Platforms such as LiveUAmap present wars and crises as constantly updating maps of incidents, alerts, and headlines. Lists of “essential OSINT tools” now circulate widely online, promising that anyone can watch global instability unfold in real time.

Some of these platforms are useful. They aggregate enormous volumes of publicly available reporting and make it searchable and visual. But they do not produce intelligence. They produce structured information streams that still require interpretation.

Recently I spent time exploring one of the newer dashboards to go viral, World Monitor. Open the interface and the world lights up: aircraft tracks, shipping movements, satellite fire detections, explosions, protests, headlines. The screen fills with signals. It looks exactly like the situational rooms we see in spy series: walls of maps, events unfolding, the entire planet rendered as a live operational picture.

For a moment, it feels like you are watching the world. Then the illusion fades.

Nothing on the screen actually tells you what matters: which signals are credible, which developments are strategically relevant, which patterns deserve attention.

And at some point you realize something uncomfortable.

It looks like intelligence.

But essentially… it is slop.

What these tools illustrate is not a revolution in intelligence but the arrival of a new phenomenon: dashboard slop, a subset of what people increasingly call AI slop.

Modern AI tools have made monitoring infrastructure extraordinarily cheap. Open-source scraping frameworks, translation models, summarization pipelines, and lightweight interfaces allow developers to ingest hundreds of feeds and generate automated tools with minimal effort.
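How little effort this takes is easy to underestimate. The sketch below shows the skeleton of such a pipeline in Python: merge items from many feeds, drop duplicates, and attach keyword labels. The feed items and keyword list are hypothetical, and real tools bolt on translation and LLM summarization, but the underlying structure is roughly this simple.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    source: str
    headline: str

# Hypothetical label map; real dashboards use models rather than keywords,
# but the pipeline shape is the same: ingest, dedupe, tag, display.
KEYWORDS = {"strike": "military", "protest": "unrest", "tanker": "maritime"}

def aggregate(feeds):
    """Merge feed items, drop exact duplicates, tag by keyword match."""
    seen, tagged = set(), []
    for feed in feeds:
        for item in feed:
            if item.headline in seen:
                continue
            seen.add(item.headline)
            labels = [tag for word, tag in KEYWORDS.items()
                      if word in item.headline.lower()]
            tagged.append((item, labels or ["unlabeled"]))
    return tagged
```

Everything after this point, the interpretation, is exactly what the pipeline does not do.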

Everyone is already overloaded with information. Aggregating that information, labeling it, and presenting it via flashy user interfaces does not lead to better or faster decisions.

Collection has become commoditized. But intelligence systems succeed or fail on layers that sit far above collection:

  • First, decision-makers require traceability: the ability to see how a conclusion was reached and which sources informed it.
  • Second, they require auditability: the capacity to reconstruct analytic reasoning when decisions are questioned later.
  • Third, modern systems must define the division of responsibility between humans and machines. AI can already surface patterns and generate analytic hypotheses at scale, but interpretation remains a collaborative process between analysts and systems. The role of analysts is therefore shifting upward, from manual verification toward supervision: challenging machine outputs, testing assumptions, and validating conclusions before they inform decisions.
  • Fourth, they require security and governance around sources, jurisdictions, and data integrity.
  • Fifth, organizations themselves are not single users. Analysts, executives, legal and risk teams, communications departments, oversight bodies, and security specialists all interact with intelligence differently, requiring distinct views of the same information environment.
  • Finally, intelligence must connect to decision support. Monitoring is only the first step. Systems must help organizations test scenarios, simulate possible outcomes, and understand the implications of different courses of action.

A dashboard visualizes information. A platform organizes intelligence.

For startups it has never been easier to build something that looks like a geopolitical command center. The cost of assembling monitoring feeds, automated summaries, and sleek interfaces has collapsed.

And here, ironically, it is useful to end with another textbook principle. Intelligence, as we are taught, is requirements-driven. Good intelligence answers the questions a decision-maker actually has.

In practice, it is the hardest part of the job.

Because answering those questions requires understanding the world of the customer. Are they trying to assess supply-chain resilience? Then geopolitical monitoring alone is not enough. You need to understand logistics nodes, industrial dependencies, and the geography of critical raw materials. Are they dealing with foreign information manipulation? Then dashboards of social media signals are useless unless you understand attribution, regulatory frameworks, and how influence networks are actually sanctioned or dismantled.

At some point the problem stops being technical and becomes human. Someone has to sit with the customer. Someone has to understand the decisions they actually make, the risks they carry, and the constraints they operate under.

This is where the intelligence textbook quietly meets the startup textbook. Both say the same thing: start with the requirement.

Intelligence begins not with feeds or dashboards, but with a decision a customer has to make. Everything else is just information.
