The Invisible Army: Bots, Trolls, and the Architecture of Cognitive Warfare

Armies of bots and human trolls now operate at industrial scale. The tactics are old. The cost has collapsed. This briefing maps the architecture, names the effects, and tells you what protection practice needs to look like in this environment.

A briefing for practitioners in the protection and security space

What OSINT Tells Us About the Scale of the Problem

Before discussing strategy or countermeasures, it is worth grounding this analysis in what open-source intelligence reveals about the bot and troll landscape as it stands today. The numbers that follow are not abstract. They describe an operating environment.

Many online users find themselves rage-baited into keyboard fights over opinions and facts. These exchanges turn heated, especially when discussions touch culture, gender, religion, or politics. That is fertile ground for manipulation and instigation. The underlying logic is not new. Ex-KGB officer Yuri Bezmenov described the objective of ideological subversion as changing the perception of reality to the point where, despite an abundance of information, no one can reach sensible conclusions in their own defence (Ratner, 2018). That framing, from the Soviet active measures era, maps onto the current operating environment with uncomfortable precision. The tools have changed. The intent has not.

I explored this lineage in earlier academic work on disinformation as a security threat (Rifesser, 2022), and made a related argument in my master's thesis on the weaponization of TikTok. Read this piece with the evolution of AI in mind: the influence tactics are the same, but the cost, speed, and capacity for localised targeting have changed fundamentally.