Sunday, October 26, 2025

From Suspicious Activity to Scene Response: Empowering CERT Volunteers in Terror-Threat Environments


When the Unthinkable Happens Nearby

When the unthinkable happens—a backpack left behind at a street fair, a car parked too long near a parade route—the space between awareness and official response can define the outcome. In that gap, calm, trained Community Emergency Response Team (CERT) volunteers become the bridge between fear and coordination. Their vigilance and composure can mean the difference between chaos and control.

The Federal Emergency Management Agency (FEMA) developed CERT to educate and organize citizens before disasters strike. But in an age of “soft-target” terrorism—public venues and everyday spaces vulnerable to low-tech, high-impact attacks—the CERT mission extends beyond earthquakes and floods. It now includes the prevention, recognition, and initial stabilization of human-caused threats (FEMA, n.d.-a).


The New Front Line: Suspicious Activity in the Era of Soft Targets

Terrorism today increasingly exploits community openness. A 2023 United Nations Counter-Terrorism Committee report identified “everyday venues” as the preferred settings for attackers seeking maximum fear with minimal planning. Such events often begin not with explosions or gunfire, but with indicators—unattended bags, unauthorized filming of access points, or someone testing barriers (United Nations, 2023).

The Department of Homeland Security’s If You See Something, Say Something® campaign reminds citizens that vigilance is a civic responsibility (DHS, n.d.-a). Yet for CERT members, vigilance is professionalized. They are trained to distinguish between credible observation and paranoia. CERT Unit 8, Terrorism and CERT, teaches volunteers how to identify precursor behaviors, collect descriptive details, and report accurately without escalating public panic (FEMA, n.d.-b).

This awareness transforms fear into readiness. By learning to see instead of merely look, CERT members extend national security’s reach into the spaces where Americans live, shop, and celebrate.


Empowerment Through Training: Turning Fear into Readiness

Preparedness transforms anxiety into agency. CERT volunteers train to observe calmly, communicate clearly, and act confidently. The FEMA course Introduction to Community Emergency Response Teams (IS-317) outlines the core mission: protect life, prevent additional harm, and support professional responders (FEMA, n.d.-c).

Training focuses on practical empowerment:

  • Observation and Reporting: noting who, what, when, where, and why before contacting authorities (DHS, n.d.-b).

  • Scene Safety: keeping distance from suspicious objects or areas while maintaining situational awareness.

  • Psychological First Aid: stabilizing frightened bystanders, easing fear through presence and direction.

The Texas A&M Engineering Extension Service (TEEX) complements CERT education with WMD/Terrorism Awareness for Emergency Responders (AWR160)—a course that teaches volunteers to recognize chemical, biological, radiological, nuclear, and explosive indicators (TEEX, n.d.). The objective is not heroism but discipline: the courage to stay calm and the knowledge to act correctly.


Between Chaos and Control: The CERT Role at the Scene

When an incident occurs, the first few minutes define everything. FEMA’s Target Capabilities List (2007) emphasizes the intelligence and investigations function—collecting, verifying, and sharing information between the public and response agencies. CERTs play a unique role here: they are the trained eyes on the ground.

They do not confront suspects or defuse devices. Instead, they gather data, manage crowds, and maintain order until command arrives. They speak the same language as first responders because CERT training integrates the Incident Command System (ICS), ensuring consistent communication and chain-of-command discipline.

In practice, this means that when communication lines falter, CERT volunteers become the human relay—a stabilizing link that keeps local officials informed and communities safe.


Building a Culture of Vigilance and Trust

Effective counterterrorism begins with community trust. CERT volunteers embody that trust. Through neighborhood drills, faith-based workshops, and civic outreach, they normalize preparedness and replace fear with familiarity.

FEMA’s CERT guidance stresses that community education is prevention. Awareness sessions help residents recognize that suspicious activity is defined by behavior, not appearance—a distinction essential to maintaining both security and civil liberties (FEMA, n.d.-a).

The Department of Homeland Security’s Community Awareness Briefing similarly warns that bias-driven suspicion undermines the credibility of vigilance programs (DHS, n.d.-c). By training citizens to focus on actions—such as surveillance, testing of security, or unauthorized access—CERTs help ensure that vigilance strengthens unity rather than division.

Through this outreach, CERTs become more than responders. They are the local ambassadors of resilience—neighbors who remind others that preparedness is a shared duty, not a specialist’s privilege.


Prepared, Not Paranoid

Preparedness is not about predicting the next attack—it is about participation. The CERT volunteer embodies that principle: watchful but not fearful, proactive but not reckless.

When the next moment of uncertainty comes—a strange noise at a fairground, a suspicious package at a transit hub—the community’s first safeguard may not wear a uniform. It may be a trained volunteer who remembers the mission: see clearly, stay calm, and serve with courage.

Because the difference between chaos and coordination is often one steady voice—ready before the sirens ever sound.


References

Department of Homeland Security. (n.d.-a). If You See Something, Say Something® campaign. U.S. Department of Homeland Security. https://www.dhs.gov/see-something-say-something

Department of Homeland Security. (n.d.-b). How to report suspicious activity. U.S. Department of Homeland Security. https://www.dhs.gov/see-something-say-something/how-to-report-suspicious-activity

Department of Homeland Security. (n.d.-c). Community Awareness Briefing (CAB). U.S. Department of Homeland Security. https://www.dhs.gov/prevention/clearinghouse-category/training-opportunities

Federal Emergency Management Agency. (2007). Target Capabilities List: A companion to the National Preparedness Guidelines. U.S. Department of Homeland Security. https://www.fema.gov/pdf/government/training/tcl.pdf

Federal Emergency Management Agency. (n.d.-a). Community Emergency Response Team (CERT) Program. U.S. Department of Homeland Security. https://www.fema.gov/emergency-managers/individuals-communities/preparedness-activities-webinars/community-emergency-response-team

Federal Emergency Management Agency. (n.d.-b). CERT Basic Training: Participant Manual, Unit 8 – Terrorism and CERT. U.S. Department of Homeland Security. https://www.ready.gov/sites/default/files/2019.CERT_.Basic_.IG_.FINAL_.508c.pdf

Federal Emergency Management Agency. (n.d.-c). IS-317: Introduction to Community Emergency Response Teams. U.S. Department of Homeland Security. https://training.fema.gov/is/courseoverview.aspx?code=IS-317

Texas A&M Engineering Extension Service. (n.d.). AWR160 – WMD/Terrorism Awareness for Emergency Responders. https://teex.org/class/awr160/

United Nations Counter-Terrorism Committee Executive Directorate. (2023). Protecting vulnerable targets from terrorism. United Nations. https://www.un.org/counterterrorism


Wednesday, October 22, 2025

Youth Radicalization and the 488% Jump in Terrorism Charges in Canada

Between April 2023 and March 2024, the Royal Canadian Mounted Police (RCMP) reported a staggering 488 percent increase in terrorism-related charges across the nation. Twenty-five suspects were accused of 83 terrorism offences—an extraordinary rise from the previous year. Among those charged were several minors and young adults, a revelation that underscores a growing concern within Canada’s national security community: the rapid radicalization of youth. The surge is not merely a statistical anomaly; it represents a deepening social and psychological crisis emerging from digital spaces, ideological fragmentation, and an under-resourced prevention infrastructure.


The Data: Understanding the Spike

The RCMP’s internal briefing to Public Safety Canada in early 2024 revealed that terrorism-related charges had increased nearly sixfold within one year. Three minors and six young adults were among those charged, while eight additional youths were placed under terrorism peace bonds. Law-enforcement agencies also reported six foiled terrorist plots between 2023 and 2024, spanning cities such as Edmonton, Ottawa, and Toronto. These figures reflect both improved investigative capacity and a real escalation in extremist activity among young Canadians.

This sharp rise is particularly concerning given that terrorism prosecutions in Canada have historically been rare. The Criminal Code’s terrorism provisions were first enacted in 2001 under the Anti-Terrorism Act (Bill C-36), yet charges have typically numbered only in the single digits each year. The 2023–2024 increase therefore signals a fundamental shift in the threat landscape rather than a routine fluctuation.
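The arithmetic behind the headline figure is worth spelling out: a 488 percent *increase* means the new total is roughly 5.9 times the prior year's, not 4.88 times. A minimal sketch of that conversion (the `increase_multiplier` helper is illustrative, and the prior-year baseline is back-calculated from the stated rise, not taken from the RCMP report):

```python
# Convert a reported percent increase into a growth multiplier, and
# back-calculate an implied baseline from the 83 charges cited in the post.

def increase_multiplier(percent_increase: float) -> float:
    """A 488% increase means the total grew to (1 + 4.88) = 5.88x."""
    return 1 + percent_increase / 100

multiplier = increase_multiplier(488)   # ~5.88x the prior year
prior_year = round(83 / multiplier)     # implies roughly 14 charges before
print(f"{multiplier:.2f}x growth, ~{prior_year} charges the year before")
```

The back-calculated baseline (about 14 charges) is consistent with the post's observation that terrorism prosecutions in Canada have historically numbered in the single or low double digits each year.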


Youth Radicalization in the Digital Era

Radicalization among Canadian youth differs from traditional extremist recruitment models. According to RCMP intelligence assessments, online ecosystems have become the primary incubators for extremist belief systems. Young individuals increasingly encounter violent ideologies through algorithm-driven content feeds, encrypted messaging apps, and online gaming communities. Unlike earlier generations, these recruits often have minimal physical contact with organized terror networks.

Social isolation, identity struggles, and mental-health vulnerabilities have compounded this digital exposure. The COVID-19 pandemic intensified many of these factors, creating conditions in which adolescents sought meaning and belonging through online movements. Ideologically Motivated Violent Extremism (IMVE)—including far-right, conspiracy-based, and religiously motivated movements—has drawn youth through narratives of empowerment and grievance. In many cases, radicalization occurs within echo chambers that reinforce hostility toward perceived enemies, whether political, ethnic, or religious.


The Canadian Context: Why Now?

Several converging factors have accelerated youth radicalization in Canada. First, the global information environment is increasingly polarized, with geopolitical conflicts—such as the 2023 Israel-Hamas war—spilling into domestic discourse. The RCMP and the Canadian Security Intelligence Service (CSIS) have both warned that such international events fuel online hate speech and ideological mobilization among young Canadians.

Second, resource limitations have hindered effective prevention. The RCMP briefing noted that the increase in violent extremism has “not seen a parallel increase in resourcing.” Counter-radicalization programs such as the Canada Centre for Community Engagement and Prevention of Violence operate on modest budgets, often without the capacity to reach at-risk youth before extremist networks do.

Third, evolving domestic legislation—ranging from Bill C-51 (2015) to Bill C-59 (2019)—has expanded authorities’ ability to investigate and prosecute terrorism cases. While these tools improve accountability, they also highlight the reactive nature of Canada’s approach: arrests often follow rather than prevent radicalization.


Case Studies: Youth Involvement in Terror-Related Activity

Several high-profile cases illustrate the human dimension behind the statistics. In 2024, a teenager in Ottawa was charged with plotting violence against Jewish individuals—a case that shocked both the Jewish community and counter-terrorism officials. The accused, influenced by online extremist narratives, allegedly viewed violence as a form of social validation. Similar arrests in Calgary and Toronto involved youth drawn into ideological movements ranging from jihadist extremism to violent incel culture.

While these examples differ in ideology, they share key traits: social isolation, digital radicalization, and a lack of early intervention. Law enforcement has increasingly used terrorism peace bonds in such cases—civil orders restricting individuals believed likely to commit terrorism offences when evidence falls short of criminal thresholds. These measures, though preventive, reveal the difficulty of addressing the issue before it escalates.


Drivers and Mechanisms of Radicalization

Several drivers have emerged as central to the 2023–2024 wave of youth radicalization:

  1. Online Exposure: Extremist content proliferates through platforms such as Telegram, Discord, and niche forums, often disguised as memes or self-help material.

  2. Identity and Alienation: Youth struggling with belonging find purpose within ideological narratives that promise empowerment through destruction or defiance.

  3. Ideological Fluidity: Many young radicals blend ideologies—combining, for instance, misogyny, conspiracy theories, and pseudo-religious justifications—making classification difficult.

  4. Lack of Institutional Capacity: Canadian counter-radicalization programs remain fragmented across federal and provincial levels, with few sustained partnerships between law enforcement, educators, and mental-health providers.

  5. Global Resonance: International extremist groups exploit Western youth through encrypted communications and propaganda videos, customizing narratives to local grievances.

The convergence of these elements forms what analysts describe as “networked radicalization,” where peer groups, influencers, and algorithms jointly reinforce extremist worldviews.


Policy and Law-Enforcement Responses

Canada’s counter-terrorism architecture combines enforcement and prevention. The RCMP leads national investigations through its Federal Policing branch, while CSIS handles intelligence collection. The Canada Centre for Community Engagement and Prevention of Violence funds initiatives through the Community Resilience Fund, supporting local programs aimed at early intervention. However, these efforts often lag behind the pace of online radicalization.

Recent RCMP statements emphasize youth-focused interventions, particularly partnerships with schools and parents to identify behavioral changes. Yet significant obstacles remain: overextended investigators, jurisdictional overlap, and legal constraints surrounding surveillance of minors. Moreover, public discourse around civil liberties complicates the introduction of stronger monitoring mechanisms, even when directed at extremist propaganda.

The legal system has also adapted. The increase in peace bonds—essentially pre-charge supervision agreements—illustrates a preventive but imperfect tool. They provide temporary containment but seldom address underlying psychological or ideological causes. Long-term de-radicalization requires multi-disciplinary engagement, including education, counseling, and digital-literacy programs.


Broader Implications

The rise in youth-linked terrorism charges carries profound implications for Canada’s national identity and public safety. Beyond law enforcement, it raises moral and developmental questions: why are young people, often from stable communities, attracted to violent extremism? The answer appears tied to a loss of social cohesion and the unchecked spread of digital misinformation. If Canada fails to address these root causes, the nation risks normalizing extremist behavior among its youngest citizens.

Globally, the trend aligns with findings from Europol and the United Nations Office on Drugs and Crime, which note increasing youth engagement in online radicalization networks across Western democracies. Canada’s experience thus mirrors a broader transformation in how terrorism incubates—less in physical training camps and more in digital subcultures.


Recommendations

Addressing youth radicalization demands a layered approach. First, education systems must incorporate digital-literacy curricula that help students identify manipulative content and misinformation. Second, community-based mental-health resources should be strengthened to detect and support vulnerable youth before extremist recruiters reach them. Third, technology companies must assume greater responsibility for moderating extremist content, collaborating with law enforcement while maintaining privacy safeguards. Fourth, policy reforms should ensure sustainable funding for prevention programs, matching the scope of the threat. Finally, research institutions must continue studying the evolving typologies of youth extremism to inform data-driven responses.

Each of these measures reflects a recognition that radicalization is not simply a law-enforcement issue—it is a social one rooted in identity, alienation, and the search for belonging.


Conclusion

The 488 percent surge in terrorism-related charges in Canada is a warning sign, not a statistical curiosity. It reveals an emerging generational crisis where ideology, technology, and psychology converge to draw young people toward violence. While the RCMP’s response has demonstrated vigilance, sustainable prevention requires far more than arrests and peace bonds. Canada must invest in its youth—educationally, socially, and emotionally—to prevent the next generation from finding purpose in destruction. Only through comprehensive, community-based engagement can the nation hope to reverse the trajectory of youth radicalization and ensure that future headlines tell a different story.


References

Associated Press. (2024, May 10). Canadian youth facing terrorism charges for alleged plot against Jewish people. AP News.

Government of Canada. (2019). National Strategy on Countering Radicalization to Violence. Public Safety Canada.

Hoffman, B. (2024). Inside terrorism (4th ed.). Columbia University Press.

Llewellyn, C. (2023). The evolution of Canada’s domestic counter-terrorism strategy. Canadian Forces College.

Royal Canadian Mounted Police. (2024). Federal Policing Annual Report 2023–2024. RCMP Communications.

Times of India. (2024, May 11). RCMP claims 488% spike in Canada’s terrorism charges.

United Nations Office on Drugs and Crime. (2023). Preventing youth involvement in violent extremism and terrorism.

Vision of Humanity. (2024). Global terrorism index 2024. Institute for Economics and Peace.


Saturday, October 18, 2025

Inside the Next Wave: What 2026 Holds for America’s Fight Against Terrorism


“The next wave of terrorism won’t come from the desert—it will come from data.”


The Silence Before the Storm

The roar of jetliners and the collapse of the towers that defined a generation’s idea of terrorism are now a quarter century behind us. Yet in 2026, the danger feels both quieter and closer. The new threat hums in the background of our ordinary lives—inside the algorithms that shape opinion, the coins that move unseen across digital ledgers, and the invisible networks that link extremists a continent apart.

This next wave is not a return to 9/11-style spectacle but a mutation: smaller, faster, more adaptive, and more personal. Homeland Security analysts call it “the hybrid era”—where crime, ideology, and technology converge so completely that separating them is like untangling light from heat.

Terrorism is no longer a headline—it’s an atmosphere.


The Shape-Shifting Enemy

America’s counterterrorism machine was built to chase hierarchies: training camps, emirs, and command chains. What confronts us now is an ecosystem.

According to the Homeland Threat Assessment 2025 (Department of Homeland Security [DHS], 2024), domestic violent extremism remains the most persistent and lethal danger inside U.S. borders. Meanwhile, the Annual Threat Assessment (Office of the Director of National Intelligence [ODNI], 2025) warns that global jihadist networks have become franchised micro-movements—from ISIS-K in Central Asia to al-Qaida affiliates spreading across the Sahel. Each is self-financing, self-radicalizing, and digitally fluent.

The distinction between foreign and domestic has eroded. The same encrypted chat app used by an Afghan recruiter is used by an American conspiracy theorist. The same meme that spreads in Nigeria finds a new caption in Nebraska.

Yesterday’s terrorist carried a passport. Tomorrow’s carries a profile.


Cyber: The First Front

The world’s power grids, hospitals, and supply chains now double as potential war zones. In 2026, cyberterrorism has matured from nuisance to strategic weapon.

The Defense Intelligence Agency’s Worldwide Threat Assessment 2025 notes that state-backed hackers from Russia, China, Iran, and North Korea are blurring lines between espionage, crime, and terror support. They rent infrastructure to ideological allies and conceal operations beneath criminal ransomware noise.

AI-driven intrusion software can now map a target’s digital ecosystem, craft personalized spear-phishing lures, and deploy within minutes. The next blackout might not signal an act of war but a profit-sharing venture between criminals and extremists.

Municipal systems and hospitals remain especially vulnerable. In the past year alone, ransomware attacks disrupted emergency services in five states. Analysts warn that the terroristic potential of chaos itself—not just profit—has become a motivating factor. The attackers do not always need to win; they only need to remind us how easily the lights go out.

In the Quiet War, every router is a trench and every password a perimeter.


The Currency of Conflict

If cyber is the bloodstream of modern terrorism, money is still its heart. Yet the heart now beats invisibly.

The Financial Action Task Force (FATF) Comprehensive Update on Terrorist Financing Risks (2025) found that virtual assets have “lowered barriers to entry” for extremists. The same blockchain that democratizes investment also democratizes illicit finance.

North Korean operatives reportedly stole more than $600 million in cryptocurrency during 2024; a portion of those funds likely supported weapons programs and proxy networks (Reuters, 2025). FATF warns of AI-managed laundering—algorithms that shift funds between coins and mixers before investigators can trace them.

Meanwhile, micro-financing—thousands of small donations beneath reporting thresholds—allows sympathizers to funnel capital through charity fronts or crowd-funding platforms. The result is a “trickle-to-torrent” effect that sustains insurgencies without a single blockbuster transfer.

The new terrorist banker isn’t a man in a suit—it’s a line of code.


The Cognitive Battlefield

The third front is inside our heads.

Disinformation, deepfakes, and algorithmic manipulation are no longer sideshows; they are the main theater of psychological warfare. A RAND Corporation study (2025) found that AI-generated propaganda now achieves engagement rates up to 40 percent higher than human-written posts.

Foreign intelligence services exploit domestic divisions, while domestic extremists borrow foreign disinformation techniques. Social media has become both recruitment ground and reality distortion field. The old “propaganda of the deed” has evolved into the “propaganda of the meme.”

During 2025, analysts tracked deepfake videos depicting fabricated police shootings that sparked real-world protests before verification caught up. The goal was not persuasion but polarization—to replace truth with tribal reflex.

The meme is the new missile, and outrage is the fuel.


The Global Hot Zones

The Sahel

Once a cartographic afterthought, West Africa’s Sahel is now the world’s fastest-growing terror front (United Nations Security Council Counter-Terrorism Committee [CTED], 2025). Islamic State in the Greater Sahara and al-Qaida-aligned JNIM exploit collapsing governance, climate stress, and displacement. Their expansion toward coastal states threatens ports, shipping lanes, and Western interests.

The Afghanistan–Pakistan Border

ISIS-K remains the most globally ambitious jihadist group. The UN Secretary-General’s Report on ISIL/Da’esh (2025) describes its “sophisticated propaganda and external operations intent.” Expect continued attempts to inspire or enable lone-actor plots abroad.

Latin America

Criminal-terror hybrids, such as Ecuador’s Los Lobos gang, adopt bombings and assassinations once associated with insurgencies (Associated Press, 2025). When cartels weaponize terror tactics, geography stops being comfort.

The Homeland

Domestically, ideologically fluid extremism is the signature threat. According to DHS (2024), racially motivated and anti-government extremists remain the top killers, but new clusters—eco-radicals, anti-tech saboteurs, and gender-based militants—are emerging. Their unifying feature is self-radicalization through digital echo chambers.

The new map of terrorism isn’t drawn in sand—it’s drawn in bandwidth.


America’s Blind Spots

Despite two decades of counterterror investment, America’s security architecture still carries the DNA of 2001.

Legal frameworks lag behind hybrid realities: the Patriot Act never envisioned cryptocurrencies or AI-generated propaganda. Jurisdictional walls between domestic and foreign intelligence slow information fusion. And the public—exhausted by crises—tunes out warnings until an attack trends.

The Europol TE-SAT 2024 report noted that Europe thwarted 58 terrorist attacks across 14 member states; the United States, by contrast, measures success largely in absence—what didn’t happen. That absence can breed complacency.

Information fatigue is the enemy’s ally. As one counterterror official put it, “We built an army to fight an enemy that now travels at the speed of rumor.”

The danger isn’t surprise—it’s distraction.


Adapting the Arsenal

The next wave demands tools as flexible as the threat.

  1. Data Fusion, Not Hoarding. Intelligence value decays by the hour; cross-agency latency kills context. Real-time fusion between federal, state, and private partners is essential.

  2. Financial Transparency. FinCEN’s Advisory FIN-2025-A001 urges stricter oversight of virtual-asset service providers and shell companies. Implementing beneficial-ownership registries is dull policy—but lethal to terrorists.

  3. Cyber Hygiene at the Bottom of the Market. Most ransomware chaos begins in underfunded local systems. Subsidizing security for hospitals and utilities may prevent the next national emergency.

  4. Counter-Narrative Literacy. Media-literacy curricula and civic education inoculate citizens against manipulation. When people recognize emotional bait, the algorithm loses its teeth.

  5. Community-Level Prevention. Programs modeled on public-health outreach—identifying early behavioral indicators without stigmatization—show promise in reducing domestic radicalization (DHS, 2024).

The strongest firewall is public trust.


What 2026 Could Look Like

Analysts outline several plausible near-term scenarios:

  • Synchronized lone-actor violence—small attacks amplified through live-streaming to create nationwide panic.

  • Ransomware blackouts targeting emergency services during an election cycle.

  • AI-generated “false flag” incidents—fabricated atrocities prompting real-world retaliation.

  • Terrorist use of decentralized autonomous organizations (DAOs) to crowd-fund operations under philanthropic disguise.

  • Regional collapse in the Sahel or Horn of Africa exporting fighters and ideology via migration routes.

Each shares the same DNA: digital agility, psychological shock, and strategic deniability.


The Human Factor

Technology changes the medium; people decide the meaning. Leadership that communicates calmly, transparently, and compassionately after an incident denies terrorists their ultimate goal—fear amplification.

Veterans of counter-insurgency remind us that empathy is a security asset. When citizens feel heard, they are harder to recruit or divide. The ultimate counterterror skill is not codebreaking but community-building.

America’s greatest defense has never been surveillance—it’s solidarity.


The Road Ahead

Terrorism in 2026 will not vanish; it will metastasize. But adaptation is possible. The U.S. has the analytical talent, financial leverage, and technological depth to blunt this next wave—if it recognizes that terrorism is now a systemic rather than an episodic threat.

That recognition begins with language. Words like “war,” “enemy,” and “battlefield” still frame our imagination, but the real fight is for stability in the everyday. The goal is not perpetual mobilization—it is persistent resilience.

Victory in the next wave won’t be declared from a podium. It will be lived quietly in a society that refuses to fracture.


References

Associated Press. (2025, August 22). Islamic State and al-Qaida threat is intense in Africa, with growing risks in Syria, UN experts say. AP News.

Defense Intelligence Agency. (2025). Worldwide Threat Assessment: Statement for the Record to the House Armed Services Committee. Washington, DC: U.S. Department of Defense.

Department of Homeland Security. (2024). Homeland Threat Assessment 2025. Washington, DC: DHS.

Europol. (2024). European Union Terrorism Situation and Trend Report (TE-SAT 2024). The Hague: Europol.

Financial Action Task Force. (2025). Comprehensive Update on Terrorist Financing Risks. Paris: FATF.

Financial Crimes Enforcement Network. (2025). Advisory FIN-2025-A001: ISIS-Related Illicit Financial Activity. Washington, DC: U.S. Department of the Treasury.

Office of the Director of National Intelligence. (2025). Annual Threat Assessment of the U.S. Intelligence Community. Washington, DC: ODNI.

RAND Corporation. (2025). Artificial Intelligence and the Future of Online Propaganda. Santa Monica, CA: RAND Research Report.

Reuters. (2025, September 3). Financial crime watchdog calls for countries to come clean on shell companies. Reuters Business.

United Nations Security Council, Counter-Terrorism Committee Executive Directorate (CTED). (2025). Briefing on the Secretary-General’s Strategic-Level Report on ISIL/Da’esh. New York, NY: United Nations.

Saturday, October 11, 2025

Lone-Wolf Attacks, Online Radicalization, and the Future of Homegrown Terrorism

The Invisible War Next Door

On October 2, 2025, an attacker drove a car into worshippers and stabbed others outside a Manchester synagogue during Yom Kippur services, reportedly pledging allegiance to the Islamic State before being shot by police. Authorities later indicated he had never traveled abroad or met with terrorist operatives—his radicalization occurred entirely online. Such incidents highlight the rise of the “lone-wolf” terrorist: an individual who acts independently of formal networks yet carries global ideological echoes. In the digital age, terrorism no longer requires a chain of command or physical training camps. Instead, radicalization spreads through social media, encrypted apps, and algorithmic echo chambers that can turn alienation into extremism. This essay examines the evolution of terrorism into decentralized, homegrown forms; the mechanisms of online radicalization; the challenges of prevention; and what the future may hold for counterterrorism in a hyperconnected world.


The Evolution of Terrorism: From Networks to Nodes

Terrorism has evolved from coordinated, hierarchical networks to decentralized individual actions. In the early 2000s, groups such as al-Qaeda operated as global franchises with structured leadership and training facilities. Their model emphasized spectacular, large-scale operations that demanded coordination, secrecy, and physical presence (Hoffman, 2017). The emergence of the Islamic State (ISIS) introduced a hybrid model—territorial control in Iraq and Syria combined with a sophisticated online propaganda campaign that reached disaffected individuals worldwide (Byman, 2016). When ISIS lost its territorial caliphate, it pivoted toward what analysts describe as “virtual jihad,” encouraging sympathizers to wage war wherever they lived.

This strategic decentralization turned ideology into a digital virus. The global defeat of centralized terror groups did not extinguish their influence; instead, it fragmented it into thousands of digital “nodes.” Each node—a chatroom, Telegram group, or encrypted server—serves as both a recruiting center and echo chamber. Through these virtual communities, extremist groups continue to spread propaganda, coordinate micro-attacks, and maintain psychological presence despite losing physical ground (Clarke & Pantucci, 2020). The battlefield, once territorial, has become cognitive.


The Digital Radicalization Pipeline

Radicalization in the twenty-first century increasingly occurs through online interactions. The internet’s democratization of information allows extremist ideologies to flourish under the guise of free expression. Algorithms that reward engagement—regardless of moral content—amplify divisive material and guide users toward progressively extremist content (Conway, 2017). The result is a feedback loop: emotional outrage drives clicks, clicks drive exposure, and exposure normalizes extremism.

Modern extremist propaganda is not limited to lengthy manifestos or sermons. It includes memes, gaming aesthetics, and short-form videos that blend humor with hate. These digital artifacts recruit through familiarity, particularly among young, alienated men seeking identity and belonging. Studies show that online radicalization often progresses through stages: exposure to grievances, participation in ideological forums, adoption of extremist narratives, and eventual operational intent (Gill et al., 2017).

Recent examples reinforce this pattern. In 2025, the Manchester synagogue attacker had consumed months of ISIS content and communicated through encrypted apps. Similar cases across Europe and North America show individuals self-initiating plots without external direction, motivated by online propaganda and perceived global injustice (Europol, 2024). The psychological dimension is crucial: loneliness, resentment, and a search for purpose provide fertile ground for extremist recruitment. The internet supplies both validation and instruction.


The Challenge of Prevention

Preventing lone-wolf terrorism presents unique legal, ethical, and technological dilemmas. Law enforcement agencies face the paradox of identifying threats that manifest primarily as private digital behavior. Most lone-wolf attackers display subtle warning signs—isolated comments, symbolic posts, or private manifestos—detected only after violence occurs (Hamm & Spaaij, 2017). Predicting such acts with precision remains nearly impossible without encroaching on civil liberties.

Efforts to enhance digital surveillance raise contentious debates about privacy and state overreach. While some advocate monitoring encrypted channels, others warn that excessive surveillance erodes trust and may inadvertently validate extremist narratives about government oppression. Meanwhile, technology companies are under increasing pressure to regulate extremist content, yet they struggle with the scale and complexity of identifying intent without stifling legitimate expression (Weimann, 2021).

Community-based approaches offer a complementary path. Programs in Germany, the United Kingdom, and Australia focus on early intervention—training educators, parents, and peers to recognize behavioral shifts associated with radicalization. Such initiatives emphasize empathy, mental health, and inclusion rather than punishment. When implemented well, they demonstrate that counterterrorism can occur through social resilience rather than perpetual surveillance.


The Future of Homegrown Terrorism

Looking forward, homegrown terrorism is likely to become more sophisticated, individualized, and technologically adaptive. Artificial intelligence and deepfake technology are already being exploited to generate personalized propaganda and fake leadership messages, blurring the line between authenticity and fabrication (Berger, 2022). Extremist groups increasingly use cryptocurrencies to finance operations and maintain anonymity, while decentralized online platforms make content moderation nearly impossible.

Moreover, ideological boundaries are eroding. Scholars observe “ideological cross-pollination,” where far-right groups adopt jihadist propaganda tactics and vice versa (Clarke & Pantucci, 2020). The result is a hybrid threat landscape defined less by ideology and more by shared grievance, nihilism, and performative violence. The modern terrorist is less a soldier of a cause than a seeker of notoriety—amplified by social media’s promise of instant visibility.

The next generation of counterterrorism must therefore adapt to psychological and digital realities. Traditional methods—border control, military strikes, and surveillance—are ill-suited to combating ideologies that exist in cloud storage and human emotion. Prevention will depend on digital literacy, mental health outreach, and cross-platform cooperation among governments, educators, and technology firms.


Conclusion — Winning the Invisible Battle

The war on terror has migrated from deserts and mountains to browsers and bedrooms. Today’s terrorist needs no passport, no orders, and no accomplices—only a Wi-Fi signal and a grievance amplified by algorithms. Lone-wolf terrorism represents the most unpredictable and personal form of modern violence, one that challenges the foundations of both security and democracy. To win this invisible battle, societies must think beyond policing and embrace prevention rooted in empathy, education, and early intervention. Technology created the terrain of modern radicalization; human connection must reclaim it. As one analyst observed, “the modern terrorist doesn’t need to cross a border—only a broadband threshold” (Hoffman, 2017, p. 94).


References

Berger, J. M. (2022). Extremist propaganda in the age of artificial intelligence. Brookings Institution Press.

Byman, D. (2016). Al Qaeda, the Islamic State, and the global jihadist movement. Oxford University Press.

Clarke, C. P., & Pantucci, R. (2020). After the caliphate: The Islamic State and the future terrorist diaspora. Polity Press.

Conway, M. (2017). Determining the role of the internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict & Terrorism, 40(1), 77–98.

Europol. (2024). European Union terrorism situation and trend report (TE-SAT 2024). European Union Agency for Law Enforcement Cooperation.

Gill, P., Corner, E., Thornton, A., & Conway, M. (2017). What are the roles of the internet in terrorism? Measuring online behaviors of convicted UK terrorists. VOX-Pol Network of Excellence Working Paper Series, 2(1), 1–26.

Hamm, M. S., & Spaaij, R. (2017). The age of lone wolf terrorism. Columbia University Press.

Hoffman, B. (2017). Inside terrorism (3rd ed.). Columbia University Press.

Weimann, G. (2021). Terrorism in cyberspace: The next generation. Columbia University Press.


Tuesday, October 07, 2025

Predicting the Descent into Extremism and Terrorism: Promise, Peril, and Policy

Radicalization used to be slow—letters, meetings, sermons, pamphlets. Today, it can accelerate in hours. Platforms amplify grievance, connect would-be adherents, and wrap ideology in meme-speed narratives. Intelligence and law-enforcement agencies face a basic asymmetry: the volume of online speech is effectively infinite; human analysts are not. This gap has given rise to predictive extremism detection—a family of methods that use natural-language processing (NLP) and statistical tracking to infer whether a person’s public speech is drifting toward violent extremism.

A recent research contribution by Lane, Holmes, Taylor, State-Davey, and Wragge (2025) shows how this can work in practice. Their approach encodes written statements as vectors, classifies them (e.g., “centrist,” “extremist,” or “terrorist”), and tracks each speaker’s trajectory over time—flagging gradual drifts or sharp jumps that may presage violence. While early, the results suggest real potential for early warning. They also spotlight a minefield of risks: false positives, speech chilling, overbroad government use, and algorithmic bias.

This essay explains, in public-facing terms, what these systems do, where they help, where they can harm, and how policymakers can harness benefits without undermining civil liberties. It offers a lightly technical tour for non-technical leaders, grounded in current research and threat reporting. 


What the technology does—in plain English

1) Turning words into “coordinates”

Modern NLP models convert sentences into embeddings—numerical vectors that capture semantic meaning. Think of each sentence as a dot in a high-dimensional map where nearby dots mean similar ideas or tones. One widely used approach is the Universal Sentence Encoder (USE), introduced in 2018, which outputs a 512-number vector per sentence and transfers well to many classification tasks (Cer et al., 2018).
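A minimal sketch of the "words into coordinates" idea: the USE model itself requires a large pretrained network, so this example substitutes a simple TF-IDF vectorizer from scikit-learn as a lightweight stand-in. The sentences are invented placeholders; the point is only that each sentence becomes one row of numbers.

```python
# Sketch: turning sentences into numeric vectors ("coordinates").
# TF-IDF is a stand-in here for a sentence encoder like USE, which would
# instead produce a dense 512-dimensional vector per sentence.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "We should vote and organize peacefully.",
    "Our movement must organize and win elections.",
    "Violence is the only answer left to us.",
]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(sentences)  # one row per sentence

print(vectors.shape)  # (3, vocabulary size)
```

Each row is that sentence's position on the semantic "map"; downstream classifiers and trackers operate on these rows, not on the raw text.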

2) Classifying rhetoric

Once you can place statements on that semantic map, you can train a classifier to distinguish categories. Lane et al. use support-vector machines (SVMs)—a standard technique—to separate regions associated with ordinary political discourse, extremist endorsement, and explicit terrorist advocacy or justification. Trained on labeled examples, such models can identify patterns that are statistically associated with each category. In their experiments, detecting explicitly terrorist rhetoric was highly accurate; detecting early extremism—a subtler signal—was harder but still promising.
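The classification step can be sketched as follows, assuming a small hand-labeled corpus. All texts and labels below are invented for illustration, and TF-IDF again stands in for real sentence embeddings; this is not the authors' dataset or exact pipeline.

```python
# Sketch: a linear SVM drawing boundaries between rhetoric categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "We should debate this policy and vote in the election.",
    "Compromise across parties is how reform happens.",
    "Our enemies in government have betrayed the true people.",
    "The corrupt elite must be driven out by any means.",
    "The attack was justified and more violence must follow.",
    "Praise to those who carried out the bombing.",
]
train_labels = ["centrist", "centrist", "extremist", "extremist",
                "terrorist", "terrorist"]

# The SVM learns a separating surface in the vector space.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

pred = model.predict(["More bombing and violence must follow."])[0]
print(pred)
```

In a real deployment the training set would be far larger, labeled by multiple annotators, and the model's confidence near category boundaries would be surfaced to analysts rather than hidden behind a single predicted label.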

3) Tracking trajectories over time

A single statement can be an outlier; what matters is movement. The research uses a tracker (conceptually similar to a Kalman filter) to smooth noisy observations and estimate a person’s latent “state of mind” as it evolves. That moving estimate lets analysts see whether a speaker is inching toward, or bouncing into, more dangerous rhetorical regions, and whether the trend is accelerating. 
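A one-dimensional sketch shows why tracking matters: a simple Kalman-style filter damps a one-off outburst while still following a sustained drift. The scores and noise parameters below are invented; the paper's tracker is conceptually similar but not necessarily this exact filter.

```python
# Sketch: smoothing noisy per-statement "scores" into an estimate of the
# underlying state, so single outliers do not dominate the trajectory.
def kalman_smooth(measurements, q=0.01, r=0.5):
    # q: how fast the true state is allowed to change; r: measurement noise
    x, p = measurements[0], 1.0          # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                           # predict: uncertainty grows
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update toward the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# One spike (0.9) amid a slow upward drift.
scores = [0.10, 0.15, 0.90, 0.20, 0.25, 0.30, 0.60, 0.65, 0.70]
smooth = kalman_smooth(scores)
```

The spike at the third measurement pulls the estimate up only partway; a sustained run of high scores, by contrast, moves the estimate steadily, which is exactly the "drift versus outburst" distinction analysts need.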

4) Visualizing change for humans

The final ingredient is visual analytics. By projecting the high-dimensional map into two dimensions, analysts can view a person’s path over days or months, and compare it to group averages, leaders, or events. The display itself is not the intelligence; the trend—especially a sustained drift toward justification of violence—is.
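The projection step can be sketched with principal component analysis (PCA), one common way to flatten a high-dimensional path into two displayable dimensions. The synthetic vectors below simulate a speaker whose embeddings drift slowly along one direction; real systems might use other projection methods.

```python
# Sketch: projecting a sequence of high-dimensional sentence vectors to 2-D
# so a human can view the trajectory over time.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
vectors = rng.normal(0.0, 0.1, size=(20, 512))   # 20 statements, 512 dims
vectors[:, 0] += 0.05 * np.arange(20)            # slow drift along one axis

path_2d = PCA(n_components=2).fit_transform(vectors)
print(path_2d.shape)  # (20, 2): one 2-D point per statement, in time order
```

Plotting `path_2d` in order gives the visual "path" described above; the drift injected along one axis shows up as sustained movement in the display, while the random noise appears as jitter around it.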


Why this matters now

Threat reporting on both sides of the Atlantic underscores an evolving landscape. In Europe, Europol’s most recent EU Terrorism Situation and Trend Report (TE-SAT 2025) documents dozens of completed, foiled, or failed terrorist attacks across member states in 2024, alongside persistent online propaganda ecosystems. In the United States, the Homeland Threat Assessment 2025 emphasizes that domestic violent extremists and foreign terrorist organizations continue to exploit social platforms to recruit, radicalize, and call for violence. These reports do not endorse any particular predictive system, but they frame the scale and velocity of the problem such systems attempt to address. 


Where predictive tools can help

  1. Early, non-coercive intervention
    If a credible trajectory is detected early—before criminal conduct—schools, community organizations, or public-health-style programs can attempt soft interventions (counseling, exit ramps, counter-narratives). That is both ethically preferable and practically cheaper than post-attack responses.

  2. Analyst triage at scale
    No agency can read everything. A reliable model can prioritize review of accounts showing concerning trends while allowing most speech to pass untouched. The tool does not “decide” anything; it queues human review.

  3. Group-level insight
    Radicalization is social. Tracking vectors over time can reveal influence patterns—for example, when followers’ rhetoric predictably drifts after a propagandist releases new content. That enables targeted counter-messaging and community engagement rather than mass surveillance.

  4. Program evaluation
    When governments fund prevention initiatives, they need metrics beyond raw arrest counts. Aggregate trajectory measures can help evaluate whether a program correlates with de-escalation in community rhetoric.

  5. Academic clarity
    Scholars have long debated the internet’s causal role in radicalization. Reviews and meta-analyses show mixed but significant links between online ecosystems and extremist offending. Better measurement—trajectory-based rather than snapshot-based—can sharpen that literature. 


Technical realities (and limits) policymakers should understand

  1. Good at the obvious; less certain at the subtle
    Lane et al. report very strong performance when detecting overtly terrorist rhetoric in their dataset, but early extremism is fuzzier. That is intuitive: explicit praise of terrorist acts has clear linguistic markers; nascent radicalization often mimics heated but lawful political speech. Expect false positives near the boundary and false negatives where coded language or irony is used.

  2. Models inherit bias from their inputs
    Embeddings trained on large corpora can encode the biases present in those corpora. Even when technical teams test for bias, deployment to new communities, languages, or dialects can surface unexpected disparities in error rates and flagging patterns. The USE paper itself examined bias metrics; those assessments must be continuous, not one-off.

  3. Domain shift is the norm
    Extremist rhetoric evolves. Slogans mutate; euphemisms replace banned words; community norms shift. Models degrade unless they are retrained or adapted with fresh, representative data—ideally with diverse annotators and public documentation of changes.

  4. Labels are political
    Who decides what counts as “extremism” or “terrorism”? Legal definitions vary by jurisdiction and can shift with administrations. Systems that bake those labels into training data risk hard-coding political choices into code. This is not a reason to avoid modeling; it is a reason to separate technical work from policy authority and to publish the mapping between legal definitions and model classes.

  5. Ground truth is hard
    Most research, including Lane et al., relies on open-source text (e.g., speeches, posts, quotes) and expert labeling. But radicalization is a process, not a single post. To evaluate whether a system truly predicts behavior, researchers need carefully governed access to longitudinal data (with strong privacy controls) and agreed proxy endpoints (e.g., platform bans, arrests, or verified participation in violent groups). 


The civil-liberties red lines

Civil-society groups have warned for years that predictive technologies can amplify injustice and chill lawful speech. In policing, the ACLU and others have documented how prediction built on biased data reproduces bias; similar logics apply to speech-based systems. International media-freedom bodies have likewise issued guidance: if states use AI to moderate or surface content, they must protect freedom of expression, ensure transparency, and provide avenues for redress. For predictive extremism detection to be legitimate in a democracy, these critiques are not adversarial “gotchas”—they are design requirements.


Guardrails that make the difference

1) Keep humans in the loop—by statute, not just policy.
Algorithms should flag, never decide. Any action that affects a person’s rights (from investigative targeting to social-service outreach) should require a documented human review with accountability.

2) Narrow purpose and separation of powers.
Specify in law what the models can be used for (e.g., triage for analyst review; not for automated detention or immigration decisions), which agencies may use them, and how judiciary or independent bodies can check misuse. Purpose limitation curbs function creep.

3) Transparency and independent audits.
Require public model cards (what data, with what bias tests, for what use), an annual public report on performance and complaints, and third-party audits with access to de-identified production data. If the law already provides oversight channels (e.g., specialized courts or inspectors general), extend their remit to algorithmic systems.

4) Due process and redress.
If a model contributes to a decision that burdens someone, that person must have an explainable basis to contest it. Even when operational security limits disclosure, policymakers can mandate structured summaries of the reasons behind flags.

5) Data hygiene and minimization.
Do not build massive shadow dossiers. Collect the minimum public data necessary; avoid scraping private data without warrants; delete data when no longer needed; and encrypt everything. Clear deletion schedules should be auditable.

6) Bias testing and community impact assessments.
Before deployment—and regularly thereafter—test for differential error rates across protected classes, dialects, and political viewpoints. Conduct community impact assessments (analogous to environmental impact statements), especially where systems may expose marginalized groups to disproportionate scrutiny.

7) Clear thresholds and calibration for action.
A model’s raw score is not a decision. Calibrate thresholds with policymakers and community partners: a low-score drift might trigger soft outreach; a sustained, high-confidence move into explicit violent advocacy might warrant analyst escalation. Put those thresholds in a public policy; do not leave them to vendor defaults.
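The tiered-response idea can be made concrete as a threshold table set by policy rather than by vendor defaults. The cutoffs and response names below are illustrative assumptions, not recommendations.

```python
# Sketch: mapping a calibrated score and a "sustained trend" flag to tiered
# responses. Thresholds here are invented placeholders; in practice they
# would be set in a public policy with community input.
def triage(score: float, sustained: bool) -> str:
    if score >= 0.9 and sustained:
        return "analyst escalation"          # sustained, explicit advocacy
    if score >= 0.6:
        return "human review queue"          # concerning; a person decides
    if score >= 0.3 and sustained:
        return "community outreach referral" # soft, non-coercive intervention
    return "no action"                       # the default for most speech

print(triage(0.95, True))   # analyst escalation
print(triage(0.10, False))  # no action
```

Note that every tier above "no action" routes to a human or a community resource; nothing in the table triggers a punitive action automatically, consistent with the humans-in-the-loop guardrail above.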


How this fits with current threat reporting

Public threat assessments increasingly emphasize online ecosystems as accelerants. TE-SAT 2025 catalogs persistent propaganda channels associated with jihadist, right-wing, and other ideologies; DHS’s Homeland Threat Assessment details how domestic and foreign actors exploit open platforms and fringe boards alike. Predictive extremism systems are not panaceas, but they address a specific problem implied in these reports: signal extraction from torrents of content. Good governance lets agencies sift without surveilling everyone; bad governance invites overreach and backlash that ultimately reduces cooperation and safety. 


A short technical appendix (for non-technical leaders)

  • Embeddings: Models like USE translate each sentence into a vector of numbers. Similar sentences have similar vectors. The math (cosine similarity, margins) lets algorithms tell “how close” two statements are in meaning. 

  • Classifiers: An SVM draws boundaries in that vector space. Training gives it examples of each class; the model learns a surface that best separates those examples.

  • Tracking: A tracker treats each new sentence as a noisy measurement of an underlying state (the person’s current rhetorical posture). It updates the state over time, dampening overreactions to one-off outbursts and highlighting sustained drifts.

  • Evaluation: For tasks with clear language (e.g., praising terrorist attacks), models often achieve high accuracy on test sets. For subtle boundary cases—sarcasm, dog-whistles—the uncertainty is greater. Proper deployment requires confidence scores and calibration to avoid over-triggering. 
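The "how close in meaning" measure named in the embeddings bullet, cosine similarity, is simple enough to show directly. The two toy vectors are placeholders for real sentence embeddings.

```python
# Sketch: cosine similarity between two vectors. Identical directions give
# 1.0; unrelated (orthogonal) directions give 0.0.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

same = cosine([1.0, 0.0], [1.0, 0.0])        # 1.0: same direction
unrelated = cosine([1.0, 0.0], [0.0, 1.0])   # 0.0: orthogonal
```

With 512-dimensional sentence vectors the arithmetic is identical, just over more coordinates; this single number is what lets a classifier or tracker treat "nearby" statements as similar in meaning.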


Responsible public-sector uses (and non-uses)

Appropriate uses

  • Content triage for human review in open-source intelligence units.

  • Program evaluation to see whether prevention efforts correlate with de-escalation in aggregate rhetoric.

  • Public-health style referral to community resources where lawful and transparent.

Out-of-bounds uses

  • Automated punitive actions (e.g., arrests, detention, immigration status changes) triggered by a score.

  • Secret blacklists without notice, appeal, or periodic review.

  • Generalized mass surveillance—indiscriminate scraping of private communications or bulk collection without statutory authorization and court oversight.

These lines are not abstract. Human-rights guidance stresses that any AI system touching speech must be coupled with freedom-of-expression safeguards and narrow proportionality tests (OSCE, 2022).


Research and policy to invest in now

  1. Bilingual and dialect-fair models.
    Radicalization is multilingual. Fund research on embeddings and classifiers that perform evenly across languages and dialects—and mandate bias testing accordingly.

  2. Open datasets with ethical governance.
    Create de-identified, governed corpora for research with transparent labeling guidelines, community oversight, and strict privacy rules. This avoids dependence on opaque, vendor-owned datasets.

  3. Independent testbeds and red-team exercises.
    Standing testbeds—jointly run by civil society, academia, and government—can evaluate claims before public money is spent. Fund red-teams to probe for failure modes and disparate impact.

  4. Outcome-based metrics.
    Shift from “did the model flag something?” to “did flagged trajectories correlate with measurable prevention (e.g., engagement that reduces risk) without chilling lawful speech?” That requires closer collaboration between security agencies, social-science researchers, and communities.

  5. Clearer legal definitions and sunset clauses.
    Because labels like “extremism” are politically volatile, tie deployments to codified definitions, require sunset clauses, and force periodic legislative reconsideration informed by independent audits.


Conclusion: Prevention with restraint

Predictive extremism detection speaks to a real need: to surface faint signals of danger amid overwhelming noise. The core technical ideas—embedding language, classifying rhetoric, tracking trajectories—are not science fiction; they are here, and the basic evidence shows promise. At the same time, history warns that predictive tools can drift from prevention toward unaccountable surveillance, especially when definitions blur and oversight lags.

For policymakers, the mandate is not to choose between safety and liberty; it is to engineer both. That means guarding purpose, keeping humans in the loop, publishing what the models do and don’t do, auditing impacts, and measuring success by de-escalation, not merely by flags. Done right, these systems become modest, transparent instruments that help communities intervene earlier and more humanely. Done wrong, they become blunt tools that erode trust and, paradoxically, make prevention harder.

Safety is not a switch; it’s a system. If we’re going to predict, we must also protect—the public, the targets of algorithmic error, and the hard-won freedoms that define the societies we aim to keep safe.


References

Binder, J. F. (2022). Terrorism and the Internet: How dangerous is online radicalization? Frontiers in Psychology, 13, 997390.

Cer, D., Yang, Y., Kong, S., Hua, N., Limtiaco, N., St. John, R., Constant, N., Guajardo-Céspedes, M., Yuan, S., Tar, C., Sung, Y.-H., Strope, B., & Kurzweil, R. (2018). Universal Sentence Encoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 169–174). Association for Computational Linguistics.

Department of Homeland Security. (2024). Homeland Threat Assessment 2025. Office of Intelligence and Analysis.

Europol. (2025). European Union Terrorism Situation and Trend Report 2025 (EU TE-SAT 2025). Europol Public Information.

Federal Bureau of Investigation & Department of Homeland Security. (2021). Strategic Intelligence Assessment and Data on Domestic Terrorism. U.S. Government.

Lane, R. O., Holmes, W. J., Taylor, C. J., State-Davey, H. M., & Wragge, A. J. (2025). Predicting the descent into extremism and terrorism. arXiv preprint.

OSCE Representative on Freedom of the Media. (2022). Spotlight on Artificial Intelligence and Freedom of Expression. Organization for Security and Co-operation in Europe.


Thursday, October 02, 2025

Iraq Transition

Statement by Chief Pentagon Spokesman Sean Parnell on Iraq transition.

In accordance with the President's guidance and in alignment with the U.S.-Iraq Higher Military Commission and the joint statement issued on Sept. 27, 2024, the United States and Coalition partners will reduce its military mission in Iraq. This reduction reflects our combined success in fighting ISIS and marks an effort to transition to a lasting U.S.-Iraq security partnership in accordance with U.S. national interests, the Iraqi Constitution, and the U.S.-Iraq Strategic Framework Agreement. This partnership will support U.S. and Iraqi security and strengthens Iraq's ability to realize economic development, foreign investment, and regional leadership. The U.S. Government will continue close coordination with the Government of Iraq and Coalition members to ensure a responsible transition.