
How OpenHorizon Vets Its Sources


 

Following our previous article on disinformation and influence operations, this edition takes a closer look at how OpenHorizon selects and evaluates sources as part of its threat intelligence production. 

In an era marked by information overload, algorithmic echo chambers, and deliberate disinformation campaigns, the credibility and reliability of intelligence have never been more critical. As highlighted in our previous piece, threat actors—from state-backed propaganda machines to decentralized online ecosystems—are increasingly skilled at manipulating narratives, obscuring facts, and sowing confusion. This creates a core challenge for intelligence professionals: collecting vast amounts of data while ensuring that what we gather is both trustworthy and actionable. 

At OpenHorizon, systematic source selection and evaluation are foundational to our threat intelligence process. We distinguish clearly between who provides the information (source reliability) and what the information says (information credibility). By applying standardized evaluation frameworks—such as the NATO A–F reliability rating for sources and credibility scores for extracted information—we reduce the risk of distortion, avoid amplifying falsehoods, and reinforce the integrity of our intelligence. 

In short, intelligence is only as good as the sources behind it. In today’s complex and contested information environment, careful vetting is not just best practice—it is a strategic necessity. 

 

Selection and Evaluation of Sources and Information 

At OpenHorizon, we begin our intelligence cycle with Intelligence Requirements—high-level questions that guide our selection of Intelligence Sources, such as media outlets, academic institutions, cybersecurity firms, think tanks, and individual experts. 

The next step involves identifying and collecting specific Data and Information Sources—such as articles, reports, or datasets—from these Intelligence Sources. Each is evaluated through a two-part framework: (1) reliability of the Intelligence Source, and (2) credibility of the extracted information or Essential Elements of Information (EEIs). 

 

The Reliability of Intelligence Sources 

Each Intelligence Source—defined as the publisher of data and information—is scored on the NATO A–F reliability scale. The rating reflects the following core factors: 

  1. Source Identity
  • What type of organization or individual is it? (e.g., established outlet or fake news generator?) 
  • Is the source authentic, or is it a proxy for propaganda or disinformation? 
  2. Track Record
  • Does the source have a history of accurate and verifiable reporting? 
  • Has it been linked to bias, misinformation, or manipulation? 
  3. Independence and Objectivity
  • Is the source politically, financially, or ideologically neutral? 
  • Does it exhibit consistent, balanced reporting? 
  4. Expertise and Specialization
  • Does the source demonstrate recognized domain-specific expertise? 
  5. Methodological Transparency
  • Does it disclose data collection methods and sources? 
  6. Reputation and Professional Standing
  • Is the source cited by reputable institutions, experts, or analysts? 
  7. Quality Controls
  • Are there editorial processes, ethical guidelines, and peer review mechanisms in place? 


Table 1: NATO A–F source reliability scale (A = completely reliable, B = usually reliable, C = fairly reliable, D = not usually reliable, E = unreliable, F = reliability cannot be judged).

Based on these criteria, OpenHorizon assigns each source a reliability rating (A–F) in accordance with the NATO standard as shown in Table 1. These scores are periodically reviewed and updated as new information becomes available. Thresholds for scoring are based on cumulative evaluation across the above factors, with emphasis placed on source authenticity, consistency, and transparency. 
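
To illustrate how a cumulative evaluation across these factors can translate into a letter rating, the sketch below (in Python) scores each factor between 0 and 1, weights authenticity, track record, and transparency more heavily, and maps the weighted total to A–E, reserving F for sources about which too little is known. The weights, thresholds, and names are illustrative assumptions rather than our production scoring logic.

# Illustrative sketch (not production logic): score each factor in [0, 1],
# weight the emphasized factors more heavily, and map the weighted total
# to a NATO-style letter rating.

from typing import Mapping, Optional

# Hypothetical weights reflecting the emphasis described above.
WEIGHTS = {
    "identity": 2.0,
    "track_record": 2.0,
    "methodological_transparency": 1.5,
    "independence": 1.0,
    "expertise": 1.0,
    "reputation": 1.0,
    "quality_controls": 1.0,
}

def reliability_rating(scores: Mapping[str, Optional[float]]) -> str:
    """scores maps factor name -> value in [0, 1], or None if unknown."""
    known = {f: s for f, s in scores.items() if s is not None}
    if len(known) < 4:  # illustrative cut-off: too little evidence to judge
        return "F"      # reliability cannot be judged
    total = sum(WEIGHTS[f] * s for f, s in known.items())
    weight = sum(WEIGHTS[f] for f in known)
    normalized = total / weight  # weight-normalized cumulative score in [0, 1]
    if normalized >= 0.90:
        return "A"  # completely reliable
    if normalized >= 0.75:
        return "B"  # usually reliable
    if normalized >= 0.55:
        return "C"  # fairly reliable
    if normalized >= 0.35:
        return "D"  # not usually reliable
    return "E"      # unreliable

# Example: an established outlet with a strong track record but opaque methods.
print(reliability_rating({
    "identity": 0.9, "track_record": 0.85, "independence": 0.7,
    "expertise": 0.8, "methodological_transparency": 0.4,
    "reputation": 0.8, "quality_controls": 0.75,
}))  # -> "B" under these illustrative thresholds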


The Credibility of Data and Information 

Once a Data and Information Source has been collected, we extract Essential Elements of Information (EEIs) using dedicated AI models. Each EEI is first extracted in contextualized form, capturing both the fact and the surrounding context. For example: 

"In 2023, Western intelligence agencies and NATO estimated that threat actor X had an annual revenue of $10 million from cryptocurrency fraud, which it used to finance its espionage mission against critical infrastructure in Europe. This year, they estimate that this number has doubled." 

From this, we extract the concrete EEI: “$10 million (2023), $20 million (2024 estimate)”. We keep the contextualized form alongside the concrete values, because the meaning of a piece of information is often inseparable from its context. 
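
One way to make this pairing concrete is to store the contextualized form alongside the extracted values, as in the minimal sketch below; the field names and source identifier are hypothetical.

from dataclasses import dataclass

@dataclass
class QuantitativeEEI:
    contextualized: str        # the fact together with its surrounding context
    values: dict               # concrete values, keyed by year
    unit: str
    source_id: str             # identifier of the Intelligence Source it came from
    source_reliability: str    # NATO A-F rating of that source

eei = QuantitativeEEI(
    contextualized=(
        "In 2023, Western intelligence agencies and NATO estimated that threat actor X "
        "had an annual revenue of $10 million from cryptocurrency fraud... This year, "
        "they estimate that this number has doubled."
    ),
    values={2023: 10_000_000, 2024: 20_000_000},
    unit="USD per year",
    source_id="example-report-001",   # hypothetical identifier
    source_reliability="B",
)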

 

Credibility Assessment of EEIs 

EEIs are then assessed for credibility. The method depends on the type of information: 

  • Quantitative EEIs (e.g., number of personnel or revenue): Credibility is expressed as a statistical distribution with the extracted value as the mean and a standard deviation reflecting the uncertainty. 
  • Qualitative EEIs (e.g., intentions, motivations): Credibility is rated using the NATO 1–6 Information Credibility Scale: 

NATO 1–6 information credibility scale (1 = confirmed by other sources, 2 = probably true, 3 = possibly true, 4 = doubtful, 5 = improbable, 6 = truth cannot be judged).
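
In code, these two representations might look like the following sketch; the scale labels follow the NATO standard, while the 25% relative uncertainty attached to the quantitative example is an assumption chosen purely for illustration.

from enum import IntEnum
from scipy.stats import norm

class InformationCredibility(IntEnum):
    CONFIRMED = 1          # confirmed by other, independent sources
    PROBABLY_TRUE = 2
    POSSIBLY_TRUE = 3
    DOUBTFUL = 4
    IMPROBABLE = 5
    CANNOT_BE_JUDGED = 6   # truth cannot be judged

# Quantitative EEI: "$20 million (2024 estimate)" with an assumed 25% relative
# uncertainty -> mean of 20e6 and a standard deviation of 5e6.
revenue_2024 = norm(loc=20_000_000, scale=5_000_000)

# Qualitative EEI: e.g. the stated intent to target European critical infrastructure.
intent_credibility = InformationCredibility.PROBABLY_TRUE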

 

Credibility is determined by: 

  1. Source Reliability
  • Higher source reliability (A/B) leads to higher a priori credibility. 
  2. Verifiability and Plausibility
  • Can the EEI be verified through observation or other data? 
  • Does it align with existing estimates or knowledge? 
  3. Direct vs. Indirect Access
  • Was the information likely acquired firsthand, or is it hearsay? 
  4. Specificity and Consistency
  • Is the EEI clear, internally coherent, and detailed? 
  • Does it contain vague or speculative language? 

Discrepancies between an EEI and current estimates result in lower credibility scores or wider standard deviations, as the information is treated with greater caution during subsequent estimation and Bayesian updating. 
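
The sketch below illustrates this effect with a simple conjugate Gaussian update: the same $20 million EEI pulls a hypothetical prior estimate strongly when its standard deviation is narrow, and only modestly when low credibility widens it. The prior figures are assumptions chosen for illustration.

def gaussian_update(prior_mean, prior_sd, obs_mean, obs_sd):
    """Posterior mean and standard deviation after one Gaussian observation."""
    prior_prec = 1.0 / prior_sd ** 2
    obs_prec = 1.0 / obs_sd ** 2
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_mean)
    return post_mean, post_var ** 0.5

# Hypothetical current estimate of threat actor X's 2024 revenue: $15M +/- $4M.
prior_mean, prior_sd = 15e6, 4e6

# The same $20M EEI, once with a narrow and once with a wide standard deviation.
print(gaussian_update(prior_mean, prior_sd, obs_mean=20e6, obs_sd=2e6))   # ~$19M: strong pull
print(gaussian_update(prior_mean, prior_sd, obs_mean=20e6, obs_sd=10e6))  # ~$15.7M: modest shift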

 

Mitigating Subjectivity and Bias 

While our scoring frameworks are systematic, some level of subjectivity is inevitable. To mitigate this: 

  • We use clearly defined evaluation criteria. 
  • Reliability and credibility assessments are logged and subject to internal peer review. 
  • AI tools aid in identifying patterns and inconsistencies across sources and EEIs. 

These measures help ensure that biases are minimized and that assessments remain consistent, transparent, and defensible. 

 

Conclusion 

In today’s fast-moving information environment, where disinformation and manipulation are increasingly used as tools of influence, the ability to distinguish trustworthy signals from noise is a competitive advantage—and a strategic imperative. At OpenHorizon, our commitment to systematic source vetting and credibility scoring helps ensure that our threat intelligence remains rigorous, relevant, and resilient. 

By treating source evaluation not as a background task but as a foundational intelligence function, we strengthen both the integrity and the utility of every insight we deliver to our clients.