Technology Radar Methodology — How We Score 310+ Tools Weekly

Full documentation of the WTF Technology Radar scoring algorithm: data sources, normalization, weighting, EWMA smoothing, movement classification with hysteresis, and known limitations.


This page documents exactly how the WTF Technology Radar works. Every score is computed by a deterministic algorithm — no manual overrides, no editorial bias. We publish this methodology in full because we believe transparency is what separates data-driven analysis from marketing.

Data Sources

We score each tool using four independent data sources, collected weekly by our automated pipeline:

1. Google Trends (Weight: 25%)

Search interest data from Google Trends via the DataForSEO API. For each tool, we track:

  • Current interest level (0-100 scale from Google)
  • Year-over-year change — is interest growing or declining compared to 12 months ago?
  • Month-over-month change — short-term momentum signal
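Both deltas reduce to a simple percent-change computation; a minimal sketch (the zero-history guard is our assumption, not part of the published methodology):

```python
def pct_change(current: float, past: float) -> float:
    """Percent change between two Google Trends interest readings."""
    if past == 0:
        return 0.0  # no prior interest: treat change as flat rather than dividing by zero
    return 100.0 * (current - past) / past

# Year-over-year: compare this week's interest with the reading 52 weeks ago.
yoy = pct_change(current=60, past=50)  # +20.0
```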

This signal captures public awareness and mindshare. A tool with rising Google Trends interest is entering more conversations, blog posts, and developer searches.

2. GitHub Activity (Weight: 25%)

For open-source tools with a GitHub repository, we track:

  • Total stars — overall popularity (log-normalized to prevent mega-repos from dominating)
  • Stars growth — new stars gained since the last snapshot (momentum signal)
  • Commit frequency — commits in the last 30 days (active development)
  • Active contributors — unique contributors in the last 30 days (community health)

GitHub signals are only available for open-source tools. Proprietary tools (Salesforce, SAP, etc.) receive a null GitHub score, and their composite score is re-normalized across the remaining sources.

3. Expert Network Signal (Weight: 30%)

This is the most distinctive signal and the radar's key differentiator. Through our advisory network spanning Tegus/AlphaSense, Office Hours, Third Bridge, Arbolus, Capvision, and Guidepoint, we participate in hundreds of technology advisory calls annually for PE/VC firms and strategy consultants.

The radar counts aggregated, anonymized mentions of each tool in expert network call scheduling and logistics emails over a rolling window. Higher mention counts indicate that investors and strategic buyers are actively evaluating a technology, a leading indicator that often precedes public adoption trends by 6-12 months.

Privacy note: Only aggregated mention counts and network names are stored. No email content, subjects, senders, or confidential information is stored or published.

4. Search Volume (Weight: 20%)

Monthly search volume and keyword difficulty from DataForSEO:

  • Search volume — monthly searches for the tool name (log-normalized)
  • Keyword difficulty — how established the tool is in search (0-100, higher = more established)

Scoring Algorithm

Normalization

Raw metrics have wildly different scales (GitHub stars range from 0 to 500K+, search volume from 0 to 1M+). Before combining them, each metric is normalized to a 0-100 scale:

  • Log normalization for heavy-tailed distributions (stars, search volume, commits) — prevents mega-repos from dominating
  • Min-max normalization for bounded metrics (Google Trends interest, keyword difficulty)
  • Percentage growth mapped linearly within expected ranges

Composite Score Formula

The composite trend score is a weighted average of the four component scores:

trend_score = (trends x 0.25) + (search x 0.20) + (github x 0.25) + (expert x 0.30)

When a source is unavailable (e.g., no GitHub repo for a proprietary tool), its weight is redistributed proportionally to the remaining sources. This ensures proprietary and open-source tools are scored on a comparable scale.
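A sketch of the weighted average with proportional redistribution, assuming a missing source is represented as `None`:

```python
WEIGHTS = {"trends": 0.25, "search": 0.20, "github": 0.25, "expert": 0.30}

def composite_score(scores: dict) -> float:
    """Weighted average of 0-100 component scores. A missing source (None)
    drops out, and the remaining weights are rescaled to sum to 1."""
    present = {k: w for k, w in WEIGHTS.items() if scores.get(k) is not None}
    total = sum(present.values())
    return sum(scores[k] * w for k, w in present.items()) / total
```

For a proprietary tool with no GitHub repo, `composite_score({"trends": 80, "search": 60, "github": None, "expert": 90})` rescales the three remaining weights by 1/0.75, so the expert signal effectively carries 40% of the score.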

Confidence Score

Each tool receives a confidence score (0-1) based on:

  • Number of data sources available (4/4 = full confidence)
  • Data freshness (stale data from failed fetches reduces confidence by 30%)

Tools with low confidence are visually de-emphasized on the radar. A tool with only 1-2 data sources should be interpreted with caution.
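The two factors above combine into a single multiplier; a minimal sketch (the exact combination rule is our assumption):

```python
def confidence(sources_present: int, stale: bool) -> float:
    """0-1 confidence: fraction of the four sources present,
    reduced by 30% when the latest fetch failed and data is stale."""
    score = sources_present / 4
    return score * 0.7 if stale else score
```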

EWMA Smoothing

To reduce week-to-week noise, we apply an Exponential Weighted Moving Average (EWMA) with a smoothing factor of 0.7:

smoothed_score = 0.7 x current_score + 0.3 x previous_smoothed_score

This means each week's displayed score is 70% new data and 30% historical momentum. A single noisy data point won't dramatically shift a tool's classification. The alpha value of 0.7 was chosen to balance responsiveness to genuine trends with resistance to temporary spikes.
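The update rule is a one-liner; the pass-through for a tool's first week (no prior smoothed score) is our assumption:

```python
ALPHA = 0.7  # smoothing factor from the formula above

def smooth(current, previous=None):
    """EWMA update: 70% new data, 30% prior smoothed score.
    The first observation has no history and passes through unchanged."""
    if previous is None:
        return current
    return ALPHA * current + (1 - ALPHA) * previous
```

A worked example: a tool sitting at a smoothed 60 that spikes to 80 this week displays 74, not 80; if the spike was noise and next week reads 60 again, the score settles back to 64.2.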

Movement Classification

Each tool is classified into one of five movement states based on its smoothed score and 12-week score delta:

  • Rising — score ≥ 65 AND (12-week delta ≥ +7 OR expert mentions ≥ 3 in 12 weeks)
  • Emerging — score 40-65 AND (12-week delta > 0 OR expert mentions ≥ 2 in 12 weeks)
  • Stable — score 30-70 AND |12-week delta| < 5
  • Declining — score < 30 OR 12-week delta ≤ -7 (and mentions not increasing)
  • New — fewer than 4 weeks of data available
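The criteria can overlap at the boundaries, so a concrete classifier needs an evaluation order; the order below (New first, then Rising > Emerging > Declining > Stable) is our assumption, and the "mentions not increasing" qualifier on Declining is omitted for brevity:

```python
def classify(score, delta_12w, mentions_12w, weeks_of_data):
    """Map a smoothed score and 12-week delta to one of five movement states."""
    if weeks_of_data < 4:
        return "New"       # cold start: not enough history for movement detection
    if score >= 65 and (delta_12w >= 7 or mentions_12w >= 3):
        return "Rising"
    if 40 <= score <= 65 and (delta_12w > 0 or mentions_12w >= 2):
        return "Emerging"
    if score < 30 or delta_12w <= -7:
        return "Declining"
    return "Stable"
```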

Hysteresis (Anti-Churn Rules)

To prevent tools from bouncing between states weekly, the algorithm applies hysteresis — once a tool is in a state, it requires a stronger opposite signal to change:

  • A Rising tool won't drop to Stable unless its score falls below 55 OR its 12-week delta goes below -3
  • A Declining tool won't move to Stable unless its score rises above 40 AND its 12-week delta exceeds +3

This ensures that movement changes represent genuine sustained shifts in technology adoption, not statistical noise from week-to-week variance.
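The two anti-churn rules stated above can be sketched as a filter applied after classification; transitions other than these two pass through unchanged (an assumption on our part):

```python
def with_hysteresis(prev_state, new_state, score, delta_12w):
    """Veto weak state changes: keep the prior state unless the
    opposite signal clears the stronger hysteresis threshold."""
    if prev_state == "Rising" and new_state == "Stable":
        # Hold Rising unless score < 55 OR 12-week delta < -3
        if score >= 55 and delta_12w >= -3:
            return "Rising"
    if prev_state == "Declining" and new_state == "Stable":
        # Leave Declining only when score > 40 AND 12-week delta > +3
        if not (score > 40 and delta_12w > 3):
            return "Declining"
    return new_state
```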

Pipeline Architecture

Our weekly refresh pipeline runs every Monday and follows these steps:

  1. Collect Google Trends + search volume via DataForSEO API
  2. Collect GitHub activity via GitHub REST API
  3. Scan expert network email mentions (aggregated counts only)
  4. Normalize raw metrics to 0-100 scale per source
  5. Compute weighted composite score
  6. Apply EWMA smoothing against previous week's scores
  7. Classify movements with hysteresis rules
  8. Detect and log movement changes
  9. Generate static JSON snapshot
  10. Rebuild and deploy the site

Cost per weekly run: approximately $4 (DataForSEO API calls). GitHub API and email scanning are free. The entire pipeline completes in under 10 minutes.

Known Limitations

We believe in documenting limitations as clearly as strengths. No methodology is perfect:

  • Open-source bias — GitHub signals are only available for open-source tools. Proprietary enterprise software (Salesforce, SAP, ServiceNow) may be under-represented in the scoring, even with weight redistribution.
  • English-centric data — Google Trends and search volume data is US-market focused. Regional trends in APAC, EMEA, or LATAM may differ significantly.
  • Expert network scope — Expert mentions reflect PE/VC and consulting interest, which skews toward enterprise and growth-stage technologies. Developer-focused tools without enterprise buyers may have weaker expert signals.
  • New tool cold start — Tools added to the radar start with "New" classification and require 4+ weeks of data collection before meaningful movement detection is possible.
  • Name collisions — Some tool names overlap with common words (e.g., "Linear", "Notion"). Our alias system mitigates this, but false matches are possible in expert mention scanning.
  • Lagging indicators — Search volume and GitHub stars are lagging indicators. By the time a tool shows significant movement in these metrics, the adoption trend may already be well established.

Comparison with Other Radars

We designed the WTF Technology Radar to address gaps in existing technology tracking tools:

  • Update frequency — WTF Radar: weekly; ThoughtWorks: semi-annual; Gartner Hype Cycle: annual
  • Methodology — WTF Radar: quantitative (published); ThoughtWorks: qualitative (advisory board); Gartner: qualitative (analysts)
  • Tool coverage — WTF Radar: 310+ tools across 12 categories; ThoughtWorks: ~100 blips in 4 quadrants; Gartner: 25-50 technologies
  • Access — WTF Radar: free, open; ThoughtWorks: free, open; Gartner: paywalled
  • Best for — WTF Radar: real-time tech intelligence; ThoughtWorks: strategic planning; Gartner: executive presentations

For a detailed comparison, see our guide: WTF Radar vs ThoughtWorks Technology Radar.

Ready to Find the Right AI Tools?

Browse our data-driven rankings to find the best AI tools for your team.