Subcategory · AI Citation Index
Who AI is citing in CI/CD
Vercel leads CI/CD evaluation queries at 71 avg score; no brand clears 50% shortlist coverage across 15 prompts.
Brands tracked
50
Avg AEO score
66/100
Citation coverage
0%
of brands cited at least once
Dominant brands
0
cited in > 50% of queries
Discovery stage
The shortlist
When buyers ask AI for the best CI/CD software
No discovery-stage prompts have been scored against this category yet. Once the scheduled shortlist audit runs, this section will surface which brands AI cites most.
Evaluation stage
The battleground
How brands fare on comparison queries · category median 15/100
Vercel leads head-to-head comparisons with a 71 avg evaluation score across 4 queries. Netlify trails at 41, IBM Terraform at 33, and IBM Business Automation Workflow at 30. The median evaluation score is 15, showing most brands earn weak or zero citation in direct matchups. GitLab, TeamCity, and Render all scored 0 despite appearing in comparison queries.
Each brand's evaluation score averages its citation performance across the head-to-head comparison queries that mention it. Above-median brands win their comparisons more often than they lose.
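The arithmetic behind the battleground is simple: a brand's evaluation score is the mean of its per-query citation scores, and the category median splits winners from losers. A minimal sketch, using hypothetical per-query scores chosen to reproduce the averages reported above (the real scoring pipeline is not shown here):

```python
from statistics import mean, median

# Hypothetical per-query citation scores (0-100) from head-to-head
# comparison prompts that mention each brand.
query_scores = {
    "Vercel": [80, 72, 68, 64],
    "Netlify": [50, 44, 38, 32],
    "TeamCity": [0, 0, 0, 0, 0, 0],
}

# A brand's evaluation score is the average of its per-query scores.
eval_scores = {brand: mean(scores) for brand, scores in query_scores.items()}

# The category median marks the line between winning and losing matchups.
category_median = median(eval_scores.values())

for brand, score in eval_scores.items():
    status = "above" if score > category_median else "at/below"
    print(f"{brand}: {score:.0f} avg ({status} median {category_median:.0f})")
```

With these assumed inputs, Vercel averages 71, Netlify 41, and TeamCity 0, matching the scores quoted in this section.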
Trends
Over the last 12 months
Too few audits to establish a trend: only two months of data (March and April 2026, with 8 and 19 prompts respectively). Vercel rose 15 points month-over-month; Render fell 5. The category average hovered near 66.
Editorial picks
Brands worth watching
Vercel
Vercel posted the highest evaluation score in the category (71 across 4 comparison queries) and climbed 15 points month-over-month. Its featured-candidate score of 78 ties for first, yet it still lacks shortlist dominance.
Read brand profile →
GitLab
GitLab earned a 72 featured-candidate score but a 0 avg evaluation score across 4 comparison queries — cited in discovery contexts but ignored in head-to-head evaluations.
Read brand profile →
Netlify
Netlify holds a 41 avg evaluation score across 4 queries, placing second in direct comparisons. Its 72 featured-candidate score suggests moderate discovery visibility despite shortlist fragmentation.
Read brand profile →TeamCity
TeamCity appeared in 6 comparison queries but scored 0 on every one, the largest zero-score sample in the evaluation set. Its 66 featured-candidate score indicates some discovery presence.
Read brand profile →
FAQ
CI/CD questions, answered
Which CI/CD tool does AI cite most often in head-to-head comparisons?
Is there a dominant CI/CD platform in AI-generated shortlists?
How is GitLab performing in AI citations for CI/CD?
What is the best CI/CD software for small teams?
Are CI/CD citation trends shifting toward newer platforms?
Why does TeamCity score zero in evaluation queries?
Related
More in Developer Tools
Want to know if AI cites your brand for CI/CD?
Free audit. ChatGPT, Perplexity, Gemini, Claude.
Run an audit →