How International SEO has changed in 2026
International SEO has been written about for two decades, and most of what’s been written is now either out of date or beside the point. The fundamentals haven’t changed, but the surfaces have multiplied: organic search is no longer limited to the traditional search results page (SERP), and the best practices for getting selected as a source now diverge across traditional and AI engines.
This guide is for marketing leaders, SEO managers, and ecommerce teams who are taking a brand into multiple countries and want to understand what works in 2026. It covers the fundamentals (hreflang, site architecture, localisation, technical setup) because you can’t skip them. But it also covers the things most current guides don’t: how AI search engines select sources differently across languages, why entity recognition matters more than keyword targeting in markets where your brand is unknown, and how the role of SEO inside the buying journey is changing in ways that traditional reporting hasn’t caught up with.
A note before we start: good SEO is good SEO. AI hasn’t replaced the discipline. It’s added a new layer on top of it.
International SEO in 2026: what’s actually changed
From ranking problem to selection problem
For most of SEO’s history, right up until the last couple of years, visibility was a ranking problem. You researched a topic, defined how the content should be structured, optimised the technical elements, and earned links; the algorithm matched your content to relevant queries, and your position on the SERP determined your click and traffic potential.
In 2026, visibility is increasingly a selection problem. Before a query is “ranked” in any traditional sense, an AI system (Google’s AI Overviews, ChatGPT, Perplexity, Claude, Gemini) has already decided which sources to retrieve, which to synthesise, and which to cite. Pages that aren’t selected at the retrieval stage never make it into the answer at all, regardless of how well-optimised they are.
This is the most important conceptual shift in the discipline. A market-specific page could be technically perfect in that it’s indexed, hreflang-correct, fast, and well-linked, and still never appear in an AI Overview, because the upstream selection process has already chosen a different source. Increasingly, that’s where the battle is fought.
The implication for international SEO is sharp. In markets where your brand has weak entity recognition or sparse local authority, you’re competing with two disadvantages at once: you have to earn the click and earn the citation in a language where AI systems have less training data to work with.
Is SEO dead now that AI is in search?
No. The question gets recycled every few years and the answer hasn’t changed.
What’s actually happening is more interesting than the death narrative. Search is fragmenting, not dying. There are more surfaces than ever where buyers research products and brands (traditional Google, Google AI Overviews, ChatGPT, Perplexity, Reddit, TikTok, YouTube, Amazon, and vertical engines), and SEO is the discipline that gets you visible across them. The fundamentals of what makes content discoverable and trustworthy are remarkably consistent; it’s the surfaces that have multiplied.
What is changing is SEO’s role in the buying journey. Click-through rates from organic positions have eroded as AI Overviews and zero-click answers absorb more queries. Last-click attribution significantly undercounts the channel, particularly in B2B and considered-purchase categories where buyers research across many sources before converting. The honest reframing is that SEO is becoming an influence channel and a brand-recall channel as much as a direct-traffic channel.
Anyone telling you SEO is dead is selling you something, generally a new acronym (AEO, GEO, LLMO, AIO) and a service to match. Treat the acronym wars as marketing noise. The work underneath them is the same work it’s always been: understand intent, build authoritative content, earn trust, make it technically discoverable. The tools have changed. The job hasn’t.
The five fundamentals that haven’t changed (and won’t)
Strip away the hype and these are the things that determine whether international SEO works. They have not changed in a decade and they will not change in the next one.
- Intent matching. Search systems, Google’s classic algorithm and modern LLMs alike, are still trying to match a user’s underlying need to the most useful answer. Pages that genuinely answer the question outrank pages that gesture at it.
- Content quality and original perspective. Generic content has always struggled. In 2026 it struggles harder, because LLMs themselves can produce generic content at scale. The premium on originality, expertise, and first-hand experience has gone up, not down.
- Authority and trust. Backlinks, brand mentions, citations across the web, and demonstrable expertise all matter as much as ever. AI systems have inherited Google’s bias toward authoritative sources because that bias was correct.
- Technical discoverability. If a search engine or AI crawler can’t find, render, and parse your page, none of the other work counts. Site speed, crawlability, structured data, mobile rendering, all of it still matters.
- User experience. Slow sites lose. Confusing navigation loses. Pages that hide their answers behind interstitials and aggressive popups lose. The bar for “good UX” has risen as user expectations have risen, but the principle is the same.
If a tactic you read about for 2026 doesn’t reinforce one of these five, treat it with suspicion. It’s probably a fad.
What is international SEO?
International SEO is the practice of optimising a website to be visible and useful to users searching across multiple countries and languages. Where standard SEO targets one market, international SEO targets several at once, which means handling the technical signals that tell search engines and AI systems which version of your content belongs to which market, and building content that actually serves users in each of those markets.
It includes site architecture decisions (ccTLD, subdirectory, or subdomain), hreflang implementation, localised keyword research, content adaptation, technical performance per region, local link-building, and increasingly, optimisation for AI surfaces that select sources across languages. Done well, it lets a single brand serve markets it wouldn’t otherwise reach. Done badly, it produces a fragmented site that ranks nowhere convincingly.
Choosing your markets before you build anything
Most international SEO guides skip straight to URL structure. That’s backwards. The biggest determinant of whether your programme succeeds is whether you’ve picked the right markets. Pick well and the technical work pays off. Pick badly and the most elegant hreflang implementation in the world won’t save you.
Reading existing organic and AI demand by country
Start with what’s already happening. Open Google Analytics 4 and segment your organic traffic by country. Look not just at session volume but at engagement metrics (pages per session, time on site, conversion rate) by country. A market that sends 5,000 sessions a month with a 0.1% conversion rate is telling you something different from one that sends 500 sessions at a 4% conversion rate.
Cross-reference with Search Console data. Which countries are surfacing your pages in their SERPs? Which queries are pulling them through? Are your meta titles and descriptions getting clicked or scrolled past?
Then look at AI demand, which is harder to measure but increasingly important. Tools like Brand Radar, Profound, and similar AI-search visibility platforms can show you which prompts are surfacing your brand in ChatGPT, Perplexity, and Claude across markets, and crucially, which ones aren’t. If you’re being mentioned in English-language AI answers but absent from German or Japanese ones, that’s a signal worth acting on.
Market viability beyond search volume
Search volume tells you whether the demand exists. It doesn’t tell you whether you can serve it.
Before committing to a market, run a viability check: Can you fulfil orders or deliver service there? Are there regulatory requirements (GDPR, country-specific data laws, certifications like CE or FDA) that need to be cleared? Is there competitive intensity from local players you don’t recognise? Are payment methods, currencies, and trust signals already in place, or do they need building? Is the cost of acquiring a customer in that market realistic at the price your product commands?
The most common mistake is treating market entry as a marketing decision. It isn’t. It’s an operational decision that marketing supports.
Country targeting vs language targeting
Once you’ve chosen your markets, you have to decide whether to target countries (UK, Germany, Japan as discrete units) or languages (English, German, Japanese regardless of where the user sits).
Country targeting is the right choice when local context genuinely differs in pricing, regulation, product availability, fulfilment, currency, payment methods, and cultural norms. A US user and a UK user shopping for the same product are operating in different commercial realities, even though they share a language.
Language targeting is the right choice when your offer is essentially the same across markets and the only thing that varies is the user’s preferred language. SaaS products fit this pattern.
The two aren’t mutually exclusive. You can run a UK English site, a US English site, and a single Spanish-language site that serves Spain and Latin America from one set of pages. The decision is local commercial reality vs operational simplicity, and it should be made deliberately, not by accident.
Site architecture: ccTLD vs subdirectory vs subdomain
This is the most consequential technical decision in international SEO. Get it right and the rest of the programme has a stable foundation. Get it wrong and you’re either rebuilding in 18 months or accepting that you’re competing with one hand tied behind your back.
The trade-offs in plain language
There are three viable structures.
- Country-code top-level domains (ccTLDs) are dedicated domains for each market: example.de, example.fr, example.co.uk. They send the strongest possible geo-targeting signal to search engines and they tend to inspire the most user trust in their respective markets. They’re also the most expensive to manage, since you’re effectively running multiple sites, and each one starts with zero domain authority.
- Subdirectories put each market in a folder of your main domain: example.com/de/, example.com/fr/, example.com/uk/. They consolidate domain authority on a single domain, which means new markets benefit from the work you’ve done in your home market. They’re cheaper to manage and easier to migrate. They send a weaker geo-targeting signal than ccTLDs but, in 2026, that signal is supplemented sufficiently by hreflang, content quality, and local backlinks for most use cases.
- Subdomains sit between the two: de.example.com, fr.example.com. They were popular in the late 2000s and they’re still used, but they share most of the disadvantages of ccTLDs (separate authority profiles in practice) without the geo-targeting benefit. We rarely recommend them for new builds in 2026.
When ccTLDs are worth the operational cost
ccTLDs make sense in three situations. First, when you’re operating in a market where local trust is paramount, e.g., a German B2B audience may instinctively trust .de over .com. Second, when your business already runs as separate legal entities per country with separate teams, budgets, and content calendars; the operational pattern matches the architecture. Third, when you’re entering markets where Google’s geo-targeting signals are weaker or compete with local search engines (China, Russia, South Korea) and a ccTLD plus local hosting is part of the price of admission.
For most multi-market expansion, particularly when starting out, subdirectories are the better default. They’re cheaper, easier to manage, and the consolidated authority compounds.
Migration risk: why this is a one-way door
Switching architectures after launch is brutal. Moving from a ccTLD setup to subdirectories, or vice versa, involves redirects, re-indexing, link equity loss, and a guaranteed traffic drop while search engines and AI systems work out the new structure. We’ve seen migrations recover in three months and we’ve seen them never fully recover.
This means the architecture decision should be made with five-year operational reality in mind, not just the next 12 months. Talk to whoever’s going to run the markets day-to-day before talking to whoever’s going to build the site.
Hreflang implementation in 2026
Hreflang remains the single most important technical signal for international SEO in traditional search. It’s also one of the most commonly broken implementations on the web. And in 2026, its influence is more bounded than it used to be.
What hreflang is and how it works
The hreflang attribute tells search engines that a particular page has alternate versions for different languages or regions, and which version should be served to which user. It’s expressed either as an HTML link element in the page head, an HTTP header, or as entries in an XML sitemap.
A correctly implemented hreflang setup prevents the most common international SEO failure mode: a German-language page ranking in Spain because Google can’t tell the two pages apart, or a US-priced page being shown to a UK customer because no signal differentiates them. It also prevents duplicate-content issues when the same content is genuinely served in multiple regions.
Hreflang doesn’t influence rankings directly. It influences which version of a ranking page is served. That’s a critical distinction.
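As an illustration (the URLs are hypothetical), a minimal HTML-head implementation for a UK/US/German setup with an x-default fallback might look like this. The same full set of alternates, including the page’s reference to itself, must appear on every version:

```html
<!-- In the <head> of every alternate version of the page -->
<link rel="alternate" hreflang="en-gb" href="https://example.com/uk/widgets/" />
<link rel="alternate" hreflang="en-us" href="https://example.com/us/widgets/" />
<link rel="alternate" hreflang="de-de" href="https://example.com/de/widgets/" />
<!-- Fallback for users whose language/region has no dedicated version -->
<link rel="alternate" hreflang="x-default" href="https://example.com/widgets/" />
```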
The five most common implementation errors
Most hreflang failures come from the same handful of mistakes.
- Missing return tags. Hreflang must be reciprocal. If your English page links to your German page, your German page has to link back to your English page. Single-direction links are routinely ignored.
- Wrong language or country codes. Hreflang uses ISO 639-1 for languages and ISO 3166-1 alpha-2 for countries, not made-up codes. “en-uk” is wrong; the correct code is “en-gb.” Tools won’t always flag this for you.
- Including the wrong URL. The hreflang tag has to point to the canonical, indexable URL of the alternate version. Pointing it to a redirect, a noindexed page, or a page that returns anything other than a 200 response breaks the relationship.
- Missing the x-default tag. When you don’t have a version for a user’s specific language and region, x-default tells the search engine which page to fall back to. Sites that omit it see the wrong fallback served.
- Mixing implementation methods inconsistently. Picking one method (HTML, header, or sitemap) and applying it consistently is more reliable than partial implementation across multiple methods. Sitemap-based implementation tends to scale better at enterprise volume.
If you do nothing else with this section, audit those five things. They account for the majority of broken setups.
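To make the return-tag rule concrete, here is a small illustrative sketch, not a crawler: it assumes you have already extracted each page’s hreflang entries into a dict, and it flags alternates that don’t link back:

```python
def missing_return_tags(hreflang):
    """Given {page_url: {lang_code: alternate_url}}, return (page, alternate)
    pairs where the alternate does not link back to the page."""
    errors = []
    for page, alternates in hreflang.items():
        for lang, alt_url in alternates.items():
            if alt_url == page:
                continue  # a page's self-reference needs no return tag
            # The alternate must list `page` among its own hreflang URLs
            if page not in hreflang.get(alt_url, {}).values():
                errors.append((page, alt_url))
    return errors

pages = {
    "https://example.com/uk/": {"en-gb": "https://example.com/uk/",
                                "de-de": "https://example.com/de/"},
    # The German page forgets to link back to the UK page:
    "https://example.com/de/": {"de-de": "https://example.com/de/"},
}
print(missing_return_tags(pages))
# [('https://example.com/uk/', 'https://example.com/de/')]
```

Running a check like this on every deploy, fed from a crawl of your real pages, catches single-direction links before search engines silently ignore them.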
Why hreflang has diminished influence in AI search
This is the part most current guides miss. Hreflang operates at the serving layer: after content has been retrieved and evaluated, it tells the engine which version to show. That works as designed in classical SERPs.
In AI-mediated retrieval, the selection of source content happens upstream, before serving. Retrieval-augmented generation systems pick which sources to synthesise based on relevance, authority, and entity confidence, and at that stage, hreflang signals may not be evaluated at all. The system selects a single representation, synthesises an answer, and serves it.
The practical consequence: a perfectly hreflang-correct German page could be technically valid and still never appear in a German-language AI Overview, because the upstream system chose to synthesise from the English source. Hreflang has no mechanism to influence that selection.
What does influence it: clear entity definition per market, genuine local authority signals (links from in-market sources, mentions in regional press, association with local entities in knowledge graphs), and content that demonstrates real market differentiation rather than mechanical translation. Hreflang is necessary; it is no longer sufficient.
XML sitemaps vs on-page tags at enterprise scale
For sites with up to a few dozen language/country combinations, on-page hreflang in the HTML head is straightforward and easy to debug. Beyond that, particularly for enterprises with 30+ markets, XML sitemap implementation scales better. It centralises the hreflang relationships in one place, reduces page weight, and makes bulk updates manageable.
The trade-off is debugging: errors in sitemap-based hreflang are harder to spot than errors in page-level tags. We recommend sitemap implementation paired with regular automated auditing, not sitemap implementation as a one-time setup.
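For reference, the sitemap-based equivalent (hypothetical URLs again) nests the full alternate set inside each `<url>` entry using the xhtml:link extension namespace:

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/uk/widgets/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://example.com/uk/widgets/" />
    <xhtml:link rel="alternate" hreflang="de-de" href="https://example.com/de/widgets/" />
    <xhtml:link rel="alternate" hreflang="x-default" href="https://example.com/widgets/" />
  </url>
  <!-- The German URL gets its own <url> entry repeating the same full alternate set -->
</urlset>
```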
Keyword research that survives translation
If you take one thing from this section: don’t translate keywords. Translate intent.
Why translated keyword lists fail
A keyword list translated word-for-word from English to German will miss the point in obvious and subtle ways. Obvious: search volumes diverge. A keyword that pulls 50,000 monthly searches in the US might have 200 in Germany. Optimising for it is a waste. Subtle: native speakers don’t search the way translation tools think they do.
The English terms “trainers” and “sneakers” both describe the same product; UK and US users prefer different ones. German B2B buyers search using compound nouns that don’t have direct English equivalents. Japanese users mix scripts (kanji, hiragana, katakana, romaji) within a single query in ways that pure translation misses entirely. Arabic users search in ways that depend on whether they’re using diacritical marks. None of this comes through a translation API.
Worse, translated keyword lists optimise for the wrong intent. A user searching “buy a car” in English has a different mental model from a user searching the German equivalent, and the entire shape of their journey, including which surfaces they research on, may differ.
Native-language research workflows
The reliable workflow goes like this. Start with intent rather than keywords: what is the user trying to accomplish? Then work with a native-speaker SEO or local consultant to identify the actual phrases that intent surfaces in. Validate with local-volume tools (Ahrefs and Semrush both segment by country; Google Keyword Planner can be set per country). Cross-check by reading top-ranking pages in the target market, and if their titles and headings consistently use a phrasing different from what you’ve translated, trust the SERP.
Most agencies serving international clients invest more time in this than in any other research activity, and it shows in the results. Cheap keyword translation is a false economy that compounds across years of content production.
Cross-market keyword variants
A non-exhaustive set of examples worth knowing about:
In English, US/UK/AU/IN markets diverge significantly. “Cell phone” vs “mobile” vs “handphone” is a famous example. So is “vacation” vs “holiday.” So is “résumé” vs “CV.” Brand positioning sometimes varies too: Aldi, for example, positions differently in the UK than in Germany.
In Spanish, “ordenador” (Spain) vs “computadora” (Latin America). “Coche” vs “carro.” Vocabulary diverges enough that a single Spanish keyword strategy serving both Spain and Mexico is a mistake.
In French, France French vs Quebec French diverge on vocabulary, formality, and search behaviour. “Email” vs “courriel” is the canonical example.
The implication is consistent: each market needs its own keyword research, even when nominally sharing a language.
Using AI tools for cross-language research
LLMs are genuinely useful for international keyword research, with two important caveats.
They’re useful for: brainstorming intent variations, generating long-tail variants, identifying related concepts in a target language, and producing first-pass content briefs in languages you don’t speak. They’re a force multiplier for native-speaker editors, not a replacement for them.
They’re misleading for: estimating search volumes (LLMs confabulate them; always cross-check against real data), identifying brand-specific or jargon-heavy queries in niche B2B sectors, and judging cultural appropriateness or tone. The risk is that AI-assisted research feels comprehensive when it’s actually surface-level. Treat it as a starting point and validate.
Localisation beyond translation
Translation, transcreation, and localisation compared
Translation converts text from one language to another with fidelity to the source. Transcreation rewrites the underlying message so it lands the same way in the target market, sometimes departing significantly from the source words to preserve the source intent. Localisation is broader still, adapting an entire experience (text, imagery, layout, currencies, payment methods, examples, references) to fit local norms.
International SEO programmes that succeed do all three. Translation is necessary for body content where accuracy matters (specs, legal, FAQs). Transcreation is necessary for marketing copy, headlines, and CTAs where tone and resonance matter more than literal accuracy. Localisation is the umbrella that makes the page feel native rather than imported.
What actually needs localising
Beyond the obvious (language), the elements that most need adaptation are:
- Currency and pricing logic.
- Payment methods (Klarna in Germany, iDEAL in the Netherlands, Boleto in Brazil): offering locally relevant payment methods drives trust and conversion.
- Units of measurement (metric vs imperial, kg vs lb).
- Date and number formats (29/04/2026 vs 04/29/2026; comma vs period as decimal separator).
- Phone number formats.
- Customer support hours and channels in local time zones.
- Shipping, returns, and customs information.
- Trust signals (local certifications, association memberships, awards).
- Tax and legal disclosures specific to each jurisdiction.
- Imagery that reflects local users and contexts.
- Cultural references, holidays, examples, and idioms.
Pages that translate the body copy but leave the price in dollars, the phone number in US format, and the testimonials from Americans signal “imported” loudly and lose trust accordingly.
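A minimal sketch of per-market display rules follows. The market table and symbol placement are illustrative simplifications; a production system would draw on full CLDR locale data via an i18n library rather than a hand-rolled map:

```python
from datetime import date

# Illustrative per-market conventions (assumed values, not a full CLDR dataset)
MARKETS = {
    "en-US": {"currency": "$", "date_fmt": "%m/%d/%Y", "decimal": ".", "thousands": ","},
    "en-GB": {"currency": "£", "date_fmt": "%d/%m/%Y", "decimal": ".", "thousands": ","},
    "de-DE": {"currency": "€", "date_fmt": "%d.%m.%Y", "decimal": ",", "thousands": "."},
}

def format_price(amount, market):
    """Format a price with the market's decimal and thousands separators."""
    m = MARKETS[market]
    whole, frac = f"{amount:,.2f}".split(".")
    whole = whole.replace(",", m["thousands"])
    return f"{m['currency']}{whole}{m['decimal']}{frac}"

def format_date(d, market):
    """Render a date in the market's expected order and separator."""
    return d.strftime(MARKETS[market]["date_fmt"])

print(format_price(1299.5, "de-DE"))            # €1.299,50
print(format_date(date(2026, 4, 29), "en-US"))  # 04/29/2026
```

The point of the sketch is the failure mode it prevents: the same number, shown with US separators to a German buyer, reads as a different price.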
Cultural adaptation in B2B vs B2C contexts
B2C localisation tends to focus on tone, imagery, social proof, and emotional resonance. The bar for cultural fluency is high because consumer purchases are emotional and the cost of feeling “off” is a lost sale.
B2B localisation tends to focus on procurement norms, compliance signals, certifications, and the language of decision-making in each market. German B2B buyers, for example, weigh technical specification depth and certification heavily. Japanese B2B buyers value formality, established relationships, and demonstrated longevity. US B2B buyers tolerate aggressive sales language that would alienate a Northern European audience. Get these wrong and you don’t just lose conversion; you lose credibility on the first impression.
Technical SEO for global sites
The fundamentals don’t change at scale, but several specific technical issues become disproportionately important when serving multiple markets.
CDN strategy and server location
Site speed varies dramatically by region depending on where your content is served from. A US-hosted site delivering content to Australia or India without a CDN will be visibly slow to those users, and slow sites lose rankings, conversions, and AI citations.
A content delivery network with edge nodes in your target markets is non-negotiable for international programmes. Cloudflare, Fastly, and Akamai are all viable; the choice is about pricing, integration, and existing tooling rather than performance differences.
For markets behind the Great Firewall (notably China), a CDN with mainland China presence and a local hosting setup are required to achieve usable speeds. This adds operational complexity (ICP licensing, content review) that has to be factored into the market-entry decision.
Core Web Vitals across regions
Core Web Vitals (Largest Contentful Paint, Interaction to Next Paint, Cumulative Layout Shift) are measured per country in Search Console. A site that passes globally could fail in specific regions because of latency, network conditions, or device profiles in those markets. Mobile-first indexing makes this worse: in markets with high mobile penetration on slower networks, a site optimised for desktop performance could underperform badly on the actual user device profile.
Audit Core Web Vitals per target country, not just globally. The differences will surprise you.
Crawl budget for multi-country sites
Large international sites, particularly ecommerce, quickly hit crawl-budget limits if not architected carefully. Faceted navigation, filter parameters, and per-country variations can balloon the URL count into the millions, and Googlebot will simply not crawl all of it.
The standard remedies still apply: clean URL structures, judicious use of noindex and canonical tags, robots.txt directives for low-value parameter combinations, and prioritisation of crawlable URLs in XML sitemaps. The added wrinkle for international sites is that crawl budget is allocated unevenly across country variants, which means the low-priority country’s most important pages may not be crawled at all. Monitor crawl stats per market.
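As an illustration of the parameter-pruning remedy (the paths and parameter names here are hypothetical), a robots.txt for a multi-country ecommerce site might carve out low-value filter combinations while leaving the country directories themselves crawlable:

```
# Hypothetical example: block low-value faceted/filter URLs in every country folder
User-agent: *
Disallow: /*?sort=
Disallow: /*?color=
Disallow: /*&page=

# Country directories (/uk/, /de/, /fr/) remain fully crawlable
Sitemap: https://example.com/sitemap-index.xml
```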
IP redirects and why they break SEO
A perennial mistake: detecting a user’s IP address and automatically redirecting them to “their” market’s site. It feels helpful. It breaks SEO.
Search engines crawl mostly from US IPs. If your site auto-redirects US IPs to your US version, Googlebot only ever sees the US version, and your other markets never get indexed. Beyond that, IP-based redirection is wrong often enough (VPN users, travellers, people on satellite connections) to be genuinely user-hostile.
The right pattern is a banner or modal that suggests the appropriate market based on detected location, but lets the user choose. Country selection persists in a cookie or preference. Content remains crawlable on its canonical URLs.
Building authority in each market
Why global link tactics don’t transfer
Links from your home market do not establish authority in other markets. A site with 5,000 referring domains, all from the US, looks domestically strong and locally weak in Germany. Search engines and AI systems are aware of this.
Building authority in each new market means earning links and mentions from that market: local trade publications, local industry associations, local partner sites, local journalism, local podcast appearances, local conference participation. None of this is fast, and most of it isn’t possible to outsource cheaply.
Local PR, trade press, and association memberships
The reliable methods for building local authority are unglamorous and well-known. Memberships in country-specific industry associations. Sponsorships of local events. Original research released in local language with local angle. Contributions to local trade press. Speaking slots at country-relevant conferences. Partnerships with local distributors, agencies, or complementary brands.
What doesn’t work: cheap link schemes targeting country-coded directories, mass outreach in a language you don’t operate in, and translated press releases that signal “imported” the moment a local reader sees them.
Trust signals that vary by region
Beyond links, the trust signals that influence both search visibility and conversion vary surprisingly by market. Reviews on platforms that matter locally (Trustpilot in the UK, eKomi in Germany, Yelp in the US) carry different weight by region. Payment-method logos send trust signals (Klarna in Northern Europe, Cartes Bancaires in France, UnionPay in China). Industry certifications (TÜV in Germany, Kitemark in the UK) communicate quality. Local press logos in an “as featured in” section work. Imported logos don’t.
These aren’t ranking factors directly. But they affect the engagement metrics (bounce rate, time on site, conversion) that feed back into ranking systems and into AI citation decisions. Local trust compounds.
International SEO beyond Google
Baidu, Yandex, Naver, Seznam: where each one matters
Google’s market share is dominant globally but not universal. Several engines matter enough in their home markets that optimising for them is part of the international SEO job, not a footnote.
- Baidu dominates China with the majority of search market share. Optimising for Baidu requires Chinese-language content, mainland China hosting (or a credible CDN solution), an ICP license, and an understanding that Baidu’s algorithm weights different signals than Google’s, including Baidu’s own ecosystem of properties (Baidu Baike, Baidu Tieba, Baidu Zhidao).
- Yandex holds a significant share in Russia and parts of the CIS. Russian-language content and an understanding of Yandex Webmaster Tools are required.
- Naver dominates South Korea, where Google still trails. Naver’s results blend search, blog content, café (forum) content, and shopping in a way that rewards being present across Naver’s properties as much as being technically optimised.
- Seznam retains a meaningful minority share in Czechia. Google leads the market, but Czech-language content and visibility in Seznam’s own ecosystem still matter for brands seriously committed to the market.
ICP licensing and other market-entry requirements
China deserves its own paragraph because it’s the most complex case. To run a credible site for the Chinese market, you need: an ICP (Internet Content Provider) license, which requires a Chinese business entity; mainland China hosting or a CDN solution that works inside the firewall; Chinese-language content that passes regulatory review; and an awareness of which topics, words, and references trigger filtering.
This is why most non-Chinese brands operating in China use a partnership or a Hong Kong-routed approach rather than a full mainland setup. The decision is operational and legal as much as technical.
When to invest in non-Google search engines
The honest answer: when the market is big enough to justify the investment and your competitors are doing it credibly. China almost always justifies it if you’re committed to the market. Russia and South Korea justify it for many B2C and ecommerce brands. Japan justifies a Yahoo-aware approach for most serious entries. Czechia and Vietnam are case-by-case.
If you’re just exploring a market, optimising for Google in that country delivers most of the available traffic. If you’re committing seriously, ignoring the local engine leaves money on the table.
International SEO in AI search
This is the section that most current guides handle weakly. AI-mediated search behaves differently across languages and markets in ways that change the optimisation strategy meaningfully.
How AI search engines select sources differently
Traditional search ranks pages. AI search retrieves and synthesises. The retrieval step is where most international SEO programmes are losing visibility without realising it.
When ChatGPT, Perplexity, Claude, or Google’s AI Overviews answer a query, they retrieve a small handful of sources (three to ten), synthesise an answer from them, and cite a subset. The retrieval decision weights different signals than classical ranking: source authority, entity confidence, content recency, and freedom from contradiction with other authoritative sources.
The implication for international SEO is that being technically optimised is no longer sufficient to be cited. You also need to be the kind of source the system trusts at the retrieval stage, which means a clear entity, an authoritative domain, original information, and structured content the system can confidently extract from.
The multilingual LLM bias problem
Large language models are trained predominantly on English-language data. The English-language web is overrepresented in training corpora; many other languages are dramatically underrepresented. The practical consequence is that AI tools have weaker, less consistent knowledge of brands, products, and concepts in non-English contexts.
A brand with strong English-language presence and weak local-language presence will routinely find that AI tools describe it correctly in English answers and incorrectly (or not at all) in answers in Portuguese, Vietnamese, Polish, or Arabic. The same brand may be hallucinated about, conflated with competitors, or simply omitted from comparison answers in markets where its training-data footprint is small.
The remedy is unglamorous: build genuine in-language content, in-language media coverage, in-language Wikipedia and Wikidata presence, and in-language structured data that reinforces who the entity is and what it does. Over time, these signals work their way into the next generation of training data and into the real-time retrieval that AI tools perform on the live web.
Building entity recognition across markets
Entity SEO (making it unambiguously clear to search systems and AI tools who you are, what you do, and how your brand relates to other brands and concepts) is the most underdiscussed lever in international SEO.
Practically, it means: a clear and consistent organisation schema across all your domains; stable naming conventions for products and services across markets; contributions to Wikipedia and Wikidata in each market’s language where notability allows; local-language press coverage that establishes the entity in each market; consistent NAP (name, address, phone) data across local listings; and explicit modelling of how your global brand relates to its local variants (subsidiaries, regional offices, market-specific product names).
When entity relationships are clear and consistent across markets, AI systems answer about your brand more accurately. When they’re ambiguous, the system defaults to its most confident global interpretation, which is the wrong one for the local user.
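The organisation-schema side of this can be made concrete. Below is a minimal sketch of building schema.org Organization JSON-LD that models a global brand and a local subsidiary; the brand names, URLs, and Wikidata ID are illustrative placeholders, not a real implementation:

```python
import json

def organization_schema(name, url, same_as, parent_url=None):
    """Build a schema.org Organization JSON-LD block.

    sameAs links (Wikipedia, Wikidata, social profiles) are what anchor
    the entity for search and AI systems; parentOrganization models how
    a local subsidiary relates to the global brand.
    """
    schema = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,
    }
    if parent_url:
        schema["parentOrganization"] = {"@type": "Organization", "url": parent_url}
    return schema

# Hypothetical global brand and its German subsidiary.
global_org = organization_schema(
    "Example Corp",
    "https://example.com/",
    ["https://en.wikipedia.org/wiki/Example_Corp",
     "https://www.wikidata.org/wiki/Q0000000"],
)
de_org = organization_schema(
    "Example GmbH",
    "https://example.com/de/",
    ["https://de.wikipedia.org/wiki/Example_Corp"],
    parent_url="https://example.com/",
)
print(json.dumps(de_org, indent=2))
```

The output would be embedded in a `<script type="application/ld+json">` tag on each market’s site; the point is that the parent-child relationship is declared explicitly rather than left for the machine to infer.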
Measuring share of voice in AI answers per market
The new metric worth tracking: how often does your brand surface in AI answers for category-level queries in each of your target markets? Tools like Profound, Brand Radar, and other AI-search visibility platforms now offer this. You query a representative set of prompts (“best CRM for mid-market B2B,” “top suppliers of X in Germany”), see how frequently your brand appears, and compare that to competitors.
Reported as a percentage (share of voice in AI answers, by market), this becomes a defensible KPI for the part of SEO most marketing leaders feel intuitively but can’t quantify. It’s also a leading indicator: brands that gain AI share of voice tend to see pipeline gains six to twelve months later.
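The arithmetic behind the KPI is simple enough to sketch. Assuming you have collected, per market, the set of brands mentioned in each AI answer for your prompt set (the brand names and answer data below are illustrative):

```python
from collections import Counter

def share_of_voice(answers, brands):
    """Per-brand share of voice: the percentage of AI answers in which
    each brand appears.

    `answers` is a list of sets, one per category prompt, each
    containing the brands mentioned in that prompt's answer.
    """
    counts = Counter()
    for mentioned in answers:
        for brand in brands:
            if brand in mentioned:
                counts[brand] += 1
    total = len(answers)
    return {b: round(100 * counts[b] / total, 1) for b in brands}

# Illustrative: four category prompts run in one market.
answers_de = [
    {"BrandA", "BrandB"},
    {"BrandB"},
    {"BrandA", "BrandC"},
    {"BrandB", "BrandC"},
]
sov = share_of_voice(answers_de, ["BrandA", "BrandB", "BrandC"])
# BrandB appears in three of four answers → 75.0
```

Run the same calculation per market and per quarter, and the competitive trend line falls out directly.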
Hallucination risk in foreign markets
A specific risk worth flagging: AI tools sometimes confidently state wrong information about your brand in markets where their training data is sparse. Wrong shipping times. Wrong product availability. Wrong pricing. Wrong company history. Wrong people in wrong roles.
This isn’t a hypothetical. It happens to brands daily, usually invisibly, until a customer turns up with a complaint based on what an AI assistant told them. The protection is, again, entity clarity: structured data, clear product information, consistent NAP, current content, and active monitoring of how AI tools describe your brand in each market. Where errors persist, the remedy is to flood the relevant signals with correct information so the model has a less ambiguous source to retrieve from.
The international SEO audit
A 12-point international SEO audit checklist
A structured audit covers each of the following. Score each on a 0-3 scale (broken / partially working / working / excellent) and prioritise remediation by impact.
- Site architecture. Is the ccTLD/subdirectory/subdomain choice still appropriate for the business as it stands today? Are crawl, index, and authority signals consolidating in the right place?
- Hreflang implementation. Are all reciprocal tags in place? Are language and country codes correct? Are alternate URLs canonical, indexable, and 200-responding? Is x-default present?
- Indexation coverage. Is each market version of the site fully indexed? Per-property Search Console reporting shows discrepancies clearly.
- Per-market technical performance. Are Core Web Vitals passing in each target country? Is the CDN serving close to the user?
- Localised keyword targeting. Is each market’s content built around native-language keyword research, or is it translated from English?
- Localisation depth. Does each market’s content go beyond translation by localising currency, payment methods, trust signals, imagery, examples?
- Local link profile. What share of referring domains in each market come from genuinely local sources?
- Entity consistency. Is organisation schema deployed consistently? Are NAP data, product names, and brand descriptions consistent across markets?
- Search Console health per market. Are there manual actions, indexing issues, or hreflang errors flagged in any market property?
- AI visibility per market. Where does the brand surface in AI Overviews and AI assistant answers in each target language? Where is it absent?
- Local search engine presence. For markets where Baidu, Yandex, or Naver matter, is there a credible presence, or is the market effectively unaddressed?
- Measurement infrastructure. Are GA4, Search Console, and AI-visibility tools configured to report per market? Is reporting structured around it?
A quarterly walkthrough of these twelve items catches most international SEO problems before they compound.
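Of the twelve items, hreflang reciprocity is the one most often broken and the easiest to verify programmatically. A minimal reciprocity check is sketched below, assuming you have already extracted each page’s declared alternates from a crawl export (extraction itself would need an HTML parser or crawler; the URLs are placeholders):

```python
def find_missing_returns(hreflang_maps):
    """Find non-reciprocal hreflang pairs.

    hreflang_maps: dict mapping each page URL to its declared
    alternates, e.g. {"https://ex.com/en/": {"en": "...", "de": "..."}}.
    A pair is valid only if the alternate page links back to the
    referring page under some hreflang code.
    """
    missing = []
    for page, alternates in hreflang_maps.items():
        for code, alt_url in alternates.items():
            back = hreflang_maps.get(alt_url, {})
            if page not in back.values():
                missing.append((page, code, alt_url))
    return missing

# Illustrative: the German page omits its return tag to /en/.
pages = {
    "https://ex.com/en/": {"en": "https://ex.com/en/",
                           "de": "https://ex.com/de/"},
    "https://ex.com/de/": {"de": "https://ex.com/de/"},
}
errors = find_missing_returns(pages)
# One error: /en/ points to /de/, but /de/ never points back.
```

Google ignores hreflang annotations without a return tag, so every tuple this returns is a page pair where the annotation is silently doing nothing.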
When to audit (and when to rebuild)
If the score across these twelve items is mostly 2s and 3s with isolated 0s, audit and remediate. If the score is mostly 1s and 0s, you’re not auditing, you’re rebuilding. The signal that you need a rebuild rather than a refresh is when the architecture itself is wrong (chosen years ago for a business model that no longer applies) or when the site has been built incrementally with no consistent international strategy. In those cases, fixing individual items wastes effort that should go into a planned re-architecture.
Measurement and KPIs
KPIs that actually matter
The KPIs that matter for international SEO are the ones tied to commercial outcomes per market. Total organic traffic globally is a vanity metric; organic-influenced revenue per market is not.
The KPIs we recommend tracking, per market: organic sessions, organic conversion rate, organic-influenced revenue, branded vs non-branded query volume, share of voice in AI answers for category queries, ranking positions for top commercial keywords, and time to first conversion from new markets.
Notably absent from this list: keyword rankings as a primary metric. Rankings are an input, not an outcome.
Multi-market attribution
Attribution across regions is genuinely hard. Buyers research in multiple sessions across multiple devices, sometimes from multiple countries (a US-based buyer researching German suppliers, for example). Last-click attribution understates SEO’s contribution badly in this pattern.
A practical model: track first-touch, last-touch, and at least one mid-funnel touchpoint for each conversion. Report the channel mix by market and by buyer segment. Don’t try to settle the attribution debate; just report the same data three ways and let the patterns be visible.
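Reporting the same conversions three ways can be expressed as a simple aggregation. A hedged sketch, assuming conversion paths are available as ordered lists of channel touches (the data structure and channel names are assumptions for illustration, not a prescribed schema):

```python
from collections import Counter

def channel_mix(conversions):
    """Count conversions by first-touch, last-touch, and mid-funnel
    channel. Each conversion is an ordered list of channel names; the
    mid-funnel touch is the middle element (which falls back to the
    only touch when the path has a single step).
    """
    first, last, mid = Counter(), Counter(), Counter()
    for path in conversions:
        first[path[0]] += 1
        last[path[-1]] += 1
        mid[path[len(path) // 2]] += 1
    return {"first_touch": dict(first),
            "last_touch": dict(last),
            "mid_funnel": dict(mid)}

# Illustrative journeys for one market and segment.
paths = [
    ["organic", "email", "paid"],
    ["organic", "paid"],
    ["paid"],
]
mix = channel_mix(paths)
# Organic dominates first-touch; paid dominates last-touch.
```

Laid side by side, the three views make the pattern the passage describes visible: last-touch alone would credit paid with everything, while first-touch shows organic opening two of the three journeys.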
Reporting to leadership
The international SEO report that gets read by a CMO or CFO has three sections: market-by-market commercial outcomes (organic-influenced revenue, conversions, share of category in market), leading indicators (rankings, share of voice in AI, branded query volume), and what’s been done in the last period and what’s planned for the next. Anything more is noise.
Most international SEO failures are operational, not technical. The technical work isn’t the hard part. The hard part is sustaining the work, in market, with people who care, after the initial launch enthusiasm fades. If you want to achieve this, get in touch with us today.
Facing a Challenge?
Get help from international SEO professionals. We can make your website visible when it matters most, with a new game plan to boost your online presence when your potential customers need you.
Get Started
