Most developers focus on performance, functionality, and UI, but often overlook one silent killer of SEO: the URL structure.
A poorly designed URL system can confuse search engines, reduce your rankings, and make even great content invisible.
My name is Asfak Ahmed, and today I'll share what I've learned from more than two years of building SEO-friendly SaaS applications. In reality, SEO starts with code, and URLs are the foundation of that code. If the URL is wrong, your website's SEO is broken from birth, no matter how much optimization you do later.
What Is an SEO-Friendly URL?
An SEO-friendly URL is a clean, descriptive, and keyword-rich address that tells users and search engines exactly what the page is about.
Example:
✅ https://example.com/blog/seo-friendly-url-guide
❌ https://example.com/post?id=9834&cat=blog123
Why this matters:
- Improves click-through rate (CTR) – people are more likely to click on a clean, meaningful link.
- Helps indexing – search engines understand page context faster.
- Boosts trust – users can easily guess the content of the page.
How Do URLs Affect Website SEO?
Search engines use URLs as one of the primary signals to:
- Understand content hierarchy
- Detect keywords and topics
- Identify canonical (main) pages
- Evaluate user experience (readability, trust)
Let’s break that down:
| SEO Factor | How the URL Affects It |
| --- | --- |
| Crawlability | Long, messy URLs make it harder for crawlers to index content. |
| Keyword relevance | Including relevant keywords in the URL helps ranking. |
| Duplicate content | Multiple URLs pointing to the same content can hurt SEO. |
| User trust | Clean URLs look safer and more professional. |
What Is URL Indexing?
URL indexing is the process by which Google (and other search engines) discover your pages, analyze their content, and store them in their searchable database, the Google Index. If your page isn’t indexed, it’s invisible to search, no matter how well-optimized it is.
The Process in 3 Steps:
1. Discovery (finding URLs): Googlebot discovers URLs from sitemaps, links, or previous crawls.
2. Crawling (accessing URLs): Googlebot requests the page to read the content, HTML, and metadata.
3. Indexing (storing URLs): if the content is valid and allowed, Google adds it to the index.
How Do URLs Get Discovered?
There are multiple entry points where Google finds new URLs:
| Discovery Source | Description |
| --- | --- |
| Sitemap.xml | The most reliable way to submit all canonical URLs. |
| Internal links | Google follows links between your pages. |
| External backlinks | When another website links to your page. |
| JavaScript rendering | For modern apps, Google executes JS to extract dynamic routes. |
| Manual submission | You can submit URLs directly in Google Search Console. |
A pro tip for developers:
If you're building with React, Next.js, or Vue, ensure that your routing generates real, crawlable links (`<a href>` tags), not JavaScript-only navigations, as in the sketch below.
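For example, with react-router-dom (v6 assumed here), the built-in `<Link>` component renders a real `<a href>` in the HTML that crawlers can follow, while a bare click handler leaves them nothing; a minimal sketch with an illustrative route:

```jsx
import { Link, useNavigate } from 'react-router-dom';

// Crawlable: <Link> renders a real <a href="/blog/seo-guide"> in the HTML.
function GoodNav() {
  return <Link to="/blog/seo-guide">Read the SEO guide</Link>;
}

// Not crawlable: no <a href> in the markup, only a JS-triggered navigation.
function BadNav() {
  const navigate = useNavigate();
  return (
    <div onClick={() => navigate('/blog/seo-guide')}>Read the SEO guide</div>
  );
}
```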
Why Don't URLs Get Indexed?
Let’s look at common indexing problems developers accidentally create and how to fix them.
Blocked by robots.txt

```txt
User-agent: *
Disallow: /
```

Result: Googlebot can't crawl any pages.
Fix: Only disallow what's necessary:

```txt
Disallow: /admin/
Disallow: /api/
```
Noindex Meta Tag
```html
<meta name="robots" content="noindex">
```
Result: The page is crawled but not indexed.
Fix: Remove this tag from public pages.
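If you need noindex on staging but never in production, one option is to gate the tag on an environment variable. A sketch for Next.js; NEXT_PUBLIC_APP_ENV is an assumed variable name, so substitute whatever your deploy pipeline actually exposes:

```jsx
import Head from 'next/head';

// Sketch: NEXT_PUBLIC_APP_ENV is a hypothetical variable distinguishing
// staging from production in your own setup.
export default function SeoRobots() {
  const isProduction = process.env.NEXT_PUBLIC_APP_ENV === 'production';
  return (
    <Head>
      {/* Emit noindex only outside production so public pages stay indexable. */}
      {!isProduction && <meta name="robots" content="noindex" />}
    </Head>
  );
}
```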
JavaScript-Only Routing
If your app uses client-side rendering (CSR) only, Google might not see your links.
Fix: Use server-side rendering (SSR) or static generation (SSG).
Frameworks like Next.js, Nuxt, and SvelteKit support SSR and SSG out of the box, which makes them far safer for SEO.
Canonical Misconfiguration
```html
<link rel="canonical" href="https://example.com/" />
```
If all pages point to the homepage as canonical, only the homepage will be indexed.
Fix: Make canonical URLs unique per page.
Duplicate or Parameterized URLs
/product?id=123
/product?ref=facebook
Google might see these as duplicates.
Fix: Use URL rewriting or canonicalize to one main version.
Common Mistakes That Break SEO
Using IDs or Query Strings
/page?id=1023
Problem: Search engines can’t infer meaning; these URLs aren’t shareable or readable.
Fix: Use slugs:
/blog/how-to-build-seo-friendly-urls
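A small helper can derive slugs like that from titles; one possible sketch, not a standard library function:

```js
// Turns "How to Build SEO-Friendly URLs!" into "how-to-build-seo-friendly-urls"
function slugify(title) {
  return title
    .toLowerCase()                 // lowercase everything (see the case pitfall below)
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')  // strip special characters
    .replace(/[\s_]+/g, '-')       // spaces and underscores become hyphens
    .replace(/-+/g, '-');          // collapse repeated hyphens
}
```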
Uppercase or Mixed Case
/About-Us and /about-us
Problem: These are treated as two different URLs, creating a duplicate-content risk.
Fix: Always lowercase.
No Canonical Tags for Duplicates
/blog/seo-url-guide
/blog/seo-url-guide?ref=twitter
Problem: Search engines see both as separate pages.
Fix:
```html
<link rel="canonical" href="https://example.com/blog/seo-url-guide" />
```
Using Underscores or Spaces
/seo_friendly_url or /seo friendly url
Problem: Google treats _ as a word joiner rather than a separator, so seo_friendly_url reads as a single word.
Fix: Use hyphens:
/seo-friendly-url
Changing URLs Frequently
Problem: You lose backlinks and rankings every time the URL changes.
Fix: Keep a stable structure and redirect old URLs (301 redirect) if needed.
How Can We Improve URL SEO?
Here’s a checklist developers should follow when building or reviewing a project:
Plan URL Structure Early
Before writing any routes, define a logical content hierarchy:
/blog/
/blog/[slug]
/category/[category]
/product/[product-slug]
Use Slugs, Not IDs
Use human-readable slugs in the database and routes.
Example (Next.js):

```js
// pages/blog/[slug].js
export async function getStaticPaths() {
  // getAllPosts is your data-layer helper that returns every post with its slug.
  const posts = await getAllPosts();
  return {
    paths: posts.map((post) => ({ params: { slug: post.slug } })),
    fallback: false,
  };
}
```
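To complete the route, each slug also needs page data and a component; a minimal companion sketch, where getPostBySlug (like getAllPosts above) is a hypothetical data-layer helper:

```js
// pages/blog/[slug].js (continued)
export async function getStaticProps({ params }) {
  const post = await getPostBySlug(params.slug); // hypothetical helper
  return { props: { post } };
}

export default function BlogPost({ post }) {
  return (
    <article>
      <h1>{post.title}</h1>
    </article>
  );
}
```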
Implement Canonical Tags
Every page should declare its main (preferred) URL:
```html
<link rel="canonical" href="https://example.com/blog/seo-tips" />
```
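In Next.js, you can derive the canonical from the current route so every page declares a unique value automatically; a sketch that assumes the site lives at https://example.com and that query strings should be stripped:

```jsx
import Head from 'next/head';
import { useRouter } from 'next/router';

export default function CanonicalTag() {
  const { asPath } = useRouter();
  // Drop the query string so /blog/seo-tips?ref=twitter canonicalizes to /blog/seo-tips.
  const path = asPath.split('?')[0];
  return (
    <Head>
      <link rel="canonical" href={`https://example.com${path}`} />
    </Head>
  );
}
```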
Handle Redirects Properly
Use 301 redirects for permanent moves. Next.js example (next.config.js):

```js
// next.config.js
module.exports = {
  async redirects() {
    return [
      {
        source: '/old-blog/:slug',
        destination: '/blog/:slug',
        permanent: true, // permanent redirect, preserving link equity
      },
    ];
  },
};
```
Enforce HTTPS and Trailing Slash Rules
Decide on a consistent format:
- Either always use /about/ or always use /about, and 301-redirect the other form
- Always redirect HTTP → HTTPS

Example (Nginx, stripping trailing slashes):

```nginx
rewrite ^/(.*)/$ /$1 permanent;
```
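For the HTTP → HTTPS half, a typical Nginx server block looks like the following sketch; substitute your own server_name and target host:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    # 301-redirect all plain-HTTP traffic to the HTTPS origin.
    return 301 https://example.com$request_uri;
}
```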
Use Sitemap and Robots.txt
A clean URL structure is useless if search engines can’t find your pages.
Generate sitemap dynamically:
https://example.com/sitemap.xml
And allow crawling in robots.txt:
```txt
User-agent: *
Disallow:
Sitemap: https://example.com/sitemap.xml
```
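On Next.js, the next-sitemap package can generate both files at build time; a minimal config, with siteUrl set to your real domain:

```js
// next-sitemap.config.js
/** @type {import('next-sitemap').IConfig} */
module.exports = {
  siteUrl: 'https://example.com',
  generateRobotsTxt: true, // also emits a robots.txt that references the sitemap
};
```

Running the next-sitemap CLI after next build (typically as a postbuild script) then writes sitemap.xml and robots.txt into your public directory.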
How Bad URLs Create SEO Issues
Poorly structured or mismanaged URLs can confuse both users and search engines. From duplicate content to lost backlinks, bad URL practices can silently destroy your site’s rankings and traffic.
Here’s a breakdown of the most common URL-related SEO issues, their root causes, and what they typically look like in the real world:
Duplicate Content
When the same content is accessible through multiple URLs, search engines struggle to decide which version to index. This splits your ranking signals and weakens SEO authority.
Example:
https://example.com/blog/seo-tips
https://example.com/blog/seo-tips/?ref=homepage
Both URLs show the same article but are treated as separate pages.
Low Click-Through Rate (CTR)
Long, unreadable, or keyword-stuffed URLs look spammy and discourage users from clicking in search results.
Example:
https://example.com/post?id=93847&cat=seo&sort=asc
https://example.com/ultimate-guide-to-seo-2025
The second URL is shorter, descriptive, and instantly more clickable.
Indexing Errors
URLs with complex query strings or session IDs can confuse crawlers and prevent proper indexing. Google may ignore or skip these pages entirely.
Example:
https://example.com/product?sessionid=98237&tracking=affiliate
This dynamic URL can cause duplicate entries or missed indexing.
Lost Backlinks
When you change a page URL without setting up redirects, all backlinks pointing to the old link break, resulting in 404 errors and loss of SEO authority.
Example:
Old: https://example.com/blog/seo-basics
New: https://example.com/articles/seo-basics
Without a redirect, visitors and link equity are lost.
Crawl Budget Waste
If your site generates multiple versions of the same page with URL parameters, search engines waste crawl time on duplicates instead of important pages.
Example:
https://example.com/shop?page=1
https://example.com/shop?page=2
https://example.com/shop?sort=price&filter=red
Google may crawl dozens of these, using up your crawl budget.
What Determines If Google Indexes a URL
Just because Google crawls a page doesn’t mean it will index it, and even if it does, a poor URL structure can stop that page from ranking well. Crawling, indexing, and ranking are three separate steps, and your URL plays a role in all of them. A technically valid but messy URL may still fail to perform if it’s hard for users or search engines to understand.
To build URLs that Google loves to index and users love to click, you need to balance technical accessibility with a readable, well-structured design.
Here’s everything developers should know about what makes a URL both indexable and SEO-friendly:
Ensure Crawlability and Accessibility
If Google can’t crawl a URL, it will never reach the index.
Common blockers include restrictive robots.txt rules, “noindex” meta tags, or disallowed folders.
Example:
```txt
Disallow: /blog/
```
If your entire blog folder is blocked, none of your articles will appear in search results.
Fix: Always check your robots.txt file and <meta name="robots"> tags to confirm that public pages are crawlable. Only block private or duplicate pages.
Use Canonical Tags Correctly
When similar or duplicate pages exist, canonical tags tell Google which version to prioritize.
A missing or incorrect canonical tag can cause your preferred page to be ignored.
Example:
```html
<link rel="canonical" href="https://example.com/blog/seo-guide" />
```
Fix: Make sure every duplicate or variant page (like pagination, UTM links, or print versions) points to one main canonical URL.
Deliver High-Quality, Unique Content
Google indexes pages that add unique value. If multiple URLs contain thin or duplicate content, they’re often filtered out.
Example:
A blog post that just repeats manufacturer descriptions or “Coming Soon” text is unlikely to rank.
Fix: Ensure each indexed page has original, helpful content, whether text, visuals, or data, that provides clear user value.
Maintain Correct Server Responses
URLs must return a clean 200 OK status to be indexed. Redirect loops and 404 errors stop indexing entirely, and lingering 302 (temporary) redirects can delay how quickly ranking signals consolidate on the new URL.
Example:
If https://example.com/blog/seo-basics redirects to a broken or looping URL, Google drops it.
Fix: Use 301 redirects for permanent changes and verify that every important URL resolves correctly with a 200 OK.
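A quick way to spot-check status codes from Node (18+, which ships a global fetch) without following redirects; a small sketch:

```js
// check-status.js: print the raw status code of a URL without following redirects
const url = process.argv[2];

fetch(url, { redirect: 'manual' }).then((res) => {
  // 200 = indexable, 301/308 = permanent redirect, 404 = broken
  console.log(`${res.status} ${url}`);
});
```

Run it as node check-status.js https://example.com/blog/seo-basics and walk any 3xx chain manually to confirm it ends in a 200.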
Strengthen Internal Linking
Even a perfect URL won’t rank if Google never finds it.
Orphan pages (pages with no internal links pointing to them) are often skipped during crawling.
Example:
A new landing page not linked from your main navigation or sitemap may never be indexed.
Fix: Add internal links from relevant pages and include the URL in your XML sitemap so crawlers can discover it easily.
Optimize Load and Render Time
If your content loads too slowly or relies entirely on JavaScript rendering, Googlebot might stop processing before it sees the full page.
Example:
A React SPA that renders key text only after JS hydration might appear “empty” to Google.
Fix: Use server-side rendering (SSR), pre-rendering, or static generation to ensure the main content is visible on initial load.
Keep URLs Descriptive and Readable
An SEO-friendly URL should tell both users and search engines what the page is about. Avoid random IDs, numbers, or encoded characters.
Example:
✅ https://example.com/blog/seo-friendly-urls
❌ https://example.com/post?id=9834&topic=seo
Use Hyphens, Not Underscores
Search engines treat hyphens as word separators, but underscores connect words.
Example:
✅ seo-friendly-urls
❌ seo_friendly_urls
Stick to Lowercase and Avoid Special Characters
Uppercase letters and special symbols (?, &, =, %) can cause duplicate URLs or indexing issues.
Fix: Keep all URLs lowercase, clean, and alphanumeric, with no unnecessary parameters.
Maintain a Logical Folder Structure
Organized URLs help Google understand your site’s hierarchy and context.
Example:
✅ https://example.com/blog/seo/advanced-tips
❌ https://example.com/xyz123/tips?ref=seo
How to Check URL Indexing Status?
Google Search Console (GSC)
Use the “Inspect URL” feature:
- Shows whether the URL is indexed, discovered but not indexed, or crawled but not indexed
- Reveals canonical status, coverage, and crawl history
site: Search Operator
Search in Google:
site:yourdomain.com/page-slug
If your page appears, it’s indexed.
robots.txt Tester
Verify your URL isn’t blocked by robots.txt rules.
API / Programmatic Check
Google Indexing API (officially supported only for job-posting and livestream pages):

```http
POST https://indexing.googleapis.com/v3/urlNotifications:publish
```
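The request body names the URL and the event type; per Google's documentation it looks like this (the URL itself is illustrative):

```json
{
  "url": "https://example.com/jobs/frontend-engineer",
  "type": "URL_UPDATED"
}
```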
URL Best Practices for Common Frameworks
React (with React Router)
- Use dynamic routes (`/blog/:slug`) with `react-router-dom`.
- Add canonical tags using `<Helmet>` from `react-helmet-async`.
- Generate a sitemap using `react-router-sitemap` or the `sitemap` npm package.
- For better SEO, consider prerendering with `react-snap` or using SSR (e.g., with Next.js or Remix).
Next.js
- Use dynamic routes (`/app/[slug]`).
- Define canonical tags in `<Head>`.
- Use `next-sitemap` for sitemap generation.
Vue / Nuxt
Use `pages/blog/[slug].vue` with dynamic routing and set the canonical meta via `useHead()` (Nuxt 3) or `head()` (Nuxt 2), as in the sketch below.
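A minimal Nuxt 3 sketch (the domain is illustrative; useHead and useRoute are auto-imported by Nuxt 3):

```vue
<!-- pages/blog/[slug].vue -->
<script setup>
const route = useRoute();

// Unique canonical per post, derived from the route param.
useHead({
  link: [
    { rel: 'canonical', href: `https://example.com/blog/${route.params.slug}` },
  ],
});
</script>
```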
Some Developer Tools to Improve Indexing
- XML Sitemap Generator: tools like `next-sitemap` or `sitemap-generator-cli` help list all valid URLs.
- Structured Data Testing Tool (Rich Results Test): helps Google understand your content type better (articles, products, etc.).
- URL Inspection in Search Console (formerly “Fetch as Google”): test how Googlebot views your page after rendering.
- Log Analyzer: check server logs to see if Googlebot is crawling your pages properly.
Conclusion
A developer's work determines whether a website can ever rank well. SEO-friendly URLs aren't a marketing gimmick; they are the structural DNA of your site's SEO health.
When you design URLs with clarity, consistency, and logic from the start, you’re not just helping Google; you’re building a site that’s faster, cleaner, and more user-friendly.
Remember:
“SEO doesn’t start with keywords — it starts with your code.”

Frontend Engineer | Building tools that make developers' lives easier, one commit at a time.