Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
How to handle (internal) search result pages?
-
Hi Mozers,
I'm not quite sure of the best way to handle internal search pages. In this case it's an ecommerce website with 8,000+ products, and search pages currently look like: example.com/search.php?search=QUERY+HERE.
I'm leaning towards making them noindex, follow, since pages like this can easily be abused to create duplicate content, and because I'd rather have the category pages rank.
How would you handle this?
-
If none of these pages are indexed, you can block them via robots.txt. But if someone else links to a search page from somewhere on the web, Google might include the URL in the index anyway, and then it'll just be a blank entry: because the page is blocked in robots.txt, Google can't crawl it to see the instruction not to index it.
-
Thanks for the quick response.
If the pages are presently not indexed, is there any advantage to noindex, follow over blocking via robots.txt?
I guess my question is whether it's better or worse to have those pages spidered (by definition, any content that appears on these pages exists somewhere else on the site, since it is a search page)... what do you think?
-
Blocking the pages via robots.txt prevents the spiders from reaching those pages. It doesn't remove those pages from the index if they are already there; it just prevents the bots from crawling them again.
If you want these pages removed from the index, and you don't want them inflating the number of your pages the search engines hold, ideally you remove them with the noindex tag.
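For reference, a minimal sketch of what the noindex, follow approach looks like in the page markup (the comment and placement are illustrative, not taken from the site in question):

```html
<!-- In the <head> of each search result page: keep the page out of the
     index, but still let crawlers follow its links to product pages -->
<meta name="robots" content="noindex, follow">
```

If editing the search template is awkward, the same directive can also be sent as an HTTP response header on those URLs: `X-Robots-Tag: noindex, follow`.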
-
Hi Mark,
Can you explain why this is better than excluding the pages via robots.txt?
-
How did it turn out? And Mark, have you done much with internal search?
-
As long as you're sure that no organic search traffic is arriving via ranked search result pages on your site, there's no harm in simply blocking search engines from crawling those pages with the robots.txt directive I mentioned, and then focusing all your attention on the other pages of your site.
With regards to unique content, always try to find the time to produce unique content for the category pages, since those are the ones you said you wanted to rank. Normally this is feasible provided you haven't got over 1,000 categories.
Feel free to PM me a link to your ecommerce website if you would like me to look at the situation in greater detail.
-
Thanks for the reply. Yes, there is some chance of duplicate content. And to be honest, the search function is not really great.
There are no visitors coming in via the search pages, since we haven't built links specifically to those pages. As for unique content, that's hard: with so many products it's not really feasible. We are working on optimizing our top 100 products, though.
-
I'd do exactly what you're saying: make the pages noindex, follow. If they're already indexed, you can request removal of the search.php URLs from the engines through Webmaster Tools.
Let me know how it turns out.
-
How I would handle this depends on the performance of the ecommerce website and which entrance paths convert best.
You could easily instruct search engines not to crawl the search result pages by adding the following to your robots.txt (Disallow rules are prefix matches, so no trailing wildcard is needed):
User-agent: *
Disallow: /search.php?search=
But is there a real likelihood of the search results duplicating your actual category pages? It's unlikely in all honesty, though depending on your website content and product range, I suppose it's possible.
If many visits to your website do arrive via indexed search result pages, however, I would be inclined to leave them indexed and instead implement measures to ensure they won't be flagged as duplicate content.
Ways of handling this sometimes depend on your ecommerce platform and its capabilities, but more often than not it's just a case of ensuring there is plenty of unique content on your category pages (as there should be), so that no other pages of your website hinder their ranking potential.
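As a side note on how a Disallow rule like that behaves, here is a small sketch using Python's standard-library robots.txt parser. The domain and URLs are just the placeholders used in this thread, not a real site; the point is that rules are treated as path prefixes:

```python
from urllib import robotparser

# Hypothetical robots.txt for the site discussed above (example.com is a
# placeholder). Disallow rules are prefix matches against the URL path+query.
rules = """\
User-agent: *
Disallow: /search.php?search=
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Internal search result URLs are blocked from crawling...
blocked = rp.can_fetch("*", "https://example.com/search.php?search=blue+widgets")
# ...while category and product pages remain crawlable.
allowed = rp.can_fetch("*", "https://example.com/category/tea")
print(blocked, allowed)  # False True
```

Remember this only stops crawling; it does not remove a URL that is already in the index, which is why the noindex tag was suggested above for pages that are already indexed.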