How to handle (internal) search result pages?
-
Hi Mozers,
I'm not quite sure what the best way is to handle internal search pages. In this case it's for an ecommerce website with 8,000+ products, and search pages currently look like: example.com/search.php?search=QUERY+HERE.
I'm leaning towards making them noindex, follow, since pages like this can easily be abused for duplicate content and because I'd rather have the category pages ranked.
How would you handle this?
-
If none of these pages are indexed, you can block them via robots.txt. But if someone else links to a search page from somewhere on the web, Google might still include the URL in the index, and then it will just be a blank entry: because the page is blocked via robots.txt, the crawler can't fetch it and see the instruction not to index it.
-
Thanks for the quick response.
If the pages are presently not indexed, is there any advantage to noindex, follow over blocking via robots.txt?
I guess my question is whether it's better or worse to have those pages spidered (by definition, any content that appears on these pages exists somewhere else on the site, since it is a search page)... what do you think?
-
Blocking the pages via robots.txt prevents the spiders from reaching those pages. It doesn't remove pages that are already in the index; it just stops the bots from getting to them.
If you want these pages removed from the index, and you don't want them inflating the number of your pages the search engines hold, the ideal way is to remove them with the noindex tag.
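For reference, the noindex tag is a single line in the head of the search results template - a minimal sketch, assuming search.php outputs a normal HTML head (the "follow" part keeps the product links on the page crawlable):
<meta name="robots" content="noindex, follow">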
-
Hi Mark,
Can you explain why this is better than excluding the pages via robots.txt?
-
How did it turn out? And Mark, have you done much with internal search?
-
As long as you're sure that no organic search traffic is coming in via ranked search results pages on your site, there would be no harm in simply keeping search engines away from those pages with the robots.txt directive I mentioned above - then just focus all your attention on the other pages of your site.
With regards to unique content, always try to find the time to produce unique content on the category pages - these are the ones you mentioned you wanted to rank. Normally this is feasible provided you haven't got over 1,000 categories.
Feel free to PM me a link to your ecommerce website if you would like me to take a look at the situation in greater detail.
-
Thanks for the reply. Yes, there is some chance of duplicate content. And to be honest, the search function is not really great.
There are no visitors coming from the search pages, since we haven't built links specifically for those pages. As for the unique content, it's hard: with so many products it's not really feasible. We are working on optimizing our top 100 products, though.
-
I'd do exactly what you're saying. Make the pages noindex, follow. If they're already indexed, you can remove the search.php URLs from the engines through Webmaster Tools.
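Since the results are generated by search.php, another option - and this is an assumption on my part that you can edit the script before it sends any output - is to send the directive as an HTTP header instead of editing the markup; Google treats an X-Robots-Tag header the same way as the meta tag. A minimal sketch:
<?php
// search.php - send the robots directive as a response header.
// This must run before any HTML output, otherwise header() is ignored with a warning.
header('X-Robots-Tag: noindex, follow');
// ... existing search logic and results rendering continues below ...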
Let me know how it turns out.
-
How I would handle this depends on how the ecommerce website performs and which entrance paths to the site convert best.
You could easily instruct search engines not to crawl the search results pages by adding the following to your robots.txt:
Disallow: /search.php?search=*
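For that directive to be picked up, it needs to sit under a user-agent group, so the complete file would look something like this (a minimal sketch - any rules you already have would sit alongside it):
User-agent: *
Disallow: /search.php?search=*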
But is there a real likelihood of duplicate content matching your actual category pages? In all honesty it's unlikely - but depending on your website content and product range, I suppose it's possible.
If many visits to your website arrive via indexed search result pages, however, I would be inclined to leave them indexed and implement measures to ensure they won't be flagged as duplicate content.
How to do that sometimes depends on your ecommerce provider and its capabilities, but more often than not it's just a case of ensuring there is plenty of unique content on your category pages (as there should be) so that no other pages of your website can hinder their ranking potential.