Robots.txt Syntax for Dynamic URLs
-
I want to Disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page=
Which is the proper syntax?
Disallow: ?Page=
Disallow: ?Page=*
Disallow: *?Page=*
Or something else?
-
Thanks, Alick300. Unfortunately, the slash doesn't appear like that in the URLs on this site; they look like this:
www.domain.com/page.html?Page=...
In running through an online robots.txt tester, all three versions in my original question seem to work. Until proven otherwise, I'm using the first one because it's the simplest.
-
Hi Bill,
Disallow: /?Page= will work
Thanks
-
Hi, James. It's not pagination I'm trying to disallow. The site structure has URLs that include strings like "Page=give&...", which open up a blank form, and those links come from scores of web pages we do want spidered. Since the "give" page is an empty form, we're getting tons of duplicate content errors as a result.
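On crawlers that honor wildcards (Google and Bing both do), a single pattern can match ?Page= at any depth. The thread never settled on one final rule, so the following is a sketch rather than the accepted answer:

User-agent: *
# "*" matches any run of characters, so this blocks /page.html?Page=give,
# /donate.html?Page=form, and any other URL containing "?Page="
Disallow: /*?Page=

The pages that merely link to the ?Page= URLs stay crawlable; only URLs matching the pattern are blocked.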
Related Questions
-
Robots.txt Tester - syntax not understood
I've looked in the robots.txt Tester and I can see 3 warnings: there is a 'syntax not understood' warning for each of these XML sitemaps:
https://www.pkeducation.co.uk/post-sitemap.xml
https://www.pkeducation.co.uk/sitemap_index.xml
How do I fix or reformat these to remove the warnings? Many thanks in advance. Jim
Technical SEO | | JamesHancocks10 -
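A common cause of 'syntax not understood' warnings on sitemap lines is pasting the URLs into robots.txt without the Sitemap: directive. Assuming that is what happened here (the question doesn't show the actual file), the corrected lines would be:

Sitemap: https://www.pkeducation.co.uk/post-sitemap.xml
Sitemap: https://www.pkeducation.co.uk/sitemap_index.xml

The Sitemap: directive takes an absolute URL and can appear anywhere in the file.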
Sizes and numbers in friendly urls - syntax
Ok, I'm trying to establish some business rules of syntax for SEO-friendly URLs. I'm doing this for an OpenCart online store, which uses an SEO-url field to construct the "friendly URLs". The good news is that I have total control over the URLs; the bad news is I had to do some tricky Excel work to populate them. That all said, I have a problem with items that have sizes. This is a crafts store, so many of the items are differentiated by size. Examples: Sleigh Bells come in 1/2", 3/4", 1", 1 1/2", etc. So far I've tried to stay away from the inch mark (") by spelling it out. Right now it's "inch" but could be "in". The numbers, fractions, and sizes create some ghastly friendly URLs. Is there any wisdom or syntax standard out there that would help me? I'm trying to avoid this: www.mysite.com//index.php?route=craft-accessories/bells/sleigh-bells/sleigh-bells-1-one-half-inch-with-loop I realize that the category (sleigh-bells) is repeated in the product name, but there are several 1 1/2" items in the store. Any thoughts would be useful, even if it's links to good SEO sites that have mastered the myriad of issues with dimensions in URLs. Thanks
Technical SEO | | jbcul0 -
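Purely as an illustration (none of this comes from the thread): one consistent scheme is decimal sizes plus a short unit token, e.g. /bells/sleigh-bells-1-5-in and /bells/sleigh-bells-0-5-in rather than spelled-out fractions. Whichever token is chosen, applying it uniformly across the catalog matters more than the token itself.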
Blocked jquery in Robots.txt, Any SEO impact?
I've heard that Google is now indexing links and content delivered through JavaScript and jQuery. My Webmaster Tools is showing that some jQuery links are blocked by robots.txt. Sorry, I'm not a developer or designer. I want to know: does this have any impact on my SEO? And how can I unblock it for the robots? Check this screenshot: http://i.imgur.com/3VDWikC.png
Technical SEO | | hammadrafique0 -
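Unblocking usually means adding Allow rules (or deleting the Disallow) for the script paths. A sketch, assuming the blocked files are ordinary .js and .css assets (the screenshot isn't available to confirm):

User-agent: Googlebot
# Let Googlebot fetch scripts and stylesheets even when their parent
# directory is disallowed; "$" anchors the pattern to the end of the URL
Allow: /*.js$
Allow: /*.css$

More specific (longer) rules win over shorter ones in Google's robots.txt evaluation, so these Allow lines override a broader Disallow.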
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards were still valid.
Technical SEO | | mkhGT0 -
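They are: Google and Bing both still honor "*" (any run of characters) and "$" (end of URL) in robots.txt, though neither is part of the original standard, so other crawlers may ignore them. A sketch with placeholder paths (the question doesn't name the real ones):

User-agent: *
# Block document files anywhere on the site
Disallow: /*.pdf$
# Block filtered URLs flagged as duplicate content
Disallow: /*?filter=

Note that blocking crawl this way doesn't deindex pages that are already indexed; it only stops them from being fetched.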
Internal search : rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not; I know they should not, according to Google's guidelines. And they make a bunch of duplicated pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), thus their CTR would plummet and I would lose a bit of traffic too... The last idea I had was to use a rel=canonical tag pointing to the original search page (that is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried, so I'm not sure of this.) Of course I did some research on the subject, but each of my findings recommended only one of the 3 methods! One even recommended noindex + a robots.txt block, which is stupid because the noindex would then be useless... Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | | JohannCR0 -
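One mechanic worth pinning down in that question: a robots.txt Disallow stops crawling, not indexing, which is exactly why the poster expects bare, title-less listings to linger in the SERPs. A sketch of that option, assuming internal search lives under /search (the real path isn't given):

User-agent: *
# Stops crawling of internal search results; URLs already indexed can
# remain in the SERPs as bare listings until they eventually drop out
Disallow: /search

Conversely, a noindex (meta tag or X-Robots-Tag header) removes pages from the index but only works while crawlers can still fetch the pages, which is why pairing it with a robots.txt block defeats itself, just as the poster notes.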
Urls with or without .html ending
Hello, Can anyone show me some authoritative info on whether links are better with or without a .html ending? Thanks in advance
Technical SEO | | sesertin0 -
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, it showed around 3,000 pages of duplicate content. Almost all of them are catalog search results pages. I also did a site search on Google, and they have most of the results pages in their index too. I think I should just disallow the bots in the /catalogsearch/ subfolder, but I'm not sure if this will have any negative effect?
Technical SEO | | JordanJudson0
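A sketch of the disallow described in that question, using the /catalogsearch/ folder it names:

User-agent: *
# Keep crawlers out of internal catalog search results
Disallow: /catalogsearch/

The main side effect to expect: pages already indexed won't be recrawled, so they can sit in the index as bare URL-only listings for a while before dropping out.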