Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
URL shows up in "inurl:" but not when using time parameters
-
Hey everybody,
I have been testing Google's inurl: operator to try to gauge how long ago Google indexed our page. So, this brings me to my question.
If we run inurl:https://mysite.com, all of our domains show up.
If we run inurl:https://mysite.com/specialpage, the domain shows up as being indexed.
If I append the "&as_qdr=y15" parameter to the search URL, https://mysite.com/specialpage does not show up.
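For reference, here's a minimal Python sketch of how I'm building that time-restricted query URL (the helper name is just mine; as_qdr is the same parameter you get by appending "&as_qdr=y15" by hand, or via Tools > Any time in the UI):

```python
from urllib.parse import urlencode

def google_search_url(query, qdr=""):
    """Build a Google search URL. qdr like 'y15' restricts results to
    pages Google first saw in the last 15 years ('d', 'w', 'm', 'y'
    plus an optional number)."""
    params = {"q": query}
    if qdr:
        # as_qdr is the "query date restrict" parameter Google accepts
        # in its search URLs.
        params["as_qdr"] = qdr
    return "https://www.google.com/search?" + urlencode(params)

# The same inurl: query, with and without the 15-year restriction:
print(google_search_url("inurl:https://mysite.com/specialpage"))
print(google_search_url("inurl:https://mysite.com/specialpage", qdr="y15"))
```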
Does anybody have any experience with this? Also, on the same note: when I look at how many pages Google has indexed, it is about half of what we see in our backend/sitemap. Any thoughts would be appreciated.
TY!
-
There are several ways to do this; some are more accurate than others. If you have access to the site's Google Analytics property, you could filter your view down to a single page / landing page and see when that page first got traffic (sessions / users). Note that if a page existed for a long time before it saw much usage, this wouldn't be very accurate.
If it's a WordPress site which you have access to, edit the page and check the published date and / or revision history. If it's a post of some kind, it may display its publishing date on the front-end without you even having to log in. Note that if content has been migrated from a previous WordPress site and the publishing dates were not updated, this may not be wholly accurate either.
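In fact, if the site exposes the default WordPress REST API (many do, though some disable or re-prefix /wp-json/), you can pull the published date without logging in at all. A rough sketch, with hypothetical helper names:

```python
from urllib.parse import urlencode

def wp_post_endpoint(site, slug):
    """Build the default WordPress REST endpoint for a post by slug.
    Assumes the site has not disabled or re-prefixed /wp-json/."""
    return "https://%s/wp-json/wp/v2/posts?%s" % (site, urlencode({"slug": slug}))

def publish_date(api_response):
    """Pull the 'date' field (site-local publish time) from the JSON
    list the posts endpoint returns; empty string if no match."""
    return api_response[0]["date"] if api_response else ""

# A live fetch would look like (requires network access):
#   import json
#   from urllib.request import urlopen
#   with urlopen(wp_post_endpoint("example.com", "hello-world")) as r:
#       posts = json.load(r)
#   print(publish_date(posts))
```

Remember the same caveat as above: a migrated post can carry a misleading date.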
You can also see when the Wayback Machine first archived the specified URL. The Wayback Machine uses a crawler which is always discovering new pages, not necessarily on the date(s) they were created, so this method can't be trusted 100% either.
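If you want to script the Wayback Machine check, its CDX API can return the earliest capture of a URL. A rough sketch (helper names are mine, and the sample response below is illustrative, not a real capture):

```python
from urllib.parse import urlencode

def cdx_first_capture_url(page_url):
    """Build a Wayback CDX API query for the earliest capture of
    page_url (limit=1; ascending timestamp order is the default)."""
    params = {"url": page_url, "output": "json", "limit": "1",
              "fl": "timestamp,original"}
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

def first_capture_date(cdx_json):
    """The CDX JSON output is a header row followed by data rows;
    timestamps are YYYYMMDDhhmmss. Returns YYYY-MM-DD or ''."""
    if len(cdx_json) < 2:
        return ""
    ts = cdx_json[1][0]
    return "%s-%s-%s" % (ts[:4], ts[4:6], ts[6:8])

# Sample response shape (made-up values for illustration):
sample = [["timestamp", "original"],
          ["20110315120000", "https://mysite.com/specialpage"]]
print(first_capture_date(sample))  # 2011-03-15
```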
In reality, even using the "inurl:" and "&as_qdr=y15" operators will only tell you when Google first saw a web page; it won't tell you how old the page is. Web pages do not record their age in their markup, so in a way your quest is impossible (if you want to be 100% accurate).
-
So, then I will pose a different question to you. How would you determine the age of a page?
-
Oh, ty! I'll try that out!
-
Not sure on the date / time querying aspect, but instead of using "inurl:https://mysite.com" you might have better luck checking indexation via "site:mysite.com" (don't include subdomains, www or the protocol like HTTP / HTTPS).
Then be sure to tell Google to 'include' omitted results (if that notification shows up, sometimes it does - sometimes it doesn't!)
You can also use Google Search Console to check indexed pages:
- https://d.pr/i/oKcHzS.png (screenshot)
- https://d.pr/i/qvKhPa.png (screenshot)
You can only see the top 1,000 URLs at a time, but it does give you a count of all indexed pages. I am pretty sure you could get more than 1k pages out of it by using the filter function repeatedly (taking fewer than 1k URLs from each site area at a time).
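If you'd rather script it, the Search Console Search Analytics API lets you page past the 1,000-row UI limit via startRow. One caveat: it only returns pages that received impressions in your date range, so treat it as a lower bound rather than a full index dump. A rough sketch (the helper name is mine; the actual call needs an OAuth-authorised google-api-python-client service):

```python
def sc_query_body(start_date, end_date, start_row=0, row_limit=1000):
    """Request body for the Search Console Search Analytics API,
    grouped by page. Step start_row in increments of row_limit to
    page past the first batch of rows."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["page"],
        "rowLimit": row_limit,
        "startRow": start_row,
    }

# With an authorised service object, the call would look like (not run here):
#   rows = service.searchanalytics().query(
#       siteUrl="https://mysite.com/",
#       body=sc_query_body("2024-01-01", "2024-12-31", start_row=1000),
#   ).execute()
```

Keep requesting until a batch comes back empty, and you've collected every page with at least one impression in the range.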