My domain has been active for one year. I want to know more about the following two statements:
Most likely, all your pages aren't "worthy" of current indexing by Google
Make sure as well to REMOVE any questionable or thin content.
BTW: Thanks for your wish!
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
I run an online retail store. Initially, Google indexed 10K+ pages of my website. When I checked a week ago, the count of indexed pages was 8K+; today it is 7,680.
I can't understand why this is happening or how I can fix it. I want Google to index as many pages of my website as possible.
So your concern is that Google will crawl all of the following pages if I don't do anything with them. Right?
http://www.vistastores.com/table-lamps
http://www.vistastores.com/table-lamps?limit=100&p=2
http://www.vistastores.com/table-lamps?limit=60&p=2
http://www.vistastores.com/table-lamps?limit=40&p=2
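One option for paginated series like the URLs above is to declare the sequence with rel="next" and rel="prev" link elements, which Google supported as pagination hints at the time (it has since stopped using them). A sketch, assuming a page 3 exists:

```html
<!-- In the <head> of http://www.vistastores.com/table-lamps?limit=100&p=2 -->
<link rel="prev" href="http://www.vistastores.com/table-lamps?limit=100" />
<link rel="next" href="http://www.vistastores.com/table-lamps?limit=100&p=3" />
```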
Currently, my website is on the 3rd page of Google for the keyword "Discount Table Lamps".
My fear is that if Google crawls multiple pages with duplicate title tags, it may mess up my current ranking for that keyword.
What do you think about it?
Will it really work? Both pages have different content:
http://www.vistastores.com/table-lamps has 100 products, and
http://www.vistastores.com/table-lamps?limit=100&p=2 has a different, unique set of 100 products.
Another problem concerns the meta information. Both pages have the same meta info. If Google indexes both pages, it may raise a duplicate-meta warning across too many pages.
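A common way to avoid duplicate titles and descriptions across paginated category pages is to differentiate them per page. A sketch (the exact wording is only an example):

```html
<!-- On http://www.vistastores.com/table-lamps -->
<title>Table Lamps | Vista Stores</title>

<!-- On http://www.vistastores.com/table-lamps?limit=100&p=2 -->
<title>Table Lamps - Page 2 | Vista Stores</title>
```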
Honestly, I don't get it, because I have read Google's help article about URL parameters and it says something different. Google suggests using Google Webmaster Tools, but I have blocked all dynamic pages with robots.txt.
So I want to know the best practice that will help me improve crawling and the number of indexed pages.
Today I was reading Google's help article on URL parameters:
http://www.google.com/support/webmasters/bin/answer.py?answer=1235687
I learned that Google gives value to URLs whose parameters change or determine the content of a page. There are many pages on my website with similar values for name, price, and number of products, but I have blocked them all in robots.txt with the following syntax.
URLs:
http://www.vistastores.com/table-lamps?dir=asc&order=name
http://www.vistastores.com/table-lamps?dir=asc&order=price
http://www.vistastores.com/table-lamps?limit=100
Syntax in robots.txt:
Disallow: /*?dir=
Disallow: /*?p=
Disallow: /*?limit=
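Note that blocking these URLs in robots.txt only stops crawling; a blocked URL can still appear in the index if other pages link to it. An alternative that lets Googlebot crawl the parameterized versions but keeps them out of the index is a page-level robots meta tag, for example:

```html
<!-- In the <head> of each parameterized page (e.g. ?dir=, ?limit=, ?p=) -->
<meta name="robots" content="noindex, follow" />
```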
Now I am confused. Which solution will give the maximum SEO benefit?
Today I was checking my Google Merchant Center account and learned that 145 products from my product feed are inactive. I checked a few products manually and found the following error:
"The URL specified in your data feed wasn't working correctly when we reviewed this item." You can see more in the attached image.
I have checked my URLs and they work fine; there is no issue with them.
So how do I fix this issue?
I want to add a response to this question after a long time, because I have made a few changes based on the discussion. You can see them in this Excel sheet.
I have changed the entire URL structure and finished the following tasks.
I have a very simple question about crawling: how will Google react to these changes? Will it slow down crawling of my site or not? Any other input in the same direction would also help.
I am targeting the US, so I need high organic rankings in US web search.
One of my competitors restricts website access to specific IP addresses or geo locations.
I checked multiple categories to learn more. What is going on with this restriction, and why did they set it up?
One SEO forum also restricts website access to specific locations.
I can understand that it may help them stop thread spam from unnecessary sign-ups or Q&A.
But why has Lamps Plus set this up? Is there a specific reason?
Can I improve my organic ranking this way?
Such a restriction might help me maintain user statistics such as bounce rate, average page views per visit, etc.
Even if you don’t want a page to rank,
Is PageRank a ranking factor? I don't think so. I am not opposing you, but in my category there are many websites that perform well with low PageRank, while a high-PageRank website is still at the bottom.
Do you have any idea about this?
Today I was reading about nofollow on Wikipedia. The following statement is over my head, and I am not able to understand it properly:
"Google states that their engine takes "nofollow" literally and does not "follow" the link at all. However, experiments conducted by SEOs show conflicting results. These studies reveal that Google does follow the link, but does not index the linked-to page, unless it was in Google's index already for other reasons (such as other, non-nofollow links that point to the page)."
That part is about indexing, and about ranking the linked-to page for the anchor text, in the case of external links. I am aware of that section: the page may not appear in relevant results for any keyword in Google web search.
But what about internal links? I have set rel="nofollow" on too many internal links.
I have an archived blog post by Randfish on the same subject. I read the following question there.
Q. Does Google recommend the use of nofollow internally as a positive method for controlling the flow of internal link love? [In 2007]
A: Yes – webmasters can feel free to use nofollow internally to help tell Googlebot which pages they want to receive link juice from other pages
(Matt's precise words were: The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity. Plenty of other mechanisms would also work (e.g. a link through a page that is robot.txt'ed out), but nofollow on individual links is simpler for some folks to use. There's no stigma to using nofollow, even on your own internal links; for Google, nofollow'ed links are dropped out of our link graph; we don't even use such links for discovery. By the way, the nofollow meta tag does that same thing, but at a page level.)
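Matt's distinction between link-level and page-level nofollow looks like this in markup (the URL is a placeholder):

```html
<!-- Link-level: this single link is dropped from Google's link graph -->
<a href="/some-internal-page" rel="nofollow">Internal page</a>

<!-- Page-level: every link on this page is treated as nofollow -->
<meta name="robots" content="nofollow" />
```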
Matt has given an excellent answer to the following question. [In 2011]
Q: Should internal links use rel="nofollow"?
A: Matt said:
"I don't know how to make it more concrete than that."
I use nofollow for each internal link that points to an internal page that has the meta name="robots" content="noindex" tag. Why should I waste Googlebot's resources, and those of my server, if in the end the target must not be indexed? As far as I can say, and for years now, this has not caused any problems at all.
For internal page anchors (links with a hash mark in front, like "#top"), the answer is "no", of course.
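The pattern described above, sketched with placeholder URLs:

```html
<!-- On the linking page: tell Googlebot not to bother with the target -->
<a href="/customer-login" rel="nofollow">Login</a>

<!-- In the <head> of /customer-login: keep the target out of the index -->
<meta name="robots" content="noindex" />
```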
I am still using nofollow attributes on my website.
So what is the current best practice? Is it still necessary to use the nofollow attribute on internal links?