Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
How does Indeed.com make it to the top of every single search despite having aggregated or duplicate content?
-
How does Indeed.com make it to the top of every single search despite having duplicate content? Google says it prefers original content and gives preference to sites that provide it, but that statement seems contradicted when I look at Indeed.com: they aggregate content from other sites, yet they still rank higher than the sites that originally provided the content.
-
Hello Anirban,
The main reason large-scale websites like Indeed feature so prominently in SERPs is that there are more ranking factors at work than just content. While Panda was created to address duplicate content issues, and it is widely known that duplicate content can lead to penalties and reduced ranking potential, duplicate content can be overshadowed by other ranking factors, such as a website's link profile and its authority relative to other websites in its niche.
For example, Wikipedia is widely cited by webmasters around the world, but it features a lot of repetition and is an internal linking nightmare where crawling and indexing are concerned. That being said, it is normal to find a Wikipedia page every time a "What Is" query is made.
YouTube is in a similar situation. There are duplicated videos and content galore on that domain, but it is linked to so frequently that it simply cannot be beaten when it comes to domain authority and relevance.
While Indeed is probably being impacted by duplicate content, that impact pales in comparison to its link profile and the relevance it has in its industry.
Hope this helps! Let me know if I can help with anything else.
Best regards,
Rob
-
Hi there,
Indeed.com is a massive site with solid domain authority (93/100), links, and social metrics, to name a few important factors. As a website that posts listings, duplicate content probably isn't as big an issue for it in the eyes of search engines as it would be for other types of sites. And though the content might be pulled from other places, in many cases it is user-generated, which may be treated differently from regular website content entered through a CMS. The same could be said for similar websites in other industries, such as Zillow or Amazon.
I am sure others will weigh in but that should help clarify.
Related Questions
-
Duplicate Content with ?page_id URLs in WordPress
Hi there, I'm trying to figure out the best way to solve a duplicate content problem I have due to the page IDs that WordPress automatically assigns to pages. I know that in order to resolve this I have to use canonical URLs, but the problem is I can't figure out the URL structure. Moz is showing me thousands of duplicate content errors that are mostly related to page IDs. For example, this is how a page's URL should look on my site. Moz is telling me there are 50 duplicate content errors for this page. The page ID for this page is 82, so the duplicate content errors appear as follows, and so on, for 47 more pages. The problem repeats itself with other pages as well. My permalinks are set to "Post Name", so I know that's not the issue. What can I do to resolve this? How can I use canonical URLs to solve this problem? Any help will be greatly appreciated.
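A sketch of the usual fix for this, assuming the duplicates are ?page_id= variants of a clean post-name URL (the domain and slug below are placeholders): a canonical tag in each page's head tells search engines which URL is the original, so the ?page_id versions consolidate to it.

```html
<!-- Placed in the <head> of the page template; the domain and slug are placeholders -->
<link rel="canonical" href="https://www.example.com/your-page-slug/" />
```

Most WordPress SEO plugins (Yoast, for example) output a self-referencing canonical like this automatically, which is usually easier than templating it by hand.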
On-Page Optimization | SpaMedica
-
Solve duplicate content issues by using robots.txt
Hi, I have a primary website, and besides that I also have some secondary websites that have the same content as the primary website. This leads to duplicate content errors. Because there are many duplicate content URLs, I want to use the robots.txt file to prevent Google from indexing the secondary websites, to fix the duplicate content issue. Is that OK? Thanks for any help!
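For reference, a minimal sketch of what that robots.txt would look like on each secondary site (it blocks all compliant crawlers site-wide):

```
User-agent: *
Disallow: /
```

Keep in mind that robots.txt blocks crawling, not indexing: a blocked URL can still show up in results if other sites link to it. A cross-domain rel="canonical" on the secondary pages, pointing at the primary site's equivalent pages, is generally the safer way to consolidate duplicate content.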
On-Page Optimization | JohnHuynh
-
Blog.mysite.com or mysite.com/blog?
Hi, I'm just curious what the majority think is the best way to start a blog on your website for SEO benefits. Is it better to have it under a subdomain or a directory? Or does it even matter?
On-Page Optimization | truckguy77
-
Is content aggregation good SEO?
I didn't see this topic specifically addressed here: what's the current thinking on using content aggregation for SEO purposes? I'll use flavors.me as an example. Flavors.me lets you set up a domain that pulls in content from a variety of services (Twitter, YouTube, Flickr, RSS, etc.). There's also a limited ability to publish unique content as well. So let's say that we've got MyDomain.com set up, and most of the content is being drawn in from other services. So there's blog posts from WordPress.com, videos from YouTube, a photo gallery from Flickr, etc. How would Google look at this scenario? Is MyDomain.com simply scraped content from the other (more authoritative) sources? Is the aggregated content perceived to "belong" to MyDomain.com or not? And most importantly, if you're aggregating a lot of content related to Topic X, will this content aggregation help MyDomain.com rank for Topic X? Looking forward to the community's thoughts. Thanks!
On-Page Optimization | GOODSIR
-
http://www.xxxx.com does not redirect to http://xxx.com
When typing in my website URL, www.earthsaverequipment.com successfully redirects to earthsaverequipment.com as specified in robots. However, if you type http://www.earthsaverequipment.com it brings up a 404 error. Is this a potential issue? If so, is there a way to fix it? Thanks
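If the server is Apache, a sketch of the redirect rule for this (using the domain from the question; assumes mod_rewrite is available) would be:

```apache
# .htaccess sketch: send every www request to the bare domain with a 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.earthsaverequipment\.com$ [NC]
RewriteRule ^(.*)$ http://earthsaverequipment.com/$1 [R=301,L]
```

A 404 on the www host often means the rule never runs at all, because the www hostname isn't configured (DNS record or virtual host), so that is worth checking first.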
On-Page Optimization | Earthsaver
-
.us vs .com
In general, from what I have experienced, a location-specific extension such as .co.uk, geo-targeted to the same location, gives the best ranking results. But when I look at results from the US, page after page shows .com results. Surely, if my statement above is true, a .us domain extension should rank better than a .com?
On-Page Optimization | activitysuper
-
Would it be bad to change the canonical URL to the most recent page that has duplicate content, or should we just 301 redirect to the new page?
Is it bad to change the canonical URL in the tag - meaning, does it lose its stats? If we add a new page that may have duplicate content, but we want that page to be indexed over the older pages, should we just change the canonical page or redirect from the original canonical page? Thanks so much! -Amy
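For reference, the canonical option would look like this on the older page (the URLs below are placeholders):

```html
<!-- In the <head> of the older duplicate page, pointing at the page you want indexed -->
<link rel="canonical" href="https://www.example.com/new-page/" />
```

The redirect option (e.g. `Redirect 301 /old-page/ https://www.example.com/new-page/` in an Apache .htaccess) is the stronger signal, and the usual choice when the old page no longer needs to be viewable on its own.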
On-Page Optimization | MeghanPrudencio
-
Avoiding "Duplicate Page Title" and "Duplicate Page Content" - Best Practices?
We have a website with a searchable database of recipes. You can search the database using an online form with dropdown options for:

- Course (starter, main, salad, etc.)
- Cooking Method (fry, bake, boil, steam, etc.)
- Preparation Time (Under 30 min, 30 min to 1 hour, Over 1 hour)

Here are some examples of how URLs may look when searching for a recipe:

- find-a-recipe.php?course=starter
- find-a-recipe.php?course=main&preperation-time=30min+to+1+hour
- find-a-recipe.php?cooking-method=fry&preperation-time=over+1+hour

There is also pagination of search results, so the URL could also have the variable "start", e.g. find-a-recipe.php?course=salad&start=30. There can be any combination of these variables, meaning there are hundreds of possible search result URL variations. This all works well on the site; however, it gives multiple "Duplicate Page Title" and "Duplicate Page Content" errors when crawled by SEOmoz. I've searched online and found several possible solutions for this, such as:

- Setting a canonical tag
- Adding these URL variables to Google Webmasters to tell Google to ignore them
- Changing the title tag in the head dynamically based on which URL variables are present

However, I am not sure which of these would be best. As far as I can tell, the canonical tag should be used when you have the same page available at two separate URLs, but that isn't the case here, as the search results are always different. Adding these URL variables to Google Webmasters won't fix the problem in other search engines, and we will presumably continue to get these errors in our SEOmoz crawl reports. Changing the title tag each time can lead to very long title tags, and it doesn't address the problem of duplicate page content. I had hoped there would be a standard solution for problems like this, as I imagine others will have come across it before, but I cannot find the ideal solution. Any help would be much appreciated. Kind regards
On-Page Optimization | smaavie
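One common pattern for parameter-driven search result pages like these (a sketch): since the result sets genuinely differ, canonicalizing them all to one URL would be misleading, but a robots meta tag keeps the near-duplicate titles out of the index while "follow" still lets crawlers reach the recipe pages themselves.

```html
<!-- Emitted in the <head> of find-a-recipe.php whenever filter/start parameters are present -->
<meta name="robots" content="noindex, follow" />
```

The unfiltered base page can remain indexable as normal, so the site keeps one strong landing page for recipe search.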