
        PDF for link building - avoiding duplicate content

        Intermediate & Advanced SEO
• BobGW

          Hello,

          We've got an article that we're turning into a PDF. Both the article and the PDF will be on our site. This PDF is a good, thorough piece of content on how to choose a product.

We're going to strip all of the links to our site out of the article and create the PDF so that it's good for people to reference and even print. Then we're going to do link building through outreach, since people will find the article and PDF useful.

          My question is, how do I use rel="canonical" to make sure that the article and PDF aren't duplicate content?

          Thanks.

• Marcus_Miller @BobGW

            Hey Bob

I think you should forget about any kind of perceived conventions and do whatever you think works best for your users and goals.

Again, look at Unbounce: that is a custom landing page with a homepage link (to share the love) but not the general site navigation.

            They also have a footer to do a bit more link love but really, do what works for you.

            Forget conventions - do what works!

            Hope that helps
            Marcus

• BobGW

I see, thanks! I think it's important not to have the ecommerce navigation on the page promoting the PDF. What would you say is ideal for the graphical and navigation components of the page that houses the PDF - what kind of navigation and graphical header should it have?

• Marcus_Miller @BobGW

Yep - check the HTTP headers with WebBug, or use one of the many browser plugins that let you see the headers for the document.

That said, I would push to drive the links to the page rather than to the document itself - just create a nice page that houses the document and make that the link target.

You could even make the PDF available only by email once they have signed up, or some such, as canonical is only a directive - you would still be better off getting those links flooding into a real page on the site.

You could even offer up some ready-made HTML that links to your main page, to make it easier for folks to link. If you take a look at any savvy infographics, folks try to draw the link into a page rather than the image itself for the very same reasons.

                If you look at something like the Noobs Guide to Online Marketing from Unbounce then you will see something like this as the suggested linking code:

[The Noob Guide to Online Marketing - Infographic](http://unbounce.com/noob-guide-to-online-marketing-infographic/)

[Unbounce – The DIY Landing Page Platform](http://unbounce.com/)

                So, the image is there but the link they are pimping is a standard page:

                http://unbounce.com/noob-guide-to-online-marketing-infographic/

                They also cheekily add an extra homepage link in as well with some keywords and the brand so if folks don't remove that they still get that benefit.

                Ultimately, it means that when links flood into the site they benefit the whole site rather than just promote one PDF.

                Just my tuppence! 
                Marcus

• BobGW @Marcus_Miller

                  Thanks for the code Marcus.

Actually, the PDF is what people will be linking to. It's a guide for websites. I think the PDF will be much easier to promote than the article - I assume so, anyway.

Is there a way to make sure my canonical header in .htaccess is working after I insert the code?

                  Thanks again,

                  Bob

• Marcus_Miller

                    Hey Bob

There is a much easier way to do this: simply keep the PDFs you don't want indexed in a folder that you block in robots.txt. That way you can drop PDFs into articles and link to them, knowing full well these files will not be indexed.

Assuming you had a PDF called article.pdf in a folder called pdfs/, the following would prevent indexation:

User-agent: *
Disallow: /pdfs/

Or, to just block the file itself:

User-agent: *
Disallow: /pdfs/yourfile.pdf

Additionally, there is no reason not to add the canonical link as well. If you find people are linking directly to the PDF, having it would ensure that the equity associated with those links is correctly attributed to the parent page (always a good thing).

Header add Link '<http://www.url.co.uk/pdfs/article.html>; rel="canonical"'

Generally, there are better ways to block indexation than robots.txt, but in the case of PDFs we really don't want these files indexed - they make for such poor landing pages (no navigation) - and we certainly want to remove any competition or duplication between the page and the PDF, so in this case it makes for a quick, painless and suitable solution.

                    Hope that helps!
                    Marcus

• BobGW

                      Thanks ThompsonPaul,

                      Say the pdf is located at

                      domain.com/pdfs/white-papers.pdf

                      and the article that I want to rank is at

                      domain.com/articles/article.html

                      do I simply add this to my htaccess file?:

Header add Link '<http://www.domain.com/articles/article.html>; rel="canonical"'

• ThompsonPaul @BobGW

You can insert the canonical header link using your site's .htaccess file, Bob. I'm sure HostGator provides access to the .htaccess file through FTP (sometimes you have to turn on "show hidden files") or through the file manager built into your cPanel.

                        Check tip #2 in this recent SEOMoz blog article for specifics:
                        seomoz.org/blog/htaccess-file-snippets-for-seos

                        Just remember too - you will want to do the same kind of on-page optimization for the PDF as you do for regular pages.

                        • Give it a good, descriptive, keyword-appropriate, dash-separated file name. (essential for usability as well, since it will become the title of the icon when saved to someone's desktop)
                        • Fill out the metadata for the PDF, especially the Title and Description. In Acrobat it's under File -> Properties -> Description tab (to get the meta-description itself, you'll need to click on the Additional Metadata button)

I'd be tempted to build the links to the HTML page as much as possible, as those will directly help ranking, unlike the PDF's inbound links, which have to pass their link juice through the canonical (assuming you're using it). Plus, the visitor will get a preview of the PDF's content and context from the rest of your site, which may increase trust and engender further engagement.

Your comment about links in the PDF got kind of muddled, but you'll definitely want to make certain there are good links and calls to action back to your website within the PDF - preferably on each page. Otherwise there's no clear "next step" leading users from the PDF back to a purchase on your site. Make sure to put Analytics tracking tags on these links so you can assess the value of the traffic the PDF generates - otherwise that traffic will just appear as Direct in your Analytics.

Hope that all helps,

                        Paul

• BobGW

                          Can I just use htaccess?

                          See here: http://www.seomoz.org/blog/how-to-advanced-relcanonical-http-headers

                          We only have one pdf like this right now and we plan to have no more than five.

                          Say the pdf is located at

                          domain.com/pdfs/white-papers.pdf

                          and the article that I want to rank is at

domain.com/articles/article.html

                          do I simply add this to my htaccess file?:

Header add Link '<http://www.domain.com/articles/article.html>; rel="canonical"'

• BobGW

How do I know if I can set an HTTP header like this? I'm using shared hosting through HostGator.

• DoRM @BobGW

PDFs seem not to rank as well as normal web pages. They still rank, don't get me wrong - we have over 100 PDF pages that get traffic for us. The main version is really up to you: what do you want to show in the search results? I think it would be easier to rank a normal web page, though. If you are using rel="canonical", it will pass most of the link juice - not all, but most.

• BobGW @DoRM

                                  Thank you DoRM,

I assume the PDF is what I want as the main version, since that is what I'll be marketing - but I could be wrong. What if I get backlinks to both pages: will both sets of backlinks count?

• DoRM

Indicate the canonical version of a URL by responding with a Link rel="canonical" HTTP header. Adding rel="canonical" to the head section of a page is useful for HTML content, but it can't be used for PDFs and other file types indexed by Google Web Search. In these cases you can indicate a canonical URL by responding with a Link rel="canonical" HTTP header, like this (note that to use this option, you'll need to be able to configure your server):

Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"

                                    Google currently supports these link header elements for Web Search only.

You can read more here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
