Few pages without SSL
-
Hi,
A website is not fully secured with an SSL certificate.
Approximately 97% of the pages on the website are secured. A few pages are unfortunately not secured with an SSL certificate, because otherwise some functions on those pages do not work.
It's a website where you can play online games. These games do not work over an SSL connection.
Is there anything we have to consider or optimize?
For example, when we click on the lock icon in the browser, we see the following notice:
"Your connection to this site is not fully secured"
Can this harm the Google ranking?
Regards,
Tom
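Side note: that notice usually means the secure pages are still pulling in some resources (scripts, images, iframes) over plain HTTP. If it helps, here is one way to list those - just a rough sketch, assuming Python with the requests and beautifulsoup4 packages, and a made-up page URL:

```python
# Rough sketch: list plain-HTTP resources referenced by an HTTPS page.
# Assumes the requests and beautifulsoup4 packages are installed;
# the page URL below is a hypothetical example.
import requests
from bs4 import BeautifulSoup

PAGE = "https://www.example.com/games/some-game"  # placeholder URL

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

insecure = set()
for tag in soup.find_all(["img", "script", "iframe", "link", "audio", "video", "source"]):
    for attr in ("src", "href"):
        value = tag.get(attr) or ""
        if value.startswith("http://"):
            insecure.add(value)

for url in sorted(insecure):
    print(url)
```
Anything the script prints is a resource that would need to be served over HTTPS (or dropped) before the warning goes away.
-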
It may potentially affect the rankings on:
- pages without SSL
- pages linking to pages without SSL
At first, not drastically - but you'll find that you fall further and further behind until you wish you had just embraced HTTPS.
The exception to this, of course, is if no one competing over the same keywords is fully embracing SSL. If the majority of the ranking sites in that query space are insecure, there's not much Google can do about it, even though they frown upon it (they can't just rank no one!)
So you need to do some legwork. See if your competitors suffer from the same issue. If they all do, maybe don't be so concerned at this point. If they're all showing signs of fully moving over to HTTPS, be more worried.
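A quick way to do that legwork is a small script that checks whether each competitor forces visitors onto HTTPS. This is only a rough sketch - it assumes Python with the requests package, and the domain names are placeholders you'd swap for your real competitors:

```python
# Rough sketch: check whether competitor sites redirect plain HTTP to HTTPS.
# Assumes the requests package; the domains below are placeholders.
import requests

competitors = ["competitor-one.example", "competitor-two.example"]  # placeholders

for domain in competitors:
    try:
        resp = requests.get(f"http://{domain}/", timeout=10, allow_redirects=True)
        verdict = "forces HTTPS" if resp.url.startswith("https://") else "stays on HTTP"
        print(f"{domain}: {verdict} (final URL: {resp.url})")
    except requests.RequestException as exc:
        print(f"{domain}: request failed ({exc})")
```
If most of them come back as "forces HTTPS", take that as your cue to be more worried.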
-
Just to be sure, I would secure every page with an SSL certificate. If Google finds out that not every page is secure, it may raise some eyebrows and even affect the whole site.
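To find the pages that still need securing, one option is to run through the XML sitemap and flag anything that isn't listed on HTTPS or that breaks when requested over HTTPS. A rough sketch, assuming Python with the requests package and a standard sitemap at a placeholder location:

```python
# Rough sketch: flag sitemap URLs that are not on HTTPS, or that fail over HTTPS.
# Assumes the requests package; the sitemap location is a placeholder.
import requests
import xml.etree.ElementTree as ET

SITEMAP = "https://www.example.com/sitemap.xml"  # placeholder location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
for loc in root.findall(".//sm:loc", NS):
    url = (loc.text or "").strip()
    if url.startswith("http://"):
        print(f"NOT HTTPS IN SITEMAP: {url}")
        continue
    try:
        code = requests.head(url, timeout=10, allow_redirects=True).status_code
        if code >= 400:
            print(f"PROBLEM OVER HTTPS ({code}): {url}")
    except requests.RequestException as exc:
        print(f"REQUEST FAILED: {url} ({exc})")
```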
-
Yes, that can hurt Google rankings. Insecure pages tend to rank less well, and over time that trend is only set to increase (with Google becoming less and less accepting of insecure pages, they will eventually probably be labelled a 'bad neighborhood' like gambling and porn sites). Additionally, URLs which link out to insecure pages (which are not on HTTPS) can also see adverse ranking effects (as Google knows that those pages are likely to direct users to insecure areas of the web).
At the moment, you can probably get by with some concessions. Those concessions would be accepting that the insecure URLs probably won't rank very well compared with pages offering the same entertainment / functionality which have fully embraced secure browsing (which are on HTTPS, which are still responsive, which don't link to insecure addresses).
If you're confident that the functionality you are offering fundamentally can't be offered through HTTPS, then that may be only a minor concern (as all your competitors are bound by the same restrictions). If you're wrong, though, you're gonna have a bad time. Being 'wrong' now may be more appealing than being 'dead wrong' later.
Google will not remove the warnings your pages have unless you play ball. If you think that won't bother your users, or that your competition is fundamentally incapable of a better, more secure integration - fair enough. Google is set to take more and more action on this over time.
P.S.: if your main, ranking pages are secure and they don't directly link to this small subset of insecure pages, then you'll probably be OK (at least in the short term).
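To check that last point, you could scan your main ranking pages for outgoing links that still point at plain-HTTP URLs. A rough sketch, assuming Python with the requests and beautifulsoup4 packages and a placeholder page URL:

```python
# Rough sketch: list links on a key page that point at plain-HTTP URLs.
# Assumes the requests and beautifulsoup4 packages; the page URL is a placeholder.
import requests
from bs4 import BeautifulSoup

PAGE = "https://www.example.com/"  # placeholder: one of your main ranking pages

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")
http_links = sorted({a["href"] for a in soup.find_all("a", href=True)
                     if a["href"].startswith("http://")})

for link in http_links:
    print(link)
```
An empty result for your important pages is a good sign, at least in the short term.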