Posts made by ThompsonPaul
-
RE: Google Indexing Of Pages As HTTPS vs HTTP
You can't noindex a URL by protocol, Gaston - adding noindex would remove the page from search results entirely, regardless of whether it's requested over HTTP or HTTPS, essentially making those important pages invisible and wasting whatever link equity they may have. (In my experience, you also can't block by protocol in robots.txt.)
-
RE: Google Indexing Of Pages As HTTPS vs HTTP
There's a very simple solution to this issue - and no, you absolutely do NOT want to artificially force removal of those HTTPS pages from the index.
You need to make sure the SSL certificate is still in place, then re-add the 301 redirect in the site's htaccess file, but this time redirecting all HTTPS URLs back to their HTTP equivalents.
You don't want to forcibly "remove" those URLs from the SERPs, because they are what Google now understands to be the correct pages. If you remove them, you'll have to wait however long it takes for Google and other search engines to completely re-understand the conflicting signals you've sent them about your site. And traffic will inevitably suffer in that process. Instead, you need to provide standard directives that the search engines don't have to interpret and can't ignore. Once the search engines have seen the new redirects for long enough, they'll start reverting the SERP listings back to the HTTP URLs naturally.
The key here is that the SSL cert must stay in place. As it stands now, a visitor clicking a page in the search results is trying to make an HTTPS connection to your site. If there's no certificate in place, they'll get the scary security warning - and you can't fix that with a 301 redirect alone. The reason is that the initial connection from the SERP comes in over the "secure channel", and that connection must be negotiated securely before the redirect can even be read. If that first connection isn't secure, the browser will show the security warning without ever seeing the redirect.
Having the SSL cert in place even though you're not running all pages under HTTPS means that first connection can still be made securely, the redirect back to the HTTP URL can then be read, and the visitor gets to the page they expect seamlessly. Search engines, in turn, can understand the change and consolidate authority without confusion.
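To make that concrete, here's a minimal .htaccess sketch of the kind of rule I mean - it assumes Apache with mod_rewrite enabled and the SSL certificate still installed, and if the site sits behind a CDN or proxy you may need to test X-Forwarded-Proto instead of the HTTPS variable:
```apache
# Minimal sketch only - assumes Apache with mod_rewrite and a valid SSL cert
# still installed for the domain.
RewriteEngine On

# If the request arrived over HTTPS...
RewriteCond %{HTTPS} on
# ...send a 301 back to the same path on plain HTTP.
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```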
Hope that all makes sense?
Paul
-
RE: Is domain redirection a good method for SEO?
It's likely the other site is trying to use the redirects to mask their manipulative link building. (Which doesn't work, by the way.)
Short answer - no, this isn't a good idea.
Hope that helps?
Paul
-
RE: Keywords in GMB title...
Putting keywords in the GMB business name carries very high risk. It's against Google's Terms of Service, and Google says it will suspend any account it catches doing it.
But as you note, many businesses are doing it and getting away with it, at least for some period of time.
So the question becomes "how much risk is the client willing to accept"? Because if they get caught, their account gets wiped out and they lose all the work they've done to build up ranking, reviews, images etc for their GMB page.
I tend to agree with Miriam Ellis, who just wrote about exactly this on Monday in her post. Build a good-quality GMB page within Google's guidelines to ensure long-term survival, including optimising for the "housecall" terms in the description, reviews, etc. Then spend a bit of time reporting the offending pages as spam.
Hope that helps?
Paul
-
RE: Local Ranking with No Physical Address in New Service Area - How to Rank?
I'm well aware of the reasons why small business owners might not want their home addresses listed, but that doesn't change the fact that Google will not allow UPS Store-type mailing addresses to be used to pretend to be business locations. It's not a matter of having a "verifiable address"; it's a matter of adhering to the requirement that you must have an actual business presence at that specific location, where customers can come in person for service/sales, to qualify for a local GMB listing at that address.
It is possible to set your home address, then select that it should be hidden and function as a local service area business instead.
But trying to get away with using a non-conforming "pretend" address will get you delisted when caught (and Google is very good at catching such non-conforming addresses in many ways, if it even lets you verify it in the first place.)
This is not just my opinion - it's specifically stated by Google in their own GMB terms of service. In fact, Local Search expert Miriam Ellis just posted about this in her post Not-Actually-The-Best Local SEO Practices. To quote:
"Once caught, any effort that was put into ranking and building reputation around a fake-location listing is wasted."
Paul
-
RE: Local Ranking with No Physical Address in New Service Area - How to Rank?
Unfortunately, these types of "pretend" business addresses are specifically against Google's ToS for Google My Business locations. It's pretty easy for them to detect and they'll nuke your location listing as a result.
-
RE: What to do with old content after 301 redirect
Not really correct, unfortunately. As long as the 301 redirect has been written properly (it should be at the system level, not written into individual page code like a JavaScript redirect) then any request to the server for the page will be redirected before the old page can be reached. That's the express purpose of a 301-redirect.
So anyone clicking on an external link to the old URL (or a search crawler following it) will immediately be redirected to the new page as soon as they hit the server, whether the old page still exists or not.
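For reference, a system-level 301 is just a one-line server directive. This is only a hypothetical sketch - the /old-post/ and /new-post/ paths and the example.com domain are placeholders - but it illustrates why the old page's code never even runs:
```apache
# Hypothetical .htaccess example - /old-post/, /new-post/ and the domain
# are placeholders. Apache answers this before WordPress (or any page code)
# ever loads, so the old page is never reached.
Redirect 301 /old-post/ https://www.example.com/new-post/
```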
-
RE: What to do with old content after 301 redirect
As long as the correct 301-redirect is in place, there's no SEO benefit to keeping the original page, as it can never be reached. That's the whole point of a 301-redirect.
For content management purposes, you might find it useful to keep the old posts around in draft form in case you want to use them as a basis for writing a new post/faq, but there's no reason to keep them available otherwise.
Hope that makes sense?
Paul
-
RE: Ive been using moz for just a minute now , i used it to check my website and find quite a number of errors , unfortunately i use a wordpress website and even with the tips , is till dont know how to fix the issues.
You've got some work to do, @Dogara. It's essential to realise that just installing SEO plugins doesn't finish the job - they must also be carefully configured. And then the pages themselves must be optimised using the information the SEO plugin provides. Think of the plugin as a tool to make the optimisations easier, not one that will do all the work for you. Here's the task list I would tackle if I were you:
First things first - make certain you have a solid current backup of your website that you know how to recover if things should go sideways.
You currently have two competing SEO plugins active - definitely not recommended. You have both Squirrly and Yoast Premium. Since Squirrly doesn't appear to be configured at all, it should be removed. (This assumes you haven't done any customisation work with Squirrly - as it appears to me from a quick scan through your pages, but I didn't do an exhaustive check, so if you have done customisations in this plugin, they may need to be exported, then imported into Yoast.)
Your Yoast Premium hasn't been updated in a full year - get it updated, both for security and functionality. (And get all themes and other plugins updated too if they're behind - this is the biggest thing you can do for your website's security. Did I mention you need to have a solid backup first?)
Fix your page layout templates - they are duplicating the page title and featured image.
Work through the Yoast settings and configure the defaults for your pages:
-
Turn off the meta keywords functionality (no longer used)
-
Decide what you wish to do with all your redundant archive types that are creating a huge amount of duplicate content and bloat. My recommendations:
-
Since your site only appears to have one author, disable author archives.
-
turn off date-based archives. You're not using them anywhere that I can see, and few people are likely to search by date on the site
-
no-index the tag archives. These are straight-up massive duplicate content on your site as they are just lists of posts that are also listed elsewhere, like your categories.
-
add a couple of paragraphs of quality introductory text on each of your category pages (needs WordPress customisation to do this depending on your theme - may be doable with a plugin.) The alternative to this is to no-index your categories as well, but for a site like yours, this probably isn't recommended, since those categories are used as your primary header navigation.
-
NOTE! These recommendations are based on assumptions about how visitors use the site. If you have business reasons for keeping some of these archives, the decisions may be different!
-
write solid custom meta descriptions for your categories (assuming you are going to keep them indexed). Currently, it is these category pages having no meta descriptions that is giving you that high error total in the Moz crawl. Do note that when you fix the meta descriptions, you may start seeing a large number of "duplicate meta description" errors listed in a new Moz crawl. This is because you have a large number of paginated pages for each category, and each will have the same meta description as the main page. This is not an issue, even though Moz may flag it, since the pages already have proper pagination code in place (rel=next and rel=prev in the headers). Note that Google has just this week started showing much longer meta descriptions in search results - tools may not have caught up to this change yet.
-
while you're editing the category meta description, take the opportunity to write better SEO page titles for each of them as well. They're edited in the same place as the meta descriptions in Yoast, so easy to do at the same time.
-
get the template for your homepage adjusted to include proper rel=next and rel=prev link tags in the header so that its pagination is handled properly.
-
turn off JetPack's XML sitemap functionality and turn on the built-in sitemap tool in Yoast. You'll want to make certain only the appropriate sections of the site are in the sitemap (e.g. any post types/taxonomies you've no-indexed - tags, author archives, etc. - should be marked to exclude from the sitemap). You'll also need to resubmit the new sitemap address in Google Search Console - make sure the property is set up for the HTTPS address and submit the sitemap address https://hipmack.co/sitemap_index.xml.
-
the "URLs too long" warning is somewhat arbitrary, but make certain you are rewriting the URLs of your new posts when you create them if they are too long (more than 4 or 5 words) I wouldn't' bother going back to change the old ones at this stage.
-
you are currently using an HTTPS Redirection plugin to manage the site's internal URLs after your HTTPS migration. I would strongly recommend using a Search & Replace plugin to rewrite these properly in the database so you don't have a large number of internal redirects. Better for speed and more reliable.
-
Moz will tell you which page titles are too long, and you can go into Yoast for each related page/post and rewrite them. Note that Google will still index a "too long" title - it'll just lose the end of the title when displaying it on search results pages. (So, for example, if it's just the website name getting cut off at the end, it's not a big deal.) This is also a good time to optimise the meta descriptions for those posts, as that's done in the same spot where the titles are edited.
Whew! And that's just the start, but if you get those things cleaned up, you'll be well on your way to cleaning up the technical SEO of your site.
Paul
-
-
RE: If I'm using a compressed sitemap (sitemap.xml.gz) that's the URL that gets submitted to webmaster tools, correct?
Good choice. The XML sitemap spec sets maximum sizes for individual sitemaps, but those limits apply to the uncompressed file, so compression doesn't get you around the size limitations anyway.
P.
-
RE: Is there a way to filter all computers on a specific IPv6 network in Google Analytics?
Those individual machine IP addresses are for identifying the computers to each other and to the server inside your network (called the LAN, or Local Area Network). The IP address you need to block in Google Analytics is the one that connects the LAN to the outside internet. Unless your network has an unusual setup, typing "what is my IP" into the address bar of any computer or device connected to the network (including phones using WiFi) will return the same IP address for all the machines inside the network. (Large companies occasionally have multiple outside connections, but it doesn't sound like that's what you're dealing with.)
In addition, most commercial internet connections use static IP addresses so the IP "shouldn't" change, but anytime major changes or outages occur, it's a good idea to doublecheck the IP address to be sure it's stayed the same.
Make sense?
Paul
Sidenote: this is one of the main security purposes of a router. It routes all those internal machines' connections out to the internet through a single IP address, so the nasties out on the internet don't have access to an IP address for an individual machine that they can use to direct attacks against it. Thus the network admin only has to protect one device from direct attack from the nastyweb - the router - instead of having to protect every machine individually.
-
RE: If I'm using a compressed sitemap (sitemap.xml.gz) that's the URL that gets submitted to webmaster tools, correct?
Yup - you have to use the actual URL of the sitemap for submission. The search engines will handle it fine - as you can confirm by watching in GSC and Bing Webmaster Tools that it's getting processed.
Paul
P.S. There's really no particular benefit to using a compressed sitemap anymore.
-
RE: How do I know if I am correctly solving an uppercase url issue that may be affecting Googlebot?
It was still a good idea to create the redirects for the upper-case versions to help cut down duplicate content issues. Rel-canonical "could" have been used, but I find it's much better to actually redirect.
But that means the lower-case URLs are the canonical URLs, so ONLY they should appear in the sitemap. (Sitemaps aren't supposed to contain any URLs that redirect.) Right now, you're giving the search crawlers contradictory directives, and they don't do well with those.
For additional cleanup, it would be good to have rules added to the CMS so that upper-case URL slugs can't be created in the first place. Also run a check (which can probably be done in the database) to ensure that any internal links on the site have been re-written NOT to use the uppercase URLs - there's no sense generating unnecessary redirects for URLs you control. (I suspect this is the majority of the cases that Screaming Frog is picking up.) You need to ensure all navigation and internal links use the canonical lowercase versions.
The more directly the crawlers can access the final URL, the better your indexing will be. So don't have the sitemap sending them through redirects, and don't let your site's internal links do so either.
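If it's useful, here's a hedged .htaccess sketch of what those upper-case redirects might look like - the slugs and domain are placeholders, and note that a generic lower-casing rule (RewriteMap with int:tolower) can only be defined in the main server or vhost config, not in .htaccess, which is why explicit per-URL redirects (or a rule in the CMS itself) are the usual fallback:
```apache
# Hypothetical example - these slugs and the domain are placeholders.
# Each known upper-case variant gets its own explicit 301 to the
# canonical lowercase URL.
Redirect 301 /Product-Category/Widget-One/ https://www.example.com/product-category/widget-one/
Redirect 301 /About-Us/ https://www.example.com/about-us/
```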
Hope that helps?
-
RE: Does no-indexed page has an impact on bounce rate
You're ruining your site's usability for no reason, unfortunately. Google doesn't care about the bounce rate on your site as a ranking factor in the example you've given (for one thing, because it is so trivial to manipulate, exactly as you are doing). For many users, encountering this kind of behaviour on a site would be a major security/spam signal.
If you want to better understand your visitor behaviour on single-page visits, you'd be far better off to implement event tracking on your external links. You could even set the event to "interactive" so it would no longer register as a bounce visit if that's useful to your Analytics plan, but again, just manipulating the bounce rate does absolutely nothing for ranking purposes.
Hope that helps?
Paul
-
RE: Can Google Bot View Links on a Wix Page?
Google tries to index the rendered page, not the raw page code. The best way to see that page as Google sees it is to use the Fetch and Render tool in Google Search Console.
Alternately, you can use the developer tools in Chrome or Firefox to show the rendered version of the page code. Right-click in the area of the page you wish to view, then select Inspect. This will bring up the inspector, which will show you the rendered page code close to how Google sees it. (Google uses Chrome V42 for indexing, while the current version of Chrome is 62 - so you're not necessarily seeing exactly what G is seeing, which is why Fetch and Render is closer to the view G actually sees.)
The screenshot below shows the Inspect method of viewing the rendered page in current Chrome - but as I say, safest is to view within Google Search Console's Fetch tool.
Hope that helps?
Paul
P.S. This is an example of why Wix is not recommended for critical websites - it uses JavaScript methods to render basic page content, which Google "says" it can index, but it relies on a completely non-standard way of presenting page content that depends on search crawlers deviating from their standard methods to crawl it. (And there have been incidents in the past where G's crawling capability for this kind of JavaScript wasn't indexing it properly, causing major issues for a huge number of sites.)