Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
How to find a specific link on my website (currently causing redirects)
-
Hi everyone,
I've used crawlers like Xenu to find broken links before, and I love these tools. What I can't figure out is how to find specific pieces of code within my site. For example, Webmaster Tools tells me there are still links to old pages somewhere on my website, but I just can't find them. Do you know of a crawler that can search for a specific link within the HTML?
Thanks in advance,
Josh
-
Use the SEOmoz crawl report.
Let Roger loose on your site, then when the report is available, filter the Excel file on the broken link field. Then check the "referrer" field for each broken link. The referrer field will show the page where the broken link was discovered.
You can then use the SEOmoz bar to highlight the links on a page. Sometimes a link isn't obvious because it is hidden. In those cases you can always right-click on the page, choose View Page Source from the options, then search for the link.
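If you want to automate that "view source and search" step, here is a minimal sketch in Python (assuming the requests library is installed; the URLs below are placeholders for your old link and the referrer pages from the crawl report):

```python
# Minimal sketch: check which referring pages still contain a link to an old URL.
# Assumes Python 3 with the "requests" library; URLs below are placeholders.
import requests

OLD_LINK = "http://www.example.com/old-page.html"   # the link you're trying to locate
REFERRERS = [                                        # pages from the crawl report's referrer field
    "http://www.example.com/",
    "http://www.example.com/about.html",
]

for page in REFERRERS:
    try:
        html = requests.get(page, timeout=10).text
    except requests.RequestException as err:
        print(f"Could not fetch {page}: {err}")
        continue
    if OLD_LINK in html:
        print(f"Found a reference to the old link on: {page}")
```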
-
Thanks for the reply.
I should have specified that the links are being reported in Bing Webmaster Tools, not Google Webmaster Tools. Bing doesn't seem to tell you where the bad links are.
-
Dreamweaver can search an entire website if you download the site into it. Webmaster tools should also tell you where the links are being found on your site; they typically report which URL contains the bad links.
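If you don't use Dreamweaver, a short script can run the same site-wide search over a downloaded copy of the site. A minimal sketch (the folder path and link below are placeholders):

```python
# Minimal sketch: search every HTML file in a downloaded copy of the site
# for a specific link. The folder path and target link are placeholders.
from pathlib import Path

SITE_FOLDER = Path("downloaded_site")          # local copy of the website
OLD_LINK = "http://www.example.com/old-page.html"

for html_file in SITE_FOLDER.rglob("*.htm*"):  # matches .htm and .html
    text = html_file.read_text(encoding="utf-8", errors="ignore")
    if OLD_LINK in text:
        print(f"{html_file} contains the old link")
```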
-
There are a few ways I would approach this. In order:
-
Run a "find in files" search using one of the text editors I use for coding, either UltraEdit or PhpEd; you can use whatever you are comfortable with,
-
Check the server logs for that page; they should show a referring page, which may not be on your site (see the log-parsing sketch after this list),
-
Or just set up a 301 from it to your home page or a relevant page. I have had situations where people link to the wrong page, and I redirect them instead of letting it 404 (a quick redirect check is sketched after this list),
-
If you are sure it is an actual link on your site and not a redirect from somewhere (maybe it is being generated; you didn't post a link, so I don't know which site you are referring to), consider paying someone $5 on http://fiverr.com/ to find it.
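For the server-log suggestion above, here is a minimal sketch that pulls the referrers recorded for requests to the old URL. It assumes an Apache/Nginx combined-format access log; the file path and URL path are placeholders:

```python
# Minimal sketch: pull the referrers recorded for requests to an old URL
# from a combined-format access log. Path and URL below are placeholders.
import re

LOG_FILE = "access.log"
OLD_PATH = "/old-page.html"
referrers = set()

# Combined log format: ... "GET /path HTTP/1.1" status bytes "referrer" "user-agent"
pattern = re.compile(r'"(?:GET|POST|HEAD) (\S+) [^"]*" \d+ \S+ "([^"]*)"')

with open(LOG_FILE, encoding="utf-8", errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match and match.group(1) == OLD_PATH:
            referrers.add(match.group(2))

for ref in sorted(referrers):
    print(ref)
```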
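And once a 301 is in place, a quick way to confirm it is working (again a sketch; the URL is a placeholder and the requests library is assumed):

```python
# Minimal sketch: confirm an old URL now returns a 301 and see where it points.
# The URL is a placeholder; assumes the "requests" library is installed.
import requests

OLD_URL = "http://www.example.com/old-page.html"

response = requests.get(OLD_URL, allow_redirects=False, timeout=10)
print(response.status_code)                      # expect 301
print(response.headers.get("Location", "(no redirect)"))
```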
-