Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
How to find all crawlable links on a particular page?
 Hi! This might sound like a newbie question, but I'm trying to find all crawlable links (that google bot sees), on a particular page of my website. I'm trying to use screaming frog, but that gives me all the links on that particular page, AND all subsequent pages in the given sub-directory. What I want is ONLY the crawlable links pointing away from a particular page. What is the best way to go about this? Thanks in advance. 
 Thanks for sharing this information Thomas. Appreciate your time and help here. Regards. 
I understand — you mean how far a link is from the home page, not a URL parameter. Here's some information on a tool I'm using right now: http://www.internetmarketingninjas.com/seo-tools/google-sitemap-generator/ There is an HTML file of the results, and you can see the "how far from home" depth on the left-hand side. I suggest you run the tool yourself so you can see the full results.

Using the IMN Google Site Map Generator

Links are critically important to webpages, not only for connecting to other, related pages to help end users find the information they want, but for optimizing the pages for SEO. The Find Broken Links, Redirects & Google Sitemap Generator Free Tool allows webmasters and search engine optimizers to check the status of both external links and internal links on an entire website. The resulting report gives webmasters and SEOs insight into the link structure of a website and identifies link redirects and errors, all of which help in planning a link optimization strategy. The downloadable results and the sitemap generator are free for everyone.

Get started

To start with the free sitemap generator, type (or paste) the full home page URL of the website you want scanned. Select the number of pages you want to scan (up to 500, up to 1,000, or up to 10,000). The job starts immediately and runs in real time. For larger sites containing numerous pages, the process can take up to 30 minutes to crawl and gather data on 1,000 pages (and longer still for very large sites). You can set the tool to send you an email once the crawl is completed and the data report is prepared. The online sitemap generator offers several options and also acts as an XML sitemap generator or an HTML sitemap generator. Note that the results table data is interactive: most of the data items are linked, either to the URLs referenced or to details about the data.

For most cells that contain non-URL data, pause the mouse over the cell to see the full results.

Results Bar

When the tool starts, a results bar appears at the top of the page showing the following information:

- Status of the tool (Crawling or Done)
- Number of Internal URLs crawled
- Number of External links found
- Number of Internal HTTP Redirects found
- Number of External HTTP Redirects found
- Number of Internal HTTP error codes found
- Number of External HTTP error codes found
For those who need sitemaps from either an HTML sitemap generator or an XML sitemap generator, there are corresponding options offered here. Also shown are the following:

- Download XML Sitemap button
- Download tool results in Excel format
- Download tool results in HTML format
Lastly, if you love the free sitemap generator tool, you can tell the world by clicking any of the following social media buttons:

- Facebook Like
- Google+
Email notification

Next, you can submit your email address to have a copy of the report emailed to you if you choose not to wait for it to finish crawling. This feature, like the sitemap generator itself, is free to all users.
Tool results data

When results are ready, the HTML sitemap generator will organize the data into six tables:

- Internal links
- External links
- Internal errors (a subset of Internal Links)
- Internal redirects (another subset of Internal Links)
- External errors (a subset of External Links)
- External redirects (another subset of External Links)
 The table data is typically linked to either page URLs or to details about the data. Click on column headers to sort the results. 
1. Internal Links table

The Internal links table created by the XML sitemap generator includes the following data fields:

- URLs crawled on the site
- Link to The On Page Optimization Analysis Free SEO Tool for that URL
- URL’s level from the domain root
- URL’s returned HTTP status code
- Number of internal links the URL has within the site (click to see the list of URLs)
- Link text used for the URL
- Number of internal links on the page (click to see the list of URLs)
- Number of external links on the page (click to see the list of URLs)
- Size of the page in kilobytes (click to see page load speed test results for this URL from Google)
- Link to the Check Image Sizes, Alt Text, Header Checks and More Free SEO Tool for that URL
- The title tag text from the URL’s page
- The description tag text from the URL’s page
- The keywords tag text from the URL’s page
- Contents, if used, of the anchor tag’s “rel=” attribute
 
2. External Links table

The External links table includes the following data fields:

- URL’s returned HTTP status code
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- External URL used in the link
- Link text used for the URL
- Internal page URL on which the link was first found
 
3. Internal HTTP code errors table

The Internal errors table gathers all of the pages returning HTTP code errors (4xx- and 5xx-level codes) in one place to help organize the effort to resolve the problems. It includes the following data fields:

- URL’s returned HTTP status code
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- Internal URL used in the link
- Link text used for the URL
- Internal page URL on which the link was first found
 The Internal errors table is a subset of the Internal links table showing just those pages returning HTTP status code errors. 
4. Internal HTTP redirects table

The Internal redirects table combines all of the pages returning HTTP redirects in one list so you can easily review them. You should not have to rely on redirects internally; instead, fix the source code containing the redirected link. This table contains the following data fields:

- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- Internal URL used in the link
- Link text used for the URL
- Redirect’s target URL
- Internal page URL on which the link was first found
 The Internal redirects table is a subset of the Internal links table showing just those pages returning 301 and 302 HTTP status code redirects. 
5. External HTTP code errors table

The External errors table gathers all of the pages returning HTTP code errors (4xx- and 5xx-level codes) in one place to help organize the effort to resolve the problems. It includes the following data fields:

- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- Internal URL used in the link
- Link text used for the URL
- Redirect’s target URL
- Internal page URL on which the link was first found
 The External errors table is a subset of the External links table showing just those pages returning HTTP status code errors. 
6. External HTTP redirects table

The External redirects table combines all of the pages returning HTTP redirects in one list so you can easily review them. Because the redirect on the targeted page does not affect your own page, fixing these URLs is a lower priority. This table contains the following data fields:

- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- External URL used in the link
- Link text used for the URL
- Redirect’s target URL
- Internal page URL on which the link was first found
 The External redirects table is a subset of the External links table showing just those pages returning 301 and 302 HTTP status code redirects. 
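For what it's worth, the six-table organization above reduces to a simple classification by scope (internal vs. external) and HTTP status code. Here is a minimal Python sketch of that bucketing logic; the function name and sample data are mine for illustration, not the tool's actual code:

```python
# Bucket crawled links into the six report tables described above:
# internal/external links, plus error and redirect subsets of each.

def bucket(link):
    """Return the list of table names a crawled link belongs to."""
    scope = "internal" if link["internal"] else "external"
    tables = [f"{scope} links"]          # every link lands in its scope table
    status = link["status"]
    if 400 <= status < 600:              # 4xx/5xx: error subset
        tables.append(f"{scope} errors")
    elif status in (301, 302):           # 301/302: redirect subset
        tables.append(f"{scope} redirects")
    return tables

# Illustrative sample data, not real crawl output
links = [
    {"url": "/about", "internal": True, "status": 200},
    {"url": "/old-page", "internal": True, "status": 301},
    {"url": "http://example.org/x", "internal": False, "status": 404},
]
for link in links:
    print(link["url"], "->", bucket(link))
```

Each link always appears in its scope's main table, and optionally in exactly one subset table, which matches how the report describes its errors and redirects as subsets of the two link tables.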
Hi Thomas! When I say 1 click, I mean all links that can be reached directly from www.wishpicker.com. For example, wishpicker.com/gifts-for can be reached directly from wishpicker.com, but wishpicker.com/gifts-for/boyfriend cannot: I would first need to go to wishpicker.com/gifts-for, and then to wishpicker.com/gifts-for/boyfriend. So wishpicker.com/gifts-for is 1 click away, and wishpicker.com/gifts-for/boyfriend is 2 clicks away from wishpicker.com. I am looking to crawl all links that are only 1 click away. Thanks for your help here. Really appreciate it.
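Since "links 1 click away" just means every anchor on the page itself, one alternative to the crawlers discussed in this thread is a short script that parses that single page's HTML and lists its links without following any of them. A rough standard-library Python sketch (the LinkCollector class and its names are my own, purely illustrative):

```python
# Collect every <a href> from one page's HTML, split into internal and
# external links, without crawling any further. Standard library only.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        # Skip fragments and non-HTTP schemes a crawler would not follow
        if not href or href.startswith(("#", "mailto:", "javascript:")):
            return
        absolute = urljoin(self.base_url, href)
        # Same hostname as the base URL = internal; anything else = external
        if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
            self.internal.append(absolute)
        else:
            self.external.append(absolute)

# To run against the live page, fetch the HTML first, e.g.:
#   import urllib.request
#   html = urllib.request.urlopen("http://www.wishpicker.com").read().decode()
collector = LinkCollector("http://www.wishpicker.com")
collector.feed('<a href="/gifts-for">Gifts</a> <a href="http://twitter.com/x">T</a>')
print(collector.internal)  # every internal link 1 click from the base URL
print(collector.external)  # every external link on that page
```

Note this only sees links in the raw HTML; if the page builds links with JavaScript, a plain parser like this will miss them, so results may differ from what Googlebot renders.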
When you say one click away, are you talking about a parameter? I will run this through Screaming Frog and a couple of other tools and see if I can get your answer.
Hi Thomas, Thanks for your response. Here is my website: www.wishpicker.com What I am looking for is all the links present only 1 click away from the page www.wishpicker.com (both internal and external). Performing a crawl with Screaming Frog gives me all links (1, 2, 3, 4, and more clicks away). I'm not sure how to limit the crawl to show links that are only 1 click away, and exclude links that are 2 or more clicks away from this page. Look forward to your response. Thanks!
Hi, Screaming Frog does in fact show you the links that would be considered external links. Here is a great guide: http://www.seerinteractive.com/blog/screaming-frog-guide If you look at the external part of Screaming Frog, you'll find what you're looking for. You may also do this using either the campaign tool or the browser plug-in. I would suggest reading the Seer Interactive guide and sticking with Screaming Frog; it is an outstanding tool. If that is not the route you wish to go, here are some other tools that I hope will help. If you could post a screenshot of what you mean by it only showing you the internal link count, that would help; I just want to see which screen you're looking at to get the answer you're after. These tools will let you scan up to 1,000 pages of your website for free and will tell you the information you're looking for: http://www.internetmarketingninjas.com/tools If you cannot find what you're looking for there, you might want to try http://www.quicksprout.com/2013/02/04/how-to-perform-a-seo-audit-free-5000-template-included/ distilled.net/U might be the best way to learn these types of things, though it is a complete search engine optimization training course. Sincerely, Thomas