Content in Accordion doesn't rank as well as Content in Text box?
-
Does content rank better in a full-view text layout than in a clickable accordion?
I read somewhere that, because users need to click to open an accordion, its content may not rank as well, since it may be treated as hidden on the page - is this true?
Accordion example (see the Features section): https://www.workday.com/en-us/applications/student.html
-
Google does not treat content concealed behind tabs, accordions, or any other element that relies on JavaScript to reveal it in the same way as content that is visible by default. That content will still be indexed, however, so pages can rank for search phrases matched only within the hidden sections.
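For reference, here is a minimal sketch of the pattern in question: the panel text is present in the initial HTML (so it can be crawled and indexed) but stays hidden until a click reveals it. The class names and copy are illustrative, not taken from any real site.

```html
<!-- Collapsed accordion panel: the text is in the DOM from the start,
     but hidden from the user until the toggle is clicked. -->
<button class="accordion-toggle" aria-expanded="false">Features</button>
<div class="accordion-panel" hidden>
  <p>The full feature description lives here, indexable but not visible by default.</p>
</div>

<script>
  // Toggle visibility on click; the content itself never leaves the DOM.
  document.querySelector(".accordion-toggle").addEventListener("click", (event) => {
    const panel = event.currentTarget.nextElementSibling;
    panel.hidden = !panel.hidden;
    event.currentTarget.setAttribute("aria-expanded", String(!panel.hidden));
  });
</script>
```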
Why does Google devalue hidden content?
Google's focus is on making the user experience within its search results as good as possible. If the algorithm gave full weight to content hidden behind JavaScript, that experience could be compromised.
For example, say a user searches for a term that a page matches only within a hidden section. The user clicks through to that page but can't immediately see the information they're looking for, because it's hidden. They give up and return to the search results, or head to another website.
This, in Google's assessment, would not be a high-quality user experience, so content within hidden sections is down-weighted.
In Summary
- Content hidden within tabs, accordions, or other elements that rely on JavaScript to reveal it is likely to be treated differently by Google and assigned far less weight
- Websites should therefore take a considered approach, using this method only for content that is of secondary importance to the primary topic of the page, or that covers related topics
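One practical check, since the key point above is that hidden content is still indexed: confirm the accordion text actually appears in the HTML the server returns, rather than being injected later by JavaScript. A quick sketch (the URL and phrase are placeholders; run it from Node 18+ to avoid browser CORS restrictions):

```js
// Fetch the raw page HTML and check whether a phrase from a collapsed
// accordion panel is present in the server response.
const url = "https://example.com/product-page"; // placeholder URL
const phrase = "safe for children aged 3 and up"; // hypothetical accordion text

fetch(url)
  .then((response) => response.text())
  .then((html) => {
    console.log(
      html.includes(phrase)
        ? "Phrase found in the initial HTML - indexable even while visually hidden."
        : "Phrase missing - it may only exist after client-side rendering."
    );
  });
```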
-
Hi there,
Absolutely not. In fact, I believe content in accordions can outrank the same content laid out as open text, although not for technical reasons.
Accordions are easier to fit into a page and can answer multiple user inquiries at once without throwing a wall of text at visitors as they browse. Google reads accordions just the same as it reads open text; the difference comes down to user interaction and satisfaction metrics.
Think about it like this:
You are browsing for the price of a product. You also want to know the shipping details and whether the product is safe for your 4-year-old.
Your search returns two companies in your area that sell it.
The first website throws 3,000 words at you in blocks, forcing you to scroll for what feels like hours with no clear indication of where the answers to your questions live.
The second website can be scrolled in about two seconds and features an accordion with headlines and direct answers to your questions, no wading through other content required. Now we're cooking with gas.
In addition, accordion content lends itself to direct-answer formats, which in turn lend themselves to being showcased on SERPs. So not only can rankings improve, but so can traffic (there are plenty of studies showing that top-10 rankings drive traffic, but few people realize that good metadata and snippets can improve your odds of capturing first-page clicks better than position alone).
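As a concrete illustration of that direct-answer angle, accordion-style Q&A pairs naturally with schema.org FAQPage structured data, which search engines have used to power rich results. A hedged sketch using the questions from the scenario above (the answer text is an invented placeholder):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is this product safe for a 4-year-old?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes - it is certified for ages 3 and up."
      }
    },
    {
      "@type": "Question",
      "name": "What are the shipping options?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Standard delivery takes 3-5 business days; express is available."
      }
    }
  ]
}
</script>
```

The markup mirrors the visible accordion one-to-one, which keeps the structured data honest: each panel heading becomes a Question and each panel body an Answer.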
Over time, this website will generate more and more authority for this product and relevant search queries, overtaking the other.
To answer your question directly: Google treats both forms of content equally, but (all else being equal) stronger user metrics will earn the accordion-equipped page greater link-building potential, greater readership, more shares, and so on.
Look forward to what others have to say on this,
Rob