What is the best way to deal with an event calendar?
-
I have an event calendar with multiple repeating items into the future. They are classes that typically all have the same titles but occasionally have different information. I don't know the best way to deal with them and am open to suggestions.
Currently Moz Analytics is showing multiple errors (duplicate page titles, duplicate descriptions, and overly dynamic URLs). I'm assuming it's picking up the duplicate elements way into the future.
I thought of having the calendar nofollowed entirely, but the content for the classes seems valuable.
Thanks,
-
Sorry for all the posts, but maybe this will help you as well to get rid of the dynamic URLs:
http://www.webconfs.com/url-rewriting-tool.php
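As a rough illustration of what such a tool generates (a hypothetical sketch for Apache's mod_rewrite; the class and date parameter names are made up and would need to match your calendar's actual query string), a rule like this serves a clean path from the dynamic URL:

RewriteEngine On
# Hypothetical example: serve /calendar/yoga/2014-01-15/
# from calendar.php?class=yoga&date=2014-01-15
RewriteRule ^calendar/([^/]+)/([^/]+)/?$ calendar.php?class=$1&date=$2 [L,QSA]

You would then link internally to the clean URLs so the dynamic versions stop getting crawled.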
Thomas
-
Agreed completely, and this is a good example of the type of difference changing the robots.txt file can make.
I would read all the information you can on it, as it seems to be constantly updating.
I used the info below as an example of a happy ending, but to see the problems, read the other stories you will find if you check out this link:
http://wordpress.org/support/topic/max-cpu-usage/page/2
CPU usage dropped from over 90% to less than 15%. Memory usage dropped by almost half, from 1.95 GB to 1.1 GB, including cache/buffers.
My setup is as follows:
Linode 2GB VPS
Nginx 1.4.1
Percona SQL Server using XtraDB
PHP-FPM 5.4 with APC caching db requests and opcode via W3 Total Cache
WordPress 3.5.2
All in One Event Calendar 1.11
All the best,
Thomas
-
I got the robots.txt file; I hope this will help you.
This is built into every GetFlywheel.com website (they are a managed WordPress-only hosting company). The reason they did this is the same reason Dan described above.
I'm not saying this is a perfect fix, however after speaking with the founder of GetFlywheel I know they place this in the robots.txt file for every website they host in order to try to get rid of the crawling issue.
This is an exact copy of the default robots.txt file from getflywheel.com:
Default Flywheel robots file
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /calendar/action:posterboard/
Disallow: /calendar/action:agenda/
Disallow: /calendar/action:oneday/
Disallow: /calendar/action:month/
Disallow: /calendar/action:week/
Disallow: /calendar/action:map/
This is as found on a brand-new website. If you Google "Max CPU All in One Calendar" you will see more about this issue.
I hope this is of help to you,
Thomas
PS: Here is what the maker of the All in One Event Calendar has listed on their site as a fix.
-
Hi Landon
I had a client with a similar situation. Here's what I feel is the best goal:
Calendar pages (weeks/months/days etc) - don't crawl, don't index
Specific event pages - crawl and index
Most likely the calendar URLs have not been indexed, but you can check with some site: searches. Assuming they have not been indexed, the best solution is to block crawling of those URLs with robots.txt. Calendars can go off into infinity, and you don't want to send the crawlers off into a black hole, as that's not good for crawl budget or for directing them to your actual content.
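As a minimal sketch (assuming the calendar views live under /calendar/ and the individual event pages live under /events/; adjust to your actual URL structure), the robots.txt side of that looks like:

User-agent: *
# Block the infinite day/week/month calendar views
Disallow: /calendar/
# Individual event pages such as /events/beginner-yoga/ are not blocked,
# so they can still be crawled and indexed

Keep in mind robots.txt only stops crawling; if calendar URLs are already in the index they won't necessarily drop out just because you block them, which is why the site: check is worth doing first.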
-
Is this the All-in-One Event Calendar for WordPress?
If so, I can give you the information, or you can just Google "CPU Max WordPress".
Essentially, you have to change the robots.txt file so the crawlers don't have the huge issues they do now with it.
GetFlywheel has that built into their robots.txt file; if that is your issue, I can go in and grab it for you.
Sincerely,
Thomas
-
Besides this, take a look at the schema markup for Events; it might help you mark up the page better so Google will understand what the page/event is about: http://schema.org/Event
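For example, a class page could carry Event markup along these lines (a hypothetical JSON-LD sketch; the name, date and address are invented, and microdata or RDFa work just as well):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Event",
  "name": "Beginner Yoga Class",
  "startDate": "2014-02-03T18:00",
  "location": {
    "@type": "Place",
    "name": "Main Street Studio",
    "address": "123 Main St, Springfield"
  },
  "description": "A one-hour beginner yoga class for new students."
}
</script>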
-
Do the same classes in the future link to the same page? Are you using canonical tags correctly? Sharing your URL would help us diagnose the problem and guide you better.
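For instance, if each future occurrence of a class gets its own URL but the content is essentially identical, every occurrence page could point a canonical tag at one main class page (a hypothetical sketch with made-up URLs; only do this if the occurrence pages really are duplicates of that main page):

<!-- On /classes/beginner-yoga?date=2014-02-10 and the other repeats -->
<link rel="canonical" href="http://www.example.com/classes/beginner-yoga" />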
Related Questions
-
An immediate and long-term plan for expired Events?
Hello all, I've spent the past day scouring guides and walkthroughs and advice and Q&As regarding this (including on here), and while I'm pretty confident in my approach to this query, I wanted to crowd source some advice in case I might be way off base. I'll start by saying that Technical SEO is arguably my weakest area, so please bear with me. Anyhoozles, onto the question (and advance apologies for being vague):
PROBLEM
I'm working on a website that, in part, works with providers of a service to open their own programs/centers. Most programs tend to run their own events, which leads to an influx of Event pages, almost all of which are indexed. At my last count, there were approximately 800 indexed Event pages. The problem? Almost all of these have expired, leading to a little bit of index bloat.
THINGS TO CONSIDER
A spot check revealed that traffic for each Event occurs for about a two-to-four week period then disappears completely once the Event expires. About half of these indexed Event pages redirect to a new page. So the indexed URL will be /events/name-of-event but will redirect to /state/city/events/name-of-event.
QUESTIONS I'M ASKING
How do we address all these old events that provide no real value to the user? What should a future process look like to prevent this from happening?
MY SOLUTION
Step 1: Add a noindex to each of the currently-expired Event pages. Since some of these pages have link equity (one event had 8 unique links pointing to it), I don't want to just 404 all of them, and redirecting them doesn't seem like a good idea since one of the goals is to reduce the number of indexed pages that provide no value to users.
Step 2: Remove all of the expired Event pages from the Sitemap and resubmit. This is an ongoing process due to a variety of factors, so we'd wrap this up into a complete sitemap overhaul for the client. We would also be removing the Events from the website so there are not internal links pointing to them.
Step 3: Write a rule (well, have their developers write a rule) that automatically adds noindex to each Event page once it's expired.
Step 4: Wait for Google to re-crawl the site and hopefully remove the expired Events from its index.
Thoughts? I feel like this is the simplest way to get things done quickly while preventing future expired events from being indexed. All of this is part of a bigger project involving the overhaul of the way Events are linked to on the website (since we wouldn't be 404ing them, I would simply suggest that they be removed entirely from all navigation), but ultimately, automating the process once we get this concern cleaned up is the direction I want to go. Thanks. Eager to hear all your thoughts.
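For what it's worth, the rule described in Step 3 usually just means emitting a robots meta tag (or the equivalent X-Robots-Tag HTTP header) on any Event page whose end date has passed; a sketch of the output would be:

<!-- Rendered on an event page once its end date is in the past -->
<meta name="robots" content="noindex, follow">

or, at the HTTP level, the header X-Robots-Tag: noindex.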
Technical SEO | Alces
-
What is the best way to refresh a webpage of a news site, SEO wise?
Hello all, we have a client which is a sports website. In fact it is a very big website with a huge number of news items per day. This is mostly the reason why it refreshes some of its pages with news lists every 420 seconds. We currently use meta refresh. I have read here and elsewhere that meta refreshes should be avoided. But we don't do it to send users to another page or to pass any kind of page authority/juice. Is a JavaScript refresh better in this case? Is there any other better way? What do you think and suggest? Thank you!
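For reference, a JavaScript-based refresh that could replace the meta refresh is just a timed reload (a sketch; 420000 ms corresponds to the 420-second interval mentioned in the question):

<script>
  // Reload the news-list page every 420 seconds
  setTimeout(function () {
    window.location.reload();
  }, 420000);
</script>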
Technical SEO | pkontopoulos
-
I can buy a domain from a competitor. What's the best way to make good use of these links for my existing website?
Technical SEO | Archers
-
Best 404 Error Checker?
I have a client with a lot of 404 errors from Webmaster Tools, and I have to go through and check each of the links because:
Some redirect to the correct page
Some redirect to another URL but it's a 404 error
Some are just 404 errors
Does anyone know of a tool where I can dump all of the URLs and it will tell me:
If the URL is redirected, and to where
If the page is a 404 or other error
Any tips or suggestions will be really appreciated! Thanks, SEOmoz'rs
Technical SEO | anchorwave
-
Best free tool to check internal broken links
Question says it all, I guess. What would you recommend as the best free tool to check internal broken links?
Technical SEO | RikkiD22
-
Internal vs external blog and best way to set up
I have a client that has two domains registered: one uses www.keywordaustralia.com, the other uses www.keywordaelaide.com. He had already bought and used the first domain when he came to me; I suggested the second as being worth buying, as going for a more local keyword would be more appropriate. Now I have suggested to him that a blog would be a worthy use of the second domain and a way to build links to his site. However, I am reading that as all the links will be from the same site it won't be worth much in the long run, and that an internal blog is better as it means updated content on his site. Should I use the second domain for the blog, or just 301 the second domain to his first domain? Or is it viable to use the second domain as the blog and just set up an RSS feed on his page? Is there a way to have the second domain somehow 'linked' to his first domain with the blog so that Google sees them as connected? NOOBIE o_0
Technical SEO | mamacassi
-
What is the best website structure for SEO?
I've been on SEOmoz for about 1 month now, and everyone says that, depending on the type of business, you should build up your website structure for SEO as the 1st step. I have a new client, click here (www version doesn't work)... some bugs we are fixing now. We are almost finished with the design & layout. Two questions have been running through my head:
1. What would the best URL category for the shop be? /products/ is the current URL category, e.g. /products/door-handles.html
2. What would you use for the main menu as a section for getting the most out of SEO? Personally I am thinking of making 2-3 main categories on the left, a section where I can add content (3-4 paragraphs, images, maybe a video). So the main page focuses on the domain name more, and the rest of the sections would focus on specific keywords; this way I avoid cannibalization. The main keyword target is "door handles". Any suggestions would be appreciated.
Technical SEO | mosaicpro
-
Double byte characters in the URL - best avoided?
We are doing some optimisation on sites in the APAC region, namely China, Hong Kong, Taiwan and Japan. We have set the URL generator to automatically use the heading of the page in the URL, which works fine for countries using Latin characters but is causing problems, particularly in IE, when it comes to the double-byte countries. For some reason, IE struggles with double-byte characters and displays URLs in their rather ugly, encoded form. Anybody got any suggestions on whether we should persist with the keyword URLs or revert to the non-descriptive URLs for the double-byte countries? The reason I ask is that it's a balance of SEO benefit vs. not scaring IE users off with ugly URLs that look dreadful and spammy.
Technical SEO | Red_Mud_Rookie