
How to temporarily shut down a website without affecting SEO?

Accessibility is one of the key elements of SEO: a website that is frequently unreachable will not be favored by search engines. Sometimes, however, a website has to be taken offline temporarily, for example for server maintenance or to upgrade and repair the site's software. How can you handle these situations without hurting your search engine rankings?

Any closing of the website should be avoided as much as possible

Search engine crawlers keep revisiting a website's URLs. If those URLs are unreachable, the crawlers will back off and visit less often for a while, which is bad for SEO.


However, if normal access is restored within a very short time, the impact will be minor. As long as the response is not a 404 error, the crawler will simply try again later. A server restart, or a brief shutdown followed by a quick recovery, therefore has little effect on SEO.


What should I do if I plan to shut down my site for a day or more?

The methods described in this article can only help the website reduce its losses; they cannot avoid them completely.



The best suggestions include:

  • Use an HTTP 503 status code
  • Keep returning 503 for no more than a day
  • Make sure robots.txt keeps returning a 200 status code
  • If your website will be down for more than a day, be prepared for the consequences: search engines will reduce their crawl frequency for a while, until the site has been running steadily again for a longer period.

HTTP 503 status code

When a website goes offline, make sure it returns an HTTP 503 status code to web crawlers. 503 is one of the HTTP codes that tells the client the request could not be fulfilled; in this case, the message it conveys is "Service temporarily unavailable".


When web crawlers like Googlebot or Baiduspider encounter a 503 status code, they understand that the site is currently unavailable but may become available again later.


With a 503 code, the crawler knows to check the site again instead of removing it from the search index.
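As an illustration, here is a minimal sketch of a maintenance responder built on Python's standard http.server module; the port, the maintenance message, and the one-hour Retry-After value are placeholders, not requirements.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class MaintenanceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><h1>Down for maintenance</h1></body></html>"
            # 503 tells crawlers the outage is temporary; Retry-After hints
            # (in seconds) at when they should come back.
            self.send_response(503)
            self.send_header("Retry-After", "3600")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), MaintenanceHandler).serve_forever()

In practice the same idea applies whatever the stack: during maintenance, every page answers 503 plus a Retry-After header instead of disappearing or returning errors like 404.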


You can verify this in Chrome with the DevTools debugger: select the "Network" tab at the top and refresh the page. The top entry should be shown in red with a 503 status.
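The same check can be done from a short script. The sketch below uses Python's urllib and a hypothetical URL; swap in your own address.

    import urllib.error
    import urllib.request

    url = "https://www.example.com/"          # hypothetical address
    try:
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status)           # expect 200 while the site is live
    except urllib.error.HTTPError as e:
        # 4xx/5xx responses raise HTTPError; during maintenance expect 503
        print(url, e.code, e.headers.get("Retry-After"))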


The HTTP 503 status should not last more than a day

The crawler will try to return to the site after initially encountering a 503, but will not retry forever.


If the crawler sees a 503 code day after day, the search engine will eventually start removing pages from its index.


Ideally, keep the 503 status in place for a maximum of one day. One day is not a hard cut-off, but the longer it continues, the greater the risk.



Robots.txt should keep returning a 200 status code

While pages from a closed site should return a 503 code, the robots.txt file should return a 200 status code.


The robots.txt file should not return a 503; otherwise, crawlers may assume the entire site is blocked from crawling.
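Continuing the earlier http.server sketch, this exception can be expressed as a single branch in the maintenance handler: robots.txt answers 200 while every other path answers 503. The robots.txt rules shown are placeholders.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROBOTS = b"User-agent: *\nDisallow:\n"    # placeholder rules: allow crawling

    class MaintenanceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/robots.txt":
                # robots.txt keeps answering 200 so crawlers do not treat
                # the whole site as blocked
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(ROBOTS)))
                self.end_headers()
                self.wfile.write(ROBOTS)
            else:
                # everything else stays temporarily unavailable
                self.send_response(503)
                self.send_header("Retry-After", "3600")
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), MaintenanceHandler).serve_forever()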


Prepare for negative impact

If the website will be offline for more than a day, be prepared accordingly: there is no way to take a website offline for a long time without any negative consequences.


When the site goes back online, check whether the key pages are still indexed. If not, resubmit them for indexing as soon as possible.
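Indexing status itself has to be confirmed in the search engine's own console, but a quick script can at least verify that the key URLs respond with 200 again before you resubmit them. The URL list below is hypothetical.

    import urllib.error
    import urllib.request

    # hypothetical list of the site's most important URLs
    key_pages = [
        "https://www.example.com/",
        "https://www.example.com/products/",
        "https://www.example.com/contact/",
    ]

    for url in key_pages:
        try:
            with urllib.request.urlopen(url) as resp:
                print(resp.status, url)       # 200 means the page is reachable again
        except urllib.error.HTTPError as e:
            print(e.code, url)                # anything else still needs attention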


How to avoid 404 errors on important pages

When a crawler encounters a 503 status code, it will come back and retry. A 404 status code is different: it tells the crawler that the page no longer exists. Some search engines may retry a 404 a few times, but if it keeps appearing, the page will be dropped from the index. So while the site is down, make sure important pages return 503 rather than 404.
