How to download an entire website from Google Cache?

Published: 2019-12-27

If a website was recently deleted and the Wayback Machine didn't save the latest version, what can you do to recover the content? Google Cache can help. All you need is to install this extension: https://www.downthemall.net/

1 - Install the DownThemAll extension in your browser.

2 - Open Google Search in the browser and set "100 results per page" in the "Settings" - "Search settings" menu. This gives you more downloadable cached pages per click. Unfortunately, 100 results is the maximum Google Search allows.
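
If you prefer building the results URL by hand instead of changing the setting, Google's classic search URL accepts `num` (results per page, capped at 100) and `start` (zero-based offset) query parameters. These are long-standing but unofficial parameters, so this is only a sketch under the assumption that Google keeps honoring them:

```python
from urllib.parse import urlencode

def search_url(query: str, page: int = 0, per_page: int = 100) -> str:
    """Build a Google search URL for the given query and zero-based page."""
    params = {"q": query, "num": per_page, "start": page * per_page}
    return "https://www.google.com/search?" + urlencode(params)

# Second page of results, 100 per page:
print(search_url("archivarix", page=1))
# → https://www.google.com/search?q=archivarix&num=100&start=100
```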

3 - Find all cached pages of your site on Google by entering site:yourwebsite.com in the search field. Here is an example with spacex.com.
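
For reference, each cached copy lives at a predictable address on webcache.googleusercontent.com, so once you have a list of page URLs you can build the cache links yourself. A minimal sketch (note that Google may refuse automated requests to this endpoint):

```python
from urllib.parse import quote

def cache_url(page_url: str) -> str:
    """Build the Google Cache URL for a given page URL."""
    # safe="" percent-encodes "/" and ":" so the whole URL fits in one parameter
    return ("https://webcache.googleusercontent.com/search?q=cache:"
            + quote(page_url, safe=""))

print(cache_url("https://www.spacex.com/about"))
# → https://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fwww.spacex.com%2Fabout
```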

4 - In the DownThemAll panel, enter cache in the "Fast Filtering" field. This filter selects all links to cached pages. Press the Download button and wait for... a download error.
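
Why does such a simple filter work? On a results page, only the cached-copy links contain the substring "cache" (it appears in the webcache.googleusercontent.com host and the cache: query), so a plain substring match is enough. A small illustration with made-up example.com links:

```python
import re

links = [
    "https://webcache.googleusercontent.com/search?q=cache:example.com/page1",
    "https://www.google.com/search?q=site:example.com&start=100",
    "https://webcache.googleusercontent.com/search?q=cache:example.com/page2",
]

# "cache" as a filter matches any link containing that substring --
# here, exactly the two cached-copy links.
cached = [u for u in links if re.search(r"cache", u)]
print(len(cached))  # → 2
```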

5 - After 100 or more downloaded files, Google will interrupt the process and ask you to verify yourself via a captcha. DownThemAll can't solve captchas; it just stops downloading. So you need to return to Google Search, open any search result, solve the captcha manually, and restart the download. This gives you the next batch of files.
As you can see, the process is not fully automated, but it is quite fast and completely free. If you need to scrape thousands or millions of cached pages, a better option is to buy an SEO tool with a "Google cache scraper" feature.
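
The manual loop above (download a batch, hit a captcha, solve it, resume) can also be approximated in a script that works through the cache links in batches and stops as soon as a blocked response comes back. This is only a sketch: the "unusual traffic" marker, batch size, and pause are assumptions, not documented Google behaviour, and you would still solve the captcha in a browser before rerunning with the remaining URLs:

```python
import time

# Assumed marker of Google's captcha interstitial -- not documented behaviour.
CAPTCHA_MARKER = "unusual traffic"

def download_in_batches(urls, fetch, batch_size=100, pause=2.0):
    """Fetch each cache URL with fetch(url) -> str, sleeping between
    batches. Stops at the first captcha page and returns the pages
    saved so far plus the URLs still left, so a rerun can resume."""
    saved, remaining = [], list(urls)
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for i, url in enumerate(batch):
            body = fetch(url)
            if CAPTCHA_MARKER in body:
                # Put the unfetched part back for the next run.
                return saved, batch[i:] + remaining
            saved.append(body)
        time.sleep(pause)  # be polite between batches
    return saved, []
```

In real use, `fetch` would be a function that downloads a URL (e.g. with urllib) and saves the body to disk; injecting it as a parameter also makes the batching logic easy to test without network access.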


The use of article materials is allowed only if a link to the source is posted: https://en.archivarix.com/blog/download-website-google-cache/
