How does Archivarix work?

Published: 2019-11-22

The Archivarix system is designed to download and restore websites: both sites that are no longer online, recovered from the Web Archive, and sites that are still live. This is its main difference from other “downloaders” and “site parsers”. Archivarix's goal is not just to download a website, but to restore it in a form that will run on your own server.

Let's start with the module that downloads websites from the Web Archive. It runs on virtual servers located in California, chosen to get the highest possible connection speed to the Web Archive itself, whose servers are in San Francisco. After you enter the domain in the appropriate field on the module's page, it takes a screenshot of the archived website and queries the Web Archive API for a list of the files available on the specified recovery date.
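The public interface for such a file listing is the Wayback Machine CDX API. As a rough sketch of the kind of request this step involves (Archivarix's actual query parameters are not published; the ones below are standard CDX options chosen for illustration):

```python
from urllib.parse import urlencode

def build_cdx_query(domain, to_timestamp):
    """Build a Wayback Machine CDX API query listing captures of a domain
    up to a given snapshot date (YYYYMMDD)."""
    params = {
        "url": f"{domain}/*",       # every captured URL under the domain
        "to": to_timestamp,         # only captures before the recovery date
        "output": "json",
        "filter": "statuscode:200"  # skip redirects and error pages
    }
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

print(build_cdx_query("example.com", "20191122"))
```

Fetching that URL returns one row per capture (original URL, timestamp, MIME type, length), which is exactly the information needed to plan and verify a full download.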

Having received a response to the request, the system generates a message with an analysis of the received data. The user only needs to press the confirmation button in that message to start downloading the website.

Using the Web Archive API provides two advantages over direct downloading, where a script simply follows the website's links. First, all the files in the recovery are known immediately, so you can estimate the website's size and the time required to download it. The Web Archive itself sometimes works very unstably: connections break and files arrive incomplete, so the module's algorithm constantly checks the integrity of the received files and, when a check fails, reconnects to the Web Archive server and downloads the content again. Second, because of how the Web Archive indexes websites, not all archived files have direct links, which means that downloading a website simply by following its links would miss them. Restoring through the Web Archive API, as Archivarix does, therefore recovers the maximum possible amount of archived website content for the specified date.
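The check-and-retry loop can be sketched as follows. This is a minimal illustration, not Archivarix's actual code: it assumes the integrity check is a length comparison against the size reported by the index, and takes the fetch operation as a callable so the retry logic is visible on its own.

```python
import time

def fetch_with_retry(fetch, expected_length, max_attempts=5, delay=0):
    """Re-request a file until its size matches the length reported by the
    index; a stand-in for whatever integrity check the module really uses."""
    for _ in range(max_attempts):
        data = fetch()
        if data is not None and len(data) == expected_length:
            return data
        time.sleep(delay)  # pause before reconnecting to the archive
    raise IOError("could not fetch a complete copy of the file")

# Simulate an unstable Web Archive connection: truncated twice, then complete.
attempts = []
def flaky_fetch():
    attempts.append(1)
    return b"full file" if len(attempts) >= 3 else b"trunc"

result = fetch_with_retry(flaky_fetch, expected_length=len(b"full file"))
print(result, "after", len(attempts), "attempts")
```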

After completing the operation, the Web Archive download module transfers the data to the processing module, which assembles the received files into a website suitable for installation on an Apache or Nginx server. The restored website runs on an SQLite database, so to get started you just upload it to your server; no additional modules, MySQL databases, or user accounts need to be set up. The processing module also optimizes the resulting website: it optimizes images and compresses CSS and JS. Compared to the original, this can significantly improve the restored website's loading speed; some unoptimized WordPress sites with a pile of plugins and uncompressed media files load much faster after processing by this module. Obviously, if the website was well optimized to begin with, the gain will be small.
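The serving principle is simple: map the requested path to stored content with a single indexed lookup. A minimal sketch, with a hypothetical schema (the real Archivarix database layout is not documented here):

```python
import sqlite3

# Hypothetical schema for illustration: one row per archived file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, mime TEXT, body BLOB)")
conn.execute("INSERT INTO pages VALUES (?, ?, ?)",
             ("/about.html", "text/html", b"<h1>About</h1>"))

def serve(path):
    """Return (mime, body) for a requested path, or a 404 page."""
    row = conn.execute("SELECT mime, body FROM pages WHERE path = ?",
                       (path,)).fetchone()
    return row if row else ("text/html", b"<h1>404</h1>")

print(serve("/about.html"))
```

Because everything lives in one SQLite file next to the loader script, "installation" really is just copying files to the web root.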

The processing module removes advertising, counters, and analytics by checking the received files against an extensive database of advertising and analytics providers. External links and clickable contacts are removed simply by matching code checksums. Overall, this algorithm cleans the website of “traces of the previous owner” quite effectively, although some manual correction may still be needed. For example, a hand-written JavaScript snippet redirecting visitors to some monetization website will not be removed by the algorithm. Sometimes you also need to add missing pictures or remove unwanted leftovers, such as a spammed guestbook. Hence the need for an editor for the resulting website, and one already exists: Archivarix CMS.
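In essence, the cleanup step matches page markup against known provider signatures and deletes the matching fragments. A toy version, where two regex patterns stand in for the "extensive database" (the real list is far larger and the real matching logic is Archivarix's own):

```python
import re

# Tiny stand-in for a provider signature database.
TRACKER_PATTERNS = [
    r'<script[^>]*google-analytics\.com[^>]*>.*?</script>',
    r'<script[^>]*googletagmanager\.com[^>]*>.*?</script>',
]

def strip_trackers(html):
    """Delete every script fragment that matches a known tracker pattern."""
    for pat in TRACKER_PATTERNS:
        html = re.sub(pat, "", html, flags=re.S | re.I)
    return html

page = ('<p>Hi</p>'
        '<script src="https://www.google-analytics.com/analytics.js"></script>')
print(strip_trackers(page))
```

A self-written redirect script matches no known signature, which is exactly why, as noted above, it survives automatic cleaning and has to be removed by hand.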

This is a simple and compact CMS designed for editing websites created by the Archivarix system. It lets you search and replace code across the whole website using regular expressions, edit content in a WYSIWYG editor, and add new pages and files. Archivarix CMS can be used together with any other CMS on the same website.
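The site-wide search-and-replace is, at its core, an ordinary regex substitution applied to every page. Archivarix CMS does this through a web interface, but the underlying operation looks roughly like this (file names and pattern here are made up for illustration):

```python
import re

# A miniature "site": path -> page source.
site = {
    "/index.html": '<a href="http://old-domain.com/page">link</a>',
    "/blog.html":  'Text without links.',
}

def search_replace(pages, pattern, repl):
    """Apply one regex substitution to every page of the site."""
    return {path: re.sub(pattern, repl, html) for path, html in pages.items()}

fixed = search_replace(site, r'http://old-domain\.com', 'https://new-domain.com')
print(fixed["/index.html"])
```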

Now let's look at the other module, the one that downloads existing websites. Unlike the Web Archive module, there is no way to predict in advance how many files need to be downloaded, or which ones, so this module's servers work completely differently: a spider simply follows every link present on the website being downloaded. To keep the script from falling into an endless loop on auto-generated pages, the maximum link depth is limited to ten clicks, and the maximum number of files to download from the website must be specified in advance.
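Both safety limits, the depth cap and the file cap, fit naturally into a breadth-first crawl. A sketch, with an in-memory link graph standing in for fetching pages and extracting their links:

```python
from collections import deque

def crawl(start, links, max_depth=10, max_files=1000):
    """Breadth-first crawl of a link graph, stopping at a maximum link
    depth and a maximum number of downloaded files."""
    seen, queue, order = {start}, deque([(start, 0)]), []
    while queue and len(order) < max_files:
        url, depth = queue.popleft()
        order.append(url)       # "download" this page
        if depth == max_depth:
            continue            # don't follow links any deeper
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return order

# A page that keeps generating "next" links is cut off by the depth limit.
graph = {f"/page{i}": [f"/page{i + 1}"] for i in range(100)}
print(len(crawl("/page0", graph, max_depth=10)))
```

With `max_depth=10`, the endless chain above yields only eleven pages (depths 0 through 10) instead of running forever.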

To download the content you need as completely as possible, several features have been built into this module. You can select a different User-Agent for the spider, for example Chrome Desktop or Googlebot. A referrer helps bypass cloaking: if you need to download exactly what a user sees when arriving from search, you can set a Google, Yandex, or other website referrer. To protect against banning by IP, you can download the website through the Tor network, in which case the spider's IP changes randomly within that network. Other parameters, such as image optimization and removal of ads and analytics, are the same as in the Web Archive download module.
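In HTTP terms these options map to a request header set and, for Tor, a SOCKS proxy. A sketch of what such settings look like; the function and parameter names are invented for illustration, and the proxy address is the conventional local Tor SOCKS port:

```python
def spider_settings(user_agent="Googlebot", referrer=None, use_tor=False):
    """Assemble illustrative request settings for a crawl."""
    headers = {"User-Agent": user_agent}
    if referrer:
        # e.g. a Google SERP URL, so cloaked sites serve the "search" version
        headers["Referer"] = referrer
    proxies = {}
    if use_tor:
        # Route through a local Tor SOCKS proxy; the exit IP then rotates
        # within the Tor network instead of exposing the crawler's own.
        proxies = {"http": "socks5h://127.0.0.1:9050",
                   "https": "socks5h://127.0.0.1:9050"}
    return headers, proxies

print(spider_settings(referrer="https://www.google.com/search?q=example",
                      use_tor=True))
```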

After the download is complete, the content is transferred to the processing module, which works exactly as described above for websites downloaded from the Web Archive.

It is also worth mentioning that restored or downloaded websites can be cloned. Sometimes you discover during recovery that you chose the wrong parameters: for example, removing external links turned out to be unnecessary because you wanted to keep some of them. In that case you do not need to start the download again; just set the new parameters on the recovery page and re-create the site.

The use of article materials is allowed only if a link to the source is posted.
