How does Archivarix work?

The Archivarix system is designed to download and restore websites: those that no longer exist, from the Web Archive, as well as those that are still online. This is its main difference from other "downloaders" and "site parsers". Archivarix's goal is not only to download a website, but to restore it in a form that will work on your own server.

Let's start with the module that downloads websites from the Web Archive. It runs on virtual servers located in California; that location was chosen to get the fastest possible connection to the Web Archive itself, whose servers are located in San Francisco. After you enter a domain in the appropriate field on the module's page https://en.archivarix.com/restore/, the module takes a screenshot of the archived website and queries the Web Archive API for the list of files available on the specified recovery date.
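As an illustration of that last step, here is a minimal sketch of such a file-list request against the public Wayback Machine CDX API. The domain and recovery date are made-up placeholders, and our own requests and post-processing are, of course, more involved.

```python
import json
import urllib.request

# Placeholder restore parameters, for illustration only.
DOMAIN = "example.com"
RECOVERY_DATE = "20190101"  # snapshot cutoff, YYYYMMDD

params = (
    f"url={DOMAIN}/*"          # match every URL under the domain
    "&output=json"
    f"&to={RECOVERY_DATE}"     # ignore captures made after this date
    "&fl=timestamp,original,mimetype,statuscode"
    "&collapse=urlkey"         # keep one capture per unique URL
    "&filter=statuscode:200"   # keep only successful captures
)
with urllib.request.urlopen(f"http://web.archive.org/cdx/search/cdx?{params}") as resp:
    rows = json.load(resp)

# The first row of the JSON response is a column header.
header, captures = rows[0], rows[1:]
print(f"{len(captures)} files available for {DOMAIN} as of {RECOVERY_DATE}")
for timestamp, original, mimetype, status in captures[:10]:
    print(timestamp, mimetype, original)
```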

Latest news:
2020.01.07
Another Archivarix CMS update adds new features. Now any old site can be correctly converted to UTF-8 with the click of a button. Search filtering has also improved: results can now be filtered by MIME type as well.
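For a rough idea of what such a conversion involves, here is a minimal sketch, assuming a page saved in windows-1251 (a common pre-UTF-8 encoding). The file name and source encoding are assumptions; Archivarix CMS detects and handles this internally.

```python
from pathlib import Path

# Placeholder file and source encoding, assumed for illustration.
SOURCE = Path("index.html")
LEGACY_ENCODING = "windows-1251"

# Decode the bytes using the legacy encoding, then re-encode as UTF-8.
html = SOURCE.read_bytes().decode(LEGACY_ENCODING)
# Update the charset declaration so browsers decode the page correctly.
html = html.replace(f"charset={LEGACY_ENCODING}", "charset=utf-8")
SOURCE.write_bytes(html.encode("utf-8"))
```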
2019.12.20
We have released the long-awaited Archivarix CMS update. In addition to various improvements and optimizations, the new version adds a very useful feature for additional filtering of search results, and full support for a tree structure of URLs in recoveries with a large number of files. For details, see the Archivarix CMS script changelog.
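The idea behind the URL tree can be shown with a short sketch: a flat list of restored URLs is grouped into nested path segments, so a large recovery can be browsed folder by folder. The sample URLs below are made up.

```python
from urllib.parse import urlparse

# Made-up sample URLs standing in for a large recovery.
urls = [
    "http://example.com/blog/2018/post-1.html",
    "http://example.com/blog/2018/post-2.html",
    "http://example.com/about.html",
]

# Nest each URL's path segments into a dict-of-dicts tree.
tree = {}
for url in urls:
    node = tree
    for segment in urlparse(url).path.strip("/").split("/"):
        node = node.setdefault(segment, {})

print(tree)
# {'blog': {'2018': {'post-1.html': {}, 'post-2.html': {}}}, 'about.html': {}}
```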
2019.11.27
Our WordPress plugin, Archivarix External Images Importer, has been released. The plugin imports images from third-party websites that are linked in posts and pages into the WordPress gallery. If an image is currently unavailable or has been deleted, the plugin downloads a copy of it from the Web Archive.
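That fallback logic can be sketched with the public Wayback availability API: try the live image first, and if the request fails, fetch the closest archived snapshot. This is an illustration of the approach, not the plugin's actual code, and the image URL is a placeholder.

```python
import json
import urllib.request

# Placeholder image URL; the plugin works on images linked in real posts.
IMAGE_URL = "http://example.com/images/photo.jpg"

def fetch_or_recover(url: str) -> bytes:
    """Return the live image, or the closest Web Archive copy if it is gone."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()  # the live copy still exists
    except Exception:
        # Ask the public Wayback availability API for the closest snapshot.
        api = f"http://archive.org/wayback/available?url={url}"
        with urllib.request.urlopen(api, timeout=10) as resp:
            closest = json.load(resp)["archived_snapshots"].get("closest")
        if not closest or not closest.get("available"):
            raise RuntimeError(f"no archived copy of {url}")
        with urllib.request.urlopen(closest["url"], timeout=10) as resp:
            return resp.read()  # the archived copy

data = fetch_or_recover(IMAGE_URL)
```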
2019.11.20
We have added a new section to our site: the Archivarix Blog. There you can read useful information about how our system works and about site restoration.
2019.10.02
We have recently updated our system with two new options:
- You can download Darknet .onion sites. Just enter a .onion website address in the "Domain" field here and our system will download it from the Tor network just like a regular website (see the first sketch after this list).
- Content extractor. Archivarix can not only download existing sites or restore them from the Web Archive, but also extract content from them. In the "Advanced options" field, select "Extract structured content". You will then receive a complete archive of the entire site, as well as an archive of its articles in XML, CSV, WXR, and JSON formats. When creating the article archive, our parser keeps only meaningful content, excluding duplicate articles, design elements, menus, ads, and other unwanted elements (see the second sketch after this list).
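The first sketch shows the general technique for fetching a .onion address through a local Tor daemon; it assumes Tor is listening on its default SOCKS port 9050, and the onion address is a placeholder, not our system's internal code.

```python
import requests  # third-party; SOCKS support needs the "requests[socks]" extra

# Placeholder .onion address (real v3 addresses are 56 characters long).
ONION_URL = "http://exampleexampleexampleexampleexampleexampleexampleexam.onion/"

# socks5h (as opposed to socks5) resolves hostnames through the proxy,
# which is required because .onion names only exist inside the Tor network.
proxies = {
    "http": "socks5h://127.0.0.1:9050",   # default Tor SOCKS port
    "https": "socks5h://127.0.0.1:9050",
}
response = requests.get(ONION_URL, proxies=proxies, timeout=60)
print(response.status_code, len(response.content), "bytes")
```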
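The second sketch shows a much-simplified version of structured content extraction: strip markup that is clearly not article content and keep the readable text. The selectors are illustrative guesses, our parser's real heuristics are more sophisticated, and JSON is just one of the output formats.

```python
import json
from bs4 import BeautifulSoup  # third-party: beautifulsoup4

# Placeholder input file standing in for a downloaded page.
html = open("page.html", encoding="utf-8").read()
soup = BeautifulSoup(html, "html.parser")

# Drop elements that are clearly not article content: scripts, styles,
# navigation, headers, footers, and sidebars.
for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
    tag.decompose()

article = {
    "title": soup.title.get_text(strip=True) if soup.title else "",
    "text": soup.get_text(" ", strip=True),
}
print(json.dumps(article, ensure_ascii=False, indent=2))
```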