How to hide your backlinks from competitors?

Published: 2019-12-10

Competitor backlink analysis is a well-known part of an SEO specialist's work. If you are building a PBN (Private Blog Network), you probably don't want other webmasters to know where your backlinks come from. A large number of services analyze backlinks and keywords, the best known being Majestic, Ahrefs, and Semrush. All of them run their own crawler bots, and those bots can be blocked.

One way is to write Disallow rules for these bots in the robots.txt file, but that file is visible to everyone, and such rules can become one of the footprints that identify a website as part of your PBN. It is much better to block backlink checkers in the .htaccess file, which is processed by the server and never shown to visitors. As a bonus, blocking these crawlers also noticeably reduces the load on your web server.
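For comparison, the robots.txt approach that leaks your block list would look something like this (a minimal sketch using three of the crawlers from the list below; the User-agent tokens are the ones these bots announce):

# robots.txt -- served publicly, so anyone can read which bots you block
User-agent: AhrefsBot
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: SemrushBot
Disallow: /

These rules also rely on the crawler choosing to obey them, whereas the .htaccess method below refuses the requests outright.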

Here is an example of how this can be done. Just add this code to your .htaccess file:

# Tag any request whose User-Agent matches a known backlink crawler
# with the environment variable "botstop"
SetEnvIfNoCase User-Agent .*aboundexbot.* botstop
SetEnvIfNoCase User-Agent .*ahrefsbot.* botstop
SetEnvIfNoCase User-Agent .*backlinkcrawler.* botstop
SetEnvIfNoCase User-Agent .*blekkobo.* botstop
SetEnvIfNoCase User-Agent .*blexbot.* botstop
SetEnvIfNoCase User-Agent .*dotbot.* botstop
SetEnvIfNoCase User-Agent .*dsearch.* botstop
SetEnvIfNoCase User-Agent .*exabot.* botstop
SetEnvIfNoCase User-Agent .*ezooms.* botstop
SetEnvIfNoCase User-Agent .*gigabot.* botstop
SetEnvIfNoCase User-Agent .*ia_archiver.* botstop
SetEnvIfNoCase User-Agent .*linkdexbot.* botstop
SetEnvIfNoCase User-Agent ".*lipperhey spider.*" botstop
SetEnvIfNoCase User-Agent .*majestic-12.* botstop
SetEnvIfNoCase User-Agent .*majestic-seo.* botstop
SetEnvIfNoCase User-Agent .*meanpathbot.* botstop
SetEnvIfNoCase User-Agent .*megaindex.* botstop
SetEnvIfNoCase User-Agent .*mj12bot.* botstop
SetEnvIfNoCase User-Agent .*ncbot.* botstop
SetEnvIfNoCase User-Agent .*nutch.* botstop
SetEnvIfNoCase User-Agent .*pagesinventory.* botstop
SetEnvIfNoCase User-Agent .*rogerbot.* botstop
SetEnvIfNoCase User-Agent .*scoutjet.* botstop
SetEnvIfNoCase User-Agent .*searchmetricsbot.* botstop
SetEnvIfNoCase User-Agent .*semrushbot.* botstop
SetEnvIfNoCase User-Agent .*seokicks-robot.* botstop
SetEnvIfNoCase User-Agent .*sistrix.* botstop
SetEnvIfNoCase User-Agent .*sitebot.* botstop
SetEnvIfNoCase User-Agent .*spbot.* botstop
# Deny tagged requests for GET, POST, and HEAD (Apache 2.2 access-control syntax)
<Limit GET POST HEAD>
Order Allow,Deny
Allow from all
Deny from env=botstop
</Limit>
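The Order/Allow/Deny directives above use Apache 2.2 syntax; on Apache 2.4 they only work if the mod_access_compat module is loaded. A sketch of the native Apache 2.4 equivalent, reusing the same botstop variable set by the SetEnvIfNoCase lines:

# Apache 2.4 (mod_authz_core): allow everyone except tagged requests
<RequireAll>
Require all granted
Require not env botstop
</RequireAll>

Either version is easy to verify from a shell: send a request with a blocked user-agent via curl and check that the server answers 403 (substitute your own domain for example.com):

curl -I -A "AhrefsBot" https://example.com/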

The use of this article's materials is allowed only if a link to the source is posted: https://en.archivarix.com/blog/hide-backlinks/
