We are resurrecting an online auto parts store and making 3 million dollars on SEO traffic

In this article I will tell you how to reach the top and squeeze money out of commercial SEO traffic in today's conditions. Expect practical information, up-to-date data and tools, and plenty of hardcore detail. Let's go!


What we started with:

an abandoned online auto parts store for foreign cars;

near-zero search traffic since the end of 2020;

a successfully operating store in Eastern Europe with the possibility of pickup;

Yandex.Metrica records crowds of bots daily;

the cherry on top: three strange proxying doorways.

Before starting the usual work on semantics, structure, content, and so on, we decided to begin by clearing the traffic of fraud and protecting the online store from future attacks.

This is one of the standard points of work when bringing a new project to the top and actively increasing search traffic.

As visibility grows across clusters of search queries, the volume of negative behavioral-factor (PF) manipulation grows in direct proportion in almost every niche where there is traffic and money.

We protect the site from negative cheating and competitive intelligence

Negative manipulation of behavioral factors is commonplace and surprises no one today. Nevertheless, many neglect protection and then wonder why their search traffic stagnates. Without a firewall in place, there can be no talk of reaching the top in competitive niches in Yandex.

As the first barrier of protection we use Cloudflare; the free plan is quite enough for our tasks. By itself, Cloudflare does not filter bots, especially behavioral ones, so to provide basic protection you need to configure the rules manually.

Rule #1 – allow full access to all known search-engine bots, social networks, our own servers, APIs, and so on.

Rule #2 – completely block access to the online store from all countries except Russia, and block dozens of competitor-analysis and monitoring services by user agent.

This also covers protecting important administrative sections of the site and sensitive files, with access allowed only from certain IP addresses. At the same time, good bots will have access to the site from any country.

Rule #3 – all requests over IPv6 or plain HTTP receive a captcha. Large pools of IPv6 addresses can be obtained very cheaply, almost for free, and they are used for the parsing, cheating, and manipulation we want to protect ourselves from.

In Russia, IPv6 is still rare on consumer networks, so only a tiny percentage of users is potentially affected. All HTTP requests receive a captcha and then a 301 redirect to HTTPS – this goes a long way against some public PF-cheating services and DDoS bots.

Rule #4 – traffic using protocols below HTTP/2, and all direct visits, lands on a 5-second JS check. This rule perfectly filters out parasitic traffic and tons of junk.

Behavioral bots and people with JS support in the browser will pass the check and reach the site. In Yandex.Metrica, such traffic moves from direct visits to internal transitions.

Some old mobile browsers still do not support HTTP/2, so a small percentage of real users may also land on the JS check.

Some smart bots disguise themselves as real visitors over HTTP/2 or HTTP/3. We will filter them at the second barrier of protection.
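Assembled together, rules #2–#4 might look roughly like this in Cloudflare's firewall-rule expression language (a sketch: the field names follow Cloudflare's documented schema, but the blocked user agents and chosen actions are illustrative):

```
# Rule #2 – geo-block everything outside Russia plus scraper user agents,
# while letting Cloudflare's verified "known bots" through (Rule #1)
(ip.geoip.country ne "RU" and not cf.client.bot)
or (http.user_agent contains "AhrefsBot")
or (http.user_agent contains "SemrushBot")
=> Block

# Rule #3 – captcha for IPv6 clients and for plain-HTTP requests
(ip.src in {::/0}) or (not ssl)
=> Challenge (captcha)

# Rule #4 – 5-second JS check for pre-HTTP/2 protocols and direct visits
(http.request.version in {"HTTP/1.0" "HTTP/1.1"}) or (http.referer eq "")
=> JS Challenge
```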

Cloudflare settings that won't help against behavioral bots:

enabling Under Attack mode;

Bot Fight Mode and Super Bot Fight Mode (available on paid plans).

In the screenshot above you can see the CSR (Challenge Solve Rate) column – the ratio of requests that passed the bot check to the total number of requests. If CSR > 3%, something is configured incorrectly and the rule is catching a lot of real users.
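As a quick sanity check, the CSR threshold from the paragraph above is just a ratio (the 3% cut-off is the article's rule of thumb; the numbers below are made up):

```python
def challenge_solve_rate(solved: int, total: int) -> float:
    """CSR = share of challenged requests that were actually solved."""
    return solved / total if total else 0.0

# A rule that challenged 120,000 requests, of which 1,800 were solved:
csr = challenge_solve_rate(1_800, 120_000)
print(f"CSR = {csr:.2%}")  # 1.50% -- below the 3% alarm threshold
assert csr <= 0.03         # rule looks healthy; >3% would mean it
                           # catches too many real users
```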

As the analytics show, the firewall stops ~100k parasitic requests to the site per day. These requests terminate at Cloudflare and never reach the web server.

As a second barrier of protection we use the Antibot.Cloud service – currently the most flexible solution for protecting against negative manipulation. For advanced protection, I recommend the following settings in the config.

Most users with clean fingerprints, cookies, and IPs pass the check automatically. Suspicious users and bots are shown a window with a color-selection challenge (the referrer does not matter).

In this configuration, behavioral bots are filtered out completely and cannot reach the online store. Yandex.Metrica does not see such bots either, and no site visits are recorded in the bot's cookies.

We track all statistics in the web interface and promptly add traffic filtering rules in a couple of clicks.

With such a firewall, we provide:

protection against behavioral and spam bots;

protection against any parsers (proxies, http headers and user-agent do not matter);

protection against fake bots whose user agent mimics official search engine robots;

protection against proxying of the site by doorways;

verification of bots by PTR records;

reducing the load on the web server.
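Checking a bot by its PTR record means a reverse DNS lookup plus a forward confirmation, so a fake "Googlebot" on a random IP fails the check. A minimal sketch in Python (the trusted-suffix list is illustrative, not Antibot.Cloud's actual code):

```python
import socket

# Hostname suffixes that genuine Yandex and Google crawlers resolve to
# (illustrative subset).
TRUSTED_SUFFIXES = (".yandex.ru", ".yandex.net", ".yandex.com",
                    ".googlebot.com", ".google.com")

def has_trusted_suffix(hostname: str) -> bool:
    """Pure check: does the PTR hostname belong to a known crawler domain?"""
    return hostname.rstrip(".").endswith(TRUSTED_SUFFIXES)

def verify_search_bot(ip: str) -> bool:
    """Reverse-resolve the IP, check the suffix, then forward-confirm that
    the hostname resolves back to the same IP (defeats spoofed PTRs)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # PTR lookup
        if not has_trusted_suffix(hostname):
            return False
        _, _, addrs = socket.gethostbyname_ex(hostname)  # forward confirm
        return ip in addrs
    except (socket.herror, socket.gaierror):
        return False
```

A spoofed user agent passes no part of this: the attacker's IP either has no PTR record, or its hostname resolves outside the trusted domains.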

The more search traffic a site has, the harder it is to tank its behavioral factors. Once search traffic grows to ~300,000 unique users per month, we enable the Antibot.Cloud check in the site config only for traffic from social networks, unpopular search engines, and direct visits:

The effect of the firewall is clearly visible. The site was regularly visited by ~800 bots per day via direct visits; the day after the firewall was installed, the bot share dropped to zero, leaving only a few dozen direct visits.

The anti-bot does not get in the way of our own bots boosting PF positively. Before entering the online store, each of our bots first visits a secret URL with a PHP script, where it receives a secret cookie key.

The bots then perform the necessary actions on the site and significantly improve its behavioral factors.

When conducting SEO work, one of the basic tasks is parsing and analyzing competitors. Many technologies and services for analyzing texts, structure, semantics, links, and more are built on it.

With online stores, competitors often parse prices and undercut them slightly. As a result, the web server takes pointless load, conversion drops, and other online stores reap the benefits of our active SEO work.

To make these processes fully controllable and make life as difficult as possible for competitors, we do the following:

Setting up white cloaking.

One part of promoting an online store is writing text descriptions for product categories. Template-based generation is enough; we generate the descriptions and upload them to the site.

We will not show the text descriptions and a few other product-catalog elements to real visitors, but search robots will see them:

<?php if ($ab_config['whitebot'] == 1) { ?> content <?php } ?>

As a result, the product catalog will look extremely minimalistic and convenient for users, which greatly affects the conversion.

Meanwhile, competitors do not see all the subtleties of the optimization and miss the tricks and techniques. We gain a competitive advantage.

Search robots, for their part, will assume that users not only browse the product catalog and cards but also carefully read the text content.

Texts play a much smaller role in online stores than on content projects and service sites, where the same technique also works perfectly.

Setting up content substitution.

Protection against parsing works at both the Cloudflare level and the Antibot level. Of course, a human can still pass all the levels of protection and reach the site being analyzed. Blocking text selection via CSS, the right mouse button, and keyboard shortcuts via JS does little to stop theft and content analysis by someone who really wants to copy it.

It is for such people that we set up automatic substitution of visually similar Russian and English letters.
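A minimal sketch of such a substitution in Python (the mapping direction, Latin letters to Cyrillic look-alikes, and the letter set are illustrative):

```python
# Map of Latin letters to visually identical Cyrillic homoglyphs.
HOMOGLYPHS = str.maketrans({
    "a": "а", "c": "с", "e": "е", "o": "о",
    "p": "р", "x": "х", "y": "у",
    "A": "А", "B": "В", "C": "С", "E": "Е", "H": "Н",
    "K": "К", "M": "М", "O": "О", "P": "Р", "T": "Т", "X": "Х",
})

def poison_text(text: str) -> str:
    """Replace Latin letters with Cyrillic look-alikes: the page looks
    the same to a human, but copied text is garbage for plagiarism and
    text-analysis tools."""
    return text.translate(HOMOGLYPHS)

sample = poison_text("Opel Corsa")
print(sample)                  # looks like "Opel Corsa" on screen
print(sample == "Opel Corsa")  # False -- the underlying bytes differ
```

The copied text fails dictionary lookups, uniqueness checks, and search matching, while rendering identically in the browser.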
