This article begins a series on Bot Protection. The series discusses the traffic conditions being experienced by our customers in 2019, the current state-of-the-art in bot management, and several areas of current research by Reblaze scientists into new capabilities and human/bot identification algorithms.
This article discusses the following topics:
- The composition of web traffic today: humans, good bots, and bad bots.
- The consequences of inadequate bot protection.
- Threat categories: the types of attacks for which hostile bots are used.
Later articles in this series will discuss:
- Threat landscape: an in-depth discussion of the attacks waged by bots.
- Bot threats per vertical: the most significant problems caused by bots in different industries.
- Traditional methods of bot detection, and why they are no longer reliable.
- What’s working today for bot mitigation.
- The future of bot protection: the frontiers of current research in bot detection, and the new techniques on the horizon.
Web Traffic Composition Today
Robust bot management is essential for web security today. On average across verticals, only 38 percent of incoming requests originate from human users. The remaining 62 percent come from automated sources.
Source: traffic processed by Reblaze (more than four billion HTTP/S requests per day).
Not all bots are harmful. Some (such as search engine spiders) are often welcome, and some (such as content aggregators) are not overtly hostile. However, almost 40 percent of incoming requests come from malicious bots.
Consequences of Inadequate Protection
Threat actors use bots to wage a variety of web attacks. In fact, almost all attacks involve bots in one way or another. Hostile bots that are not identified and blocked can create a wide range of problems for organizations with significant web assets (sites and web applications, microservices, and mobile/native API endpoints). Some of the potential problems are:
- Site downtime when DDoS exhausts app/server resources.
- Data theft from scraping.
- Site breaches via vulnerability discovery bots.
- Loss of revenue from inventory hoarding.
- Bad business decisions from skewed analytics.
- Account theft from credential stuffing.
- Merchant account problems from credit card testing and fraud.
- Degraded customer experience from loss of bandwidth and resources.
Threat Categories
Hostile bots are used in a wide variety of web attacks. The most common categories are:
- DDoS (Distributed Denial of Service)
- Credential attacks
- Vulnerability scans
- Credit card fraud
- Gift card fraud, including loyalty account abuse
- Scraping and data theft
- Inventory hoarding
- Advertising abuse
- Checkout/application abuse
These are discussed in more detail in the next article in this series: Bot Protection in 2019, Part 2.
This article is part 1 of a six-part series. You can download the complete report here: 2019 State of Bot Protection.