Bot Management in the Cloud

Most cyberattacks today are waged by bots. Threat actors use bots in a wide variety of ways, including DDoS, ATO (account takeover) attacks, inventory hoarding, input fuzzing, vulnerability scans, and the list goes on. 

Some of these attacks are common enough that many security solutions include dedicated modules for them (such as DDoS protection and ATO prevention). To defend against the rest, it’s crucial to have robust bot management.

The top-tier CSPs (cloud service providers) offer a number of security capabilities. In previous articles, we’ve surveyed some of them, such as cloud DDoS services, API security features, cloud rate limiting tools, and others. In this article, we’ll look at the three major CSPs (AWS, GCP, and Microsoft Azure) and discuss:

  • Their capabilities for bot management
  • The individual limitations of each 
  • The drawbacks found across all three
  • How to compensate for their weaknesses and get effective protection against hostile bots

Bot management on AWS

Amazon Web Services includes AWS WAF, a web application firewall service that defends AWS infrastructure against many common web attacks. 

It includes AWS WAF Bot Control, which adds defenses against hostile bots. By default, this service allows common bots (such as search engine crawlers) to access your infrastructure while denying those that are potentially more dangerous. 

To add or modify policies, you can edit the Bot Control managed rule group and create custom AWS WAF rules. The Bot Control dashboard allows you to monitor your traffic and observe the performance of the defined rules. 

If you enable CloudWatch logs, you can analyze how Bot Control evaluates and applies your rules to incoming traffic. Additionally, Bot Control comes with: 

  • Scope-down statements: These limit the number of requests that Bot Control processes. The scope-down statement is evaluated first, and only requests that match it are evaluated against the rules in the rule group. 
  • Labels and label-matching rules: These allow you to customize the handling of requests flagged by the Bot Control rule group. 
  • Custom requests and responses: You can add custom headers to requests you want to allow, and define custom responses to others.
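To make the scope-down mechanism concrete, here is a hedged sketch of a Bot Control managed rule group reference, written as the Python dict that boto3's `wafv2` client accepts. The URI path (`/api/`) and metric name are illustrative choices, not values from the article.

```python
# Sketch of an AWS WAF rule that references the Bot Control managed rule
# group. The scope-down statement restricts Bot Control to requests whose
# path starts with /api/ (an illustrative choice), so other traffic is
# never processed (or billed) by the rule group.
bot_control_rule = {
    "Name": "AWS-AWSManagedRulesBotControlRuleSet",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            # Evaluated first; only matching requests reach the rule group.
            "ScopeDownStatement": {
                "ByteMatchStatement": {
                    "SearchString": b"/api/",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }
            },
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BotControl",
    },
}
```

A structure like this would be passed in the `Rules` list of a `create_web_acl` or `update_web_acl` call; label-matching rules would then be added as additional entries that act on the labels Bot Control applies.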

AWS doesn’t limit the number of rules you can define.

Note that AWS WAF scales automatically to meet demand, as is common for AWS services. So, unless you have AWS Budgets configured, be aware that unusual traffic volumes could create unexpectedly high costs.


AWS Bot Control has a number of significant limitations. First, it doesn’t include Account Takeover prevention (even though ATO attacks are almost always waged by bots). For this, organizations must also subscribe to AWS WAF Fraud Control.

Second, it can be expensive. Costs include:

  • Fixed subscription costs ($10.00/month for Bot Control and $10.00/month for Account Takeover Prevention)
  • Variable costs depending on configuration ($5.00/month for each web ACL, plus $1.00/month for each rule that you create per web ACL)
  • Request fees for the WAF ($0.60 per 1 million requests processed by the WAF)
  • Request fees for Bot Control (after the first 10 million requests per month, every additional million costs $1.00)
  • Analysis fees for Captcha ($0.40 per thousand challenge attempts analyzed)
  • Analysis fees for Fraud Control ($1.00 per thousand login attempts analyzed)
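To see how these fees add up, here is a back-of-the-envelope estimate using the list prices above. The traffic figures in the example call are made up for illustration only.

```python
# Rough monthly AWS WAF + Bot Control cost estimate, using the list
# prices quoted above. Traffic volumes are hypothetical.
def monthly_cost(web_acls, rules_per_acl, waf_requests,
                 bot_control_requests, captcha_attempts, login_attempts):
    cost = 10.00 + 10.00                      # Bot Control + ATO subscriptions
    cost += 5.00 * web_acls                   # per web ACL
    cost += 1.00 * web_acls * rules_per_acl   # per rule, per web ACL
    cost += 0.60 * waf_requests / 1_000_000   # WAF request fee
    billable = max(0, bot_control_requests - 10_000_000)  # first 10M free
    cost += 1.00 * billable / 1_000_000       # Bot Control request fee
    cost += 0.40 * captcha_attempts / 1_000   # Captcha analysis fee
    cost += 1.00 * login_attempts / 1_000     # Fraud Control analysis fee
    return round(cost, 2)

# e.g. one web ACL with 5 custom rules and 50M requests/month:
print(monthly_cost(1, 5, 50_000_000, 50_000_000, 20_000, 100_000))  # 208.0
```

Even at this modest scale, per-request and per-analysis fees dominate the fixed subscriptions.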

Third, you must write and maintain many security rules yourself. This can be difficult to do correctly. 

Fourth, the Bot Control managed rule group relies heavily on IP addresses to identify bots; therefore, it can have difficulty identifying threats when attackers attempt to evade detection. This can often be done with simple (and common) tactics such as rotating through a variety of IP addresses.

Finally, AWS Bot Control relies on Captcha challenges to identify humans. This is problematic for multiple reasons. For example, it means that AWS Bot Control can’t provide good protection for APIs, since Captchas can’t be applied to non-browser traffic. Worse, Captchas aren’t effective against bots anyway, as we’ll discuss later.

The last three problems above are not unique to AWS. We’ll return to this point below, after we look at GCP and Azure. 

Bot management on Google Cloud Platform (GCP)

Google’s most prominent security product is Cloud Armor, which offers custom rule-creation capabilities. However, GCP also includes a specific security service for bot management.

Google’s reCAPTCHA Enterprise is claimed to use “advanced risk analysis techniques” to differentiate humans and automated clients. The service scales automatically to meet demand, but is also aware of swarm attacks and won’t overscale when faced with these.

The “bot test” that reCAPTCHA Enterprise most commonly uses is a manual challenge; this can vary from clicking a checkbox to selecting images of a particular object (such as trains, traffic lights, or sidewalks). The service returns an encrypted token with attributes representing the risk associated with the request, scored anywhere from 0.0 (highly risky and possibly fraudulent) to 1.0 (low risk and most likely legitimate). 

You can choose the disposition of each request based on its risk (allow, block, redirect, or rate-limit; you can also add custom headers for later processing). If you choose to allow it, reCAPTCHA Enterprise automatically attaches an exemption cookie to every subsequent request from this traffic source, so that human users can bypass the assessment.

To use this service, you need to enable the reCAPTCHA Enterprise API in your infrastructure, create a site key, and attach it to a Cloud Armor security policy. Then, all the rules inside the policy will apply to incoming requests, and you can choose to redirect to a reCAPTCHA Enterprise assessment whenever you like. You can find more information on how to do this, as well as other capabilities, in Google’s documentation.


Although reCAPTCHA Enterprise sounds straightforward to use, many organizations will find that it doesn’t meet their needs. First, there are multiple ways to integrate this service into your infrastructure. The manual challenge, while the simplest to set up, doesn’t provide the highest detection accuracy. For that, you’ll have to choose between action-tokens and session-tokens, which protect individual user actions (like checkout) or entire sessions, respectively. You can also combine these three types of detection however you need, but that only increases the complexity of the solution.

The second problem is that CAPTCHAs degrade the user experience. Although Google has tried to improve its ability to automatically detect bots while reducing interaction with humans, reCAPTCHA Enterprise still frequently interrupts user sessions with puzzle-solving challenges. 

The third problem is that, as mentioned previously for AWS Bot Control’s Captchas, reCAPTCHA Enterprise cannot provide good protection for APIs.

The fourth, and worst, problem (for both Google and AWS) is that CAPTCHAs are an obsolete approach to bot detection. Organizations should not rely on them for filtering out hostile bots. For more about this, see the discussion of “Common Weaknesses” below.

Bot management on Microsoft Azure

Microsoft Azure does not offer a dedicated bot management service. Instead, Azure admins must add bot-specific rulesets to their Azure Application Gateway WAF.

This is done by enabling Azure’s Bot Protection ruleset, which blocks malicious IP addresses included in Microsoft’s Threat Intelligence feed (which is updated by Microsoft multiple times per day). This ruleset can be enabled alongside an OWASP ruleset, to maximize protection against other types of attacks such as SQL injection or cross-site scripting (XSS). 

As we’ll discuss later, an IP blacklist will block some malicious bots, but not all of them. Therefore, many Azure admins also choose to add custom rules to their Azure WAFs.

Custom rules have higher priority than the pre-configured managed rules, so they are checked first. You can match incoming requests against multiple variables and decide whether to allow, block, or log each one. These variables range from IP addresses and ranges to request headers and cookies. You can also perform operations on their values, such as converting to lowercase or encoding/decoding, all within the same rule. Microsoft provides a few examples to get you started.
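A toy model of this match-transform-act flow is sketched below. The field names and the blocked CIDR are illustrative assumptions, not Azure's actual rule schema; the point is the shape of the logic, including the lowercase transform applied before matching.

```python
import ipaddress

# Toy model of an Azure WAF custom rule: match request variables,
# apply a transform, then act. Field names and values are illustrative,
# not Azure's schema.
def evaluate(request, blocked_cidr="198.51.100.0/24"):
    # Condition 1: source IP falls inside a blocked range.
    if ipaddress.ip_address(request["ip"]) in ipaddress.ip_network(blocked_cidr):
        return "block"
    # Condition 2: lowercase the User-Agent (mirroring Azure's lowercase
    # transform), then check for a known automation marker.
    if "headlesschrome" in request["user_agent"].lower():
        return "block"
    return "allow"

print(evaluate({"ip": "203.0.113.9",
                "user_agent": "Mozilla/5.0 HeadlessChrome"}))  # block
```

Note that "allow" and "block" are the only terminal outcomes here, which previews the redirect limitation discussed next.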


Azure WAF has a number of drawbacks. First, you don’t have the option to redirect requests. For example, if a custom rule cannot confirm that a request is legitimate, there is no way to fall back to another method of bot detection; the safest option is simply to block the request. This can result in a high number of false positives. 

The second downside is pricing: Microsoft’s cloud platform tends to be expensive, with both fixed and variable costs. Further, most organizations will require multiple WAF instances, since each Application Gateway supports a limited number of HTTP listeners. 

Third, and most importantly, the Bot Protection ruleset is IP-based, and therefore, it does not provide full protection against hostile bots. This means that organizations must create their own custom rules, but these too will usually be inadequate. See below for further discussion of these points.

Common Weaknesses for CSP Bot Management Tools

As shown above, each top-tier cloud platform has unique issues in its bot management capabilities. They also share several common weaknesses. 

First, both AWS and GCP rely on CAPTCHAs. As discussed above, these have several problems:

  • They can harm the user experience. 
  • CAPTCHA challenges cannot be applied to API traffic, so the native AWS and GCP tools cannot provide full protection for APIs.
  • The worst problem is that they are obsolete and ineffective. Threat actors can easily solve CAPTCHAs today. In fact, there are dozens of automated and inexpensive services available for this, such as 2Captcha (currently $2.99 per 1,000 CAPTCHAs solved). 

Next, when identifying potential threats, the top-tier CSPs tend to rely heavily on IP addresses. We previously saw this in our article on cloud platform rate-limiting capabilities; now we see it again for bot detection. AWS and Azure especially rely on IP blacklists to identify hostile bots. 

While an IP blacklist will be able to identify many malicious traffic sources, it cannot identify them all. Obviously, a blacklist cannot contain a fully comprehensive list of all threat actors operating at any given moment, especially since hackers constantly switch IPs specifically to evade such blacklisting. (Reblaze often sees attacks where each request comes in on a unique IP address.)
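The evasion described above is easy to demonstrate. In this toy simulation (all addresses are from documentation ranges and the numbers are invented), an attacker who draws a fresh address per request from a pool outside the blacklist is never matched.

```python
import random

# Toy illustration of why a static IP blacklist misses rotating attackers:
# each request arrives from a fresh, unlisted address, so the list never
# matches. All addresses are fictional (documentation ranges).
random.seed(0)
blacklist = {f"203.0.113.{i}" for i in range(50)}   # known-bad IPs

def rotating_attack(n):
    # The attacker draws a new source address per request
    # from a pool the blacklist has never seen.
    return [f"198.51.100.{random.randrange(256)}" for _ in range(n)]

blocked = sum(ip in blacklist for ip in rotating_attack(1000))
print(blocked)  # 0 — every request evades the blacklist
```

Real attacks achieve the same effect with botnets, proxy networks, and residential IP pools, which is why IP reputation alone is insufficient.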

All three CSPs attempt to mitigate these problems by offering the ability to create custom security rules. However, although custom rules can be useful, few organizations are able to achieve a complete and robust security posture with them. 

The modern threat environment is complex and constantly changing. Without a large, dedicated security team, most organizations will find it difficult to construct and maintain a collection of rules that will provide full and effective protection for their sites, web applications, and APIs.

How to Get Effective Protection Against Hostile Bots

Reblaze offers comprehensive, cloud-native WAAP (Web Application and API Protection). Along with a next-gen WAF and multi-layer DDoS protection, Reblaze includes robust bot management.

The platform uses a multivariate process for blocking hostile bots. Along with standard methods such as ACLs, geo-based filtering, signature recognition, blacklists, and threat intelligence feeds, Reblaze also uses a variety of other technologies to exclude unwanted automated traffic. Incoming traffic is subjected to a series of challenges for environmental verification, which detects requestors using emulators and headless browsers. Behavioral profiling based on Machine Learning blocks traffic sources exhibiting anomalous behavior. Advanced rate limiting, flow control, and other modules provide additional protection. And unlike CAPTCHA-based systems, all of Reblaze’s bot mitigation is completely invisible to legitimate human users.

In contrast to many other solutions, Reblaze offers effective bot protection for APIs. It even includes a client-side SDK for protecting mobile application traffic.

To learn more about Reblaze and see how it can protect your sites, applications, and APIs, you can schedule a demo.
