The past few months have seen a rise in the number of sophisticated distributed denial-of-service (DDoS) attack vectors, including attacks targeting the authoritative DNS servers for domain names, attacks launched from botnets built out of hijacked virtual machines, and HTTP application-layer attacks with highly randomized fingerprints.
“The second quarter of 2023 was characterized by thought-out, tailored and persistent waves of DDoS attack campaigns on various fronts,” web security company Cloudflare said in a new report. These included DDoS attacks launched by pro-Russian hacktivist groups like REvil, Killnet, and Anonymous Sudan against Western websites; a large increase in targeted DNS attacks; UDP amplification attacks leveraging a vulnerability in Mitel MiCollab business phone systems; and an alarming escalation in HTTP attack sophistication, the company said.
Carefully engineered HTTP attacks
DDoS attacks fall into two main categories: network-layer attacks, which target core data transmission protocols at layers 3 and 4 of the OSI model such as TCP, UDP, ICMP, and IGMP; and application-layer attacks, which target the protocols applications use to exchange messages with their users, the most common of which is HTTP. According to Cloudflare, the second quarter of this year saw a 14% decrease in network-layer DDoS attacks, but a 15% increase in application-layer attacks.
The goal of HTTP attacks is to saturate the computing resources available to a web application or web API and impact their ability to answer requests from legitimate users by keeping them busy answering rogue requests initiated by bots. That’s why the most important attribute for judging the severity of HTTP attacks is their requests per second (rps) rate rather than the volume of data transmitted (Gbps), like in the case of network-layer attacks that seek to saturate the target’s available bandwidth.
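To see why rps is the more telling metric, consider a rough back-of-envelope calculation (the figures below are invented for illustration and don’t come from Cloudflare’s report): small HTTP requests can produce a crushing request rate while consuming very little bandwidth.

```python
# Hypothetical figures, chosen only to illustrate the rps-vs-bandwidth gap.
bots = 20_000                    # compromised clients in the botnet
requests_per_bot = 50            # HTTP requests per second per bot
request_size_bytes = 500         # a small GET request including headers

total_rps = bots * requests_per_bot                        # 1,000,000 requests/second
total_gbps = total_rps * request_size_bytes * 8 / 1e9      # ~4 Gbps of traffic

print(f"{total_rps:,} rps at only ~{total_gbps:.1f} Gbps")
```

A million requests per second can exhaust application servers and backend databases long before a roughly 4 Gbps stream would trouble the network links in front of them.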
Mitigating HTTP DDoS attacks requires a combination of techniques to differentiate between legitimate users and bots. For example, if an application experiences an unusually high rps rate, a DDoS mitigation provider might temporarily enforce CAPTCHA checks before allowing requests to reach the application. These checks can also be triggered if the user-agent reported by the client is unusual and doesn’t match typical browsers, or if the request headers as a whole have a fingerprint matching a known botnet.
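A minimal sketch of that kind of heuristic, assuming a hypothetical in-process mitigation layer; the thresholds, the fingerprint list, and the challenge decision below are illustrative and are not Cloudflare’s actual logic.

```python
import time
from collections import defaultdict, deque

RPS_THRESHOLD = 200  # illustrative per-client requests/second limit before a challenge
KNOWN_BOTNET_FINGERPRINTS = {"<placeholder-ja3-hash>"}  # fingerprints of known botnets
BROWSER_UA_MARKERS = ("Mozilla/", "Chrome/", "Safari/", "Firefox/")

recent_requests = defaultdict(deque)  # client IP -> timestamps of its recent requests


def should_challenge(client_ip, user_agent, fingerprint, now=None):
    """Return True if the request should get a CAPTCHA/JS challenge instead of the app."""
    now = now if now is not None else time.time()

    window = recent_requests[client_ip]
    window.append(now)
    while window and now - window[0] > 1.0:  # keep a one-second sliding window
        window.popleft()

    if len(window) > RPS_THRESHOLD:          # unusually high per-client request rate
        return True
    if not any(marker in user_agent for marker in BROWSER_UA_MARKERS):
        return True                          # user-agent doesn't look like a typical browser
    if fingerprint in KNOWN_BOTNET_FINGERPRINTS:
        return True                          # request fingerprint matches a known botnet
    return False
```

Real mitigation services weigh many more signals and score them probabilistically, but the basic idea is the same: challenge traffic that deviates from how ordinary browsers behave.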
“We’ve observed an alarming uptick in highly randomized and sophisticated HTTP DDoS attacks over the past few months,” Cloudflare said. “It appears as though the threat actors behind these attacks have deliberately engineered the attacks to try and overcome mitigation systems by adeptly imitating browser behavior very accurately, in some cases, by introducing a high degree of randomization on various properties such as user agents and JA3 fingerprints to name a few.”
Additionally, in a number of cases attackers have kept their rps rates intentionally low to avoid triggering detection and to blend in with legitimate traffic. Cloudflare warns that such techniques have been observed in the past in campaigns launched by state-sponsored attackers but are now being adopted by cybercriminals.
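One way defenders can reason about such blended-in traffic, sketched below under the purely illustrative assumption that per-source request metadata is collected: a legitimate client presents one stable user-agent and TLS fingerprint, so a single source that cycles through many distinct combinations stands out even at a low request rate.

```python
from collections import defaultdict

# source IP -> set of (user_agent, ja3_fingerprint) pairs observed in the current window
identities_per_source = defaultdict(set)

DIVERSITY_THRESHOLD = 10  # illustrative cut-off, not a recommended production value


def record_request(source_ip, user_agent, ja3_fingerprint):
    identities_per_source[source_ip].add((user_agent, ja3_fingerprint))


def looks_randomized(source_ip):
    """A real browser keeps one identity; many identities from one source suggests randomization."""
    return len(identities_per_source[source_ip]) > DIVERSITY_THRESHOLD
```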
DNS laundering attacks
The internet’s domain name system (DNS), which is responsible for translating domain names into IP addresses, has also been a frequent target for DDoS attacks. In fact, over the past quarter more than 32% of all DDoS attacks observed and mitigated by Cloudflare were carried out over the DNS protocol.
There are two types of DNS servers: authoritative DNS servers that hold the collection of records for a domain name and all its subdomains (known as a DNS zone) and recursive DNS resolvers, which take DNS queries from end-users, look up which is the authoritative server for the requested domain, query it and return the response back to the requesting user.
To make this process more efficient, DNS resolvers cache the records they obtain from authoritative servers for a period of time so they don’t have to query those servers repeatedly for the same information. The time before cached records expire (their time to live, or TTL) is configurable, and administrators must strike a balance: a long expiry time means the resolver may keep serving outdated information after records change on the authoritative server, negatively impacting users who rely on it, while a short one forces the resolver to query the authoritative server far more often.
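A toy sketch of that trade-off (hypothetical code, not how any particular resolver is implemented): cached answers are served until their TTL expires, so a longer TTL means fewer queries reach the authoritative server, but also a longer window in which a changed record is served stale.

```python
import time


class ToyResolverCache:
    """Minimal illustration of TTL-based caching in a recursive resolver."""

    def __init__(self):
        self._cache = {}  # domain name -> (record, expiry timestamp)

    def get(self, name):
        entry = self._cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]   # cache hit: no need to query the authoritative server
        return None           # miss or expired entry: the authoritative server must be asked

    def put(self, name, record, ttl_seconds):
        self._cache[name] = (record, time.time() + ttl_seconds)


cache = ToyResolverCache()
cache.put("www.example.com", "192.0.2.10", ttl_seconds=300)  # five-minute TTL
print(cache.get("www.example.com"))  # served from cache until the TTL runs out
```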
Both DNS resolvers and authoritative DNS servers are critical, and taking either down causes disruption. If a resolver is knocked offline, computers that rely on it might be unable to resolve domain names. If authoritative DNS servers are disrupted, resolvers can no longer obtain fresh information once their cached records expire, again leading to outages.
Organizations like Google and Cloudflare run publicly available DNS resolvers, such as 8.8.8.8 and 1.1.1.1, that have become popular with end users and application developers and are hard for attackers to disrupt because of the large and distributed infrastructure behind them. That’s why attackers prefer to target the authoritative DNS servers for the domains they want to impact: doing so affects every user trying to reach those domains, regardless of which DNS resolver they rely on. It’s also not uncommon for organizations to operate their own authoritative DNS servers for the domains they own.
One common technique for attacking authoritative DNS servers that Cloudflare has observed over the past quarter is dubbed DNS laundering. This tactic involves using botnets to query reputable recursive DNS resolvers like those provided by Google or Cloudflare for non-existent subdomains under a target domain. The subdomain prefixes are continuously randomized and not used more than once to ensure that the resolvers don’t have cached records for them.
From the resolver’s perspective, every query is for a subdomain it has never seen before, so it cannot answer from its cache and must forward the query to the authoritative DNS server for the domain, flooding that server with requests until it can no longer serve legitimate queries or crashes.
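From the authoritative server’s side, the pattern is at least observable: nearly every query carries a never-before-seen, random-looking label under the zone, and nearly every answer is NXDOMAIN. The sketch below, with illustrative field names and thresholds, shows how an operator might flag that signature in query logs.

```python
from collections import defaultdict

labels_seen = defaultdict(set)      # zone -> unique subdomain prefixes seen in the window
query_counts = defaultdict(int)     # zone -> total queries in the window
nxdomain_counts = defaultdict(int)  # zone -> queries answered with NXDOMAIN


def record_query(zone, qname, answered_nxdomain):
    """Track label novelty and NXDOMAIN ratio for one zone, e.g. zone='example.com'."""
    label = qname[: -len(zone)].rstrip(".")  # "x7f2q" from "x7f2q.example.com"
    labels_seen[zone].add(label)
    query_counts[zone] += 1
    if answered_nxdomain:
        nxdomain_counts[zone] += 1


def looks_like_dns_laundering(zone, min_queries=1000):
    """Illustrative heuristic: almost every query is unique and almost every answer is NXDOMAIN."""
    total = query_counts[zone]
    if total < min_queries:
        return False
    unique_ratio = len(labels_seen[zone]) / total
    nxdomain_ratio = nxdomain_counts[zone] / total
    return unique_ratio > 0.9 and nxdomain_ratio > 0.9
```

Flagging the pattern is the easier part; as Cloudflare notes below, acting on it is harder because the queries arrive via resolvers that also carry legitimate traffic.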
“From the protection point of view, the DNS administrators can’t block the attack source because the source includes reputable recursive DNS servers like Google’s 8.8.8.8 and Cloudflare’s 1.1.1.1,” Cloudflare said. “The administrators also cannot block all queries to the attacked domain because it is a valid domain that they want to preserve access to for legitimate queries.”
The rise of virtual machine botnets
In addition to traffic-hiding techniques like reflection and laundering, attackers are also using botnets made up of virtual machines (VMs) and virtual private server (VPS) instances that have either been compromised or acquired. The main difference from traditional botnets, which are largely made up of hijacked internet-of-things (IoT) devices such as IP cameras, network-attached storage (NAS) boxes, modems, and routers, is that virtual servers have access to far greater bandwidth and computational resources.
According to Cloudflare, such virtual machines can be up to 5,000 times more powerful than IoT bots, allowing attackers to launch “hyper-volumetric” attacks with a much smaller number of bots. VM botnets have executed some of the largest DDoS attacks recorded to date, including an attack earlier this year that reached 71 million requests per second. In addition, since these VMs run on the infrastructure of major cloud computing providers, it’s not easy to block their IP addresses en masse without potentially disrupting legitimate resources.
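The “5,000 times” figure is easier to appreciate with some arithmetic. The per-node rate below is an invented assumption purely for illustration; only the 5,000x ratio and the 71 million rps peak come from the report, so the absolute bot counts should not be read as real-world figures.

```python
# Invented assumption: a typical hijacked IoT device sustains ~70 HTTP requests/second.
iot_rps_per_node = 70
vm_rps_per_node = iot_rps_per_node * 5_000   # "up to 5,000 times more powerful" per the report

peak_attack_rps = 71_000_000                 # largest attack cited in the report

iot_bots_needed = peak_attack_rps / iot_rps_per_node  # on the order of a million devices
vm_bots_needed = peak_attack_rps / vm_rps_per_node    # only a few hundred virtual machines

print(f"IoT bots needed: ~{iot_bots_needed:,.0f}; VM bots needed: ~{vm_bots_needed:,.0f}")
```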
“While we already enjoy a fruitful alliance with the cybersecurity community in countering botnets when we identify large-scale attacks, our goal is to streamline and automate this process further,” Cloudflare said. “We extend an invitation to cloud computing providers, hosting providers, and other general service providers to join Cloudflare’s free Botnet Threat Feed. This would provide visibility into attacks originating within their networks, contributing to our collective efforts to dismantle botnets.”