> A total of 34,500 ports were targeted, indicating the thoroughness and well-engineered nature of the attack.
How is that more complicated than a for-loop?
monster_truck 8 days ago [-]
You can't just spray every port blindly if you're trying to cause maximum disruption; there is nuance to it.
lolinder 8 days ago [-]
Right. So why does the fact that they targeted 34,500 ports show it was a well-engineered attack? By itself it's just evidence that they know how to iterate over ports. Coupled with the data size (7.3Tbps) we know they had an enormous botnet. None of this points to a well-engineered attack, it just means that lousy IoT has made botnets incredibly cheap.
A well-engineered attack would not draw headlines for its scale because it would take down its target without breaking any records.
motorest 8 days ago [-]
> A well-engineered attack would not draw headlines for its scale because it would take down its target without breaking any records.
You don't hear much about DDoS attacks that are comparable in size or that actually bring down their targets. How do you explain why this one made the news in spite of not having met your arbitrary and personal bar?
lolinder 8 days ago [-]
Like I said: it broke records for data throughput. It doesn't hurt that Cloudflare has an interest in publicizing the size of the DDoS attacks it fights off.
> in spite of not having met your arbitrary and personal bar?
I'm not sure what you mean by this. I didn't establish any sort of bar for what sorts of DDoS should get headlines, I'm just agreeing with OP that that line in the article doesn't make any sense. There may be other reasons to believe this attack was well-engineered but the article doesn't get into them.
therealpygon 7 days ago [-]
Yep. The number of ports is a useless metric to indicate sophistication of an attack. It’s like saying someone is a genius because they can write the numbers 1 through 10 on a sheet of paper, which is about the equivalent complexity.
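To make the same point concretely, here's a minimal Python sketch of what "targeting 34,500 ports" amounts to. The payload is deliberately left out; the point is only that the port selection itself is trivial:

    import random

    # Picking 34,500 of the ~65,535 possible ports is a single line of code,
    # not evidence of a well-engineered attack.
    targeted_ports = random.sample(range(1, 65536), 34_500)

    for port in targeted_ports:
        pass  # whatever is sent per port, the "targeting" itself is this trivial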
rob_c 8 days ago [-]
[flagged]
ukuina 8 days ago [-]
Because it's a distributed for loop?
lolinder 8 days ago [-]
Not necessarily. It could be one for loop running on tens of thousands of compromised IoT devices, with the only thing distributed being the command that starts the loops.
saulpw 8 days ago [-]
Sounds like you've never managed tens of thousands of nodes in a distributed system. It's not trivial.
luckylion 8 days ago [-]
What would make a C&C server for a botnet hard? It's not like you need to carefully coordinate all those clients to hit precise timings; you just tell them who to target and let them rip, don't you?
PcChip 8 days ago [-]
Nothing. I did it with IRC servers in the late 90s when I was a dumb kid in high school
lolinder 7 days ago [-]
Coordinating a botnet to launch a DDoS is commodity software at this point. You could argue that the engineering that went into the coordination software is good, which may or may not be true, but simply launching a botnet is well within the capabilities of a script kiddie and not something that shows sophistication on the part of the attacker.
jjtheblunt 8 days ago [-]
(elixir / otp says "hold my beer")
blitq 8 days ago [-]
It’s not :)
ksec 8 days ago [-]
If I don't want my users to hit the Cloudflare captcha, or if, for example, the captcha doesn't work on my Safari 18.5 running on an OpenCore Patcher MacBook 2015, what other options have I got?
VladVladikoff 8 days ago [-]
Most websites don’t need DDOS protection.
Many websites use Cloudflare just to block basic bot vulnerability scanning. You could block this type of traffic with other methods: JA3/JA4 fingerprinting, IP-to-ASN lookups and ASN filtering, etc.
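For the IP-to-ASN idea, a rough Python sketch; the prefix table and ASN numbers below are made up, and in practice you'd load them from a routing or GeoIP dataset rather than a hand-written dict:

    import ipaddress

    # Hypothetical prefix -> ASN table (illustrative values only).
    PREFIX_TO_ASN = {
        ipaddress.ip_network("192.0.2.0/24"): 64501,     # e.g. a VPS/hosting network
        ipaddress.ip_network("198.51.100.0/24"): 64502,  # e.g. a residential ISP
    }
    BLOCKED_ASNS = {64501}  # ASNs you never expect legitimate users to come from

    def asn_for(ip: str):
        addr = ipaddress.ip_address(ip)
        for net, asn in PREFIX_TO_ASN.items():
            if addr in net:
                return asn
        return None

    def allow_request(client_ip: str) -> bool:
        # Drop requests sourced from ASNs on the blocklist.
        return asn_for(client_ip) not in BLOCKED_ASNS

    assert allow_request("198.51.100.25") is True
    assert allow_request("192.0.2.10") is False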
esseph 8 days ago [-]
Your first line is wrong.
While it may not impact your site, it does impact your hosting provider. As their costs go up, your costs go up. Anything on the Internet at this point needs DDoS / scraping protection. It may not drop your service, but your ISP or upstreams may blackhole your route.
The "old web" (current web) was largely based on an open exchange of information.
The "new web", post AI bot scraping, is taking its place. Websites are getting paywalls. Advertising revenue is plummeting. Hosting providers are getting decimated by the massive shift in bandwidth demand and impact to systems scraped by the bots.
VladVladikoff 8 days ago [-]
I guess my products fall into a niche that doesn’t seem to attract AI crawlers. I’ve seen only a few and they haven’t been too aggressive. I mean they ignore typical crawl rate limits defined in robots.txt but account for maybe only 1-2% of my overall traffic.
esseph 7 days ago [-]
That's crazy! I've been watching the scraping go up exponentially for months now.
> DDoS and AI are mostly unrelated. Sure, AI companies are running low-quality scrapers, but they don't cause nearly as much traffic as a DDoS. They might cause as much CPU load as a DDoS, which is an application-level problem.
They are creating a network and application load that, in effect, is a DDoS. Tens or hundreds of thousands of hosts hitting the same domain at once.
immibis 6 days ago [-]
That's a bug in Gitea. I suffered the same bug and worked around it in Gitea. Non-malicious traffic can also break a Gitea server.
esseph 5 days ago [-]
Alright, let me go get a dozen posts on the internet by network and system engineers working at the biggest companies in the world to tell you the same thing that you apparently haven't heard yet - they're all running into the same thing.
We have about 400 domains at $dayjob and a decent sized network.
All these domains, even silly ones with a handful of tabs and no dynamic content, are getting absolutely brutalized with traffic non-stop.
Geoblocking did not help.
So much of the "AI scraping" is coming from compromised customer internet connections. Compromised CPE equipment, desktops, phones, etc.
It is a fucking nightmare.
immibis 4 days ago [-]
What is the evidence that has anything to do with AI?
Seems to be very circumstantial at best: we're getting AI scrapers, and at the same time DDoS scrapers, so the DDoS scrapers must also be AI.
immibis 6 days ago [-]
"As their costs go up, your costs go up" - great, let them charge for bandwidth then. Actually, incoming bandwidth to a hosting provider is free, because basically every interconnection on the Internet is billed based on the dominant direction of traffic. It's really important that you do not preemptively make assumptions about other people's business models in a way that impacts your own. If they have a problem with it, let them tell you.
Capitalism is a symmetric game (played asymmetrically). If your hosting provider thought something they did would increase your costs, do you think they'd refrain from doing it?
esseph 5 days ago [-]
Let's play this out bubba.
Let's talk economics.
How many monetized webpages are out there, vs free websites?
How many blogs, forums, linktrees, etc?
With what you're suggesting, all of that goes away.
You may want that internet.
I sure as fuck do not.
immibis 5 days ago [-]
You're still making no sense here?
Meekro 7 days ago [-]
The captchas are totally optional. You can turn that off and just keep Cloudflare's DDoS protection.
nemathod 8 days ago [-]
GRE-Tunnel
VladVladikoff 8 days ago [-]
I’m confused what this would accomplish? Do GRE tunnels drop UDP packets or something?
firebird84 8 days ago [-]
You make a contract with a company that does layer 3 DDoS protection, you advertise a route including their AS on a subset of your prefixes, and they route to you over a GRE tunnel.
VladVladikoff 8 days ago [-]
Sorry for the noob questions here but why couldn’t you just firewall? ie only allow traffic forwarded from the DDOS proxy?
immibis 8 days ago [-]
With these services the forwarding happens at a lower level. The traffic doesn't come from them - the source address is whoever actually sent the traffic. And the destination address is you, but the Internet thinks they are hosting you. They can't just forward the same packets to you because they'd just go back to the DDoS provider because that's where "you" "are". So they put the packets inside other packets and send them to you on a different address.
I suppose they could rewrite the destination to be your real address, and then send them to you without extra layers; you wouldn't get to know what the original destination address was; maybe if you only have one, it doesn't matter.
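Roughly, GRE is just packet-in-packet. A Python sketch of the encapsulation step as I understand it (simplified: no GRE checksum/key/sequence options, the outer IP checksum is left for the sending stack to fill in, and the addresses are illustrative):

    import socket
    import struct

    def gre_encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
        # The inner packet keeps its original source/destination addresses;
        # only the outer header says "scrubbing provider -> your real server".
        # Minimal GRE header: no option bits set, protocol 0x0800 = payload is IPv4.
        gre_header = struct.pack("!HH", 0x0000, 0x0800)

        payload = gre_header + inner_packet
        total_length = 20 + len(payload)  # 20-byte outer IPv4 header, no options

        # Outer IPv4 header with protocol 47 (GRE). Checksum left at 0 here.
        outer_header = struct.pack(
            "!BBHHHBBH4s4s",
            0x45,  # version 4, header length 5 words
            0,     # DSCP/ECN
            total_length,
            0,     # identification
            0,     # flags / fragment offset
            64,    # TTL
            47,    # protocol number for GRE
            0,     # header checksum (placeholder)
            socket.inet_aton(outer_src),
            socket.inet_aton(outer_dst),
        )
        return outer_header + payload

On your end the tunnel interface strips the outer header back off, so your server still sees the original destination address the client used.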
immibis 8 days ago [-]
The simplest is to just wait until the attacker is bored, and/or daddy's credit card runs out.
If you aren't doing any business, or not much business, through your site, this can be fine. Your hosting provider may either choose to let your server be overwhelmed with as many packets as its pipe can fit, or it may need to protect its network by discarding traffic to your IP address upstream of itself. It's probably a good idea to reach out to your hosting provider and let them know you're getting DDoSed. Even if they can't do anything about it (though there's a chance they can) they'll hopefully appreciate the heads up.
True story: I ran a Pixelflut client for 38C3 from a Netcup server in Nuremberg (this somehow had better performance than running it on my tablet at the physical location) and they somehow thought 38C3 was DDoSing me and "helpfully" blackholed traffic between 38C3 and my server.
---
It's important to stop thinking of DDoS as some magic hammer of Thor that you can't do anything about. DDoS packets, like all other packets, have source and destination addresses and flow through routers and links.
When Cloudflare receives a 7-terabit DDoS, they aren't receiving 7 terabits through one link. Cloudflare operates a huge number of locations that pretend to be one coherent network. So they're receiving 100 gigabits in London, 100 gigabits in Frankfurt, 200 gigabits in NYC, etc. Their network architecture pretends like it's delivering all these packets to their destination addresses, but really, each location has its own completely different set of servers that all have the same addresses. (This is called anycast.) Each individual packet sender is only sending packets to the nearest Cloudflare node, where they're getting discarded. Likely, no individual node is overloaded by this, but when you aggregate the statistics from all of them, it adds up to a large amount of traffic. This is by the nature of a DDoS - it's devices all over the world attacking you, which means they're all coming by different routes.
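Back-of-the-envelope, with made-up per-location numbers, the aggregation looks something like this:

    # Illustrative per-location rates only, not Cloudflare's actual figures.
    gbps_per_location = {
        "London": 180, "Frankfurt": 220, "NYC": 310, "Tokyo": 150, "Sao Paulo": 90,
        # ...plus a few hundred more anycast sites...
    }

    total_gbps = sum(gbps_per_location.values())
    busiest_site = max(gbps_per_location.values())
    print(f"headline number: {total_gbps} Gbps, busiest single location: {busiest_site} Gbps")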
It's similar with hosting providers too, at least the big ones. Suppose you're on Hetzner: https://www.hetzner.com/unternehmen/rechenzentrum/ . They're not getting a terabit against your server through one link - they're getting 100Gbps through DE-CIX Frankfurt, 10Gbps through AMS-IX, 50Gbps through Telia in Nuremberg, 50Gbps through Telia in Helsinki, 50Gbps through Core-Backbone, etc.
If they deploy a routing rule to the router on their end of each of those links, which says to discard packets where the destination address is yours, they can protect their network. Your site will still be down, of course.
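In effect that rule is just "drop anything whose destination is the victim", applied at each border link. A toy Python version of the filter, using documentation addresses in place of real ones:

    import ipaddress

    # Hypothetical victim address being null-routed at the edge.
    BLACKHOLED = {ipaddress.ip_network("203.0.113.7/32")}

    def should_forward(packet_dst: str) -> bool:
        # Stand-in for a null route / ACL entry: anything destined for a
        # blackholed prefix is silently discarded at the border router.
        dst = ipaddress.ip_address(packet_dst)
        return not any(dst in net for net in BLACKHOLED)

    assert should_forward("198.51.100.10") is True   # everyone else's traffic still flows
    assert should_forward("203.0.113.7") is False    # the victim's traffic is dropped upstream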
If one of their pipes does get overloaded (say their full 10Gbps from Baltnet in Frankfurt), they can reach out to that network (pretty much every serious network on the internet has a network operations center, reachable 24/7 by phone) and Baltnet will track it down further and block the traffic even closer to its source (or at a wider part of their network).
If you're lucky and the DDoS traffic is just coming from a few "directions", users whose packets happen to come via a different direction may still be able to access your site.
Suppose you're on Uncle Tom's Tiny Hosting Company Ltd (not real), they're certainly not the scale of Hetzner, and they only have a 10Gbps pipe between them and their ISP which is easily filled by a single attack. They'll have to contact their ISP to block traffic to your server so that the rest can get through, and their ISP will do the above stuff.
None of this information will keep your site up during a DDoS, I just want to show you there's a depth to this DDoS thing and this Internet thing and it's not just magic.
zzzeek 8 days ago [-]
don't piss off any nation-states that would want to take your site down, should help
petee 8 days ago [-]
Fwiw, i have a site with nearly zero content or users; randomly it got ddos'd one day, and never happened again. I think the reasons for a ddos can be wide ranging, from just testing, to nation state, to someone is unhappy with your font choice
inetknght 8 days ago [-]
> to someone is unhappy with your font choice
Everyone hates when I set my app's fonts to courier size 8.
esseph 8 days ago [-]
An 11 year old with a discord account and a stolen credit card can now rent massive capabilities that can take (smaller, limited peered) entire countries offline for brief periods these days.
encom 8 days ago [-]
So this "article" "source" is Cloudflare, claiming Cloudflare blocked some super duper mega attack, but gives zero verifiable detail about any of it.
Now I hate Cloudflare with a passion, but even setting that aside, this is journalistic malpractice - it's basically a sponsored post. I was going to say I expected better from Ars Technica, but their glory days are long gone.
greggsy 8 days ago [-]
How is CF not a valid primary source?
They literally are a DDoS mitigation service.
nurettin 7 days ago [-]
They have Economic Incentive to lie to you.
encom 8 days ago [-]
Encom is the greatest, fastest, cheapest electrician in Denmark.
Source: me.
gundmc 8 days ago [-]
Why do you hate Cloudflare so passionately?
immibis 8 days ago [-]
There are many reasons Cloudflare should be hated. The main one is that their goal is to centralize the Internet. A secondary one is all those bloody captchas. A tertiary one is that they often block Tor, even if you pass a captcha. Yes, it's configurable, but their recommended settings are the ones that help break the internet. A fourth one is that many DDoS-for-hire sites are protected behind Cloudflare, which allows them because they are good for its business model. Need I go on?
However in this case I think we can rely on them to tell us what they did. If they say they got a 7.3 Tbps UDP DDoS, chances are good they actually did.
6r17 7 days ago [-]
tbh i get you; but one has to realize this has nothing to do with that company and everything to do with the current nature of technological business, where "everyone wins it all".
What I say is that instead of hating on Cloudflare one can look up how a DNS server works and start getting into DDoS mitigation; but even after a couple of months anybody would still just have scratched the surface of it.
I don't think it's Cloudflare's "goal" to centralize the internet, nor is it to set up captchas everywhere; but it's definitely frustrating.
immibis 6 days ago [-]
It's every internet company's goal to centralize the part of the internet that aligns with what they do; Cloudflare's part happens to be most of the internet, since they provide low-level infrastructure services.
Orchestrated by the Cloudflare sales team due to not meeting their revenue targets for the quarter. More fear in website owners and more publicity about Cloudflare.
curtisszmania 8 days ago [-]
[dead]
balanc 8 days ago [-]
Doesn’t Cloudflare have every incentive to inflate the bandwidth of the attack they have successfully mitigated?
And yes I know that there are Cloudflare employees here so spare me with your pinky swears.
move-on-by 8 days ago [-]
A couple months ago Brian Krebs, who uses Google's Project Shield, wrote of a very similar attack: 6.3 terabits, all UDP, less than a minute.
https://krebsonsecurity.com/2025/05/krebsonsecurity-hit-with...
Couldn’t this logic apply to basically every internal metric across every company?
udev4096 8 days ago [-]
[flagged]
eviks 8 days ago [-]
How does it counter the incentives of all other companies to make it look like they're not the only one???
mlyle 8 days ago [-]
Cloudflare has the biggest scale and is arguably best positioned to soak up massive attacks. Therefore CF may have a unique incentive to make it sound like attacks are larger and there are more really big ones.
eviks 8 days ago [-]
> is arguably best positioned
Lying about the scale of thwarted attacks by others is the counter argument
mlyle 6 days ago [-]
Still not sure what you're saying:
A) everyone inflates-- in which case, you want to be with the entity with the biggest pipes. Also, of course, we have reasonable estimates about how much traffic everyone can exchange.
B) non-CF players downplay DDoS-- this isn't going to work either.
eviks 6 days ago [-]
A) not really, you need an entity with pipes big enough. Have they repeatedly claimed to have frequent attacks bigger than anyone's capacity? (by the way, how are those reasonable estimates immune to everyone lying but not to the lies about the attacks?)
B) why?
mlyle 6 days ago [-]
Capacity isn't a number, like you have 48 units of internets.
You need an entity with peering, pipes, and request rates big enough in the right places.
eviks 6 days ago [-]
Capacity is a number, like you can handle 10 tbps attacks (with the peering/pipes/rates infrastructure)
mlyle 6 days ago [-]
But it's not. How much you can soak up from clients actively trying to handshake and request resources is going to be different from how much bulk traffic you can take, how much traffic you can soak up in one place (e.g. saturating your peering to where 80% of your customers are), or how much spoofed traffic you can soak up globally from a botnet.
Cloudflare has a ton of infrastructure, and this just makes them intrinsically more robust to the biggest attacks. It's in their interest, therefore, to make people believe there's a lot of really big attacks.
eviks 5 days ago [-]
You're just describing different capacities.
But this complexity likely works against your claim that there are reliable estimates, meaning that it's easier for everyone else to A) inflate their own capacities and argue that B) CF shouldn't be trusted in their assessment as it's inflated.
And I've already addressed your last point, so alternatively: if the biggest attacks are 1 bps of traffic, it doesn't matter that you have a ton of infrastructure because anyone can handle a byte. The lie has to exceed "capacities" of others for it to matter.
mlyle 5 days ago [-]
> CF shouldn't be trusted in their assessment as it's inflated.
I think since we know CloudFlare has the biggest scale, we know that CloudFlare has the biggest capacity, with a higher baseline request rate and traffic exchanged than anyone else: definitely globally, and almost everywhere locally.
> And I've already addressed your last point, so alternatively: if the biggest attacks are 1 bps of traffic, it doesn't matter that you have a ton of infrastructure because anyone can handle a byte.
Which is why Cloudflare has an incentive to make DDoS attacks sound bigger.
Sure, we always know that everyone has an incentive to make their own capacity sound bigger, and parties with marginal ability to soak DDoS have an incentive to make the DDoS problem sound smaller.
perching_aix 8 days ago [-]
Speaking of incentives, what might be the incentives of those referring to them as Clownflare? I sure have to wonder what their biases are, and how fairly they represent the company.