athenian200 wrote: ↑2024-03-25, 00:18
I can't imagine the average person running a website sees it much differently... they see being asked not to use CloudFlare as being asked to be more vulnerable to hackers and reduce speed, and would see any such request as extremely suspicious or at least inconvenient.
From the point of view of someone actually running a large website with our own in-house CDN: these website operators are right to pick CF, unfortunately.
And that's even though I use, or have used, CF for other projects and found its performance regularly subpar in various locations (or for specific services they offer), and its observability nothing short of catastrophic unless you pay a lot of money for their enterprise plan.
But for your average person running a wordpress/bb/... site for fun? It's seriously an uphill battle to DIY it all to the same level.
Yes, you could set up your own reverse proxies with their local caches, have very tight firewalls on all of them, then wire up a WAF somewhere at your edge, set up tight-but-loose-enough ratelimits, and then keep all of this updated regularly. And we do just that.
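To give a taste of just one of those pieces, here's a minimal sketch of a per-IP token-bucket ratelimiter of the kind you'd wire into your edge. Purely illustrative: the capacity and refill numbers are made up, and a real deployment would do this in the proxy/WAF layer itself rather than in application code.

```typescript
interface Bucket {
  tokens: number;      // tokens currently available
  lastRefill: number;  // timestamp of last refill, in ms
}

const CAPACITY = 20;        // burst size (made-up number)
const REFILL_PER_SEC = 5;   // sustained requests/sec allowed (made-up number)
const buckets = new Map<string, Bucket>();

function allowRequest(clientIp: string, now = Date.now()): boolean {
  let b = buckets.get(clientIp);
  if (!b) {
    b = { tokens: CAPACITY, lastRefill: now };
    buckets.set(clientIp, b);
  }
  // Refill proportionally to elapsed time, capped at bucket capacity.
  const elapsedSec = (now - b.lastRefill) / 1000;
  b.tokens = Math.min(CAPACITY, b.tokens + elapsedSec * REFILL_PER_SEC);
  b.lastRefill = now;

  if (b.tokens >= 1) {
    b.tokens -= 1;
    return true;   // serve the request
  }
  return false;    // 429 / drop at the edge
}
```

(And a real one also has to evict idle buckets, share state across edge nodes, distinguish L4 from L7 abuse, and so on. Which is the point: even the "simple" pieces aren't.)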
But it's a monumental timesink, requires a lot of expertise, and requires updating whenever there are new threats, on top of the regular updates. Updates which also alternate between fixing stuff and breaking other stuff, while small-time website operators don't have dev environments perfectly mimicking their live environments, etc.
Oh, and it's also a lot of extra resources to pay for along the way, as all of this analysis/tracking/etc. isn't processed by magic.
And yet we still get owned with absolutely 0 recourse when some random fuck clicks the 350Gbps L4 attack button on their shitty booter.
And since the other big players of the web either don't need help (they are big enough that they already built their own CDN, and have teams dedicated to it), or even sell their own CDN product, the result is that no one with decision power is interested in trying to improve things at all.
Because doing so is hard, time-consuming, and requires convincing people who are notoriously annoying whenever you suggest changing anything (administrations and large ISPs come to mind). All for exactly 0 business benefit to them.
Note that this doesn't mean they are *against* improvement. They just don't have a business case for the large investment in both technology and advocacy it would require.
athenian200 wrote: ↑2024-03-25, 12:50
Basically, the problem seems to be that CloudFlare is using extensive feature detection to make sure that the constellation of features supported by a browser lines up exactly with one of the browsers they support. In some ways, feature detection which was always hailed as a solution to the problems of relying on user-agents, is turning out to be worse for Pale Moon, because in practice websites are using combinations of features and their implementation details to determine the exact browser engine and turn away any browser engine they don't recognize as potential malware. At least with user agents we could spoof them to get past the sniffing, with this they are actually challenging us to do every single individual thing their supported browsers do in precisely the way they do it as a way of determining what engine we are on, whether they actually need/use that functionality or not, with the point being to filter out unsupported browser engines.
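To make that scenario concrete first: an exact-constellation check like the one you describe would look something like the sketch below. Every probe and expected profile here is invented for illustration; real fingerprinting scripts test far more, and far subtler, things.

```typescript
// Invented probes: each checks one observable feature of the engine.
const probes: Record<string, () => boolean> = {
  hasWebGPU: () => "gpu" in navigator,
  hasWebUSB: () => "usb" in navigator,
  hasInstallTrigger: () => typeof (window as any).InstallTrigger !== "undefined",
};

// One expected feature profile per "known" engine (values invented).
const knownProfiles: Record<string, Record<string, boolean>> = {
  chromium: { hasWebGPU: true, hasWebUSB: true, hasInstallTrigger: false },
  gecko:    { hasWebGPU: false, hasWebUSB: false, hasInstallTrigger: true },
};

function matchEngine(): string | null {
  // Run every probe once.
  const observed: Record<string, boolean> = {};
  for (const [name, probe] of Object.entries(probes)) {
    observed[name] = probe();
  }
  // Only an *exact* match on the whole constellation counts.
  for (const [engine, profile] of Object.entries(knownProfiles)) {
    if (Object.keys(profile).every((k) => profile[k] === observed[k])) {
      return engine;
    }
  }
  return null; // unrecognized constellation -> treated as suspicious
}
```

A browser that matches a profile on everything but a single probe matches nothing at all, which is exactly the situation PM ends up in. That said: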
It's unlikely that their setup is based around testing for a set of known-good browsers like that tbh. (No matter how they might market/describe how it works.)
Most likely, the approach CF (and everyone else, for that matter) takes is a statistical one, along these lines (see the sketch after the list):
1. Collect actual behavior during TLS handshake, JS challenges, etc. per advertised UA
2. Clean up and categorize samples to keep only the "real" profiles (i.e. the behavior exhibited by the vast majority of the samples that pass captchas for a given advertised UA)
3. Flag the properties that only a few samples (or captcha-failing ones, or abusive ones, ...) exhibited as signs of spoofing
4. Deploy new rules based on the updated set of "known" behavior-per-advertised-UA, alongside blocking profiles (rather than UAs) associated with negative behavior
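Here's a rough sketch of what steps 2-3 could amount to in code. Everything in it is an assumption for illustration: the `Sample` shape, the single-fingerprint-hash simplification, and the 90% "vast majority" cutoff are all made up.

```typescript
interface Sample {
  advertisedUA: string;
  fingerprint: string;    // e.g. a hash of TLS handshake + JS challenge behavior
  passedChallenge: boolean;
}

const REAL_PROFILE_SHARE = 0.9; // assumed cutoff for "vast majority"

// Returns, per advertised UA, the set of fingerprints considered "real";
// anything outside that set gets flagged as a probable spoof.
function buildRules(samples: Sample[]): Map<string, Set<string>> {
  // Step 2a: group challenge-passing samples by advertised UA.
  const byUA = new Map<string, Map<string, number>>();
  for (const s of samples) {
    if (!s.passedChallenge) continue;
    const counts = byUA.get(s.advertisedUA) ?? new Map<string, number>();
    counts.set(s.fingerprint, (counts.get(s.fingerprint) ?? 0) + 1);
    byUA.set(s.advertisedUA, counts);
  }

  // Steps 2b-3: per UA, whitelist the fingerprints covering the bulk
  // of traffic; the rare remainder is treated as spoofing.
  const allowed = new Map<string, Set<string>>();
  for (const [ua, counts] of byUA) {
    const total = [...counts.values()].reduce((a, b) => a + b, 0);
    const keep = new Set<string>();
    let covered = 0;
    for (const [fp, n] of [...counts].sort((a, b) => b[1] - a[1])) {
      keep.add(fp);
      covered += n;
      if (covered / total >= REAL_PROFILE_SHARE) break;
    }
    allowed.set(ua, keep);
  }
  return allowed;
}
```

In practice they'd be clustering rich behavioral fingerprints rather than exact hashes, but the shape of the problem is the same: rare profiles under a popular UA look like spoofing.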
To make this work with a minimal false-positive rate, however, especially at the scale CF operates at, you need a ton of data. And even then it's never perfect.
And you can't reliably test it all in-house, because of the ridiculous number of possible OS × device settings × browser × browser settings combinations. Even if I'm sure they specifically test quite a few of the most popular cases, and then do very gradual rollouts.
During and after deployment, they likely look at false positives, and PM, unfortunately but understandably, does not sit at the top of their priority list.
Yet that's not them having anything against PM specifically. It's more that, mechanically, PM is much more likely to be flagged by this kind of approach, and less likely to get fixed quickly, as it probably has:
- a much-higher-than-average share of traffic that *does* carry a spoofed UA (by necessity, due to shitty sites sniffing UAs, yes, but still)
- a much smaller footprint overall, so its false positives aren't looked at as a priority
In the end, your worries are justified, though, and it is indeed only going to get worse over time.
But even so I wouldn't be too quick to blame CF. They might have set out to fight a symptom of the current Internet's issues (bots, DoS, ...) rather than the root cause of it, but it's a really tough job as-is.
And for what it's worth, the only one trying to fix a root cause (malicious browsers) is Google, with WEI. Which is not encouraging, because they are doing it mostly to fight adblockers and the like...
So pick your poison I guess.
For now, the only way is to keep reporting PM issues to CF, so that it eventually rises up their false-positives list and gets fixed.
Or try and find a friend of a friend who knows someone inside CF who can fast-track the issue past their support bureaucracy.