Currently, for a forward proxy to be useful for caching websites in an era where the web is almost entirely HTTPS, the proxy essentially has to act as a man-in-the-middle (MitM) and break HTTPS; otherwise it can't see which resource you're requesting, and therefore can't serve a cached copy if one is available (or cache it if not). This means that if you don't want to break HTTPS, the only thing a forward proxy can do is become a relay (via the HTTP CONNECT method). But even on that front it's practically obsolete for general browsing, thanks to low-cost VPNs which are much safer (because they try to proxy everything, not just HTTP or TCP).
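For reference, this is roughly what that relay mode looks like on the wire (example.com is just a placeholder): the browser asks the proxy to open a tunnel, and from then on the proxy only shuttles opaque TLS bytes, so there is nothing it could cache.

```
CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection Established

(encrypted TLS records now flow in both directions; the proxy sees only the
hostname and port, never the paths or responses it might otherwise cache)
```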
Even if one doesn't care about breaking HTTPS, setting up a forward proxy to splice/bump the TLS is still a pain. You have to create a self-signed CA, install it in your browser, and use proxy software like Squid that can dynamically generate a certificate for each domain, so you don't get stuck in endless TLS errors as you browse through your trusted MitM. It's also a lot more fragile, because now you have to make sure your CA doesn't get compromised (or else you get a MitM on your MitM)... If we instead validated only the TLS certificate of the proxy itself, we would only need a valid DV certificate installed on the forward proxy. It could be issued by any already-trusted CA; even a free Let's Encrypt cert should do it.
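To give an idea of the pain, here is a rough sketch of that classic MitM setup in Squid (SSL-Bump); the paths are made up, option spellings vary a bit between Squid versions, and myCA.pem stands for the self-signed CA you would have to distribute to every browser:

```
# squid.conf (sketch, Squid 4+ style; cert= is spelled tls-cert= in newer releases)
http_port 3128 ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

# helper that forges a per-domain certificate on the fly, signed by myCA.pem
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

Compare that with the alternative proposed here, where the proxy simply presents one ordinary DV certificate for its own hostname and the browser validates that single hop.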
But you may already be asking at this point: why are we breaking HTTPS? Why would you want to violate the end-to-end principle of making sure that only the website you're visiting can read your communications? Well... because it's already been broken anyway. Even though "HTTPS Everywhere" has pretty much succeeded in making sure almost every part of the web is encrypted, that doesn't mean there are no longer third parties privy to your private communications. CDNs/gateways like Cloudflare and Fastly already have to terminate (i.e. break) HTTPS in order to cache the websites they serve but don't control. It turns out webmasters never really cared about the "end-to-end" or "authentication/validation" part of HTTPS; they only want encryption and nothing more.
So if we can turn a blind eye to reverse proxies terminating your HTTPS, or to Let's Encrypt doing only automated domain-control validation for the DV certs it issues (further proof that most webmasters only care about encryption), then I don't see why we can't do the same for forward proxies. Caching forward proxies are only obsolete because we keep pretending that the end-to-end guarantees of HTTPS are taken seriously across 90% of the web. They aren't, and we should let forward proxies adapt to this new reality the same way we did for gateways. Make caches and proxies great again!

Think about it: it would be another unique feature that no other browser out there (AFAIK) offers yet. IT departments running organizations' internal networks would then have an alternative to installing a CA on every machine; they could instead deploy a browser with this option enabled. A power user on multiple devices could save some latency and data by setting up a Squid of their own. It's also one arena where we could allow ourselves to be faster than the mainstream browsers, which have to rely solely on websites using CDNs. Even the big CDNs like Cloudflare could join in on the fun if they wanted, by offering their own forward proxies; they would just have to accept that it is no longer only the origin server that decides how fast and how well-cached its website gets to be. User choice!
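For the transport half of this, browsers can already talk to a proxy over TLS and validate only the proxy's own certificate for that hop, e.g. via a PAC file like the sketch below (hostname and port are hypothetical); the missing piece this request asks for is letting such a proxy actually see and cache https:// requests instead of merely tunnelling them with CONNECT.

```
// proxy.pac -- hypothetical hostname/port. "HTTPS" here means the browser
// connects to the proxy itself over TLS, so a plain DV cert on the proxy is
// enough; today https:// URLs are still only tunnelled through it, not cached.
function FindProxyForURL(url, host) {
  return "HTTPS proxy.example.net:3129; DIRECT";
}
```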

(I am also willing to offer a bounty if that motivates someone familiar with the NSS source code and HTTP proxying to work on this request.)