by anonymfus on 7/12/2021, 12:54:54 PM
Does that mean the quest to find a working direct link to an image/video will soon become impossible?
by wronex on 7/12/2021, 11:57:14 AM
How is this different from the Origin header? Does the Origin header not tell the web server whether the request originated from the same website? Is the Origin header flawed in some way?
by ec109685 on 7/12/2021, 4:06:47 PM
This is FUD:
> Hence the banking server or generally web application servers will most likely simply execute any action received and allow the attack to launch.
While these are useful headers, protections against these attacks already exist via XSRF tokens, which all major sites implement, so it isn't likely your bank is vulnerable.
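For anyone who hasn't seen it, the token pattern looks roughly like this. A minimal synchronizer-token sketch in TypeScript/Express; the route names and toy token store are illustrative, not any real bank's code:

    // Minimal synchronizer-token sketch (TypeScript + Express).
    // Real apps tie the token to the user's server-side session;
    // the Set below is a toy stand-in.
    import { randomBytes } from "crypto";
    import express from "express";

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    const tokens = new Set<string>(); // stand-in for per-session storage

    app.get("/transfer-form", (_req, res) => {
      const token = randomBytes(32).toString("hex");
      tokens.add(token);
      // The token is embedded in the page; attacker.com cannot read it
      // cross-origin, so it cannot forge a valid submission.
      res.send(`<form method="POST" action="/transfer">
        <input type="hidden" name="csrf" value="${token}">
        <button>Transfer</button>
      </form>`);
    });

    app.post("/transfer", (req, res) => {
      if (!tokens.delete(req.body.csrf)) {
        return res.status(403).send("missing or invalid CSRF token");
      }
      res.send("transfer executed");
    });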
by ajb on 7/12/2021, 8:48:46 PM
The original CORS protection is enforced by the browser, not the server. That means it is much harder for it to cause a privacy problem. Given that this only works if you are using a browser anyway (any other user agent can spoof all this), I don't see how there can be any security gain from the server doing the enforcement. That leaves me wondering whether the increased flexibility is worth the potential privacy issue.
by gentleman11 on 7/12/2021, 4:32:33 PM
Pardon my ignorance. I thought the way to deal with CSRF was CSRF tokens. It seems like you would still have to ignore the headers and rely on the token in your logic if they ever disagreed. I'm not sure how to use these new headers.
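For what it's worth, the usual pattern is layering rather than choosing: the headers are a cheap early reject, and the token stays authoritative. A hedged TypeScript/Express sketch, where validateCsrfToken is a hypothetical stand-in for whatever token machinery the app already has:

    // Defense in depth: Sec-Fetch-Site as a cheap pre-filter, the CSRF
    // token as the authoritative check.
    import express from "express";

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // Hypothetical helper; real apps use their existing token scheme.
    function validateCsrfToken(req: express.Request): boolean {
      return typeof req.body.csrf === "string" && req.body.csrf.length > 0;
    }

    app.post("/transfer", (req, res) => {
      // Layer 1: supporting browsers let us reject obvious cross-site
      // requests before touching any application state.
      const site = req.get("Sec-Fetch-Site");
      if (site && site !== "same-origin") {
        return res.status(403).send("rejected by Sec-Fetch-Site");
      }
      // Layer 2: the token still decides, and it also covers clients
      // that don't send the header at all.
      if (!validateCsrfToken(req)) {
        return res.status(403).send("rejected by CSRF token");
      }
      res.send("ok");
    });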
by amluto on 7/12/2021, 3:02:55 PM
I’m quite surprised that Sec-Fetch-Dest doesn’t have a “form” type for form submissions, and the spec makes almost no mention of forms. Does this spec finally allow a simple header check to squash CSRF form posts or not?
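For what it's worth: a cross-site form submission arrives as a top-level navigation (Sec-Fetch-Mode: navigate, Sec-Fetch-Dest: document), so the usable signal is Sec-Fetch-Site rather than a hypothetical "form" dest. A minimal sketch of that check, assuming an Express app with a made-up endpoint name:

    // Sketch: squash cross-site form POSTs with a header check. The
    // giveaway is Sec-Fetch-Site: cross-site; the dest is just "document".
    import express from "express";

    const app = express();

    app.post("/transfer", (req, res) => {
      // Browsers that predate the header send nothing; those requests
      // still need to be covered by CSRF tokens.
      if (req.get("Sec-Fetch-Site") === "cross-site") {
        return res.status(403).send("cross-site form POST rejected");
      }
      res.send("ok");
    });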
by mousepilot on 7/13/2021, 3:03:42 PM
For me, recent Firefox releases have been MISERABLE. I guess it could just be my computers, but the browser constantly locks up, performance is nonexistent, and there's just no help on troubleshooting anywhere.
I went with the long-term support releases and have had a better experience. Of course, still no sound lol, but I use Chrome when I want sound. I still like Firefox, I just can't use recent releases.
by rob-olmos on 7/12/2021, 7:51:45 PM
Some example code on how to use these headers to allow/reject requests:
https://web.dev/fetch-metadata/#step-5:-reject-all-other-req...
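The gist of the linked policy, as a TypeScript/Express sketch that loosely mirrors it (the function and route wiring here are mine, not copied from the page):

    // Middleware loosely mirroring web.dev's "resource isolation" recipe:
    // allow same-origin/same-site/user-initiated requests, allow plain
    // top-level navigations, reject everything else.
    import express, { NextFunction, Request, Response } from "express";

    function isolateResources(req: Request, res: Response, next: NextFunction) {
      const site = req.get("Sec-Fetch-Site");

      // Header absent: old browser or non-browser client; fall through
      // to the app's other defenses (CSRF tokens, auth).
      if (!site) return next();

      // Same-origin/same-site requests, or user-initiated ones (typed
      // URL, bookmark), are fine.
      if (["same-origin", "same-site", "none"].includes(site)) return next();

      // Allow simple top-level navigations, but not cross-site
      // "navigations" into <object> or <embed>.
      if (
        req.get("Sec-Fetch-Mode") === "navigate" &&
        req.method === "GET" &&
        !["object", "embed"].includes(req.get("Sec-Fetch-Dest") ?? "")
      ) {
        return next();
      }

      res.status(403).send("cross-site request rejected");
    }

    const app = express();
    app.use(isolateResources);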
by TazeTSchnitzel on 7/12/2021, 3:16:10 PM
Does this essentially solve XSRF? Would it no longer be necessary to use XSRF tokens?
by barbazoo on 7/12/2021, 3:45:04 PM
In the example, couldn't the call from attacker.com to banking.com be thwarted by CORS headers defined by the server?
by AtNightWeCode on 7/12/2021, 6:40:25 PM
So basically CORS headers that work as expected. Excellent.
by forgotmypw17 on 7/12/2021, 9:39:34 PM
Is there anything Mozilla/Firefox has done in the past 10 years that at least CAN BE ARGUED to improve the user's experience?
I've been following their work pretty closely, but I'm at a loss trying to think of anything...
by jefftk on 7/12/2021, 6:27:25 PM
Very happy to see this landing in Firefox!
For the people wondering what the motivation is, https://www.w3.org/TR/fetch-metadata/#intro has a good summary:
> Interesting web applications generally end up with a large number of web-exposed endpoints that might reveal sensitive data about a user, or take action on a user’s behalf. Since users' browsers can be easily convinced to make requests to those endpoints, and to include the users' ambient credentials (cookies, privileged position on an intranet, etc), applications need to be very careful about the way those endpoints work in order to avoid abuse.
> Being careful turns out to be hard in some cases ("simple" CSRF), and practically impossible in others (cross-site search, timing attacks, etc). The latter category includes timing attacks based on the server-side processing necessary to generate certain responses, and length measurements (both via web-facing timing attacks and passive network attackers).
> It would be helpful if servers could make more intelligent decisions about whether or not to respond to a given request based on the way that it’s made in order to mitigate the latter category. For example, it seems pretty unlikely that a "Transfer all my money" endpoint on a bank’s server would expect to be referenced from an img tag, and likewise unlikely that evil.com is going to be making any legitimate requests whatsoever. Ideally, the server could reject these requests a priori rather than delivering them to the application backend.
> Here, we describe a mechanism by which user agents can enable this kind of decision-making by adding additional context to outgoing requests. By delivering metadata to a server in a set of fetch metadata headers, we enable applications to quickly reject requests based on testing a set of preconditions. That work can even be lifted up above the application layer (to reverse proxies, CDNs, etc) if desired.
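To make the bank example concrete, rejecting the img-tag case looks something like this. A hedged sketch: the endpoint name comes from the spec's hypothetical, and everything else is assumed.

    // The spec's hypothetical: a money-transfer endpoint has no business
    // being fetched as a subresource, so reject <img>/<script>/etc.
    // requests a priori, before any application logic runs.
    import express from "express";

    const app = express();

    app.get("/transfer-all-my-money", (req, res) => {
      const dest = req.get("Sec-Fetch-Dest");
      // "image", "script", "style", ... mean some page embedded this URL
      // as a subresource; only "empty" (fetch/XHR) or "document" (a real
      // navigation) make sense here.
      if (dest && dest !== "empty" && dest !== "document") {
        return res.status(403).send("unexpected request destination");
      }
      res.send("ok"); // the real (still CSRF-protected) handler goes here
    });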