Talk:Session hijacking


Untitled

One of the major concerns about session hijacking is that an impersonator can take ownership of a user's session and then masquerade as that user, unchallenged, for the duration of the session. This is possible because the server honors any request bearing a valid session id as coming from the authenticated user; once that session id is acquired, by whatever means, the window of opportunity to inflict damage is as wide as the session itself.

A session user is identified by a session id, which is deemed adequate for some applications since a session is often a short-term interaction that does not reveal permanent user credentials. Exploiting session security through the session id may require a dedicated and swift attacker, and if the data protected by the session is not sufficiently valuable, that time and expense may not be warranted. Still, it is a cause for concern that sessions are vulnerable to this extent. Sessions are themselves programming conveniences with minimal default security: a session id that is valid for what should be a short-term login.

The session id does not provide security in itself; it exists in a context whose window of opportunity is assumed to be limited because a typical session is "short". Applications that need real security for their sessions should employ SSL, since that protects all data in transit, including the session id, with strong encryption. Commercial applications should be using SSL.

In the absence of SSL it is still advantageous to use sessions, so it is worth having a means of stronger resistance to this session-id vulnerability. Users of session-based applications should feel confident that the windows of opportunity are not wide. To reduce the threat, we can shorten the window of opportunity by not relying strictly on the session id as the token of admissibility.

To achieve this, we might use a request token in addition to the session token, where the request token is managed in the session and is valid only from one request to the next.


This can be implemented by storing a random number in the session and sending it to the client in a cookie, with a new token issued on a per-request basis. For each subsequent request, the server checks the incoming token against the value it last issued. If the values match, it honors the request and issues a new token. Issuing a token consists of generating a random number, storing it in the session, and sending a cookie containing the token in the response.
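
A minimal sketch of that issuing step, assuming the Java servlet API (the RequestTokenIssuer class name, session attribute, and cookie name below are illustrative, not prescribed by the text):

 import java.security.SecureRandom;
 import javax.servlet.http.Cookie;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 public final class RequestTokenIssuer
 {
     // Illustrative names; any unguessable value and cookie name will do.
     public static final String TOKEN_ATTRIBUTE = "request.token";
     public static final String TOKEN_COOKIE = "REQUEST_TOKEN";

     private static final SecureRandom RANDOM = new SecureRandom ();

     // Generate a fresh token, remember it in the session, send it as a cookie.
     public static String issueToken (HttpServletRequest inRequest,
                                      HttpServletResponse inResponse)
     {
         String token = Long.toHexString (RANDOM.nextLong ());
         inRequest.getSession ().setAttribute (TOKEN_ATTRIBUTE, token);
         inResponse.addCookie (new Cookie (TOKEN_COOKIE, token));
         return token;
     }
 }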

If a hijacker manages to obtain a user's session id and current request token, they can use them to make a valid request. The server will issue a new request token, which the hijacker can use for a subsequent request, and so on. However, as soon as the account owner attempts another request, the server will find that the incoming token does not match the last issued token, at which point it ends the session and ceases negotiations under that session id. This response is correct under the circumstances, since an expired token indicates a hijacking.


There is one condition in which the use of a token t(n-1) should still be considered valid: when two requests bearing t(n-1) are made (nearly) concurrently. In the case of two simultaneous requests with the same token, the server issues a new token, t(n), upon receiving the first one. The client receives t(n) to be used for the next request, but the second request bearing t(n-1) is already in transit. When the server receives it, t(n-1) no longer matches the last issued token t(n). Under the rules given so far, the server would dishonor this token and end the session, but clearly that would be incorrect behavior.

Therefore, to make this system work properly, we need a way to honor a token t(n-1) without creating more opportunity for a hijacker. This can be done by maintaining space for two tokens in the server session, call them Slot1 and Slot2 (they correspond to TokenA and TokenB in the code below). As regular requests come in, the server issues a new token and stores it in Slot1, but before doing so, it copies the previous Slot1 token into Slot2, so the last two issued tokens are held in session memory. If the server does not find a match in Slot1, it checks Slot2. If the token is found in Slot2, the server honors the request and re-sends the token in Slot1 (i.e. it repeats the last issued token rather than minting another).
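
A rough sketch of that two-slot bookkeeping (illustrative only; the class and method names here are not from the partial implementation below, which packages the same idea in a SessionToken helper):

 // Illustrative sketch of the two-slot bookkeeping described above.
 // Slot1 holds the last issued token; Slot2 holds the one before it.
 public class TwoSlotTokens
 {
     private String slot1;   // most recently issued token
     private String slot2;   // previously issued token, kept for concurrent requests

     // Issue a new token: the current Slot1 value is demoted to Slot2.
     public synchronized void issue (String newToken)
     {
         slot2 = slot1;
         slot1 = newToken;
     }

     // True if the incoming token matches the last issued token (Slot1).
     public synchronized boolean matchesSlot1 (String incoming)
     {
         return (slot1 != null) && slot1.equals (incoming);
     }

     // True if the incoming token matches the previously issued token (Slot2).
     public synchronized boolean matchesSlot2 (String incoming)
     {
         return (slot2 != null) && slot2.equals (incoming);
     }

     // The token to repeat when Slot2 matched, i.e. re-send Slot1.
     public synchronized String currentToken ()
     {
         return slot1;
     }
 }

When a Slot2 match is honored, the server repeats the Slot1 value rather than minting yet another token, so the two in-flight responses end up agreeing on the same current token.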

The final thing to observe is that honoring t(n-1) indefinitely would allow a hijacker to use a stolen token in such a way that, when the account owner makes his next request, no evidence of hijacking is revealed: the owner's now-stale token would still be honored via Slot2 and the owner quietly re-synchronized to the token in Slot1, as if everything were normal. This would be a serious flaw. In fact, what we want is for this sequence to result in session invalidation as described above.

Therefore, we want to implement a Time-To-Live (TTL) on the token in Slot2, such that the server will only honor the Slot2 token up to some pre-defined time after it was issued. Conveniently, the TTL can be very short (seconds or fractions of a second). This is due to the fact that valid Slot2 token matches only occur in the case of concurrent requests. Given that these are concurrent requests, the time differential in which a Slot2 token match would be checked wouldn't normally be more than a second. In simple tests, the actual elapsed time recorded was under 100 milliseconds. If we set a TTL of, say, 2 seconds, this affords a hijacker 2 seconds to exploit a stolen Slot2 token. The hijacker would have to steal the last issued token and use it within 2 seconds of the user's concurrent requests. As such, using two slots accounts for concurrent usage by the account owner, and leaves only a small window of opportunity for a hijacker.
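
A minimal sketch of such a TTL check (the class name and the 2-second constant are illustrative; this mirrors the getTokenB().isValid() call in the partial implementation further down):

 // Illustrative token value carrying an issue timestamp for the Slot2 TTL check.
 public class TimestampedToken
 {
     // Honor a demoted (Slot2) token only this long after it was issued.
     private static final long SLOT2_TTL_MILLIS = 2000L;

     private final String token;
     private final long issuedAtMillis;

     public TimestampedToken (String inToken)
     {
         token = inToken;
         issuedAtMillis = System.currentTimeMillis ();
     }

     public String getToken ()
     {
         return token;
     }

     // Valid only within the TTL window; checked when the token sits in Slot2.
     public boolean isValid ()
     {
         return (System.currentTimeMillis () - issuedAtMillis) <= SLOT2_TTL_MILLIS;
     }
 }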

In the end, we don't expect many concurrent requests. If we consider a requested URL to be a page in some menu-driven navigation system or a link on a page, a user doesn't typically select two items simultaneously. If the server maps the request token behavior to page URLs (as opposed to all requests, including images, etc.), there is a high expectation that normal usage will not produce concurrent requests in the same session. The support for concurrency exists to enable programmatic usage that may invoke resources such as webservices in a multithreaded or parallelized fashion. We must also not forget the occasional unruly user who holds down the F5 key to reload the current page as fast as the computer will allow...


This system is relatively easy to implement and requires only server-side code. Implement the behavior in a servlet filter mapped to page URLs (e.g. *.do for Struts actions): the filter checks and issues cookies containing a random number and maintains two in-memory slots in a session object (a sketch of such a filter appears after the partial implementation below).

Here is a partial implementation (Java) to give a sense of it; SessionToken, RequestToken, and BadRequestTokenException are application helper classes that are not shown here.

 public static boolean handleToken (
   HttpServletRequest inRequest,
   HttpServletResponse inResponse)
   throws BadRequestTokenException
 {
   SessionToken sessionToken = (SessionToken) inRequest.getSession ()
       .getAttribute (RequestToken.REQUEST_TOKEN_KEY);
   // first request.. initialize, quit
   if (sessionToken == null)
   {
       sessionToken = new SessionToken();
       sessionToken.addTokenAndPop (inResponse);
       inRequest.getSession ().setAttribute (
           RequestToken.REQUEST_TOKEN_KEY, sessionToken);
       return true;
   }
   // serialize access to the same session object
   synchronized (sessionToken)
   {
        // If the token from the incoming request (read from the cookie) matches
        // the most recent session token (slot A), honor it: rotate A into B
        // and issue a new token.
       if (sessionToken.getRequestTokenValue ().equals (
               sessionToken.getTokenA ().getToken ()))
       {
           sLog.debug ("\nToken Match on A [" +
               sessionToken.getRequestTokenValue () +
               "]");
           sessionToken.addTokenAndPop (inResponse);
           return true;
       }
        // If the token matches B and B is still within its TTL, honor it and re-issue A
       if ((sessionToken.getTokenB () != null) &&
           sessionToken.getRequestTokenValue ().equals (
               sessionToken.getTokenB ().getToken ()) &&
           sessionToken.getTokenB ().isValid ())
       {
           sLog.debug ("\nToken Match on B [" +
               sessionToken.getRequestTokenValue () +
               "]");
           sessionToken.keepCurrentToken (inResponse);
           return true;
       }
        // Anything else is rejected.
        throw new BadRequestTokenException (sessionToken);
   }
}
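
As noted above, the check can be wired in through a servlet filter mapped to page URLs. Here is a minimal sketch of such a filter, assuming the javax.servlet API; RequestTokenHelper stands in for whatever class holds the handleToken method above, and the error handling is illustrative:

 import java.io.IOException;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 // Illustrative filter; map it to page URLs (e.g. *.do) in web.xml.
 public class RequestTokenFilter implements Filter
 {
     public void init (FilterConfig inConfig) throws ServletException
     {
     }

     public void doFilter (ServletRequest inRequest,
                           ServletResponse inResponse,
                           FilterChain inChain)
         throws IOException, ServletException
     {
         HttpServletRequest request = (HttpServletRequest) inRequest;
         HttpServletResponse response = (HttpServletResponse) inResponse;
         try
         {
             // handleToken is the method shown above (class name assumed here).
             if (RequestTokenHelper.handleToken (request, response))
             {
                 inChain.doFilter (inRequest, inResponse);
             }
         }
         catch (BadRequestTokenException e)
         {
             // A stale token indicates possible hijacking: end the session.
             request.getSession ().invalidate ();
             response.sendError (HttpServletResponse.SC_FORBIDDEN);
         }
     }

     public void destroy ()
     {
     }
 }

The filter-mapping in web.xml would pair this filter with the *.do pattern mentioned earlier.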

This technique is similar in spirit to an approach known as Page Tokens.

Page tokens are encoded on each link in a page and stored in a session map on the server, so that a given page token can be used to follow a link only once. If a hijacker uses a stolen page token, the page token map on the server is updated and the used token expires. Then, when the real user clicks the link, the server finds the token to be stale, indicating that improper activity has occurred.
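
A rough sketch of that bookkeeping (illustrative; the text does not prescribe a particular structure) might keep the set of outstanding page tokens in the session and remove each one as it is spent:

 import java.security.SecureRandom;
 import java.util.Collections;
 import java.util.HashSet;
 import java.util.Set;
 import javax.servlet.http.HttpSession;

 // Illustrative sketch of per-link Page Tokens kept in the session.
 public final class PageTokens
 {
     private static final String ATTRIBUTE = "page.tokens"; // illustrative key
     private static final SecureRandom RANDOM = new SecureRandom ();

     // Called while rendering a page: mint a token to encode on one link.
     public static String mint (HttpSession inSession)
     {
         String token = Long.toHexString (RANDOM.nextLong ());
         outstanding (inSession).add (token);
         return token;
     }

     // Called on each request: a token may be spent exactly once.
     public static boolean spend (HttpSession inSession, String inToken)
     {
         // remove() returns false for unknown or already-used tokens.
         return (inToken != null) && outstanding (inSession).remove (inToken);
     }

     private static Set<String> outstanding (HttpSession inSession)
     {
         synchronized (inSession)
         {
             @SuppressWarnings("unchecked")
             Set<String> tokens = (Set<String>) inSession.getAttribute (ATTRIBUTE);
             if (tokens == null)
             {
                 tokens = Collections.synchronizedSet (new HashSet<String> ());
                 inSession.setAttribute (ATTRIBUTE, tokens);
             }
             return tokens;
         }
     }
 }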

Page Tokens do satisfy the same basic goal as Request Tokens, but Page Tokens have three significant differences.

1) Page Tokens impose work on the page author, who must encode the token on each link or in each form, whereas Request Tokens are submitted automatically by the browser in a cookie.

2) Page Tokens don't support concurrency. If two requests with the same token are submitted simultaneously, one of them will be rejected. Admittedly, the context in which Page Tokens would be used is in a page, and links in a page are not likely to be clicked simultaneously by a normal user. However, if an application is used by a program rather than a human user, the services may call for such behavior.

3) The third difference manifests itself in a variety of ways.

a) Child Browser Window - If a user opens a child browser window and clicks a link, the parent window showing the original page will have outdated tokens and cannot be used; i.e. that page contains at least one dead link.

b) Browser Refresh - If a user clicks a link with a page token and arrives at the target page successfully, simply refreshing the screen will fail, since the token used to access the page has been spent and is no longer current on the server.

c) Back Navigation - If a user uses the browser back button, the previously selected link will fail. That is, if the user clicks a link, clicks the back button, and then selects the same link, the Page Token for that link has already been used. Indeed, the previous page itself was requested with a Page Token that has been used and is therefore expired; the only reason using Back itself does not fail is that the browser will likely serve a cached copy rather than force a new request. If a new request were made with the previous parameters, the page would fail immediately upon using Back.

d) Bookmarks - These will fail for the same reason as Back Navigation. Simply put, a bookmarked URL contains a Page Token that by definition has already been used, and having been used, is stale and invalid.

Whether this is an advantage or a disadvantage really depends on the type of application and whether these qualities are desired or not. Page Tokens can be of use in an environment where workflow state and navigation are closely guarded. However, in a general public website where there is no guarantee or control over user behavior, intermediate security such as this should not impair normal operations, so Request Tokens may have more general applicability.


Neither Page Tokens nor Request Tokens provide crack-proof session security. Their basic premise is that by introducing a request-level token in addition to the session level token, the window of opportunity for a hijacker is narrowed, giving a stronger defense against potential impersonation.

Request tokens don't attempt to create security in themselves. However, any mechanism designed for the sake of programmatic convenience, as sessions are, should not undermine security by creating new avenues of susceptibility.

The fact that Request Tokens do not aim to create security implies there is a larger security context at issue. This context is the session context. Request tokens simply make it harder to breach the session security context.


For a discussion of Page Tokens from Palisade: [1]

For more talk about Session Hijacking, see this Chris Shiflett article: [2]

A different but interesting and related topic is Session Riding (a.k.a. Cross-Site Request Forgery, CSRF): [3]

a better method?

"Therefore, a better method [of secondary checking after you've checked session cookie] is to store and check a hash value of the user's browser string." Does this mean a hash value of the User-agent string? If so, won't that be identical for more or less everyone using IE6? Ogy403 20:01, 20 August 2007 (UTC)[reply]

No - because the navigator.userAgent in IE will return local addons, such as the version of .NET. It depends on local machine-level configuration, and as such tends to be helpful. —Preceding unsigned comment added by 69.226.210.193 (talk) 06:59, 28 December 2007 (UTC)
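
For what it's worth, a minimal sketch of the secondary check being quoted (illustrative only, assuming the Java servlet API): store a hash of the User-Agent header when the session is created and compare it on later requests.

 import java.nio.charset.StandardCharsets;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.util.Arrays;
 import javax.servlet.http.HttpServletRequest;

 // Illustrative secondary check: compare a hash of the User-Agent string.
 public final class UserAgentCheck
 {
     private static final String ATTRIBUTE = "ua.hash"; // illustrative key

     // Call once when the session is established.
     public static void remember (HttpServletRequest inRequest)
         throws NoSuchAlgorithmException
     {
         inRequest.getSession ().setAttribute (ATTRIBUTE, hash (inRequest));
     }

     // Call on later requests; false means the User-Agent no longer matches.
     public static boolean matches (HttpServletRequest inRequest)
         throws NoSuchAlgorithmException
     {
         byte[] stored = (byte[]) inRequest.getSession ().getAttribute (ATTRIBUTE);
         return (stored != null) && Arrays.equals (stored, hash (inRequest));
     }

     private static byte[] hash (HttpServletRequest inRequest)
         throws NoSuchAlgorithmException
     {
         String userAgent = inRequest.getHeader ("User-Agent");
         return MessageDigest.getInstance ("SHA-256").digest (
             (userAgent == null ? "" : userAgent).getBytes (StandardCharsets.UTF_8));
     }
 }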


The page mentions IP source routing as being a "popular" way to hijack sessions. The vast majority of the internet no longer supports source routing at this point; is this really still a notably popular way?

199.106.103.248 (talk) 23:21, 21 April 2010 (UTC)