
I'm trying to authenticate a client to my secure WebSocket server (wss) for a registered-member area.

Once a member is connected to the web server, I record in a database a unique token (associated with the member), which I display in a hidden field on the page that initiates the connection to the WebSocket server.

The token is then sent to the WebSocket server, which authenticates the account using it.
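
For context, the server side of that flow looks roughly like the sketch below (simplified; `lookupMemberByToken()` stands in for my real database lookup, and the client sends the token as its first message on the socket):

```php
<?php
// Simplified sketch of token authentication in a Ratchet handler.
// lookupMemberByToken() is a placeholder for the real database lookup;
// the "first message is the token" convention is also illustrative.
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;

class TokenAuthApp implements MessageComponentInterface
{
    /** @var \SplObjectStorage connections that presented a valid token */
    private $authenticated;

    public function __construct()
    {
        $this->authenticated = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        // The connection is open but not yet trusted; wait for the token.
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        if (!$this->authenticated->contains($from)) {
            // Treat the first message as the token from the hidden field.
            $member = lookupMemberByToken($msg); // placeholder: returns null on miss
            if ($member === null) {
                $from->close(); // unknown token: drop the connection
                return;
            }
            $this->authenticated->attach($from, $member); // remember who this is
            return;
        }
        // ...handle application messages for the authenticated member...
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->authenticated->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}
```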

I'm really not a security expert, and I wanted your opinion on the security of my authentication scheme.

Are there any risks (except cookie hijacking)? Is there a better way to proceed, considering that WebSocket doesn't prescribe any particular way for servers to authenticate clients during the handshake?

I use Ratchet WebSocket.

Pier-Alexandre Bouchard
  • Also see [Websocket authentication](http://stackoverflow.com/questions/6169842/websocket-authentication). I don't believe it's a duplicate since it's asking about an HTTP login within a `WebSocket`. – jww Mar 23 '14 at 19:07
  • What is in the token? Most session/authentication tokens have some kind of time-limiting mechanism (to mitigate the impact of leaks) and maybe a mechanism whereby the user can invalidate old tokens (for example on changing password). Does your token include signed expiry times, or act as a key to server-side state storage with similar properties? – bobince Jun 12 '14 at 21:36 (a signed-expiry sketch follows these comments)
  • SocketCluster was using cookies originally for storing JWT tokens but it has moved away from this. You can read why here: https://github.com/SocketCluster/socketcluster-client/issues/9 – Jon Feb 04 '16 at 03:41
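
On bobince's point about expiry: one way to time-limit the token is to sign an expiry timestamp into it with an HMAC, so the WebSocket server can reject stale tokens without extra database state. A rough sketch; the token format and function names are invented for illustration:

```php
<?php
// Rough sketch of a self-expiring token: "memberId|expires|mac", where the
// MAC is an HMAC over "memberId|expires" with a server-side secret.
// Format and function names are invented for illustration.
function issueToken($memberId, $secret, $ttl = 300)
{
    $payload = $memberId . '|' . (time() + $ttl);
    return $payload . '|' . hash_hmac('sha256', $payload, $secret);
}

// Returns the member id for a valid, unexpired token, or null otherwise.
function verifyToken($token, $secret)
{
    $parts = explode('|', $token);
    if (count($parts) !== 3) {
        return null;
    }
    list($memberId, $expires, $mac) = $parts;
    $expected = hash_hmac('sha256', $memberId . '|' . $expires, $secret);
    if (!hash_equals($expected, $mac)) {
        return null; // forged or corrupted token
    }
    if (time() > (int) $expires) {
        return null; // token has expired
    }
    return (int) $memberId;
}
```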

2 Answers


Yes, one option is to use cookies (and TLS to avoid cookie hijacking):

Have the cookie set after a "plain old HTML form based" login, transmit the cookie to the WebSocket server during the opening handshake, and use the cookie to authenticate the WebSocket connection.
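
In Ratchet specifically, recent versions expose the opening handshake's HTTP request on the connection (as a PSR-7 request in `$conn->httpRequest` in Ratchet 0.4; older releases differ), so the session cookie can be read from there. A minimal sketch, assuming Ratchet 0.4+ and with `findSessionByCookie()` standing in for your own server-side session lookup:

```php
<?php
// Sketch: read the session cookie during the WebSocket handshake.
// Assumes Ratchet 0.4+, where the PSR-7 handshake request is exposed as
// $conn->httpRequest. findSessionByCookie() is a placeholder for your own
// server-side session lookup.
use Ratchet\ConnectionInterface;

function memberFromHandshake(ConnectionInterface $conn)
{
    $header = $conn->httpRequest->getHeaderLine('Cookie');

    // Minimal cookie parsing; a real application should use a proper parser.
    $cookies = [];
    foreach (explode(';', $header) as $pair) {
        $kv = explode('=', trim($pair), 2);
        if (count($kv) === 2) {
            $cookies[$kv[0]] = urldecode($kv[1]);
        }
    }

    if (!isset($cookies['PHPSESSID'])) {
        return null; // no session cookie: reject the connection
    }
    return findSessionByCookie($cookies['PHPSESSID']); // placeholder lookup
}
```

Ratchet also ships a `SessionProvider` component that integrates Symfony sessions, which may be a more idiomatic route than parsing the header by hand.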

Here is a complete example of doing Mozilla Persona-based authentication with WebSocket.

You asked about Ratchet. This example is not Ratchet, but it might give you some clues, which is why I think it's OK to point to it.

oberstet

> Are there any risks ...

Yes, lots. PKI/PKIX and SSL/TLS have a number of architectural defects. For the full treatment, see Peter Gutmann's Engineering Security.

Also, WebSockets does not allow you to query attributes of the underlying connection. So, as far as I know, you can't even tell whether you are actually using SSL/TLS; the only thing you know is that you requested it.


> Is there a better way to proceed, considering that WebSocket ...

The last time I checked, WebSockets was pretty lame. All you get to do is open, read, and write. It is intended to be easy to use, but it lacks anything useful related to security.


For completeness, the last time I checked was March 2013, while performing a security evaluation on an app that used them. Things may have changed by now.

I also tried engaging the WebSocket RFC authors, and the RFC Editor, about some of the issues from a security point of view. All of the emails went unanswered.


UPDATE (2015): It gets even better... Browsers and other user agents now have Host Public Key Pinning.

But if you look at the details, it should be called "Host Public Key Pinning with Overrides". The built-in overrides allow an attacker to break a good pinset. Worse, the browser is complicit in the cover-up because it MUST suppress the failed pin report.

You can read more complaints about it (and the rebuttal) at Comments on draft-ietf-websec-key-pinning.

It's a junk standard that never should have been standardized. It's more insecure browser junk.

jww
  • One needs to differentiate between the WebSocket _protocol_ (RFC 6455) and the W3C WebSocket _API_ in browsers. JavaScript running in browsers is not able to retrieve e.g. the certificate on a secure connection, and that is independent of using WebSocket, AJAX, or whatever. This is by design. A rogue JS running on an intranet might otherwise "probe around" on the network. – oberstet Mar 24 '14 at 11:38
  • oberstet - thanks, +1. Do you realize that position actually means you cannot use the socket? In data security, we [are supposed to] *never* apply our secret to a non-validated "thing" (for lack of a better term). For public key encryption, that means we validate the public key. For a secure channel, that means we verify SSL/TLS and certain attributes in the certificate chain. – jww Mar 24 '14 at 18:43
  • No, it does not mean that. The certificate is verified by the browser (verify in this case means: check whether the cert is valid with respect to the trust chain built into browsers). If running outside a browser, a WebSocket client can and should verify the server certificate itself, of course. – oberstet Mar 24 '14 at 18:52
  • "The certificate is verified by the browser" - bad idea. The browser (i.e., the platform) cannot know things like what CA is supposed to issue the end entity cert or what the server's expected public key is. The issuing authority or server's public key are things an app would know and could verify (if `WebSocket` made the information available). Little details like this are why web application are relegated to low value data. If the data is high value, a web app cannot be used. (At least in the risk departments I have worked). – jww Mar 24 '14 at 19:07
  • The browser _does_ know which CAs are acceptable for the server cert to be valid: the CA certs in the trust store. For IE/Chrome on Windows, e.g., there is an OS-wide trust store used. And the user/admin of the PC can modify the trust store. So you can remove all but one CA cert, and the browser will _only_ connect to a server via TLS if the server cert was issued by that CA. – oberstet Mar 24 '14 at 19:14
  • The problem is the plural - "CAs". There's one valid issuing CA (sans countersigning or cross-certification using a PKI bridge). There's no need to trust the CA zoo (or DNS, for that matter). That's gotten us into trouble in the past. It's why Comodo hacker was successful in his attack with the compromised DigiNotar root. And it's why Trustwave was successful in their MitM attacks. – jww Mar 24 '14 at 19:19
  • Yes, I agree on that one. It's a problem that a server certificate is not "pinned" to a specific issuing CA. There are fixes for that ongoing: http://security.stackexchange.com/questions/29988/what-is-certificate-pinning – oberstet Mar 24 '14 at 19:32
  • Being able to access the CA in the browser WebSocket API would be pretty useless. If we assume a compromised CA attack then the attacker would be eminently capable of sabotaging the JS that did that check, at the point it was served to the client. The commercial X.509 PKI has its problems for sure, but it's not reasonable to blame WebSocket for that. – bobince Jun 12 '14 at 21:29
  • @bobince - "Being able to access the CA in the browser WebSocket API would be pretty useless." - I disagree. What use case are you thinking of? I'm thinking of a side-loaded app in a trusted distribution channel, like one loaded from an organization's app server. In this case, the bad guy does not get access to the JavaScript in motion because it's already present on the device. I believe it should apply to SysApps, WebApps, Packaged Apps, Installable Apps, etc. (but not bookmarked apps). It all depends on the distribution channel. – jww Jun 12 '14 at 21:41
  • I'm thinking of a normal web application, which appears to be the OP's use case. Custom client software can have its own CA trust store: this is the right level to handle trust, pre-connection-opening. For a custom app implemented in-browser, it would not be practical to do a post-opening check on every connection (image/script/stylesheet resource fetches, XMLHttpRequest and WebSocket) even if there were a way to do that... with script/style inclusion your JS origin is compromised immediately anyway. Plus a connection's TLS properties are not static as they can be renegotiated. – bobince Jun 12 '14 at 23:58
  • @bobince - "... a connection's TLS properties are not static as they can be renegotiated" - if the certificate or public key changes, then you probably have bigger problems. [Triple Handshake Attack](http://blog.cryptographyengineering.com/2014/04/attack-of-week-triple-handshakes-3shake.html) FTW! The TLS WG is working on that as we speak. – jww Jun 13 '14 at 01:04
  • @bobince - "Custom client software can have its own CA trust store..." - Forgive my ignorance.... do you have a reference? I don't believe the web security model has provisions for custom stores. I know WebCrypto does not. I think about the best we have is [IndexedDB or WebStorage](http://csimms.botonomy.com/2011/05/html5-storage-wars-localstorage-vs-indexeddb-vs-web-sql.html), and I've never seen anyone wire it up. They don't even have protected storage. – jww Jun 13 '14 at 01:07