In this post I describe and launch a small experimental host-proof application, cryp.sr. At the moment, it provides a simple cryptographic pad for secure storage of text - stay tuned for more services soon. If the post below looks tl;dr, you should just head over to the FAQ.

The story so far...

First, a quick recap. Since my last post, I've hit on a useful explanatory shortcut, by drawing an analogy between host-proof apps and protocol design: host-proof apps are to traditional apps as ssh is to telnet. When we design a secure protocol, we think of communication over an untrusted channel between two endpoints. Host-proof applications extend the untrusted zone beyond the transit channel to the host's servers. The fundamental question when designing an encrypted protocol is "What if the communications channel is controlled by an attacker?". The fundamental question when designing a host-proof application is "What if the application server is controlled by an attacker?". Of course, the analogy is a loose one - there are significant differences too - but it's set off at least a few lightbulbs in the conversations I've had.

My last post argued that two of the most prominent host-proof applications - Clipperz and Passpack - had shortcomings that meant they were not "host-proof" in any practical sense. The response from Clipperz was great - within a few days Giulio Cesare Solaroli sent me a link to a test release that addressed the major application security problem I found. Clipperz has also started solidifying their verification process by publishing the source for their client on an independent site. Kudos to them. There has been no update from Passpack yet, but then Passpack had much more fundamental design problems to correct, so perhaps that's understandable. At the moment, Clipperz is the only commercial application I know of that really tries to be "host-proof".

As simple as possible...

After my previous post, I had some ideas relating to host-proof apps that I wanted to try out. The first step was to put together a minimal host-proof app as a base for exploration. The key word here is "minimal" - I didn't have weeks to work on the project, so I wanted to build the simplest useful application. The design I settled on is a cryptographic text pad - essentially a text area with a save button. When you click save, the data in the text area is encrypted before being sent to the server. Next time you visit the pad, the encrypted blob of data is downloaded, and you are prompted for a decryption password. Once supplied, the text area is filled with the decrypted text. This is an app that I'd actually use, if I could be convinced that my data would be secure - I like simplicity, and I have very little use for structured data, automated logins and other features usually provided by password management apps. Each pad is identified by a unique name, and users access their pads by visiting a URL containing the pad name.

The server-side is dead simple - a big hash table linking pad names with blobs of encrypted data. There's no authentication, no sessions, and the encryption password never leaves the browser. Anyone can visit any pad, but they can't view the information without supplying a correct decryption key.
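To make the "big hash table" claim concrete, here's a minimal sketch of the server-side model - this is illustrative, not the actual cryp.sr server code. The point is what's absent: no accounts, no sessions, no passwords, just opaque blobs keyed by pad name.

```python
# Minimal sketch of a host-proof pad server: a lookup table mapping pad
# names to opaque encrypted blobs. The server never sees plaintext or
# passwords, and anyone can read any pad.

class PadStore:
    def __init__(self):
        self.pads = {}  # pad name -> encrypted blob (bytes)

    def get(self, name):
        """Return the encrypted blob for a pad, or None if it doesn't exist."""
        return self.pads.get(name)

    def put(self, name, blob):
        """Store an opaque blob under a pad name. Note: no authentication."""
        self.pads[name] = blob

store = PadStore()
store.put("mypad", b"...ciphertext...")
```

All the security-relevant work happens in the browser; the server is deliberately too dumb to betray you.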

... but no simpler

One practical problem with this simple design is that anyone can overwrite the data saved in a pad. In practice a service like this would be destroyed by vandals. I decided to provide write protection by associating a random write key with each pad, and requiring this key to be supplied whenever data is saved. This is completely transparent to the user. When a pad is first created, a write key is generated on the server, and passed to the user's browser along with the application blob. When the user saves data to the server, the write key is prepended to the contents of the user's pad, and encrypted with the user's passphrase along with the rest of the data. On subsequent visits, the user has to supply the decryption key and successfully decrypt their pad contents before the application can access the write key. So a user can't save to an existing pad unless they've first successfully decrypted the pad contents.
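The server side of that write-key scheme can be sketched like so. The function names and storage layout are my assumptions, and the "blob" stands in for the real client-side ciphertext, which would contain the write key prepended to the pad text before encryption.

```python
import secrets

def create_pad(store, name):
    """Server side: mint a pad with a fresh random write key."""
    write_key = secrets.token_hex(16)  # 128 random bits
    store[name] = {"write_key": write_key, "blob": None}
    return write_key  # sent to the browser along with the application blob

def save_pad(store, name, write_key, blob):
    """Server side: accept a save only if the caller presents the write key."""
    if store[name]["write_key"] != write_key:
        raise PermissionError("wrong write key")
    store[name]["blob"] = blob

# The client encrypts write_key + pad text together under the passphrase,
# so only someone who can decrypt the pad can recover the key and save again.
store = {}
key = create_pad(store, "mypad")
save_pad(store, "mypad", key, b"ciphertext")
```

Note that the write key protects integrity, not confidentiality - a vandal without it can't overwrite your pad, but anyone can still read the encrypted blob.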

And that - modulo a few usability features - is basically the design for cryptographic pads - try it out, and let me know what you think. For more information on the crypto and the format of the stored data see the FAQ.

The consequences of radical openness

Your first reaction to this design may well be vertigo, induced by the thought that anyone could access your encrypted data. Remember that the point of departure for a host-proof application is that we want to be able to resist a determined attack from the host itself. We are saying, in effect, that we don't trust the host any more than we trust the wider internet. Exposing the encrypted data to the world forces us to think honestly about the implications that this fundamental assumption has for application security.

The major consequence of this decision is that an offline attack against the encrypted data blobs isn't just a possibility, it's a near certainty. It's important to be frank about the consequences this has for your data, so let's explore them at some length.

In a perfect world, we wouldn't care about offline attacks - pads are encrypted with a 128-bit key, and there is no technology on the horizon that comes close to being able to explore a keyspace this large. The problem, of course, is that very few people are going to use a passphrase with 128 bits of entropy. Research shows that humans are stunningly bad at generating randomness - according to NIST, your 8-character password probably has only 30 bits of entropy. Add to this the fact that human memory is depressingly limited - most people will have trouble remembering a random 8-character password - and you have a recipe for disaster.

Between the crappy human password and the encryption process is a key derivation function, which takes an arbitrary-length passphrase and turns it into a key of exactly 128 bits. An attacker mounting an offline attack against a pad wouldn't try to explore the entire keyspace - instead, they would guess passwords, run them through the key derivation function to derive a key, and then try the result against the encrypted data. The key derivation function has two properties that help protect against this kind of attack. Firstly, it is time-consuming, usually involving a large number of rounds of a hash or pseudorandom function. Secondly, it incorporates a salt - 8 random bytes that are stored along with the user's encrypted data, and used to initialise the key derivation function. The salt prevents an attacker from pre-computing a dictionary of keys from common passphrases, and then using that dictionary to attack a large number of encrypted pads simultaneously. By using a salt, we force the attacker to re-compute the dictionary for each pad.
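The derivation step can be sketched with PBKDF2 as a stand-in - the hash, iteration count and salt handling here are illustrative assumptions, not cryp.sr's actual parameters:

```python
import hashlib
import os

# Key derivation sketch: an arbitrary-length passphrase plus a stored salt
# becomes a key of exactly 128 bits. The iteration count makes each guess
# expensive for an offline attacker.
salt = os.urandom(8)  # stored in the clear alongside the encrypted pad
passphrase = b"correct horse battery"
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=16)

# A different salt yields a different key from the same passphrase, which
# is what defeats a precomputed dictionary shared across many pads:
key2 = hashlib.pbkdf2_hmac("sha256", passphrase, os.urandom(8), 100_000, dklen=16)
```

The same passphrase and salt always reproduce the same key, so the legitimate user pays the derivation cost once per unlock, while the attacker pays it once per guess per pad.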

The take-home message is that although we have countermeasures to slow an offline attack, the security of your data relies on the strength of your passphrase. The client can generate a random passphrase with 128 bits of entropy for you - that's about 23 characters. If you're really serious about the security of your pad, I suggest that you use a randomly generated key, write it down, and keep it in your wallet. Yes, your wallet.
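As a back-of-the-envelope check on that character count: assuming the passphrase is drawn uniformly from letters and digits (a 62-symbol alphabet), you need about 22 characters for 128 bits; a smaller alphabet pushes the count toward 23. This sketch also shows how a client could generate such a passphrase.

```python
import math
import secrets
import string

# How many uniformly random alphanumeric characters give 128 bits of entropy?
alphabet = string.ascii_letters + string.digits   # 62 symbols
bits_per_char = math.log2(len(alphabet))          # ~5.95 bits per character
chars_needed = math.ceil(128 / bits_per_char)     # 22 for this alphabet

# Generate one, using a cryptographically strong source of randomness.
passphrase = "".join(secrets.choice(alphabet) for _ in range(chars_needed))
```

Unlike human-chosen passwords, every character here really does carry its full share of entropy - which is exactly why it belongs in your wallet rather than your memory.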

I should also point out some other effects of cryp.sr's open design. Exposing encrypted pads to the world gives attackers a number of pieces of information they wouldn't otherwise have. An attacker can establish the existence of a pad with a given name. So, if you choose a pad name that is personally identifying, an attacker will be able to tell you have one. An attacker can tell that a pad has changed, by comparing the encrypted data with a previous retrieval. They can also get a fairly accurate idea of how much data is in a pad by looking at the length of the encrypted blob. Some of these exposures can be addressed (for instance, we could pad data to make guesses at data length less accurate). I might look at some mitigating strategies as cryp.sr evolves.
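The length-padding mitigation might look something like this - my sketch, not a cryp.sr feature. Rounding every plaintext up to a bucket boundary before encryption means the stored length only reveals which bucket the data falls into, not its exact size.

```python
import struct

BUCKET = 1024  # all padded plaintexts are a multiple of this size

def pad(data: bytes) -> bytes:
    """Prefix the real length, then zero-fill up to the next bucket boundary."""
    framed = struct.pack(">I", len(data)) + data
    return framed + b"\x00" * (-len(framed) % BUCKET)

def unpad(blob: bytes) -> bytes:
    """Recover the original data using the stored length prefix."""
    (n,) = struct.unpack(">I", blob[:4])
    return blob[4 : 4 + n]
```

An observer comparing blob sizes now learns at most log2(number of buckets) bits about the pad's contents, at the cost of some storage overhead.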

Verification - introducing AppHash

Now that we have a basic application running, there is still a problem. Every time the user visits cryp.sr, they download a full application image from the host. The client-side code is made available for peer review, but how does a user know that the image running in their browser matches the one I've published? In my previous post, I made a big deal of the fact that this is pretty much an unsolved problem in the current crop of host-proof apps. Some applications (like Clipperz) publish a hash, but the amount of effort that checking requires from users makes this almost useless.

I'm taking a very preliminary first stab at this problem by releasing a Firefox addon called AppHash (binary release) along with cryp.sr. It intercepts traffic matching specified regular expressions, and checks that the SHA256 hash of the loaded page matches a known-good value. AppHash isn't tied to cryp.sr - in fact, it is pre-configured with a hash for Clipperz too. Unfortunately, I couldn't do the same for Passpack, since its design makes this type of verification impossible.
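Conceptually, the check AppHash performs is very simple. The real addon is Firefox extension code, and the URL pattern and reference page below are made-up examples, but the logic reduces to this:

```python
import hashlib
import re

# Watched URLs and their known-good page hashes. In AppHash these would be
# shipped with the addon; the entry below is a hypothetical example.
KNOWN_GOOD = [
    (re.compile(r"^https://pad\.example/.*$"),
     hashlib.sha256(b"<html>app image</html>").hexdigest()),
]

def verify(url: str, body: bytes) -> bool:
    """Flag a watched URL whose page content doesn't match the known hash."""
    for pattern, digest in KNOWN_GOOD:
        if pattern.match(url):
            return hashlib.sha256(body).hexdigest() == digest
    return True  # URLs we aren't watching pass through unchecked
```

The hard part isn't the hashing - it's distributing and updating the known-good values through a channel the application host doesn't control, which is exactly the trust problem discussed below.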

Yes, a Firefox addon has a huge number of limitations. It is a pain in the ass for users to install, isn't cross-browser, won't work on mobile devices, has to be distributed securely, and is itself a significant application that requires verification. It is, however, a first step, and may even be the best we can do with current browser technology. In some parallel universe where host-proof applications are common, something like AppHash could be a piece of common infrastructure, co-maintained by host-proof application authors and their users. I plan to explore the verification problem more in a future post.


Client-side security and verification are technical problems - trust is a social one. What if you find that the published checksums don't match the application blob served up by cryp.sr? Who do you contact? If cryp.sr were a traditional application, you would get in touch with the application administrators. But since this is a host-proof application, that's exactly the wrong thing to do - remember, one of our fundamental assumptions is that the host may itself be a malicious entity.

Here we come to another difficult truth about host-proof applications - assurance has to be devolved to the community of users. The host should have no privileged role in this process, apart from releasing data that can be publicly verified. The core of this process has to be a well-known public channel not controlled by the application developer. Users can use this channel to communicate with each other, discuss the security of the client-side application, and raise the alarm when something funky is going on. The application host can also publish new checksums through this public channel, and users can rely on community review to make sure nothing untoward is happening.

I think a known hashtag on Twitter is a pretty good solution. For cryp.sr, please use the #crypsr Twitter hashtag. I will announce changes and hash updates there, using the @crypsr account.

Alpha means alpha

In the end, cryp.sr was built in a few days by just one person. I'm hoping that it and its underlying libraries will get some exposure and peer review, and it's possible that when this happens someone will find some hideous error. The fulcrum of cryp.sr's security is the jsCrypto library - and I have found three critical implementation errors since I started looking at it. The latest is a problem in the jsCrypto SHA256 implementation, which also breaks key derivation, and is still unfixed in the official distribution (the jsCrypto guys say a fix is on the way). Other folks have also found problems - it's hard to say exactly how many, because jsCrypto's changelog is vague. On the up-side, the flurry of activity in jsCrypto shows that it has received a significant amount of review, and the kinks are being worked out - if you're interested in these things, you should lend a hand.

The upshot is this. A host-proof application has no way to poke into user data and, say, migrate encrypted pads to a new key derivation function. So, a really severe problem might force me to mercilessly destroy all the user data saved on cryp.sr. Be warned.

Up next

In future posts, I will build on cryp.sr to try some new approaches to the knotty verification problem, and to stretch host-proof applications into new areas. Watch this space!