Mobile ticketing, m-commerce, secure messaging, corporate applications, government communications, e-money, the list goes on of things that would be great to do on mobile – as long as they really were secure.

So: security on our phones, using publicly endorsed standards – 10 out of 10 security experts prefer them, and describe any non-standard (proprietary) security as “snake oil”.

Q: HTTPS/SSL is used for e-commerce everywhere – why not use it for mobile commerce?

A: A number of issues make it impractical for mass-market mobile use:

  1. Not available on all phones
    MIDP1 phones don’t support SSL connections. MIDP1 represents fewer than half of the Java-enabled phones in circulation in the West, but still a significant number – and a greater proportion in the developing world, where used handsets are popular. Developing systems that won’t run on these handsets needlessly cuts down your potential user base.

  2. Out of Date Symmetric Ciphers
    Some handsets have built-in HTTPS/SSL for their browsers, and many use the easiest-to-implement – and, alas, oldest – ciphers in their SSL/HTTPS stacks. Some of these ciphers have known vulnerabilities and should not be added to new systems – RC4, for example. US Govt CERT Advisory: SSL/RC4 passwords easily crackable; solution: Do not use RC4 encryption.
    This is alarming, as most mobile browsers with SSL/HTTPS support only use RC4. Try connecting to your bank and look at the page/connection security details – I’ll bet it says you are using RC4. It should be retired, not built into brand new systems.
    “RC4 encryption algorithm is not a Federal Information Processing (FIPS) standard and probably won’t ever be because network professionals see RC4 as rather weak in terms of message authentication and integrity.” – William Burr of NIST
  3. Only secures TCP data
    The built-in SSL/HTTPS cannot be used to add encryption to locally stored data, SMS, MMS, Bluetooth, IrDA or any other connections used by the mobile device, so anything you do on those channels is in the clear. This non-TCP data matters because many users don’t have the correct data settings for Java networking, or don’t know how to use them, and you can help them greatly by being able to “fall back” to encrypted SMS to complete a purchase, as you will find in our latest apps. We will cover networking issues in a later blog post. People have also attacked and opened the RMS data stores on phones, so your stored details are not safe unless you have taken extra steps beyond standard MIDP behaviour.

  4. Certificate Problems
    Many handsets don’t have a full complement of root certification authority certificates installed. This means your HTTPS server certificate will often not be recognised as properly authenticated (throwing worrying messages at the user, and completely voiding the trust relationships that make SSL/HTTPS useful), and from Java the phone will NOT CONNECT AT ALL to your server, because invalid and self-signed certificate warnings are not shown to the user when the connection is made. For example, one of our test applications was unable to connect to the major and popular London Underground Oyster site, or to the Barclays bank websites, on most Nokias because of certificate issues.
    On almost all handsets it is impossible to install new certificates, so regardless of the technical abilities of your users or support teams you will not be able to overcome this issue; the Opera Mini FAQ also acknowledges it.
    One final confusion: the original phone model from the handset manufacturer may have contained a valid Verisign or Thawte root cert, but the mobile network operator will sometimes remove that and install its own certificates at the customisation stage, so you can’t reliably know which certs are present even if you are sure which model of handset you are talking to.
  5. Slow to initiate
    HTTPS and SSL require several connections to establish a secure session. On a fast PC network with low ping times this isn’t much of a problem, as the amount of data sent is tiny. However, on a phone with two-second ping times – commonly up to six seconds – each connection takes serious time to travel to the server and back, regardless of the amount of data. On many handsets the cryptography required on the handset side can also cause delays – in our experience up to eight seconds for the encryption alone, even on a recent MIDP2 Nokia.
  6. Vulnerabilities in WTLS/Secure WAP – also not end-to-end
    Most handsets implement WTLS (Wireless Transport Layer Security protocol). This is similar in intent to SSL, but with some bits chopped out to make it easier for phones to use with their limited memory and small CPU. Once the data gets to the WAP gateway it is decrypted and then re-encrypted into an SSL/HTTPS session to the destination server on the internet.
    Unfortunately the protocol allows the use of very weak, or no encryption at all between the phone and the WAP gateway. There are several known cryptographic attacks on WTLS too, detailed here.
    Worse, the data is all in plaintext within the WAP gateway, i.e. this does not constitute end-to-end encryption – it relies on network engineers not reading or selling your data, and on unknown third parties maintaining server security and defending the WAP gateway from compromise.

    The risks of leaving your data in plaintext at the WAP gateway (or anywhere inside the mobile network) are wonderfully demonstrated by the recent revelations that hackers have gone undetected inside mobile networks for years – see stories of accidental discovery of long term hackers in Vodafone Greece and T-Mobile, and also of network engineers snooping at user data passing through their systems for romantic reasons.

    [Open question to readers, because I’m suspicious but not sure: when using the default WAP gateway (not an Internet gateway) for your mobile data connection, can you use full end-to-end HTTPS, or does the WAP/UDP protocol drag everything down to WTLS? If it does, all your WAP-based banking may well be clear-text in the WAP gateway.]
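The arithmetic behind point 5 can be sketched in a few lines. The round-trip count here is a rough assumption (TCP setup plus an abbreviated SSL handshake); the two-second ping and eight seconds of handset crypto are the figures quoted above:

```java
public class HandshakeCost {
    // Rough model: time before the first byte of application data equals
    // (round trips x round-trip time) plus handset-side crypto time.
    static long setupMillis(int roundTrips, long rttMillis, long cryptoMillis) {
        return roundTrips * rttMillis + cryptoMillis;
    }

    public static void main(String[] args) {
        // Desktop: ~30ms ping, crypto effectively free
        System.out.println("PC:     " + setupMillis(3, 30, 50) + " ms");     // prints 140 ms
        // Handset: 2s ping, up to 8s of crypto before any data moves
        System.out.println("Mobile: " + setupMillis(3, 2000, 8000) + " ms"); // prints 14000 ms
    }
}
```

Even a generous model puts the mobile user well past ten seconds before the first useful byte, which is consistent with the delays described above.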

Q: What about Opera Mini – the advanced one for MIDP2 that has some security built in, can’t we just do e-commerce through that?

A: It’s not end-to-end encrypted – so you have to trust Opera’s engineers, servers, sysadmins and maintenance contractors with your plaintext passwords and credit cards. Again, they chose to use the RC4 cipher of all things, which should be retired, not built into new products.
On the plus side, they do seed their random number generator correctly, which is disappointingly rare in mobile security.
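Whether a given server (or transcoding proxy) really negotiates RC4 is easy to verify from desktop Java. A minimal sketch – the host name is only a placeholder, and suite names vary by JRE:

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class CipherSuiteCheck {
    // Handshake with a host on port 443 and report the negotiated suite.
    static String negotiatedSuite(String host) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            return socket.getSession().getCipherSuite();
        }
    }

    // Suite names embed the symmetric cipher, e.g. "SSL_RSA_WITH_RC4_128_SHA".
    static boolean usesRC4(String suiteName) {
        return suiteName.contains("RC4");
    }

    public static void main(String[] args) throws Exception {
        String suite = negotiatedSuite(args.length > 0 ? args[0] : "example.com");
        System.out.println("Negotiated: " + suite + (usesRC4(suite) ? "  <-- RC4!" : ""));
    }
}
```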

Q: What about SMS – internet based hackers can’t see them, as they are all in the Mobile Network Operator’s walled gardens?

A: Same problems as WTLS: network engineers, third-party service technicians, servers, and base stations in the mobile networks are not policed to PCI DSS standards (required for sending/processing credit cards).
Additionally, an SMS may not pass only through the UK networks with familiar brand names. SMS messages are often transferred internationally across the Wild Wild Web through various companies you have never heard of, and may originate or terminate in some very odd countries where you wouldn’t dream of leaving your credit card.
The infamous case of bored network engineers poking through other people’s MMS pictures of naked girlfriends (and forwarding them on) is nothing compared to what may happen when and if large amounts of credit card data start to get sent through SMS messages. After the case of O2 engineers abusing their access to SMS messages, analysts Gartner said “The contents of SMS messages are known to the network operator’s systems and personnel. Therefore, SMS is not an appropriate technology for secure communications. Most users do not realise how easy it may be to intercept”.
If I were thinking like a hacker or spy, the first thing I would do is place logging hardware in a few base stations, which see plenty of data passing through them. Lawyers, government ministers and top business people leak enough valuable information without throwing credit cards into the mix. It should also be pointed out that it is child’s play to fake the sender number of an SMS.

Q: What about the open source Bouncy Castle Light Crypto Libraries?

A: A jolly good start, but they’re not THAT light. Adding a simple security system using Bouncy Castle with asymmetric key exchange, a symmetric session cipher and a random number generator (the bare minimum for secure comms) adds over 22KB to the basic app, even after you’ve stripped unused code and obfuscated it. The very oldest handsets have JAR size limits of 29KB – such as the Nokia 6310i, which despite its age is still very popular: people cling to it like limpets, and some major companies still issue it NEW to their executives (as we learned in a recent meeting). Common MIDP1 handsets have 64KB limits – Playtech saw more than 1 in 10 downloads go to these handsets in the first half of 2007, a substantial market share. Most of the work in mobile development is making things small and using as little memory as possible, so that you’ve got plenty of room for pictures and text and all the code that glues it together smoothly.

What a moaning session! I do apologise, but there’s no point beating about the bush.

Solution: (a suggestion, not the only solution)

To reliably create encrypted sessions from mobile you have to build a truly lightweight encryption/decryption system into your applications. This lets you support every possible phone, control the choice of cipher and the encryption strength, and eliminate certificate incompatibilities. It also allows you to encrypt data stored on the handset, SMS messages, or communication on other channels, which HTTPS and SSL do not.
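To make the shape of such a system concrete, here is a minimal hybrid key-exchange sketch in desktop Java – illustrative only, not EncryptME’s actual code, and using javax.crypto, which MIDP handsets lack (that gap being the whole point of this post). The client wraps a random AES session key under the server’s RSA public key; the server unwraps it:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class HybridSketch {
    // Client side: wrap a random AES session key under the server's RSA public key.
    static byte[] wrapSessionKey(SecretKey session, PublicKey serverPublic) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.WRAP_MODE, serverPublic);
        return rsa.wrap(session);
    }

    // Server side: recover the session key with the matching RSA private key.
    static SecretKey unwrapSessionKey(byte[] wrapped, PrivateKey serverPrivate) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.UNWRAP_MODE, serverPrivate);
        return (SecretKey) rsa.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        // 1024-bit RSA matches the article; modern practice is 2048 bits or more.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair server = kpg.generateKeyPair();

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128); // 256 may need unlimited-strength policy files on older JREs
        SecretKey session = kg.generateKey();

        byte[] wrapped = wrapSessionKey(session, server.getPublic());
        SecretKey recovered = unwrapSessionKey(wrapped, server.getPrivate());
        System.out.println(java.util.Arrays.equals(
                session.getEncoded(), recovered.getEncoded())); // prints true
    }
}
```

The session key then drives a fast symmetric cipher for the actual traffic, so only one small (and expensive) RSA operation is paid per session.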

This is why Masabi built EncryptME, a 3KB security component for J2ME that allows all phones to make encrypted connections over SMS, GPRS, WiFi or just about anything else. EncryptME is also US Government verified and certified, so you don’t have to take our word for it that it does what it says on the tin. It provides 1024-bit RSA, 256-bit AES and an approved RNG, if you were wondering.

Additionally, by being standards compliant you can continue to use standard server cryptography components, for example those from Sun, Microsoft or our friends at Bouncy Castle. Even against phones with built-in SSL/HTTPS, EncryptME is between 6 and 20 times faster on a mobile in live tests.

And good choice of algorithms isn’t enough! You must use them in a wise way with user education, good seeding, padding, key management, and protection against replays, insertions, concatenations, man in the middle, phishing and all manner of other attacks that cryptography alone will not solve – but that’s another story, to be entitled “When good crypto goes bad”.
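As a taste of that follow-up: replay protection, for instance, can be as simple as an authenticated message counter. A hypothetical sketch (not our actual protocol) using an HMAC over counter-plus-payload:

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.Set;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ReplayGuard {
    private final SecretKeySpec macKey;
    private final Set<Long> seenCounters = new HashSet<>();

    ReplayGuard(byte[] sharedKey) {
        this.macKey = new SecretKeySpec(sharedKey, "HmacSHA256");
    }

    // The tag covers the counter as well as the payload, so a captured
    // message cannot be replayed under a different counter.
    byte[] tag(long counter, byte[] payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(ByteBuffer.allocate(8).putLong(counter).array());
        return mac.doFinal(payload);
    }

    // Accept only messages with a valid tag and a never-before-seen counter.
    boolean accept(long counter, byte[] payload, byte[] claimedTag) throws Exception {
        if (seenCounters.contains(counter)) return false;                // replay
        if (!MessageDigest.isEqual(tag(counter, payload), claimedTag)) { // forgery
            return false;
        }
        seenCounters.add(counter);
        return true;
    }
}
```

Of course a real protocol also has to handle lost messages, counter windows and key rotation – this only shows the idea.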

Good Examples:

  • Opera Mini Advanced: MIDP2 only, but has its own crypto with proper RNG seeding. Well done, apart from the lack of end-to-end encryption and the use of RC4.
  • Playtech Online Casino: supports almost all MIDP phones, with built-in encryption, 1024-bit RSA and 256-bit AES, and player keystrokes during gameplay used to seed the RNG.

Bad Examples:

  • Opera Mini Basic: no encryption, but allows users to interact with their secure HTTPS banking sites while sending cleartext usernames, passwords or credit cards over the internet. To be fair they do admit it here with a good diagram.
  • Mobile java apps that rely on SSL or HTTPS – they won’t work in all cases, and slow down user interaction where they do work.

  • Plaintext SMS apps which invite you to send Credit Cards or other sensitive data.
  • Anything using “security through obscurity” or “snake oil” where they don’t reveal what they do to protect your data.
    Also, a special mention for the use of the word “patented” in security. It may help to raise a company’s valuation in the eyes of venture capitalists and investors, but to a cryptographer (or hacker) “patented” doesn’t mean safer, or correct, or even clever. It only really says that someone has sent it on a piece of paper to their local patent office, and can be a good hint that very few people have had the chance to check and test it – especially if the patented technology is only used by one or two companies.
    In Java, reverse-compiling (reverse engineering) end-user applications is pretty straightforward, so the secrets will come out pretty quickly as soon as someone takes an interest in your product.

I hope that gives you some idea about security on current generation mobiles. My next post will cover getting good Entropy for your secure random number generator on mobile, a need highlighted in a timely fashion by Brian at Mobile Crunch.

Comments: there are no completely secure systems in the world, and security only gets better through challenging, discussion, testing, probing and questioning. Please post your questions or corrections if you have any, and I’ll do my best to respond, or correct mistakes/omissions as they are pointed out.
Also please feel free to post good examples of clear and honest mobile security if you’ve seen some, but don’t be surprised if we start poking them and asking questions back at you!

10 Responses to Problems with Mobile Security #1

  1. post in haste, repent at leisure…. a couple of omissions that have nagged me:

    plain-text SMS will hang around in most users’ sent-items folder on their phone, so that phone thieves or anyone who gets hold of your phone can retrieve PINs, passwords and card details, and potentially use your phone to abuse them.

    What have I got against Opera?
    Absolutely nothing – I think it’s a great product, and well written, and as such it can cope with my mentioning it as an example of not quite reaching perfection. They couldn’t avoid decrypting confidential info at their transcoder, as they are a browser, not a single-purpose application. (But they could add encryption to Opera Mini Basic if they really wanted to.)

    There are plenty of companies and products in the press right now that I could name and shame as examples of real security failings, but that would be a bit too aggressive, and I’ll leave the educated reader to join the dots.

  2. Anonymous says:

    While I realise this is a marketing piece, that’s no excuse for peddling FUD about the alternatives.

    Point number two about out-of-date symmetric ciphers is pretty badly wrong. In actuality, RC4 only has one significant cryptanalytic result against its name beyond brute force, and that can be worked around. In any case, the result doesn’t affect the use of RC4 in TLS. The only concern about RC4 is the key size, but it’s good enough for most practical purposes.

    The cert advisory you link to in support of your case isn’t anything to do with the security of RC4 itself, but in fact a vulnerability in the SSH protocol. If you took it as an example of why null terminated strings and not validating input are a bad idea, then you’d be right. Giving it as an example of RC4 being a bad thing is disingenuous.

    I’m also rather interested in the statistics you have in point 5 about EncryptME. On the fastest machine tested, the N80, the connection setup time would seem to be faster than the latency will allow. OK, you can trim down the TLS startup time by using a pre-agreed cipher suite, but security dictates a minimum of three messages: one in one direction, and two in the other. If you could elaborate on that point, it’d be interesting. I suspect it compares apples with oranges…

  3. Hello anonymous,

    Re-reading the advisory, you are absolutely right that the first link to RC4 advisory is specific to SSH not SSL, and as such is not appropriate in this context – it wasn’t deliberate, but that’s no excuse in security.

    However, that does not change the fact that RC4 is not considered very good by security professionals and academics – in fact, “most security professionals recommend using alternative symmetric algorithms”. There are a number of known mathematical weaknesses in the algorithm – the flaws are not just in RC4 implementations, such as the SSH advisory I incorrectly linked to.

    Also the quote from William Burr of NIST about RC4 never becoming a FIPS standard is accurate, and you will find that it still isn’t approved to this day.

    You say it’s “good enough for most practical purposes” but why use something in new products which has doubts raised about it when there are better alternatives? RC4 should be in graceful retirement, only supported for legacy systems. It’s putting your head in the sand to continue adding it to new products, and is building problems for the future completely unnecessarily.

    About connection speeds, I don’t completely understand your point about latency, but can’t email you to confirm what you meant. We perform one connection only, and our encryption is faster than HTTPS, hence the speed improvement.

    Apples and oranges? I am comparing one version of secure comms over HTTP (HTTPS) with another method of securing comms over HTTP (EncryptME), and ours is significantly faster no matter how many times you run the tests, and on whatever device, thanks to it having to do fewer things. And before you ask, yes, it is protected against replay attack.

    Of course this is a company blog and we have a vested interest in marketing our products, but this article was not intended just as a vehicle for FUD and promotion – we would happily use SSL for MIDP if it actually worked consistently and securely across all devices. Sadly the mobile world is never that simple and I have outlined the reasons here.

  4. Anupam Varghese says:

    An excellent article – concise and precise. Having worked on quite a few mobile solutions, I’d thought that HTTPS was as good as it could get, especially considering the slight performance cost of Bouncy. I’m impressed that you could fit an encryption system into a footprint that small!

    I’ve sent you an email too at your (email us) link, hope it reaches the right desks ;), if there is any other address, please let me know.

  5. Slakin Duff says:

    Several mistakes here, dim comments, and linked sites are full of errors, too.
    I want to elaborate, but first let me know someone is out there. Everything is so old, not sure it’s worth my time if no one is even looking here. Hello???

  6. Maarten says:


    At this moment I am doing my thesis on mobile commerce possibilities.

    I thought this article was really interesting for me but I just read the latest comment…

    Yes, slakinduff I am reading and I am very interested in what you have to add to this article.

    Maarten van Dijk

  7. Ben Whitaker says:

    Hi Slakinduff,

    yes, the blog is alive (look at the latest posts).

    We posted here to encourage people to contribute and let us know what they think, or what we made a mistake on, so that we can answer criticism or improve things that are oversights, and so that all readers of the blog can be informed about security issues and approaches on mobile from both posts and comments.

    Do elaborate on your comments, but please try to be constructive and informative in your criticism.

  8. Slakin Duff says:

    Sorry, I was detained. Let’s start at the top with your references to the “Snake Oil” article.

    The first assumption is that all security can be reduced to the application of encryption. Wrong.

    The second assumption being made, invariably, is that JUST BECAUSE you use encryption, the security is good. Wrong again. Many break-ins have taken place where the data was encrypted (TJ Maxx). Encryption doesn’t do much to protect you from insiders. And look what a stinking mess DNS is in, SSL or not. Certificates are being duped and all. It’s a disaster.

    The third assumption being made is that just because an individual can read the “Snake Oil” article, they can then discern for themselves whether a particular security product (SW or HW) or service is good. Wrong. For some, it’s like reading instructions to operate an electron microscope, then being told “There…you read the instructions, now go use it!”

    “Snake Oil” is simply predicated on past cases where some have passed off bad products at the expense of others. I won’t make any comments about the author. The fact is, “Snake Oil” as it is, is useless.

    In fact, I cannot identify anyone today, attempting to do as the author describes. Perhaps there are such people. BUT MY POINT IS there are thousands of products (SW and HW) and services being offered by what appear to be solid companies that I have no way of evaluating.

    During the past year everyone and their brother has released a security product. Am I supposed to believe they are all good, JUST BECAUSE they tell me they use AES256??? How do you or anyone else know that they’re applying the algorithm properly? Where are the keys stored, where do they originate? Do you know? Does the sales person even know or care? No, just so long as they say “we use AES256”, I’m supposed to think it’s OK. And even if it’s FIPS certified, should I trust them not to keep a copy of MY keys? I have no way to know that in the future a company will not come under an order from some authority to disclose all my data… for whatever reason. And the mere possession of encrypted data can get me in trouble.

    Don’t even get me started about the idiotic statement that only encryption algorithms from ivory-tower people can provide security. I can give you several ways to protect data that do not use encryption, do not XOR the data with a one-time pad (not really encryption), yet make it physically impossible to determine the original contents of the data.

    BTW, the application of a one-time pad is academic. Why does this keep coming up? All you are doing is exchanging the burden of protecting one document with that of another. This does not have a practical use in communications. And forget about steganography.

    Encryption has several problems. Why would anyone conduct development of superior means to protect data at rest, or during communications, if it was completely useless? Why not keep it to themselves? They’d actually be better off the more people think it is snake oil and can’t work.

    Thank you for allowing me to comment.

  9. Ben Whitaker says:

    Hello again slakin duff,

    I can see that you mean well, and you are exactly correct in your assertion that encryption alone does not equal security.

    In the article body we state “And good choice of algorithms isn’t enough! You must use them in a wise way with user education, good seeding, padding, key management, and protection against replays, insertions, concatenations, man in the middle, phishing and all manner of other attacks that cryptography alone will not solve”

    Just saying that you use AES is not enough, as you say. You say that you cannot identify anyone today doing as I have suggested, so email me directly and I’ll give you some systems to look at, with issues as I have described – and also the issues mentioned in our subsequent post about random number generators.

    We never mentioned a one-time pad, but we use a variation of the approach in the keystream generated from AES CFB (as does almost everyone on the internet right now). The whole point of securing the keystream seed/key rather than the full message is that it is much smaller and easier to pass around inside the more costly (slow) RSA encryption block used for key exchange. You can then use the generated keystream to protect a long – or indeed streamed – message with much more efficient encryption. Some people do use RSA for everything, but it’s not practical in all situations.
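    To make that concrete, here is a minimal desktop Java sketch of the AES CFB stream approach (illustrative only – the fixed IV keeps the example deterministic, but a real system must use a fresh random IV per message):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CfbSketch {
    // CFB turns AES into a stream cipher: plaintext is XORed with a
    // keystream, so there is no padding and no ciphertext expansion.
    static byte[] run(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher aes = Cipher.getInstance("AES/CFB8/NoPadding");
        aes.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return aes.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(); // 16 bytes = AES-128
        byte[] iv  = new byte[16];                  // fixed IV: for the demo ONLY
        byte[] msg = "BUY TICKET ZONE1".getBytes();

        byte[] ct = run(Cipher.ENCRYPT_MODE, key, iv, msg);
        byte[] pt = run(Cipher.DECRYPT_MODE, key, iv, ct);
        System.out.println(ct.length == msg.length); // prints true - no expansion
        System.out.println(new String(pt));          // prints BUY TICKET ZONE1
    }
}
```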

    We are not saying that cryptography generated in “ivory towers” is secure, or that there is no way of carrying out secure business without it.

    But I will say this:
    1: If you are passing confidential information via the internet, or any other public telecommunications network, you should do something to prevent that data being mis-used by other parties.
    2: If you create your own cryptography to form a part of your security, and don’t build it and test it to a public standard, then history tells us that it is more likely that there will be a weakness in your approach.
    3: The publicly approved algorithms have not been created, tested and approved by a mysterious “ivory tower”, but are publicly recommended because they have been published, subjected to attack and investigation by many well funded departments in government and university security groups, and also by educated volunteers. It is only once plenty of people have had a chance to attack and investigate a system which is fully exposed would a cryptographic approach become mature enough to be included in standards like FIPS.

    I’m not entirely sure how to answer your final question. Keeping your security system secret means that you don’t benefit from other people with fiendish and different minds helping to find your mistakes, unless they are doing it in secret to damage your system. Publishing is the sensible way to test a security system, unless you are running it off-line, in a physically secure location (or concrete bunker).

    Another interpretation of your final question: people in huge and “solid companies” can throw money and cryptography at their systems, and still not provide security, regardless of the crypto-standards and buzz-words, and we’ve seen that happen.

    It is quite tricky to build something very secure, and almost impossible to build something completely secure.

    Unfortunately it’s quite easy to build something that looks secure (i.e. uses AES in a naive/unsafe manner) but would fall to the simplest of attacks, once someone has figured out what you are doing (which is easy when you reverse engineer java).

    The companies involved probably think that they have got something secure, and that provides the appearance of security, which to the consumer, business insurer, or investor is an important thing. It will only become a problem when they become interesting to an attacker – and until that happens, simply obscuring their data with a Caesar cipher would have been just as effective!
