Lessons to be Learned from the Most Recent Facebook Hack
A few days ago, Orange Tsai, who works with Devcore, a group of Taiwanese hackers (in the white-hat sense), posted a thoroughly fascinating read about finding a vulnerability in Facebook’s external file transfer system. That in and of itself is already pretty cool, but what’s more is that he apparently found evidence that somebody else had been there, and had been collecting the passwords of everybody using that system!
There are a number of reasons to check out his write-up, including a great description of his thought process as he looked for, and eventually found, the aforementioned vulnerability.
How It Happened
Facebook, like any other business, had a need to facilitate the transfer of files into and out of its corporate network for business users who might not know how, or want, to use system-to-system interfaces like SFTP (although it might be argued that even SFTP is not so big a deal, with software these days). It happened that Facebook used software from Accellion for this purpose, and the web portal for it was exposed to the internet on a class C network, a /24 block of 256 IP addresses that network administrators often use to segment a set of services away from the main corporate range. Other services that Facebook exposed on that class C network included things like VPN, Outlook Web App, and Mobile Device Management software.
This is pretty standard practice, and exposing entry points into your network is just fine if the software you’re using to facilitate that access is also secure. As it turns out, however, that’s a pretty big if. Orange quickly identified that the product was Accellion’s from the footer and logo of the web page design. Looking through the exposed source of the web page didn’t provide much in the way of clues, either. There were, however, several previously reported vulnerabilities in the software, such as a report from February 2011 and another from January 2013.
So, he acquired the source code for the product. This was a little easier said than done. The product was essentially a series of PHP and Perl modules, with the PHP encoded using ionCube. Encoding source code this way is a fairly common practice, intended to keep attackers from reviewing your code for vulnerabilities (it’s also a bad idea, for reasons we’ll discuss later). However, it turned out that the version of ionCube they used was out of date, and so the files could be readily decoded using pre-existing tools.
Having the source code in hand, Orange was able to look for vulnerabilities, and he found quite a large number. With these, he was able to plant a webshell: a script typically used to remotely administer systems, but one that can be (and has been) used by bad actors to remotely control machines they have compromised through some other vulnerability. In this case, it was a SQL injection vulnerability that allowed him to do this.
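To make the class of flaw concrete, here is a minimal sketch of SQL injection in Python with SQLite. This is an invented illustration of the general technique, not the actual Accellion code or query; the table, names, and inputs are all hypothetical.

```python
import sqlite3

# Hypothetical example of a SQL injection flaw -- not the actual
# Accellion code. A query built by string concatenation lets
# attacker-controlled input change the structure of the query itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # Vulnerable: user input is spliced directly into the SQL string.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Safe: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input turns the WHERE clause into a tautology and dumps
# every row, instead of matching a single user.
print(lookup_vulnerable("x' OR '1'='1"))  # leaks all secrets
print(lookup_safe("x' OR '1'='1"))        # returns nothing
```

In the vulnerable version, the input `x' OR '1'='1` closes the quoted string and appends an always-true condition; the parameterized version treats the whole input as an ordinary value and matches nothing.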
The Plot Thickens
This is all well and good, but when Orange was in the process of reporting the vulnerability to Facebook, he noticed in the server logs that there was evidence that another webshell had been used in the past. Digging further, he found that a proxy on the credential page was storing the passwords entered by users of the site in plaintext, and that periodically that file would be transferred to an external location.
The mistake that led to detection was that whoever implemented this process did so using GET requests, which are well logged and routinely audited, and forgot to suppress the errors generated by the process, which is how Orange discovered this activity in the first place.
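This is part of why GET-based exfiltration is so easy to spot: query strings land in ordinary web server access logs, so a simple scan over the log surfaces requests carrying credential-like parameters. The sketch below uses invented log lines in common access-log style; the file name, parameter names, and pattern are assumptions for illustration, not taken from the actual incident.

```python
import re

# Invented access-log lines in common log format, for illustration only.
access_log = """\
10.0.0.5 - - [12/Apr/2016:10:01:22 +0000] "GET /index.php HTTP/1.1" 200 512
10.0.0.9 - - [12/Apr/2016:10:02:03 +0000] "GET /fetch.php?user=alice&password=hunter2 HTTP/1.1" 200 64
10.0.0.5 - - [12/Apr/2016:10:03:41 +0000] "GET /logo.png HTTP/1.1" 200 2048
"""

# Match GET requests whose query string includes a credential-like field.
SUSPICIOUS = re.compile(r'"GET\s+\S*[?&](?:pass(?:word)?|pwd|secret)=', re.I)

def flag_suspicious(log_text):
    """Return log lines whose GET query string carries a credential field."""
    return [line for line in log_text.splitlines() if SUSPICIOUS.search(line)]

for line in flag_suspicious(access_log):
    print(line)
```

A POST body, by contrast, would not normally appear in the access log at all, which is what makes this implementation choice such a notable blunder.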
One last thing to notice is that Orange has provided a detailed timeline of the events as they took place. It’s a fascinating look into the lifecycle of reporting vulnerabilities, and the process for receiving bug bounties.
Lessons to Learn
There are definitely a couple lessons that might be learned from this episode, which can help us maintain proper operational security (opsec) and network security (netsec).
From an opsec standpoint, this is a lesson that encryption does not guarantee security. We find new vulnerabilities in encryption protocols and their implementations all the time (some of which you can read about on this site!), and of course the human element cannot be easily controlled.
There is also an argument that security through obscurity does not work for code. The rationale behind security through obscurity is that if you treat everything like a secret, the people who want your actual secrets won’t know what is important. While this may hold true for personal information, the approach falls apart when it comes to software because, simply put, all software has vulnerabilities, and people will find them. Knowing this, there is a tactical choice that might be made: why not let people know how your software works, let them find the vulnerabilities, and report them to you for repair? This is one of the principles behind open source software security: most actors are not, in fact, bad.
Finally, on a very technical level: PHP is just not a very good language. It is especially prone to a number of classes of security bugs, and syntactically it has few of the readability or safety features of other programming languages available today. If PHP does have one saving grace, however, it is exactly that it is open source and very widely used: over 80% of websites use PHP in some form, including this one.