What the LastPass CLI tells us about LastPass Design

LastPass is a password manager that claims not to be able to access your data.

All sensitive data is encrypted and decrypted locally before syncing with LastPass. Your key never leaves your device, and is never shared with LastPass. Your data stays accessible only to you.

While it would be pretty hard to prove that claim, it is interesting to take a look at how they implement their zero-knowledge encryption. The LastPass browser extensions are a mess of minified JavaScript, but they’ve been kind enough to publish an open-source command line client that’s quite readable C code. I was interested to see what we could learn from the CLI: while it won’t prove that they can’t read your passwords, it does help us understand their design.

All of my observations are from their git repo as of commit d96053af621f5e4b784aab3194530216b8d2ef9d. I’ll try to include code snippets as well to provide context in addition to line number references.

Deriving Your Encryption Key

Let’s start by looking at how your encryption key is determined. Looking at kdf.c, we see the following function:

void kdf_decryption_key(const char *username, const char *password, int iterations, unsigned char hash[KDF_HASH_LEN])
{
  _cleanup_free_ char *user_lower = xstrlower(username);

  if (iterations < 1)
    iterations = 1;

  if (iterations == 1)
    sha256_hash(user_lower, strlen(user_lower), password, strlen(password), hash);
  else
    pdkdf2_hash(user_lower, strlen(user_lower), password, strlen(password), iterations, hash);
  mlock(hash, KDF_HASH_LEN);
}

A couple of things worth noting: pdkdf2_hash is a function that uses different underlying functions on different platforms (OS X vs Linux), but just performs a basic PBKDF2 operation. It takes, in this order: salt, salt length, password, password length, number of iterations, and output buffer. It uses HMAC-SHA256 as the underlying crypto primitive. (And the misspelling of pbkdf2 as pdkdf2 is theirs, not mine.)

Also worth noting is the special case when iterations equals 1. This is entirely speculation on my part, but I suspect it indicates that they formerly used a plain SHA-256 (well, SHA-256 of the username and password concatenated) for the encryption key. Why else special-case one iteration? One iteration of PBKDF2 is valid, though incredibly weak, so if they had always used PBKDF2 there would be no need for a separate one-round path.

Other than the special case, this looks to me like a perfectly normal PBKDF2 implementation to get a strong encryption key from the password.
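For readers who don’t want to trace the C, here’s a minimal Python sketch of the same derivation. The function and variable names are mine, not LastPass’s, and this is an illustration of the scheme described above, not their implementation:

```python
import hashlib

def decryption_key(username: str, password: str, iterations: int) -> bytes:
    """Sketch of kdf_decryption_key: PBKDF2-HMAC-SHA256 with the
    lowercased username as the salt, special-casing one iteration."""
    user_lower = username.lower()
    if iterations <= 1:
        # legacy path: plain SHA-256 of username || password
        return hashlib.sha256((user_lower + password).encode()).digest()
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), user_lower.encode(), iterations)
```

Note that `hashlib.pbkdf2_hmac` takes the password before the salt, the reverse of the argument order in `pdkdf2_hash`.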

Deriving Your Login Hash

So, if the encryption key is generated that way, how do they authenticate users? Using the same hash would clearly be problematic, as LastPass would then hold the encryption key. Likewise, sending anything derived with fewer rounds would let someone apply the remaining rounds and recover the encryption key, so something else is needed. Let’s take a look (conveniently also in kdf.c):

void kdf_login_key(const char *username, const char *password, int iterations, char hex[KDF_HEX_LEN])
{
  unsigned char hash[KDF_HASH_LEN];
  size_t password_len;
  _cleanup_free_ char *user_lower = xstrlower(username);

  password_len = strlen(password);

  if (iterations < 1)
    iterations = 1;

  if (iterations == 1) {
    sha256_hash(user_lower, strlen(user_lower), password, password_len, hash);
    bytes_to_hex(hash, &hex, KDF_HASH_LEN);
    sha256_hash(hex, KDF_HEX_LEN - 1, password, password_len, hash);
  } else {
    pdkdf2_hash(user_lower, strlen(user_lower), password, password_len, iterations, hash);
    pdkdf2_hash(password, password_len, (char *)hash, KDF_HASH_LEN, 1, hash);
  }

  bytes_to_hex(hash, &hex, KDF_HASH_LEN);
  mlock(hex, KDF_HEX_LEN);
}

A little bit longer than the encryption key derivation, but pretty straightforward nonetheless. Assuming you have more than one iteration (as any new user would), you generate the same hash as the encryption key, then run one more PBKDF2 round over that result with the password as the salt. This is essentially equivalent to an HMAC-SHA256 over the password, keyed by the encryption key, which means recovering the encryption key from the login hash is about as hard as finding a first preimage on SHA-256. Seems unlikely.
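That equivalence is easy to check: one iteration of PBKDF2 with a 32-byte output is exactly one HMAC-SHA256 call over the salt with a 4-byte big-endian block counter of 1 appended. A quick sanity check in Python (the key and password values here are arbitrary stand-ins):

```python
import hashlib
import hmac

key = b"\x42" * 32            # stand-in for the derived encryption key
password = b"correct horse"   # stand-in master password

# PBKDF2(password=key, salt=password, 1 iteration, 32-byte output)...
login = hashlib.pbkdf2_hmac("sha256", key, password, 1)

# ...is exactly HMAC-SHA256(key, password || INT(1))
assert login == hmac.new(key, password + b"\x00\x00\x00\x01",
                         hashlib.sha256).digest()
```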

There’s still a special case for one iteration, though. In that case, you get (essentially) sha256(sha256(username + password) + password). That’s still computationally infeasible to invert directly, but an attacker with the hash and associated username can trivially run a dictionary attack to discover the original password (and hence, the encryption key). It’s a good thing they’ve moved on to PBKDF2. :)
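To see just how cheap that attack is, here’s a toy dictionary attack against the legacy scheme. This is a sketch with my own names; as in the real code, the inner hash is hex-encoded before the outer hash:

```python
import hashlib

def legacy_login_hash(username, password):
    """Legacy 1-iteration login hash: sha256(hex(sha256(user + pass)) + pass)."""
    inner = hashlib.sha256((username.lower() + password).encode()).hexdigest()
    return hashlib.sha256((inner + password).encode()).hexdigest()

def crack(username, target, wordlist):
    # Two SHA-256 calls per guess: an attacker can test millions per second.
    for guess in wordlist:
        if legacy_login_hash(username, guess) == target:
            return guess
    return None
```

With no per-user salt beyond the (public) username and no work factor, precomputed wordlists make short work of any common password.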

How do they encrypt?

So, how do they handle encryption and decryption? Well, it turns out that’s interesting too. Looking at cipher.c, there’s a lot of code for RSA crypto, but that’s only used if you’re sharing passwords with another user. Where it gets interesting is their decryption method:

char *cipher_aes_decrypt(const unsigned char *ciphertext, size_t len, const unsigned char key[KDF_HASH_LEN])
{
  EVP_CIPHER_CTX ctx;
  char *plaintext;
  int out_len;

  if (!len)
    return NULL;

  EVP_CIPHER_CTX_init(&ctx);
  plaintext = xcalloc(len + AES_BLOCK_SIZE + 1, 1);
  if (len >= 33 && len % 16 == 1 && ciphertext[0] == '!') {
    if (!EVP_DecryptInit_ex(&ctx, EVP_aes_256_cbc(), NULL, key, (unsigned char *)(ciphertext + 1)))
      goto error;
    ciphertext += 17;
    len -= 17;
  } else {
    if (!EVP_DecryptInit_ex(&ctx, EVP_aes_256_ecb(), NULL, key, NULL))
      goto error;
  }
  if (!EVP_DecryptUpdate(&ctx, (unsigned char *)plaintext, &out_len, (unsigned char *)ciphertext, len))
    goto error;
  len = out_len;
  if (!EVP_DecryptFinal_ex(&ctx, (unsigned char *)(plaintext + out_len), &out_len))
    goto error;
  len += out_len;
  plaintext[len] = '\0';
  EVP_CIPHER_CTX_cleanup(&ctx);
  return plaintext;

error:
  EVP_CIPHER_CTX_cleanup(&ctx);
  secure_clear(plaintext, len + AES_BLOCK_SIZE + 1);
  free(plaintext);
  return NULL;
}

What’s the significant part here? If your eyes jumped to the strange conditional, you’ve found the same thing I did. What’s the difference in the resulting OpenSSL calls? It’s subtle: EVP_aes_256_cbc() versus EVP_aes_256_ecb(). If the ciphertext begins with the character !, the next 16 bytes are used as an IV and the mode is set to CBC; otherwise, ECB mode is used. This is interesting because it suggests that LastPass formerly used ECB mode for their encryption. If you don’t know why that’s bad, I strongly suggest the Wikipedia article on block cipher modes of operation. Hopefully this has long been addressed and the code only remains to handle a few edge cases for people who haven’t logged in to their account in a very long time. (Again, this is all speculation.)

For what it’s worth, just a few lines further down, you’ll find the function cipher_aes_encrypt that shows all the encryption operations, at least from this client, are done in CBC mode with a random IV.

If you’re wondering why the comparison looks so strange, consider this: if they just checked the first byte of the ciphertext, then roughly 1 in 256 ECB-mode ciphertexts would start with ! by chance. But ECB (and CBC) ciphertexts are always a multiple of the block length, so a legacy ECB blob has len % 16 == 0, while the new format (one marker byte, a 16-byte IV, and the CBC ciphertext) has len % 16 == 1. Checking the length rules out those false positives.
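The whole dispatch can be captured in a few lines. Here’s a sketch of the format check in Python (my names, not theirs), using the same test as cipher_aes_decrypt:

```python
AES_BLOCK_SIZE = 16

def parse_ciphertext(blob: bytes):
    """Return (mode, iv, ciphertext) for a LastPass-style AES blob."""
    if len(blob) >= 33 and len(blob) % 16 == 1 and blob[:1] == b"!":
        # new format: '!' marker, 16-byte IV, then CBC ciphertext
        return ("cbc", blob[1:17], blob[17:])
    # legacy format: bare ECB ciphertext, always a multiple of 16 bytes
    return ("ecb", None, blob)
```

Note that a legacy blob that happens to start with ! still parses as ECB, because its length is a multiple of 16 rather than one byte over.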

Transport Security

This section, in particular, is only relevant to this command line client, as the browser extensions all use the browser’s built-in communications mechanisms. http.c shows us how the LastPass client communicates with their servers. It really attempts to emulate a fairly standard client as much as possible – sending the PHPSESSID as a cookie, using HTTP POST for everything. One very interesting note is this line:

curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, pin_certificate);

They pin the Thawte CA certificate for their communication to help reduce the risk of a man-in-the-middle attack.

Blobs, Chunks, and Fields

I’ve only had a quick look at blob.c, which contains their file-format parsing code, but I think I have a rough idea of how it goes. Your entire LastPass database is a blob, which consists of chunks; chunks come in many types, one of which is an account chunk, which contains many fields.

Interestingly, if you look at read_crypt_string, it makes it obvious that, rather than encrypting your entire LP database or encrypting each account entry, fields are individually encrypted. Looking at account_parse, you can see that a lot of fields seem to be unused by the CLI client, but it’s interesting to see all the fields supported by LastPass. One of the most interesting findings is, in fact, right here:

entry_hex(url);

It can be confirmed by using a proxy to examine the traffic, but it turns out that the URLs of the sites in your LastPass database are stored only as hex-encoded ASCII strings. No encryption whatsoever. So LastPass can easily determine all of the sites a user has accounts on. (This was genuinely surprising to me, but I triple-checked that it’s actually the case.)
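Hex encoding is not encryption, of course. A quick demonstration (the URL here is a hypothetical vault entry, not real data):

```python
url = "https://bank.example.com/login"   # hypothetical vault entry
hexed = url.encode("ascii").hex()

# Anyone holding the blob (including LastPass) trivially recovers the URL:
assert bytes.fromhex(hexed).decode("ascii") == url
```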

Future Work

I think it would be interesting to dump the entire blob in a readable format. There are some interesting things in there, like equivalencies between multiple domains. (If an attacker could append one of those, they could get credentials for a legitimate domain sent to a domain they control.) I’d also like to poke at the extensions a little more, but reversing minified JavaScript isn’t the most fun thing ever. :) (Suggestions of tools in this space would be welcome.)

One thing is important to understand: no evaluation can say for sure that LastPass can’t recover your passwords. Even if they’re doing everything right today, they could push a new version tomorrow (extensions are generally automatically updated) that records your master password. It’s inherent in the model of any browser extension-based password manager.


So, is Windows 10 Spying On You?

“Extraordinary claims require extraordinary evidence.”

A few days ago, localghost.org posted a translation of a Czech article alleging that Windows 10 “phones home” in a number of ways. I was a little surprised, and more than a little alarmed, by some of the claims. Rather than blindly repost them, I decided it would be a good idea to see what I could test for myself. Rob Seder has done similar work, but I’m taking it a step further and looking at the actual traffic contents.

Tools & Setup

I’m running the Windows 10 Insider Preview (which, admittedly, may not be the same as the release, but it’s what I had access to) in VirtualBox. The NIC on my Windows 10 VM is connected to an internal network to a Debian VM with my tools installed, which is in turn connected out to the internet. On the Debian VM, I’m using mitmproxy to perform a traffic MITM. I’ve also used VirtualBox’s network tracing to collect additional data.

Currently, I have all privacy settings set to the default, but I am not signed into a Microsoft Live account. This is an attempt to replicate the findings from the original article. At the moment, I’m only looking at HTTP/HTTPS traffic in detail, even though the original article wasn’t even specific enough to indicate what protocols were being used.

Claim 1. All text typed on the keyboard is sent to Microsoft

When typing into the search bar within the Start menu, an HTTPS request is sent after each character entered. Presumably this is to give web results along with local results, but the amount of additional metadata included is just mind-boggling. Here’s what the request for such a search looks like (some headers modified):

GET /AS/API/WindowsCortanaPane/V2/Suggestions?qry=about&cp=5&cvid=ce8c2c3ad6704645bb207c0401d709aa&ig=7fdd08f6d6474ead86e3c71404e36dd6&cc=US&setlang=en-US HTTP/1.1
Accept:                        */*
X-BM-ClientFeatures:           FontV4, OemEnabled
X-Search-SafeSearch:           Moderate
X-Device-MachineId:            {73737373-9999-4444-9999-A8A8A8A8A8A8}
X-BM-Market:                   US
X-BM-DateFormat:               M/d/yyyy
X-Device-OSSKU:                48
X-Device-NetworkType:          ethernet
X-BM-DTZ:                      -420
X-BM-UserDisplayName:          Tester
X-DeviceID:                    0100D33317836214
X-BM-DeviceScale:              100
X-Device-Manufacturer:         innotek GmbH
X-BM-Theme:                    ffffff;005a9e
X-BM-DeviceDimensionsLogical:  320x622
X-BM-DeviceDimensions:         320x622
X-Device-Product:              VirtualBox
X-BM-CBT:                      1439740000
X-Device-isOptin:              false
X-Device-Touch:                false
X-AIS-AuthToken:               AISToken ApplicationId=25555555-ffff-4444-cccc-a7a7a7a7a7a7&ExpiresOn=1440301800&HMACSHA256=CS
                               y7XaNyyCE8oAZPeN%2b6IJ4ZrpqDDRZUIJyKvrIKnTA%3d
X-Device-ClientSession:        95290000000000000000000000000000
X-Search-AppId:                Microsoft.Windows.Cortana_cw5n1h2txyewy!CortanaUI
X-MSEdge-ExternalExpType:      JointCoord
X-MSEdge-ExternalExp:          sup001,pleasenosrm40ct,d-thshld42,d-thshld77,d-thshld78
Referer:                       https://www.bing.com/
Accept-Language:               en-US
Accept-Encoding:               gzip, deflate
User-Agent:                    Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0; Cortana 1.4.8.152;
                               10.0.0.0.10240.21) like Gecko
Host:                          www.bing.com
Connection:                    Keep-Alive
Cookie:                        SA_SUPERFRESH_SUPPRESS=SUPPRESS=0&LAST=1439745358300; SRCHD=AF=NOFORM; ...

In addition to my query, “about”, it sends a “DeviceID”, a “MachineId”, the username I’m logged in as, the platform (VirtualBox), and a number of opaque identifiers in the query, the X-AIS-AuthToken, and the Cookies. That’s a lot of information just to give you search results.

Claim 2. Telemetry including file metadata is sent to Microsoft

I searched for several movie titles, including “Mission Impossible”, “Hackers”, and “Inside Out.” Other than the Cortana suggestions above, I didn’t see any traffic pertaining to these searches. Certainly, I didn’t see any evidence of uploading a list of multimedia files from my Windows 10 system, as described in the original post.

I also searched for a phone number in the Edge browser, as described in the original post. (Specifically, I searched for 867-5309.) The only related traffic I saw was traffic to the site on which I performed the search (yellowpages.com). No traffic containing that phone number went to any Microsoft-run server, as far as I can tell.

Claim 3. When a webcam is connected, 35MB of data gets sent

Nope. Not even close. I shut down the VM, started a new packet capture, restarted, and attached a webcam via USB forwarding in VirtualBox. After the drivers were fully installed, I shut down the system. The total size of the pcap was under 800 kB, a far cry from the claimed 35 MB. Looking at mitmproxy and the pcap, the largest single connection was ~82 kB. I have no idea what traffic the author saw, but I saw no alarming connection related to plugging in a webcam. My best guess is that it was actually 35 MB of download, and his webcam required a driver download. (Admittedly a large driver, but I’ve seen bigger.)

Traffic from Connecting a Webcam

Claim 4. Everything said into a microphone is sent

Even when attempting to use the speech recognition in Windows, I saw nothing that was large enough to be audio spoken being transferred. Additionally, no intercepted HTTP or HTTPS traffic contained the raw words that I spoke to the voice recognition service. Maybe if signed in to Windows Live, Cortana performs uploads, but without being signed in, I saw nothing representative of the words I used with speech recognition.

Claim 5. Large volumes of data are uploaded when Windows is left unattended

I left Windows running for over an hour while I went and had lunch. There were a small number of HTTP(S) requests, but they all seemed to be related to either updating the weather information displayed in the tiles or checking for new Windows updates. I don’t know what the OP considers “large volumes”, but I’m not seeing it.

Conclusion

The original post made some extraordinary claims, and I’m not seeing anything to the degree they claimed. To be sure, Windows 10 shares more data with Microsoft than I’d be comfortable with, particularly if Cortana is enabled, but it doesn’t seem to be anything like the levels described in the article. I wish the original poster had posted more about the type of traffic he was seeing, the specific requests, or even his methodology for testing.

The only dubious behavior I observed was sending every keystroke in the Windows Start Menu to the servers, but I understand that combined Computer/Web search is being sold as a feature, and this is necessary for that feature. I don’t know why all the metadata is needed, and it’s possibly excessive, but this isn’t the keylogger the original post claimed.

Unfortunately, it’s impossible to disprove his claims, but if it’s as bad as suggested, reproducing it should’ve been possible, and I’ve been unable to reproduce it. I encourage others to try it as well – if enough of us do it, it should be possible to either confirm or strongly refute the original claims.


Blue Team Player's Guide for Pros vs Joes CTF

I’ve played in Dichotomy’s Pros v Joes CTF for the past 3 years – which, I’m told, makes me the only player to have done so. It’s an incredible CTF and dramatically different from any other that I’ve ever played. Dichotomy and I were having lunch at DEF CON when he said “You know what would be cool? A blue team player’s guide.” So, I give to you, the blue team player’s guide to the Pros v Joes CTF.

Basics

First, a quick review of the basics: PvJ is a 2-day long CTF. On day one, blue teams operate in a strictly defensive role. Red cell will be coming after you relentlessly, and it’s your job to keep them out. On day two, things take a turn for the red, with blue teams both responsible for defending their network, but also given the opportunity to attack the networks of the other blue teams. Offense is fun, but do keep in mind that you need some defense on day two. Your network will have been reset, so you’ll need to re-harden all your systems!

Scoring is based on several factors. As of 2015, the first day score was based on flags (gain points for finding your own “integrity” flags, lose points for having flags stolen), service uptime (lose points for downtime), tickets (lose points for failing to complete required tasks in the environment), and beacons (lose points for red cell leaving “beacons” that phone home on your system, indicating ongoing compromise). Day two scoring is similar, but now you can earn points by stealing flags from other teams and placing your own beacons on their systems to phone home.

Make sure you read the rules when you play – each year they’re a little different, and things that have been done before may not fit within the rules – or may not be to your advantage anymore!

The Environment

Before I start talking strategy, let’s talk a little bit about the environment and what to expect. Of course, Dichotomy may have new tricks up his sleeve at any time, so you have to assume almost everything below will change.

Connecting to the environment requires connecting to an OpenVPN network that provides a route to your environment as well as all of the other teams – think of it as a self-contained mini-internet. Within this environment is a large vCenter deployment, containing all of the blue team environments. You’ll get access to the vCenter server, but only to your hosts of course.

Each team will have a /24 network to protect, containing a number of hosts. In 2015, there were a dozen or so hosts per network. All blue team networks contain the same systems, providing the same services, but they will have different flags and credentials. In front of your /24 is a Cisco ASA firewall. Yes, you get access to configure it, so it’s good to have someone on your team who has seen one before. (My team found this out the hard way this year.)

While the exact hosts are likely to change each year, I feel pretty confident that many of the core systems are unlikely to be dramatically different. Some of the systems that have consistently been present include:

  • A Windows Server 2008 acting as a Domain Controller
  • Several Linux servers in multiple flavors: Ubuntu, CentOS, SuSE
  • A number of Windows XP machines

Responsibilities as a Blue Team Member

This is intended as a learning experience, so nobody expects you to show up knowing everything about attack and defense on every system. That’s just not realistic. But you should show up prepared to play, and there’s several things involved in being prepared:

  1. Make sure you introduce yourself to your teammates via the team mailing list. It’s good to know who’s who, what their skill sets are, and what they’re hoping to get out of the CTF. (And yes, “have a good time” is a perfectly acceptable objective.)
  2. Have your machine set up in advance. Obviously, you’re going to need a laptop to play in this CTF. It doesn’t need to be the fastest or newest machine, but you should have a minimum toolkit:
    • OpenVPN to connect to the environment, configured and tested.
    • VMware vSphere Client to connect to the vCenter host. (Windows only, so this might also call for a VM if your host OS is not Windows.)
    • An RDP Client of some sort is useful for the Windows machines in the environment.
    • Tools to map out your network, such as nmap, OpenVAS, or similar.
    • Attack tools for the 2nd day: Metasploit is always popular. If you’re not familiar with Metasploit, consider also installing Armitage, a popular GUI front end. I usually run Kali Linux on the bare metal and have a Windows VM just for the vSphere client. Make sure you run OpenVPN on your host OS so that traffic from both the host and guest gets routed to the environment properly.
  3. Learn a few basics in advance. At a minimum, know how to connect both to Windows and Linux systems. Never used ssh before? Learn how in advance. There’s a reading list at the bottom of this article with resources that will help you familiarize yourself with many of the aspects involved before the day of the event.
  4. You don’t have to sit there at the table the entire day both days, but you should plan for the majority of the time. If you want to go see a talk, that works, but let somebody know and make sure you’re not walking off with the only copy of a password.

Strategy

I could probably find a Sun Tzu quote that fits here (that is the security industry tradition, after all), but really, those are getting a bit overused. What is important is realizing that you’re part of a team, and that you’ll succeed or fail as one. Whether you fail as a team or as a bunch of uncoordinated individuals depends on a lot of things, but if you’re not working together, you can be certain of failure.

With so many systems, there’s a lot of work to be done. With red cell on the offensive, there’s a lot of work to be done quickly. You need to make sure that “quickly” does not turn into “chaotically.” I suggest first identifying all of the basic hardening steps to be taken, then splitting the work among the team, making sure each task is “owned” by a team member. Some of those tasks might include:

  1. Configure the ASA firewall to only allow in SSH, RDP, and the services you need for scoring purposes. (Note that you’re not allowed to add IP filtering against red cell, nor block beacons to the scoring server.)
  2. Change the passwords on each system (divide this up so each individual only has 1-2 systems to handle) and document them. (A Google spreadsheet works well here.)
  3. Start an nmap scan of your entire network as a baseline. (Doing this from one of the hosts within the environment will be much faster than doing it from your laptop over the VPN.)
  4. Start disabling unnecessary services. (Again, responsibility for 1-2 systems per team member.)

Remember: document what you do, especially the passwords!

For the second day, I recommend a roughly 80/20 split of your team, switching after the basic hardening is done. That is, for the first hour or so, 80% of your team should be hardening systems while 20% looks for the low-hanging fruit on other teams’ networks. This is a good opportunity to get an early foothold before they have a chance to harden. After the initial hardening (set up the ASA, change passwords, etc.), you can devote more resources to offense, but you still need some people looking after the home front.

Good communication is key throughout, but watch out how you handle it: you never know who’s listening. One year we had an IRC channel where everyone connected (over SSL of course!) for coordination so we would leak less information.

Pro Tips

Some tips just didn’t fit well into other parts of the guide, so I’ve compiled them here.

  • Changing all the passwords to one “standard” may be attractive, but it’ll only take one keylogger from red cell to make you regret that decision.
  • Consider disabling password-based auth on the Linux machines entirely, and use SSH keys instead.
  • The scoring bot uses usernames and passwords to log in to some services. Changing those passwords may have an adverse effect on your scoring. Find other ways to lock down those accounts.
  • Rotate roles, giving everyone a chance to go on both offense and defense.

Hardening

Offensive Security

Conclusion

The most important thing to remember is that you’re there to learn and have fun. It doesn’t matter if you win or lose, so long as you got something out of it. Three years straight, I’ve walked away from the table feeling like I got something out of it. I’ve met great people, had great fun, and learned a few things along the way. GL;HF!


Hacker Summer Camp 2015: DEF CON

So, following up on my post on BSides LV 2015, I thought I’d give a summary of DEF CON 23. I can’t cover everything I did (after all, what happens in Vegas, stays in Vegas… mostly) but I’m going to cover the biggest highlights as I saw them.

The first thing to know about my take on DEF CON is that DEF CON is a one-of-a-kind event, somewhere between a security conference and a trip to Mecca. It’s one part conference, one part party, and one part social experience. The second thing to know about my take on DEF CON is that I’m not there to listen to people speak. If I was just there to listen to people speak, there’s the videos posted to YouTube or available on streaming/DVD from the conference recordings. I’m at DEF CON to participate, meet people, and hack all the things.

I generally try not to spend my entire DEF CON doing one thing, though I’d probably make an exception if I ever got to play in DEF CON CTF. (Anyone need a team member? :)) It’s a limited time (on paper, 4 days, but really, it’s about 2.5 days) and I want to get in all that I can. This year, I managed to fit in a handful of major activities:

  1. A Workshop on Auditing Mobile Applications
  2. Playing OpenCTF
  3. Playing Capture the Packet

I also attended a few talks in the various villages, including the wireless village and the tamper evident village, worked at an event for my employer, and got lots of opportunities for finding out what others are working on and the fun & interesting projects people are doing. (And perhaps I attended a party or two…)

Auditing Mobile Applications

The auditing mobile applications workshop (taught by Sam Bowne, an instructor at CCSF) was interesting, but didn’t have as much depth as I’d hoped. Firstly, it was strictly confined to Android and didn’t cover iOS at all (though some of the techniques would still apply). Secondly, a lot of the attendees were very inexperienced with what I would consider basic tooling. I’m all for classes at every level, but having the same class for every level means teaching to the lowest common denominator. I do have to give him a lot of credit for being well prepared: he had his own wireless router, network switches, and even several pre-configured laptops for student use.

The course attempted to cover several aspects of the OWASP Mobile Top 10, but mostly focused on applications that failed to use SSL, failed to validate SSL certificates, or failed to implement anti-reversing protections (more on that later). The first two were basically tested by installing an emulator, installing Burp Suite, and pointing the emulator at Burp. If traffic was seen as plaintext, well, obviously no SSL was in use. If traffic was seen as HTTPS traffic, obviously certificates weren’t being verified properly, as Burp was presenting its own self-signed certificate for the domain. In this case, a failing connection was treated as a “success” for validation, though not all of the edge cases were tested. (We didn’t check for a valid chain with the wrong domain, or a valid chain and domain with an expired or not-yet-valid certificate, etc. There’s a lot that goes into SSL validation, but hopefully the apps are using the system libraries properly.)

Then we got to recovering secrets from APKs. Sam demonstrated using apktool, which extracts and decompiles APKs to smali (a text-based representation of Dalvik bytecode), to look for things like hard-coded keys, passwords, and other relevant information. Using apktool is definitely an interesting approach, and shows how trivial it is to extract information from within the APK.

Next Sam started talking about what he called “code injection.” He pointed out that you could take the smali, modify it (such as to log the username & password entered into the app) and then recompile & resign the app, then upload the new version to the play store. He claimed that this is a security risk, but I quite frankly don’t agree with him there. It’s always been possible for an attacker to provide malicious binaries, including those that look like legitimate software. Though decompiling, modifying, and recompiling may reduce the bar for the attacker, it’s still the same attack. I’m not convinced this is a new threat or really can be mitigated (he suggests code obfuscators), but I think there’s room for disagreement on this one.

If you’re interested in the details, you can find Sam’s course materials online.

OpenCTF

OpenCTF is the little brother of the DEF CON CTF, with anyone able to play. There are a wide range of challenges for every skill set, and even though we didn’t spend a whole lot of time playing, we managed a 10th-place finish as Team Shadow Cats. One of my favorites was aaronp’s “Pillars of CTF”, which took you on a tour through the areas commonly seen in CTF: reverse engineering, network forensics, and exploitation. Each area was relatively simple, but it had great variety and just enough challenge to make it interesting. Other challenges included web challenges with a bot visiting the website (so client-side exploitation was possible), more difficult PCAP forensics, and plenty of reversing. I haven’t played much CTF lately (before the week in Vegas), but it was really good to get back into it, especially since this was more of a Jeopardy-style CTF than the attack/defense format of Pros vs Joes.

Capture the Packet

I don’t do any network forensics at work, but I find it a fascinating area of information security. Every year (save one) I’ve been to DEF CON, I’ve played in the “Capture the Packet” challenge. Basically, you get a network port on a 100Mb network, and a set of ~15 questions, and you need to find the answers to the questions in the stream of network traffic. Questions range from “What is the password for ftp user x?” to “There was a SIP call between stations 1001 and 1002. What was the secret word?” It’s non-trivial, and will stretch anyone but the most seasoned incident responder, but it’s a great opportunity to exercise your Wireshark skills.

Pro tip: I like to separate capture from analysis. If you live capture in Wireshark, it gets slow, and when the number of packets is too large, filtering becomes extremely laggy. Instead, I’ll start tcpdump in a shell session, then analyze the pcaps with Wireshark or other tools. (I’m contemplating writing a collection of domain-specific tools just for fun, and to help in future CTPs.) If you’re not a tcpdump expert, the command line I use is something like this:

tcpdump -i eth0 -s 0 -w packets -C 100

What this does is capture on interface eth0 (-i eth0) with no limit on captured packet size (-s 0), writing to files named packets, packets1, packets2, and so on (-w packets), rotating to a new file roughly every 100 MB (-C counts the file size in millions of bytes). This keeps each capture file a manageable size while limiting the risk of splitting a connection across multiple pcap files.
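Since I mentioned writing domain-specific tools, here’s a minimal sketch of the kind of thing I mean: a packet counter that walks a capture file without loading it into Wireshark. The `count_packets` name is mine, and it assumes the classic libpcap file format (24-byte global header, 16-byte per-packet headers); it does not handle pcapng.

```python
import struct

def count_packets(path):
    """Count packets in a classic libpcap capture file (not pcapng).

    The format is a 24-byte global header followed by, for each packet,
    a 16-byte record header whose third field (incl_len) gives the number
    of captured data bytes that follow.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"\xd4\xc3\xb2\xa1":    # file written little-endian
            endian = "<"
        elif magic == b"\xa1\xb2\xc3\xd4":  # file written big-endian
            endian = ">"
        else:
            raise ValueError("not a classic pcap file")
        f.read(20)  # skip the rest of the 24-byte global header
        count = 0
        while True:
            hdr = f.read(16)
            if len(hdr) < 16:
                break
            _ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack(
                endian + "IIII", hdr)
            f.seek(incl_len, 1)  # skip past the packet data
            count += 1
        return count
```

Nothing here that capinfos doesn’t already do, of course, but it’s the skeleton you’d extend for more targeted questions (e.g., tallying packets per port) once you have a directory full of rotated capture files.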

If you haven’t tried CTP before, I really recommend it. The qualifying round only takes about an hour, and some years (I’m not sure if this was one of them) it’s a black badge challenge, though you need to both qualify and win the finals.

Thoughts on the new venue

So, this was the first year at Bally’s/Paris combined, and it wasn’t without growing pains. The first couple of days, traffic was crazy, especially near the Track 1-4 areas in Paris, but the Goons quickly adapted and got the traffic flowing again (making one-way lanes really helped).

The best part was being on the strip. It meant it was easy to get to restaurants outside the venue without cabbing, and it was a much more attractive hotel than the Rio. (Yay no more awkward bathroom windows!)

The worst part, however, is that the conference area at Bally’s is all the way in the back, so you have to walk a non-trivial distance between the two areas. I definitely got my 10,000 steps a day in every day that week. There’s nothing DEF CON can do about this at the current venue, and I can use the exercise.

Conclusion

I had a great time, but felt like I didn’t do as much as I had hoped. I’m not sure if this was unrealistic expectations, or just a perception of doing less than I actually did. I definitely learned new things, met new people, and generally consider it a successful year, but I’m always a little wistful when it ends, and this year was no exception. I keep telling myself I’ll do something interesting enough to end up on the stage one year – maybe 24’s my lucky number.


Hacker Summer Camp 2015: BSides LV & Pros vs Joes CTF

I’ve just returned from Las Vegas for the annual “hacker summer camp”, and am going to be putting up a series of blog posts covering the week. Tuesday and Wednesday were BSides Las Vegas. For the uninitiated, BSides was founded as the “flip side” to Black Hat, and has since spawned a series of community-organized, community-oriented conferences around the globe. Entrance to BSides LV was free, but you could guarantee a spot by donating in advance if you were so inclined. (I was.)

As regular readers know, I play a little bit of CTF (Capture the Flag), and BSides LV is home to one of the most unique CTF competitions I’ve ever played in: the “Pros vs Joes” CTF run by dichotomy. This CTF pits multiple defending teams (Blue Cells), each made up of Joes plus one Pro captain, against a “Red Cell” of professional penetration testers. Even though I work in security, my focus is on application security rather than network security (which PvJ emphasizes heavily), so I’ve never felt comfortable playing as a pro. Consequently, this was my 3rd year as a “Joe” in the PvJ CTF. (Maybe one day I’ll make it through my impostor syndrome.)

If you’re not familiar with this CTF, here’s the rundown: on the first day, each blue cell defends its network against attack by the red cell; for the blue cells, the first day is entirely defense. On the second day, the red cell is “dissolved” into the blue cells: each blue cell gains two new pros, and it becomes blue cell versus blue cell, so every team has to take on the role of both attacker and defender. It’s great fun and requires a ton of work to set up, so my hat goes off to dichotomy for organizing and running it all.

Though I’ve played in the past, this year brought new challenges and experiences. First, there was significant integration between the CTF and the Social Engineering CTF (apparently even more than I realized while playing). This brought interesting components, such as people trying to social-engineer information out of us or get us to help them with tasks for the SECTF, but it also brought challenges, the most significant of which was people who joined the CTF solely to gather information for the SECTF. It’s my hope that next year, team captains will be allowed to “fire” team members who are obviously attempting to leak information.

There were other changes as well. In the past there were two blue cells and one red cell; this year dichotomy managed to field a whopping four blue cells! That brought us to a total of 44 players, which was no small feat for either dichotomy or the conference organizers. There was also a greater emphasis on the use of “tickets” in the scoring system. Now, I’m not a fan of the tickets myself: most of them consist of boring inventory or asset-management work, and it’s often not clear exactly what response is expected. Hopefully this is the sort of thing that will be tweaked by next year. (As it was, the scoring tweaks dichotomy made to the ticket system between the first and second days were a welcome improvement.)

By far, however, the most interesting change to me was obvious even upon walking in the room. As soon as I saw the tables, I noticed something new: SIP phones on each table. Yes, they were connected. To a PBX. That was in scope. And had default credentials. That was definitely new – and not something most of the teams picked up on.

All things considered, I feel that my team (“Endtroducing”) did pretty well with a 2nd place finish. The first place team just seemed unbeatable. I don’t know if it was a different ratio of attack/defense, luck, different skills or what, but it worked out well for them.

In the next week or so, I’ll be writing a blue team player’s guide based on my 3 years of experience. It’ll expose some things about the play environment that have been constant for the 3 years, some tactics I’ve used, and just some general guidance on preparing for the Pros vs Joes CTF.