Private CA with X.509 Name Constraints

I wanted to run a small private Certificate Authority for some of my internal services. Since these aren’t reachable from the internet, and some of them are on network segments without internet connectivity, using a public ACME CA like Let’s Encrypt was inconvenient. On the other hand, if I run my own private CA and the keys get compromised, it could be used to MITM all my internet traffic. While that’s unlikely to happen, I decided to look for a better option.

It turns out that the idea of a “limited purpose” Certificate Authority is not new. RFC 5280 provides for something called “Name Constraints”, which allows an X.509 CA to be limited in scope to certain names, including the parent domains of the certificates it issues. For example, a CA constrained to a particular parent domain can issue certificates for any host under that domain, but not for any other host. For hosts outside the constraint, clients will fail to validate the chain.

This hasn’t always been supported by TLS libraries and browsers, but all current browsers do support Name Constraints. Consequently, this approach narrows the risk of a CA compromise: a compromised constrained CA can’t be used to forge valid certificates for hosts other than those covered by the constraints in the CA certificate.
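As a sketch of what this looks like in practice, a constrained CA can be created with OpenSSL by adding a nameConstraints extension to the extension section used when signing the CA certificate. The section name and domain below are hypothetical placeholders, and the extension is marked critical so that clients which don’t understand it reject the chain rather than silently ignoring the constraint:

```ini
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
# Only permit leaf certificates under this (hypothetical) internal domain.
nameConstraints = critical, permitted;DNS:.internal.example.com
```

You would then reference this section when generating the self-signed CA certificate, e.g. with `openssl req -x509 -extensions v3_constrained_ca ...` and a config file containing the section above.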

Book Review: Operator Handbook

When Netmux first released the Operator Handbook, I had to check it out. I had some initial impressions, but wanted to take some time to refine my thoughts on it before putting together a full review of the book. The book review will be a bit short, but that’s because this is a rather straightforward book.

Operator Handbook

I think the first thing to know is that this book is strictly a reference. It’s not something you read cover to cover to learn in a cohesive way. It would be like reading a dictionary or a thesaurus – while you might learn things reading it, you won’t learn them in any meaningful order. There’s a lot you can learn on a particular very narrow topic, but it’s organized for use “in the moment”, not as a “learning in advance” kind of book.

The second thing to know is that unless you’re regularly in environments that don’t allow you to bring electronics in (e.g., heavily restricted customer sites), you really want this book in electronic format for quick searching and copy/paste. In fact, the tagline on the cover is “SEARCH.COPY.PASTE.L33T:)”. This is obviously a lot easier from the digital version. (Though I have to admit, I love the cover of the physical book – it’s got a robust feel and a cool “find it quick” yellow color.)

I rather suspect this book is inspired by books like the Red Team Field Manual, the Blue Team Field Manual, and Netmux’s own Hash Crack: Password Cracking Manual. When you crack it open, you’ll immediately see the similarities – very task focused, intended to get something done quickly, rather than a focus on the underlying theory or background.

I’ve actually referred to the book a couple of times while doing operations. Some of the things in it would be easily obtained elsewhere (e.g., a quick Google search for “nmap cheatsheet” gets you much the same information), but many other things would require distillation of the information into a more consumable format, and Netmux has already done that.

Many of the items in the book are also transformed into a security mindset – e.g., interacting with cloud platforms like AWS or GCP. Rather than trying to provide the information necessary to operate those platforms, the book focuses on the aspects relevant to security practitioners. The book also contains links to additional references, which is yet another reason you want to have this in a digital format. Some kind of URL shortener links would have been a nice touch for the print version.

One thing that I really want to applaud in this book is that there is a reference for mental health in the book. Whether or not the information security industry has a particular predisposition for mental health issues, I absolutely love the normalization of discussing mental health issues.

While there is content for both Red and Blue teamers, like so many resources, it seems to tend to the Red. Maybe it’s only my perception as a Red Teamer, maybe some of the contents I perceive as “Red” are also useful to Blue teamers. I’d love to hear from someone on the Blue side as to how they find the book contents for their role – any takers?

Overall, I think this is a useful book. A lot of effort clearly went into curating the content and covering the wide variety of topics included in its 123 references. There’s probably nothing ground-breaking in it, but it’s presented so well that it’s totally worth having.

Everyone in InfoSec Should Know How to Program

Okay, I’m not going to lie, the title was a bit of clickbait. I don’t believe that everyone in InfoSec really needs to know how to program, just almost everyone. Now, before my fellow practitioners jump on me, saying they can do their job just fine without programming, I’d appreciate you hearing me out.

So, how’d I get on this? Well, a thread on a private Slack discussing whether Red Team operators should know how to program, followed by people on Reddit asking if they should know how to program. I thought I’d share my views in a concrete (and longer) format here.

Computers are Useless without Programs

I realize that it sounds axiomatic, but computers don’t do anything without programs. Programs are what give a computer the ability to, well, be useful. So I think we can all agree that information security, as an industry, is built entirely around software.

I submit that knowing how to program makes most roles more effective merely by having a better understanding of how software works. Understanding I/O, network connectivity, etc., at the application layer will help professionals do a better job of understanding how software affects their role.

That being said, this is probably not reason enough to learn to program.

Learning to Program Opens Doors

I suppose this point can be summarized as “more skills make you more employable”, which is (again) fairly obvious, but it’s worth considering. There are roles and organizations that will expect you to be able to program as part of the core expectations.

For example, if you currently work in a SOC, and you want to work on building or refining the tools used in the SOC, you’ll need to program.

Alternatively, if you want to move laterally to certain roles, those roles will require programming – application security, tool development, etc.

You Will Be More Efficient

There are so many times when I could have done something manually, but ended up writing a program of some sort to do it instead. Maybe you have a range of IPs and need to check which of them are running a particular webserver, or you want to combine several CSVs based on one or two fields in them. Maybe you just want to automate some daily task.
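For instance, the “which IPs run a particular webserver” check can be a short script using nothing but the standard library. This is a minimal sketch; the function names and the target range in the usage example are made up:

```python
import concurrent.futures
import http.client


def server_header(host, port=80, timeout=2.0):
    """HEAD the root path and return the Server header, or None if unreachable."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/")
        header = conn.getresponse().getheader("Server")
        conn.close()
        return header
    except OSError:
        return None


def scan(hosts, port=80):
    """Check many hosts concurrently; return {host: Server header} for responders."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda h: (h, server_header(h, port)), hosts)
    return {host: hdr for host, hdr in results if hdr is not None}
```

Something like `scan([f"10.0.0.{i}" for i in range(1, 255)])` (a hypothetical internal range) then gives you a quick inventory of responding webservers and their banners.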

As a Red Teamer, I often write scripts to accomplish a variety of tasks:

  • Check a bunch of servers for a Vulnerability/Misconfiguration
  • Proof of Concept to Exploit a Vulnerability
  • Analyze large sets of data
  • Write custom implants (“Remote Access Toolkits”)
  • Modify tools to limit scope

On the blue side, I know people who write programs to:

  • Analyze log files when Splunk, etc. just won’t do
  • Analyze large PCAPs
  • Convert configurations between formats
  • Provide web interfaces to tools that lack them

How much do you need to know?

Well, technically none, depending on your role. But if you’ve read this far, I hope you’re convinced of the benefits. I’m not suggesting everyone needs to be a full-on software engineer or be coding every day, but knowing something about programming is useful.

I suggest learning a language like Python or Ruby, since both have REPLs (“read-eval-print loops”). These provide an interactive prompt where you can run statements and see the results immediately. Python seems to be more commonly used for InfoSec tooling, but both are good options for getting things done.
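As an illustration of the kind of one-off task a REPL makes painless, here’s decoding a base64 blob spotted while triaging a log; the blob value is made up for the example:

```python
import base64

# A (made-up) base64 blob found in a log file.
blob = "c2VjcmV0LXRva2Vu"

decoded = base64.b64decode(blob).decode()
print(decoded)  # secret-token
```

At an interactive prompt this is two lines typed and answered immediately, with no need to save or run a script.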

I would focus on file and network operations, and not so much on complicated algorithms or data structures. While those can be useful, standard libraries tend to have common algorithms (searching, sorting, etc.) well-covered. Having a sensible data structure makes code more readable, but there’s not often a need for “low level” structures in a high level language.
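As an example of that file-oriented focus, the CSV-combining task mentioned earlier needs only the standard csv module. The field names and sample data below are hypothetical:

```python
import csv
import io


def merge_on_field(rows_a, rows_b, key):
    """Inner-join two lists of dict rows on a shared key field."""
    index = {row[key]: row for row in rows_b}
    return [{**a, **index[a[key]]} for a in rows_a if a[key] in index]


# Sample data standing in for two CSV files on disk.
hosts_csv = "ip,hostname\n10.0.0.5,web01\n10.0.0.9,db01\n"
scans_csv = "ip,open_ports\n10.0.0.5,80 443\n10.0.0.7,22\n"

hosts = list(csv.DictReader(io.StringIO(hosts_csv)))
scans = list(csv.DictReader(io.StringIO(scans_csv)))

merged = merge_on_field(hosts, scans, "ip")
print(merged)  # [{'ip': '10.0.0.5', 'hostname': 'web01', 'open_ports': '80 443'}]
```

For real files you’d swap the StringIO objects for `open(...)` handles; the join logic stays the same.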

Have I Convinced You?

Hopefully I’ve convinced you. If you want to learn programming with a security-specific slant, I can highly recommend several books from No Starch Press.

Announcing TIMEP: Test Interface for Multiple Embedded Protocols

Today I’m releasing a new open source hardware (OSHW) project – the Test Interface for Multiple Embedded Protocols (TIMEP). It’s based around the FTDI FT2232H chip and logic level shifters to provide breakouts, buffering, and level conversion for a number of common embedded hardware interfaces. At present, this includes:

  • SPI
  • I2C
  • JTAG
  • SWD
  • UART


This is a revision 4 board, made using OSHPark’s “After Dark” service – black substrate, clear solder mask, so you can see every trace on the board. (Strangely, copper looks very matte under the solder mask, resulting in more of a tan color than the shiny copper one might expect to see.)

It’s intended to be easy to use and work with open source software, including tools like OpenOCD and Flashrom.

Edit: I rushed to get this post out late at night, but I should’ve acknowledged that this project was inspired while I was taking a hardware security class with Joe Fitzpatrick. He also provided a review of an early revision of the board. If you have no idea what SPI, I2C, JTAG, and SWD are, I can’t recommend his classes enough to get started in hardware hacking. (Even if you do know what those are, his classes are a lot of fun.)

See the project on GitHub, and I hope to have some boards available for sale on Tindie in the near future.

Security 101: Two Factor Authentication (2FA)

In this part of my “Security 101” series, I want to talk about different mechanisms for two factor authentication (2FA) as well as why we need it in the first place. Most of my considerations will be for the web and web applications, and I’m explicitly ignoring local login (e.g., device unlock) because the threat model is so different.