A friend recently asked me if I could recommend some reading about hacking and security culture. I gave a couple of quick answers, but it inspired me to write a blog post in case anyone else is looking for similar content. Unless otherwise noted, I’ve read all of these books/resources and can recommend them.
Unless you’ve been living under a rock, you know that the Crypto Wars are back. Politicians, seemingly led by Senator Lindsey Graham of South Carolina, are bound and determined to undermine users’ privacy and security online to strengthen the power of the police state. This will have disproportionate effects on the innocent rather than on criminals, and it will raise operating costs and make it much harder for small businesses and startups to compete in the US.
- Much like guns and nuclear weapons, the cryptography genie is already out of the bottle. Inserting backdoors or limiting access to encryption will affect law-abiding citizens, but criminals will be able to continue to use encryption software that already exists. In fact, the Al Qaeda terrorist organization already develops its own encryption software. It’s not like they’ll comply with US laws. While we might succeed in reducing their access to some types of encryption (e.g., encrypted phones), we won’t be able to completely eliminate it for motivated criminal enterprises or terror cells.
- There are a lot of legitimate reasons to want to use end-to-end encryption or full device encryption. Do companies want their sensitive data accessible to competitors? Do individuals want their data available to someone who finds their phone in a cab or steals it? Journalists want to be able to communicate with their sources in confidence, and attorneys and doctors should be able to securely encrypt their privileged files. The United States Senate even encourages Senators to use end-to-end encryption, as does the 82nd Airborne Division of the US Army.
- There is no such thing as good-guy-only access. Being good or evil is a matter of perspective and ethics, and technology does not recognize those. Any backdoor, key escrow, or other system designed to comply with these laws is subject to abuse by malicious governments, malicious insiders, or criminals. Cryptographer and professor Matthew Green says so, Bruce Schneier says so, and I say so. We’ve seen providers with stored keys breached before, so it would be pretty surprising if it didn’t happen again. The only way to keep the keys from being compromised is for the provider not to have them at all.
- It will decrease trust in American service providers. Look at the way Huawei and ZTE are treated because of potential Chinese backdoors. Why would another country want the US government to have a backdoor into communications they use? Even if you believe the intent is good (and stopping child abuse certainly is), the way the US government has used its spying capabilities in the past raises serious concerns.
There’s good analysis of both EARN IT and LAED, the two bills introduced by Senator Graham, here:
- Stanford Law on EARN IT
- Stanford Law on LAED
- EFF on EARN IT
- Human Rights Watch on EARN IT
- EFF on LAED
Based on EFF language, I wrote to my Senators and Representative the following:
I write to you both as a constituent and in my personal capacity as an expert in cybersecurity. For most of the past decade, I have been employed as a senior security engineer at a large technology company; I have spoken at multiple conferences on information security and have published articles on the subject.
I strongly urge you to reject both the EARN IT Act (S.3398) and the Lawful Access to Encrypted Data Act. They both pose an existential threat to online privacy and security.
End-to-end encryption protects innocent and law-abiding users against data breaches at their service providers. As we’ve seen time and time again, people are irreversibly harmed when their communications are leaked, and requiring backdoor access for the government opens that backdoor to abuse by foreign governments and criminals.
The Graham-Blumenthal bill would give the Attorney General far too much power to dictate how Internet companies must operate. Attorney General William Barr has made it clear that he would use that authority to undermine our right to private and secure communications by blocking encryption. Additionally, delegating this power to the Attorney General leaves too much to the whims of each administration, creating a great deal of uncertainty about future policy.
The bill would create a commission tasked with creating “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation online. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements. The EARN IT Act’s structure would let Barr strong-arm the commission to include requirements that tech companies weaken their own encryption systems in order to give law enforcement access to our private communications. Companies could also be required to over-censor speech to comply with the government’s demands, or to bend to future governments’ political agendas in other ways.
Regulations relating to restrictions on speech must reflect a careful balance of competing policy goals and protections for civil liberties. Congress can only strike that balance through an open, transparent lawmaking process. It would be deeply irresponsible for Congress to offload that duty to an unelected commission, especially one controlled by appointed government officials.
Please publicly oppose the EARN IT Act and the Lawful Access to Encrypted Data Act.
I encourage you to do the same.
I wanted to run a small private Certificate Authority for some of my internal services. Since these aren’t reachable from the internet, and some of them are on network segments without internet connectivity, using a public ACME CA like Let’s Encrypt was inconvenient. On the other hand, if I ran my own private CA and its keys were compromised, they could be used to MITM all of my TLS traffic, since every device trusting that CA would accept the attacker’s certificates. While that’s unlikely to happen, I decided to look for a better option.
It turns out that the idea of a “limited purpose” Certificate Authority is not new. RFC 5280 provides for something called “Name Constraints”, which allows an X.509 CA to have a scope limited to certain names, including the parent domains of the certificates issued by the CA. For example, a host constraint of .example.com allows the CA to issue certificates for anything under .example.com, but not for any other host. For other hosts, clients will fail to validate the chain.
This hasn’t always been supported by TLS libraries and browsers, but all current browsers do support Name Constraints. That makes it a practical way to narrow the blast radius of a CA compromise: hosts not covered by the constraints in the CA certificate are unaffected.
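As a rough sketch of how this might look with OpenSSL (the section name, file names, and lifetimes here are illustrative, not prescriptive), the constraint is an X.509v3 extension set when the CA certificate is created:

```
# openssl.cnf fragment: a CA limited to example.com and its subdomains
[ v3_constrained_ca ]
basicConstraints     = critical, CA:TRUE, pathlen:0
keyUsage             = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
# The Name Constraints extension from RFC 5280;
# "permitted;DNS:example.com" covers example.com and everything beneath it.
nameConstraints      = critical, permitted;DNS:example.com
```

Something like `openssl req -new -x509 -key ca.key -config openssl.cnf -extensions v3_constrained_ca -out ca.crt` would then produce the constrained root. Marking the extension critical matters: a validator that doesn’t understand Name Constraints should reject the chain rather than silently ignore the restriction.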
When Netmux first released the Operator Handbook, I had to check it out. I had some initial impressions, but wanted to take some time to refine my thoughts before putting together a full review. The review will be a bit short, but that’s because this is a rather straightforward book.
I think the first thing to know is that this book is strictly a reference. It’s not something you read cover to cover to learn in a cohesive way. It would be like reading a dictionary or a thesaurus – while you might learn things reading it, it won’t be in any meaningful order. There are lots of things you can learn on a particular, very narrow topic, but it is mostly organized for use “in the moment”, not as a “learning in advance” kind of thing.
The second thing to know is that unless you’re regularly in environments that don’t allow you to bring electronics in (e.g., heavily restricted customer sites), you really want this book in electronic format for quick searching and copy/paste. In fact, the tagline on the cover is “SEARCH.COPY.PASTE.L33T:)”. This is obviously a lot easier from the digital version. (Though I have to admit, I love the cover of the physical book – it’s got a robust feel and a cool “find it quick” yellow color.)
I rather suspect this book is inspired by books like the Red Team Field Manual, the Blue Team Field Manual, and Netmux’s own Hash Crack: Password Cracking Manual. When you crack it open, you’ll immediately see the similarities – very task focused, intended to get something done quickly, rather than a focus on the underlying theory or background.
I’ve actually referred to the book a couple of times while doing operations. Some of the things in it would be easily obtained elsewhere (e.g., a quick Google search for “nmap cheatsheet” gets you much the same information), but many other things would require distillation of the information into a more consumable format, and Netmux has already done that.
Many of the items in the book are also transformed into a security mindset – e.g., interacting with cloud platforms like AWS or GCP. Rather than trying to provide the information necessary to operate those platforms, the book focuses on the aspects relevant to security practitioners. The book also contains links to additional references, which is yet another reason you want to have this in a digital format. Some kind of URL shortener links would have been a nice touch for the print version.
One thing that I really want to applaud in this book is that it includes a reference for mental health. Whether or not the information security industry has a particular predisposition to mental health issues, I absolutely love the normalization of discussing them.
While there is content for both Red and Blue teamers, like so many resources, it seems to lean Red. Maybe it’s only my perception as a Red Teamer, or maybe some of the content I perceive as “Red” is also useful to Blue teamers. I’d love to hear from someone on the Blue side as to how they find the book’s contents for their role – any takers?
Overall, I think this is a useful book. A lot of effort clearly went into curating the content and covering the wide variety of topics included in its 123 references. There’s probably nothing ground-breaking in it, but it’s presented so well that it’s totally worth having.
Okay, I’m not going to lie, the title was a bit of clickbait. I don’t believe that everyone in InfoSec really needs to know how to program, just almost everyone. Now, before my fellow practitioners jump on me, saying they can do their job just fine without programming, I’d appreciate you hearing me out.
So, how’d I get onto this? Well, there was a thread on a private Slack discussing whether Red Team operators should know how to program, followed by people on Reddit asking if they should learn to program. I thought I’d share my views in a more concrete (and longer) format here.
Computers are Useless without Programs
I realize that it sounds axiomatic, but computers don’t do anything without programs. Programs are what give a computer the ability to, well, be useful. So I think we can all agree that information security, as an industry, is based entirely around software.
I submit that knowing how to program makes most roles more effective merely by having a better understanding of how software works. Understanding I/O, network connectivity, etc., at the application layer will help professionals do a better job of understanding how software affects their role.
That being said, this is probably not reason enough to learn to program.
Learning to Program Opens Doors
I suppose this point can be summarized as “more skills make you more employable”, which is (again) probably axiomatic, but it’s worth considering. There are roles and organizations that will expect you to be able to program as part of the core expectations.
For example, if you currently work in a SOC, and you want to work on building or refining the tools used there, you’ll need to program.
Alternatively, if you want to move laterally to certain roles, those roles will require programming – application security, tool development, etc.
You Will Be More Efficient
There are so many times when I could have done something manually but ended up writing a program of some sort to do it instead. Maybe you have a range of IPs and need to check which of them are running a particular webserver, or you want to combine several CSVs based on one or two fields in them. Maybe you just want to automate some daily task.
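For instance, combining two CSVs on a shared column takes only a few lines of Python. This is a minimal sketch; the column names and data are made up for illustration, and in practice you’d read the rows from files rather than inline strings:

```python
import csv
import io

# Two toy CSVs sharing a "host" column (inlined here; normally files).
hosts_csv = "host,ip\nweb1,10.0.0.5\nweb2,10.0.0.6\n"
owners_csv = "host,owner\nweb1,alice\nweb2,bob\n"

# Index the second CSV by the shared field...
owners = {row["host"]: row["owner"]
          for row in csv.DictReader(io.StringIO(owners_csv))}

# ...then annotate each row of the first CSV with the matching value.
merged = []
for row in csv.DictReader(io.StringIO(hosts_csv)):
    row["owner"] = owners.get(row["host"], "")
    merged.append(row)

# merged[0] → {"host": "web1", "ip": "10.0.0.5", "owner": "alice"}
```

Ten minutes of scripting like this beats an afternoon of spreadsheet copy/paste, and you can rerun it the next time the data changes.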
As a Red Teamer, I often write scripts to accomplish a variety of tasks:
- Check a bunch of servers for a Vulnerability/Misconfiguration
- Proof of Concept to Exploit a Vulnerability
- Analyze large sets of data
- Write custom implants (“Remote Access Toolkits”)
- Modify tools to limit scope
On the blue side, I know people who write programs to:
- Analyze log files when Splunk, etc. just won’t do
- Analyze large PCAPs
- Convert configurations between formats
- Provide web interfaces to tools that lack them
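As one blue-side sketch, tallying failed SSH logins per source IP is just a regex and a Counter. The log lines below are fabricated for illustration; a real script would read from /var/log/auth.log or wherever your logs land:

```python
import re
from collections import Counter

# Fabricated auth.log excerpts standing in for a real log file.
log_lines = [
    "Jun 30 10:01:02 host sshd[512]: Failed password for root from 203.0.113.9 port 4242 ssh2",
    "Jun 30 10:01:05 host sshd[512]: Failed password for admin from 203.0.113.9 port 4243 ssh2",
    "Jun 30 10:02:11 host sshd[514]: Accepted password for alice from 198.51.100.7 port 9000 ssh2",
    "Jun 30 10:03:40 host sshd[519]: Failed password for root from 198.51.100.23 port 1050 ssh2",
]

# Pull the source IP out of each failed-login line and tally per IP.
pattern = re.compile(r"Failed password for \S+ from (\S+)")
failures = Counter(
    m.group(1)
    for m in (pattern.search(line) for line in log_lines)
    if m
)

# failures.most_common(1) → [("203.0.113.9", 2)]
```

This is exactly the kind of one-off that a log platform can do too, but when you’re on a box with nothing but Python, knowing how to write it yourself is the difference between an answer now and an answer never.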
How much do you need to know?
Well, technically none, depending on your role. But if you’ve read this far, I hope you’re convinced of the benefits. I’m not suggesting everyone needs to be a full-on software engineer or be coding every day, but knowing something about programming is useful.
I suggest learning a language like Python or Ruby, since they have REPLs (“read-eval-print loops”). These provide an interactive prompt where you can run statements and see the results immediately. Python seems to be more commonly used for InfoSec tooling, but both are good options for getting things done.
I would focus on file and network operations, and not so much on complicated algorithms or data structures. While those can be useful, standard libraries tend to have common algorithms (searching, sorting, etc.) well-covered. Having a sensible data structure makes code more readable, but there’s not often a need for “low level” structures in a high level language.
Have I Convinced You?
Hopefully I’ve convinced you. If you want to learn programming with a security-specific slant, I can highly recommend some books from No Starch Press: