System Overlord

A blog about security engineering, research, and general hacking.

[CVE-2014-5204] Wordpress nonce Issues

Wordpress 3.9.2, released August 6th, contained fixes for two closely related vulnerabilities (CVE-2014-5204) that I reported to the Wordpress Security Team, both in the way it handles Wordpress nonces (essentially CSRF tokens). I’d like to say the delay in publishing this write-up was to allow people time to patch, but the reality is I’ve just been busy and haven’t gotten around to it.

TL;DR: Wordpress < 3.9.2 generated nonces in a manner that would allow an attacker to generate valid nonces for other users for a small subset of possible actions. Additionally, nonces were compared with ==, leading to a timing attack against nonce comparison. (Although this is very difficult to execute.)

Review of CSRF Protection

A common technique for avoiding Cross Site Request Forgery (CSRF) is to have the server generate a token specific to the current user, include that in the page, and then have the client echo that token back with the request. This way the server can tell that the request was in response to a page from the server, rather than a request triggered on the user’s behalf by an attacker. OWASP calls this the Synchronizer Token Pattern and one of the requirements is that an attacker is not able to predict or determine tokens for another user.
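To make the pattern concrete, here’s a minimal PHP sketch of the idea (not taken from any particular framework; the csrf_token name is just for illustration, and the comparison uses hash_equals, discussed later in this post):

#!php
<?php
session_start();

// Generate a per-session token once and embed it in every form served.
if ( empty( $_SESSION['csrf_token'] ) ) {
	$_SESSION['csrf_token'] = bin2hex( openssl_random_pseudo_bytes( 16 ) );
}

// The form template echoes the token into a hidden input named "csrf_token".

// When handling the POST, reject any request whose token doesn't match.
if ( $_SERVER['REQUEST_METHOD'] === 'POST' ) {
	if ( ! isset( $_POST['csrf_token'] ) ||
	     ! hash_equals( $_SESSION['csrf_token'], (string) $_POST['csrf_token'] ) ) {
		http_response_code( 403 );
		exit( 'Invalid CSRF token' );
	}
	// ... process the request ...
}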

Wordpress Nonces

Wordpress uses what they call “nonces” (though they’re not, in fact, guaranteed to be used only once) for CSRF protection. These nonces include a timestamp, a user identifier, and an action, all of which are part of best practices for CSRF tokens. These values are HMAC’d with a secret key to generate the final token. At first blush, the nonce generation code looks good. Here’s how nonces were generated prior to the 3.9.2 fix:

#!php
function wp_create_nonce($action = -1) {
	$user = wp_get_current_user();
	$uid = (int) $user->ID;
	# snipped

	$i = wp_nonce_tick();

	return substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10);
}

wp_nonce_tick returns a monotonically increasing value that increments every 12 hours, providing a timeout on the resulting nonce. $user->ID is the auto-increment id column from the database. wp_hash performs an HMAC-MD5 using a key selected by the second argument, the nonce key in this case. So we’re essentially getting an HMAC of a string concatenation of the current time, the action value passed in, and the current user’s UID. Assuming HMAC is strong, we’ve got a user-, action-, and time-specific token, right?

Wrong. What if we can figure out a way to collide inputs to the HMAC? Turns out this is pretty easy, actually. Let’s look at some instances where wp_create_nonce is used:

#!php
wp_create_nonce( "approve-comment_$comment->comment_ID" )
wp_create_nonce( 'set_post_thumbnail-' . $post->ID );
wp_create_nonce( 'update-post_' . $attachment->ID );

In more than one case, we see places where nonces are created with an action that ends in an ID value (an integer from the database). Note that these action values sit immediately before the UID, also an integer. This means that once the concatenation is done, there is no separation between the integer portion of the action and the UID, leading to collisions in the hash input, and consequently the same nonce value being generated. Take, for example, an installation where users are privileged to update their own posts but not those of other users. Let’s take user 1 and post 32, and user 21 and post 3. What are the respective inputs to wp_hash? (I’m substituting 0 for the timestamp value as it’s the same for all users at the same time.)

$i . 'update-post_32' . 1 => '0update-post_321'
$i . 'update-post_3' . 21 => '0update-post_321'
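To see the collision run end to end, here’s a standalone sketch that mimics wp_hash with hash_hmac directly (the key and tick value are made up for illustration):

#!php
<?php
$key = 'not-the-real-nonce-key';  // stand-in for the site's secret nonce key
$i   = 12345;                     // stand-in for wp_nonce_tick()

// User 1 updating post 32 vs. user 21 updating post 3:
$nonce_a = substr( hash_hmac( 'md5', $i . 'update-post_32' . 1,  $key ), -12, 10 );
$nonce_b = substr( hash_hmac( 'md5', $i . 'update-post_3'  . 21, $key ), -12, 10 );

var_dump( $nonce_a === $nonce_b );  // bool(true) -- identical input, identical nonce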

Despite being two separate users and two separate actions, their nonce values will be the same. While this is fairly limited in what an attacker can do (you can’t pick arbitrary users and values, only “related” users and values), it’s also very easy to fix and completely eliminate the hole: simply add a non-integer separator between the segments of the hash input. Wordpress 3.9.2 now inserts a | between each segment, so now the hash inputs look like this:

$i . '|' . 'update-post_32' . '|' . 1 => '0|update-post_32|1'
$i . '|' . 'update-post_3' . '|' . 21 => '0|update-post_3|21'

No longer will the HMACs collide, so two distinct nonces are generated, closing the CSRF hole. The implementation also now includes your session token, making it even harder for an attacker to generate a collision, though I can’t think of a specific hole that this fixes (it does mean new nonces are generated after a logout/login):

#!php
function wp_create_nonce($action = -1) {
	$user = wp_get_current_user();
	$uid = (int) $user->ID;
	# snipped

	$token = wp_get_session_token();
	$i = wp_nonce_tick();

	return substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce' ), -12, 10 );
}

Timing Attack

Though probably very difficult to exploit on modern systems, using PHP’s == to compare hashes leaves the comparison open to a timing attack (not to mention the possibility of running afoul of PHP’s bizarre type-juggling comparison behavior).

Formerly:

#!php
if ( substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10) == $nonce ) {
  ...

Now:

#!php
$expected = substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce'), -12, 10 );
if ( hash_equals( $expected, $nonce ) ) {
  ...

hash_equals was added in PHP 5.6, but if you don’t have it, Wordpress provides its own implementation using a fairly common constant-time comparison pattern.
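For reference, a fallback along these lines (a minimal sketch of the usual constant-time pattern, not a verbatim copy of Wordpress’s compat function) looks like:

#!php
<?php
if ( ! function_exists( 'hash_equals' ) ) {
	function hash_equals( $known, $user ) {
		if ( strlen( $known ) !== strlen( $user ) ) {
			return false;
		}
		// XOR each byte pair and accumulate the differences, so the loop
		// always runs the full length no matter where a mismatch occurs.
		$result = 0;
		for ( $i = 0; $i < strlen( $known ); $i++ ) {
			$result |= ord( $known[ $i ] ) ^ ord( $user[ $i ] );
		}
		return 0 === $result;
	}
}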

Summary

Even when you include all the right things in your CSRF implementation, it’s still possible to run into trouble if you combine them the wrong way. Much like a hash length extension attack, cryptography won’t save you if you’re putting things together without thinking about how an attacker can alter or vary the inputs.

I’d like to thank the Wordpress security team for their responsiveness when I reported the issues here. I have nothing but positive things to say about the team and my interactions with them.


Security: Not a Binary State

I’ve been spending a fair amount of time on Security StackExchange lately, mostly looking for inspiration for research and blogging, but also answering a question every now and then. One trend I’ve noticed is people asking questions of the form “Is security practice X secure?”

This is asked as a yes/no question, but security isn’t a binary state. There is no “absolutely secure.” Security is a spectrum, and it really depends on what you’re worried about, which is where threat modeling comes in. Both users and service providers need to consider their risks and decide what’s important to them.

Users

Most internet users will never be specifically targeted by an attacker. Their concerns will (should) include:

  • Run-of-the-mill malware
  • Phishing
  • Security on public hotspots
  • Password management

For these users, maintaining a patched system, being aware of phishing, using a VPN on public hotspots, and maybe using an anti-virus or anti-malware program will generally protect against the threats they’re subject to. Of course, education is still important: if they run random programs downloaded from the internet, malware will still make its way in.

Other users might have a more determined adversary. Those with access to financial systems, valuable data, or other desirable access may end up being targeted. Spearphishing becomes an issue. Depending on what you do, you might even find yourself subject to the ire of well-funded state attackers. What is adequately secure for a “normal” user is woefully inadequate for these users.

Service Providers

I’d originally only intended to talk about providers of Internet services, but given the continuing tendency of businesses to place their infrastructure online, I think all businesses interacting with customers fall into this category.

Service providers have a responsibility to two kinds of data: their own data, and their users’ data. For their own data, they are essentially a user as above. The service provider should perform their own threat modeling, decide what risks are and are not acceptable to them, and then act to secure their data against the risks they are worried about.

User data, on the other hand, is sacred. While no business can protect against every possible adversary, they should consider the trust users place in them and try to protect the data as their users would want it protected. While the provider might be willing to roll the dice on, say, their corporate email, they should consider if that data can be used to compromise user data. (Nearly universally, the answer to that question is “yes.”)

We’ve had a recent series of Point-of-Sale data breaches, including Target and Home Depot. I’m disappointed that, so far, these haven’t seemed to hurt the retailers very much. Retailers will only take adequate measures to protect themselves when it becomes obvious that the consequences of a data breach will be massive. If the consequences of violating PCI actually had teeth (e.g., losing the ability to accept credit cards until you can be re-certified, say for 6 months) and customers moved elsewhere, maybe the businesses would get a bit proactive.

Conclusion

That’s enough ranting for now, but I wanted to get one main point across: It’s important to remember that you can never be “secure.” You can only be “secure enough” to defend against some set of adversaries and threats. Anything else is just wishful thinking.


DEF CON 22 Recap

Conference Badges

I’m back and recovering with typical post-con fatigue. This year, I made several mistakes, not the least of which was trying to do BSides, Black Hat, and DEF CON. Given the overlapping schedules and the events occurring outside the conferences, this left me really drained, not to mention spending more time transiting between the events than I’d like.

BSides Las Vegas

BSides was a blast, but I spent most of the time I was there playing in the Pros vs Joes CTF run by Dichotomy. This is a particularly nice Capture the Flag competition, since it’s based on defending (and attacking) “real world” networks, rather than the typical Jeopardy-style “crack this binary” competitions. Most of the problems seen in the real world aren’t 0-days produced by talented hackers, but configuration weaknesses, outdated software, and insecure practices exploited by script kiddies. PvJ forces you to consider how to harden a “corporate” environment while still providing the same services. You get a Cisco ASA as your firewall and can reconfigure services as needed to establish your perimeter and secure your systems. On Day 2, you also get to see just how good you are at breaking in, and just how good (or bad) your opponents are at securing their network.

Black Hat

There were a couple of interesting talks to see at Black Hat, but some of the ones that I hoped would be more groundbreaking seemed to just scratch the surface and didn’t provide enough depth. (Or working demos! I’m looking at you, USB firmware!) The Black Hat business hall was an incredible letdown, as basically none of the booths had anyone with technical depth for discussion, just salespeople who wanted to sell things that probably don’t work anyway. [Cynical mode off.]

In all honesty, Black Hat continues to be a venue for government & corporate security managers, and the consultants and contractors that work for those entities. There’s absolutely nothing community-oriented about it, but so long as you go in with that expectation, you won’t be disappointed.

DEF CON 22

So much to do, so little time! Every year, I’m plagued by the same problem: which of the 7 amazing things going on right now do I want to do? This year, the problem got even more complicated for me due to an event run by my employer.

The badge was, as usual, pretty awesome, thanks to 1o57’s work. Apparently he even worked on it during his honeymoon, so a big thanks to @NelleBot for not yelling at him too much so that we all got to play with some awesome hardware. Once again, the badge features a Parallax Propeller chip, which is sort of unfortunate, as the toolkit for it is closed-source and Linux is not a first-class citizen. Between that & time constraints, I didn’t spend any time working on the badge challenge, but maybe I’ll play around with it some now that I’m home. I believe I’ve spotted (and heard of) an IR transmitter/receiver pair, similar to the DC20 badge. I also have some IR LEDs and receivers at home, so I wonder if they’re in a similar range. Maybe I’ll break out a Digispark as an IR transceiver to play around with.

Thursday night was theSummit, an annual fundraiser run by Vegas 2.0 to raise money for the Electronic Frontier Foundation. It’s an incredible event, with lots of great people in attendance, and a good opportunity to meet many of the BSides and DEF CON speakers. The fact that there’s a raffle, auction, and open bar is just the icing on the cake. (Donating to the EFF makes it such a good cause that I wouldn’t miss it for anything!) As you can see at the top, the VIP badge for theSummit was pretty awesome. I love the LED shining through the acrylic to make the text glow.

I was really happy to see the Crypto & Privacy village, and even though I only got a little time there, it was great to see that playing more of a role at DEF CON. I attended the OpenPGP keysigning on Friday, but didn’t make it back for Saturday’s. They also seemed to have some good introductory crypto talks, and it’ll be interesting to see how that evolves over the next year.

Despite losing a lot of time to a work event and teaching at the R00tz Asylum, I managed to play in Capture the Packet with another member of DC404 (my DEF CON group from when I lived in Atlanta) and we won the round, qualifying for the finals. Unfortunately, he wasn’t able to make it to the finals due to his flight arrangements, so another DC404 member (and current coworker) stepped in, and we managed a 2nd place overall finish, which I was extremely happy with. (Not that a black badge wouldn’t have been cool… There’s always next year.)

High Roller in Las Vegas

Of course, work events aren’t so bad when they come with this view. We took some interesting people on a little trip around the High Roller, the tallest Ferris Wheel in the world, right off the strip! It was incredible to get to talk with some of them, and the view didn’t hurt things either.

If you haven’t heard, this was the final year at the Rio. It’s time to pack our bags and head across the freeway to Paris. And Bally’s. That’s right, it’s going to take 2 hotels to contain all the hackers. Apparently we’ll have room blocks at several more of the area hotels. Makes sense given this year’s reported 16,000 attendance.


Weekly Reading List for 8/2/14

This has been missing for a few weeks, but it’s back!

Why is CSP Failing?

Why is CSP Failing? Trends and Challenges in CSP Adoption. Despite being an “academic” paper, this actually has a lot to offer about why one of the most effective defenses against XSS isn’t yet getting widely implemented, and what the implementation costs and strategies are.

Safari Bites the Dust

Ian Beer of Google Project Zero recently popped Safari and then proceeded to pwn OS X. This post dives into exploiting a WebKit unbounded write bug, and makes it obvious just how many hoops an attacker needs to go through compared to the ‘buffer overflow to overwrite EIP’ bugs of the ‘good old days’. It’s a great read, especially if you’re new to browser/client exploitation.

Blackhat & DEF CON Tips

It’s that time of year again – the annual Las Vegas pilgrimage for hackers. As usual, Chief Monkey over at Toolbox.com has some protips for first time attendees. (Or reminders for seasoned vets!)


Passing Android Traffic through Burp

I wanted to take a look at all HTTP(S) traffic coming from an Android device, even if applications made direct connections without a proxy, so I set up a transparent Burp proxy. I decided to put the proxy on my Kali VM on my laptop, but didn’t want to run an AP there, so I needed a way to get the traffic over to it.

Network Setup

Network Topology Diagram

The diagram shows that my wireless lab is on a separate subnet from the rest of my network, including my laptop. The lab network is a NAT run by IPTables on the Virtual Router. While I certainly could’ve ARP poisoned the connection between the Internet Router and the Virtual Router, or even added a static route, I wanted a cleaner solution that would be easier to enable/disable.

Setting up the Redirect

I decided to use IPTables on the virtual router to redirect the traffic to my Kali laptop. Furthermore, I decided to enable/disable the redirect based on logging in/out via SSH, but I needed to make sure the redirect would get torn down even if there’s not a clean logout: i.e., the VM crashes, the SSH connection gets interrupted, etc. Enter pam_exec. By using the pam_exec module, we can have an arbitrary command run on login/logout, which sets up and tears down the IPTables REDIRECT that feeds an SSH tunnel to my Burp proxy.

In order to get the command executed on any login/logout, I added the following line to /etc/pam.d/common-session:

session optional	pam_exec.so log=/var/log/burp.log	/opt/burp.sh

This launches the following script, which checks that it’s being invoked for the right user and for an SSH session, and then inserts or deletes the relevant IPTables rules.

#!/bin/bash

BURP_PORT=8080
BURP_USER=tap
LAN_IF=eth1

set -o nounset

# Emit the iptables commands to insert (-I) or delete (-D) the redirect rules.
function ipt_command {
	ACTION=$1
	echo iptables -t nat $ACTION PREROUTING -i $LAN_IF -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports $BURP_PORT\;
	echo iptables $ACTION INPUT -i $LAN_IF -p tcp --dport $BURP_PORT -j ACCEPT\;
}

# Only act when the proxy user logs in or out over SSH.
if [ "$PAM_USER" != "$BURP_USER" ] ; then
	exit 0
fi

if [ "${PAM_TTY:-}" != "ssh" ] ; then
	exit 0
fi

if [ "$PAM_TYPE" == "open_session" ] ; then
	CMD=`ipt_command -I`
elif [ "$PAM_TYPE" == "close_session" ] ; then
	CMD=`ipt_command -D`
else
	exit 0
fi

date
echo $CMD

eval $CMD

This redirects all traffic incoming from $LAN_IF destined for ports 80 and 443 to local port 8080. This does have the downside of missing traffic on other ports, but this will get nearly all HTTP(S) traffic.

Of course, since the IPTables REDIRECT target delivers the connection to the address of the interface it arrived on rather than to localhost, we need to allow our SSH port forward to bind to all interfaces. Add this line to /etc/ssh/sshd_config and restart SSH:

GatewayPorts clientspecified

Setting up Burp and SSH

Burp’s setup is pretty straightforward, but since we’re not configuring a proxy in our client application, we’ll need to use invisible proxying mode. I actually put invisible proxying on a separate port (8081) so I still have 8080 set up as a regular proxy. I also use the per-host certificate setting to get the “best” SSL experience.

Burp Setup

It turns out that there’s an issue with OpenJDK 6 and SSL certificates. Apparently it will advertise algorithms not actually available, and then libnss will throw an exception, causing the connection to fail, and the client will retry with SSLv3 without SNI, preventing Burp from creating proper certificates. It can be worked around by disabling NSS in Java. In /etc/java-6-openjdk/security/java.security, comment out the line with security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg.
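For clarity, the relevant line (using the path given above for OpenJDK 6; adjust it for your distribution) ends up looking like this once commented out:

# in /etc/java-6-openjdk/security/java.security
#security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg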

Forwarding the port over to the wifilab server is pretty straightforward. You can either use the -R command-line option, or better, set things up in ~/.ssh/config.

Host wifitap
  User tap
  Hostname wifilab
  RemoteForward *:8080 localhost:8081

This logs in as user tap on host wifilab and forwards port 8080 on the wifilab machine back to local port 8081, where Burp’s invisible proxy is listening. The * for the bind address ensures the remote end binds to all interfaces (0.0.0.0), not just localhost.
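If you’d rather not touch ~/.ssh/config, the equivalent one-off invocation (assuming the same tap user and wifilab hostname) is:

ssh -R '*:8080:localhost:8081' tap@wifilab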

Setting up Android

At this point, you should have a good setup for intercepting traffic from any client of the WiFi lab, but since I started off wanting to intercept Android traffic, let’s optimize for that by installing our certificate. You can install it as a user certificate, but I’d rather do it as a system cert, and my testing tablet is already rooted, so it’s easy enough.

You’ll want to start by exporting the certificate from Burp and saving it to a file, say burp.der.

Android’s system certificate store is in /system/etc/security/cacerts, and expects OpenSSL-hashed naming, like a0b1c2d3.0 for the certificate names. Another complication is that it’s looking for PEM-formatted certificates, and the export from Burp is DER-formatted. We’ll fix all that up in one chain of OpenSSL commands:

(openssl x509 -inform DER -outform PEM -in burp.der;
 openssl x509 -inform DER -in burp.der -text -fingerprint -noout
 ) > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash -noout`.0

Android before ICS (4.0) uses OpenSSL versions below 1.0.0, so you’ll need to use -subject_hash_old if you’re using an older version of Android. Installing is a pretty simple task (replace HASH.0 with the filename produced by the command above):

$ adb push HASH.0 /data/local/tmp/HASH.0
$ adb shell
android$ su
android# mount -o remount,rw /system
android# cp /data/local/tmp/HASH.0 /system/etc/security/cacerts/
android# chmod 644 /system/etc/security/cacerts/HASH.0
android# reboot

Connect your Android device to your WiFi lab, run ssh wifitap from your Kali install running Burp, and you should see your HTTP(S) traffic in Burp (except for apps that use pinned certificates; that’s another matter entirely). You can check your installed certificate from the Android Security Settings.

Good luck with your Android auditing!