In RSA: “It feels like something’s missing” earlier this week, I mentioned that I found myself wondering whether what I was seeing at the show responded to security problems as experienced by users. Coincidentally enough, when I checked Slashdot today there were several interesting security-related threads. So while it’s far from a statistically-valid sample, it’s still a great chance to ask: is the industry successfully addressing these kinds of problems?
Let’s start with Oklahoma Leaks 10,000 Social Security Numbers, which is by far the most serious single issue:
“By putting SQL queries in the URLs, they not only leaked the personal data of tens of thousands of people, but enabled literally anyone with basic SQL knowledge to put his neighbor/boss/enemies on the sexual offender list.”
This appears to be a SQL injection defect, and a particularly glaring one. Lots of static analysis tools (Coverity, Fortify, Armorize, Ounce) find SQL injection bugs — were these run as part of the site’s development and QA process? This particular problem is so blatant that any competent penetration tester would have been highly likely to find it; were any of the firms that provide this service employed? And in my earlier post I talked about database activity monitoring product Secerno, which focuses on stopping SQL injection attacks at run-time; was that product, or a similar one, deployed? As DailyWTF pointed out in their excellent post breaking the story, this is secure coding 101; were the developers and testers on the project trained? And finally, compliance systems are responsible for providing information to management that should highlight major process oversights like this; was a system in use — or did management approve the deployment of a site even knowing the risks?
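To make the “secure coding 101” point concrete, here’s a minimal sketch of the defect class and its standard fix. This is a hypothetical illustration using Python’s built-in sqlite3 module, not the Oklahoma site’s actual code; the table and function names are invented for the example.

```python
import sqlite3

def lookup_unsafe(conn, name):
    # DANGEROUS: the user-supplied value is pasted directly into the SQL text,
    # so a crafted value can rewrite the query itself.
    return conn.execute(
        "SELECT name FROM offenders WHERE name = '%s'" % name
    ).fetchall()

def lookup_safe(conn, name):
    # SAFE: a parameterized query. The driver binds the value as data,
    # so it can never change the structure of the statement.
    return conn.execute(
        "SELECT name FROM offenders WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offenders (name TEXT)")
conn.execute("INSERT INTO offenders VALUES ('Mallory')")

# An attacker-supplied value that breaks out of the string literal:
attack = "x' OR '1'='1"
print(lookup_unsafe(conn, attack))  # matches every row in the table
print(lookup_safe(conn, attack))    # matches nothing
```

The unsafe version is exactly the pattern that both static analysis tools and penetration testers are built to catch; passing SQL fragments around in URLs, as described above, is an even more extreme variant.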
Of course, none of these things would necessarily have guaranteed that the problem wouldn’t occur: no static analysis tool finds 100% of defects, penetration testers miss things, even trained people sometimes make mistakes, etc. And there may well be some behind-the-scenes information that casts a different light on the subject. Still, it seems to me that this is a case where the kinds of products and services I saw at RSA should have prevented the problem; my guess is that due to budget constraints or time pressure, they simply weren’t deployed.
Fake Subpoenas Sent To CEOs For Social Engineering is kinda cool:
“The Internet Storm Center notes that emails that look like subpoenas are being sent out to the CEOs of major US corporations. According to the ISC’s John Bambenek: ‘…. It then asks them to click a link and download the case history and associated information. It’s a “click-the-link-for-malware” typical spammer stunt. So, first and foremost, don’t click on such links. An interesting component of this scam was that it did properly identify the CEO and send it to his email directly. It’s very highly targeted that way.’”
So at one level, this is something the industry is clearly responding to: there were a lot of anti-malware tools at RSA, as well as plenty of patch management tools to try to minimize how vulnerable systems are to infection. As we saw in the recent pwn2own contest, though, even fully-patched systems can be taken over. Suppose that somebody less ethical than contest winner Charlie Miller had sold their Safari exploit to a criminal gang; then any CEO with a Mac would have been toast if they clicked on the link.
[And sure, it’s easy to say “don’t click the links”, and the ISC’s page goes into some more detail: “If you get subpoenas, take it to a lawyer. Don’t click on links. And most importantly, no one renders service through e-mail right now, and if you tried it wouldn’t “count”.” I hadn’t ever really thought about it until now, though, and so if I got some official-looking mail telling me I’d been subpoenaed, I probably would want to find out more … I might well have gotten pwned by this.]
To be fair, I did see a lot of good anti-malware tools at RSA, and they might well have stopped this particular attack. It’d be interesting to know which of them would or wouldn’t — and for the ones that didn’t, how long it would take them to react.
In What Should We Do About Security Ethics? an anonymous reader writes
I am a senior security xxx in a Fortune 300 company and I am very frustrated at what I see. I see our customers turn a blind eye to blatant security issues, in the name of the application or business requirements. I see our own senior officers reduce the risk ratings of internal findings, and even strong-arm 3rd party auditors/testers to reduce their risk ratings on the threat of losing our business
This is a tough one. The hope is that compliance systems, coupled with the threat of liability (under Sarbanes-Oxley, for example) or lawsuits, will change behavior over time: senior officers may be less willing to reduce risk ratings if this decision is logged and they are likely to be held responsible if something goes wrong. In practice, since as far as I know nobody’s been held liable yet, the deterrent effect isn’t there. The Oklahoma case is a good example: how much will this cost the state in damages, and whose career suffers a major setback? And I’m not convinced that the compliance products I saw would even capture this kind of information. On the whole, it’s really hard for me to imagine this dynamic changing without a fundamental shift in the legal landscape, establishing liability for data breaches — and for software with security vulnerabilities.
And finally, Windows Live Hotmail CAPTCHA Cracked, Exploited:
Coming on the heels of credible accounts of the downfall of first Yahoo’s and then Gmail’s CAPTCHA, Ars Technica is reporting on Websense Security Labs’ deconstruction of the cracking and tuning / exploitation of the Live Hotmail CAPTCHA. Ars calculates that a single zombie computer can sign up over 1400 Live Hotmail accounts in a day, and alternate account creation with spamming.
Those intentionally-hard-to-read words that you have to type in when you sign up for email accounts (or leave blog comments) are CAPTCHAs, designed to ensure that there’s a real person at the other end. Since the concept first got introduced, it’s been an arms race between the people devising CAPTCHAs — and the hackers using technology to decode them automatically. Recent score: hackers 3, industry 0.
And at an even deeper level, this reveals an uncomfortable truth: there are huge numbers of zombie machines out there, mostly running Win95 and early versions of Windows XP, now largely controlled by “botnet herders”: professionals who are happy to make them available for whatever purpose at surprisingly reasonable prices. Something to keep in mind here: the substantial improvements that Microsoft and other OS vendors have made to their security more recently will not change that — at least until those machines go out of service. This is now a fact of life we have to deal with. And so when other security measures fail, broad-based exploitation can happen rapidly. Dealing with it becomes largely a matter of law enforcement, not industry security solutions.
So from this admittedly-small data set, it’s a mixed bag for the security industry. On the downside, there are some fundamental problems that really seem beyond the industry’s ability to address. And there are some specific areas where the technology is still lagging.
The good news is that there are a lot of solutions (static analysis, penetration testing, training, database security monitoring and other kinds of dynamic analysis, security reviews) that can help with major issues like data breaches — and obviously, a lot of room for improvement just getting them deployed today.