What can Diaspora learn about security from Microsoft? (REVISED DRAFT)

See the final version here

Thanks to Adam, Jason, and Alem for the initial list; Sarah, tptacek, Locke1689, mahmud, Wayne, PeterH, Steve, and SonyaLynn for comments on the previous draft, and Damon for the wording on #7.


It’s counter-intuitive to think of Microsoft as a poster child for security — or as a role model for Diaspora, the “privacy-aware, personally-controlled, open-source, do-it-all social network”.   But the security mess Microsoft created back in the 1990s, the progress they’ve made since 2001, and the challenges they continue to face all provide some interesting lessons for a very different situation.

Four months after raising $200,000 on Kickstarter, Diaspora released their code to the open-source community in September.  From a security perspective, it’s Swiss cheese, riddled with security-101 errors.  In Diaspora: what next?, I argued:

This was probably the right tradeoff for Diaspora to make over the summer.  If the guys had spent all their time becoming security experts, they couldn’t have gotten as far as they have.  There’s a huge amount of value in giving people something to play with even if it’s insecure.
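To give a flavor of what a “security 101” error looks like in a young Rails app — a hypothetical sketch in plain Ruby, not a claim about Diaspora’s actual code — consider an insecure direct object reference, where a record is fetched by id without checking who owns it:

```ruby
# Hypothetical sketch (plain Ruby, no Rails) of a classic "security 101"
# hole: an insecure direct object reference (IDOR).
Photo = Struct.new(:id, :owner)

PHOTOS = [Photo.new(1, "alice"), Photo.new(2, "bob")]

# Vulnerable: looks up any photo by id, ignoring who is asking.
def find_photo_insecure(id)
  PHOTOS.find { |p| p.id == id }
end

# Safer: scope the lookup to records the requesting user owns.
def find_photo(user, id)
  PHOTOS.find { |p| p.id == id && p.owner == user }
end

find_photo_insecure(2)   # bob's photo leaks to anyone who guesses the id
find_photo("alice", 2)   # => nil -- alice can't read bob's photo
```

In Rails terms, the fix is the same shape: query through the current user’s association rather than the global table.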

Still, at some point fairly soon Diaspora will have to prioritize security.  If they get a reputation for security holes, then it doesn’t matter how privacy-aware they try to be: people won’t trust them.  The recent catastrophic failure of Haystack highlights what can happen to projects that set the wrong expectations and put their users at risk.

Fortunately, Diaspora’s not Haystack.  They’ve got a huge potential asset in their open-source community, which makes it possible for people who support them to help.  Hopefully they’re learning from open-source projects like Tor, Apache, and GPG that take security very seriously.

And Diaspora’s not Microsoft, either. They’re small and need to move fast, and don’t have a lot of resources.  Oh, and they’re not the evil empire — a definite plus.  So just as Linux and BSD got a lot of help from people who wanted an alternative to Microsoft, Diaspora’s likely to continue to benefit from the fear and loathing Facebook continues to inspire.

So, with that as the background, here are some of the lessons for Diaspora from Microsoft’s experiences:

  1. Reach out to the security community. Diaspora appears to be doing virtually nothing here, so there’s huge room for improvement.  Microsoft used to be even worse, treating security researchers as the enemy, and minimizing communication about security issues. By engaging with their critics, and providing a lot more information, they’ve learned a lot about how to improve their security — and also helped shift others’ perception of the company.
  2. Add at least one security expert to the team. Computer security is hard and you need to have somebody who understands it deeply involved in the design and engineering process. You also need to have a main point of contact with the community. The skills for these are somewhat different so they might be two different people — and because it’s such an exciting project, Diaspora may well be able to find people to join part-time very cheaply.
  3. Review the code. Actually this is a lesson from Tor and BSD as well: when you look at the code from a security perspective, you find plenty of things that you’d otherwise miss. On Hacker News, Locke1689 commented that from his experience “the most effective part of Microsoft’s security practice is that we had dedicated developers whose only job is to evaluate the security of proposed changesets”. And it’s worth thinking about going further: formal multi-role code reviews are very expensive, but fortunately Diaspora’s code base is small.
  4. Do threat modeling. If you don’t know what the threats are, how can you claim that your system protects privacy? It’s fun, too!  This is a place where Diaspora could really get a lot of help from the community if they can come up with a good way of sharing and refining threat models on their wiki.  With many eyes, all threats are shallow — or at least a lot shallower than they would be otherwise.  How cool would it be if computer security classes all over the world used Diaspora as an example, assigned threat modeling as an exercise, and contributed the best ones to the community?
  5. Train the developers — and the designers and quality engineers too. Secure software is everybody’s responsibility. Secure programming still isn’t covered in any detail in most undergraduate or graduate programs, so just like Microsoft discovered a decade ago, most developers don’t know the basic practices. Pairing (programming, code reviews, threat modeling) is a very effective way to train while making progress on some of the other items on this list.
  6. Use the tools — and develop the ones that don’t exist. There are excellent specification and testing tools in the Ruby environment (Cucumber, RSpec, Selenium), plus web security testing tools like Burp Suite. There are also gaps, for example in static analysis, fuzzing, and attack surface estimation.  Again, this is a great way for the broader Diaspora community to get involved and supplement the core team; Apache, Linux, Sendmail, and BSD are all great examples of open-source projects that have really benefited from this.
  7. Bake security in at every stage of development, as Damon Cortesi suggested on Quora.  Microsoft’s SDL, even the Agile version, may well be too heavyweight for Diaspora; there hasn’t been a lot of work in agile security that I know of, so to some extent Diaspora will be breaking new ground here.  It won’t be perfect at first, but what was really surprising at Microsoft was how quickly even imperfect versions provide value as a constant reminder of the importance of security.
  8. Create a security and privacy advisory board.  Microsoft’s Trustworthy Computing Advisory Board has about 20 experts from academia and has been incredibly valuable in many ways: feedback on priorities, suggestions for improvements, tough design and architecture reviews, sharing their understanding of Microsoft with their colleagues, and in some cases getting their students involved.   Obviously Diaspora can’t afford to fly everybody around to get together in person, but virtual meetings can be almost as effective.
  9. Think about security up front.   As Wayne Ariola pointed out in a comment on my original draft, Diaspora’s current “find-and-fix” approach mirrors the industry mindset.  If security isn’t designed in up front, it’s incredibly expensive to retrofit and you’re going to miss a lot.  By the time Microsoft started paying serious attention to security in 2001, they had created a huge hole for themselves, and even now, after investing over a billion dollars, they’re still playing catch-up.  MySpace is probably an even more relevant example: it may never recover from being overrun by hackers and spammers just as Facebook was gathering steam.
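To make #6 concrete, here’s a minimal sketch of the kind of lightweight fuzzing that’s currently missing from the Ruby toolchain: feed random input at a parsing routine and check that it only ever fails in the ways you intend.  The `parse_profile` function and its validation rule are illustrative assumptions, not Diaspora’s real API.

```ruby
require "json"

# Hypothetical input handler: accepts a JSON object with a "name" key.
# Malformed JSON is rejected cleanly (nil); well-formed JSON that fails
# validation raises ArgumentError. Anything else is a bug.
def parse_profile(raw)
  data = JSON.parse(raw)
  raise ArgumentError, "name required" unless data.is_a?(Hash) && data["name"]
  data
rescue JSON::ParserError
  nil
end

# Tiny fuzz harness: random printable strings, deterministic seed so
# failures are reproducible. Any exception other than the expected
# ArgumentError crashes the run -- that's the signal we're fuzzing for.
srand(42)
1_000.times do
  input = Array.new(rand(0..32)) { (32 + rand(95)).chr }.join
  begin
    parse_profile(input)
  rescue ArgumentError
    # expected rejection of well-formed JSON that fails validation
  end
end
puts "fuzz run survived 1000 random inputs"
```

Even a harness this crude, run in CI, turns “we think the parser is robust” into something the whole community can verify and extend.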

Yes, it’s a lot.  One comment I got on my previous draft was that no web startups do all of these things.  True enough, although I don’t think that’s the best way to look at Diaspora.  Most web startups aren’t basing their appeal primarily on privacy, so the consequences of their ongoing security problems aren’t particularly severe.  Most web startups are tackling easier problems.   Most web startups don’t have the early-stage visibility and the chance to galvanize an open-source community.  And most web startups don’t have as huge an opportunity.

So rather than comparing themselves to the typical web startup, I think it’s better to think of Diaspora as having a chance to be the next dominant social network, the successor to Friendster, MySpace, and Facebook.   From that perspective, it seems to me it’s worth investing.

jon


Comments


  1. I sent the first draft to the Diaspora developers Google group and Hacker News, and got some extremely helpful feedback. There was also useful input via Twitter and my Facebook feed. I posted it on Quora and a “tech influencers” group on Facebook too, but it didn’t get any attention in either of those places. For this revision I’ll try to broaden the audience …

    Thanks to all for the feedback, and please keep it coming!

  2. But wait! Quora comes through, with some great input from Damon Cortesi, now incorporated!

  3. Some great email feedback from Solar Designer of Openwall, a security-enhanced GNU/*/Linux distro:

    Besides the code and typical end-user and developer documentation, there should be a detailed specification (perhaps structured/hierarchical) of Diaspora’s security model and intended behavior (the latter not even limited to obviously security relevant things)….

    From our experience, especially with security audits of web apps, it is very easy to miss application logic errors, simply because the security people often don’t know what decisions were made on what functionality was supposed to be available to what users of the system, in what cases, and why. Sometimes such decisions were not even made, but merely implied by someone (such as a lead developer) and considered “obvious”.

    It is fairly easy to spot, say, an SQL injection (or risk thereof due to bad practices). However, it is similarly easy for someone doing a security audit (or, say, reviewing a changeset for security, as you propose) to completely miss the complete lack of access controls, where someone else in the project would think such access controls were “implied” and “obviously needed”. This is from our own experience (yes, there were occasions when we did spot implementation bugs, but missed high-level logic errors and non-existent but implied security features).

    Perhaps such a specification could consist of both formal/machine-readable and informal/documentation parts. The former could include assertions (e.g., on possible/impossible state transitions), which could be verified (vs. the actual implementation) by a machine or/and manually (perhaps the possible verification approaches will differ between the assertions). Some of these assertions could even be encoded in security design and APIs (if something is supposed to be false, make it impossible – checks for “can’t happen” conditions, disallowed state transitions; disable risky APIs, etc.)

    As to startups reasonably not caring about security initially, this is simply the reality, the way it has to be most of the time. If a non-security-focused startup invests into security right away, they’re more likely to fail (run out of funds before turning up a profit). A way around this, which we’re actually using right now (for a client’s startup where Openwall provides some services), is to do some security design and partial implementation upfront, but only for things that would be hard/costly to introduce at a later stage. Other security design and implementation gets postponed until the project becomes cash flow positive. Ditto for a security audit of the code (postponed). So there are known and expected mis-designs and implementation issues, but mostly not for things that would be very invasive to fix later.

    If Diaspora’s resources are very limited, they might use this approach initially, too – need good security experts involved right away, but use up only relatively little of their time (and of the developers’ time).

    Regardless of the security aspect, a project like this is likely to have their software almost(?) completely rewritten, maybe not all at once (this may proceed slowly), but quite possibly more than once. It is simply too hard and unreasonable to try to get all design decisions right from the start. The initial version should be just a throw-away hack, an early experiment. So there will be opportunities to change the security design as well.

    I don’t have much to add except “well said”.
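    Well, maybe one small thing: here’s a sketch of what his “if something is supposed to be impossible, make it impossible” idea could look like in Ruby. The `Post` class and its states are illustrative assumptions, not Diaspora’s actual design.

```ruby
# Hypothetical sketch: encode the security model in the API itself, so
# disallowed state transitions simply can't happen. (Illustrative names,
# not Diaspora's real design.)
class Post
  # The whitelist of legal transitions IS the specification.
  ALLOWED = {
    draft:     [:shared],     # a draft may be shared...
    shared:    [:retracted],  # ...a shared post may be retracted...
    retracted: []             # ...but a retracted post can never come back
  }.freeze

  attr_reader :state

  def initialize
    @state = :draft
  end

  def transition_to(new_state)
    unless ALLOWED.fetch(@state).include?(new_state)
      raise SecurityError, "illegal transition #{@state} -> #{new_state}"
    end
    @state = new_state
  end
end
```

    A reviewer no longer has to guess whether retracted-to-shared was “implied” to be forbidden — the table says so, and the code enforces it.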

  4. And Halvar Flake of Zynamics presented a devil’s advocate viewpoint:

    What happens if Diaspora security fails miserably ? In essence, it defaults to what happens in other social networks, right ? 😉 — so the worst case failure means it’ll be equal to what everybody else is doing.

    Of course it’s hard to know for sure, but it seems to me that since Diaspora’s whole reason for existence is to be privacy-friendly, they’ll lose if they’re at the same level as everybody else.

  5. A timely comment by Evgeny Morozov in BusinessWeek:

    The problem in Haystack’s case was that those guys started without any understanding of how the Iranian police and security people work, what they look for, and how they go about identifying dissidents. [Haystack] never even had what security experts call a threat model. So they never even conceptualized or thought through what risks their users were likely to be under when using Haystack.
