

about julia allen

Julia Allen has been with the SEI since 1992. She served as Deputy Director/Chief Operating Officer for three years and as acting Director for an interim period of six months. She is the author of The CERT Guide to System and Network Security Practices (Addison-Wesley, June 2001), Governing for Enterprise Security (CMU/SEI-2005-TN-023, 2005), the CERT Podcast Series: Security for Business Leaders, and co-author of Software Security Engineering: A Guide for Project Managers (Addison-Wesley, May 2008).


CyLab Chronicles

Q&A with Julia Allen

posted by Richard Power


CyLab Chronicles: Let's talk about your work in the software security space, and what you are seeing in the field, specifically in terms of application security. Where are we today as opposed to where we were five years ago -- in general terms? Are companies really paying more attention, i.e., investing adequate time and resources within the application development cycle and seriously addressing security issues?

Julia Allen, SEI/CERT:  It’s hard to know for sure about organizations in general. On a positive note, major software vendors are investing some serious resources in making their products more secure. One case in point is the consortium of vendors that are part of SAFECode: EMC, Juniper, Microsoft, Nokia, SAP, and Symantec. Microsoft in particular has been a leader based on their Trustworthy Computing Initiative and their secure development life cycle, which they have shared publicly.

Some organizations that are developing software, particularly customer-facing applications, are paying more attention, as evidenced by some uptake of the CERT Secure Coding Standards for C and C++, the SANS Top 25 Most Dangerous Programming Errors, and OWASP’s Top Ten. Organizations that are acquiring software are beginning to include application security requirements in their requests for proposals and service level agreements, and are conducting third party security assessments. Two public examples include the Payment Card Industry Application Data Security Standard for protecting credit card data and the State of New York for the software they purchase. More organizations appear to be using the Common Vulnerabilities and Exposures (CVE) dictionary and the Common Weakness Enumeration (CWE) dictionary of software weakness types to help guide their development and acquisition efforts.
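To make one of these weakness categories concrete, here is an illustrative sketch (not drawn from the CERT standards themselves) of SQL injection, which appears in OWASP's Top Ten and the CWE dictionary mentioned above. The table and payload are hypothetical; the point is the contrast between concatenating attacker-controlled input into a query and letting the database driver bind it as a parameter.

```python
import sqlite3

# Illustrative only: a tiny in-memory database with one privileged user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection payload

# Vulnerable pattern: attacker-controlled input concatenated into SQL.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe pattern: the driver binds the value, so the payload is inert text.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the payload's OR clause matches every row
print(safe)        # no user is literally named "alice' OR '1'='1"
```

The concatenated query degenerates to `WHERE name = 'alice' OR '1'='1'` and returns every row, while the parameterized query returns none, which is the behavior the coding standards are trying to make the default.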

As an indication of international interest, the Cyber Security Knowledge Transfer Network in the UK conducted an international meeting on “Building Security In . . . Security, Assurance, and Privacy,” held at the British Embassy in Paris in March 2009. Carnegie Mellon participated in this event. A final report analyzes meeting results and presents a high-level roadmap for “developing and procuring software and systems which are resilient and sustainable by design,” including security and privacy.

On a more somber and perhaps more realistic note, Veracode commissioned Forrester to conduct a study that was released in April 2009. The study surveyed nearly 200 businesses and found that “while companies feel they know the make-up and business criticality of their mixed application portfolios, there is little confidence in the security quality of their applications. Companies are not doing enough to ensure the security of open source code, outsourced code, and commercial applications.” This is particularly true for small and medium enterprises, and is exacerbated by constraints imposed by the current economic climate.

At CERT, we’ve observed that building security in throughout the development life cycle is one of the systemic, root cause solutions to the widespread number of vulnerabilities (and exploitations of same) that we are all experiencing in today’s operational systems. We, along with other leaders in the field, have worked closely with the Department of Homeland Security’s Software Assurance Program in helping create the Build Security In web site, which describes a wide range of practices, knowledge, and tools. We have also recently published a book that describes recommended practices for software project managers. As part of CERT’s Podcast Series for Business Leaders, we’ve posted a number of podcasts, listed under Software Security, that help digest this complex topic a bite at a time.

CyLab Chronicles: If a concerned C-level executive or Board of Directors member -- other than the CIO or CTO -- wanted a way to gauge the level of attention paid to security within the software development cycle, informally, for herself, what are two or three off-the-cuff questions that she could ask? What does a genuine "security-enhanced software development process" look like?

ALLEN:  Useful questions to ask, including several from Fortify, include:

  • Have we identified our high-criticality, high-risk software applications that support key business processes/services – both those that we develop ourselves and those that we acquire? (Having the answer to this question is fundamental to the ones that follow. Given that you can’t secure everything, this is one way to determine where to focus attention.)
  • What type of intruder is likely to attack our high-risk applications? How would they do it? (Threat modeling and attack patterns are two key practices to assist with answering these questions respectively.)
  • Can our software engineers describe the most dangerous, commonly known application software vulnerabilities and how to mitigate or eliminate these? What are we doing to help them become better trained and educated so they can? Do our development teams have access to software security experts?
  • Do we know what software security practices to include in our standard software development life cycle or in the SDLC of our software suppliers? (You need to have a standard development process before tackling security.)

We are seeing an increase in the codification of software security best practices as part of the software development life cycle based on actual experience. Cigital and Fortify have studied the practices of nine organizations and released their Building Security In Maturity Model that reflects these observed practices. OWASP has published their Software Assurance Maturity Model. DHS has supported work to add software assurance practices to the SEI’s CMMI®. Two additional examples include Microsoft’s SDL and Cigital’s Touchpoints.

CyLab Chronicles:  In your work, you have focused a great deal on cultivating a "common sense" approach to making the business case for software assurance. Of course, too often, it all comes down to dollars and cents, instead of dollars and sense. So what would you like to tell us about the costs and benefits, and how to justify the commitment of resources?

ALLEN:  Software security is a pay me now, pay me later proposition. There is ample evidence indicating that it is much more cost effective (by factors of 100:1 or more) to address a security requirement or design flaw (that can propagate forward into code and production) as early in the life cycle as possible. The same is true for a security defect or coding error. You can fix it during code and test, or you can incur all of the costs (dollars and productivity losses) associated with releasing a patch into a production system. In fact, this aspect of software quality (fix it early) has been known for some time. But knowing this does not necessarily cause organizations and software development project managers to change their behavior. There are many things that we all know are good for us that we don’t do.
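The pay-now-versus-pay-later argument above lends itself to back-of-envelope arithmetic. The dollar figures below are entirely hypothetical; only the 100:1 ratio comes from the text.

```python
# Hypothetical numbers illustrating the 100:1 early-vs-late fix ratio.
cost_to_fix_in_design = 100          # assumed cost to fix a flaw at design time
production_cost_multiplier = 100     # the 100:1 factor cited above

cost_to_fix_in_production = cost_to_fix_in_design * production_cost_multiplier

flaws_caught_early = 50              # assumed flaw count for illustration
savings = flaws_caught_early * (cost_to_fix_in_production - cost_to_fix_in_design)
print(f"Catching {flaws_caught_early} flaws early saves ${savings:,}")
```

Even with modest assumed numbers, the avoided patch-and-redeploy cost dwarfs the early-fix cost, which is the crux of the business case.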

As discussed in our September 2008 Making the Business Case for Software Assurance Workshop and our April 2009 report on the same topic, a compelling business case includes cost/benefit analysis, defined measures, assessing risk, prioritizing mitigation actions, having a process into which software security practices can be integrated, and dealing with the companion issues around organizational change, awareness, training, and education. While there is no single model that we recommend for making the cost/benefit argument, there are some promising results and convincing case study data that can assist. What it really comes down to is how important is software security in comparison to other business- and technology-based investments, particularly for critical, high-risk applications? Do the costs and benefits that will be incurred today to integrate software security practices into the development life cycle significantly outweigh those that will be incurred later when the software is part of a production system? In effect, it’s a risk management decision.

CyLab Chronicles: Tell us about the Security Investment Decision Dashboard (SIDD). What is it, what does it do and who does it do it for?

ALLEN:  We observe that business leaders are becoming more aware of the need to invest in information and software assurance—to meet compliance requirements and optimize their total cost of ownership for software-intensive applications and systems. So how do we ensure that security investments are subject to the same decision criteria as other business investments – level the playing field if you will? And by so doing, are we able to justify investments that increase our confidence in our ability to protect digital information using software that is more able to resist, tolerate, and recover from attack?

The Security Investment Decision Dashboard provides a means for those making investment decisions to evaluate and compare several candidate security investments to help select which ones to fund. Its intent is to subject information and software assurance investment decisions to the same criteria that are used for other business investment decisions such as developing a new product or service, building a new facility, or considering a merger or acquisition. With this approach, security investments can then be presented, reviewed, analyzed, debated, and compared using the same scales, factors, and investment-selection criteria and processes. Investment progress can be tracked over time and criteria can be updated as business factors change.

The current version of the dashboard includes the following seven criteria categories, each supported by 2-5 additional sub-criteria referred to as indicators:

  1. Cost: What is the estimated total cost to accomplish this investment? (initial cost, life cycle cost, cost of not doing, cost savings beyond breakeven, risk reduction)
  2. Criticality & Risk: Degree to which this investment helps meet business objectives & risk management goals (investment criticality, degree of risk mitigated, protection of stakeholder interests)
  3. Time & Effort Required: Level of staff hours and time to reach break-even (senior leadership time, buy-in time, time to demonstration of results)
  4. Feasibility: Likelihood of investment success (on first attempt, on subsequent attempts, sponsor turnover, need to roll back)
  5. Positive Interdependencies: Reasonable change to existing processes and/or paves the way for future work (other investment dependencies on this one, support of compliance with current or new laws, uses existing knowledge and skills, degree of positive side effects)
  6. Involvement: Level of required involvement and buy-in (who needs to be involved, third parties, independent review or audit)
  7. Measurability: How measurable is the investment outcome? (quantified in tangible terms, use of existing performance measures)

Categories and indicators are ranked independent of any particular investment, based on what is most important to the business. The discussions resulting from this ranking process have proven to be quite valuable to clarify the basis for the organization’s investment decisions. Then each investment is scored against the set of ranked categories and indicators. Scores are totaled for each investment and compared.
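The ranking-and-scoring scheme just described is essentially a weighted sum: categories are weighted by business importance independent of any investment, each investment is scored per category, and weighted totals are compared. A minimal sketch follows; the categories shown are a subset of the seven above, and all weights, investment names, and scores are hypothetical.

```python
# A minimal sketch of the dashboard's scoring scheme; weights and
# scores are hypothetical, not taken from the actual SIDD.
weights = {
    "Cost": 5,                   # category rank (higher = more important)
    "Criticality & Risk": 7,
    "Feasibility": 4,
}

# Each candidate investment is scored 1-10 against every category.
investments = {
    "Static analysis tooling": {"Cost": 6, "Criticality & Risk": 8, "Feasibility": 7},
    "Developer training":      {"Cost": 8, "Criticality & Risk": 6, "Feasibility": 9},
}

def total_score(scores):
    # Weighted sum: each category score times its business-importance weight.
    return sum(weights[cat] * s for cat, s in scores.items())

# Totals are compared to decide which investments to fund first.
ranked = sorted(investments, key=lambda n: total_score(investments[n]), reverse=True)
for name in ranked:
    print(name, total_score(investments[name]))
```

Because the weights are fixed before any investment is scored, the comparison stays on the level playing field the dashboard is meant to provide, and re-ranking the categories as business factors change simply means editing the weights.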

Categories and indicators can be tailored to a particular organization’s decision criteria. An article describing the application of the dashboard to four software security investments is available on the DHS Build Security In web site.
