Finding an Audience to Fix Flaws Oct 4, 2018
Infosec conferences are a great venue for sharing tools, techniques, and tactics across a range of security topics from breaking systems to building them. Not only are they a chance to learn from peers, but to meet new ones and establish connections with others who are tackling similar problems. One eternal topic is the “shift left” motto — building security into the SDLC as early as possible.
One way to shift left is to make sure developers aren’t left out of the conversation. Not every dev team has the budget to attend security conferences and quite often security conferences are attended by practitioners who aren’t building software as part of their daily work. I’ve attended many security conferences (and enjoy them!), but I’ve also wanted to find conferences that are oriented towards developers in order to bring an appsec message to them rather than expect them to discover security by chance.
This week I had the opportunity to present at the Star West developer conference. My presentation was about building metrics around the time and money invested in finding vulns within apps. It pulls data from real-world bounty programs and pen tests. It’s not hard to determine that finding vulns is important. But it can be hard to figure out when the time and money spent on finding them is well spent and when those investments could be directed to other security strategies.
A few points of the presentation were:

- Always strive to maintain an inventory of your apps, their code, and their dependencies. This is easier said than done, but it'll always be a foundational part of an appsec program (a minimal sketch of one approach follows this list).
- Find metrics that are meaningful to your program. For example, if you're running a bounty program, when would it make more sense to attract more participants vs. engage the most prolific vuln reporters? If you've never conducted any security testing against an app, should you start with a pen test or a bug bounty? How might that choice affect your appsec budget?
- Organizations and apps vary widely. Resist trying to compare your own metrics to other programs whose context, assumptions, and environments don't match yours. Instead, follow your own metrics over time in order to observe trends and whether they're influenced by your security efforts.
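As a minimal sketch of what such an inventory could look like, here's one way to collect per-app Python dependencies. The repo layout and the requirements.txt convention are assumptions for illustration, not a prescription:

```python
import pathlib

def inventory_dependencies(root: str) -> dict:
    """Walk a directory of app repos and collect each app's
    declared dependencies from its requirements.txt."""
    inventory = {}
    for req_file in pathlib.Path(root).glob("*/requirements.txt"):
        app_name = req_file.parent.name
        inventory[app_name] = [
            line.strip()
            for line in req_file.read_text().splitlines()
            # Skip blank lines and comments.
            if line.strip() and not line.strip().startswith("#")
        ]
    return inventory

if __name__ == "__main__":
    for app, deps in inventory_dependencies("./repos").items():
        print(f"{app}: {len(deps)} dependencies")
```

A real inventory also has to cover other package ecosystems and transitive dependencies, but even a simple listing answers the first question an appsec program gets asked: what do we have?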
Although the presentation begins with data related to finding vulns, it doesn’t forget that fixing vulns is what contributes to making apps more secure. Regardless of how you’re discovering vulns, your DevOps team should be fixing them. Which also means you should have some metrics around that process.
One of the ways I looked at this data was in the relative time it takes organizations to fix different categories of vulns. Rather than ask how long it takes to fix all vulns, I thought it’d be interesting to see how quickly different types of vulns are fixed relative to each other.
In the following chart I took the average time to fix all vulns, then compared different categories against that average. In this case, it wasn't too surprising to see that Remote Code Execution stood out as requiring fewer days than average to fix; it's typically a high-impact vuln that puts an app at significant risk of compromise. On the other hand, Redirects & Forwards took longer than average to fix. One theory is that the extra days reflect the relatively low risk of such vulns, which don't demand immediate attention. Another is that such vulns are more difficult to fix due to the nuance of allowing certain types of redirects while disallowing others. Knowing that Redirects & Forwards dropped off the OWASP Top 10 list in the recent 2017 update lends additional support to the idea that these are lower-risk vulns.

Fast fixes and slow burns

In any case, having metrics puts us on the path to data visualization. These steps let us start answering initial questions about the state of an app's security, and then give us a chance to ask follow-up questions about whether processes are working or whether we have blind spots in the data we'd like to have.
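As a sketch of the arithmetic behind that comparison (the records below are made-up illustrations, not the real bounty and pen test data), each category's mean time to fix is compared against the overall mean:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (vuln category, days from report to fix).
fixes = [
    ("Remote Code Execution", 12), ("Remote Code Execution", 18),
    ("Redirects & Forwards", 45), ("Redirects & Forwards", 60),
    ("Cross-Site Scripting", 30), ("Cross-Site Scripting", 25),
]

overall = mean(days for _, days in fixes)

by_category = defaultdict(list)
for category, days in fixes:
    by_category[category].append(days)

# Negative delta: fixed faster than average; positive: slower.
for category, days in sorted(by_category.items()):
    print(f"{category}: {mean(days) - overall:+.1f} days vs. average")
```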
Appsec doesn’t happen in a vacuum. There’s a big difference between lamenting a perceived lack of security awareness among developers and engaging them on security issues that are relevant to their work. In addition to being relevant, the message should be constructive. Adding metrics to the discussion helps illuminate when efforts are successful, where they can be improved, and where more data is needed. Include the DevOps team as active participants in developing questions and metrics. Audience participation is a great way to build better appsec.
Preparing for the Next Data Breach Jun 6, 2018
Data contaminates everything it touches

Data breaches happen. That doesn't mean it's acceptable for application owners to neglect security or be cynical about protecting data. It means that app owners need to be aware of how their organizations and the data they collect might be targeted. They need to review what controls and processes they have in place to make attacks more difficult or easier to detect. And it means they should be ready to respond quickly and effectively in the event of a breach.
Be Practical
Data you don’t possess can’t be compromised. Of course, many apps are driven by data or need to collect data in order to provide value. But in many cases the utility of data degrades over time. There’s a point where the liability of holding onto data outweighs its value.
One way to approach data handling is reframing the metaphor you associate with it. For example, data can be toxic — accumulate as little as possible, be careful about collecting large doses. Perhaps it’s radioactive — it contaminates every system it touches and therefore those systems need to be protected. Or maybe you consider it digital oil — a valuable, finite resource that has significant negative consequences when it spills.
In any case, be aware of the nature of the data you collect, why it’s being collected, where it’s being stored, and when you can delete it. Again, these are easier said than done, but laying out a framework for the lifecycle of data (from collection to deletion) will help guide how you protect it.
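As a minimal sketch of one such lifecycle rule, assuming records that carry a collection timestamp and an illustrative 90-day retention window:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # Assumed policy window for illustration.

def purge_expired(records: list) -> list:
    """Keep only records still inside the retention window.
    Each record is assumed to be a dict with a timezone-aware
    'collected_at' datetime."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["collected_at"] >= cutoff]
```

The point isn't this particular function; it's that a retention window can't be enforced until someone has written down when each piece of data was collected and when it expires.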
Be Proactive
Encryption is necessary to protect data. It’s not sufficient (strong access control and authorization schemes are important as well), but it’s a baseline expectation for handling data.
Encrypt communications — HTTPS should be enabled and enforced by default with an HSTS header. The Let’s Encrypt project has made it easier to deploy HTTPS sites as part of a DevOps process. It provides free certs, removing objections of initial cost, and its ACME protocol enables automation to maintain, renew, and revoke certs.
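As a minimal sketch of enforcing both behaviors in an app, assuming Flask as the framework (in practice the redirect and header often belong at the load balancer or web server instead):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect plain-HTTP requests to their HTTPS equivalent.
    if not request.is_secure:
        return redirect(
            request.url.replace("http://", "https://", 1), code=301
        )

@app.after_request
def add_hsts(response):
    # Tell browsers to stick with HTTPS for a year, subdomains included.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response
```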
Encrypt storage — Data lands in surprising places, from the typical databases and data stores to AWS S3 buckets that notoriously expose their content due to misconfigurations. Cloud services have made it easier to deploy instances with encrypted filesystems and to interact with APIs that manage hardware-based key stores. Of course, apps need to work on decrypted data (unless you've moved into more sophisticated crypto), so don't expect this to be the only protection you need.
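A minimal sketch of encrypting data before it lands in storage, using the Fernet recipe from Python's cryptography package (the key handling here is deliberately simplified):

```python
from cryptography.fernet import Fernet

# In practice the key comes from a key management service,
# never generated and stored alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"user records destined for a bucket or database"
ciphertext = f.encrypt(plaintext)

# Only the ciphertext is written to storage; decrypt on the way out.
assert f.decrypt(ciphertext) == plaintext
```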
Protect secrets — Encryption will need private keys and shared secrets. APIs will need access tokens. Make it unnecessary to place credentials in code. Monitor commits and repos for accidental inclusion of private keys, tokens, and passwords.
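A sketch of the kind of scan that can catch accidental inclusions before they're pushed. The two patterns below (AWS access key IDs and PEM private key headers) are illustrative, not an exhaustive ruleset:

```python
import pathlib
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format.
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def scan(root: str):
    """Flag files under root that appear to contain secrets."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                print(f"possible secret in {path}: {pattern.pattern}")

scan(".")
```

Hooking a scan like this into a pre-commit hook or CI job turns "don't commit secrets" from a plea into a check.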
Rotate secrets — Have a process to revoke and replace compromised or exposed secrets. If you’re encrypting data, but it takes two weeks to rotate encryption keys, then that’s a dangerous level of exposure.
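The MultiFernet helper in Python's cryptography package shows the shape of that process: new writes use the newest key while old ciphertexts stay readable and can be re-encrypted in place. A sketch:

```python
from cryptography.fernet import Fernet, MultiFernet

old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
old_f, new_f = Fernet(old_key), Fernet(new_key)

token = old_f.encrypt(b"data encrypted before the rotation")

# The first key encrypts; any listed key may decrypt.
rotator = MultiFernet([new_f, old_f])

# rotate() re-encrypts the token under the newest (first) key,
# after which the old key can be retired.
new_token = rotator.rotate(token)
assert new_f.decrypt(new_token) == b"data encrypted before the rotation"
```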
Conduct security testing — Run pen tests to evaluate the baseline and recheck at key milestones. Maintain a bug bounty program to manage vulnerability reports. Use red team exercises to check for gaps in your monitoring and detection capabilities.
Be Prepared
Being prepared means you don't have to make every decision and take every action under pressure. During a breach investigation there will always be pressure, time constraints, and legal questions, but being prepared with data and with answers to basic questions frees up time to tackle the challenges you didn't anticipate.
Just working out what data is being collected and where it resides will reveal which stakeholders may need to be part of the conversation when a breach occurs. It also helps identify who should be responsible for encrypting communications and storage. The modern DevOps model builds security into processes and technology. Security teams become participants in the CI/CD process, not gatekeepers who only drop in for random inspections.
Discover what you don’t know, whether it’s a data store that’s been forgotten and ill-protected or access controls that lend more access than control. Use a pen test to review the security of your apps. Use a bounty program to enable a continuous feedback loop. Engage a red team to test the technology and processes you’ve put in place to detect and respond to breaches.
All of these exercises will help you in the unfortunate event that a breach occurs. They’ll also help focus on recovery and improvement instead of falling into blame and chaos.
OURSA, Their Presentations, and Your Follow-up Apr 20, 2018
The RSA Conference descended on San Francisco again this year. It attracts hordes of infosec people who wander the jumbled grid of vendor expo halls and attend sessions. For several years it has been preceded by the BSidesSF conference, which is far smaller and far more focused on technical and practitioner tracks.
For several years, and this year in particular, the RSA keynotes have skewed mostly-to-almost-entirely male. BSides also skews this way, as do many conferences. RSA's response to this situation invoked the mundane refrain that the keynote sponsors hadn't proposed or submitted enough diverse speakers.
This prompted several people to challenge the assumption that speakers from under-represented groups are hard to find. Roughly five days later that challenge was transformed from an idea into the announcement of the OURSA conference. It promptly sold out in 12 hours.
The speakers weren't essentialized to their identity or put forward merely to recount their personal experience. Their experience and identity informed the security and privacy work they've been doing on a daily basis. It was that work, that context, and that perspective that came through in every presentation.
The format of the sessions contributed to both a focused message and a variety of voices. Sessions were broken into roughly 15-minute blocks followed by a moderated panel of the speakers. The moderators maintained that focus and drew out discussions that helped tie the presentations together.
Check out the recorded stream. It’s a long day of sessions, but it’s one well spent.
It's a reminder that these groups exist and that they've been participants in infosec since the beginning: professionals with a voice, working on important problems.
It’s a reminder that diversity enriches knowledge and points of view. Appsec, threat models, and privacy are enduring conference topics. Hearing them presented from different perspectives highlights important aspects that the usual lists and recommendations miss.
It’s a reminder that inclusivity requires action to build programs and that representation matters. Speaking in support of an effort isn’t as strong as having members of an under-represented population speak for themselves. Urging people to “just submit” to a conference where they may be unsure they’re welcome isn’t as strong as inviting people who can set the standard for technical content and presentation skills.
It’s refreshing to see how well a conference can be run — on schedule, high-information content, engaging speakers. It’s especially refreshing to see one that demonstrates how many of the familiar mantras of threat modeling, privacy, and appsec have failed to account for the context of underserved and vulnerable populations. Appsec and privacy need to raise the bar in terms of how they protect users and their data. To do so will require revisiting our understanding of these issues and how apps are or are not helping. What OURSA proved is that there are already people who understand this. Even better, they’re already working on solutions.
The OURSA conference shouldn't be necessary. The speakers and their work should be visible at other conferences, as should speakers like them. The presentations were far more interesting than yet another discussion of weaponizing XSS or shallow commentary on why users make security impossible. The type of work they're doing, applying appsec to vulnerable populations and pushing for more privacy engineering, makes for engaging content. And it pushes infosec to pick up more of the burden of crafting effective solutions.
I’m looking forward to 2019.