Dangerous Errors
  • OURSA, Their Presentations, and Your Follow-up Apr 20, 2018
    OURSA conference

    The RSA Conference descended on San Francisco again this year. It attracts hordes of infosec people who wander the jumbled grid of vendor expo halls and attend sessions. For several years it has been preceded by the BSidesSF conference, which is far smaller and far more focused on technical and practitioner tracks.

    For several years, and this year in particular, the RSA keynotes have skewed mostly-to-almost-entirely male. BSides also skews this way, as do many conferences. RSA’s response to this situation evoked the mundane refrain that not enough diverse speakers were proposed or submitted by the keynote sponsors.

    This prompted several people to challenge the assumption that speakers from under-represented groups are hard to find. Roughly five days later that challenge was transformed from an idea into the announcement of the OURSA conference. It promptly sold out in 12 hours.

    The speakers weren’t essentialized to their identity or set forth only for their personal experience. Their experience and identity informed the security and privacy work they’ve been doing on a daily basis. It was that work, that context, and that perspective that was set forth throughout every presentation.

    The format of the sessions contributed to both a focused message and a variety of voices. Sessions were broken into roughly 15 minute blocks followed by a moderated panel of the speakers. The moderators continued that focus on message and brought out discussions that helped tie the presentations together.

    Check out the recorded stream. It’s a long day of sessions, but it’s one well spent.

    It’s a reminder that these groups exist, that they’ve been participants in infosec since the beginning. There are professionals with a voice working on important problems.

    It’s a reminder that diversity enriches knowledge and points of view. Appsec, threat models, and privacy are enduring conference topics. Hearing them presented from different perspectives highlights important aspects that the usual lists and recommendations miss.

    It’s a reminder that inclusivity requires action to build programs and that representation matters. Speaking in support of an effort isn’t as strong as having members of an under-represented population speak for themselves. Urging people to “just submit” to a conference where they may be unsure they’re welcome isn’t as strong as inviting people who can set the standard for technical content and presentation skills.

    It’s refreshing to see how well a conference can be run — on schedule, high-information content, engaging speakers. It’s especially refreshing to see one that demonstrates how many of the familiar mantras of threat modeling, privacy, and appsec have failed to account for the context of underserved and vulnerable populations. Appsec and privacy need to raise the bar in terms of how they protect users and their data. To do so will require revisiting our understanding of these issues and how apps are or are not helping. What OURSA proved is that there are already people who understand this. Even better, they’re already working on solutions.

    The OURSA conference shouldn’t be necessary. The speakers and their work should be visible in other conferences, as should speakers like them. The presentations were far more interesting than yet another discussion of weaponizing XSS or shallow commentary on why users make security impossible. The type of work they’re doing, applying appsec to vulnerable populations and pushing for more privacy engineering, makes for engaging content. And it pushes for ways of making infosec pick up more of the burden for crafting effective solutions.

    I’m looking forward to 2019.

  • OWASP AppSec Cali 2018 Presentation Jan 30, 2018

    Here are slides for my presentation, "DevOps Is Automation, DevSecOps Is People".

    For me, automation is one of the most compelling aspects of DevOps. Without automation you won’t reach scale, you’ll struggle with maintenance and patch management, and you’ll only have a foggy notion of the risk your app has.

    Several AD&D Books on a Shelf

    In addition to scaling, we want to make repetitive and complex tasks automatic for the people who do them. Exposing DevOps teams to the tasks of building and maintaining software shows that everybody hurts sometimes.

    The cloud has enabled systems to be abstracted to code and APIs. This doesn’t mean that they’ll be more secure, but it does mean that the maturity you bring to code quality for your app can translate to the code quality for your systems and architecture. What we don't have are APIs for people.

    And software is ultimately made by and made for people. You might even say it's made of people. (Some apps are more people than others....)

    This presentation was a bit of a survey of topics, comments, and examples of how to improve not only how we work with people to add security to the DevOps pipeline, but additional things to consider as we build threat models for the apps being deployed. For example, it's one thing to talk about weakness in "business logic" that may lead to privilege escalation or data theft. It's another to consider how an app's features can be used to abuse or harass other users.

    In appsec we have lists, more lists, recommendations, secure coding guidelines, and more lists. But they're meaningless without people to place them in context and take action. Communication and empathy are key to understanding how to improve the way we integrate security into processes successfully and build apps that serve people well.

    In a way they're like tabletop role-playing games. RPGs have lists and tables and appendices and dice and more tables and lists. They have threats and unexpected situations. But it's the people that bring the game to life.

  • The Fourth Year of the Fourth Edition Jan 14, 2018

    Today is the fourth anniversary of the fourth edition of Anti-Hacker Tool Kit. Technology changes quickly, but many of the underlying principles of security remain the same. Here's an excerpt from the introduction.

    AHT4

    Welcome to the fourth edition of the Anti-Hacker Tool Kit. This is a book about the tools that hackers use to attack and defend systems. Knowing how to conduct advanced configuration for an operating system is a step toward being a hacker. Knowing how to infiltrate a system is a step along the same path. Knowing how to monitor an attacker’s activity and defend a system are more points on the path to hacking. In other words, hacking is more about knowledge and creativity than it is about having a collection of tools.

    Computer technology solves some problems; it creates others. When it solves a problem, technology may seem wonderful. Yet it doesn’t have to be wondrous in the sense that you have no idea how it works. In fact, this book aims to reveal how easy it is to run the kinds of tools that hackers, security professionals, and hobbyists alike use.

    A good magic trick amazes an audience. As the audience, we might guess at whether the magician is performing some sleight of hand or relying on a carefully crafted prop. The magician evokes delight through a combination of skill that appears effortless and misdirection that remains overlooked. A trick works not because the audience lacks knowledge of some secret, but because the magician has presented a sort of story, however brief, with a surprise at the end. Even when an audience knows the mechanics of a trick, a skilled magician may still delight them.

    The tools in this book aren’t magical; and simply having them on your laptop won’t make you a hacker. But this book will demystify many aspects of information security. You’ll build a collection of tools by following through each chapter. More importantly, you’ll build the knowledge of how and why these tools work. And that’s the knowledge that lays the foundation for being creative with scripting, for combining attacks in clever ways, and for thinking of yourself as a hacker.

    I chose magic as a metaphor for hacking because it resonates with creative thinking and combining mundane elements to achieve extraordinary effects. Hacking (in the sense of information security) involves knowing how protocols and programs are constructed, along with the tools to analyze and attack them. I don't have a precise definition of a hacker because one isn't necessary. Consider it a title to be claimed for yourself or conferred by peers — your choice.

    Another reason the definition is nebulous is that information security spans many topics. You might be an expert in one, or a dabbler in all. In this book you’ll find background information and tools for most of those topics. You can skip around to chapters that interest you.

    The Anti- prefix of the title originated from the first edition's bias towards forensics that tended to equate Hacker with Attacker. It didn't make sense to change the title for a book that's made its way into a fourth edition. Plus, I wanted to keep the skull-themed cover.

    Regard that prefix as an antidote to the ego-driven, self-proclaimed hacker who thinks knowing how to run canned exploits out of Metasploit makes them an expert. They only know how to repeat a simple trick. Hacking is better thought of as understanding how a trick is constructed or being able to create new ones of your own.

    Each chapter sets you up with some of that knowledge. And even if you don't recognize an allusion to Tenar or Gaius Helen Mohiam, there should be plenty of technical content to keep you entertained along the way.

    I hope you enjoy the book.

  • Crucial Timing for Critical Vulns Jan 12, 2018

    Time, like love, is a universal subject in songs. Time is also a universal theme when discussing vulns; it’s a key component of risk. Equally universal is the heartbreak we feel when finding out about critical vulns or trying to figure out how to fix them.

    Identifying vulns is an important part of evaluating an app’s overall risk. Vuln discovery comes from many sources — scanners, crowds, pen tests, red teams, devs, or others. These are also affected by the budget available for vuln discovery.

    Fixing vulns is an important part of reducing an app’s overall risk. Whether you primarily find or fix vulns, it’s helpful to shift perspective between the two tasks. Both sides should be working against similar threat models, but the information they have will affect how each interprets, expands, or contracts those models. It’s mutually beneficial when teams can educate each other on how they perceive these models. This is especially important for resolving vulns efficiently and effectively.

    My previous post looked at average resolution times across vuln categories. It revealed a stark outlier: the category of Redirects and Forwards took by far the greatest number of days to resolve, with an implication that such vulns likely pose the least risk and therefore require the least investment of DevOps attention.

    In continuing the theme of resolution, here’s a graph that shows the relative number of days to resolve vulns of different risk levels. The days are measured against the average resolution time across all vulns. Fewer days represents a faster fix, greater days a longer one.

    Relative days to fix critical to very low risk issues
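    The relative measurement behind a graph like this is straightforward to compute: take each risk level's average resolution time and subtract the overall average. Here's a minimal sketch in Python; the resolution times are made-up numbers for illustration, not the data behind the post.

    ```python
    from statistics import mean

    # Hypothetical resolution times in days, keyed by risk level.
    # These values are illustrative only.
    resolutions = {
        "critical": [3, 5, 8],
        "high":     [20, 30, 25],
        "medium":   [15, 18, 12],
        "low":      [60, 90, 75],
    }

    # Average resolution time across all vulns, regardless of risk.
    all_days = [d for days in resolutions.values() for d in days]
    overall_avg = mean(all_days)

    # Deviation of each risk level's average from the overall average:
    # negative means faster than average, positive means slower.
    relative_days = {
        risk: mean(days) - overall_avg
        for risk, days in resolutions.items()
    }

    for risk, delta in sorted(relative_days.items(), key=lambda kv: kv[1]):
        print(f"{risk:>8}: {delta:+.1f} days vs. average")
    ```

    An idealized result would show critical vulns well below zero and low-risk vulns well above it; the "messy middle" discussed below shows up when high- and medium-risk deviations don't fall in risk order.
    
    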

    It's refreshing to see that the most critical vulns are fixed more quickly than average, and unsurprising to see that low-risk vulns take far longer than average. The top and bottom match what we’d expect from DevOps teams prioritizing their efforts based on risk.

    But the graph doesn’t form an idealized Z, where the number of days would be least (left-hand side) for the most critical vulns and progressively increase (towards right-hand side) for less critical ones. What’s going on in the middle?

    This messy middle provides a seed for many different discussions. As a metric, the number of days until a vuln is resolved is important, but it offers an incomplete story. Some vulns might be easy to find, but complex to fix. Some vulns might have one level of risk when reported, but a lower one when reviewed by the DevOps team — not uncommon when the person who discovers the vuln doesn’t have the full context of the vuln’s impact. Others might be resolved by accepting the risk.

    (We might also use this to start a discussion on whether risk has been assigned in an effective manner. Calculating and managing risk is something for future posts.)

    As in that previous post, I haven’t noted any specific numbers related to the days. We’ll tackle SLAs and concrete timelines in a future post. The focus here is whether the criticality of a vuln influences the time it takes to resolve it. In the ideal world, they should be inversely related (more critical takes less time). But we live in the real world, where DevOps teams must make engineering trade-offs, address underlying problems, revisit assumptions, and review architectures.

    Fortunately, it looks like many organizations focus attention on critical vulns. The real world is messy — just listen to all those songs about time and love. Metrics don’t make things any cleaner, but they can give us different ways to evaluate just how messy it is.

  • Resolutions for a New Year of Vulns Dec 26, 2017

    Throughout 2017 I explored vuln data to highlight strategies for measuring and maximizing the efficiency of vuln discovery. The primary themes were budget and time — deciding how best to allocate money among different approaches plus evaluating the triggers and frequency of security testing. I was fortunate to present much of this at various conferences, which gave me a chance to collect feedback and engage in interesting discussions about the challenges that DevOps teams face.

    In 2018 I’ll have even more data with more dimensions to explore. While it’s important to lay out a strategy for finding vulns in an app, knowing what risk an app has is only the start. Reducing that risk is the important next step. Strategies for resolving vulns will be a new theme for the new year.

    Resolving vulns relates to a trope along the lines of, “Show me your budget and I’ll tell you what you value.” This applies to time as well as money. The vulns you choose to fix and the speed with which you fix them reflect an investment in security. Ideally, it also reflects an understanding of risk.

    As a teaser of the content coming next year, here’s a graph that reflects how an organization resolves vulns. It shows the deviation in resolving a specific category of vuln from the average resolution time across all vulns. Fewer days represents a faster fix, greater days a longer one.

    Relative days to fix common vulns

    In this graph, vulns due to misconfigurations are resolved close to the average, whereas redirects and forwards languish in the DevOps queue for quite a while longer.

    We can follow many paths of discussion from a graph like this. Several things influence the speed of fixes, from the complexity of the code to the risk they carry. In the case of redirects and forwards, it seems fair to guess that such vulns represent little or no risk — a hypothesis further reinforced by the category disappearing from the recent OWASP Top 10 update. (We’ll also examine how prevalence of a vuln type impacts its resolution.)
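    For context, the redirects and forwards category covers unvalidated redirects, where an app forwards users to whatever URL a parameter supplies (e.g. /login?next=https://evil.example). A minimal sketch of a guard against it follows; the allow-listed hostnames and function name are illustrative, not from any specific app.

    ```python
    from urllib.parse import urlparse

    # Hosts we consider safe redirect targets; illustrative allow-list.
    ALLOWED_HOSTS = {"example.com", "www.example.com"}

    def safe_redirect_target(url: str) -> bool:
        """Return True only if the URL is relative or points at an allowed host."""
        parsed = urlparse(url)
        # Relative paths stay on our own site.
        if not parsed.scheme and not parsed.netloc:
            return True
        # Absolute URLs must use http(s) and an allow-listed host.
        # This also rejects protocol-relative URLs like //evil.example/phish,
        # which have a netloc but no scheme.
        return parsed.scheme in ("http", "https") and parsed.netloc in ALLOWED_HOSTS

    print(safe_redirect_target("/account"))
    print(safe_redirect_target("https://evil.example/phish"))
    ```

    The fix is usually this simple, which may help explain why these vulns sit in the queue: a low-risk finding with an easy patch still competes for attention against higher-risk work.
    
    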

    I haven’t even noted what the average number of days is. In a way, it doesn’t matter. The vuln category at the top of the graph clearly receives more attention than the two I’ve labeled. In other words, DevOps teams have a clear priority for such vulns. That’s one indicator of the distinction of BugOps vs. DevOps. Even better is if that category were to completely disappear over time.

    It will still be important to discuss threat models and how they help measure risk, especially working with clear, well-considered models instead of falling into counterproductive traps. From there, we’ll take a view of the risk we measure and explore the different ways to reduce it.

    This year has only a few days left. A few days left should be the typical response to fixing vulns. Come back next year to find out what new vuln data says about those few days.



Cybersecurity and more | © Mike Shema