Dangerous Errors
  • Secure Design Practices for Verifying Vuln Fixes Dec 12, 2017

    The pen test lifecycle is coming to a close. The previous posts have weighed heavily on getting the process started and running smoothly. After all, it’s important to identify vulns within your apps. But most important is fixing them so the app’s users and data can remain well-protected.

    Part of keeping a pen test successful is ensuring that the pen testers submit findings with clear risk and reproduction steps. This helps developers understand and address the problem.

    A pen test isn’t over once the final report has been delivered. Ideally, the pen testers will remain available to confirm the fixes for each of their findings. Here are a few things to keep in mind when going through the results of a pen test.

    1. Resolve Findings Promptly

    The time it takes to release fixes is affected both by the complexity of the vuln and by the maturity of the DevOps team. An efficient team can release a fix within a few days. Unfortunately, sometimes a fix can take a few months.

    Part of the process for resolving findings is considering their risk. Critical vulns should be addressed quickly. Other vulns may not need to be addressed at all. These latter types of vulns fall into an “Accepted Risk” category.

    Accepted Risk isn’t a loophole to avoid fixing issues. It’s an acknowledgement that some design decisions trade off different levels of security. Some decisions may involve choosing between two competing approaches to usability without significantly sacrificing security for either one.

    2. Address the Underlying Cause

    One reason to have pen testers confirm a fix is to ensure the vuln is correctly resolved. In the worst case, a developer may have simply put an attack payload on a deny list instead of applying a proper countermeasure. As an egregious example, this would be like blocking “alert” or “javascript” to prevent a cross-site scripting vuln instead of correctly applying the relevant output encoding for the payload.
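    To make the contrast concrete, here's a minimal Python sketch. The function names and the HTML context are hypothetical, but the pattern is the point: a deny list mangles the payload yet leaves the markup live, while output encoding renders it inert.

```python
import html

# Fragile approach: deny-listing known payload substrings.
# Trivially bypassed, e.g. with "<img src=x onerror=...>" or mixed case.
def denylist_filter(value: str) -> str:
    for bad in ("alert", "javascript"):
        value = value.replace(bad, "")
    return value

# Proper countermeasure: encode output for the HTML context it lands in,
# so the browser treats untrusted input as text, not markup.
def render_comment(value: str) -> str:
    return "<p>" + html.escape(value, quote=True) + "</p>"

payload = "<script>alert(1)</script>"
print(denylist_filter(payload))  # still contains live <script> tags
print(render_comment(payload))   # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

    Note that the correct encoding depends on where the data lands: an HTML attribute, a JavaScript string, and a URL each need different treatment.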

    3. Look for Vuln Patterns Elsewhere

    The nature of a blackbox pen test means that the pen testers don’t have full insight or visibility into the app. As a developer, you do.

    Certain types of vulns have patterns that lend themselves to quick searches for similar problems elsewhere in the code. This might be a misused function that led to an injection attack, a missing CSRF token, or a missing access control check. Take the time to review similar code paths for this kind of vuln.
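    A pattern hunt can be as simple as a small stdlib-only scanner. The regex below is a hypothetical example targeting SQL built from f-strings or string concatenation; in practice you'd tailor it to the specific misused function from the finding.

```python
import re
from pathlib import Path

# Hypothetical risky pattern: queries assembled via f-strings, "+", or
# "%"-formatting instead of parameterized placeholders.
RISKY_CALL = re.compile(r'execute\(\s*(f["\']|["\'][^"\']*["\']\s*[+%])')

def find_similar(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, line) for each suspicious call site."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if RISKY_CALL.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

    A scanner like this won't replace a code review, but it turns one finding into a checklist of candidate code paths within minutes.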

    Conduct a postmortem for high and critical vulns. This exercise brings a team together to discuss what went wrong, evaluate whether a proposed fix will be effective, and create action items for additional countermeasures. The goal of a postmortem is to collect and share knowledge about important security vulns. It’s not an exercise in placing blame. While the presence of a vuln implies that someone made a mistake, it’s important to understand how the mistake was made and how it could be prevented in the future.

    4. Create a Regression Test

    When possible, create a regression test that can reproduce and catch the vuln in the future. A regression test is also a way to discover other areas where a vuln may be lurking. (This ties into step three above.) Tests should be an integral part of a DevOps pipeline. They ensure code quality as well as build confidence in an app’s stability under constant change.

    Always prefer creating a regression test over leaving a comment in the code. Comments go stale quickly, can be ambiguous, and are often incorrect. Code that is self-documenting (e.g. informative function and variable names) and whose functions are short enough to fit within a page of text is far easier to maintain.

    Going through the effort of creating regression tests also helps reinforce an understanding of the nature of a vuln. By reproducing it, a developer can get a better appreciation for the underlying problem and consider more effective countermeasures. It’s also a way of determining whether your test infrastructure is flexible enough to handle certain types of error cases. For example, if you have no way of rendering a web page or inspecting the DOM produced by a resource call, then you may not be able to effectively test for cross-site scripting.
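    As a sketch of what a regression test can look like, here's a hypothetical open redirect finding pinned by a test. The `safe_redirect_target` helper, the allowed host, and the original payload are all illustrative, not from any particular app.

```python
from urllib.parse import urlparse

# Assumption for this sketch: the app only serves example.com.
ALLOWED_HOSTS = {"example.com"}

def safe_redirect_target(url: str) -> bool:
    """Allow relative paths; absolute URLs must stay on an allowed host."""
    parsed = urlparse(url)
    return parsed.netloc == "" or parsed.netloc in ALLOWED_HOSTS

# Regression test pinning the original finding: an open redirect via a
# protocol-relative URL ("//evil.example"), which urlparse treats as absolute.
def test_open_redirect_regression():
    assert safe_redirect_target("/account")
    assert not safe_redirect_target("//evil.example/phish")
    assert not safe_redirect_target("https://evil.example/phish")
```

    The protocol-relative case is the one a naive "starts with /" check misses, which is exactly the kind of bypass a regression test should lock in.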

    It may not be possible to create a regression test for every vuln, especially complex ones that require multiple steps or exercise a series of flaws.

    A pen test should equip you with knowledge about the current risk exposed by your app and provide recommendations for reducing the chance of the app’s data or users being compromised. The pen test may be a point-in-time snapshot of the app’s flaws, but how you address the vulns will have a lasting impact on its security.

    Make it harder for vulns to appear. Refactor code so that developers can more easily do the right thing by default. Check out this blog post for additional ideas on making vulns a rare occurrence.
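    One way to make the right thing the default is an API where escaping is automatic and raw output requires an explicit opt-in. This is a minimal sketch (the `SafeHTML` marker and `render` helper are hypothetical), mirroring how auto-escaping template engines work:

```python
import html

class SafeHTML(str):
    """Marker type: content the caller explicitly vouches for as trusted."""

def render(template: str, **params) -> str:
    # Escape every parameter by default; only explicit SafeHTML passes through.
    safe = {
        k: v if isinstance(v, SafeHTML) else html.escape(str(v))
        for k, v in params.items()
    }
    return template.format(**safe)

print(render("<h1>{title}</h1>", title="<b>hi</b>"))
# <h1>&lt;b&gt;hi&lt;/b&gt;</h1>
```

    With this shape, a developer has to go out of their way (wrapping a value in `SafeHTML`) to introduce an XSS risk, rather than having to remember to escape every value.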

    Now that you’ve completed a pen test, it’s time to start planning for the next one. The frequency of testing can align with a calendar, e.g. semi-annual or quarterly, or align with major code releases. In any case, the speed with which you’re able to resolve vulns will also be a signal for how well you’ll be able to manage more testing.

  • Avoid BugOps, Do DevOps Oct 26, 2017

    DevOps aims to release code quickly with confidence. Frequent, fast releases aren’t the hard part. The challenge is achieving justifiable confidence that changes won’t break the production environment and, when that inevitably happens, that teams are able to quickly analyze and resolve problems.

    Etching of villagers building a fence

    Not chasing vulns, but crafting designs that defeat vuln classes

    This level of maturity requires smart investment in automation, testing, and monitoring. (And people! We’ll dive into that angle in a different post.) Automation increases the pace at which code goes through review, testing, and deployment. Testing (hopefully) detects errors and halts the deployment to prevent bugs from reaching production. Monitoring helps ensure the app remains stable under constant change by providing feedback about its health and activity.

    Code isn’t perfect. Perhaps an automated step mishandled an error condition, or testing had a gap in coverage that didn’t exercise functionality affected by a code change. Or maybe the app’s monitoring missed a type of event or omitted useful info. Bugs happen.

    Vulnerabilities are bugs that impact the security of the app, its data, or its users. Vuln discovery is important. It’s one of the reasons bug bounties have become so pervasive. We want to know what bugs our apps have, especially if they’ve reached production. And we also have to fix them.

    Being able to fix vulns fast is commendable. But a too-narrow focus on speed can turn DevOps into BugOps. BugOps is releasing code quickly to fix vulns without considering their underlying cause. It leads to an endless loop of find-fix-repeat. While it’s important to fix vulns promptly, just adding quick patch after quick patch only makes an app more brittle.

    Metrics are a critical tool for decision making. But a shortsighted devotion to metrics aggravates the BugOps mentality. Adhering to SLAs while ignoring root causes creates the illusion of secure code. A quickly-patched app may still rest on a weak architecture.

    Another tenet of DevOps is building feedback loops — collecting and responding to actions and events throughout the development pipeline. This should apply equally to vuln discovery.

    When vulns appear in production, it’s especially important to analyze how they arrived there and what quality controls they bypassed. They might be due to a mistake, where a coding guideline or established process wasn’t followed. Or they might be due to a misunderstanding, where some flaw in the app’s architecture was exposed or some process didn’t exist.

    This analysis can inspire fundamental changes to an app’s design that sweep away whole classes of vulns. Or it may introduce controls that make the exploitation of vulns less impactful and more evident.

    Good analysis provides insight into gaps in tools, knowledge, or process. For example, if your testing framework can’t model the types of vulns that are being reported, then you have two problems. One, you won’t be able to create effective regression tests. Two, you’re being underserved by automation.

    Good metrics provide insight into how well a DevOps team handles security. Collecting metrics emphasizes what topics are important (hence, worthy of measure). Metrics over time produce trends. Trends provide feedback about the effectiveness of security tactics such as introducing a new tool, adjusting a process, or adopting new programming patterns. Some useful metrics related to vuln discovery are

    • Type of discovered vulns. Do certain categories stand out? Do they share similar causes?
    • Risk of discovered vulns. Ideally, this would be a common rating based on severity indicated by a CVSS score. No rating is perfect, but CVSS provides a common frame of reference for severity that informs risk.
    • Speed of fix. What was the time between discovery of the vuln and the code commit that fixed it? How does this measure against expectations or explicit SLAs?
    • Speed of deployment. How long does it take for a commit to reach production? Is there a fast-path for code to address critical issues? Does the app have feature flags that can trivially enable/disable problem areas until a fix is ready?
    • Location (e.g. files, objects, functions) of vulns within source code. Review the commit history associated with vulns. Do developers repeatedly address vulns in a particular code path? Is a vulnerable pattern repeated elsewhere, waiting to be reported as vulnerable? Are particular developers responsible for weaker code? Is any automation or tool capable of identifying the vuln?
    • Staleness of the location of vulns within source code. In addition to space (i.e. where), capture the time associated with the vuln’s fix. When was the last time the affected code was touched? Is it related to older, legacy code? Is it in newer code? This can help highlight whether the app is on a path of cleanup to improve its overall quality or remains stuck with the same eternal programming mistakes.
    • Effort to fix. Related to speed, this is more about the cost associated with fixing vulns. It may be a measure of hours required to analyze the vuln and commit a fix. It could also be the number of people involved in the process. For example, a vuln might require a complex fix or many engineering discussions to weigh trade-offs.
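    Several of these metrics fall out of timestamps you likely already track. Here's a small sketch computing speed of fix and speed of deployment; the finding records and field names are made up for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical finding records: discovery date, fix commit date, deploy date.
findings = [
    {"id": "XSS-01",  "found": "2017-09-01", "fixed": "2017-09-04", "deployed": "2017-09-05"},
    {"id": "SQLI-02", "found": "2017-09-10", "fixed": "2017-10-02", "deployed": "2017-10-03"},
]

def days(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

speed_of_fix = [days(f["found"], f["fixed"]) for f in findings]        # discovery -> commit
speed_of_deploy = [days(f["fixed"], f["deployed"]) for f in findings]  # commit -> production

print("median days to fix:", median(speed_of_fix))
print("worst case:", max(speed_of_fix))
```

    Tracked over quarters, the median and worst-case values become the trend lines that show whether a tooling or process change actually moved the needle.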

    Avoid letting a swarm of vulns chase your team down the BugOps path. As you fix vulns, take the time to figure out how they might have crept into production, adjust tools and processes to catch similar errors that might occur in the future, and track metrics that help show what kind of progress your DevOps team is making to reduce risk.

    Keep an eye out for vulns. Keep your vision on the processes that make DevOps successful.

  • DevSecCon London 2017 Oct 20, 2017
    Assortment of insects

    Ah, London — the city responsible for most of my music collection. Also, the city where I recently had the fortune to present at DevSecCon.

    DevSecCon examines the challenges facing DevSecOps (and DevOps) practitioners. It emphasizes how to work with people to make tools and process part of the CI/CD pipeline. This resonates with me greatly because I strongly believe that effective security comes from participation and empathy.

    DevSecOps brings security teams into the difficult tasks of writing, supporting, and maintaining code. It's a welcome departure from delivering a "Go fix this" message. Sometimes developers need guidance on basic security principles and an introduction to the OWASP Top 10. Sometimes developers have that knowledge and are making tough engineering choices between conflicting recommendations. Security shouldn't be the party that says, "No". Its response should be, "Here's a way to do that more securely."

    The "Go fix this" attitude has underserved appsec. We live in an age of 130,000+ Unicode characters and extensive emoji. Yet developers must still (for the most part) handle apostrophes and angle brackets as special exceptions lest their code suffer from HTML injection, cross-site scripting, or a range of other injection-based flaws.

    All this is to say, check out The Flaws in Hordes, the Security in Crowds, which explores this from the perspective of vuln discovery and argues that investing too heavily in vuln discovery once an app reaches production misses the chance to build stronger foundations.

    Slides from all the presentations are available here.

  • Bikeshredding & Threat Models Oct 1, 2017

    Asking a DevOps team what they’re most worried about in their app is a great way to seed a conversation about risk. In my recent presentations, I’ve taken to emphasizing threat modeling exercises as an avenue towards security awareness. Threat models are a way of reasoning about how an app’s data or users might be compromised. They also build security awareness by encouraging creative thinking about an app’s security in a way that drives constructive conversation and minimizes judgement about gaps in security knowledge.

    A penny-farthing for your threat models.

    A key element to such discussions is using the “Yes, and…” principle, in which you guide the conversation not by negating someone’s ideas, but expanding on them or offering an alternative viewpoint. In this exercise, you are the security expert filling in gaps in knowledge and nudging tangents away from unlikely scenarios, but letting the DevOps team drive the discussion amongst themselves.

    Bikeshredding is when this exercise devolves into distraction. The term echoes “bikeshedding” — a situation where a software engineering discussion becomes overwhelmed by details irrelevant to the problem at hand. For example, if your problem relates to efficient structures for storing bicycles, the color of the structure is unlikely to contribute meaningfully to that efficiency, and such ill-timed attention is counterproductive to the core task. Bikeshedding may be as much about arguing over subjective choices as it is about misusing data to create an illusion of objective preference.

    In bikeshredding, a threat model becomes disjoined from reality. It may represent a scenario unduly influenced by ideology or one based on incomplete (or ignored) information.

    A strong indicator of bikeshredding is a model that begins with “Assuming...” or “All you need to do is...”. For example, a sentence that starts with “Assuming the Same Origin Policy is broken, then...” may not need to continue if it leads into a discussion of cross-site scripting or anti-CSRF tokens. Or perhaps the setup to an attack “just requires a DNS or BGP hijack” before the vuln under discussion could be exploited.

    That doesn’t mean such scenarios are impossible or that they should be dismissed. It does mean that threats are not unbounded agents that visit great woe unto apps or networks. Threat actors (those executing an attack) require resources and preparation. In some cases, those resources and prep may be nothing more than a browser bar and an unvalidated URL parameter. In other cases, the costs or sequence of events may be high and complex.

    There is a place for stunt hacks, where an SDR Bluetooth spoofer affixed to a hedgehog launched (safely, with parachute) from a drone hacks an IoT fridge in order to obtain some tasty insects stored therein. Tinkering and creativity are fun. They’re always educational and can sometimes inform practical appsec.

    But there’s a reason that legacy systems and legacy software are notorious attack vectors. They’re easy and cost little for the attacker.

    As you mature your organization’s stance and have more robust ways to respond to threats, you’ll also increase the time and resources required of an attacker. Over time, you’ll understand how successful attacks are executed. And, although you’ll continue to improve your app’s baseline security to prevent exploits, it’s highly likely that you’ll discover that an efficient detection and response becomes an equally important investment.

    Use threat models to spread security knowledge throughout a DevOps team and engage them in prioritizing countermeasures and containment. Help them be informed about security topics and the chain of events necessary for various attacks to succeed. Avoid the bikeshredding and let them build the structure that handles user data with minimal risk.

  • ISC2 Security Congress, 4416 - GBU Slides Sep 29, 2017
    Rattlesnake

    My presentation on the good, the bad, and the ugly about crowdsourced security continues to evolve. The title, of course, references Sergio Leone's epic western. But the presentation isn't a lazy metaphor based on a few words of the movie. The movie is far richer than that, showing conflicting motivations and shifting alliances.

    The presentation is about building alliances, especially when you're working with crowds of uncertain trust or motivations that don't fully align with yours. It shows how to define metrics and use them to guide decisions.

    Ultimately, it's about reducing risk. Just chasing bugs isn't a security strategy. Nor is waiting for an enumeration of vulns in production a security exercise. Reducing risk requires making the effort to understand what you're trying to protect, measuring how that risk changes over time, and choosing where to invest to be most effective at lowering it. A lot of this boils down to DevOps ideals like feedback loops, automation, and the flexibility to respond to situations quickly. DevOps has the principles to support security; it should have the knowledge and tools to apply them.


Cybersecurity and more | © Mike Shema