O'Reilly Security Podcast - O'Reilly Media Podcast
51 minutes | 3 years ago
Rich Smith on redefining success for security teams and managing security culture
The O’Reilly Security Podcast: The objectives of agile application security and the vital need for organizations to build functional security culture.

In this episode of the Security Podcast, I talk with Rich Smith, director of labs at Duo Labs, the research arm of Duo Security. We discuss the goals of agile application security, how to reframe success for security teams, and the short- and long-term implications of your security culture.

Here are some highlights:

Less-disruptive security through agile integration

Better security is certainly one expected outcome of adopting agile application security processes, and I would say less-disruptive security would be an outcome as well. If I put my agile hat on, or could stand in the shoes of an agile developer, I would say they would have a lot of areas where they feel security gets in the way and doesn't actually really help them or make the product or the company more secure. Their perception is that security creates a lot of busy work, and I think this comes from that lack of understanding of agile from the security camp—and likewise of security from the agile camp. Along those lines, I would also say one of the key outcomes should be less security interference (where it's not necessary) in the agile process. The goal is to create more harmonious working relationships between these two groups. It would be a shame if the agile process were slowed down purely in the name of security while we weren't getting any tangible security benefits from it.

Changing how security teams measure their success

If you’re measuring the success of your security program by looking at what didn’t happen, the hard work your security team is doing may never really be apparent, and people may not understand the effort that went in to prevent bad things from happening. And obviously, that's difficult to quantify as well, from a management perspective.
This often has had the unfortunate side effect that security teams measure themselves and their success from the perspective of bad things they stopped from happening. That may well be the case, but it's hard to measure, and it's actually quite a negative message. It can push security teams into the mindset that the way they can stop the bad things from happening is by trying to make sure as few things change as possible. Security teams should measure themselves on what they enable, and what they enable to happen securely. That's a much more tangible and positive way of measuring the worth of that security team and how effective they are. Any old security team, whether it's good or bad, can say no to everything. Good security teams understand the business and understand what the development team is trying to get done. It's really more about what they can enable the business to do securely, and that's going to require some novel problem solving. That's going to mean that you're not just going to take solutions off the shelf and throw them at every problem.

Evaluating your organization’s security culture

Every company already has a security culture. It may not be the one they want, but they already have one. You need to build a security culture that works well for the larger organization and is in keeping with the larger organization's culture. I think we absolutely can take control of that security culture, and I'll go further and say that we have to. Otherwise, you're just going to end up in a situation where you have a culture that is not serving your organization well. There are a lot of questions you should be considering when evaluating your culture. What is your current security culture? How does the rest of the company think about security? How does the rest of the company view your security team?
Do people go out of their way to include the security team in conversations and decision-making, or do they prefer to chance it and hope that they don't notice and try to squeak under the radar? That says a lot about your security culture. If people aren't actively engaging with the subject matter experts, well, something's wrong there.
27 minutes | 3 years ago
Christie Terrill on building a high-caliber security program in 90 days
The O’Reilly Security Podcast: Aligning security objectives with business objectives, and how to approach evaluation and development of a security program.

In this episode of the Security Podcast, I talk with Christie Terrill, partner at Bishop Fox. We discuss the importance of educating businesses on the complexities of “being secure,” how to approach building a strong security program, and aligning security goals with the larger processes and goals of the business.

Here are some highlights:

Educating businesses on the complexities of “being secure”

This is a challenge that any CISO or director of security faces, whether they're new to an organization or building out an existing team. Building a security program is not just about the technology and the technical threats. It's how you're going to execute—finding the right people, having the right skill sets on the team, integrating efficiently with the other teams and the organization, and of course the technical aspects. There are a lot of things that have to come together, and one of the challenges about security is that companies like to look at security as its own little bubble. They’ll say, ‘we'll invest in security, we'll find people who are experts in security.’ But once you're in that bubble, you realize there's such a broad range of experience and expertise needed for so many different roles that it's not just one size fits all. You can't use the word ‘security’ so simplistically. So, it can be challenging to educate businesses on everything that's involved when they just say a sentence like, ‘We want to be secure or more secure.’

Security can’t (and shouldn’t) interrupt the progress of other teams

The biggest constraint for implementing a better security program for most companies is finding a way to have security co-exist with other teams and processes within the organization. Security can’t interrupt the mission of the company or halt the projects other IT teams already have in progress.
You can’t just halt everything because security teams are coming in with their own agendas. Realistically, you have to rely on other teams and be able to work with them so the security team can make progress either without them or alongside them. Being able to work collaboratively and to support the teams with your security goals is absolutely critical. Typically, teams have their own projects and agendas, and if you can explain how security will actually help those projects in the end, they'll want to participate in your work as well. It's integrated; you have to rely on each other.

How to approach security program strategy and planning

The assessment of a security program usually starts with a common triad of people, process, and technology. On the people side, there’s reevaluating the organizational structure—how many people should there be? What titles should they have? What should the reporting structure be? What should security take on itself, versus what responsibility should we ask IT to take on or let them keep? Then, for processes, there can be a lot of pain points. When we develop processes, including the foundational security practices, we start with the ones that would solve immediate problems to show value and illustrate what a process can achieve. A process is not just a piece of paper or a checklist intended to make people's lives more difficult—a process should actually help people understand where something is in the flow, and when something will get done. So, defining processes is really important to win over the business and the IT teams. Then finally, on the technology side, we try to emphasize that you should first evaluate the tools you already have. There may be nothing wrong with them. Look at how they're being used and whether they're being optimized.
The investment can be very high: not just the upfront investment in security technology, but also the cost to replace it, including consulting costs or the churn of having to rip and replace, and that can derail some of your other progress. To start, you should make sure you’re using every tool to its fullest capacity and fullest advantage before going down the path of considering buying new products.
18 minutes | 3 years ago
Susan Sons on building security from first principles
The O’Reilly Security Podcast: Recruiting and building future open source maintainers, how speed and security aren’t mutually exclusive, and identifying and defining first principles for security.

In this episode of the Security Podcast, O’Reilly’s Mac Slocum talks with Susan Sons, senior systems analyst for the Center for Applied Cybersecurity Research (CACR) at Indiana University. They discuss how she initially got involved with fixing the open source Network Time Protocol (NTP) project, recruiting and training new people to help maintain open source projects like NTP, and how security needn’t be an impediment to organizations moving quickly.

Here are some highlights:

Recruiting to save the internet

The terrifying thing about infrastructure software in particular is that paying your internet service provider (ISP) bill covers all the cabling that runs to your home or business; the people who work at the ISP; and their routing equipment, power, billing systems, and marketing—but it doesn't cover the software that makes the internet work. That is maintained almost entirely by aging volunteers, and we're not seeing a new cadre of people stepping up and taking over their projects. What we're seeing is ones and twos of volunteers who are hanging on but burning out while trying to do this in addition to a full-time job, or are doing it instead of a full-time job and should be retired, or are retired. It's just not meeting the current needs. Early- and mid-career programmers and sysadmins say, 'I'm going to go work on this really cool user application. It feels safer.' They don't work on the core of the internet.
Ensuring the future of the internet and infrastructure software is partly a matter of funding (in my O’Reilly Security talk on saving time, I talk about a few places you can donate to help with that, including ICEI and CACR), and partly a matter of recruiting people who are already out there in the programming world to get interested in systems programming and learn to work on this. I'm willing to teach. I have an Internet Relay Chat (IRC) channel set up on freenode called #newguard. Anyone can show up and get mentorship, but we desperately need more people.

Building for both speed and security

Security only slows you down when you have an insecure product, not enough developer resources, not enough testing infrastructure, or not enough infrastructure to roll out patches quickly and safely. When your programming teams have the infrastructure and scaffolding around software they need to roll out patches easily and quickly—when security has been built into your software architecture instead of plastered on afterward, and the architecture itself is compartmented and fault tolerant and has minimization taken into account—security doesn't hinder you. But before you build, you have to take a breath and say, 'How am I going to build this in?' or 'I’m going to stop doing what I’m doing, and refactor what I should have built in from the beginning.' That takes a long view rather than short-term planning.

Identifying and defining first principles for security

I worked with colleagues at the Indiana University Center for Applied Cybersecurity Research (CACR) to develop the Information Security Practice Principles (ISPP). In essence, the ISPP project identifies and defines seven rules that create a mental model for securing any technology. Seven may sound like too few, but the approach dates back to the rules of warfare and Sun Tzu—how to protect things and how to make things resilient. I do a lot of work from first principles.
Part of my role is that I’m called in when we don't know what we have yet or when something's a disaster and we need to triage. Best practice lists come from somewhere, but why do we teach people just to check off best practice lists without questioning them? If we teach more people to work from first principles, we can have more mature discussions, we can actually get our C-suite or other leadership involved because we can talk in concepts that they understand. Additionally, we can make decisions about things that don't have best practice checklists.
27 minutes | 3 years ago
Charles Givre on the impetus for training all security teams in basic data science
The O’Reilly Security Podcast: The growing role of data science in security, data literacy outside the technical realm, and practical applications of machine learning.

In this episode of the Security Podcast, I talk with Charles Givre, senior lead data scientist at Orbital Insight. We discuss how data science skills are increasingly important for security professionals, the critical role of data scientists in making the results of their work accessible to even nontechnical stakeholders, and using machine learning as a dynamic filter for vast amounts of data.

Here are some highlights:

Data science skills are becoming requisite for security teams

I expect to see two trends in the next few years. First, I think we’re going to see tools becoming much smarter. Not to suggest they're not smart now, but I think we're going to see the builders of security-related tools integrating more and more data science. We're already seeing a lot of tools claiming they use machine learning to do anomaly detection and similar tasks. We're going to see even more of that. Secondly, I think rudimentary data science skills are going to become a core competency for security professionals. Accordingly, I expect we are going to increasingly see security jobs requiring some understanding of core data science principles like machine learning, big data, and data visualization. Of course, I still think there will be a need for data scientists. Data scientists are going to continue to do important work in security, but I also think basic data science skills are going to proliferate throughout the overall security community.

Data literacy for all

I'm hopeful we're going to start seeing more growth in data literacy training for management and nontechnical staff, because it's going to be increasingly important.
In the years to come, management and executive-level professionals will need to understand the basics—maybe not a technical understanding, but at least a conceptual understanding of what these techniques can accomplish. Along those lines, one of the core competencies of a data scientist is, or at least arguably should be, communication skills. I'd include data visualization in that skillset. You can use the most advanced modeling techniques and produce the most amazing results, but if you can't communicate them in an effective manner to a stakeholder, then your work is not likely to be accepted, adopted, or trusted. As such, making results accessible is really a vital component of a data scientist’s work.

Machine learning as a dynamic filter for security data

Machine learning and deep learning have definitely become the buzzwords du jour of the security world, but they genuinely bring a lot of value to the table. In my opinion, the biggest value machine learning brings is the ability to learn and identify new patterns and behaviors that represent threats. When I teach machine learning classes, one of the examples I use is domain generation algorithm (DGA) detection. You can do this with a whitelist or a blacklist, but neither one of these is going to be the most effective approach. There's been a lot of success in using machine learning to identify DGA domains, allowing you to then mitigate the threat. A colleague of mine, Austin Taylor, gave a presentation and wrote a blog post about this as well—about how machine learning fits into the overall schema. He views data science in security as being most useful in building a very dynamic filter for your data. If you imagine an inverted triangle, you begin by examining tons and tons of data, but you can use machine learning to filter out the vast majority of it. From there, a human might still have to look at the remaining portion.
By applying several layers of machine learning to that initial ingested data, you can efficiently filter out the stuff that's not of interest.
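To make the filtering idea concrete, here is a minimal sketch in Python. It uses a simple character-entropy heuristic as a stand-in for the trained models Givre describes (real DGA detectors typically learn from character n-grams and other features); the threshold and domain names are purely illustrative, not from the episode.

```python
import math
from collections import Counter

def shannon_entropy(s):
    # Entropy of the character distribution; random-looking strings score higher.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    # Check only the registered label; DGAs typically randomize that part.
    label = domain.split(".")[0]
    return shannon_entropy(label) >= threshold

# A coarse first-pass filter: keep only the suspicious-looking domains
# for a human (or a later, more expensive model) to examine.
domains = ["google.com", "xkqzjwvbnmplra.com", "example.org", "q7f3kz9x2w1v.net"]
suspicious = [d for d in domains if looks_generated(d)]
```

Layering cheap filters like this before more expensive analysis is the "inverted triangle" in miniature: each stage discards most of the remaining data.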
26 minutes | 4 years ago
Andrea Limbago on the effects of security’s branding problem
The O’Reilly Security Podcast: The multidisciplinary nature of defense, making security accessible, and how the current perception of security professionals hinders innovation and hiring.

In this episode of the Security Podcast, I talk with Andrea Limbago, chief social scientist at Endgame. We discuss how the misperception of security as a computer science skillset ultimately restricts innovation, the need to make security easier and accessible for everyone, and how the current branding of security can discourage newcomers.

Here are some highlights:

The multidisciplinary nature of defense

The general perception is that security is a skillset in the computer science domain. As I've been in the industry for several years, I've noticed more and more the need for different disciplines, outside of computer science, within security. For example, we need data scientists to help handle the vast amount of security data and guide the daily collection and analysis of data. Another example is the need to craft accessible user interfaces for security. So many of the existing security tools or best practices just aren't user friendly. Of course, you also need that computer science expertise as well, from the more traditional hackers to defenders. All that insight can come together to help inform a more resilient defense. Beyond that, there’s the consideration of the impact of economics and psychology. This is especially relevant when you think about insider threat. It's really something I wish more people would think about in a broader perspective, and I think that would actually help attract a lot more people into the industry as well, which we desperately need right now.

Making security accessible and easier for all

We need to do a better job of informing the general public about security.
Those of us in the security field see information on how to secure our accounts and devices all the time, but I consistently come across people outside of our industry who still don't understand things like two-factor authentication, or why it would be helpful for them. These are very smart people. Part of the challenge is we, as an industry, haven't done a phenomenal job of branching out and talking in more common language about the various aspects and steps people can take. People know they need to be secure, but they really don't know what the key steps are. This month, for National Cybersecurity Awareness Month, there are going to be hundreds of ‘Here are 10 things you need to do to be secure’-style articles, but these messages are not always making their way to the actual target audience. It needs to become more of a mainstream concern, and it needs to be made easier for people to secure their accounts and devices. We talk a lot about the convenience versus security trade-off, and for a lot of people, convenience is still what matters most. It's really hard to switch the incentive structure for people to help them understand that taking all these steps toward better security truly is worth the investment of their time. For us, as an industry, if we make it as easy as possible, I think that will help.

Security has a branding problem

We need to do a better job of making security appealing to a broader audience. When I talk to students and ask them what they think about security and cyber security and hacking, they immediately think of a guy in a dark hoodie. And that alone is limiting people from getting excited about entering the workforce. Obviously, the discipline and the industry are much broader than that. We, as an industry, need to rework our marketing campaigns to show other kinds of stock photos. If we can do that, we can start getting more and more diverse people interested and coming into the industry.
By attracting the interest of a broader range of students and having them bring their diverse skillsets in from other disciplines, we can strengthen our defenses and increase innovation. If we change the branding of security and the perception of what it means to be a security professional, we can help fill the pipeline, which is one of our most crucial missions as an industry at this time.
17 minutes | 4 years ago
Window Snyder on the indispensable human element in securing your environment
The O’Reilly Security Podcast: Why tools aren’t always the answer to security problems, and the oft-overlooked impact of user frustration and fatigue.

In this episode of the Security Podcast, I talk with Window Snyder, chief security officer at Fastly. We discuss the fact that many core security best practices aren’t easy to achieve with tools, the importance of not discounting user fatigue and frustration, and the need to personalize security tools and processes to your individual environment.

Here are some highlights:

Many security tasks require a hands-on approach

There are a lot of things that we, as an industry, have known how to do for a very long time but that are still expensive and difficult to achieve. This includes things like staying up-to-date with patching or moving to more sophisticated authorization models. These types of tasks generally require significant work, and they might also impose a workflow obstacle to users that's expensive. Another proven and measurable way to improve security is to review deployments and identify features or systems that are no longer serving their original purpose but are still enabled. If they're still enabled but no longer serving a purpose, they may leave you unnecessarily open to vulnerabilities. In these cases, a plan to reduce attack surface by eliminating these features or systems is work that humans generally must do, and it actually does increase the security of your environments in a measurable way because now your attack surface is smaller. These aren’t the sorts of activities that you can throw a tool in front of and feel like you've checked a box.

Frustration and fatigue are often overlooked considerations

Realistically, it's challenging for most organizations to achieve all the things we know we need to do as an industry. Getting the patch window down to a smaller and smaller size is critical for most organizations, but you have to consider this within the context of your organization and its goals.
For example, if you’re patching a sensitive system, you may have to balance the need to reduce the patch window with the stability of the production environment. Or if a patch requires you to update users’ workstations, the frustration of having to update their systems and having their machines rebooted might derail productivity. It's an organizational leap to say that it's more important to address potential security problems when you are dealing with the very real obstacle of user frustration or security exhaustion. This is complicated by the fact that there's an infinite parade of things we need to be concerned about.

More is not commensurate with better

It’s reasonable to try to scale security engineering by finding tools you can leverage to help address more of the work that your organization needs. For example, an application security engineer might leverage a source analysis tool. Source analysis tools help scale the number of applications that you can assess in the same amount of time, and that’s reasonable because we all want to make better use of everyone's time. But without someone tuning the source analysis tool to your specific environment, you might end up with a tool that finds a lot of issues, creates a lot of flags, and then is overwhelming for the engineering team to try to address because of the sheer amount of data. They might conceivably look at the results and realize that the tool doesn't understand the mitigations that are already in place or the reasons these issues aren't going to be a problem, which may create a situation where they disregard what the tool identifies. Once fatigue sets in, the tool may well be identifying real problems, but the value the tool contributes ends up being lost.
36 minutes | 4 years ago
Chris Wysopal on a shared responsibility model for developers and defenders
28 minutes | 4 years ago
Scott Roberts on intelligence-driven incident response
The O’Reilly Security Podcast: The open-ended nature of incident response, and how threat intelligence and incident response are two pieces of one process.

In this episode of the Security Podcast, I talk with Scott Roberts, security operations manager at GitHub. We discuss threat intelligence, incident response, and how they interrelate.

Here are some highlights:

Threat intelligence should affect how you identify and respond to incidents

Threat intelligence doesn't exist on its own. It really can't. If you're collecting threat intelligence without acting upon it, it serves no purpose. Threat intelligence makes sense when you integrate it with the traditional incident response capability. Intelligence should affect how you identify and respond to incidents. The idea is that these aren't really two separate things; they're simply two pieces of one process. If you're doing incident response without using threat intelligence, then you’ll keep getting hit with the same attack time after time. Now, by the same token, if you have threat intelligence without incident response, you're just shouting into the void. No one is taking the information and making it actionable.

The open-ended nature of incident response

It’s key to think about incidents as ongoing. There are very few times when an attacker will launch an attack once, be rebuffed, and simply go away. In almost all cases, there's a continuous process. I've worked in organizations where we would do the work to identify an incident and promptly forget about it. Then three weeks later, we would suddenly stumble across the exact same thing. Ultimately, intelligence-driven incident response happens in those intervening three weeks. What are you doing in that time between incidents from the same actor, with the same target? And how are you using what you've learned to prepare for the next time? Regardless of the size of your organization, you can implement processes to better your defenses after each incident.
It can be as simple as keeping good notes, thinking about root causes, and considering what could better protect your organization from the same or similar attackers in the future. Basically, instead of marking an incident closed as soon as you’ve dealt with the immediate threat, think beyond the current incident and try to understand what the attack is going to look like the next time. Even if you can't identify the next iteration, you don't want to get hit by the same thing again. As your team expands and matures, there are opportunities for more specialized types of analysis and processes, but intelligence-driven incident response is something you can adopt regardless of your size or maturity.

Why more threat intelligence data is not always better

When a team gets started with threat intelligence, their first impulse is to try collecting the biggest data set imaginable, with the idea that there's going to be a magic way to pick out the needle in the haystack. While I understand why that may seem like a logical place to start, it's often a very abstract and time-intensive approach. When I look at intelligence programs, I first want to know what the team is doing with their own investigation data. The mass appeal of gathering a ton of information is all about trying to figure out which IP is most important to me or which piece of information I need to find. Often, I find that information is already available in a team's incident response database or their incident management platform. I think the first place you should always look is internally. If you want to know what threats are going to be important to an organization, look at the ones you've already experienced. Once you’ve got all those figured out, then go look at what else is out there. The best way to be effective, and to truly know that you're doing relevant work for your organization's future defense, is to look at your past.
43 minutes | 4 years ago
Jack Daniel on building community and historical context in InfoSec
The O’Reilly Security Podcast: The role of community, the proliferation of BSides and other InfoSec community events, and celebrating our heroes and heroines.

In this episode of the Security Podcast, I talk with Jack Daniel, co-founder of Security BSides. We discuss how each of us (and the industry as a whole) benefits from community building, the importance of historical context, and the inimitable Becky Bace.

Here are some highlights:

The indispensable role and benefit of community building

As I grew in my career, I learned things that I shared. I felt that if you're going to teach me, then as soon as I know something new, I'll teach you. I began to realize that the more I share with people, the more they're willing to share with me. This exchange of information built trust and confidence. When you build that trust, people are more likely to share information beyond what they may feel comfortable saying in a public forum, and that may help you solve problems in your own environment. I realized these opportunities to connect and share information were tremendously beneficial not only to me, but to everyone participating. They build professional and personal relationships, which I've become addicted to. It’s a fantastic resource to be part of a community, and the more effort you put into it, the more you get back. Security is such an amazing community. We’re facing incredible challenges. We need to share ideas if we're going to pull it off.

Extolling InfoSec history with the Shoulders of InfoSec

I realized a few years ago that despite the fact I was friends with a lot of trailblazers in the security space, I didn't have much perspective on the history of InfoSec or hacking. I recognized that I have friends like Gene Spafford and the late Becky Bace who have seen or participated in the foundation of our industry and know many of the stories of our community.
I decided to do a presentation a few years ago at DerbyCon that introduced the early contributors and pioneers who made our industry what it is today and built the early foundation for our practices. I quickly realized that cataloging this history wasn't a single presentation, but a larger undertaking. This is why I created the Shoulders of InfoSec program, which shines a light on the contributions of those whose shoulders we stand on. The idea is to make it easy to find a quick history of information security and, to a lesser extent, the hacker culture. As Newton wrote, if he had seen farther, it was by standing on the shoulders of giants, and we all stand on the shoulders of giants.

The inimitable Becky Bace

Becky was known as the den mother of IDS for her work fostering and supporting intrusion detection and network behavior analysis. But even beyond her amazing technical expertise and contributions, Becky gave the best hugs in the world. She was just an amazingly warm, friendly, and welcoming person. One of the things that always struck me about Becky is the number of people she mentored through the years, and the number of people whose careers got a start or a boost because of Becky. She was just pure awesome. She would go out of her way to help people, and the more they needed help, the more likely she would be to find them and help them. She came from southern Alabama, and when she came north to the D.C. area, her dad said, ‘You can go up north and get a job and marry a Yankee, but when you're done doing that, I want you to come home because, remember, we need help down here.’ For those who don't know, when she left her consulting practice, she went to the University of South Alabama—not even the University of Alabama, but the University of South Alabama—and set up a cyber security program. She was bringing cyber security education to people who otherwise wouldn't get it, and she built a fantastic program.
She did it because she promised her dad she would.
29 minutes | 4 years ago
Jay Jacobs on data analytics and security
The O'Reilly Security Podcast: The prevalence of convenient data, first steps toward a security data analytics program, and effective data visualization. In this episode of the Security Podcast, Courtney Nash, former chair of the O'Reilly Security conference, talks with Jay Jacobs, senior data scientist at BitSight. We discuss the constraints of convenient data, the simple first steps toward building a basic security data analytics program, and effective data visualizations. Here are some highlights: The limitations of convenient data In security, we often see the use of convenient data—essentially, the data we can get our hands on. You see that sometimes in medicine, where people studying a specific disease will grab the patients with that disease in the hospital they work in. There are some benefits to doing that. Obviously, the data collection is easy because you get the data that's readily available. At the same time, there are limitations. The data may not be representative of the larger population. Using multiple studies combats the limitations of convenient data. For example, when I was working on the Verizon Data Breach Investigations Report, we tried to tackle that by diversifying the sources of data. Each individual contributor had their own convenient sample. They're getting the data they can access. Each contributing organization had their own biases and limitations, problems, and areas of focus. There are biases and inherent problems with each data set, but when you combine them, that's when you start to see the strength, because now all of these biases start to level out and even off a little bit. There are still problems, including representativeness, but this is one of the ways to combat it. The simple first steps to building a data analysis program The first step is to just count and collect everything.
As I work with organizations on their data, I see a challenge where people will try to collect only the right things, or the things that they think are going to be helpful. When they only collect things they originally think will be handy, they often miss some things that are ultimately really helpful to analysis. Just start out counting and collecting everything, even things you don't think are countable or collectible. At one point, a lot of people didn't think that you could put a breach, which is a series of events, into a format that could be conducive to analysis. I think we've got some areas we could focus on, like pen testing and red team activity. I think these are areas ripe for a good data collection effort. If you're collecting all this data, you can do some simple counting and comparison. 'This month I saw X number and this month I saw Y.' As you compare, you can see whether there's change, and then discuss that change. Is it significant, and do we care? The other thing is that a lot of people capture metrics and don't actually ask the question: do we care if it goes up or down? That's a problem. Considerations for effective data visualization Data visualization is a very popular field right now. It's not just concerned with why pie charts might be bad—there's a lot more nuance and detail. One important factor to consider in data visualization, just like communicating in any other medium, is your audience. You have to be able to understand your audience, their motivations, and experience levels. There are three things you should evaluate when building a data visualization. First, you start with your original research question. Then you figure out how the data collected answers that question. Then, once you start to develop a data visualization, you ask yourself whether the visualization matches what the data says, and whether it matches and answers the original question being asked.
When those three parts of the equation all line up and explain each other, I think that helps people communicate better.
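Jacobs' count-and-compare advice reduces to a few lines of code. The event names and monthly tallies below are invented for illustration:

```python
from collections import Counter

def month_over_month(prev: Counter, curr: Counter) -> dict:
    """Percent change per event type; None when there is no baseline."""
    changes = {}
    for event in set(prev) | set(curr):
        before, after = prev[event], curr[event]
        changes[event] = (after - before) / before * 100 if before else None
    return changes

# Hypothetical monthly tallies pulled from raw logs.
last_month = Counter({"failed_login": 420, "malware_alert": 12, "port_scan": 35})
this_month = Counter({"failed_login": 610, "malware_alert": 11, "port_scan": 90})

for event, pct in sorted(month_over_month(last_month, this_month).items()):
    # The question the text insists on: it changed, but do we care?
    print(f"{event}: {last_month[event]} -> {this_month[event]} ({pct:+.0f}%)")
```

The point of the sketch is the discussion it forces, not the arithmetic: once a counter moves, the team still has to decide whether the change is significant and whether anyone cares.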
32 minutes | 4 years ago
Katie Moussouris on how organizations should and shouldn’t respond to reported vulnerabilities
The O'Reilly Security Podcast: Why legal responses to bug reports are an unhealthy reflex, thinking through first steps for a vulnerability disclosure policy, and the value of learning by doing. In this episode, O'Reilly's Courtney Nash talks with Katie Moussouris, founder and CEO of Luta Security. They discuss why many organizations have a knee-jerk legal response to a bug report (and why your organization shouldn't), the first steps organizations should take in formulating a vulnerability disclosure program, and how learning through experience and sharing knowledge benefits all. Here are some highlights: Why legal responses to bug reports are a faulty reflex The first reaction to a researcher reporting a bug for many organizations is to immediately respond with legal action. These organizations aren't considering that their lawyers typically don't keep their users safe from internet crime or harm. Engineers fix bugs and make a difference in terms of security. Having your lawyer respond doesn't keep users safe and doesn't get the bug fixed. It might do something to temporarily protect your brand, but that's only effective as long as the bug in question remains unknown to the media. Ultimately, when you try to kill the messenger with a bunch of lawsuits, it looks much worse than taking the steps to investigate and fix a security issue. Ideally, organizations recognize that fact quickly. It's also worth noting that the law tends to be on the side of the organization, not the researcher reporting a vulnerability. In the United States, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act have typically been used to harass or silence security researchers who are trying to report something along the lines of "if you see something, say something." Researchers take risks when identifying bugs, because there are laws on the books that can be easily misused and abused to try to kill the messenger.
There are similar laws in other countries that discourage well-meaning researchers from coming forward. It's important to keep perspective and remember that, in most cases, you're talking to helpful hackers, who have stuck their neck out and potentially risked their own freedom to try to warn you about a security issue. Once organizations realize that, they're often more willing to cautiously trust researchers. First steps toward a basic vulnerability disclosure policy In 2015, market studies showed (and the numbers haven't changed significantly since then) that of the Forbes Global 2000, companies with arguably some of the most prepared and proactive security programs, 94% had no published way for researchers to report a security vulnerability. That's indicative of the fact that these organizations probably have no plan for how they would respond if somebody did reach out and report a vulnerability. They might call in their lawyers. They might just hope the person goes away. At the very basic level, organizations should provide a clear way for someone to report issues. Additionally, organizations should clearly define the scope of issues they're most interested in hearing about. Defining scope also includes providing the bounds for things that you prefer hackers not do. I've seen a lot of vulnerability disclosure policies published on websites that say, please don't attempt to do a denial of service against our website, or against our service or products, because with sufficient resources, we know attackers would be able to do that. They clearly request that people not test that capability, as it would provide no value. Learning by doing and the value of sharing experiences At the CYBERUK conference, the U.K. National Cyber Security Centre's (NCSC) industry conference, there was an announcement about NCSC's plans to launch a vulnerability coordination pilot program. They've previously worked on vulnerability coordination through the U.K.
Computer Emergency Response Team (CERT-UK), which merged into the NCSC. However, they hadn't standardized the process. They chose to learn by doing and launch pilot programs. They invited security researchers they knew and had worked with in the past to come and participate, and then they outlined their intention to publicly share what they learned. This approach offers benefits, as it focuses not only on specific bugs but also on the process, on the ways they can improve that process and share knowledge with their constituents globally. Of course, bugs will be uncovered, and strengthening the security of the targeted websites is obviously one of the goals of the program, but the emphasis on process and learning through experience really differentiates their approach and is particularly exciting.
44 minutes | 4 years ago
Alex Pinto on the intersection of threat hunting and automation
The O'Reilly Security Podcast: Threat hunting's role in improving security posture, measuring threat hunting success, and the potential for automating threat hunting for the sake of efficiency and consistency. In this episode, I talk with Alex Pinto, chief data scientist at Niddel. We discuss the role of threat hunting in security, the necessity for well-defined process and documentation in threat hunting and other activities, and the potential for automating threat hunting using supervised machine learning. Here are some highlights: Threat hunting's role in improved detection At the end of the day, threat hunting is proactively searching for malicious activity that your existing security tools and processes missed. In a way, it's an evolution of the more traditional security monitoring and log analysis that organizations currently use. Experienced workers in security operations center environments or with managed security services providers might say, 'Well, this is what I've been doing all this time, so maybe I was threat hunting all along.' The idea behind threat hunting is that you're not entirely confident the tools and processes in place are identifying every single problem you might have. So, you decide to scrutinize your environment and available data, and hopefully grow your detection capability based on what you learn. There are some definitions, which I'm not entirely in agreement with, that say, 'It's only threat hunting when it's a human activity. So, the definition of threat hunting is when humans are looking for things that the automation missed.' I personally think that's very self-serving. I think this human-centric qualifier is a little bit beside the point. We should always be striving to automate the work that we're doing as much as we can. Gauging success by measuring dwell time It's still very challenging to manage productivity and success metrics for threat hunting.
This is an activity where it's easy to spin your wheels and never find anything. There's a great metric called dwell time, which admittedly can be hard to measure. Dwell time measures the average gap between when a machine was originally infected or compromised and when the incident response team finds it. How long did it take for the alert to be generated or for the issue to be found via hunting? We've all heard vendor pitches saying something along the lines of, 'Companies take more than 100 days to find specific malware in their environments.' You should be measuring dwell time within your own environment. If you start to engage in threat hunting and you see this number decrease, you're finding issues sooner, and that means the threat hunting is working. The environments where I've seen the most success with threat hunting utilized their incident response (IR) team for the task or built a threat hunting offshoot from their IR team. These team members were already very comfortable with handling incidents within the organization. They already understood the environment well, knew what to look for, and where they should be looking. IR teams may be able to spend some of their time proactively looking for things and formulating hypotheses about where there could be a blind spot or perhaps poorly configured tools, and then researching those potential problem areas. Documentation is key. By documenting everything, you build organizational knowledge and allow for consistency and measurement of success. The potential for automating threat hunting There are a lot of different factors you can consider in deciding whether something is malicious. The hard part is the actual decision-making process. What really matters is the ability of a human analyst to make a decision about whether an activity is malicious or not and how to proceed.
Using human analysts to review every scenario doesn't scale, especially given the complexity and number of factors they have to explore in order to make a decision. I've been exploring when and how we can automate that decision-making process, specifically in the case of threat hunting. For people who have some familiarity with machine learning, it appears threat hunting would fit well with a supervised machine learning model. You have vast amounts of data, and you have to make a call whether to classify something as good or bad. In any model that you're training, you should use previous experience to classify benign activities to reduce noise. When we automate as much of this process as possible, we improve efficiency, the use of our team's time, and consistency. Of course, it's important to also consider the difficulties in pursuing this automation, and how we can try to circumvent those difficulties.
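The dwell-time metric Pinto describes is simply the gap between compromise and discovery, averaged across incidents. A minimal sketch, with made-up incident timestamps:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (time of compromise, time the IR team found it).
incidents = [
    (datetime(2017, 3, 1), datetime(2017, 5, 20)),
    (datetime(2017, 4, 10), datetime(2017, 4, 25)),
    (datetime(2017, 6, 2), datetime(2017, 6, 30)),
]

def average_dwell_days(records):
    """Mean number of days between compromise and detection across incidents."""
    return mean((found - compromised).days for compromised, found in records)

# Track this number over time: if hunting is working, it should fall.
print(f"average dwell time: {average_dwell_days(incidents):.1f} days")
```

The hard part in practice is establishing the compromise timestamp at all; the arithmetic is trivial once forensics has pinned it down.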
34 minutes | 4 years ago
Amanda Berlin on defensive security fundamentals
The O'Reilly Security Podcast: How to approach asset management, improve user education, and strengthen your organization's defensive security with limited time and resources. In this episode, I talk with Amanda Berlin, security architect at Hurricane Labs. We discuss how to assess and develop defensive security policies when you're new to the task, how to approach core security fundamentals like asset management, and generally how you can successfully improve your organization's defensive security with limited time and resources. Here are some highlights: The value of ongoing asset management Whether you're one person or you have a large security team, asset management is always a pain point. It's exceedingly rare to see an organization correctly implementing asset management. In an ideal situation, you know where all of the devices are coming into your network. You have alerts set to sound if a new MAC address shows up. You want to know and be alerted if something plugs in or connects to your wireless network that you've never seen before, or haven't approved. You should never look at asset management as a box to check; it's an ongoing process. Collaborate with your purchasing department—as they purchase PCs and distribute them, you should be tracking asset management at each step. And follow the same process when your organization gets rid of equipment. All laptops and servers eventually die; be sure to record those changes as well. This is important from a security perspective and also may save on software licensing, so you're not paying for licenses for computers you no longer have. Budget-friendly user education A lot of people have computer-based phishing education once a year; it gets lumped in with things like learning how to use a fire extinguisher. That never sticks. People will click straight through the training, retake the test until they get the passing grade, and quickly forget about it. Instead, you need a repetitive process with multiple levels.
The first step is to find email addresses from your organization that are readily available on the web. Those should be your first targets, because they are the most likely to be attacked by bots and other automatic phishing programs. Then move on to people in finance, database administrators, and other individuals with significant power within the organization. Send them a couple sentences of plain text and an internal link from a Gmail address to see if they give up their username and password. I have found that, before training, 60% to 80% of the employees targeted will click on the link. You should see clear progress over multiple levels of this training. Keep extensive metrics on the percentage of people who clicked the emailed link, and the percentage of people who gave their passwords, both before and after training. And be careful not to only identify "wrong behavior." Place emphasis on educating staff about whom to contact if something seems weird, and then provide positive reinforcement when they report suspicious activity quickly and effectively. Empowering your staff in this way provides quick, effective, and budget-friendly reporting. Preparation is key for incident response Incident response plans can be as simple or as complex as fits your organization's needs. For some organizations, an incident response plan may be to shut everything off and call a third party for help. If you decide to go with a third-party incident response plan, you should have that contract in place beforehand. If you wait until you need services immediately, you have no time or space for negotiating fees or comparing providers. You'll also be facing an emergency situation and lose time by providing background on your systems to the third party. Putting a plan in place in advance, no matter how simple, will be cost effective, save time, and allow you to recover from an incident more efficiently and effectively.
Other organizations may be able to manage a full-blown investigation internally, depending on the severity. Some places are advanced enough that they can reverse malware independently. Many places aren't. Regardless, you must know where to draw the line on stopping your incident response internally and getting someone external to come in and help. Once again, determining where that line is for your organization ahead of time is key. You don't want to have to make that decision in the middle of an incident.
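Berlin's earlier point about alerting on never-before-seen devices boils down to comparing a network scan against an approved inventory. A toy sketch, with hypothetical MAC addresses standing in for a real asset database and a real scan:

```python
# Hypothetical approved inventory, e.g. maintained with the purchasing team.
approved_macs = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def unknown_devices(seen_macs, approved):
    """Return MAC addresses seen on the network that are not in the inventory."""
    return sorted(mac for mac in seen_macs if mac.lower() not in approved)

# Hypothetical result of a network scan (ARP tables, DHCP leases, etc.).
scan = {"00:1A:2B:3C:4D:5E", "DE:AD:BE:EF:00:01"}
for mac in unknown_devices(scan, approved_macs):
    print(f"ALERT: unapproved device {mac}")
```

The set comparison is the easy part; keeping `approved_macs` current as hardware is purchased and retired is the ongoing process the text describes.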
33 minutes | 4 years ago
Kimber Dowsett on developing and maturing a vulnerability disclosure program
The O'Reilly Security Podcast: Key preparation before implementing a vulnerability disclosure policy, the crucial role of setting scope, and the benefits of collaborative relationships. In this episode, I talk with Kimber Dowsett, security architect at 18F. We discuss how to prepare your organization for a vulnerability disclosure policy, the benefits of starting small, and how to apply lessons learned to build better defenses. Here are some highlights: Gauging readiness for a vulnerability policy or a bug bounty program It's critical to develop a response and remediation plan before you launch a disclosure policy. You should be asking, 'Are we set up to respond to vulnerabilities as they come in?' and 'Do we have a workflow in place for remediation?' Organizations need to be sure they're not relying on a vulnerability disclosure policy to find bugs, vulnerabilities, or holes in their applications and code. It's critical to ensure you have a mature, solid product in place before you open it up to the world and invite scrutiny. Additionally, vulnerability disclosure policies and bug bounty programs shouldn't be thought of as low-cost quality assurance. Code that hasn't been tested isn't viable for these programs. If your product hasn't been tested, torn apart, tested again, and gone through pen tests, then it's not ready, particularly for a bug bounty program. Even if you're ready for a vulnerability disclosure policy, there's a good chance you're not yet ready for a bug bounty program. Start small and proceed with caution If you don't start small, there's a good chance you're going to get hit in ways that you're not prepared to handle, and probably with issues you'd never even considered. When we launched the 18F policy, we launched it with three sites and then rolled out additional sites as they were ready.
If a team said to me, 'Okay, we think we're good to go to be added to the disclosure policy,' then we would review their pen test results, development, back end, and code reviews. It's a much slower process, but it returns better results. Going all in at the start and declaring that everything is in scope for your policy is shooting yourself in the foot. We have been cautious, and we've had a very successful, slow rollout of vulnerability disclosure. We've proceeded with caution, and that has worked well for us. The benefits of building collaborative relationships When we confirm a vulnerability, our blue team explores how we would have defended against it, or ways we could defend against it until remediation is complete. Then, our pen testers, security engineers, or developers look to add something about the vulnerability to their toolkits to test for similar weaknesses as they are building apps. We really shoot for baked-in security, but there's always going to be a 'gotcha.' If researchers submit reports in meaningful ways, we are able to use that to save ourselves time and energy in the triage process, and move straight to determining the best defense and how to find and secure similar problems in the future. We've built a process that fosters collaborative relationships with researchers. When researchers make high-quality submissions, we ensure their discoveries are welcomed and, of course, responsibly disclosed. In a successful program, researchers become part of the security process, as they've contributed in a meaningful way to the security of one of our applications. When researchers feel welcome, we all win.
30 minutes | 4 years ago
Kelly Shortridge on overcoming common missteps affecting security decision-making
The O'Reilly Security Podcast: How adversarial posture affects decision-making, how decision trees can build more dynamic defenses, and the imperative role of UX in security. In this episode, I talk with Kelly Shortridge, detection product manager at BAE Systems Applied Intelligence. We talk about how common cognitive biases apply to security roles, how decision trees can help security practitioners overcome assumptions and build more dynamic defenses, and how combining security and UX could lead to a more secure future. Here are some highlights: How the win-or-lose mindset affects defenders' decision-making Prospect theory asserts that how we make decisions depends on whether we're in the domain of gains mindset or the domain of losses mindset. An appropriate analogy is to compare how gamblers make decisions. When gamblers are in the hole, they're a lot more likely to make risky decisions. They're trying to recoup their losses and reason they can do that by making a big leap, even if it's unlikely to succeed. In reality, it would be better if they either cut their losses or made smaller, safer bets. But gamblers often don't see things that way because they're operating in a domain of losses mindset, which is also true of many security defenders. Defenders, for the most part, manifest biases that make them willing to make riskier decisions. They're more willing to implement solutions against a 1% likelihood of attack rather than implementing the basics—like two-factor authentication, good server hygiene, and network segmentation. We see a lot more defenders buying those really niche tools because, in my view, they're trying to get back to the status quo. They're willing to spend millions on incident response, particularly if they've just experienced an acute loss, like a data breach. If they had spent those millions on basic controls, they likely wouldn't have had that breach in the first place.
Planning dynamic defenses and overcoming assumptions with decision trees Defenders frequently have static strategies. They aren't necessarily thinking through the next steps in how attackers will respond if they implement two-factor authentication, antivirus software, or whitelisting. Decision trees codify your thinking and encourage you to figure out how an attacker might respond to or try to work around your initial defenses, not just your first step. Different branches show how you think an attacker could move throughout your network to get to their end goal. By including your defensive strategies and the probability of success for each, you're essentially documenting your assumptions about how likely your defensive tools are to work, and how likely attackers are to use certain moves. That means if you have a breach or incident, or if you get new data on attacker groups, you can start to refine your model. You can identify where your assumptions might have fallen through. It keeps you honest with tangible metrics, which is important in addressing cognitive biases. Knowing where you failed improves your defenses. It shows how your assumptions need to be tweaked. Why security needs UX—and vice versa We've done a terrible job as an industry of incorporating UX into security design. People lament all the time, regardless of product, that security warnings aren't worded correctly. Either they scare users or people blindly click through them. No one seems focused on how to effectively incorporate security into product design itself. Designers or developers often view security as a complete nuisance—necessary but, in many ways, a hindrance. Security professionals often view UX as a waste of time, and blame insecurity on users who click on things they shouldn't. Security and UX need to meet in the middle. This is an area that is ripe for opportunity and needs to be explored, because it could make a meaningful change in the industry.
Using UX to encourage users to make better or more secure decisions as they conduct their various IT activities would have a huge impact on security.
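The decision trees Shortridge describes can be captured as a nested structure in which each attacker step carries an estimated probability of success. The tree and probabilities below are invented assumptions, exactly the kind the text suggests documenting and revisiting as new incident data arrives:

```python
# A toy attack tree: each node maps an attacker step to
# (estimated probability the step succeeds, follow-on steps).
tree = {
    "phish credentials": (0.4, {
        "bypass 2FA": (0.1, {}),
        "reuse password on VPN": (0.3, {}),
    }),
    "exploit public web app": (0.2, {
        "pivot to database": (0.5, {}),
    }),
}

def path_probabilities(node, prob=1.0, path=()):
    """Yield (attack path, cumulative probability) for every leaf."""
    for step, (p, children) in node.items():
        new_path, new_prob = path + (step,), prob * p
        if children:
            yield from path_probabilities(children, new_prob, new_path)
        else:
            yield new_path, new_prob

# Rank full attack paths by how likely you believe they are to succeed.
for path, p in sorted(path_probabilities(tree), key=lambda x: -x[1]):
    print(" -> ".join(path), f"(p={p:.2f})")
```

After an incident, comparing the path the attacker actually took against these ranked assumptions shows exactly which estimates need to be tweaked.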
13 minutes | 4 years ago
Dave Lewis on the tenacity of solvable security problems
The O'Reilly Security Podcast: Compounding security technical debt, the importance of security hygiene, and how the speed of innovation reintroduces vulnerabilities. In this episode, I talk with Dave Lewis, global security advocate at Akamai. We talk about how technical sprawl and employee churn compound security debt, the tenacity of solvable security problems, and how the speed of innovation reintroduces vulnerabilities. Here are some highlights: How technical sprawl and employee churn compound security debt Twenty-plus years ago, when I started working in security, we had a defined set of things we had to deal with on a continuous basis. As our environments expand with things like cloud computing, we have taken that core set of worries and multiplied them plus, plus, plus. Things that we should have been doing well 20 years ago—like patching and asset management—have gotten far worse at this point. We have grown our security debt to unmanageable levels in a lot of cases. People who are responsible for patching end up passing that duty down to the next junior person in line as they move forward in their career. And that junior person in turn passes it on to whomever comes up behind them. So, patching tends to be something that is shunted to the wayside. As a result, the problem keeps growing. Reducing attack surface with consistent security hygiene We don't execute on the processes, standards, and guidelines that should exist in every environment for how you're going to do X, Y, and Z. Take SQL injection: if we make sure we're sanitizing inputs and outputs from our applications, this attack surface by and large goes away. Is it 100%? No, but nothing in security is 100%, sadly. For patching, again, you have to have a proper regimen in place. It's sort of like this: I could build you a house if I have a hammer, but if I don't have the context of the larger plan to build that house, I'm stuck. There are tools available that can help you execute patch management.
The tools and the abilities are there, but we need the processes to follow, and we need to execute on them. But the thing is, patching is not something that most people find enjoyable. We need to do a better job of seeing patching as an important part of protecting our environment and take pride in that. Innovation's role in reintroducing previously solved problems Well, the Internet of Things (IoT) has really devolved into the new bacon. Any device you can get your hands on and slap an internet connection onto is now IoT. I've seen kettles, I've seen toasters, I've seen toothbrushes that had internet connectivity. Here's a question for you: if you have a device with an internet connection and you pull that connection, does your device stop working? I worry about this because we're getting so bogged down in the crush to create IoT devices that we're really, again, bypassing fundamentals. I've seen devices that are out on the internet using deprecated libraries, and in some cases reintroducing Heartbleed. This is abjectly silly. It's a problem we tackled a few years ago, only to see it reemerge in IoT devices that are online. Similarly, with the Mirai botnet, we saw default usernames and passwords. Programmatically, there's no good reason for that. That is an easily fixed problem.
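On Lewis's SQL injection point, the standard fix is parameterized queries rather than hand-rolled sanitization. A minimal sketch using Python's sqlite3 module (the table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Untrusted input, e.g. from a web form.
user_input = "alice' OR '1'='1"

# Parameterized query: the driver treats the input purely as data,
# so the classic OR 1=1 payload matches no rows instead of every row.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)
```

The same placeholder discipline, applied consistently across an application, is the kind of process-over-tooling hygiene the text argues for.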
45 minutes | 4 years ago
Parvez Ahammad on applying machine learning to security
The O'Reilly Security Podcast: Scaling machine learning for security, the evolving nature of security data, and how adversaries can use machine learning against us. In this special episode of the Security Podcast, O'Reilly's Ben Lorica talks with Parvez Ahammad, who leads the data science and machine learning efforts at Instart Logic. He has applied machine learning in a variety of domains, most recently to computational neuroscience and security. Lorica and Ahammad discuss the challenges of using machine learning in information security. Here are some highlights: Scaling machine learning for security If you look at a day's worth of logs, even for a mid-size company, it's billions of rows of logs. The scale of the problem is actually incredibly large. Typically, people are working to somehow curate a small data set and convince themselves that using only a small subset of the data is reasonable, and then go to work on that small subset—mostly because they're unsure how to build a scalable system. They've perhaps already signed up for doing a particular machine learning method without strategically thinking about what their situation really requires. Within my company, I have a colleague from a hardcore security background, and I come from a more traditional machine learning background. We butt heads, and we essentially help each other learn about the other's paradigm and how to think about it. The evolving nature of security data and the exploitation of machine learning by adversaries Many times, if you take a survey and see that most of the machine learning applications are supervised, what you're assuming is that you collected the data and you think the underlying distribution of your data collection is true. In statistics, this is called the stationarity assumption. You assume that this batch is representative of what you're going to see later. You split your data into two parts; you train on one part and you test on the other part.
The issue is, especially in security, there is an adversary. Any time you settle down and build a classifier, there is somebody actively working to break it. No assumption of stationarity is going to hold. Also, there are people and botnets that are actively trying to get around whatever model you constructed. There is an adversarial nature to the problem. These dual-sided problems are typically dealt with in a game-theoretic framework. Basically, you assume there's an adversary. We've recently seen research papers on this topic. One approach we've seen is that you can poison a machine learning classifier to act maliciously by messing with how the samples are being constructed or adjusting the distribution that the classifier is looking at. Alternatively, you can try to construct safe machine learning approaches that go in with the assumption that there is going to be an adversary, then reason through what you can do to thwart said adversary. Building interpretable and accessible machine learning I think companies like Google or Facebook probably have access to large-scale resources, where they can curate and generate really good quality ground truth. In such a scenario, it's probably wise to try deep learning. On a philosophical level, I also feel that deep learning is like proving there is a Nash equilibrium. You know that it can be done. How exactly it's getting done is a separate problem. As a scientist, I am interested in understanding what, exactly, is making this work. For example, if you throw deep learning at this problem and the classification rates come back very poor, then we probably need to look at a different problem, because you just threw the kitchen sink at it. However, if we find that it is doing a good job, then what we need to do is start from there and figure out an explainable model that we can train.
We are an enterprise, and in the enterprise industry, it's not sufficient to have an answer; we need to be able to explain why. For that, there are issues in simply applying deep learning as it is. What I'm really interested in these days is the idea of explainable machine learning. It’s not enough that we build machine learning systems that can do a certain classification or segmentation job very well. I'm starting to be really interested in the idea of how to build systems that are interpretable, that are explainable—where you can have faith in the outcome of the system by inspecting something about the system that allows you to say, ‘Hey, this was actually a trustworthy result.’

Related resources:
Applying Machine Learning in Security: A recent survey paper co-written by Parvez Ahammad
32 minutes | 4 years ago
Katie Moussouris on procuring and processing bug reports
The O’Reilly Security Podcast: The five stages of vulnerability disclosure grief, hacking the government, and the pros and cons of bug bounty programs.

In this episode, I talk with Katie Moussouris, founder and CEO of Luta Security. We discuss the five stages of vulnerability disclosure grief, hacking the government, and the pros and cons of bug bounty programs. Here are some highlights:

The five stages of vulnerability disclosure grief

There are two kinds of reactions we see from organizations that have never received a bug report before. Some of them are really grateful, and that's ideally where you want people to start, but a lot of them go through what I call the five stages of vulnerability response grief. At first, they are in denial; they say, ‘No, that's not a bug—maybe you're mistaken.’ Or they get angry and send the lawyers. Or they try to bargain with the bug hunter: ‘Maybe if we just did something really stupid and tried to mask what this is, maybe you won't talk about it publicly, or tweet about it.’ Then they often get really depressed, because they realize this is just one bug report from one bug finder, and there might be a ton of bugs they don't know what to do with. Until finally, they get to the acceptance stage. Ideally, we like it when organizations have gotten to that acceptance stage, when they realize there are bugs in everything, and eventually somebody is going to report a security vulnerability to the organization. Even if you've just got a website on the internet, it's possible that somebody will find and report a security issue to you.

Hacking the government

Hack the Pentagon came about because the U.S. Department of Defense was really interested in hearing about manipulating bug bounty market incentives. Each of those types of bugs would have fetched six figures on the offense market.
At the time, Microsoft wasn't paying six figures per bug for beta bugs—in fact, nobody was—so understanding those market behaviors actually helped the Pentagon feel comfortable trying out a bug bounty pilot, which is what happened last year. The results were great for the Pentagon. They got 138 vulnerabilities reported in a 21-day period. They fixed them all within six weeks, I believe. They paid $75,000 in bug bounties to find that many vulnerabilities. Through their usual vendors, it was costing them more than a million dollars a year in federal contracts with different security vendors, and they were typically receiving maybe two or three bug reports a month. There was finally a legal channel for security researchers who wanted to help the government to be able to do so without risking their freedom. (Editor’s note: Moussouris just helped launch a similar effort with the UK’s National Cyber Security Centre.)

The pros and cons of bug bounties

Anyone can offer cash for bugs. Whether or not it turns out well for them depends on a whole lot of things. Bug bounties can be useful as a focus incentive. If an organization has a pretty good handle on its vulnerabilities and has a process for dealing with the ones it already knows about, then that might be a good area to focus on, but I typically don't think it's a good way to start. It has become trendy in the last year or so, as bug bounties have caught on, for company leaders to say, ‘We're not getting good vulnerability reports—let’s pay 10 times the bug bounty amounts for a period of time and attract a whole bunch of researchers.’ You might do that, and yes, you might get a whole swarm of bug reports, but are they really the most valuable bugs—the ones that are actually going to help you secure your users, your customers, your enterprise, or your website? 
Or, are they just going to be a whole swarm of the same bug reported by multiple sources because it was a bit of a low-hanging-fruit exercise? I caution people to think through their incentive models. What is it that you really want? Do you want more bug reports? What types of bug reports do you want? How can you structure this so you're not wasting all your resources and money on an outsourced bug bounty service provider, or on triage provider resources, paying them to sift through reports for you? What would you save by finding these bugs more effectively with a decent security testing program and maybe a full-time person in-house? I talk a lot of people off the bug-bounty ledge, especially if they haven't done a whole lot of their own homework and testing. Organizations are always going to have competing needs when it comes to spending their security dollars, and I think, from a holistic view, bug bounties are not going to be the 100% perfect answer for making people more secure. You cannot “bounty” your way to being secure, the same way you can't “penetration test” your way to being secure.
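As a back-of-the-envelope comparison, the Hack the Pentagon figures quoted above reduce to a cost per reported vulnerability. The vendor report rate below is a rough midpoint of ‘maybe two or three bug reports a month,’ and the contract cost is taken as exactly one million dollars a year, so these numbers are illustrative only:

```python
# Figures quoted in the episode.
bounty_total = 75_000        # dollars paid out in the 21-day pilot
bounty_reports = 138         # vulnerabilities reported

vendor_annual_cost = 1_000_000   # "more than a million dollars a year"
vendor_reports_per_month = 2.5   # midpoint of "two or three" (an assumption)

cost_per_bug_bounty = bounty_total / bounty_reports
cost_per_bug_vendor = vendor_annual_cost / (vendor_reports_per_month * 12)

print(f"bounty pilot:  ~${cost_per_bug_bounty:,.0f} per reported vulnerability")
print(f"usual vendors: ~${cost_per_bug_vendor:,.0f} per reported vulnerability")
```

Even on these rough assumptions the per-report cost differs by roughly a factor of 60, which is the economic point being made; it says nothing about the relative severity or value of the reports, which is Moussouris's other caution.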
33 minutes | 4 years ago
Allison Miller on making security better and easier for everyone
The O’Reilly Security Podcast: Focusing on defense, making security better for everyone, and how it takes a village.

In this episode, I talk with Allison Miller, product manager for secure browsing at Google and my co-host of the O’Reilly Security conference, which is returning to New York City this fall. We discuss the importance of having an event focused solely on defense, what we’re looking forward to this year, and some notable ideas and topics from the call for proposals. Here are some highlights:

Focusing on defense

When we created the O’Reilly Security conference, we took a risk because we said, "We're going to focus on the defenders, the folks who are protecting the users and the systems." I heard from others over and over again, "How are you going to make a whole agenda out of that?" because defense is usually one track at a major security event, or a handful of talks on authentication or SIM technology. At some security events, someone who works on the defense side can feel a little under attack, because that's what's being discussed—attacks, and how people are not successfully defending against them. This was more like, "Hey, you know what? Let's sit down and talk about how to do this right." That engendered a different spirit of dialog amongst the participants.

Learning from mistakes to make things better

Thematically, we picked some pretty broad topics for the conference, like the effect security has on people. Additionally, most defensive work in the private sector happens in the context of a business, so understanding how security fits into the larger business unit is critical. And when it comes to technology itself—talking about the tools and also data, metrics, analysis, and that side of it—those are broad topics with plenty of room to explore. We’re also making room for more war stories, more discussions about learning from the trenches—big “Oops” moments and how those get turned into lessons learned and concrete improvements.
The real emphasis in the discussions we’re looking for is, “Let's make things better.” When something bad happens or a mistake is made, that means you can push off from the wall, like you're doing a kick turn in swimming. It gives you something to push off against, redirecting your effort to get to the other end of the pool and to get better.

On the horizon for O’Reilly Security conference 2017

For this year’s call for proposals, I would like to hear about what people are doing for end users. That's my personal passion. I am also interested in hearing from people who are putting their big data to work for them, who’ve figured out how to quantify impact, or measure or analyze complex systems or situations, and distill those down. Reasonable approaches for small businesses are also a hot topic, because a lot of the techniques we talk about, like designing security into systems for end users or leveraging data in clever ways, scale up far more readily than they scale down. It's not even a question of resourcing: when you are an organization with a smaller footprint, some of the techniques used at large, high-scale organizations just aren't going to work.

It takes a village

Security is interdisciplinary because, ultimately, it's not just a technology problem—if it were just a technology problem, we would be done by now. We would just apply the right technology to the technology problem, and we could all go home. But it's a human problem, because the actors are humans, they are motivated, and people are a vector of vulnerability just as much as the systems and data are.

Related resources:
6 ways to hack the O’Reilly Security Conference CFP
This is how we do it: Behind the curtain of the O’Reilly Security Conference CFP
14 minutes | 4 years ago
Scout Brody on crafting usable and secure technologies
The O’Reilly Security Podcast: Building systems that help humans, designing better tools through user studies, and balancing the demands of shipping software with security.

In this episode, O’Reilly Media’s Mac Slocum talks with Scout Brody, executive director of Simply Secure. They discuss building systems that help humans, designing better tools through user studies, and balancing the demands of shipping software with security. Here are some highlights:

Building systems that help humans

We tend to think of security as a technical problem and the user as the impediment to our perfect solution. That's why I try to bring the human perspective to the community. I think of human beings as the real end goal of the system. Ultimately, if we aren't building systems that meet the needs of humans, why are we building systems at all? It's very important for us to get out and talk to people, to engage with users and understand what their concerns are.

Designing better tools through user studies

A powerful tool you can adopt when talking to users is the cognitive walkthrough. In essence, you ask them to tell you what they're thinking as they're thinking it. So, if you're going to do a cognitive walkthrough for an encryption program, you might say, ‘I'd like you to encrypt this email message. Please tell me what you're doing as you're doing it and all of the thoughts that occur to you.’ You might hear someone say, ‘Oh, wow, okay, so I'm going to encrypt. I don't really know what I'm doing. I'm going to start by pushing this button because that looks good. That's green. I'm going to push that.’ You can really hear the thought process that people are going through. If you're in a more formal user study context, it can be useful to get the user's consent to videotape—not necessarily the person, but the screen—and see what they're doing, because then you can play it for your colleagues.
This is one of the most convincing ways you can make a case that your tool has problems or needs improvement. Just by videotaping people trying to use a tool and showing the challenges they face, you can identify ways to improve the user experience.

Balancing security with shipping software

Given my human orientation, I view software as a process, not a product. So, what are the human processes you can build in to make sure the security goals are met? To that end, you should be thinking about your developers and the people who are trying to get your software out the door. As human beings, what are the psychological components that you, as an engineering manager or a security advocate within your organization, can instrument to incentivize them to focus on security? It's a continuous effort, which makes it hard. It's challenging. But just like any kind of technical debt, if you don't chip away at it little by little, over time it will grow into a mountain.
© Stitcher 2021