The practices of continuous and "shift-left" security attempt to reframe security away from being a gate after development, instead making teams responsible and informed guardians of their systems and user experience. InfoQ talked to Laura Bell, a leading advocate and teacher of continuous security, about her recent talks at QCon London, her professional security journey, and how she continues to champion team-owned security.
Bell is a founder of SafeStack and co-author of the O'Reilly book Agile Application Security. She hosted QCon London's Security track and delivered both a second-day keynote and a talk on how unqualified fear can have negative repercussions on architecture choices. Her keynote, titled Guardians of the Galaxy: Architecting a Culture of Secure Software, discussed the need to weave security into a product development team of guardians equipped with the tools to enable rapid and safe innovation.
In her talk, Bell presented a message of moving security away from being fear-driven towards being a considered and continuous part of rapid innovation. She described a tendency to introduce visible, experience-jarring gatekeepers such as CAPTCHA, rather than building seamless validation on the server side. In a microservices world, she pointed out, boundaries are less predictable, so we need to move away from gatekeeping towards a pattern of guardians that monitor and respond to our services.
Intentionally expressing security concepts in generally accessible terms, Bell explained during her talk that the important thing about being a security person today is being able to tell stories and collaborate, "to go into a team using an analogy to bring about an insight."
InfoQ's Engineering Culture and Methods Trends Report for 2018 classified shift-left security as being in the innovator phase of our adoption curve. Consistent with this, Bell described continuous security as blending appropriate safety into the development process, making it repeatable, automated, team-owned, integrated and scalable.
InfoQ: What were the main messages you wanted people to take away from your keynote?
Laura Bell: Make do with what you have. Don't aim for perfect. Aim for bringing security into things you're already doing.
The keynote's theme was about being conscious that when we're fixing security issues and trying to integrate security into everything we do, this requires us to architect a culture, the same way we'd architect everything else.
We talked about the principles of security at speed, our operating contract and how we find new people and work together.
InfoQ: You also talked about data-driven security. How can data be used effectively for security?
Bell: It's all about knowing more about what's happening security-wise. If a client finds a bug in their mobile apps, what information do we need to know to find out how many people this is affecting and how quickly those people are no longer vulnerable? Can we detect that it is being misused by looking at changing behaviour?
For instance, in Android we are always asking: what versions and what devices? We look at who is running the new version and who is running the old version. After we announce that a new version is available, how quickly does that curve go up? Apple released some interesting stats about the adoption of new iOS versions, revealing that if they released new emojis, people would upgrade.
Security is never the reason most people update. They aren't that interested. Historically we find a bug, release a fix and say our job is done. From my point of view our job is not done; not until that fix is in front of as many of our users as possible.
InfoQ: Your two talks discussed how unqualified fear can skew the perception of risk. What can teams do to ground their fear and meaningfully prioritise based on risk?
Bell: Michael Brunton-Spall's talk on Attack Trees presented a good tool for this.
It's about taking the conversation much further than "X is going to happen and it's going to compromise our database." Take it into the specifics: how would they do this? How hard is it? Is it possible? What would they do with that information?
When I teach, I ask a group of people what the most valuable thing in the room is. They always answer that it is themselves. We move on to company secrets. If you are a malicious person, what are you going to do with these? Sell them to the competition. What happens if you contact them?
They might say they'd sell it on the Dark Web. Which may be right, but the Dark Web is full of all sorts of terrors, and you don't just walk in there with a dataset and sell it. Getting them to go down this pathway is about really understanding the realities of what you do and don't know. It becomes more concrete. There isn't just a villain in another company whom you can call on the villain number.
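The attack-tree technique referenced above lends itself to a compact sketch. The node names and cost figures below are invented for illustration, not from the talk: OR nodes model alternative attack paths, AND nodes model steps that must all succeed, and the attacker's cheapest route falls out of a simple recursion.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An attack-tree node: a goal plus the sub-steps needed to reach it."""
    name: str
    kind: str = "leaf"      # "leaf", "or" (any child works), "and" (all children needed)
    cost: float = 0.0       # estimated attacker effort for a leaf
    children: list = field(default_factory=list)

def attack_cost(node: Node) -> float:
    """Cheapest route for an attacker: OR picks the easiest branch,
    AND must pay for every branch."""
    if node.kind == "leaf":
        return node.cost
    costs = [attack_cost(c) for c in node.children]
    return min(costs) if node.kind == "or" else sum(costs)

# "Compromise our database" broken down into concrete, comparable paths
tree = Node("steal customer data", "or", children=[
    Node("phish an admin", cost=3),
    Node("exploit and exfiltrate", "and", children=[
        Node("find SQL injection", cost=5),
        Node("bypass egress monitoring", cost=4),
    ]),
])
```

Walking a team through such a tree makes the conversation concrete: here, phishing an admin (cost 3) is cheaper than the combined exploit path (cost 9), so that is where the defensive effort should go first.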
InfoQ: How can security specialists and UX work effectively together to build really good products for people?
Bell: Firstly you go listen to UX. Don't talk at them. Understand their world a bit. I've worked with UX teams and asked, "how would you write a phishing attack which looks genuinely like it's from your company?" They get it. UX people also love data. They love knowing where on the page people have clicked and the conversion rates between features. They measure everything, it's fantastic. If you add extra measurements and surface them, you can make them more aware of the impacts of their decisions. What was the impact on average quality of passwords when we made some change to the password interface? Do we see their behaviours changing?
I really like what Slack does right now for their login challenge. If you log in on your mobile app, it says, "we assume your password is hard; if you don't want to type that in right now, we'll send you a magic link." So you don't have to sign in with a password at any time in Slack. When they have an outage, their spare developers go on social media and handle calls. It's a very human-centric security model. It's about understanding the motivation of users.
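For readers curious how a magic-link flow might work under the hood, here is a minimal, hypothetical sketch using only the Python standard library. It is not Slack's implementation; a production version would also make tokens single-use, store them server-side or bind them to a session, and deliver the link by email.

```python
import hmac
import hashlib
import time
import secrets

# Server-side signing key; in practice loaded from a secret store
SECRET = secrets.token_bytes(32)

def make_magic_token(email: str, ttl: int = 900) -> str:
    """Create a signed, expiring login token to embed in an emailed link."""
    expires = str(int(time.time()) + ttl)
    payload = f"{email}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_magic_token(token: str):
    """Return the email if the token is authentic and unexpired, else None."""
    try:
        email, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    payload = f"{email}|{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    if time.time() > int(expires):
        return None
    return email
```

The design choice worth noting is that the server never needs the user to type a secret at all: possession of the emailed link, within its time window, is the proof of identity.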
Microsoft put out a post a few weeks ago saying they were going to get rid of passwords from Windows. I'm hoping they are going to be more human-centric in that.
InfoQ: How are such moves received by security professionals?
Bell: There is a familiarity with the way we do things. If you tell a security person "we're going to get rid of passwords," they get butterflies at that point.
I do think that progress is being made on the server side, away from the users. With monitoring and response, heuristics and machine learning to spot patterns, we can be more user-focused on the customer side. We aren't just a username and password. We can assume that's been compromised by now. What are the times of day we connect at? What order do we normally do things in the application? What music are you listening to when you do this action? Is there a pattern? If we can tell what a normal version of Laura is, why don't we respond to it not being a normal version?
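A toy version of the behavioural signal described here, flagging logins at unusual hours for a given user, could look like the following. This is a crude z-score heuristic invented purely for illustration (it even ignores midnight wraparound); real systems combine many such signals.

```python
from statistics import mean, stdev

def hour_anomaly_score(past_login_hours, current_hour):
    """How many standard deviations the current login hour sits from
    this user's historical pattern. Higher means more suspicious.
    Note: naive about the 23:00 -> 00:00 wraparound."""
    mu = mean(past_login_hours)
    sigma = stdev(past_login_hours) or 1.0  # avoid division by zero
    return abs(current_hour - mu) / sigma

# A user who normally connects around 9-11 in the morning
history = [9, 10, 9, 11, 10, 9, 10]
```

A login at 10:00 scores well under one standard deviation, while a 3am login scores far above three; the point is that "a normal version of Laura" becomes something a service can measure and respond to.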
The funny thing is that in security we need to justify why we're doing this. We have all this data and we're going to put it all together for your safety. In marketing, they have been doing it for years.
InfoQ: In your keynote, you described an approach to hiring where candidates would be asked to plan a burglary to establish cultural fit. How well does this work in practice?
Bell: We tend to get the right fit. We've learned a lot about how people behave in the interview processes. If you bring an introverted person into an interview and say "rob a bank", that does not sit well. It tends to favour extraverted people.
The first few times, we didn't account for people not wanting to fail. It's an interview and you want the job. How do you prove you are awesome when you're making stuff up on the fly, there's no engineering, and you've got no textbook to help you? It's not a test, so we coach them. Once they relax a bit, they realise it's OK to fail. It's exciting and energetic. You can see the person underneath it all. If they focus on the physical things, that tells you something. If they focus on the computers, that tells you something else.
InfoQ: In acting as host for the Security Track at QCon London, have you recognised any particular themes arising from the audience and speakers?
Bell: There is definitely a lot of curiosity and a lot of good, practical 'how do I do this?' questions coming from the audience. We've gone past the stage of 'what does this even mean?' to 'I'm trying to do this, help me?' I think we've reached a maturity space.
InfoQ: Do you have any positive examples of industries where you've seen a good balance of innovation with a risk-based understanding of security?
Bell: The payments industry has some really great teams. I've seen full unit test suites in Ruby using immutable architecture. Not on the code but on infrastructure as code. They've merged this with predictable security testing.
There are giant telcos rolling out security champion programs to 2000 developers. It's all well and good to run such a program across 50 developers. How do you scale that to seven countries across 2000 devs? It's not all in the countries you'd expect.
InfoQ: What can organisations do to emulate these patterns?
Bell: We've all come into security from different places. Some of us were firewall engineers, software developers, or risk and compliance people. We're from all sorts of places and don't always have an understanding of how teams are working now. We need to build some bridges internally to make sure security is pulled in and not pushed out, and that the people in security teams are given as much education and training as our development teams. It has to come from the top down and there has to be management buy-in.
There have to be baby steps. No one wakes up one morning and has a Spotify tribe model. That is the result of doing lots of little bits of work. Go do the little bits of work. Build some bridges. Make some friends. Figure out where your pain points are. Measure all this stuff.
InfoQ: What are some of the other projects you're currently working on?
Bell: It's really easy to do a risk assessment when your system is deterministic. In the AI talks, there is the notion that you can introduce bias if you put bad data in. How can you rationalise and predict risk when you can't predict what the system is going to do at all? Here's my machine-learning black-box component. I want to see what tools we can build to help do threat assessment in that space and understand how you can measure how badly it can go.
Also, I'm a people person. I started being very technical early in my career. Now a lot of what I do is culture and communication. I would love to do some more work with people like UX people and lawyers. To try and understand where the crossovers are. The more we cross-pollinate with the other industries the richer we become. I essentially want to make friends with all of these fields and say "here's my world, tell me about yours!"
InfoQ: What advice would you give to individuals who want to begin their own journeys into security?
Bell: Security is a massive field. Watch a ton of conference talks: from QCon, Black Hat and the many other conferences over the years. If you're bored after 10 minutes, stop it there.
Find out which of the vast array of these technologies and techniques motivates you. Ask "what if?" Go deep on that. Passion drives effort and motivation. If you've got passion, passion is contagious. You can share that interesting talk with people.
There are some famous talks: Barnaby Jack in 2010, when he exploited cash machines live on stage in Vegas; Charlie Miller in 2015, when he stopped a Jeep on a highway; Apollo Robbins' TED talk, where he shows you misdirection. You'll watch the video twice, I promise you.
All of those things have lessons you can bring to your engineering role. Go find passion and excitement.
Additional information on Laura Bell's QCon London 2018 talks can be found on her profile page on the conference website.