RSA Day 3
(Posting this a day late as I was crazy exhausted yesterday after walking nearly ten miles! I literally lay down in the room at 22:30 and woke up at 04:30 still in my clothes, lights on, etc. I think I was effectively conferenced out, and that was only Day 3!)
Great tracks today and some exciting notes. Plus I got to hit the Expo floor. Here are the talks I made it to:
Teaching Software Engineers to Threat Model: We Did It, and So Can You - Jamie Dicken, New Relic
Another Digital ID: Privacy-preserving Humanitarian Aid Distribution - Wouter Lueks, Faculty, CISPA Helmholtz Center for Information Security
Web Application Hacking 101 - Look Mom No Tools - Joseph M. (I’m not going to name him as I have poor thoughts to share below.)
Let's break down the classes. I thought there was some great info today.
Teaching Software Engineers to Threat Model: We Did It, and So Can You
Jamie did a great job of showing how a team of thoughtful and intentional engineers who are willing to partner with their stakeholders and audience can really push far left and deputize developer users to be security champions.
Her team was facing the same challenge many of us face: lots of security reviews, few security reviewers, and an increasing backlog of potential risk. So they tried what many of us have probably tried: split the security review into chunks and move some pieces earlier in the process. But, like any body of work we split, we end up with twice the number of pieces, the same volume of work, and the same capacity.
So, if the volume of work is going to stay the same, we have to find a way to either cut down on the volume or increase the size of our funnel. I don't know about you, but headcount isn't in abundance these days, so a bigger funnel wasn't going to work. Unless... what if we could add people to the funnel? People who were experts in their domain and product, and who could be taught security and threat modeling?
Enter the Citizen Security Engineer, or Developers that we put in funny hats.
By teaching developers to build their own threat models, we could not only limit the number of threat models our security engineers have to build, but we could also have the fix agents find their own problems! This was a win-win, but how to get there?
Here’s the model as Jamie provided it.
1) Interview your Software Developers and get feedback about the current process and their concerns about the idea. E.g., "What happens if we do a bad job at the threat model?"
2) Interview your Security Engineers and get feedback about the current process and their concerns about putting the burden on the Software Engineers. E.g., "What happens if the Software Developers do a poor job?"
3) Begin training development.
3a) Which format works best? Computer Based Training (CBT)? Live learning? Left-seat-right-seat?
3b) Determine methodology. Do we use STRIDE? DREAD? PASTA? Jamie's team decided to utilize the STRIDE model.
But what if someone is already familiar with a different model and wants to use that? That was OK: New Relic adopts the Golden Path model and allows that person to threat model on their own templates and diagrams as long as the final purpose is served.
3c) What tools to use? The team decided that paid tools and more complexity weren't the way to go. So LucidChart, Visio, whatever they were already using, let them keep using it. Low tech for now.
3d) When to do it? Do we do it based on feature? Or user story? I can tell you from my experience in infrastructure, as opposed to dev, we do it when two systems want to talk to each other or when a new system is implemented. E.g., there's a lot less risk involved in giving Bob access to Alice's already-implemented service than there is if Bob is going to give Alice's service programmatic/API-driven access to his service.
4) Map the Software Development/Procurement/Architectural process (whatever process corresponds to your implementation or release), and decide where you put the "security gate" of a threat model. They decided to put theirs in the change design document because of the type of development they did. I've placed mine in the change and procurement procedures.
5) Define the new workflow. Pretty simple: decide how the security gate is going to work. Where do they submit it, who reviews it, what feedback do they get, how do they get help?
6) Define the template. What info has to be there? If they don't follow the Golden Path, what's the minimum viable product that represents a thorough threat model?
7) Pilot with a few teams. Pick out your best and worst candidates for this process. Find the super-savvy almost-tech guy and the new grads. Have them run through the process. Where does it break? Give them the training. Do a threat model. Give feedback. Fully support the pilot users asynchronously at all times. Time-limit the pilot.
7a) Collect the feedback and adjust the pilot. Rerun if necessary. Some common points of feedback: training format/content, support, process/workflow, overall Net Promoter Score (not a true NPS, but some kind of rating system).
8) Get to it. New Relic assigned training to the target audience, gave a voluntary compliance period (to break in teams and adjust processes), then set a mandatory date.
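To make step 3b concrete, here's a minimal sketch of what a STRIDE pass over a design produces: one question per component per threat category. The component names and question wording are my own illustrative assumptions, not New Relic's actual template.

```python
# Minimal STRIDE worksheet generator. For each element of a (hypothetical)
# design, ask one question per STRIDE category; the answers become the
# threat model. Component names and questions are illustrative only.

STRIDE = {
    "Spoofing": "Can someone pretend to be this component or its users?",
    "Tampering": "Can its data be modified in transit or at rest?",
    "Repudiation": "Can an actor deny an action because we lack audit logs?",
    "Information disclosure": "Can it leak data it holds?",
    "Denial of service": "Can it be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def threat_model(components):
    """Return a worksheet row for every (component, category) pair."""
    return [
        {"component": c, "category": cat, "question": q, "answer": None}
        for c in components
        for cat, q in STRIDE.items()
    ]

worksheet = threat_model(["web frontend", "orders API", "billing database"])
print(len(worksheet))  # 3 components x 6 categories = 18 rows to answer
```

Low tech, as in 3c: the output could just as easily be rows in a spreadsheet next to a LucidChart diagram.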
What happened as a result? They saw developers and implementers start to identify and modify risky designs before security teams even got to them. This has been reflected in my own experience: implementers more clearly understand our security requirements, and instead of building mitigations that modify the risk, they remove the risky behavior or design decision altogether.
Remember, your process won’t be perfect, but it will be better than NO threat models.
Another Digital ID: Privacy-preserving Humanitarian Aid Distribution
Wouter Lueks had one of the most underrated talks at RSA and probably one of the most prescient considering Russia/Ukraine, Israel/Hamas, and soon China/Taiwan and possibly N. Korea. Wouter is a post-doctoral researcher at CISPA Helmholtz Center for Information Security in Saarbrücken, Germany and has developed a system for tracking humanitarian aid registration and distribution while protecting vulnerable populations from the collection of sensitive biometric data.
The more salient points are that the system utilizes smart cards to record a biometric, which stays on-device and is matched to a cryptographic key associated with that household or other unit. This key can be copied to every household member's card, so that any member can collect aid at a time after distribution. More importantly, the card can also record the entitlements (e.g., bags of rice, amount of baby formula, etc.) the bearer is entitled to AND record in the distribution system if a key has ever been presented, preventing double-dipping.
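A toy sketch of that double-dipping check, under heavy simplifying assumptions: the real system uses privacy-preserving cryptography, whereas here the shared household key is just an opaque token and entitlements are plain counters. The names and structure are mine, purely for illustration.

```python
# Toy model of the "has this key ever been presented?" check described
# above. NOT the actual cryptographic protocol: the household key is a
# plain token here, and the registry sees it in the clear.

class DistributionRound:
    def __init__(self, entitlements):
        # entitlements: household token -> {item: quantity}
        self.entitlements = entitlements
        self.redeemed = set()  # tokens already presented this round

    def collect(self, token):
        """Any household member's card presents the shared token; the
        first presentation succeeds, later ones are refused."""
        if token in self.redeemed:
            return None  # double-dipping: key was already presented
        self.redeemed.add(token)
        return self.entitlements.get(token)

rnd = DistributionRound({"household-A": {"rice_bags": 2, "formula_tins": 1}})
print(rnd.collect("household-A"))  # {'rice_bags': 2, 'formula_tins': 1}
print(rnd.collect("household-A"))  # None: same key, refused
```

The point of the real design is that this bookkeeping works without the registry ever learning the biometric, which stays on the card.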
I would highly recommend you view the paper by the same name, or seek out Wouter's recorded presentation. He was very practical about the limitations of his technology, which was appreciated. And he was a great interactive speaker who ensured that the audience understood the difficulties, realities, and benefits of the system he was proposing without drowning us in cryptographic proofs.
We even got to have a short conversation about using such a system for voting registration and anonymity in dangerous areas (think of the Afghan elections, where people's hands were dyed purple, which resulted in their targeting and death) and in systems where a detrimental situation doesn't occur if the technology breaks (like an established welfare/commodities distribution platform).
I’m very excited to see where Wouter and his team take this work.
Web Application Hacking 101 - Look Mom No Tools
This was the only learning lab I had the opportunity to attend this week and one of the only actually disappointing ones. While I was frustrated with the configuration and execution of the class, I was introduced to bWAPP: an extremely buggy web app that lets users safely attempt a variety of web-based attacks without any special tooling.
My main criticism of this class was the multitude of spelling, grammatical, and logical errors in the documentation, which made it extremely difficult to follow, along with a failure to crawl, walk, run. The presenter had the opportunity to introduce HTTP, explain the HTTP methods, and then explain how one abuses them. Instead, I received the lab guide and was essentially told to read through it and teach myself. So after around an hour, many other members of the lab and I exited the room.
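That crawl-walk-run step could have been as simple as this: an HTTP request is just text, and "no tools" hacking means editing that text. A minimal sketch using only the standard library; the host, path, and injected parameter are made up for illustration, not taken from the lab guide.

```python
# An HTTP/1.1 request is just lines of text. Build one by hand, then show
# the one-character edits an attacker makes. Host and parameter are
# fictional examples, not from the bWAPP lab.

def build_request(method, path, host, body=""):
    lines = [
        f"{method} {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
    ]
    if body:
        lines.append(f"Content-Length: {len(body)}")
        lines.append("Content-Type: application/x-www-form-urlencoded")
    return "\r\n".join(lines) + "\r\n\r\n" + body

# The request a browser would send:
normal = build_request("GET", "/search?title=Iron+Man", "vulnerable.example")

# The "attack" is just editing the text: a classic SQL-injection probe
# appended to the same query parameter.
probe = build_request("GET", "/search?title=Iron+Man'--", "vulnerable.example")

print(normal.splitlines()[0])  # GET /search?title=Iron+Man HTTP/1.1
print(probe.splitlines()[0])   # GET /search?title=Iron+Man'-- HTTP/1.1
```

Paste either into a raw TCP connection (netcat, telnet, or Python's socket module) against a lab target like bWAPP and you're doing web hacking with no special tooling at all.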
Feeling disappointed, I set out to see the conference floor and was not disappointed. I got to stop in at a number of vendors, met some of my favorites, saw really cool technologies like Tailscale and Thinkst Canary, and even got interviewed by Panther.