
EP 7 – Chaitanya Bhatt: How Credit Karma Scales Their AppSec Program


Credit Karma is expanding rapidly, and a huge focus for them is having a truly agile engineering team. Application security has also been a focus, and their ratio of AppSec engineers to developers is 1:50, one of the best ratios in the industry.

On today’s show, Credit Karma’s Director of Application Security, Chaitanya Bhatt, shares exactly how he has tackled modern application security. Chaitanya’s perspectives and expertise come from his first-hand experience leading security teams at organizations including eBay, AppDynamics (acquired by Cisco), and Autodesk.

Key Findings: 

  • How enforcing security policies earlier in the software development lifecycle can make security easier for developers.
  • How to build an AppSec program that’s capable of keeping up with the pace of software development.
  • Why having an embedded partnership model between application security engineers and developers is critical — and how to logistically manage making this a reality.
  • What most organizations get wrong with their security champions programs and how to make your program more impactful.
  • How to incentivize developers to invest their time and effort into security-related tasks.
  • Why shifting left is not enough, and where building security-as-code comes into play.

Resources: 

Episode mentioned — Travis McPeak: Securing the Modern SDLC with Security Guardrails

Transcript
Harshil: Hello everyone, and welcome back to the Future of AppSec. On today's episode, we have with us Chaitanya Bhatt, who is the Director of Application Security at Credit Karma. Now, prior to Credit Karma, Chaitanya was leading product security teams at companies like eBay, Symantec, and Autodesk. So he's definitely spent a lot of time building AppSec programs. Chaitanya, welcome to the show.

Chaitanya: Thank you, Harshil. Thanks for having me.

Harshil: Chaitanya, we've talked a lot about the amazing things you've done in AppSec and ProdSec teams in your current company and before. Tell me a little bit about your journey. How did you come into this leadership role at Credit Karma leading AppSec teams? What did you do before this role?

Chaitanya: Sure, Harshil. So I came to the US for my Masters, I did my Masters in computer engineering with a security specialization. Security has always been a fun thing to do, something I was always passionate about, writing small scripts back in India. After my specialization, I started my career as a security analyst with Symantec, and slowly got to know more about the organization, the industry, where security fits in, and the importance of cybersecurity. My next role was with Autodesk, where I started getting into the product security space as Autodesk was moving away from shipping CDs to the 360 cloud world. That's when I joined, got to learn a lot, and then moved to eBay, one of the leading e-commerce sites, where I got even bigger exposure, we built a lot more AppSec programs, and we did some really innovative AppSec work. That was an excellent learning opportunity. And now I'm Director of Application Security at Credit Karma, where we are responsible for protecting member and partner data.

Harshil: That's amazing. I mean, those are definitely some of the best AppSec teams, or security teams in general, in our industry. So now when you think about AppSec or ProdSec within your organization, what is the mission of your team? How do you categorize what the AppSec team is responsible for?

Chaitanya: So here at CK, we're responsible for protecting member and partner data by fostering secure engineering practices throughout the development life cycle.

Harshil: Right. So what does that mean, though? In terms of engagement with the rest of the organization, do you typically work with the internal software development teams? Do you work with IT teams? What does that look like?

Chaitanya: So as I mentioned, the objective is to have secure products in the development pipeline. So as an AppSec team, our responsibility is to engage with our developer community, our engineering community, at every stage of the software development life cycle and turn that into a secure software development life cycle such that secure applications are deployed.

Harshil: When you're tasked with something important like converting a software development lifecycle into a secure software development life cycle, I'm guessing it depends on how the engineering organization or development organization actually does their software development, right? So since we are adding security into their existing process, tell me a little bit about what their process looks like. Is it very waterfall? Is it very agile, hybrid, mix of it? What does that look like if you can paint a picture?

Chaitanya: So most companies have already started moving towards agile, and the newer companies are running much faster. CK has always been agile because we still run with a startup culture, we move faster, we follow smaller scrums, and we're fully in an agile model.

Harshil: Right. So is that several releases per day, several releases per week, several releases per month? What does that frequency look like?

Chaitanya: So we do have a velocity program to make sure the developers run as fast as they can. There have been teams pushing changes every day, and there have been teams pushing changes every week. But overall, I can definitely say it ranges from a day to a week, a lot faster than the previous organizations I've been at.

Harshil: Yeah, and one of the fundamental problems I see all the time is that engineering teams are moving faster and faster, which is obvious if you look at most modern organizations, but security teams are really not structured to keep up with it, right? We're still getting left on the sidelines of the engineering and development processes in a lot of cases. So what's probably needed is for us security professionals to also transform our processes, the way we do things, to be more agile friendly. And that's where the topic of how we engage with development teams, how we engage with developers, that part of the security process, comes into the picture. Do you have any views on that engagement model between AppSec and development organizations, what works and what doesn't work, in your opinion?

Chaitanya: Yeah, absolutely, Harshil. So the development world has been evolving ever since, and different organizations have been following whichever model fits them best. Scaling application security is always going to be one of the biggest challenges with a faster-moving development life cycle. And this is where your engagement model becomes super important, because as I mentioned, some of my previous organizations had a ratio of 1 to 100, one AppSec engineer for at least 100-150 software developers. It's hard to scale, hard to match that speed. And that's where your engagement model comes in and becomes your first stepping stone towards scaling your AppSec.

Harshil: Right.

Chaitanya: There are different perspectives on the engagement model, but something I personally believe in strongly is having a more embedded partnership model, where our Application Security engineers, even though they are part of AppSec, try to be a part of the engineering teams: they attend their scrums, their meetings, their planning sessions, their retrospective sessions, and they learn about the domain. We hire them as subject matter experts in specific verticals of AppSec, but as they start working with the software engineering teams, they become domain experts who understand the whole domain space. And at some point, AppSec moves away from AppSec as consultation to AppSec as a driver.

Harshil: So this is very interesting to me, and one of the very tactical challenges that we ran into in my previous life when we were implementing a similar model is we had this fantastic idea of “Let's have Security Champions on every team, and also let's have AppSec team members who will attend these scrum meetings and retrospective meetings”, right? The problem became that every single Dev team had their sprint planning meeting and Scrum meeting at exactly the same time on Monday mornings at 09:00. So how do you logistically manage that, right? Because you don't have the same number of AppSec engineers as the number of teams. That's a very small tactical question, but the bigger point is how do you map a limited number of AppSec engineers to the many Dev teams they have to represent?

Chaitanya: So the first step to this… first of all, setting up this kind of engagement model is a massive cultural shift. It takes time to get to that cultural change in the organization, and it's a slower process. But what we have done is we have verticalized. We have created various verticals and tried to group them so that one person is responsible for an umbrella of verticals that are closely related and closely associated with each other, where there's a lot of interaction and cross-collaboration between them. So a lot of identification goes into the planning of these verticalizations. And then we also create tier-based verticals to identify these are the top-tier critical systems, these are some tier-two systems, and ask how much involvement and how much engagement each one requires. Some teams are super security conscious; all they need is some guidance and direction. Some teams require more hand-holding and more consultation. So those are the various factors that go into the planning while you are verticalizing and assigning security engineers to each vertical.

Harshil: Right. Yeah, I'm guessing you'd have to spend a lot of upfront time planning those things, and also take into account the personalities of people, who they work closely with, and who they have better relationships with. Do you ever run into a situation where you've assigned an AppSec person to a group of teams and the teams turn back and say, “Hey, this security engineer is dedicated to our teams, why don't you have that person fix the bugs in the first place?”. Has that ever happened to you?

Chaitanya: That happens a lot. And we have always heard this in AppSec: “When can AppSec start fixing bugs for us? When can AppSec start resolving things for us?”. The challenge is that the agile world brings so many changes into the whole development pipeline because they're running so fast with it. We have always been very clear about the segregation of duties and segregation of expertise, rather than messing up or breaking any of their faster-moving cycles. We determine this very carefully, and the expectations are set as: you are the experts in this area, we have expertise in this area, and this is how we communicate and divide our roles and responsibilities.

Harshil: Right. So I guess that is one model, where you've dedicated AppSec engineers to a certain group of teams. Do you also have a Security Champions program in place, where there's a developer who's tasked with doing some security-related things?

Chaitanya: Yeah, we do. We do have a Security Champions program, and that's one of the programs that we leverage a lot in order to scale AppSec and have a better engagement model.

Harshil: Yeah. And there is a fundamental difference in how a lot of people think about the Security Champions program, because a lot of them think that Security Champions are people who are trained better on security, and that's where it ends. Are you doing anything more than that in terms of having them do things for security, towards the function of security? Maybe not for the team, but are they tasked with anything other than getting that special extra training?

Chaitanya: Yes, they are. So one of the challenges we always face with Security Champions programs is that the trainings tend to be generic security training, just to teach them why we need security. But after a certain point, I strongly believe that the Security Champions program must evolve into creating more company-specific trainings, where the developers can learn how to do false-positive analysis on your SAST tools, or how to internally pick up some of the low-hanging fruit of pen testing during your own internal bug bash. That's where they start contributing to us. If there are 120 releases a week, we cannot have 120 pen tests being done. So that's where we leverage our Security Champions: we train them, we teach them, and at some point we start recognizing them and providing incentives for doing pen tests, at least for the low-hanging fruit or for the less critical systems.

Harshil: Right. Do you have a good example of how you incentivize them to spend more time? Because I'm guessing their sprint will be decided by a product manager or an engineering manager who asks them to spend time towards building features and stuff like that. How do they get incentivized to spend time on security?

Chaitanya: So here at CK, we have our own Security Champions portal where we track ranks and award points. We created a whole framework around the Security Champions program, where whenever any of our Security Champions submits a security vulnerability, or fixes a vulnerability, they are assigned points for it. And with that scoring system, we have created a kind of leaderboard that is open to the rest of the company, so you know these are the top leaders from these teams. So not just the engineers, but their leaders are also recognized for being security conscious and keeping so much education and awareness around security for their teams too. That has been showing great results for us so far, and we are starting to see more engagement where people come in and engage with us. They want to learn more, and they come up with some very valid questions. And at a certain point we want to see more bugs, more pen testing, and more issues coming from our Security Champions.
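
To make the idea concrete, here is a minimal sketch of the kind of points-and-leaderboard tracking Chaitanya describes. The activity names, point values, and class names are hypothetical, for illustration only; they are not Credit Karma's actual portal or scoring.

```python
from collections import defaultdict

# Hypothetical point values per activity -- illustrative, not CK's real scoring.
POINTS = {
    "vuln_reported": 10,       # champion submits a security vulnerability
    "vuln_fixed": 15,          # champion fixes a vulnerability
    "pen_test_completed": 25,  # champion runs an internal pen test / bug bash
    "training_completed": 5,   # champion completes a company-specific training
}

class ChampionLeaderboard:
    """Tracks per-champion points and produces a company-wide leaderboard."""

    def __init__(self):
        self.scores = defaultdict(int)  # champion -> total points
        self.teams = {}                 # champion -> team, so team leaders get visibility too

    def record(self, champion: str, team: str, activity: str) -> None:
        if activity not in POINTS:
            raise ValueError(f"unknown activity: {activity}")
        self.teams[champion] = team
        self.scores[champion] += POINTS[activity]

    def leaderboard(self, top_n: int = 10):
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [(name, self.teams[name], points) for name, points in ranked[:top_n]]

# Example usage
board = ChampionLeaderboard()
board.record("asha", "payments", "vuln_reported")
board.record("asha", "payments", "pen_test_completed")
board.record("raj", "identity", "vuln_fixed")
print(board.leaderboard())  # [('asha', 'payments', 35), ('raj', 'identity', 15)]
```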

Harshil: That is amazing. That's potentially the best outcome you can expect from a Security Champions program.

Chaitanya: Exactly. And now we have an excellent way to measure the success of our Security Champions program, by having a tracker and a point allocation system in place that gives us an accurate way of determining how much engagement we are seeing from our Security Champions. So even though we have fewer AppSec team members, we basically replicate ourselves by having Security Champions in place.

Harshil: Right. Fantastic. So we talked about having AppSec team members dedicated to helping a group of identified Dev teams, and you have your Security Champions program in place. Those are two ways engagement between AppSec and development teams works. Are there any other things that you have, maybe with leadership, or any other forms of bringing security and development teams together?

Chaitanya: One of the things that brings us much closer, and that has always been a top priority and how we believe we move a step closer to scaling ourselves, is automating some of our AppSec practices and making sure we shift left on a lot of them. That's a key aspect of our engagement model: engaging with the developer community so that not just our team but even developers can come in and contribute to the automation of how security practices are enforced during their life cycle.

Harshil: So talk to me a little bit more about that. This topic of enforcing security practices earlier in the life cycle is generally what people call shift-left. What does that look like in reality?

Chaitanya: So people always believe that shift-left is just early detection, creating that early information for developers. In very easy words they call it “don't come late to the game”. In a true sense, shifting left is not just early detection but also early remediation. There are ways you can shift as far left as you can. I know GitHub now has a lot of capabilities available with its Advanced Security offering, or you can enable Actions that run your in-house security capabilities when code is committed. You can shift further left by utilizing plugins installed in your IDE, so the security scanning is done while you're coding. It's also about adoption, because as you move left there is a higher chance of less understanding and less visibility, because of this whole agile world I mentioned: more scrums, faster deployments, smaller changes. It's hard to get the overall context of an application, and that could result in more false positives. So you have to find that optimum place in the development lifecycle, and that varies from organization to organization depending on what your CI/CD world looks like.
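
One way to picture the "warn as the code is committed" end of that spectrum is a non-blocking pre-commit hook. The sketch below assumes a hypothetical SAST CLI called `my-sast-scan`; it is a stand-in for whatever scanner an organization actually uses, not a specific product.

```python
#!/usr/bin/env python3
"""Non-blocking pre-commit security check (sketch).

Assumes a hypothetical scanner CLI `my-sast-scan` that prints one finding per
line; it is a stand-in for whatever SAST tool your organization actually uses.
"""
import shutil
import subprocess
import sys

SCANNER = "my-sast-scan"  # hypothetical -- replace with your real SAST CLI

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    if shutil.which(SCANNER) is None:
        print(f"{SCANNER} not installed; skipping security warnings.")
        return 0

    findings = []
    for path in staged_python_files():
        result = subprocess.run([SCANNER, path], capture_output=True, text=True)
        findings.extend(result.stdout.splitlines())

    if findings:
        print("Security warnings (the commit is NOT blocked):")
        for finding in findings:
            print(f"  - {finding}")

    return 0  # always exit 0: warn early, don't slow developers down

if __name__ == "__main__":
    sys.exit(main())
```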

Harshil: Right. So there is this element of how we actually implement controls realistically, and you alluded to this earlier, which is that most people tend to think of shift-left as shifting scanning left, or early detection rather. But obviously, that's not the end goal, the goal being not just to detect but also to prevent things if possible, or remediate as soon as possible, right? So how do you get from just orchestrating a bunch of detection tools earlier in the lifecycle to actually helping the developers write secure code, actually enforcing or implementing controls, standardization, paved roads, guardrails, whatever you want to call it? How do you actually implement those things? Give me a realistic example.

Chaitanya: As you do more reviews, as you move away from manual reviews to more of an automated scanning solution, there's this terminology called “security as code”, where you start verification and validation at the code level. Some things can be enforced at the code level, some things where you can attain some confidence, and some things are more about providing constant warnings and feedback. In my personal opinion, providing constant feedback while developers are coding has always been helpful. Where SAST tools can otherwise act as more of a false-positive-as-a-service, how do we make sure that rather than blocking developers or stopping them from moving further, we start giving them warnings: “Hey, maybe what you're trying to do could result in XSS, or what you're trying to code here could result in an injection”. And obviously, as I mentioned, we have security standards and patterns that have been created, and we have Security Champions in place who are constantly evolving and educating our developers. Giving developers these kinds of warnings while they code helps them reach that level of maturity and gives them an early heads up that this might be a vulnerability, or could result in a potential security issue, as they move forward in the development life cycle. So that has always helped in the enforcement cycle.

Harshil: Yeah. But when you mentioned providing these warnings to developers, are you thinking of something other than static analysis tools, or are you talking about static analysis tools themselves?

Chaitanya: So static analysis tools can feed into this, but these kinds of warnings are more like off-the-shelf recommendations that you'll have to start providing. And again, as I said, with the different environments and different cultures that every organization has, it varies. I have seen some organizations in the past that provided these recommendations every time you commit code. So every time you commit code, they won't stop you, but you can see the list of potential vulnerabilities as you commit. Maybe before you push your code into production, you might be stopped or asked to fix some of the criticals. But it's not that you weren't aware of it; you were already informed about it based on any policy violations or any standard practices that were not followed early in the development life cycle.
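
A minimal sketch of that "warn at commit, block only criticals at the production gate" policy might look like the following. The severity thresholds and stage names are illustrative assumptions, not Credit Karma's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str  # "low" | "medium" | "high" | "critical"

def evaluate_gate(findings: list[Finding], stage: str) -> tuple[bool, list[str]]:
    """Return (allowed, messages) for a pipeline stage.

    Every stage surfaces warnings, but only the production deploy blocks,
    and only on critical findings. Thresholds are illustrative only.
    """
    messages = [f"[{f.severity.upper()}] {f.rule}" for f in findings]
    if stage == "production-deploy":
        criticals = [f for f in findings if f.severity == "critical"]
        if criticals:
            messages.append(f"Blocked: {len(criticals)} critical finding(s) must be fixed first.")
            return False, messages
    return True, messages

# Example: the same findings warn at commit time but block the production deploy.
findings = [Finding("sql-injection", "critical"), Finding("weak-hash", "medium")]
print(evaluate_gate(findings, "commit"))             # allowed, with warnings
print(evaluate_gate(findings, "production-deploy"))  # blocked on the critical
```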

Harshil: Right. So do you guys build something in-house to implement these types of warnings? Because it sounds like they are very custom to your environment.

Chaitanya: Correct. So we have something in-house built at CK which we use to provide these kinds of warnings. And for some of the security standards and practices where we have attained that confidence in the detection model, we are slowly starting to enforce it at the code level too.

Harshil: Interesting. Yeah, because we had always struggled, and I see a lot of people struggle with this aspect of… I'll give you a very simple example. When developers are building their own Docker containers, you want to enforce a control that all of the images should be based on an approved image that's sourced and stored in your organization's registry. But enforcing that control and monitoring for it in the pipeline itself is kind of tricky. It's not very simple to do, especially when the team that's building that control, or interested in enforcing it, is a security team that potentially doesn't even have visibility into all the different pipelines the Dev teams are working on.
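
As one hedged illustration of how such a check could be expressed in a pipeline, the sketch below scans Dockerfiles for `FROM` lines that don't reference an approved registry. The registry URL is a placeholder, and the policy details (multi-stage build aliases, allow-lists) are deliberately simplified.

```python
import re
import sys
from pathlib import Path

# Placeholder for your organization's approved internal registry.
APPROVED_REGISTRY = "registry.internal.example.com/approved/"

def check_base_images(dockerfile: Path) -> list[str]:
    """Return a violation for every FROM line not pulled from the approved registry."""
    violations = []
    for lineno, line in enumerate(dockerfile.read_text().splitlines(), start=1):
        match = re.match(r"^\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if match and not match.group(1).startswith(APPROVED_REGISTRY):
            violations.append(
                f"{dockerfile}:{lineno}: base image '{match.group(1)}' is not from the approved registry"
            )
    return violations

if __name__ == "__main__":
    # Typical CI usage: fail the pipeline step when any Dockerfile violates the policy.
    problems = [v for df in Path(".").rglob("Dockerfile*") for v in check_base_images(df)]
    print("\n".join(problems) or "All base images come from the approved registry.")
    sys.exit(1 if problems else 0)
```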

Chaitanya: Yeah. So I actually call this more of a next-gen DevSecOps. We already had the known term DevSecOps, where people knew that security needs to start working with the Ops team to move away from blocker mode to facilitator mode, where we run at the same pace as developers. But next-gen DevSecOps is more about creating a horizontal layer of all the security vulnerabilities you have been capturing during the life cycle and, at various points in your CI/CD, creating gates. It could be at the image provisioning level, it could be during merging, it could be just before you get into experimentation mode. This is how you engage in the whole pipeline. Security is not just one step in the CI/CD pipeline; it acts as a horizontal layer you might be pulled into at various stages. Maybe at the provisioning level you need details from your container scanning, maybe at merging branches all you need is static code details, maybe before you get into experimentation mode you need details from dynamic scanning. This is how you create multiple gates, where security is not just one step but multiple steps in the whole development and deployment life cycle.
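
One way to read that "horizontal layer with multiple gates" idea is a simple registry that maps pipeline stages to the security checks consulted at each one. The stage names and check functions below are illustrative stand-ins, not any particular CI system's API.

```python
from typing import Callable, Dict, List

# Illustrative stand-ins for real scanners; each returns True when the check passes.
def container_scan() -> bool:
    return True  # image provisioning: scan the container image

def static_code_scan() -> bool:
    return True  # merge time: static analysis of the changed code

def dynamic_scan() -> bool:
    return True  # pre-experimentation: dynamic scan against a running build

# Security as a horizontal layer: each CI/CD stage consults the checks relevant to it.
GATES: Dict[str, List[Callable[[], bool]]] = {
    "image-provisioning": [container_scan],
    "merge": [static_code_scan],
    "pre-experimentation": [static_code_scan, dynamic_scan],
}

def run_gate(stage: str) -> bool:
    """Run every security check registered for a pipeline stage; True means the gate passes."""
    results = {check.__name__: check() for check in GATES.get(stage, [])}
    for name, passed in results.items():
        print(f"{stage}: {name} -> {'pass' if passed else 'fail'}")
    return all(results.values())

# Example: the same security layer is consulted at multiple points in the pipeline.
for stage in GATES:
    run_gate(stage)
```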

Harshil: Right. Yeah, that's interesting. I mean, I think that's where we need to go, because without having those controls or checkpoints in place, it's really, really difficult not just to get assurance of what is going through your pipeline, but even to help the developers understand what is expected of them. Because if you don't tell them at the right time and the right place within the CI pipeline, they just don't know. They're not going to go read a 15-page PDF document to understand security policies. They're not going to do it. So you have to communicate with them in their language. They understand what is needed to pass the CI gates, to pass the CI checks, so they get that. If we integrate security at those checkpoints, it becomes very native for them.

Chaitanya: Exactly.

Harshil: Now there’s also… I know you've spent a lot of time around things like threat modeling and risk reviews, and all of those things as well. Those are traditionally very manual processes, very manual ways for AppSec teams to conduct assessments. Have you guys figured out any interesting ways to streamline them and adapt them to agile development practices?

Chaitanya: Yeah, absolutely. So back when I was working at eBay, we had developed our own security framework for threat modeling in the agile world. This was also one of the talks I presented back in 2019 at one of the conferences, on how every organization can move to that model where you build your own framework and get to that level of automation. How can you do that? With a simple chatbot solution that can address at least the low-hanging 80% of the threat assessment and determine whether it requires any further manual intervention. So you don't slow down the development team if you feel they are doing everything right. You actually create a happy path for secure design, and if they are following all the right practices, there is no reason to slow them down. Another benefit comes in for organizations that have a managed stack and usually rely on their in-house frameworks for development: engaging with the framework team to bake security requirements into the framework also helps reduce the manual threat modeling steps. Because now, with security as code, all it's going to look for is whether you've done anything that would lose those security protections. If you haven't, there's no reason to slow you down and pull you into a two-week-long review process.
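
To give a flavor of how such a chatbot-style triage can auto-approve the happy path, here is a minimal sketch of a questionnaire-driven check. The questions and routing rules are invented for illustration; they are not eBay's or Credit Karma's actual framework.

```python
# Questionnaire-driven threat-model triage (sketch): happy-path designs are
# auto-approved, anything riskier is routed to a manual review.
QUESTIONS = {
    "handles_pii": "Does the feature store or process member PII?",
    "new_external_endpoint": "Does it expose a new externally reachable endpoint?",
    "uses_approved_framework": "Is it built entirely on the approved in-house framework?",
    "new_third_party_dependency": "Does it introduce a new third-party service or dependency?",
}

def triage(answers: dict[str, bool]) -> str:
    """Return 'auto-approve' for happy-path designs, else 'manual-review'."""
    risky = (
        answers.get("handles_pii", False)
        or answers.get("new_external_endpoint", False)
        or answers.get("new_third_party_dependency", False)
        or not answers.get("uses_approved_framework", False)
    )
    return "manual-review" if risky else "auto-approve"

# Example: a feature fully on the paved road sails through without a manual review.
print(triage({
    "handles_pii": False,
    "new_external_endpoint": False,
    "uses_approved_framework": True,
    "new_third_party_dependency": False,
}))  # -> auto-approve
```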

Harshil: Right. Yeah, and there's an argument around it, which I mean, you can potentially reframe this as saying that the fastest path is the secure path where security controls are already built-in. If you don't follow the standard pattern, then you have to go through all these other checks and balances, which could potentially take more time. So it's just easier for everyone, for you to just adopt the standard framework.

Chaitanya: Exactly.

Harshil: Yeah, just a few weeks ago, I had Travis McPeak on the podcast. He was talking about similar things, where they use Spinnaker for deployments and the security team built controls into Spinnaker itself. So before you deploy, your security guardrails and checkpoints are built into Spinnaker and it's just native for everyone; they just go through it. You don't have to chase down different Dev teams asking them to fix things or adopt security controls. It's just built into the deployment pipelines. It's just native.

Chaitanya: Exactly.

Harshil: Chaitanya, we talked about a lot of different things. We covered several different aspects, from how to engage with developers and your engagement models to building guardrails into pipelines and things like that. I'm sure your AppSec team does a lot more than this as well. If you were to categorize the core pillars or functions of your AppSec team, how would you describe those key functions?

Chaitanya: So at a very high level, I have “bucketized” the various functions and verticals of AppSec, and what my team does, into five major boxes. One is security training and governance, where the team gets involved in writing a lot of standards, patterns, policies, and training material. Then there's the consultation piece, where we do a lot of design review work. There's code security, which is where our major focus has been, because at the end of the day a lot of these standards and policies are enforced, or at least validated, and all the design review recommendations are validated, at the code level. That's why code security is one of our most important tracks, where a lot of emphasis has been put into creating in-house scanners and in-house monitoring, and we've been partnering a lot with GitHub and our Ops team to create checks at various levels of the software development lifecycle. Then we have dynamic security, which is more focused on dynamic scanning and making sure runtime pen testing and all those things are done internally. And the last box is runtime security, which focuses on real-time detection and anything that's identified post-deployment.

Harshil: Yeah, that sounds like a very comprehensive AppSec program. I'm sure it took your team a long time to get there. Let's just say hypothetically, if our audience member is listening to this and they are just starting to build out an AppSec program from scratch, let's say there's almost nothing. What do you suggest as the sequencing? Out of these five buckets, how would you sequence them in order of priority? And you're looking at an almost brand new program.

Chaitanya: I feel security governance and training comes first. For someone who's just starting the program, introducing this to an organization first requires more awareness. Having those policies, standards, and governance in place, people need to know what the happy path is. So educating everyone on that happy path, bringing that cultural change of a security-first mindset, that “security is everyone's responsibility” kind of awareness model, takes time. So my recommendation is to start with a stronger Security Champions program, where one excellent security trainer can replicate themselves into thousands of Security Champions who can help you run faster and embed yourselves much faster, and who can represent AppSec on their own teams. That would be my first focus. And as I said, the second most important thing, which I've been focusing a lot on, is the code security track, where you start identifying these patterns at the code level and start detecting or validating a lot of your recommendations. We recommend a lot of things during consultation phases, but how do we validate whether they are being followed, or how do we even know if the right practices are being followed or not? So, not to block anyone, but at least invest in detection and visibility first when you start a program, and then move towards enforcing and remediating. Having that stronger layer of detection and visibility is going to help you improve and grow your program and understand the various gaps.

Harshil: That's amazing. Those are really good recommendations. It speaks to your experience doing this several times over and over again. Chaitanya, this is all the time we have in today's episode. This has been a fantastic conversation. Thank you so much for being a part of this.

Chaitanya: Thanks for having me, Harshil. It's really awesome talking to you and sharing experiences.
