‘Indian developers exceptional, but talent shortage is structural’
In an interview, Mike Hanley, chief security officer and senior vice-president of engineering at GitHub, spoke about how the shortage of cybersecurity talent remains acute in India and globally, and what the platform is doing to tackle growing cyber attacks

NEW DELHI: India is home to the second-largest developer base for GitHub, the world’s largest platform for storing, hosting and sharing code. Data shared exclusively with Mint by GitHub revealed that India has over 11.4 million individual developers on the platform, while over 440,000 Indian companies also host and share their code through it. All this has contributed to nearly 30 million code repositories on GitHub by Indian users. In an interview, Mike Hanley, chief security officer and senior vice-president of engineering at GitHub, spoke about how, despite these figures and the advent of generative AI, the shortage of cybersecurity talent remains acute in India and globally, what the platform is doing to tackle growing cyber attacks, and why training professionals is not the only way to address the talent shortage. Edited excerpts:
What is GitHub doing in terms of its contribution to the cybersecurity developer community?
We’re trying to make sure that developers everywhere, with an emphasis on open source, achieve better security outcomes with us. To do this, we provide free educational resources and training on security. Our security lab spends a lot of time finding vulnerabilities in open-source software and then partnering with the communities that build that software to improve it or resolve any bugs. We’re also closely associated with the Open Source Security Foundation (OpenSSF).
Then, we’re further improving the security of our own platform—we’ll require everyone contributing code on GitHub to use two-factor authentication, one step towards increasing the security of the overall ecosystem.
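For context on the mechanism: most second factors on GitHub and elsewhere are time-based one-time passwords from an authenticator app, computed per RFC 6238. Below is a minimal illustrative sketch in Python, not GitHub’s implementation; the base32 secret is a well-known demo placeholder, not a real credential.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        # RFC 6238: HMAC-SHA1 over the number of elapsed time steps,
        # then "dynamic truncation" down to a short numeric code.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Demo secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code depends on the current 30-second window, a stolen password alone is no longer enough to take over an account—which is why requiring it platform-wide raises the security of the whole ecosystem.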
Despite such contributions, major geographies such as India and the US have a significant cybersecurity talent shortage. Why is this so?
India has a massive developer community—the largest for GitHub outside the US, with 11.4 million developers. However, the vast majority of them are not security experts. This talent shortage is a major challenge because these developers are building open-source software, and everything depends on that software—from smartphones to cars and even smart coffee makers.
Through GitHub, we’re trying to improve this by giving developers the right security experiences and equipping them with the right educational resources and sponsorships. These resources may be ones we’ve created ourselves or ones made by others. In terms of tools, we’re looking to make sure that GitHub’s tools are built in a way that lets developers meet good security standards without being security experts. Our developer products have advanced capabilities such as security code scanning, which ensures that a developer doesn’t need to be a security expert to write secure code. We’re trying to design this for every developer who interacts with these products. But there’s nothing we can do to magically manifest interest in cybersecurity, or additional people.
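To illustrate the kind of flaw code scanning catches—this is a generic sketch with hypothetical function names, not GitHub’s actual scanner output—consider the classic SQL-injection pattern and its fix:

    import sqlite3

    def find_user_unsafe(conn, username):
        # Flagged pattern: string interpolation lets a crafted username
        # rewrite the query (SQL injection).
        return conn.execute(f"SELECT id FROM users WHERE name = '{username}'")

    def find_user_safe(conn, username):
        # Recommended fix: a parameterised query, which the driver escapes.
        return conn.execute("SELECT id FROM users WHERE name = ?", (username,))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(find_user_safe(conn, "alice").fetchall())  # [(1,)]

A scanner flags the first function and suggests the second, so the developer gets the fix without needing to know injection techniques in depth.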
Are Indian developers interested in the security of code?
India is a massive market, and the developer talent here is exceptional too. There are also exceptional security practitioners across India. The level of interest among Indian developers in cybersecurity, and the talent shortage relative to demand, is more or less similar to the US. This is structural, because the cybersecurity landscape is so dynamic, and there are so many big challenges over time.
What India and other geographies need to think about is that we’re probably not going to solve the talent shortage through training alone. This is where AI, public-private partnerships, and open-source security foundations can help. Broader solutions that pool the resources of the public and private sectors to work through some of these challenges will be important. We’re not going to train our way out of a talent shortage of this magnitude.
So, will AI be the answer?
AI, I believe, will bring a fundamental transformation in the prevention of software vulnerabilities in code. In terms of the talent shortage, the situation is tough primarily because, most of the time, we don’t have the people to find bugs. The vast majority of software defects are written and then persist for years before we run into them. For instance, Log4j, one of the most notorious cybersecurity incidents of late, was there for almost two years before it struck.
If the problem is not having enough people to find and fix vulnerabilities, then AI is going to help us prevent vulnerabilities from ever being written in the first place. Typically, developers get security feedback a little before or after building an app. With AI, we’re talking about security feedback happening at the time of writing the code. That is a massive shift.
Breaking things down within AI itself, what impact do you believe generative AI will have on cybersecurity?
In February this year, we introduced a capability in Copilot’s underlying AI models to emulate static analysis tools—a fairly traditional class of security tool that every developer would have. By emulating those tools, we’re able to identify vulnerable code patterns and improve the models’ code suggestions over time. This helps developers stay focused on their core work without needing to be security experts. That’s how generative AI will aid cybersecurity.
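To make “emulating static analysis” concrete: a traditional static analyser parses source code and flags known-risky patterns before the code ever runs. The toy checker below is a sketch of that idea using Python’s standard-library ast module—an illustration of the general technique, not Copilot’s internals.

    import ast

    DANGEROUS_CALLS = {"eval", "exec"}  # patterns a real analyser might flag

    def find_risky_calls(source: str) -> list:
        # Walk the syntax tree and report calls to known-dangerous builtins,
        # a miniature version of the pattern matching static analysers do.
        findings = []
        for node in ast.walk(ast.parse(source)):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in DANGEROUS_CALLS):
                findings.append((node.lineno, node.func.id))
        return findings

    sample = "user_input = input()\nresult = eval(user_input)\n"
    print(find_risky_calls(sample))  # [(2, 'eval')]

An AI model that has learned such patterns can steer its suggestions away from them as the code is being written, rather than flagging them in a later scan.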
But can all of this be used by attackers to build capability at their end as well?
That’s a fair question. We have a malware and exploits policy on GitHub, wherein we recognize that a lot of security research tooling can be dual-use. In fact, security professionals will tell you that offensive toolkits can actually make you a better defender and, in many cases, are used to help train or simulate cyber defences.
The challenge is that you can’t necessarily infer intent just from a piece of code existing. The intent is up to the user. Obviously, our policy doesn’t allow using code to facilitate an attack, but we do understand that a lot of code can be dual-use.
As for the generative AI platform, Copilot is getting better at filtering out code suggestions that are not secure, though it is still early days. We’re improving the quality of the suggestions all the time, but it’s important to remember that the models are trained on code written by humans. While code suggestions from AI are going to be better than what you get from an average developer, the models are still trained on code that contains bugs—because humans write bugs quite literally for a living.
These are things that we’ll continue to work on over time. As for whether someone could actually use it to write malicious code—that’s where AI safety concerns kick in. To address this, we’re working closely with Microsoft and OpenAI to figure out what the right guard rails are.
