AI is taking over increasing chunks of developers' and security pros' work, whether the pros in question realize it or not.
Kicking off the RSA conference in San Francisco this week, RSA CEO Rohit Ghai said AI systems will play a massive role in next-generation security platforms.
But he said AI was already highly prevalent in existing products, with 10 major vendors and over 50 startups announcing cybersecurity products based on the technology.
It’s just that: “So as not to spook us humans, most of these capabilities have been positioned as a co-pilot model, where the human is doing the same job assisted by AI.”
But, he continued, this simply "sugarcoats the scary truth" that "many jobs will disappear, many will change and some will be created." Which is perhaps a good thing for cybersecurity, "because we don't have enough human talent."
That said, research from GitLab released last week suggests that developers are happily adopting AI despite misgivings about its possible effect on their future employment prospects. Just shy of two-thirds of developers are using AI and machine learning in testing, or plan to within the next three years, the numbers showed.
Of those already putting AI to work, 62 percent are using it to check code, up from around half of respondents a year ago, while 53 percent use bots for testing, and 36 percent use AI/ML for code review.
Two-thirds of security respondents were concerned about the effect of the technology on their jobs, with 28 percent saying they were very concerned. A quarter of those who expressed misgivings zeroed in on AI's potential to introduce errors that will make their jobs more difficult.
GitLab used this week's RSA conference to highlight AI-assisted vulnerability mitigation guidance that analyzes developers' code alongside vulnerability information to suggest fixes. It also debuted a license compliance scanner that extracts information from 500-plus license types to ease compliance and reduce the potential for remediation work if an error occurs. The vendor announced AI-powered code suggestions back in February.
GitLab CPO David DeSanto told us, “As we look at the longer path, I always ask what is the most natural way for somebody to interact with something and we, as humans, gravitate around speech and hearing and that sort of conversational way to learn.”
The debut of ChatGPT has made people more comfortable engaging with AI, he said, but GitLab won’t be piggybacking on it or Bard.
He said: "GitLab is already working on what would be our chatbot inside the product. And it should be out later this year." The main brake on this is simply that GitLab is in the process of changing its UI.
“The bot is trained on our local data, and it’s using a localized model. We didn’t build the model, but we trained the model.” The training data includes GitLab’s handbooks and documentation.
Initial use cases would be asking things like “how do I configure a compliant pipeline?” However, he said, further down the line, “I can see having interaction inside a chatbot like, ‘Hey, explain this code to me, or can you summarize all the comments on this issue’.”