
AI accounts for over half of the code produced in some organizations, but much of it is going
into production with little to no oversight.
According to Cloudsmith’s Artifact Management Report 2025, “AI is now writing code at
scale.” Of those developers using AI, 42 percent said at least half their code is AI generated.
Breaking those numbers down further, 16.6 percent attributed the majority of their code to AI,
and 3.6 percent said all their code was machine generated.
The report didn’t break down how many developers were using AI to generate code.
However, last year a GitHub report spanning the US, Brazil, Germany and India found
that “More than 97 percent of respondents reported having used AI coding tools at work at some
point.” The proportion reporting “at least some company support” for AI code generation tools
ranged from 88 percent in the US to 59 percent in Germany.
But Cloudsmith warned, “While LLMs boost productivity by generating code quickly, they can
inadvertently introduce risks by recommending non-existent or malicious packages.”
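That risk of hallucinated dependencies can be caught early with a pre-install sanity check. The sketch below is illustrative rather than anything the report prescribes: it assumes dependencies live in a plain requirements.txt and uses PyPI’s public JSON API to flag package names that don’t resolve at all (a typosquat that genuinely exists on the registry would still pass).

```python
# Minimal sketch: verify that every dependency named in requirements.txt
# actually resolves on PyPI before installing, to catch non-existent
# (hallucinated) package names suggested by an LLM. Illustrative only;
# assumes plain "name==version" style lines.
import re
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if the package name resolves on PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means the package does not exist

def check_requirements(path: str = "requirements.txt") -> list[str]:
    """Return the requirement names that could not be found on PyPI."""
    missing = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments and pip options such as "-r other.txt".
            if not line or line.startswith(("#", "-")):
                continue
            # Take the bare package name before any version specifier.
            name = re.split(r"[<>=!~\[; ]", line, maxsplit=1)[0]
            if name and not package_exists(name):
                missing.append(name)
    return missing

if __name__ == "__main__":
    unknown = check_requirements()
    if unknown:
        raise SystemExit(f"Unknown packages (possible hallucinations): {unknown}")
    print("All requirements resolve on PyPI.")
```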
And developers are acutely aware of this. Asked whether AI will exacerbate open source malware
threats (e.g. typosquatting, dependency confusion), 79.2 percent said AI would increase
the amount of malware in their environments, with 30 percent saying it would significantly
increase malware exposure.
Just 13 percent believed AI would “prevent or reduce threats”. And 40 percent said code
generation was the point at which AI input posed the greatest risk.
Yet a third of developers did not review AI-generated code before every deployment,
meaning “large portions” went unvetted, presenting a growing vulnerability in the supply
chain. While two thirds said they trust AI-generated code only after manual review, the
question is whether that share will hold as AI accounts for an ever larger slice of the
world’s ever expanding code base.
This meant AI was introducing new risks “often at scale”, while “Traditional concerns like
artifact integrity, dependency management, and SBOMs (Software Bill of Materials) are
being compounded by AI’s ability to rapidly consume and reuse unknown or untrusted code.”
This represented an inflection point, Cloudsmith argued, with AI becoming a key contributor
to the software stack, while trust models, tooling and policies had yet to catch up. And
relying on humans to review code was not sustainable.
Naturally, Cloudsmith advocates improved artifact management, with intelligent access
controls and end-to-end visibility, as well as dynamic access control policies and robust
policy-as-code frameworks.
When it comes to AI code specifically, it flagged automatically enforced policies to spot
unreviewed or untrusted AI artifacts, and provenance tracking to separate human-written from
AI-generated code. And trust signals need to be integrated directly into the development
pipeline, with reviews no longer just optional.
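As a rough illustration of what such an automatically enforced policy could look like, the sketch below gates promotion on provenance metadata. The JSON layout and field names (generator, human_reviewed) are hypothetical rather than Cloudsmith’s schema; the point is simply that an AI-generated artifact without a recorded human review fails the pipeline step.

```python
# Minimal sketch of an automatically enforced policy gate in a CI step.
# The provenance record format and field names here are hypothetical, not
# Cloudsmith's: each artifact is assumed to ship with metadata noting whether
# it was AI-generated and whether a human has reviewed it.
import json
import sys
from dataclasses import dataclass

@dataclass
class Provenance:
    artifact: str         # e.g. "service-api-1.4.2.tar.gz"
    generator: str        # "human" or "ai" (hypothetical label)
    human_reviewed: bool  # True once a named reviewer has signed off

def load_provenance(path: str) -> list[Provenance]:
    """Read a list of provenance records from a JSON file."""
    with open(path) as fh:
        return [Provenance(**record) for record in json.load(fh)]

def violations(records: list[Provenance]) -> list[str]:
    """Policy: AI-generated artifacts must be human-reviewed before promotion."""
    return [
        r.artifact
        for r in records
        if r.generator == "ai" and not r.human_reviewed
    ]

if __name__ == "__main__":
    failed = violations(load_provenance("provenance.json"))
    if failed:
        print(f"Blocking promotion; unreviewed AI-generated artifacts: {failed}")
        sys.exit(1)
    print("All artifacts satisfy the review policy.")
```

In practice this kind of check would sit alongside signing and SBOM generation in the pipeline, so the trust signal travels with the artifact rather than living in a reviewer’s head.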