We take a look at Google’s Antigravity: Agentic AI development but some frustrations for early adopters


Google has previewed Antigravity, a new IDE forked from the open source base of Visual Studio Code, described as an agentic development platform – but early adopters have expressed frustrations with credits soon running out, and our quick tests soon ran aground because of “model provider overload.”

Antigravity is designed for software developers using an AI-driven approach, where most of the coding and design work is done by AI. The developer’s task is to prompt, refine and verify the agent’s output. 

A distinctive feature of Antigravity is its agent manager window, which is the primary place for interacting with agents. This is one of what the team describes as surfaces. “There are three main surfaces on which you can get your work done,” said Google engineer Kevin Hou in an introductory video, these being the agent manager, the code editor, and the Chrome web browser, automated by Antigravity.

Agent interaction is also possible in the editor, via a sidebar or from within the editor itself. The docs state that the in-editor AI features “do not get nearly as much use as the agent.”

Agents generate artifacts, which can be any sort of output including markdown files, architecture diagrams, images, browser recordings and more. One example of an artifact is an implementation plan, which details proposals for code changes or even for a complete application, to be reviewed and amended by the developer before the changes are actioned.

Balancing security and productivity with settings that determine how much autonomy the agent is allowed

The extent of human review is determined by settings, which control whether every action has to be approved, or whether the agent itself can decide what needs approval. The default is agent-assisted development, where, Hou said, the LLM (large language model) will “automatically decide if something is worth our attention.”

Frontend design includes access to Nano Banana, an image generation model developed by Google and released in August. Browser automation includes the ability for the agent to try out interactions such as completing input fields and reviewing the results, with screenshots that are presented to the developer.

Another feature highlighted by Google is the ability to parallelize work, instructing an agent to do some background research, for example, while also working on the application.

Pricing is not yet available, though a Team plan and Enterprise plan are tagged as coming soon. Individuals can currently use Antigravity for free, subject to a rate limit which refreshes every five hours. The exact way the limit is calculated is not specified, other than that it is based on the work done by the agent rather than the number of prompts. The LLMs on offer are Google Gemini 3, Anthropic Claude Sonnet 4.5 or OpenAI GPT-OSS.

Early adopters have hit issues with credit exhaustion or provider overload

We tried Antigravity but soon hit issues, with a message stating “Agent taking unexpectedly long to load.” The agent manager seems happy to display its busy icon forever, but clicking a button to go to the editor revealed a further message, “Agent terminated due to error,” the error being “model provider overload.” We were asked to try again later.

Others have found themselves soon running out of credits. “I start using it for my project and after about 20 mins – oh, no. Out of credits … I switched back to cursor,” said a comment on Hacker News. There is no mechanism currently for buying extra credits. AI processing is expensive, and once the product comes out of preview this may be a pricey tool to run in a development team.

Antigravity is an incremental step towards providing tools for AI-driven development, and views on its value will vary according to whether the notion of software development evolving into the orchestration of agents is embraced or resisted. Security is an issue, and the terms of use warn that “Antigravity is known to have certain security limitations.” Risks identified include data exfiltration and (presumably malicious) code execution. The terms advise avoiding processing sensitive data and verifying all actions taken by the agent – though one would have thought that if Google were serious about this, it would not have product defaults that give the agents substantial autonomy.

Risks can be mitigated by using Antigravity in a sandboxed environment and increasing the amount of human review. One of the difficulties though is that the AI may write code or take actions that some developers do not fully understand, making human verification challenging. Prompt injection, where the AI draws on resources crafted to instruct the agent to perform malicious actions, is another risk.

Forking VS Code can also be a source of friction, particularly with extensions which for Antigravity come from the Open VSX registry, rather than from the better-supported Visual Studio Code marketplace.

Some developers feel that Microsoft is falling behind on agentic development, making forks like Antigravity necessary. “If we’d have to wait for Microsoft to innovate on the Agent interaction, the ecosystem would move pretty slowly,” said one Reddit comment.

Then again, moving more slowly may be exactly what the AI ecosystem needs, considering the number of unsolved issues around security and reliability.