As of November 18, 2025, Gemini 3 is officially live, and Google has wired its most intelligent model directly into Gemini CLI, an open-source, agentic AI assistant that runs in your terminal. This isn’t just “ChatGPT in the shell.” With Gemini 3 Pro, the CLI can now plan multi-step workflows, call tools like the file system and shell safely, and keep enough context to refactor large codebases or debug distributed services. In this guide, you’ll learn how to use Gemini 3’s new agentic CLI capabilities to automate real dev workflows: from greenfield “vibe coding” to complex debugging, documentation generation, and cloud operations – all without leaving your terminal.
News or evergreen: what this article is
This piece is evergreen content with news-aware details. Gemini 3 and the Gemini CLI integration are brand-new (announced November 18, 2025), but the focus here is a hands-on tutorial you can reuse long after launch. Expect a step-by-step guide, practical workflows, and example commands rather than a short news brief.
What’s new: Gemini 3 + Gemini CLI at a glance
According to Google’s November 18, 2025 developer update, “Start building with Gemini 3”, Gemini 3 Pro is their most capable model yet, with state-of-the-art reasoning and agentic coding. It’s available via the Gemini 3 API in Google AI Studio and Vertex AI, and it’s now wired into Gemini CLI as a first-class coding agent.
Key facts as of November 18, 2025:
- Gemini 3 Pro is live in preview for developers via the Gemini API (Google AI Studio / Vertex AI) and integrated into Gemini CLI.
- Gemini CLI is an open-source terminal agent (client + local server) that talks to the Gemini API and exposes tools (file system, shell, web fetch, web search, MCP servers, etc.).
- Gemini 3 Pro in Gemini CLI launched with v0.16.x of the CLI (the GitHub release `v0.16.0` marks "launch Gemini 3 in Gemini CLI").
- Gemini 3 Pro is available in Gemini CLI to Google AI Ultra subscribers and paid Gemini API users now; Gemini Code Assist Enterprise and other tiers can join a waitlist (per Google Developers Blog, Nov 18, 2025).
- Pricing for Gemini 3 Pro preview via the Gemini 3 API is currently listed at $2/million input tokens and $12/million output tokens for prompts ≤ 200k tokens in Google’s official pricing docs.
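To put the preview pricing in perspective: a single agentic session that consumes, say, 150k input tokens and produces 20k output tokens would cost roughly 0.15 × $2 + 0.02 × $12 ≈ $0.54 at those rates, so long multi-step workflows are affordable but not free.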
| Component | Current status (as of Nov 18, 2025) |
|---|---|
| Gemini 3 Pro model | Preview in Gemini API, AI Studio, Vertex AI; integrated into Gemini CLI and Google Antigravity |
| Gemini CLI | Open-source terminal agent; Gemini 3 support launched in v0.16.0 (stable) with 0.17.x nightlies |
| Access in CLI | Enabled for Google AI Ultra + paid Gemini API key; waitlist for other plans |
The rest of this guide assumes you want to use Gemini 3’s agentic capabilities directly from your terminal to speed up real-world development work.
Understanding Gemini CLI as an agentic dev environment
Gemini CLI is more than a simple “chat in terminal” client. The official docs describe it as a client/server system:
- Client (`packages/cli`): the REPL/UI that runs in your terminal.
- Core server (`packages/core`): a local process that manages:
  - Connections to the Gemini 3 API
  - Tool execution: file system, `run_shell_command`, web fetching, Google web search, memory, todos, MCP servers, etc.
  - Policy engine and "trusted folders" to control what the agent can touch on disk.
When people call Gemini CLI “agentic,” they’re referring to this pattern:
- You describe an objective in natural language (e.g., “migrate our auth middleware to OAuth 2.1”).
- Gemini 3 Pro creates a multi-step plan.
- It chooses appropriate tools (read/write files, run shell commands, call MCP servers, hit HTTP APIs) and executes them under policy constraints.
- It tracks state across steps (checkpointing, memory, code context) until the task is complete or you intervene.

Because the CLI acts as a structured agent runner for the Gemini 3 API, it’s one of the fastest ways to get hands-on with Gemini 3’s reasoning and tool-use without building your own orchestration layer.
Installing and upgrading Gemini CLI for Gemini 3
Prerequisites
- Node.js (LTS, e.g., 20.x) and npm installed.
- A Gemini API key with Gemini 3 Pro preview access, or a Google AI Ultra subscription tied to your Google account.
- A modern terminal (VS Code integrated terminal, iTerm2, Ghostty, etc.) for best UX with the interactive shell.
Install or update to Gemini CLI 0.16.x+
The official npm package is `@google/gemini-cli`. As of mid-November 2025, the stable release that introduced Gemini 3 integration is v0.16.0, with newer 0.16.x patch releases and 0.17.x nightlies in progress.
```bash
# Install globally (recommended)
npm install -g @google/gemini-cli@latest

# Verify version
gemini --version
# Expect: 0.16.x or newer
```

If you prefer not to install globally, you can use npx:

```bash
npx @google/gemini-cli@latest
```

Authenticate with the Gemini 3 API
You can authenticate Gemini CLI using either:
- Gemini API key from Google AI Studio (simplest; see the export example below this list).
- Google Cloud / Gemini Code Assist credentials (for enterprise).
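If you go the API-key route, the CLI can also pick the key up from your environment before you run any auth command. A minimal sketch, assuming the `GEMINI_API_KEY` environment variable described in the auth docs:

```bash
# Export your AI Studio key so Gemini CLI can detect it on startup
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"

# Launch the CLI; it should pick up the key automatically
gemini
```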
Run the CLI and follow the authentication prompts:
```bash
gemini
```

Or use the dedicated auth command if provided in your version (check Gemini CLI auth docs):

```bash
# Example pattern – exact flags may differ slightly by release
gemini auth login
# or
gemini auth set-api-key <YOUR_GEMINI_API_KEY>
```

Enable Gemini 3 Pro in the CLI
Per the November 18, 2025 Google Developers blog (“5 things to try with Gemini 3 Pro in Gemini CLI”), once you’re on v0.16.x and have appropriate access:
- Launch Gemini CLI: `gemini`
- Run the in-CLI command: `/settings`
- Toggle Preview features to true.
- The CLI will now default to Gemini 3 Pro as your base model.
If you’re on a plan that isn’t yet enabled, join the waitlist linked in the blog post and watch the GitHub discussions for rollout updates.
Core workflows: using Gemini 3’s agentic CLI in daily dev
Once set up, the real power of Gemini 3’s agentic workflows in Gemini CLI shows up in repeatable development tasks. Below are four patterns you can adopt immediately.
1. Vibe coding: scaffold full apps from natural language
Gemini 3 Pro is optimized for “vibe coding” – you describe the outcome (look, feel, tech stack), and the agent plans and generates a multi-file project, not just a single script.
From a project folder where Gemini CLI has trust to read/write files:
```
# In your project directory
gemini

# Then, in the REPL:
Create a new subfolder called `landing-3d` and scaffold
a production-ready Three.js landing page with a reactive
UI for a SaaS product "Orbital Metrics". I want:
- A single HTML entry point: index.html
- A JS module bootstrapping Three.js with a minimal but
  impressive 3D orbit animation in the hero background
- Tailwind CSS via CDN for layout and typography
- A responsive layout with a hero, feature grid, and pricing
- npm scripts for `dev` (using a simple static file server)
  and `build` (using esbuild), in a package.json

Explain exactly how to run it locally once you finish.
```

Gemini 3 should:
- Create the folder
- Write `index.html`, a JS module, Tailwind setup, and `package.json`
- Optionally run `npm install` or advise you to do so
- Summarize how to run `npm run dev`

To keep this sustainable, save common “vibes” as custom slash commands (documented in Google’s July 30, 2025 blog on Gemini CLI custom commands) so teammates can reuse standardized scaffolds.
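For example, a project-level command file along the lines of the sketch below could wrap the scaffold prompt as `/scaffold:landing`. The `.gemini/commands/` location, TOML fields, and `{{args}}` placeholder follow the custom-commands docs, but treat the exact schema and the command name here as assumptions to verify against your CLI version:

```toml
# .gemini/commands/scaffold/landing.toml (hypothetical example)
# Invoked in the REPL as: /scaffold:landing Orbital Metrics

description = "Scaffold a Three.js + Tailwind landing page for a product"

prompt = """
Create a new subfolder and scaffold a production-ready Three.js landing
page with Tailwind via CDN, a hero with a 3D orbit animation, a feature
grid, a pricing section, and npm scripts for dev/build.
Product name: {{args}}
Explain how to run it locally when you finish.
"""
```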
2. Multimodal UI building from sketches
Gemini 3 Pro brings stronger multimodal understanding into the CLI. You can feed it design sketches and have it generate production-ready UIs.
- Draw a UI on paper or a whiteboard.
- Take a clear photo and save it as `dashboard-sketch.png` in your project folder.
- Run Gemini CLI and reference the image by path:
```
gemini

Build a responsive React + Tailwind admin dashboard
for "Nebula Control". Use my sketch as the visual reference:
@./dashboard-sketch.png

Requirements:
- Layout with sidebar nav, top bar, and main content cards
- Dark theme with accent gradients
- Data cards for "Active clusters", "CPU usage", and "Alerts"
- Use Vite for dev tooling, configure everything via npm scripts
- Create all necessary files under `nebula-dashboard/` and
  then run the appropriate npm commands to install deps.

After you're done, summarize the file structure and how
to run it locally.
```

The agent will read the image, infer layout, and generate code accordingly. This is a fast way to go from sketch → running prototype without opening a GUI tool.
3. Complex shell automation via natural language
Gemini CLI exposes a shell tool (run_shell_command) that Gemini 3 can call as part of an agentic workflow. With Gemini 3’s improved reasoning, you can delegate gnarly shell sequences.
Example: letting Gemini 3 drive git bisect to find a regression:
```
gemini

At some point in this repo, we lost the commit that set
the default theme to dark mode for the web app.
Using git bisect and any other necessary shell commands,
find the exact commit hash that introduced the regression.
Explain your reasoning along the way and paste the final hash
as a code block at the end.
```

The agent should:
- Init `git bisect` between a known good and bad commit (it may ask you for these).
- Run tests or grep commands to detect when dark mode is enabled/disabled.
- Iterate until it finds the culprit.
- Report the final hash and how it determined “good” vs “bad.”
“Gemini 3 Pro scores 54.2% on Terminal-Bench 2.0, a benchmark for tool use in the terminal, reflecting its ability to plan and execute complex command-line workflows.”
Google, “Start building with Gemini 3,” Nov 18, 2025
For safety, use trusted folders and the policy engine to restrict destructive operations (see Trusted Folders in the docs) and require confirmation for high-risk commands like rm -rf or production deployments.
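As a concrete starting point, a project-level settings file (e.g., `.gemini/settings.json` in the repo) can narrow what the shell tool may run. The sketch below uses the `excludeTools` / `coreTools` pattern from the configuration docs, but key names have shifted across releases, so verify against the docs for your version before relying on it:

```json
{
  "excludeTools": [
    "run_shell_command(rm -rf)",
    "run_shell_command(gcloud run deploy)"
  ],
  "coreTools": [
    "read_file",
    "write_file",
    "run_shell_command(git)",
    "run_shell_command(npm)"
  ]
}
```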
4. Generating comprehensive documentation from real code
One of the most practical agentic workflows is “read this repo and produce user-facing docs.” The official Gemini CLI blog shows using Gemini 3 Pro to generate multi-section documentation for a CLI project by pointing it at the codebase.
From the root of a codebase, grant Gemini CLI permission to read the folder, then run:
```
gemini

This repository powers our internal deployment tool "AstraDeployer".
We have zero user-facing documentation.

Task:
1. Recursively read the codebase and understand:
   - CLI commands and flags
   - Config options
   - Authentication mechanisms
   - Supported providers (AWS, GCP, Kubernetes, etc.)
   - Any plugins or extension system
2. Produce a multi-page documentation set in Markdown:
   - 01-introduction.md
   - 02-installation.md
   - 03-usage.md
   - 04-configuration.md
   - 05-extensions.md
   - 06-contributing.md

Focus on user-facing behavior, but include a high-level
architecture overview and pointers for contributors.
Save all docs under a new `docs/` folder in this repo.

When done, show me a summary of each file.
```

Here Gemini 3 is:
- Reading files via the file-system tools.
- Building a mental model of the architecture.
- Producing consistent Markdown files in a single run.
Because Gemini 3’s agentic reasoning is tuned for multi-step planning, this type of “read → synthesize → write multiple outputs” workflow is far more reliable than older models that struggled with long-horizon tasks.
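The same workflow also runs non-interactively, which makes it easy to wire into CI. The sketch below assumes the `-p`/`--prompt` flag for one-shot mode; check `gemini --help` for the exact flags in your release:

```bash
# One-shot documentation pass from the repo root (verify flags for your version)
gemini -p "Read this repository and regenerate the Markdown docs under docs/, \
keeping the 01-introduction.md through 06-contributing.md structure. \
Summarize what changed at the end."
```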
5. Live cloud debugging with CLI extensions
Gemini CLI supports extensions that integrate third-party tools (Dynatrace, Elastic, Snyk, Cloud Run, etc.). With Gemini 3 Pro’s improved tool use, the agent can orchestrate complex multi-service debugging sessions.
Example scenario from Google’s November 18 blog post: debugging a performance issue in a Cloud Run service using CLI extensions to talk to observability and security tools.
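Before a session like this will work, the relevant extensions have to be installed and visible to the agent. The subcommands below are an assumption based on the extensions documentation, so confirm them with `gemini extensions --help`:

```bash
# List extensions currently available to the agent (subcommand names assumed)
gemini extensions list

# Install an extension from its repository, e.g. an observability integration
gemini extensions install https://github.com/<org>/<extension-repo>
```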
```
gemini

Users report that the "Save Changes" button in our web UI
is slow. Investigate the Cloud Run service `tech-stack`:
- Use the available observability and security extensions
  to pull logs, traces, and performance metrics.
- Identify the likely root cause and propose a fix.
- If appropriate, prepare a patch as a pull request
  against the `tech-stack` repo, but DO NOT deploy
  without my explicit approval.

Explain each step you take, and summarize your findings
in a short incident report at the end.
```

With the right extensions configured, Gemini 3 can:
- Query Cloud Run metrics and logs
- Scan for security or dependency issues via Snyk
- Open a patch locally, run tests, and propose a PR
This turns what used to be a juggling act across 5–10 tools into a single terminal session driven by an agent that understands the bigger picture.
Designing your own agentic workflows with Gemini 3 + CLI
Beyond the built-in tools, you can extend Gemini CLI and the Gemini 3 API to match your stack and processes.
Use the Gemini 3 API directly for custom agents
Gemini 3 Pro is exposed via the Gemini 3 API with new thinking levels and more granular media resolution controls to manage reasoning depth and multimodal processing. For custom agentic systems (CI bots, internal chatops, etc.), you can model your architecture after Gemini CLI:
- Front-end: Slack bot, web UI, or internal CLI
- Middle layer: your own “core” that:
  - Calls the Gemini 3 API with structured tool schemas
  - Keeps track of thought signatures and checkpoints
  - Manages policy (what tools can be run, where)
- Back-end tools: Kubernetes API, GitHub, databases, internal REST/GraphQL services

Gemini CLI’s open-source repo is a useful reference implementation of how to structure tools, policies, and checkpoints around the Gemini 3 API.
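As a minimal illustration of that middle layer, here is a sketch in TypeScript using the `@google/genai` SDK. The tool name and schema are hypothetical, the model ID `gemini-3-pro-preview` should be checked against the current model list, and a real agent loop would add policy checks, tool execution, and checkpointing:

```ts
import { GoogleGenAI, Type } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Hypothetical internal tool the agent is allowed to request
const deployToStaging = {
  name: "deploy_to_staging",
  description: "Deploy a named service to the staging environment",
  parameters: {
    type: Type.OBJECT,
    properties: { service: { type: Type.STRING } },
    required: ["service"],
  },
};

async function runAgentTurn(objective: string): Promise<void> {
  const response = await ai.models.generateContent({
    model: "gemini-3-pro-preview", // assumed preview model ID; verify in the docs
    contents: objective,
    config: { tools: [{ functionDeclarations: [deployToStaging] }] },
  });

  // If the model requested a tool call, this is where your policy engine
  // would approve or deny it and then execute the real back-end action.
  const call = response.functionCalls?.[0];
  if (call) {
    console.log(`Model wants to call ${call.name} with`, call.args);
  } else {
    console.log(response.text);
  }
}

runAgentTurn("Roll the billing service out to staging and verify health checks.")
  .catch(console.error);
```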
Extend Gemini CLI with custom tools and extensions
According to the official Gemini CLI docs:
- Tools API lets you define new functions the agent can call (e.g., `deploy_to_staging`, `trigger_canary`, `query_cost_dashboard`).
- Extensions package up integrations (internal APIs, SaaS tools) as reusable modules.
- IDE integration connects the CLI agent to VS Code, JetBrains, etc., so the same workflows apply in-editor.
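In practice, one of the quickest ways to expose an internal function such as `query_cost_dashboard` to the agent is an MCP server registered in settings. The `mcpServers` block below follows the MCP configuration docs, but the server script itself is hypothetical:

```json
{
  "mcpServers": {
    "internal-ops": {
      "command": "node",
      "args": ["./tools/mcp-ops-server.js"],
      "env": { "OPS_ENV": "staging" }
    }
  }
}
```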
Design guidance for dev teams:
- Start with one or two high-value, low-risk workflows (e.g., triaging logs, generating docs, scaffolding services).
- Model them as custom slash commands or tools with clear contracts.
- Use the Policy Engine to restrict what each tool is allowed to do and where.
- Treat agentic workflows like code: review prompts, test them, and version-control configs.
Best practices, limitations, and when to dial Gemini 3 back
Gemini 3 Pro is powerful, but agentic tooling in the CLI should be used thoughtfully.
- Guardrails first: Always configure trusted folders and conservative policies in new environments; require confirmation for destructive or production-affecting actions.
- Keep humans in the loop: Let Gemini 3 propose patches, migrations, and infra changes, but require code review and explicit approval before merge/deploy.
- Optimize cost: For large document/code analysis, consider chunking and be aware that long-running, multi-step workflows can consume substantial tokens.
- Expect occasional missteps: Even with strong Terminal-Bench 2.0 scores, the agent can still choose suboptimal commands or misinterpret ambiguous instructions. Start with mock or staging environments.
- Use the right model: For very high-volume, simpler tasks, lighter models (e.g., Vertex-hosted Gemini Flash in other tooling) can be cheaper. Reserve Gemini 3 Pro for reasoning-heavy workflows where it shines.
Conclusion: bringing Gemini 3’s intelligence into your terminal today
Gemini 3 marks a clear step up in agentic capabilities, and the Gemini CLI is the most direct way to put that power into your day-to-day dev workflows. By upgrading to Gemini CLI 0.16.x+, enabling Gemini 3 Pro, and leaning on its tools for file I/O, shell access, and extensions, you can turn your terminal into an AI-native workbench: scaffold apps from rough ideas, convert sketches into running UIs, wrap complex shell-fu in natural language, and even orchestrate multi-service cloud debugging sessions.
To get started, install the latest Gemini CLI, authenticate with the Gemini 3 API, and try one concrete workflow: generate documentation for an existing repo, or let Gemini 3 drive a targeted refactor behind a feature flag. From there, evolve custom tools and extensions around your stack. As Google continues to expand Gemini 3’s agentic features and introduces platforms like Google Antigravity, the workflows you prototype in Gemini CLI today can become the foundation of richer, organization-wide AI agents tomorrow.