How to Use Gemini 2.5 Pro in Google AI Studio: A First Look


Google’s Gemini 2.5 Pro is now one of the most powerful “thinking” models available to developers, and as of November 2025 it’s fully accessible through Google AI Studio with the stable model ID gemini-2.5-pro. With a 1M-token context window, native multimodal input (text, images, audio, video, PDFs), and built‑in reasoning, it’s designed to move you from prototype to production much faster. This how‑to guide walks you through using Gemini 2.5 Pro directly in Google AI Studio: from spinning up your first prompt to exporting working code and wiring it into your app. We’ll focus on hands‑on workflows, not just specs, so you can start building AI-powered tools that actually ship.

What is Gemini 2.5 Pro and why use it in AI Studio?

According to Google’s March 25, 2025 DeepMind blog and the October 2025 technical report, Gemini 2.5 Pro is a natively multimodal, sparse Mixture‑of‑Experts transformer trained with “thinking” (reinforcement‑learning‑based multi‑step reasoning at inference time). The stable API model entry, documented on Gemini models, is:

Model code: gemini-2.5-pro
Input: text, images, video, audio, PDF
Output: text
Input token limit: 1,048,576
Output token limit: 65,536
Thinking: supported
Code execution, function calling, structured output, Google Search & Maps grounding: supported

AI Studio is Google’s browser-based playground for the Gemini API. As of its September 22, 2025 quickstart update, it lets you:

  • Try Gemini 2.5 Pro in a GUI with chat, file uploads, and multimodal runs.
  • Configure “Run settings” (temperature, safety, tools, thinking) without writing code.
  • Export ready-to-run snippets (“Get code”) in your preferred language.
  • Iterate on prompts, then migrate to Vertex AI or your own backend when you’re ready.

Step 1 – Set up Google AI Studio and select Gemini 2.5 Pro

Before you build anything, you need access to AI Studio and the right model.

  1. Open AI Studio
    Go to aistudio.google.com and sign in with your Google account.
  2. Create or select a project
    If prompted, choose a Google Cloud project (or let AI Studio create a lightweight “API key only” project). This is required for API keys and billing.
  3. Verify Gemini 2.5 Pro availability
    From the Gemini models page (Gemini API models) you’ll see Gemini 2.5 Pro listed as a stable model with the “Try in Google AI Studio” link:
    • gemini-2.5-pro – advanced thinking model (June 2025 GA, last updated Nov 18, 2025).
    Click “Try in Google AI Studio” or choose it from the model picker in the Chat interface.
  4. Confirm region & quotas
    Gemini 2.5 Pro runs in multiple regions (US, EU, APAC). For most AI Studio use cases you don’t have to pick a region explicitly, but for production you’ll want to align with available regions and rate limits.
[Screenshot: Google AI Studio lets you pick gemini-2.5-pro from the model dropdown in Chat mode.]

At this point you’re ready to experiment interactively with Gemini 2.5 Pro before touching any code.

Step 2 – Build your first Gemini 2.5 Pro chat workflow

AI Studio’s Chat prompts are ideal for prototyping assistants, copilots, and domain-specific bots. The official quickstart uses Gemini 2.5 Pro for its Europa “alien chatbot” example; we’ll adapt that pattern to a more realistic developer assistant.

2.1 Configure system instructions

  1. In AI Studio, click New chat. Ensure the model is set to gemini-2.5-pro.
  2. Click the assignment icon to open System instructions.
  3. Paste tailored instructions, for example:
You are an expert TypeScript and React engineer.
Your job is to:
- Explain code changes in concise, developer-friendly language.
- Suggest refactors that reduce complexity while preserving behavior.
- When showing code, include only the minimal diff needed.

Always:
- Ask clarifying questions if requirements are ambiguous.
- Keep answers under 3 paragraphs unless asked for more detail.

Gemini 2.5 Pro’s thinking capability means it can follow quite detailed role instructions, but you’ll get better results if you keep them specific and test iterations frequently.
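
The System instructions box maps onto a dedicated field in the API request. As a rough sketch (REST-style field names; verify them against the current Gemini API reference before relying on them), `buildChatRequest` below is a hypothetical helper that pairs your instructions with a user message:

```javascript
// Sketch: how AI Studio's System instructions box corresponds to the
// systemInstruction field of a generateContent request body.
// buildChatRequest is a hypothetical helper, not part of any SDK.
const systemInstructions = `You are an expert TypeScript and React engineer.
Keep answers under 3 paragraphs unless asked for more detail.`;

function buildChatRequest(userMessage) {
  return {
    // System instructions ride alongside the conversation, not inside it.
    systemInstruction: { parts: [{ text: systemInstructions }] },
    contents: [{ role: "user", parts: [{ text: userMessage }] }],
  };
}
```

Keeping the instructions in one constant makes it easy to iterate on them in code the same way you iterate in the AI Studio UI.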

2.2 Prototype conversation and iterate

  1. In the input box, paste a realistic query, for example:
    Here’s a React component that renders a huge table and is getting slow.
    Suggest a refactor to virtualize the list with minimal changes.
    Attach the component as a file or paste it directly.
  2. Click Run. Inspect the response:
    • Does it respect your length and style requirements?
    • Does it reason correctly about performance trade-offs?
    If not, refine the system instructions (“Prefer using react-window”, “Avoid pseudo-code”, etc.) and retry.

Every exchange—system instructions, user messages, and model responses—counts toward the 1M‑token context. For long sessions, periodically start a fresh chat when the history becomes too long or noisy.

2.3 Adjust run settings: temperature, safety, thinking

Open the Run settings panel on the right. For a coding assistant:

  • Temperature: 0.2–0.4 for focused, more deterministic output (use 0 when you want maximum repeatability).
  • Max output tokens: 2048–4096 is usually enough for code diffs.
  • Safety settings: leave the defaults unless you have stricter needs; Gemini 2.5 improves on 1.5’s safety behavior but can still be tuned.
  • Thinking (if exposed in the UI): use “auto” or a small budget for typical queries; increase it for hard algorithmic tasks, mindful of latency and cost.
[Screenshot: Run settings give you fine-grained control over Gemini 2.5 Pro’s behavior without writing any code.]
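
These UI settings correspond to `generationConfig` fields in the API. As a hedged sketch (field names follow the Gemini API docs as of late 2025, including `thinkingConfig.thinkingBudget` for thinking control, where -1 requests a dynamic budget; double-check them before use), `buildGenerationConfig` is a hypothetical helper:

```javascript
// Sketch: mapping AI Studio Run settings onto a generationConfig object.
// buildGenerationConfig is a hypothetical helper; verify field names
// against the current Gemini API reference.
function buildGenerationConfig({
  temperature = 0.3,
  maxOutputTokens = 4096,
  thinkingBudget = -1, // -1 = let the model pick its own thinking budget
} = {}) {
  return {
    temperature,
    maxOutputTokens,
    thinkingConfig: { thinkingBudget },
  };
}
```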

Once you’re consistently getting the behavior you want from Gemini 2.5 Pro in Chat, you’re ready to wire it into real data and workflows.

Step 3 – Use multimodal inputs and long context

One of Gemini 2.5 Pro’s main advantages over earlier models is its ability to handle large, mixed‑modality contexts: entire repositories, long PDFs, multi-hour videos, and audio. AI Studio lets you exercise those capabilities interactively.

3.1 Upload PDFs, images, audio, and video

  1. Drag-and-drop files into the Chat interface or click the attachment icon:
    • Documents: PDFs or text up to 50 MB per file (API) / 7 MB via console upload; 1,000 pages per PDF.
    • Images: up to 7 MB each; up to ~3,000 images per prompt via the API.
    • Video: formats like MP4, WEBM; up to ~45 minutes with audio or ~1 hour without (per Vertex AI docs), up to 10 videos per prompt.
    • Audio: 8+ hours per prompt (bounded by 1M tokens).
  2. Frame your prompt to exploit multimodality, for example:
    Here’s a 45-minute onboarding video and the product’s technical PDF.
    
    1. Extract the core API contracts and error-handling rules.
    2. Propose an internal checklist for reviewing new features that touch these contracts.
    3. Output as a concise markdown checklist.

Behind the scenes, AI Studio uses Gemini 2.5 Pro’s multimodal encoders and long-context attention to reason over all these inputs together, something Google’s 2025 technical report shows is now competitive or state-of-the-art on VideoMME, LVBench, and similar benchmarks.
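
When you later move this workflow to the API, attachments become parts in the request. A rough sketch, assuming REST-style field names (small files travel inline as base64 via `inlineData`; larger uploads go through the Files API and are referenced by URI instead), with `buildMultimodalRequest` as a hypothetical helper:

```javascript
// Sketch: a multimodal request mixes text parts with file parts.
// buildMultimodalRequest is a hypothetical helper; field names follow
// the REST API docs and should be verified before use.
function buildMultimodalRequest(question, pdfBase64) {
  return {
    contents: [{
      role: "user",
      parts: [
        { text: question },
        // Inline base64 works for small files; use the Files API for big ones.
        { inlineData: { mimeType: "application/pdf", data: pdfBase64 } },
      ],
    }],
  };
}
```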

3.2 Design prompts for long-context reasoning

With a 1M token window, you can easily exceed what the model can “mentally prioritize” if you don’t guide it. When working with large contexts in AI Studio:

  • Chunk your tasks: Ask for high-level summaries first, then drill down with follow-up prompts referencing specific sections.
  • Use structure in prompts: Numbered steps, headings, and explicit “First/Then/Finally” instructions help the model plan.
  • Be explicit about constraints: “Use only these files”, “Ignore earlier messages”, “Base your answer strictly on the attached PDF”.

Example long-context prompt:

You have been given:
- The entire backend service repository (as multiple files).
- A PDF describing our data retention policies.

Task:
1. Identify every place we write user PII to storage.
2. For each location, state whether it complies with the retention policy.
3. Output a JSON report with fields:
   - file_path
   - line_span
   - pii_type
   - policy_violation (true/false)
   - notes

This is exactly the kind of structured, long-context reasoning Gemini 2.5 Pro was built to handle, and AI Studio gives you a safe environment to tune this before you code against the API.
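
Once the model returns that JSON report, it pays to validate each entry before feeding it into downstream tooling. A minimal sketch (the field list mirrors the prompt above; `validateReportEntry` is a hypothetical helper you would adapt to your own schema):

```javascript
// Sketch: defensive validation of the JSON report the prompt asks for.
// validateReportEntry is a hypothetical helper, not a library function.
const REQUIRED_FIELDS = ["file_path", "line_span", "pii_type", "policy_violation", "notes"];

function validateReportEntry(entry) {
  const missing = REQUIRED_FIELDS.filter((f) => !(f in entry));
  if (missing.length > 0) return { ok: false, missing };
  if (typeof entry.policy_violation !== "boolean") {
    return { ok: false, missing: [], reason: "policy_violation must be boolean" };
  }
  return { ok: true, missing: [] };
}
```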

Step 4 – Enable tools: structured output, function calling, grounding, code execution

Gemini 2.5 Pro supports a rich tool set documented in the Gemini API guides: structured outputs, function calling, code execution, Google Search grounding, Maps grounding, File Search, and URL context. In AI Studio, you can toggle many of these features in the UI before exporting code.

4.1 Structured output: get JSON, not prose

  1. In Run settings, enable Structured output.
  2. Define a JSON schema, for example:
    {
      "type": "object",
      "properties": {
        "endpoint": { "type": "string" },
        "method": { "type": "string", "enum": ["GET","POST","PUT","DELETE"] },
        "summary": { "type": "string" },
        "risk_level": { "type": "string", "enum": ["low","medium","high"] }
      },
      "required": ["endpoint","method","summary","risk_level"]
    }
  3. Prompt the model:
    Analyze this OpenAPI spec and emit one JSON object per endpoint
    according to the schema, focusing on security risks.

AI Studio will show you the structured response; “Get code” will provide the correct API parameters for response_schema/response_mime_type in your language of choice.
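
In code, that toggle becomes two fields in `generationConfig`. A sketch reusing the schema above (`buildStructuredRequest` is a hypothetical helper; the camelCase field names are the JavaScript-side equivalents of `response_schema`/`response_mime_type`):

```javascript
// Sketch: Structured output maps to responseMimeType + responseSchema
// in generationConfig. buildStructuredRequest is a hypothetical helper.
const endpointSchema = {
  type: "object",
  properties: {
    endpoint: { type: "string" },
    method: { type: "string", enum: ["GET", "POST", "PUT", "DELETE"] },
    summary: { type: "string" },
    risk_level: { type: "string", enum: ["low", "medium", "high"] },
  },
  required: ["endpoint", "method", "summary", "risk_level"],
};

function buildStructuredRequest(prompt) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: endpointSchema,
    },
  };
}
```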

4.2 Function calling and Google Search grounding

Gemini 2.5 Pro can decide when to call functions you define. In AI Studio Build mode (or advanced Chat configurations), you can declare tools and see how the model uses them before writing backend code.

  1. Switch to a Build or “Advanced” prompt that supports tools (AI Studio labels may vary over time).
  2. Define a tool signature, such as:
    getWeather(location: string, units: "metric" | "imperial")
  3. Enable Google Search grounding if you want Gemini to reference the web for factuality (availability depends on your account and policies).
  4. Test with prompts like:
    Summarize today’s weather in Berlin and compare it to yesterday.
    Use the getWeather tool; don’t guess.

AI Studio will show the intermediate tool calls (in JSON) as well as the final answer, which is invaluable for debugging your tool schemas and guardrails before you put them behind an API.
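
The `getWeather` signature above, expressed as a Gemini function declaration, looks roughly like this (a sketch: the declaration shape follows the function-calling docs, while the description strings are illustrative):

```javascript
// Sketch: the getWeather tool as a function declaration. The model
// returns a functionCall part naming this tool; your backend executes
// it and replies with a functionResponse part.
const getWeatherDeclaration = {
  name: "getWeather",
  description: "Get current weather for a location.",
  parameters: {
    type: "object",
    properties: {
      location: { type: "string", description: "City name, e.g. Berlin" },
      units: { type: "string", enum: ["metric", "imperial"] },
    },
    required: ["location"],
  },
};

const tools = [{ functionDeclarations: [getWeatherDeclaration] }];
```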

4.3 Code execution for safer, more reliable answers

For math, data analysis, and certain coding tasks, you can enable Code execution in Run settings. Gemini 2.5 Pro can then generate and run Python snippets behind the scenes to reason more reliably (similar to the “thinking” budget but with external tools).

Example prompt:

You will be given a CSV of anonymized transactions.
Compute:
- Total volume per country
- Top 5 merchants by revenue
- Any suspicious spikes (more than 3 standard deviations from the daily mean)

Use code execution for all calculations and show the final results in a markdown table.

You’ll see the code and outputs in AI Studio, which makes it easy to replicate similar logic in your own environment later.
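
On the API side, the Code execution toggle corresponds to adding a `codeExecution` tool to the request; the model then returns executable-code and execution-result parts alongside its text. A sketch with `buildCodeExecRequest` as a hypothetical helper (verify the tool field name against the current docs):

```javascript
// Sketch: enabling code execution via the tools array.
// buildCodeExecRequest is a hypothetical helper, not an SDK function.
function buildCodeExecRequest(prompt) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    tools: [{ codeExecution: {} }],
  };
}
```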

Step 5 – Export code and integrate Gemini 2.5 Pro into your app

Once you have a working prompt and run configuration in AI Studio, you don’t need to reimplement it by hand. Use the built‑in “Get code” feature.

5.1 Get an API key

  1. Go to API key in AI Studio or aistudio.google.com/apikey.
  2. Create a key, scoped to your project. Store it securely (env vars, secret manager).
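
In application code, read the key from the environment and fail fast if it is missing rather than hard-coding it. A minimal sketch (`getApiKey` is a hypothetical helper; `GEMINI_API_KEY` is the variable name the Gemini quickstarts conventionally use):

```javascript
// Sketch: load the AI Studio API key from an environment variable.
// getApiKey is a hypothetical helper.
function getApiKey(env = process.env) {
  const key = env.GEMINI_API_KEY;
  if (!key) throw new Error("GEMINI_API_KEY is not set");
  return key;
}
```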

5.2 Use “Get code” for your prompt

  1. In your configured Chat or Build prompt, click Get code.
  2. Select your target language (JavaScript/TypeScript, Python, Java, etc.).
  3. Copy the generated snippet using the gemini-2.5-pro model and your chosen parameters.
// Example: Node.js using the @google/generative-ai SDK
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });

async function summarizeSpec(specText) {
  const prompt = `
You are a backend API reviewer. Summarize this OpenAPI spec and highlight
breaking changes or security risks. Respond in concise bullet points.
`;

  const result = await model.generateContent({
    contents: [
      { role: "user", parts: [{ text: prompt }] },
      { role: "user", parts: [{ text: specText }] },
    ],
    generationConfig: {
      temperature: 0.3,
      maxOutputTokens: 2048,
    },
  });

  console.log(result.response.text());
}

This code mirrors your AI Studio configuration so that behavior is consistent between the playground and production. For more advanced usage (Files API, Batch API, Live API), follow the Gemini API docs.

How Gemini 2.5 Pro compares to other Gemini models in AI Studio

In practice you’ll often choose between Gemini 2.5 Pro and the 2.5 Flash / Flash‑Lite models, depending on latency and cost constraints.

| Model | Model code | Best for | Context window | Notes (as of Nov 2025) |
| --- | --- | --- | --- | --- |
| Gemini 2.5 Pro | gemini-2.5-pro | Deep reasoning, complex coding, multimodal analysis | 1,048,576 tokens | Stable; thinking, tools, long video/audio; higher latency and cost |
| Gemini 2.5 Flash | gemini-2.5-flash | High-volume requests, agents, chatbots | 1,048,576 tokens | Best price–performance; strong reasoning but below Pro on hardest tasks |
| Gemini 2.5 Flash‑Lite | gemini-2.5-flash-lite | Ultra-low latency & cost | 1,048,576 tokens | Lower capability; good for simple classification, routing, lightweight tools |

In AI Studio you can quickly A/B these models on the same prompt. For early prototyping of complex, multimodal workflows, start with Gemini 2.5 Pro. Once the behavior is stable, try swapping to 2.5 Flash or Flash‑Lite where quality allows, then export code.
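
Because the three models share the same API surface, swapping them is just a string change, which is what makes these A/B tests cheap. A sketch of a hypothetical router that picks a 2.5 model ID by task tier:

```javascript
// Sketch: route requests to a Gemini 2.5 model by task tier.
// pickModel and the tier names are hypothetical; the model IDs are
// the stable codes from the comparison above.
const MODELS = {
  deep: "gemini-2.5-pro",
  fast: "gemini-2.5-flash",
  lite: "gemini-2.5-flash-lite",
};

function pickModel(tier) {
  // Default to Pro for unknown tiers so quality never silently degrades.
  return MODELS[tier] ?? MODELS.deep;
}
```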


Putting it all together: from announcement to application

Gemini 2.5 Pro has moved from “experimental” to a stable, production-ready model in 2025, and Google AI Studio is the fastest way to put that power to work.

Key steps to get started:

  • Use AI Studio to select gemini-2.5-pro and prototype your system instructions and prompts.
  • Exercise multimodal and long-context power by uploading real PDFs, repositories, audio, and video and framing tasks that mirror your production needs.
  • Enable structured output, tools, grounding, and code execution in the Run settings to shape the model into a reliable component of your system, not just a chat toy.
  • Click Get code to export working snippets for your stack, then integrate with the Gemini API using your AI Studio key.
  • Iterate: refine prompts and parameters in AI Studio, then sync changes back into your code as your product evolves.

If you follow this workflow, Gemini 2.5 Pro in Google AI Studio becomes more than a demo: it’s your front door to building next‑generation AI tools with state-of-the-art reasoning, coding, and multimodal understanding already baked in.

Written by promasoud