Gemini
batchling is compatible with Gemini through any supported framework.
The following Gemini endpoint is made batch-compatible:
/v1beta/models/{model}:generateContent
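For orientation, the endpoint above expects a JSON body in the shape defined by the public Gemini REST API. The sketch below builds the URL and payload for a single call without sending it; the helper name `build_generate_content_request` is illustrative and not part of batchling:

```python
# Minimal sketch of the request shape behind the generateContent endpoint.
# The base URL and payload layout follow the public Gemini REST API docs;
# the helper itself is a hypothetical convenience, not a batchling API.
BASE_URL = "https://generativelanguage.googleapis.com"


def build_generate_content_request(model: str, text: str) -> tuple[str, dict]:
    """Return the endpoint URL and JSON body for one generateContent call."""
    url = f"{BASE_URL}/v1beta/models/{model}:generateContent"
    body = {"contents": [{"parts": [{"text": text}]}]}
    return url, body


url, body = build_generate_content_request("gemini-2.5-flash-lite", "Hello")
print(url)
```

In batch mode, many such request bodies are grouped into one job instead of being sent individually.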
Check model support and batch pricing
Before sending batches, review the provider's official pricing page for supported models and batch pricing details.
The Batch API docs for Gemini can be found at the following URL:
https://ai.google.dev/gemini-api/docs/batch-mode
Example Usage
API key required
Set GEMINI_API_KEY in .env or ensure it is already loaded in your environment variables before running batches.
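For reference, a minimal `.env` file for this example contains a single line (the value shown is a placeholder, not a real key):

```
GEMINI_API_KEY=your-api-key-here
```

The example script below calls `load_dotenv()` at import time, so this file is picked up automatically when it sits in the working directory.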
Here's an example showing how to use batchling with Gemini:
gemini_example.py
import asyncio
import os

from dotenv import load_dotenv
from google import genai

from batchling import batchify

load_dotenv()


async def build_tasks() -> list:
    """Build Gemini requests."""
    client = genai.Client(api_key=os.getenv("GEMINI_API_KEY")).aio
    questions = [
        "Who is the best French painter? Answer in one short sentence.",
        "What is the capital of France?",
    ]
    return [
        client.models.generate_content(
            model="gemini-2.5-flash-lite",
            contents=question,
        )
        for question in questions
    ]


async def main() -> None:
    """Run the Gemini example."""
    tasks = await build_tasks()
    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(f"{response.model_version} answer:\n{response.text}\n")


async def run_with_batchify() -> None:
    """Run `main` inside `batchify` for direct script execution."""
    async with batchify():
        await main()


if __name__ == "__main__":
    asyncio.run(run_with_batchify())
Output: