TinyFish has opened its Search and Fetch APIs to all developers and AI agents at no cost, removing the credit card requirement and setting generous rate limits that make both tools viable for production use from day one.
The launch covers every surface developers already work on: direct REST endpoints; MCP-compatible clients including Claude Code, Cursor, and Codex; a CLI; Skills; Python and TypeScript SDKs; and first-party integrations with n8n, Dify, LangChain, CrewAI, and Vercel Skills.
One API key covers it all, and the same key and dashboard carry over as usage grows beyond the free tier.
TinyFish's Codex integration
Search returns structured JSON built for agent retrieval rather than browser-style link lists. The endpoint runs on TinyFish's own Chromium infrastructure, delivering a p50 latency of under 0.5 seconds with rank-stable results calibrated for use within an agent's tool loop. Fetch takes any URL through a full browser render, strips navigation bars, scripts, ads, and cookie banners, then returns clean Markdown, JSON, or HTML. Up to 10 URLs can be processed in parallel, and failed URLs do not count against usage.
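The announcement doesn't publish the response schema, but the appeal of structured JSON over link lists is easy to illustrate. Here is a minimal sketch of how an agent's tool loop might flatten search results into compact context entries; the field names (`results`, `title`, `url`, `snippet`) are assumptions for illustration, not TinyFish's documented schema.

```python
import json

# Hypothetical example of the kind of structured JSON an agent-oriented
# search endpoint might return. Field names are assumed, not TinyFish's
# documented schema.
SAMPLE_RESPONSE = json.dumps({
    "results": [
        {"title": "Example Domain", "url": "https://example.com",
         "snippet": "Illustrative snippet text."},
        {"title": "Example Two", "url": "https://example.org",
         "snippet": "More illustrative text."},
    ]
})

def to_tool_observations(raw: str, limit: int = 5) -> list[str]:
    """Flatten structured search results into compact one-line strings
    an agent can drop straight into its context window."""
    results = json.loads(raw).get("results", [])[:limit]
    return [f"{r['title']} | {r['url']}: {r['snippet']}" for r in results]

for line in to_tool_observations(SAMPLE_RESPONSE):
    print(line)
```

The point of the flattening step is rank stability: because results arrive as an ordered array rather than rendered markup, the agent sees the same items in the same order on every call with no scraping heuristics in between.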
The token-efficiency case is straightforward: where native fetch implementations hand models a wall of raw HTML, TinyFish Fetch hands them the actual content, which means lower LLM costs and longer effective context per call. A one-line Skill install also teaches compatible agents when to reach for search versus fetch and how to call each correctly.
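To make the token-efficiency point concrete, here is a crude sketch using only the Python standard library, not TinyFish's SDK: even naive stripping of scripts, styles, and navigation leaves a fraction of the bytes (and therefore tokens) that raw HTML would cost the model. A production fetch-to-Markdown service does far more, but the arithmetic is the same.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude content extractor: keeps visible text, drops script, style,
    and nav subtrees. A stand-in for the kind of cleanup a
    fetch-to-Markdown service performs, not TinyFish's implementation."""
    def __init__(self):
        super().__init__()
        self._skip = 0          # depth inside tags we want to discard
        self.parts = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style", "nav"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style", "nav") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

# Toy page: a little real content buried in boilerplate.
RAW_HTML = """<html><head><style>body{margin:0;font:16px sans-serif}</style>
<script>window.dataLayer=[];trackPageView();</script></head>
<body><nav><a href="/">Home</a><a href="/pricing">Pricing</a></nav>
<main><h1>The Article Title</h1><p>The actual content the agent wanted.</p></main>
</body></html>"""

parser = TextExtractor()
parser.feed(RAW_HTML)
clean = " ".join(parser.parts)

print(f"raw bytes:   {len(RAW_HTML)}")
print(f"clean bytes: {len(clean)}")
```

On real pages the ratio is usually far more lopsided than in this toy example, since production HTML carries kilobytes of framework markup, inline JavaScript, and tracking code per paragraph of content.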
TinyFish positions itself as enterprise infrastructure for AI web agents, with a four-product platform covering search, fetch, browser sessions, and multi-step web automation. Its customer base already includes Google, Amazon, DoorDash, and Grubhub, and its Web Agent product holds a publicly benchmarked accuracy of 89.9% on the Mind2Web evaluation.
Making Search and Fetch free is a deliberate infrastructure play: remove every friction point from the retrieval layer so it lands in every agent stack at zero cost, then grow with teams as their workloads scale. The move places TinyFish in direct competition with tools like Firecrawl and the native fetch implementations baked into LLM clients, staking its differentiation on infrastructure ownership, token efficiency, and depth of integration across the agent ecosystem.