SDKs & Libraries

LLMCrawl provides official SDKs and libraries to make integration with your applications seamless and developer-friendly.

Official SDKs

JavaScript/TypeScript SDK

Our most comprehensive SDK for JavaScript and TypeScript applications.

  • Package: @llmcrawl/llmcrawl-js
  • Platforms: Node.js, Browser, React, Next.js, Vue, Angular
  • Features: Full API coverage, TypeScript support, AI extraction (see the sketch below), webhooks
  • Documentation: JavaScript/TypeScript SDK Guide
bash
npm install @llmcrawl/llmcrawl-js
typescript
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

const client = new LLMCrawl({ apiKey: "your-api-key" });
const result = await client.scrape("https://example.com");

View Full Documentation →
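
The AI extraction feature listed above pulls structured data out of a page. A minimal sketch, assuming a schema-style extract option; the exact option and result field names may differ, so check the SDK guide:

typescript
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

// Hypothetical AI-extraction call: the `extract` option, its `schema` shape,
// and the `data?.json` result field are assumptions, not the confirmed API.
const client = new LLMCrawl({ apiKey: "your-api-key" });

const extracted = await client.scrape("https://example.com/product", {
  extract: {
    schema: {
      title: "string",
      price: "number",
    },
  },
});

console.log(extracted.data?.json);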

Framework-Specific Examples

React/Next.js

Perfect for building web applications with scraping capabilities:

typescript
import { useState } from "react";
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

function ScrapingComponent() {
  const [result, setResult] = useState<any>(null);

  const handleScrape = async () => {
    // Note: NEXT_PUBLIC_ variables are bundled into the browser build; in
    // production, proxy requests through a server route to keep the key secret.
    const client = new LLMCrawl({
      apiKey: process.env.NEXT_PUBLIC_LLMCRAWL_API_KEY,
    });
    const data = await client.scrape("https://example.com");
    setResult(data);
  };

  return (
    <div>
      <button onClick={handleScrape}>Scrape Website</button>
      {result && <pre>{result.data?.markdown}</pre>}
    </div>
  );
}

Node.js

Ideal for backend services, automation, and data processing:

typescript
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";
import fs from "fs/promises";

const client = new LLMCrawl({ apiKey: process.env.LLMCRAWL_API_KEY });

// Scrape and save to file
const result = await client.scrape("https://docs.example.com");
if (result.success) {
  await fs.writeFile("content.md", result.data.markdown);
}

Express.js API

Build RESTful APIs with scraping capabilities:

typescript
import express from "express";
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

const app = express();
app.use(express.json()); // parse JSON bodies so req.body is populated

const client = new LLMCrawl({ apiKey: process.env.LLMCRAWL_API_KEY });

app.post("/api/scrape", async (req, res) => {
  const { url } = req.body;
  const result = await client.scrape(url);

  if (result.success) {
    res.json({ content: result.data.markdown });
  } else {
    res.status(400).json({ error: result.error });
  }
});

app.listen(3000);

Community SDKs

We welcome community contributions! If you've built an SDK for another language or framework, let us know.

Python SDK (Coming Soon)

Official Python SDK in development with full typing support and Pydantic integration.

  • Status: In Development (Q2 2025 Beta)
  • Features: Type hints, async support, Pydantic models
  • Documentation: Python SDK Guide
python
# Preview API (subject to change)
import asyncio
from llmcrawl import LLMCrawl

# The client is assumed async; plain scripts can't use top-level await
async def main():
    client = LLMCrawl(api_key="your-api-key")
    result = await client.scrape("https://example.com")

asyncio.run(main())

View Python Documentation →

Go (Community)

We're looking for Go developers interested in contributing an SDK.

PHP (Community)

We're looking for PHP developers interested in contributing an SDK.

REST API

All SDKs are built on top of our REST API. You can also use the API directly:

bash
curl -X POST https://api.llmcrawl.dev/v1/scrape \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'

View API Reference →

Real-Time Updates

For real-time crawl progress, register a webhook or poll the crawl status:

typescript
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

const client = new LLMCrawl({ apiKey: "your-api-key" });

// Start crawl with webhook
const crawl = await client.crawl("https://example.com", {
  webhookUrls: ["https://your-app.com/webhook"],
  limit: 100,
});

// Or poll for updates
let status = await client.getCrawlStatus(crawl.id);
while (status.status === "scraping") {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  status = await client.getCrawlStatus(crawl.id);
  console.log(`Progress: ${status.completed}/${status.total}`);
}
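
On the receiving end, a webhook is just an HTTP endpoint your app exposes. A minimal Express sketch; the event payload shape is an assumption, so log it and adapt to what your crawls actually send:

typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical receiver for crawl events; payload fields are assumptions.
app.post("/webhook", (req, res) => {
  console.log("Crawl event:", req.body);
  res.sendStatus(200); // acknowledge quickly so deliveries aren't retried
});

app.listen(3000);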

Getting Started

  1. Get an API Key: Sign up at llmcrawl.dev and get your API key from the dashboard.

  2. Install SDK: Choose your preferred language/framework and install the SDK.

  3. Initialize Client: Create a client instance with your API key.

  4. Start Scraping: Use the SDK methods to scrape, crawl, or map websites (see the sketch below).
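
Putting the four steps together in TypeScript; the crawl option appears earlier on this page, while the map method's signature is an assumption based on step 4:

typescript
import { LLMCrawl } from "@llmcrawl/llmcrawl-js";

// Steps 1-3: sign up, install the SDK, and initialize the client
const client = new LLMCrawl({ apiKey: process.env.LLMCRAWL_API_KEY });

// Step 4: scrape a single page...
const page = await client.scrape("https://example.com");

// ...crawl a whole site...
const crawl = await client.crawl("https://example.com", { limit: 10 });

// ...or map a site's URLs (method name assumed from step 4 above)
const siteMap = await client.map("https://example.com");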

Support

  • 📚 Documentation: Comprehensive guides and API reference
  • 🎮 Playground: Test the API interactively at llmcrawl.dev/tools
  • 💬 Community: Join our Discord for help and discussions
  • 📧 Email: Direct support at [email protected]

Contributing

Interested in contributing to our SDKs or building one for a new language? Check out our contribution guidelines or reach out to us!