Copy-paste these commands directly into your terminal. Just replace ck_your_api_key with your actual API key.

Raw Crawl

Fetch a webpage and get its HTML content.

Basic Request

curl -X POST https://api.crawlkit.com/api/v1/crawl/raw \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'

With Options

curl -X POST https://api.crawlkit.com/api/v1/crawl/raw \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "options": {
      "timeout": 30000,
      "followRedirects": true,
      "maxRedirects": 5,
      "headers": {
        "User-Agent": "MyApp/1.0"
      }
    }
  }'

Example Response

{
  "success": true,
  "data": {
    "url": "https://example.com",
    "finalUrl": "https://example.com/",
    "statusCode": 200,
    "headers": {
      "content-type": "text/html; charset=UTF-8"
    },
    "body": "<!doctype html>...",
    "contentLength": 1256,
    "timing": { "total": 342 },
    "creditsUsed": 1,
    "creditsRemaining": 999
  }
}
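For a quick sanity check on a saved response, individual fields can be pulled out with jq. The field paths below follow the example response above; jq itself is a separate tool, not part of the API:

```shell
# Extract the HTTP status and remaining credits from a saved
# raw-crawl response (shape as in the example response above).
jq -r '"status=\(.data.statusCode) credits=\(.data.creditsRemaining)"' response.json
```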

Search

Search the web using DuckDuckGo.

Basic Request

curl -X POST https://api.crawlkit.com/api/v1/crawl/search \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"query": "web scraping python"}'

With Filters

curl -X POST https://api.crawlkit.com/api/v1/crawl/search \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "web scraping python",
    "options": {
      "language": "en-US",
      "region": "us-en",
      "timeRange": "m",
      "maxResults": 20
    }
  }'
Time Range Options:
  • d - Last day
  • w - Last week
  • m - Last month
  • y - Last year

Example Response

{
  "success": true,
  "data": {
    "query": "web scraping python",
    "totalResults": 20,
    "results": [
      {
        "position": 1,
        "title": "Web Scraping with Python - Guide",
        "url": "https://example.com/guide",
        "snippet": "Learn how to scrape websites..."
      }
    ],
    "timing": { "total": 1523 },
    "creditsUsed": 1,
    "creditsRemaining": 998
  }
}
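To use the results programmatically, pipe the response straight into jq. The filter path follows the example response above:

```shell
# List just the result URLs from a search response.
curl -s -X POST https://api.crawlkit.com/api/v1/crawl/search \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"query": "web scraping python"}' \
  | jq -r '.data.results[].url'
```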

Screenshot

Take a full-page screenshot of any website.

Basic Screenshot

curl -X POST https://api.crawlkit.com/api/v1/crawl/screenshot \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'

With Options

curl -X POST https://api.crawlkit.com/api/v1/crawl/screenshot \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "options": {
      "width": 1920,
      "height": 1080,
      "timeout": 30000,
      "waitForSelector": ".content-loaded"
    }
  }'

Example Response

{
  "success": true,
  "data": {
    "url": "https://files.example.com/screenshot-abc123.png",
    "width": 1920,
    "height": 1080,
    "timing": { "total": 4521 },
    "creditsUsed": 1,
    "creditsRemaining": 997
  }
}
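Note that the screenshot endpoint returns a URL to the image rather than the image bytes, so downloading the PNG is a two-step job. This sketch assumes the files URL in data.url is fetchable without authentication:

```shell
# Request a screenshot, then download the resulting PNG.
IMAGE_URL=$(curl -s -X POST https://api.crawlkit.com/api/v1/crawl/screenshot \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}' \
  | jq -r '.data.url')
curl -s "$IMAGE_URL" -o screenshot.png
```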

Saving Output to a File

Save HTML to file

curl -X POST https://api.crawlkit.com/api/v1/crawl/raw \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}' \
  | jq -r '.data.body' > page.html

Save JSON response to file

curl -X POST https://api.crawlkit.com/api/v1/crawl/raw \
  -H "Authorization: ApiKey ck_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}' \
  -o response.json

Using Environment Variables

Store your API key in an environment variable for convenience:
# Add to your ~/.bashrc or ~/.zshrc
export CRAWLKIT_API_KEY="ck_your_api_key"

# Then use it in commands
curl -X POST https://api.crawlkit.com/api/v1/crawl/raw \
  -H "Authorization: ApiKey $CRAWLKIT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'