Turn Any URL Into Sales Intelligence.
Welcome to the Prompt Fuel API documentation. Our web scraping API provides a simple, reliable way to extract data from any website with a 99.9% success rate.
Base URL
https://app.promptfuel.io
Key Features
- Universal anti-bot bypass (Cloudflare, DataDome, PerimeterX, and 50+ others)
- JavaScript rendering with real Chrome browsers
- Automatic CAPTCHA solving
- Premium proxy network across 195+ countries
- Smart retry logic with exponential backoff
- Structured data extraction
Authentication
All API requests require authentication using your API key. You can obtain your API key from your dashboard after signing up.
Authentication Method
Include your API key in the X-API-Key request header (this is the header used by every example in these docs):
X-API-Key: YOUR_API_KEY
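For example, a minimal authenticated request in Python (a sketch using the requests library, which is an assumption — any HTTP client works):
import requests

response = requests.post(
    "https://app.promptfuel.io/api/v1/scrape",
    headers={
        "X-API-Key": "YOUR_API_KEY",  # replace with the key from your dashboard
        "Content-Type": "application/json",
    },
    json={"url": "example.com", "timeout": 180},
    timeout=200,  # client-side timeout, slightly above the 180s scrape timeout
)
print(response.json())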
Rate Limits
Rate limits depend on your subscription plan:
- Developer: 25 concurrent requests
- Startup: 50 concurrent requests
- Professional: 100 concurrent requests
- Enterprise: Custom limits
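Because the limit is enforced per plan, cap your own in-flight requests client-side. A minimal sketch using an asyncio.Semaphore (the hard-coded limit of 25 is illustrative; the Python integration below reads the real limit from the API instead):
import asyncio
import aiohttp

CONCURRENT_LIMIT = 25  # e.g. the Developer plan; set this to your plan's limit

async def scrape(session, sem, domain):
    async with sem:  # at most CONCURRENT_LIMIT requests in flight at once
        async with session.post(
            "https://app.promptfuel.io/api/v1/scrape",
            headers={"X-API-Key": "YOUR_API_KEY"},
            json={"url": domain, "timeout": 180},
        ) as resp:
            return await resp.json()

async def main():
    sem = asyncio.Semaphore(CONCURRENT_LIMIT)
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(scrape(session, sem, d) for d in ["example.com"]))
        print(results)

asyncio.run(main())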
Quick Start
Get started with Prompt Fuel in under 5 minutes. Here's a simple example to scrape your first website:
1. Get Your API Key
Sign up for a free account and get your API key from the dashboard.
2. Make Your First Request
curl -X POST "https://app.promptfuel.io/api/v1/scrape" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "example.com",
"timeout": 180
}'
3. Handle the Response
The API returns a JSON response with the scraped data:
{
"success": true,
"url": "https://example.com/",
"title": "Example Domain",
"emails": [],
"phones": [],
"keywords": [
"example",
"domain"
],
"social_links": {},
"links": [],
"career_page_url": null,
"job_board_redirects": [],
"technologies": {},
"headers": {},
"seo": {
"meta_description": null,
"meta_keywords": [],
"og_title": null,
"og_image": null,
"og_url": null,
"canonical_url": null,
"robots": null,
"sitemap_url": null,
"rss_feed": null,
"viewport": "width=device-width, initial-scale=1",
"favicon": null,
"h1": [
"Example Domain"
],
"schema_org": []
},
"content_length": 1248,
"extracted_at": "2025-09-22T22:55:01.184140Z",
"error": null,
"error_code": null,
"body_text": "Example Domain Example Domain This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission. More information...",
"cache_used": false,
"cached_at": null,
"credits_charged": 1,
"remaining_credits": 9999985
}
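In Python (assuming you issued the same request with a client like requests), the fields you usually want can be read straight off the parsed JSON. Field names below are taken from the response above:
data = response.json()  # parsed response from the request in step 2

if data["success"]:
    print("Title:", data["title"])
    print("Emails:", data["emails"])
    print("Keywords:", ", ".join(data["keywords"]))
    print("Credits remaining:", data["remaining_credits"])
else:
    # error and error_code are populated when success is false
    print("Scrape failed:", data["error_code"], data["error"])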
Response Format
The API returns different response formats based on your request parameters.
Standard Response (Extracted Data)
Default response with extracted and structured data:
{
"success": true,
"url": "https://example.com/",
"title": "Example Domain",
"emails": [],
"phones": [],
"keywords": [
"example",
"domain"
],
"social_links": {},
"links": [],
"career_page_url": null,
"job_board_redirects": [],
"technologies": {},
"headers": {},
"seo": {
"meta_description": null,
"meta_keywords": [],
"og_title": null,
"og_image": null,
"og_url": null,
"canonical_url": null,
"robots": null,
"sitemap_url": null,
"rss_feed": null,
"viewport": "width=device-width, initial-scale=1",
"favicon": null,
"h1": [
"Example Domain"
],
"schema_org": []
},
"content_length": 1248,
"extracted_at": "2025-09-22T22:55:01.184140Z",
"error": null,
"error_code": null,
"body_text": "Example Domain Example Domain This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission. More information...",
"cache_used": false,
"cached_at": null,
"credits_charged": 1,
"remaining_credits": 9999985
}
Raw HTML Response
When the raw: true parameter is set, the response includes the raw HTML:
{
"success": true,
"url": "https://example.com/",
"content_length": 1248,
"error": null,
"error_code": null,
"cache_used": true,
"cached_at": "2025-09-22T22:55:01.184140",
"forced_cache_hit": true,
"raw_html": "\n Example Domain \n\n \n \n \n \n\n\n\n\n Example Domain
\n This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.
\n \n\n\n\n",
"credits_charged": 1,
"remaining_credits": 9999984
}
Request for Raw HTML
curl -X POST "https://app.promptfuel.io/api/v1/scrape" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "example.com",
"timeout": 180,
"raw": true,
"cache_bypass": true
}'
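The raw_html string can then be handed to any HTML parser. A sketch using requests plus BeautifulSoup (both are assumptions — any HTTP client and parser will do; pip install requests beautifulsoup4):
import requests
from bs4 import BeautifulSoup

resp = requests.post(
    "https://app.promptfuel.io/api/v1/scrape",
    headers={"X-API-Key": "YOUR_API_KEY", "Content-Type": "application/json"},
    json={"url": "example.com", "timeout": 180, "raw": True},
    timeout=200,
)
data = resp.json()
if data["success"]:
    soup = BeautifulSoup(data["raw_html"], "html.parser")
    print(soup.title.string if soup.title else "no <title> found")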
Error Response
{
"success": false,
"error": "Invalid API key",
"error_code": "INVALID_API_KEY",
"credits_charged": 0,
"remaining_credits": 0
}
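Errors are reported in the JSON body via success, error, and error_code, so branch on those fields rather than on HTTP status alone. A minimal sketch in Python:
data = response.json()
if not data.get("success"):
    code = data.get("error_code")
    if code == "INVALID_API_KEY":
        raise RuntimeError("Check the key sent in the X-API-Key header")
    print(f"Request failed ({code}): {data.get('error')}")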
Python Integration - Production-Ready Concurrent Scraping
A complete implementation of concurrent web scraping that automatically respects your plan's concurrent-request limit.
Installation
pip install aiohttp
Full Production Code (~300 lines)
#!/usr/bin/env python3
"""
Production-ready concurrent web scraping with Prompt Fuel API
Automatically respects your plan's concurrent limits
"""
import asyncio
import aiohttp
import time
import csv
import json
from typing import List, Dict, Any, Set
from datetime import datetime
# Configuration - UPDATE THESE VALUES
API_KEY = "YOUR_API_KEY" # Replace with your actual API key
CSV_FILE = "domains.csv" # Your CSV file with domains
CSV_COLUMN_NAME = "domain" # Column name for domains in your CSV (e.g., "domain", "Website", "url")
API_BASE = "https://app.promptfuel.io/api/v1" # Production API endpoint
class ConcurrentScraper:
def __init__(self, api_key: str, base_url: str):
self.api_key = api_key
self.base_url = base_url
self.headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
self.results = []
self.domains = []
self.active_requests: Set[asyncio.Task] = set()
self.concurrent_limit = 100 # Will be updated from API
def load_domains_from_csv(self, csv_file: str, column_name: str = None):
"""Load domains from CSV file"""
try:
with open(csv_file, 'r') as f:
reader = csv.DictReader(f)
domains = []
for row in reader:
# Use provided column name or check common names
if column_name:
domain = row.get(column_name)
else:
domain = row.get('domain') or row.get('Website') or row.get('url')
if domain:
domain = domain.strip()
# Clean up the domain
domain = domain.replace('http://', '').replace('https://', '')
domain = domain.replace('www.', '')
domain = domain.split('/')[0]
if domain:
domains.append(domain)
self.domains = domains if domains else []
print(f"ā
Loaded {len(self.domains)} domains from {csv_file}")
except FileNotFoundError:
print(f"ā CSV file not found: {csv_file}")
self.domains = []
except Exception as e:
print(f"ā Error loading CSV: {e}")
self.domains = []
async def check_status(self, session: aiohttp.ClientSession) -> Dict[str, Any]:
"""Check current API status and concurrent limits"""
async with session.get(
f"{self.base_url}/scraping/concurrent-status",
headers=self.headers
) as response:
return await response.json()
async def scrape_domain(self, session: aiohttp.ClientSession,
domain: str, request_id: int) -> Dict[str, Any]:
"""Scrape a single domain and track timing"""
start_time = time.time()
print(f" š [{request_id:03d}] Starting: {domain}")
try:
async with session.post(
f"{self.base_url}/scrape",
headers=self.headers,
json={
"url": domain,
"timeout": 180,
"cache_bypass": True # Set to False to use cache
},
timeout=aiohttp.ClientTimeout(total=180)
) as response:
duration = time.time() - start_time
data = await response.json()
result = {
"request_id": request_id,
"domain": domain,
"duration": duration,
"status_code": response.status,
"success": data.get("success", False),
"error": data.get("error"),
"credits_charged": data.get("credits_charged", 0),
"timestamp": datetime.now().isoformat(),
"response_data": data # Save full response
}
# Print result
if result["success"]:
print(f" ā
[{request_id:03d}] Success: {domain} "
f"({duration:.2f}s, {result['credits_charged']} credits)")
else:
error_msg = result["error"]
if isinstance(error_msg, dict):
error_msg = error_msg.get("message", str(error_msg))
print(f" ā [{request_id:03d}] Failed: {domain} "
f"({duration:.2f}s) - {error_msg}")
return result
except asyncio.TimeoutError:
duration = time.time() - start_time
print(f" ā° [{request_id:03d}] Timeout: {domain} after {duration:.2f}s")
return {
"request_id": request_id,
"domain": domain,
"duration": duration,
"status_code": 0,
"success": False,
"error": "Timeout",
"credits_charged": 0,
"timestamp": datetime.now().isoformat()
}
except Exception as e:
duration = time.time() - start_time
print(f" š„ [{request_id:03d}] Error: {domain} - {str(e)}")
return {
"request_id": request_id,
"domain": domain,
"duration": duration,
"status_code": 0,
"success": False,
"error": str(e),
"credits_charged": 0,
"timestamp": datetime.now().isoformat()
}
async def run_concurrent_scraping(self):
"""Run concurrent scraping respecting API limits"""
print("=" * 60)
print("š CONCURRENT SCRAPING STARTED")
print("=" * 60)
print(f"\nš Total domains to scrape: {len(self.domains)}")
async with aiohttp.ClientSession() as session:
# Check initial status and get concurrent limit
print("\nš Checking API Status:")
status = await self.check_status(session)
print(f" Plan: {status.get('plan_name', 'Unknown')}")
print(f" Concurrent limit: {status.get('concurrent_limit', 'Unknown')}")
print(f" Active jobs: {status.get('active_jobs', 0)}")
print(f" Queued jobs: {status.get('queued_jobs', 0)}")
self.concurrent_limit = status.get('concurrent_limit', 100)
print(f"\nšÆ Strategy:")
print(f" ⢠Send up to {self.concurrent_limit} requests simultaneously")
print(f" ⢠Wait for completions before sending more")
print(f" ⢠Maintain exactly {self.concurrent_limit} active requests")
print()
test_start = time.time()
domain_index = 0
request_counter = 0
# Keep track of active tasks
active_tasks: Set[asyncio.Task] = set()
# Process all domains while respecting concurrent limits
while domain_index < len(self.domains) or active_tasks:
# Fill up to concurrent limit
while (len(active_tasks) < self.concurrent_limit and
domain_index < len(self.domains)):
domain = self.domains[domain_index]
request_counter += 1
# Create and start task
task = asyncio.create_task(
self.scrape_domain(session, domain, request_counter)
)
active_tasks.add(task)
print(f"š¤ Sent request {request_counter}: {domain} "
f"(Active: {len(active_tasks)}/{self.concurrent_limit})")
domain_index += 1
# Wait for at least one task to complete
if active_tasks:
done, pending = await asyncio.wait(
active_tasks,
return_when=asyncio.FIRST_COMPLETED
)
# Collect results from completed tasks
for task in done:
result = await task
self.results.append(result)
active_tasks.remove(task)
print(f"š„ Completed request {result['request_id']}: "
f"{result['domain']} "
f"(Active: {len(active_tasks)}/{self.concurrent_limit})")
# Show progress every 10 completions
if len(self.results) % 10 == 0 and self.results:
completed = len(self.results)
success_rate = sum(1 for r in self.results if r["success"]) / completed * 100
print(f"š Progress: {completed}/{len(self.domains)} completed "
f"({success_rate:.1f}% success)")
test_duration = time.time() - test_start
# Final status check
print("\nš Final Status Check:")
status = await self.check_status(session)
print(f" Active jobs: {status.get('active_jobs', 0)}")
print(f" Queued jobs: {status.get('queued_jobs', 0)}")
# Overall statistics
print("\n" + "=" * 60)
print("š SCRAPING RESULTS")
print("=" * 60)
total_requests = len(self.results)
successful = sum(1 for r in self.results if r["success"])
failed = total_requests - successful
print(f"Total requests: {total_requests}")
print(f"Successful: {successful} ({successful/total_requests*100:.1f}%)")
print(f"Failed: {failed} ({failed/total_requests*100:.1f}%)")
print(f"Total duration: {test_duration:.2f}s")
print(f"Throughput: {total_requests/test_duration:.2f} requests/sec")
print(f"Concurrent limit respected: {self.concurrent_limit}")
if successful > 0:
avg_success_time = sum(r["duration"] for r in self.results if r["success"]) / successful
print(f"Avg success time: {avg_success_time:.2f}s")
total_credits = sum(r["credits_charged"] for r in self.results)
print(f"Total credits used: {total_credits}")
# Error analysis
if failed > 0:
print(f"\nā Error Analysis:")
errors = {}
for r in self.results:
if not r["success"]:
error = r["error"]
if isinstance(error, dict):
error = error.get("code", error.get("message", "Unknown"))
errors[error] = errors.get(error, 0) + 1
for error, count in sorted(errors.items(), key=lambda x: x[1], reverse=True):
print(f" {error}: {count}")
return self.results
async def simple_scrape(api_key: str, url: str = "example.com"):
"""Simple single domain scrape"""
headers = {
"X-API-Key": api_key,
"Content-Type": "application/json"
}
data = {
"url": url,
"timeout": 180,
"cache_bypass": True
}
print(f"š Scraping: {url}")
async with aiohttp.ClientSession() as session:
try:
async with session.post(
f"https://app.promptfuel.io/api/v1/scrape",
headers=headers,
json=data
) as response:
result = await response.json()
print(json.dumps(result, indent=2))
return result
except Exception as e:
print(f"ā Error: {e}")
return None
async def main():
"""Main entry point"""
import sys
# For a single domain test
if len(sys.argv) > 1 and sys.argv[1] == "simple":
url = sys.argv[2] if len(sys.argv) > 2 else "example.com"
await simple_scrape(API_KEY, url)
return
# For concurrent scraping from CSV
scraper = ConcurrentScraper(API_KEY, API_BASE)
scraper.load_domains_from_csv(CSV_FILE, CSV_COLUMN_NAME)
if not scraper.domains:
print(f"ā No domains loaded. Create a CSV file with a '{CSV_COLUMN_NAME}' column:")
print(f"\nExample {CSV_FILE}:")
print(CSV_COLUMN_NAME)
print("example.com")
print("google.com")
print("github.com")
return
# Run concurrent scraping
results = await scraper.run_concurrent_scraping()
# Save results to file
output_file = "scraping_results.json"
with open(output_file, "w") as f:
json.dump(results, f, indent=2)
print(f"\nā
Results saved to {output_file}")
if __name__ == "__main__":
asyncio.run(main())
CSV File Format
domain
example.com
google.com
github.com
stackoverflow.com
Running the Script
# Single domain test
python concurrent_scraper.py simple example.com
# Bulk concurrent scraping from CSV
python concurrent_scraper.py
JavaScript Integration
Use modern JavaScript with async/await to integrate the Prompt Fuel API in browser or Node.js environments. If you call the API directly from a browser, avoid shipping a production API key in client-side code; route requests through your own backend where possible.
Browser JavaScript (Fetch API)
async function scrapeDomain() {
const url = 'https://app.promptfuel.io/api/v1/scrape';
try {
const response = await fetch(url, {
method: 'POST',
headers: {
'X-API-Key': 'YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
url: 'example.com',
timeout: 180
})
});
const data = await response.json();
console.log(JSON.stringify(data, null, 2));
} catch (error) {
console.error('Error:', error);
}
}
scrapeDomain();
Node.js Integration
Build robust server-side applications with Node.js and the Prompt Fuel API.
Installation
Node.js 18+ ships fetch as a global, so no dependency is needed there. On older Node versions, install node-fetch v2 (v3 is ESM-only and cannot be loaded with require):
npm install node-fetch@2
Complete Example
const fetch = require('node-fetch');
async function scrapeDomain() {
const url = 'https://app.promptfuel.io/api/v1/scrape';
try {
const response = await fetch(url, {
method: 'POST',
headers: {
'X-API-Key': 'YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
url: 'example.com',
timeout: 180
})
});
const data = await response.json();
console.log(JSON.stringify(data, null, 2));
} catch (error) {
console.error('Error:', error);
}
}
scrapeDomain();
Error Codes
The error conditions reported by the Prompt Fuel API, grouped by error-code series, to help you debug and handle failures.
Domain and Access Errors (DOM series)
- Domain forbidden due to access restrictions
- Domain requires proxy due to anti-bot protection
- Domain blocked automated access
- Domain doesn't exist or DNS error
- Domain took too long to respond
Request Errors (REQ series)
- Domain not in a valid format
- Required parameters missing from the request
- Parameters have invalid values
- Request payload exceeds limits
- Content type not supported
Browser and Rendering Errors (BRW series)
- Unable to initialize browser
- Page took too long to load
- JavaScript execution error
- All browser instances in use
- Navigation error
Proxy Errors (PRX series)
- Unable to connect through proxy
- Proxy authentication error
- Proxy connection timed out
- No proxy servers available
- Target site blocks proxy traffic
Content Extraction Errors (EXT series)
- Page loaded but no content found
- Unable to parse page content
- Content format invalid
- Content exceeds size limits
- Content extraction timed out
System Errors (SYS series)
- Database connection error
- Redis connection error
- Unexpected server error
- Server resources exhausted
- System configuration error
Rate Limiting and Quota Errors (LIM series)
- Request rate limit exceeded
- Too many concurrent requests
- Daily usage quota exceeded
- IP address blocked
Authentication Errors (AUTH series)
- API key not provided
- API key invalid
- API key expired
- Insufficient API permissions
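The series prefix (DOM, REQ, BRW, PRX, EXT, SYS, LIM, AUTH) is often enough to decide whether a retry is worthwhile. A sketch of prefix-based handling in Python (the retry classification is an assumption to tune for your workload, not an API guarantee):
# Series that are usually transient and worth retrying; the rest
# (DOM, REQ, EXT, AUTH) generally won't succeed on a second attempt.
RETRYABLE_PREFIXES = ("BRW", "PRX", "SYS", "LIM")  # browser, proxy, system, rate limits

def should_retry(error_code):
    """Retry only error series that are usually transient."""
    return bool(error_code) and error_code.startswith(RETRYABLE_PREFIXES)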
cURL Integration
Use cURL for quick testing and command-line integration with the Prompt Fuel API.
Basic Request
curl -X POST "https://app.promptfuel.io/api/v1/scrape" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "example.com",
"timeout": 180
}'
With Cache Bypass
curl -X POST "https://app.promptfuel.io/api/v1/scrape" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "example.com",
"timeout": 180,
"cache_bypass": true
}'
Raw HTML Request
curl -X POST "https://app.promptfuel.io/api/v1/scrape" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "example.com",
"timeout": 180,
"raw": true
}'
Go Integration
Build high-performance applications with Go and the Prompt Fuel API.
Complete Example
package main
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"time"
)
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 200*time.Second)
defer cancel()
payload := map[string]interface{}{
"url": "example.com",
"timeout": 180,
}
jsonData, err := json.Marshal(payload)
if err != nil {
panic(err)
}
req, err := http.NewRequestWithContext(ctx, "POST",
"https://app.promptfuel.io/api/v1/scrape",
bytes.NewBuffer(jsonData))
if err != nil {
panic(err)
}
req.Header.Set("X-API-Key", "YOUR_API_KEY")
req.Header.Set("Content-Type", "application/json")
client := &http.Client{
Timeout: 200 * time.Second,
}
resp, err := client.Do(req)
if err != nil {
panic(err)
}
defer resp.Body.Close()
	var result map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
output, _ := json.MarshalIndent(result, "", " ")
fmt.Println(string(output))
}
Rust Integration
Build fast, safe applications with Rust and the Prompt Fuel API.
Installation
[dependencies]
tokio = { version = "1.0", features = ["full"] }
reqwest = { version = "0.11", features = ["json"] }
serde_json = "1.0"
Complete Example
use serde_json::{json, Value};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    scrape_domain().await?;
    Ok(())
}

async fn scrape_domain() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(200))
        .build()?;
    // Send "timeout" as a number, matching the JSON types the API expects
    let payload = json!({
        "url": "example.com",
        "timeout": 180
    });
    let response = client
        .post("https://app.promptfuel.io/api/v1/scrape")
        .header("X-API-Key", "YOUR_API_KEY")
        .header("Content-Type", "application/json")
        .json(&payload)
        .send()
        .await?;
    let result: Value = response.json().await?;
    println!("{}", serde_json::to_string_pretty(&result)?);
    Ok(())
}
PHP Integration
Integrate Prompt Fuel API into your PHP applications with cURL or Guzzle.
Async with Guzzle
<?php
require 'vendor/autoload.php'; // composer require guzzlehttp/guzzle

use GuzzleHttp\Client;

function scrapeDomainAsync() {
    $client = new Client();
    $promise = $client->postAsync('https://app.promptfuel.io/api/v1/scrape', [
        'headers' => [
            'X-API-Key' => 'YOUR_API_KEY',
            'Content-Type' => 'application/json'
        ],
        'json' => [
            'url' => 'example.com',
            'timeout' => 180
        ],
        'timeout' => 200 // 200 second timeout
    ]);
    $promise->then(function ($response) {
        echo $response->getBody();
    })->wait();
}

scrapeDomainAsync();
?>
Ruby Integration
Use Ruby with Net::HTTP or HTTParty for elegant API integration.
Basic Example
require 'net/http'
require 'json'
require 'uri'
class PromptFuelScraper
def initialize(api_key)
@api_key = api_key
@base_url = 'https://app.promptfuel.io'
end
  def scrape_website(domain)
    uri = URI("#{@base_url}/api/v1/scrape")
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.read_timeout = 200
    request = Net::HTTP::Post.new(uri)
    request['X-API-Key'] = @api_key
    request['Content-Type'] = 'application/json'
    request.body = JSON.generate({ url: domain, timeout: 180 })
response = http.request(request)
case response.code.to_i
when 200
JSON.parse(response.body)
else
raise "HTTP Error #{response.code}: #{response.body}"
end
rescue StandardError => e
puts "Error: #{e.message}"
nil
end
end
# Example usage
scraper = PromptFuelScraper.new('YOUR_API_KEY')
result = scraper.scrape_website('example.com')
if result
puts "Title: #{result['title']}"
puts "Emails: #{result['emails'].join(', ')}"
else
puts "Scraping failed"
end
Java Integration
Build enterprise applications with Java and the Prompt Fuel API.
Async Example
import java.net.http.*;
import java.net.URI;
import java.util.concurrent.CompletableFuture;
import java.time.Duration;
public class AsyncScraper {
public static void main(String[] args) {
HttpClient client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(200))
.build();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("https://app.promptfuel.io/api/v1/scrape"))
.timeout(Duration.ofSeconds(200))
.header("X-API-Key", "YOUR_API_KEY")
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(
"{\"url\": \"example.com\", \"timeout\": 180}"
))
.build();
        CompletableFuture<HttpResponse<String>> response =
client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
response.thenApply(HttpResponse::body)
.thenAccept(System.out::println)
.join();
}
}
C# Integration
Build .NET applications with C# and the Prompt Fuel API.
Async Example
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
public class AsyncScraper
{
private static readonly HttpClient client = new HttpClient()
{
Timeout = TimeSpan.FromSeconds(200)
};
public static async Task Main(string[] args)
{
await ScrapeDomainAsync();
}
public static async Task ScrapeDomainAsync()
{
try
{
var json = "{\"url\": \"example.com\", \"timeout\": 180}";
var content = new StringContent(json, Encoding.UTF8, "application/json");
client.DefaultRequestHeaders.Add("X-API-Key", "YOUR_API_KEY");
var response = await client.PostAsync(
"https://app.promptfuel.io/api/v1/scrape",
content
);
string result = await response.Content.ReadAsStringAsync();
Console.WriteLine(result);
}
catch (Exception ex)
{
Console.WriteLine($"Error: {ex.Message}");
}
}
}
Swift Integration
Build iOS and macOS apps with Swift and the Prompt Fuel API.
Basic Example
import Foundation
struct ScrapeResult: Codable {
    let success: Bool
    let url: String
    let title: String?
    let emails: [String]
}
class PromptFuelScraper {
private let apiKey: String
private let session: URLSession
init(apiKey: String) {
self.apiKey = apiKey
let config = URLSessionConfiguration.default
        config.timeoutIntervalForRequest = 200.0  // above the 180s scrape timeout
self.session = URLSession(configuration: config)
}
    func scrapeWebsite(domain: String) async throws -> ScrapeResult {
        guard let url = URL(string: "https://app.promptfuel.io/api/v1/scrape") else {
            throw URLError(.badURL)
        }
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue(apiKey, forHTTPHeaderField: "X-API-Key")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        let body: [String: Any] = ["url": domain, "timeout": 180]
        request.httpBody = try JSONSerialization.data(withJSONObject: body)
let (data, response) = try await session.data(for: request)
guard let httpResponse = response as? HTTPURLResponse,
httpResponse.statusCode == 200 else {
throw URLError(.badServerResponse)
}
let decoder = JSONDecoder()
return try decoder.decode(ScrapeResult.self, from: data)
}
}
// Example usage
Task {
let scraper = PromptFuelScraper(apiKey: "YOUR_API_KEY")
do {
let result = try await scraper.scrapeWebsite(domain: "example.com")
print("Title: \(result.title)")
print("Emails: \(result.emails.joined(separator: ", "))")
} catch {
print("Error: \(error)")
}
}
Kotlin Integration
Build Android and server-side applications with Kotlin and the Prompt Fuel API.
Basic Example
import kotlinx.coroutines.*
import kotlinx.serialization.*
import kotlinx.serialization.json.*
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.net.URI
import java.time.Duration
@Serializable
data class ScrapeResult(
    val success: Boolean,
    val url: String,
    val title: String? = null,
    val emails: List<String> = emptyList()
)
class PromptFuelScraper(private val apiKey: String) {
private val httpClient = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(10))
.build()
private val json = Json { ignoreUnknownKeys = true }
    suspend fun scrapeWebsite(domain: String): ScrapeResult = withContext(Dispatchers.IO) {
        val body = """{"url": "$domain", "timeout": 180}"""
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://app.promptfuel.io/api/v1/scrape"))
            .timeout(Duration.ofSeconds(200))
            .header("X-API-Key", apiKey)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build()
        val response = httpClient.send(request, HttpResponse.BodyHandlers.ofString())
        if (response.statusCode() == 200) {
            json.decodeFromString<ScrapeResult>(response.body())
} else {
throw Exception("HTTP Error ${response.statusCode()}: ${response.body()}")
}
}
}
// Example usage
fun main() = runBlocking {
val scraper = PromptFuelScraper("YOUR_API_KEY")
try {
val result = scraper.scrapeWebsite("example.com")
println("Title: ${result.title}")
println("Emails: ${result.emails.joinToString(", ")}")
} catch (e: Exception) {
println("Error: ${e.message}")
}
}