# AI Providers

Configure AI-powered bucket name generation with OpenAI, Ollama, Anthropic, or Gemini.
## Overview
s3finder uses LLMs to generate intelligent bucket name variations. The AI analyzes CT log data to discover organizational naming patterns and generates bucket names following those conventions.
### Context-Aware Generation
When using CT log discovery (-d), the AI receives discovered infrastructure words as context. It analyzes these patterns to generate bucket names that match the organization's naming conventions.
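Combining the two features described above might look like the following (flags are the ones documented on this page; `acme-corp` is a placeholder seed):

```bash
# Discover infrastructure words from CT logs (-d), then pass them
# to the AI generator (--ai) as naming-pattern context.
s3finder -s acme-corp -d --ai
```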
## Default Models

Each provider has a default model optimized for bucket name generation:

| Provider | Default Model | Environment Variable |
|---|---|---|
| OpenAI | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | claude-3-5-haiku-20241022 | ANTHROPIC_API_KEY |
| Gemini | gemini-3-flash-preview | GEMINI_API_KEY |
| Ollama | llama3.2 | N/A (local) |
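Before enabling a cloud provider, you can check which of the environment variables above are already set. This POSIX-shell helper is purely illustrative and not part of s3finder:

```bash
# Report which provider API keys are present in the environment.
# Variable names come from the table above; the function name is made up.
check_provider_keys() {
  for var in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY; do
    if [ -n "$(printenv "$var")" ]; then
      echo "$var: set"
    else
      echo "$var: missing"
    fi
  done
}

check_provider_keys
```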
## Supported Providers

- **OpenAI** (Recommended): Cloud-based, best quality. Requires an API key with per-request costs.
- **Ollama** (Free): Local, free, and private. Requires local GPU/resources.
- **Anthropic** (Alternative): Cloud-based Claude models. A high-quality alternative.
- **Gemini** (New): Google's Gemini models. Fast and cost-effective.

## OpenAI Configuration

### Setup

```bash
# Set API key via environment variable
export OPENAI_API_KEY=sk-xxxxx

# Or pass directly (not recommended)
s3finder -s target --ai --ai-key sk-xxxxx
```
### Usage
```bash
# Use default model
s3finder -s acme-corp --ai

# Use a specific model
s3finder -s acme-corp --ai --ai-model <model-name>

# Generate more names
s3finder -s acme-corp --ai --ai-count 100
```
### Model Selection

Choose a model based on your needs:

| Type | Speed | Quality | Cost |
|---|---|---|---|
| Mini/Small models | Fast | Good | Low |
| Standard models | Medium | Excellent | Medium |
| Large/Turbo models | Slow | Excellent | High |
Check OpenAI's documentation for the latest available models.
## Ollama Configuration

### Prerequisites
```bash
# Install Ollama (macOS)
brew install ollama

# Start Ollama server
ollama serve

# Download a model
ollama pull <model-name>
```
### Usage
```bash
# Use local Ollama
s3finder -s acme-corp --ai --ai-provider ollama

# Custom model
s3finder -s acme-corp --ai --ai-provider ollama --ai-model <model-name>

# Custom Ollama URL
s3finder -s acme-corp --ai --ai-provider ollama --ai-url http://192.168.1.100:11434
```
### Model Selection

Choose a model based on your hardware:

| Size | Quality | RAM Required |
|---|---|---|
| 7-8B parameters | Good | 8GB |
| 13-14B parameters | Better | 16GB |
| 70B+ parameters | Excellent | 48GB+ |
Check Ollama's model library for available models.
## Anthropic Configuration

### Setup
```bash
# Set API key via environment variable
export ANTHROPIC_API_KEY=sk-ant-xxxxx

# Use Anthropic
s3finder -s acme-corp --ai --ai-provider anthropic

# Use a specific model
s3finder -s acme-corp --ai --ai-provider anthropic --ai-model <model-name>
```
### Model Selection

Choose a model based on your needs:

| Type | Speed | Quality | Cost |
|---|---|---|---|
| Haiku (fast) | Fast | Good | Low |
| Sonnet (balanced) | Medium | Excellent | Medium |
| Opus (powerful) | Slow | Best | High |
Check Anthropic's documentation for the latest available models.
## Gemini Configuration

### Setup
```bash
# Set API key via environment variable
export GEMINI_API_KEY=xxxxx

# Use Gemini
s3finder -s acme-corp --ai --ai-provider gemini

# Use a specific model
s3finder -s acme-corp --ai --ai-provider gemini --ai-model <model-name>
```
### Model Selection

Choose a model based on your needs:

| Type | Speed | Quality | Cost |
|---|---|---|---|
| Flash (fast) | Fast | Good | Low |
| Pro (balanced) | Medium | Excellent | Medium |
Check Google's documentation for the latest available models.
## Custom Base URL (Proxy Support)
All providers support custom base URLs for proxies or self-hosted endpoints:
```bash
# OpenAI with proxy
s3finder -s acme-corp --ai --ai-provider openai --ai-url https://your-proxy.com/v1

# Anthropic with custom endpoint
s3finder -s acme-corp --ai --ai-provider anthropic --ai-url https://your-proxy.com

# Gemini with proxy
s3finder -s acme-corp --ai --ai-provider gemini --ai-url https://your-proxy.com

# Ollama on remote server
s3finder -s acme-corp --ai --ai-provider ollama --ai-url http://192.168.1.100:11434
```
Use `--ai-url` when you need to route requests through a corporate proxy, use a self-hosted LLM endpoint, or connect to a remote Ollama instance.

## How AI Generation Works
The AI generates bucket names like a "lazy sysadmin" would create them:
- Predictable patterns (backup, logs, assets)
- Year suffixes (2023, 2024, 2025)
- Environment prefixes (dev, prod, staging)
- Region suffixes (us-east-1, eu-west-1)
- Abbreviations and shortcuts
### Example AI Output
For seed "acme-corp", the AI might generate:
```
acme-corp-backup-2024
acme-corp-internal-assets
dev-acme-corp-logs
acme-corp-db-exports
acme-corp-config-backup
staging-acme-corp-uploads
acme-corp-customer-data
acme-corp-analytics-reports
```
## Best Practices
- Start with the permutation engine alone
- Add AI if you need more coverage
- Use Ollama for privacy-sensitive targets
- Small/mini models offer the best cost/quality ratio
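Following those guidelines, a two-pass workflow might look like this sketch (flags are taken from this page; the seed name is a placeholder):

```bash
# Pass 1: permutation engine only, no AI flags.
s3finder -s acme-corp

# Pass 2: add AI-generated names for extra coverage,
# using local Ollama to keep target data private.
s3finder -s acme-corp --ai --ai-provider ollama --ai-count 100
```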