Configuration

The pg_ai_query extension is configured through a single configuration file that lets you customize AI providers, API keys, logging, and query behavior.

Configuration File Location

The configuration file should be placed at:

~/.pg_ai.config

This file will be automatically detected and loaded when the extension is first used.

Configuration File Format

The configuration uses an INI-style format with sections and key-value pairs:

# PG AI Query Configuration File
[section_name]
key = value

Complete Configuration Example

Here’s a complete example configuration file with all available options:

# PG AI Query Configuration File
# Place this file at ~/.pg_ai.config

[general]
# Logging level: DEBUG, INFO, WARNING, ERROR
log_level = "INFO"

# Enable or disable all logging output (true/false)
enable_logging = false

# Request timeout in milliseconds
request_timeout_ms = 30000

# Maximum number of retries for failed requests
max_retries = 3

[query]
# Always enforce LIMIT clause on SELECT queries
enforce_limit = true

# Default LIMIT value when not specified by user
default_limit = 1000

[response]
# Show detailed explanation of what the query does
show_explanation = true

# Show warnings about performance, security, or data implications
show_warnings = true

# Show suggested visualization type for query results
show_suggested_visualization = true

# Use formatted response (JSON format) instead of plain SQL
# When enabled, returns structured JSON with query, explanation, warnings, etc.
# When disabled, returns plain SQL with optional comments
use_formatted_response = false

[openai]
# Your OpenAI API key
api_key = "sk-your-openai-api-key-here"

# Default model to use (options: gpt-4o, gpt-4, gpt-3.5-turbo)
default_model = "gpt-4o"

# Custom API endpoint (optional) - for OpenAI-compatible APIs
# api_endpoint = "https://api.openai.com"

[anthropic]
# Your Anthropic API key (if using Claude)
api_key = "sk-ant-your-anthropic-api-key-here"

# Default model to use (options: claude-sonnet-4-5-20250929)
default_model = "claude-sonnet-4-5-20250929"

# Custom API endpoint (optional) - for Anthropic-compatible APIs
# api_endpoint = "https://api.anthropic.com"

[gemini]
# Your Google API key (if using Gemini)
api_key = "AIzaSy-your-google-api-key-here"

# Default model to use (options: gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash)
default_model = "gemini-2.5-flash"

Configuration Sections

[general] Section

Controls general behavior of the extension.

Option | Type | Default | Description
log_level | string | "INFO" | Minimum level for log messages: DEBUG, INFO, WARNING, ERROR
enable_logging | boolean | false | Enable/disable all logging output
request_timeout_ms | integer | 30000 | Timeout for AI API requests in milliseconds
max_retries | integer | 3 | Maximum retry attempts for failed API requests

[query] Section

Controls query generation behavior.

Option | Type | Default | Description
enforce_limit | boolean | true | Always add LIMIT clause to SELECT queries
default_limit | integer | 1000 | Default row limit when none specified
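
To illustrate how these two options interact (a hypothetical example; the exact SQL depends on your schema and the model), a request that does not mention a row limit would be expected to come back with the default LIMIT appended:

-- With enforce_limit = true and default_limit = 1000
SELECT generate_query('show all customers');
-- Expected shape of the generated SQL:
--   SELECT * FROM customers LIMIT 1000;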

[response] Section

Controls how query results are formatted and what additional information is included.

Option | Type | Default | Description
show_explanation | boolean | true | Include detailed explanation of what the query does
show_warnings | boolean | true | Include warnings about performance, security, or data implications
show_suggested_visualization | boolean | false | Include suggested visualization type for the query results
use_formatted_response | boolean | false | Return structured JSON instead of plain SQL
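
The use_formatted_response option changes the shape of what generate_query returns, not how it is called. A sketch of the difference, based on the descriptions above:

-- use_formatted_response = false (default): plain SQL, optionally with comments
SELECT generate_query('count orders per month');
-- use_formatted_response = true: the same call instead returns structured JSON
-- bundling the query with the explanation, warnings, and suggested visualization
-- controlled by the other [response] options.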

[openai] Section

OpenAI provider configuration.

Option | Type | Default | Description
api_key | string | "" | Your OpenAI API key from platform.openai.com
default_model | string | "gpt-4o" | Default OpenAI model to use
api_endpoint | string | "https://api.openai.com" | Custom API endpoint for OpenAI-compatible APIs

Available OpenAI Models: You can use any valid OpenAI model name. Common options include:

  • gpt-4o - Latest GPT-4 Omni model (recommended)
  • gpt-4 - High-quality GPT-4 model
  • gpt-3.5-turbo - Fast and efficient model
  • o1-preview - New reasoning model

[anthropic] Section

Anthropic (Claude) provider configuration.

Option | Type | Default | Description
api_key | string | "" | Your Anthropic API key from console.anthropic.com
default_model | string | "claude-sonnet-4-5-20250929" | Default Claude model to use
api_endpoint | string | "https://api.anthropic.com" | Custom API endpoint for Anthropic-compatible APIs

Available Anthropic Models: You can use any valid Anthropic model name. Common options include:

  • claude-sonnet-4-5-20250929 - Claude Sonnet 4.5, the latest Sonnet model
  • claude-3-opus-20240229 - Most powerful Claude 3 model
  • claude-3-sonnet-20240229 - Balanced Claude 3 model
  • claude-3-haiku-20240307 - Fastest and most compact model

[gemini] Section

Google Gemini provider configuration.

Option | Type | Default | Description
api_key | string | "" | Your Google API key from aistudio.google.com
default_model | string | "gemini-2.5-flash" | Default Gemini model to use
max_tokens | integer | 8192 | Maximum tokens in response
temperature | float | 0.7 | Model temperature (0.0-1.0)

Available Google Gemini Models: You can use any valid Gemini model name. Common options include:

  • gemini-2.5-pro - Latest Gemini Pro model (best quality)
  • gemini-2.5-flash - Fast and cost-effective (recommended)
  • gemini-2.0-flash - Previous generation flash model

Setting Up API Keys

Getting an OpenAI API Key

  1. Visit platform.openai.com
  2. Create an account or sign in
  3. Navigate to API Keys section
  4. Create a new API key
  5. Copy the key and add it to your config file

Getting an Anthropic API Key

  1. Visit console.anthropic.com
  2. Create an account or sign in
  3. Navigate to API Keys section
  4. Create a new API key
  5. Copy the key and add it to your config file

Getting a Google Gemini API Key

  1. Visit aistudio.google.com
  2. Sign in with your Google account
  3. Click on “Get API key” in the left sidebar
  4. Create a new API key or use an existing one
  5. Copy the key and add it to your config file

Provider Selection Priority

The extension automatically selects an AI provider based on the following priority:

  1. Explicit provider parameter in the function call (see the example below)
  2. First configured provider with a valid API key in config file
  3. Error if no providers are configured
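
For example, to bypass the config-file lookup and force a specific provider, pass it explicitly using the three-argument form of generate_query (the prompt and key below are placeholders):

-- Explicit provider selection takes precedence over the config file
SELECT generate_query('show tables', 'sk-your-openai-api-key-here', 'openai');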

Configuration Validation

The extension validates the configuration when it is loaded:

  • API Key Format: Checks that API keys follow expected format
  • Model Availability: Accepts any valid model name string (verify availability with provider)
  • Numeric Values: Ensures timeouts and limits are positive integers
  • Boolean Values: Validates true/false values

Security Considerations

API Key Security

  • Never commit API keys to version control
  • Use appropriate file permissions: chmod 600 ~/.pg_ai.config
  • Rotate keys regularly as per your organization’s security policy
  • Monitor usage through your AI provider’s dashboard

Network Security

  • API requests are made over HTTPS
  • Consider firewall rules for outbound connections to AI providers
  • Monitor network traffic for unexpected API usage

Database Security

  • The extension only accesses user tables (not system catalogs)
  • Generated queries respect PostgreSQL’s permission system
  • Consider using dedicated database users with limited privileges (see the sketch below)
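
A minimal sketch of such a role using standard PostgreSQL grants (the role name and schema are examples; adapt them to your environment):

-- Hypothetical read-only role for running AI-generated queries
CREATE ROLE ai_query_readonly LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA public TO ai_query_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_query_readonly;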

Debugging Configuration

Enable Logging

To troubleshoot configuration issues, enable logging:

[general]
enable_logging = true
log_level = "DEBUG"

Check Configuration Loading

You can verify configuration loading by observing log messages when first calling the extension:

-- This will log configuration loading details
SELECT generate_query('test query');

Validate API Keys

Test your API configuration:

-- Test with explicit API key
SELECT generate_query('show tables', 'your-api-key-here', 'openai');

Updating Configuration

Configuration is loaded on first use. To reload after making changes:

  1. Disconnect from PostgreSQL
  2. Reconnect to your database
  3. Call any extension function to trigger reload

Or restart your PostgreSQL session.
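
After reconnecting, the first extension call of the new session re-reads ~/.pg_ai.config, for example:

-- Triggers a fresh load of the configuration file
SELECT generate_query('test query');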

Next Steps

Once configured:

  1. Follow the Quick Start Guide to test your setup
  2. Explore Usage Examples
  3. Learn about AI Providers to optimize your model selection