# Configuration

Learn how to configure AI Developer Assistant for your needs.

AI Developer Assistant can be configured through three methods: environment variables, configuration files, and command-line options. This guide covers all configuration options and best practices.
## Configuration Methods

### 1. Environment Variables (Recommended)

Set environment variables for persistent configuration:
```bash
# Add to your shell profile (.bashrc, .zshrc, etc.)
export LLM_PROVIDER="gemini"
export GEMINI_API_KEY="your-gemini-api-key"
export LLM_MODEL="gemini-2.0-flash"
export LLM_TEMPERATURE="0.7"
export LLM_MAX_TOKENS="2000"
export OUTPUT_FORMAT="console"
export OUTPUT_VERBOSE="true"
export OUTPUT_COLORIZE="true"
```
### 2. Configuration Files

Create configuration files for project-specific settings:

```bash
# Copy the default configuration
cp ai-dev.config.yaml ai-dev.config.local.yaml

# Edit it with your settings
nano ai-dev.config.local.yaml
```
### 3. Command-Line Options

Override configuration for specific commands:

```bash
ai-dev review --llm-provider openai --openai-api-key your-key --verbose
```
## Configuration Priority

Configuration is applied in this order (later sources override earlier ones):

1. Default values (built-in)
2. Configuration files (`ai-dev.config.local.yaml`)
3. Environment variables
4. Command-line options
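The precedence above amounts to a layered dictionary merge in which later layers win key-by-key. A minimal sketch in Python (illustrative only, not the tool's actual loader):

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; values in `override` win."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# One layer per configuration source, lowest priority first.
defaults    = {"llm": {"provider": "openai", "temperature": 0.7, "maxTokens": 2000}}
config_file = {"llm": {"provider": "gemini"}}
environment = {"llm": {"temperature": 0.2}}
cli_options = {"llm": {"model": "gemini-2.0-flash"}}

effective = {}
for layer in (defaults, config_file, environment, cli_options):
    effective = merge(effective, layer)
# effective["llm"] now has provider "gemini" (file), temperature 0.2 (env),
# maxTokens 2000 (default), and model "gemini-2.0-flash" (CLI)
```

Note that the merge is recursive: setting `LLM_TEMPERATURE` overrides only that key, not the whole `llm` section from the configuration file.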
## LLM Provider Configuration

### OpenAI Configuration

```yaml
llm:
  provider: "openai"
  model: "gpt-4"
  temperature: 0.7
  maxTokens: 2000

openai:
  apiKey: "${OPENAI_API_KEY}"
  baseUrl: "https://api.openai.com/v1"
  organization: "${OPENAI_ORGANIZATION}" # Optional
```
**Environment Variables:**

```bash
export OPENAI_API_KEY="sk-your-openai-key"
export OPENAI_ORGANIZATION="org-your-org-id" # Optional
export LLM_PROVIDER="openai"
export LLM_MODEL="gpt-4"
```
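Placeholders such as `"${OPENAI_API_KEY}"` in the YAML are resolved from the environment when the configuration is loaded. A rough sketch of that expansion (an assumption about the mechanism, not the tool's actual code):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment values (empty string if unset)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-example"  # stand-in value for the demo
api_key = expand_env("${OPENAI_API_KEY}")    # -> "sk-example"
```

This is also why unset variables are a common failure mode: an unexpanded or empty placeholder usually surfaces later as an authentication error.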
### Google Gemini Configuration

```yaml
llm:
  provider: "gemini"
  model: "gemini-2.0-flash"
  temperature: 0.7
  maxTokens: 2000

gemini:
  apiKey: "${GEMINI_API_KEY}"
  model: "gemini-2.0-flash"
  baseUrl: "https://generativelanguage.googleapis.com/v1beta"
```
**Environment Variables:**

```bash
export GEMINI_API_KEY="AIzaSy-your-gemini-key"
export LLM_PROVIDER="gemini"
export LLM_MODEL="gemini-2.0-flash"
```
### Ollama (Local) Configuration

```yaml
llm:
  provider: "ollama"
  model: "llama2"
  temperature: 0.7
  maxTokens: 2000

ollama:
  enabled: true
  baseUrl: "http://localhost:11434"
  model: "llama2"
```
**Environment Variables:**

```bash
export OLLAMA_ENABLED="true"
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_MODEL="llama2"
export LLM_PROVIDER="ollama"
```
## Output Configuration

### Console Output

```yaml
output:
  format: "console"
  colorize: true
  verbose: false
```
### Markdown Output

```yaml
output:
  format: "markdown"
  outputPath: "./reports/review.md"
  colorize: false
  verbose: true
```
### JSON Output

```yaml
output:
  format: "json"
  outputPath: "./reports/review.json"
  colorize: false
  verbose: false
```
## Security Configuration

### Basic Security Settings

```yaml
security:
  enabled: true
  severity: ["medium", "high", "critical"]
  categories: ["injection", "authentication", "authorization"]
  includeDependencies: true
```
### Advanced Security Settings

```yaml
security:
  enabled: true
  severity: ["low", "medium", "high", "critical"]
  categories:
    - "injection"
    - "authentication"
    - "authorization"
    - "cryptography"
    - "data_exposure"
    - "input_validation"
    - "dependency"
    - "configuration"
    - "logging"
    - "other"
  includeDependencies: true
  customRules: []
```
## Test Configuration

### Basic Test Settings

```yaml
test:
  enabled: true
  framework: "jest"
  testType: "unit"
  language: "typescript"
  coverageTarget: 80
```
### Advanced Test Settings

```yaml
test:
  enabled: true
  framework: "jest"
  testType: "unit"
  language: "typescript"
  includeSetup: true
  includeTeardown: true
  coverageTarget: 80
  customTemplates: []
```
## Documentation Configuration

### Basic Documentation Settings

```yaml
documentation:
  enabled: true
  format: "markdown"
  language: "typescript"
  includeExamples: true
```
### Advanced Documentation Settings

```yaml
documentation:
  enabled: true
  format: "markdown"
  language: "typescript"
  includeExamples: true
  includeParameters: true
  includeReturnTypes: true
  includeSeeAlso: true
  style: "technical" # formal, casual, technical, beginner
```
## Git Configuration

### Basic Git Settings

```yaml
git:
  repoPath: "."
  defaultBranch: "main"
  includeStaged: true
  includeUnstaged: true
```
### Advanced Git Settings

```yaml
git:
  repoPath: "."
  defaultBranch: "main"
  includeStaged: true
  includeUnstaged: true
  diffAlgorithm: "histogram" # myers, minimal, patience, histogram
  ignoreWhitespace: false
  maxFileSize: "1MB"
```
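`maxFileSize` takes a human-readable size string. Parsing such a value might look like this (a sketch; the accepted units here are assumptions, so run `ai-dev config validate` to see what the tool actually accepts):

```python
import re

UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size(text: str) -> int:
    """Convert a size string like '1MB' or '512 KB' into bytes."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMG]?B)", text.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"invalid size: {text!r}")
    number, unit = match.groups()
    return int(float(number) * UNITS[unit.upper()])

limit = parse_size("1MB")  # 1048576 bytes
```

Files larger than the parsed limit would simply be skipped during diff analysis rather than truncated.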
## GitHub Integration Configuration

### Basic GitHub Settings

```yaml
github:
  token: "${GITHUB_TOKEN}"
  baseUrl: "https://api.github.com"
  apiVersion: "2022-11-28"
```
### Advanced GitHub Settings

```yaml
github:
  token: "${GITHUB_TOKEN}"
  baseUrl: "https://api.github.com"
  apiVersion: "2022-11-28"
  timeout: 30000
  retries: 3
  rateLimitRetries: 3
```
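`timeout`, `retries`, and `rateLimitRetries` describe standard retry behavior for API calls. How a client might honor a retry count with exponential backoff (a generic sketch, not AI Developer Assistant's actual HTTP client):

```python
import time

def call_with_retries(request, retries: int = 3, base_delay: float = 1.0):
    """Run `request`, retrying up to `retries` extra times with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return request()
        except ConnectionError:
            if attempt == retries:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

attempts = []
def flaky():
    """Simulated endpoint that fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky, retries=3, base_delay=0)  # succeeds on the third try
```

Rate-limit retries typically work the same way but wait until the limit window resets instead of using a fixed backoff.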
## File Pattern Configuration

### Include Patterns

```yaml
filePatterns:
  - "**/*.ts"
  - "**/*.js"
  - "**/*.tsx"
  - "**/*.jsx"
  - "**/*.py"
  - "**/*.java"
  - "**/*.cpp"
  - "**/*.c"
  - "**/*.cs"
  - "**/*.php"
  - "**/*.rb"
  - "**/*.go"
  - "**/*.rs"
  - "**/*.dart"
```
### Exclude Patterns

```yaml
excludePatterns:
  - "**/node_modules/**"
  - "**/dist/**"
  - "**/build/**"
  - "**/.git/**"
  - "**/coverage/**"
  - "**/*.min.js"
  - "**/*.bundle.js"
```
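A file is processed when it matches an include pattern and no exclude pattern, with excludes taking precedence. A minimal sketch using Python's `fnmatch` (an approximation: `fnmatch`'s `*` also crosses `/`, which roughly mimics `**` for these patterns):

```python
from fnmatch import fnmatch

def is_selected(path: str, includes: list, excludes: list) -> bool:
    """Return True if `path` matches an include pattern and no exclude pattern."""
    if any(fnmatch(path, pattern) for pattern in excludes):
        return False
    return any(fnmatch(path, pattern) for pattern in includes)

includes = ["**/*.ts", "**/*.js"]
excludes = ["**/node_modules/**", "**/*.min.js"]

selected = is_selected("src/app/main.ts", includes, excludes)               # True
skipped  = is_selected("src/node_modules/lib/index.js", includes, excludes) # False
```

Checking excludes first is the cheaper order: a file under `node_modules` is rejected without ever testing the include list.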
## Complete Configuration Example

Here's a complete configuration file with all options:

```yaml
# AI Developer Assistant Configuration
llm:
  provider: "gemini"
  model: "gemini-2.0-flash"
  temperature: 0.7
  maxTokens: 2000
  timeout: 30000

# LLM Provider Configurations
openai:
  apiKey: "${OPENAI_API_KEY}"
  baseUrl: "https://api.openai.com/v1"
  organization: "${OPENAI_ORGANIZATION}"

ollama:
  enabled: false
  baseUrl: "http://localhost:11434"
  model: "llama2"

gemini:
  apiKey: "${GEMINI_API_KEY}"
  model: "gemini-2.0-flash"
  baseUrl: "https://generativelanguage.googleapis.com/v1beta"

# GitHub Integration
github:
  token: "${GITHUB_TOKEN}"
  baseUrl: "https://api.github.com"
  apiVersion: "2022-11-28"

# Git Configuration
git:
  repoPath: "."
  defaultBranch: "main"
  includeStaged: true
  includeUnstaged: true

# Output Configuration
output:
  format: "console"
  colorize: true
  verbose: false

# Security Scanning
security:
  enabled: true
  severity: ["medium", "high", "critical"]
  categories: ["injection", "authentication", "authorization"]
  includeDependencies: true

# Test Generation
test:
  enabled: true
  framework: "jest"
  testType: "unit"
  language: "typescript"
  coverageTarget: 80

# Documentation Generation
documentation:
  enabled: true
  format: "markdown"
  language: "typescript"
  includeExamples: true

# File Patterns
filePatterns:
  - "**/*.ts"
  - "**/*.js"
  - "**/*.py"
  - "**/*.java"

excludePatterns:
  - "**/node_modules/**"
  - "**/dist/**"
  - "**/build/**"

# Global Settings
verbose: false
debug: false
```
## Configuration Validation

### Validate Configuration

```bash
# Validate your configuration
ai-dev config validate

# Show the current configuration
ai-dev config show

# Test the configuration with a specific provider
ai-dev config test --provider gemini
```
### Common Configuration Issues

**Invalid YAML syntax:**

```bash
# Check for syntax errors
ai-dev config validate

# Common issues:
# - Incorrect indentation (use 2 spaces)
# - Missing quotes around values with special characters
# - Invalid boolean values (use true/false, not True/False)
```
**Missing required fields:**

```bash
# Check for missing API keys
ai-dev config show | grep -E "(apiKey|token)"

# Set missing values
export GEMINI_API_KEY="your-key"
```
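A pre-flight check for required settings can also be scripted before invoking the tool. A small sketch (the variable names below are just the examples from this page):

```python
import os

def missing_settings(required, env=None):
    """Return the names of required settings that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# Checking against an explicit mapping instead of the real environment:
env = {"LLM_PROVIDER": "gemini", "GEMINI_API_KEY": ""}
missing = missing_settings(["GEMINI_API_KEY", "LLM_PROVIDER"], env)  # ["GEMINI_API_KEY"]
```

Note the check treats an empty string the same as an unset variable, since an empty API key fails in the same way.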
## Best Practices

### Security

**Never commit API keys:**

```bash
# Add local config and env files to .gitignore
echo "ai-dev.config.local.yaml" >> .gitignore
echo ".env" >> .gitignore
```

**Use environment variables:**

```yaml
# Instead of hardcoding a key:
apiKey: "sk-your-key"

# Reference an environment variable:
apiKey: "${OPENAI_API_KEY}"
```
### Organization

**Use project-specific configs:**

```bash
# Project-specific configuration
cp ai-dev.config.yaml ai-dev.config.local.yaml
```

**Separate concerns:**

```
# Different configs for different purposes
ai-dev.config.dev.yaml   # Development
ai-dev.config.prod.yaml  # Production
ai-dev.config.test.yaml  # Testing
```
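With a naming convention like the one above, a wrapper script can pick the right file per environment. A sketch (the `AI_DEV_ENV` selector variable here is hypothetical, not something the tool reads itself):

```python
import os

def config_path(env_name=None):
    """Map an environment name onto the ai-dev.config.<env>.yaml convention."""
    env_name = env_name or os.environ.get("AI_DEV_ENV", "local")
    return f"ai-dev.config.{env_name}.yaml"

path = config_path("dev")  # "ai-dev.config.dev.yaml"
```

The wrapper would then pass the chosen file to the tool, keeping per-environment settings out of the shared default config.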
### Performance

**Optimize file patterns:**

```yaml
# Be specific with patterns
filePatterns:
  - "src/**/*.ts"   # Instead of "**/*.ts"
  - "lib/**/*.dart" # Instead of "**/*.dart"
```

**Exclude unnecessary files:**

```yaml
excludePatterns:
  - "**/node_modules/**"
  - "**/dist/**"
  - "**/*.min.js"
  - "**/coverage/**"
```
Start with basic configuration and gradually add advanced features as needed. The default configuration works well for most use cases.