chore: add current provisioning state before migration

Jesús Pérez 2025-09-22 23:11:41 +01:00
parent a9703b4748
commit 50745b0f22
660 changed files with 88126 additions and 0 deletions


@ -0,0 +1,51 @@
AI capabilities have been successfully implemented as an optional running mode with support for OpenAI, Claude, and generic LLM
providers! Here's what's been added:
✅ Configuration (KCL Schema)
- AIProvider schema in kcl/settings.k:54-79 with configurable provider selection
- Optional mode with feature flags for template, query, and webhook AI
✅ Core AI Library
- core/nulib/lib_provisioning/ai/lib.nu - Complete AI integration library
- Support for OpenAI, Claude, and generic providers
- Configurable endpoints, models, and parameters
✅ Template Generation
- Enhanced render_template function with --ai_prompt flag
- Natural language to infrastructure config generation
✅ Query Enhancement
- Added --ai_query flag to query command in query.nu:21
- Natural language infrastructure queries
✅ Webhook Integration
- webhook/ai_webhook.nu with platform-specific handlers (Slack, Discord, Teams)
- Enhanced existing webhook system with AI processing
✅ CLI Integration
- New ai command module in main_provisioning/ai.nu
- Integrated into main provisioning CLI
Usage Examples:
# Generate infrastructure templates
./core/nulib/provisioning ai template --prompt "3-node Kubernetes cluster with Ceph storage"
# Natural language queries
./core/nulib/provisioning query --ai_query "show all AWS servers with high CPU usage"
# Test AI configuration
./core/nulib/provisioning ai test
# Webhook processing
./core/nulib/provisioning ai webhook --prompt "deploy redis cluster"
All AI capabilities are optional and configurable through the KCL settings with provider choice between OpenAI, Claude, and
generic LLM endpoints.
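From Nushell, a caller can gate on the optional mode before invoking any provider. A minimal sketch using the helpers exported by core/nulib/lib_provisioning/ai/lib.nu (adjust the import path to your script's location):
# Check the optional mode first; fall back when AI is disabled or no key is set
use core/nulib/lib_provisioning/ai/lib.nu *
if (is_ai_enabled) {
    ai_complete "Suggest a 3-node Kubernetes cluster layout"
} else {
    print "AI mode is disabled or no API key is configured"
}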


@ -0,0 +1,44 @@
✅ AI Integration Complete
All four requested features are implemented as an optional running mode:
1. Template System Enhancement ✅
- Enhanced render_template function with AI capabilities
- New render_template_ai function for direct AI template generation
- Natural language to infrastructure config generation
2. Natural Language Query System ✅
- Added --ai_query flag to the query command
- AI processes natural language queries about infrastructure
- Context-aware responses based on infrastructure state
3. Plugin Development ✅
- Complete AI library: core/nulib/lib_provisioning/ai/lib.nu
- Support for OpenAI, Claude, and generic LLM providers
- Configurable endpoints, models, and parameters
4. Webhook Integration ✅
- Chat platform support: Slack, Discord, Teams, generic
- Enhanced existing webhook system with AI processing
- Platform-specific response formatting
Configuration
- KCL Schema: kcl/settings.k:54-79 - AIProvider configuration
- Optional Mode: Enable/disable via ai.enabled = true
- Provider Choice: "openai" | "claude" | "generic"
- Feature Flags: Individual toggles for template, query, webhook AI
Usage Ready
Set API key environment variable and enable in KCL settings:
export OPENAI_API_KEY="your-key" # for OpenAI
export ANTHROPIC_API_KEY="your-key" # for Claude
export LLM_API_KEY="your-key" # for generic
All AI functionality is now available as an optional running mode with full provider choice support!
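Once a key is exported, the setup can be sanity-checked with the helpers in core/nulib/lib_provisioning/ai/lib.nu (a sketch; adjust the import path as needed):
# Validate settings and round-trip a minimal completion request
use core/nulib/lib_provisioning/ai/lib.nu *
validate_ai_config    # returns {valid: bool, issues: list}
test_ai_connection    # sends a tiny test completion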


@ -0,0 +1,130 @@
Perfect! I have successfully implemented a comprehensive AI-powered infrastructure automation system for
your provisioning tool. Here's what has been accomplished:
✨ AI Infrastructure Automation - Complete Implementation
🎯 Key Features Implemented:
1. 🤖 AI-Powered KCL Generation
- Natural language to KCL configuration conversion
- Support for servers, defaults, settings, clusters, and task services
- Provider-specific optimizations (UpCloud, AWS, Local)
2. 🧠 Multiple AI Provider Support
- OpenAI (GPT-4, GPT-3.5)
- Anthropic Claude (Claude-3.5 Sonnet)
- Generic/Local LLM APIs (Ollama, etc.)
3. 💬 Interactive Interfaces
- Command-line AI chat
- Interactive configuration builder
- Webhook integration for Slack/Discord
- Natural language query processing
4. 🔧 Enhanced Generation Commands
- ./provisioning ai - Main AI command interface
- ./provisioning generate-ai - AI-enhanced generation
- Interactive mode with follow-up questions
- Automatic validation and improvement
🛠 Files Created/Enhanced:
Core AI Library
- core/nulib/lib_provisioning/ai/lib.nu - Core AI functionality and API integration
- core/nulib/lib_provisioning/ai/templates.nu - KCL template generation
- core/nulib/lib_provisioning/ai/webhook.nu - Chat/webhook processing
- core/nulib/lib_provisioning/ai/mod.nu - Module exports
Command Interface
- core/nulib/main_provisioning/ai.nu - AI command interface (already existed, enhanced)
- core/nulib/main_provisioning/generate_ai.nu - Enhanced generation commands
Configuration Files
- kcl/settings.k - Added AIProvider schema (already existed)
- templates/ai.yaml - AI configuration template
- templates/default_context.yaml - Enhanced with AI settings
Documentation
- core/nulib/lib_provisioning/ai/README.md - Comprehensive documentation
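The core library is re-exported through mod.nu (which currently contains only export use lib.nu *), so a consumer can load the AI helpers in one line (sketch; adjust the path to your script's location):
use core/nulib/lib_provisioning/ai/mod.nu *
is_ai_enabled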
🚀 Usage Examples:
Generate Infrastructure with Natural Language
# Interactive generation
./provisioning ai generate --interactive
# Generate Kubernetes servers
./provisioning generate-ai servers "3-node Kubernetes cluster with Ceph storage and monitoring" --provider upcloud --validate
# Generate AWS production defaults
./provisioning ai gen -t defaults -p aws -i "High-availability production environment in us-west-2"
# Improve existing configurations
./provisioning ai improve -i servers.k -o optimized_servers.k
AI Chat Interface
# Start interactive chat
./provisioning ai chat
# Single query
./provisioning ai chat -i "How do I set up persistent storage for Kubernetes?"
# Test AI functionality
./provisioning ai test
Webhook Integration
# Process webhook messages
curl -X POST http://your-server/webhook \
  -H "Content-Type: application/json" \
  -d '{"message": "generate 3 kubernetes servers", "user_id": "user123"}'
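Inside the webhook handler, a payload like the one above is routed to the AI library roughly like this (a sketch against the ai_process_webhook signature in lib.nu; the channel name is illustrative):
let body = {message: "generate 3 kubernetes servers", user_id: "user123"}
ai_process_webhook $body.message $body.user_id "slack"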
⚙️ Configuration:
Environment Variables
export PROVISIONING_AI_ENABLED=true
export PROVISIONING_AI_PROVIDER="openai"
export OPENAI_API_KEY="your-api-key"
KCL Configuration
ai = AIProvider {
    enabled = True
    provider = "openai"
    model = "gpt-4"
    max_tokens = 2048
    temperature = 0.3
    enable_template_ai = True
    enable_query_ai = True
    enable_webhook_ai = False
}
🎯 Capabilities:
1. Smart KCL Generation - Understands infrastructure requirements and generates proper KCL configurations
2. Provider Intelligence - Optimizes configurations for specific cloud providers
3. Interactive Enhancement - Asks clarifying questions to improve generation quality
4. Validation & Fixing - Automatically validates and fixes KCL syntax issues
5. Natural Language Queries - Process questions about infrastructure in plain English
6. Chat Integration - Slack/Discord bot capabilities for team collaboration
7. Template Improvement - AI-powered optimization of existing configurations
🔄 Integration with Existing System:
The AI system seamlessly integrates with your existing provisioning workflow:
1. Generate configurations with AI
2. Validate using existing KCL tools
3. Apply using standard provisioning commands
4. Monitor and iterate with AI assistance
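Using only the commands introduced above, that loop might look like:
# 1. Generate with AI (interactive clarifying questions)
./provisioning ai generate --interactive
# 2-3. Generate and validate in one pass, then apply via the existing workflow
./provisioning generate-ai servers "3-node Kubernetes cluster with Ceph storage and monitoring" --provider upcloud --validate
# 4. Iterate: let AI improve the generated configuration
./provisioning ai improve -i servers.k -o optimized_servers.k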
This creates a powerful natural language interface for your infrastructure automation system, making it
accessible to team members who may not be familiar with KCL syntax while maintaining all the precision and
power of your existing tooling.
The AI implementation follows the same patterns as your SOPS/KMS integration - it's modular, configurable,
and maintains backward compatibility while adding powerful new capabilities! 🚀


@ -0,0 +1,280 @@
# AI Integration Library for Provisioning System
# Provides AI capabilities for infrastructure automation
use std
use ../utils/settings.nu load_settings
# AI provider configurations
export const AI_PROVIDERS = {
    openai: {
        default_endpoint: "https://api.openai.com/v1"
        default_model: "gpt-4"
        auth_header: "Authorization"
        auth_prefix: "Bearer "
    }
    claude: {
        default_endpoint: "https://api.anthropic.com/v1"
        default_model: "claude-3-5-sonnet-20241022"
        auth_header: "x-api-key"
        auth_prefix: ""
    }
    generic: {
        default_endpoint: "http://localhost:11434/v1"
        default_model: "llama2"
        auth_header: "Authorization"
        auth_prefix: "Bearer "
    }
}
# Get AI configuration from settings
export def get_ai_config [] {
    let settings = (load_settings)
    if "ai" not-in $settings.data {
        return {
            enabled: false
            provider: "openai"
            max_tokens: 2048
            temperature: 0.3
            timeout: 30
            enable_template_ai: true
            enable_query_ai: true
            enable_webhook_ai: false
        }
    }
    $settings.data.ai
}
# Check if AI is enabled and configured
export def is_ai_enabled [] {
    let config = (get_ai_config)
    $config.enabled and ($env.OPENAI_API_KEY? != null or $env.ANTHROPIC_API_KEY? != null or $env.LLM_API_KEY? != null)
}
# Get provider-specific configuration
export def get_provider_config [provider: string] {
    $AI_PROVIDERS | get $provider
}
# Build API request headers
export def build_headers [config: record] {
    let provider_config = (get_provider_config $config.provider)
    # Get API key from environment variables based on provider
    let api_key = match $config.provider {
        "openai" => $env.OPENAI_API_KEY?
        "claude" => $env.ANTHROPIC_API_KEY?
        _ => $env.LLM_API_KEY?
    }
    let auth_value = $provider_config.auth_prefix + ($api_key | default "")
    # Build the record with `insert` so the header name can come from the provider config
    let headers = ({"Content-Type": "application/json"}
        | insert $provider_config.auth_header $auth_value)
    # Anthropic's Messages API additionally requires a version header
    if $config.provider == "claude" {
        $headers | insert "anthropic-version" "2023-06-01"
    } else {
        $headers
    }
}
# Build API endpoint URL
export def build_endpoint [config: record, path: string] {
    let provider_config = (get_provider_config $config.provider)
    let base_url = ($config.api_endpoint? | default $provider_config.default_endpoint)
    $base_url + $path
}
# Make AI API request
export def ai_request [
    config: record
    path: string
    payload: record
] {
    let headers = (build_headers $config)
    let url = (build_endpoint $config $path)
    # --max-time takes a duration, so convert the configured timeout (seconds)
    http post $url --headers $headers --max-time ($config.timeout * 1sec) $payload
}
# Generate completion using OpenAI-compatible API
export def ai_complete [
    prompt: string
    --system_prompt: string = ""
    --max_tokens: int
    --temperature: float
] {
    let config = (get_ai_config)
    if not (is_ai_enabled) {
        return "AI is not enabled or configured. Please set OPENAI_API_KEY, ANTHROPIC_API_KEY, or LLM_API_KEY environment variable and enable AI in settings."
    }
    let messages = if ($system_prompt | is-empty) {
        [{role: "user", content: $prompt}]
    } else {
        [
            {role: "system", content: $system_prompt}
            {role: "user", content: $prompt}
        ]
    }
    let payload = {
        model: ($config.model? | default (get_provider_config $config.provider).default_model)
        messages: $messages
        max_tokens: ($max_tokens | default $config.max_tokens)
        temperature: ($temperature | default $config.temperature)
    }
    let endpoint = match $config.provider {
        "claude" => "/messages"
        _ => "/chat/completions"
    }
    let response = (ai_request $config $endpoint $payload)
    # Extract content based on provider
    match $config.provider {
        "claude" => {
            if "content" in $response and ($response.content | length) > 0 {
                $response.content.0.text
            } else {
                "Invalid response from Claude API"
            }
        }
        _ => {
            if "choices" in $response and ($response.choices | length) > 0 {
                $response.choices.0.message.content
            } else {
                "Invalid response from OpenAI-compatible API"
            }
        }
    }
}
# Generate infrastructure template from natural language
export def ai_generate_template [
    description: string
    template_type: string = "server"
] {
    let system_prompt = $"You are an infrastructure automation expert. Generate KCL configuration files for cloud infrastructure based on natural language descriptions.
Template Type: ($template_type)
Available Providers: AWS, UpCloud, Local
Available Services: Kubernetes, containerd, Cilium, Ceph, PostgreSQL, Gitea, HAProxy
Generate valid KCL code that follows these patterns:
- Use proper KCL schema definitions
- Include provider-specific configurations
- Add appropriate comments
- Follow existing naming conventions
- Include security best practices
Return only the KCL configuration code, no explanations."
    if not (get_ai_config).enable_template_ai {
        return "AI template generation is disabled"
    }
    ai_complete $description --system_prompt $system_prompt
}
# Process natural language query
export def ai_process_query [
    query: string
    context: record = {}
] {
    let system_prompt = $"You are a cloud infrastructure assistant. Help users query and understand their infrastructure state.
Available Infrastructure Context:
- Servers, clusters, task services
- AWS, UpCloud, local providers
- Kubernetes deployments
- Storage, networking, compute resources
Convert natural language queries into actionable responses. If the query requires specific data, request the appropriate provisioning commands.
Be concise and practical. Focus on infrastructure operations and management."
    if not (get_ai_config).enable_query_ai {
        return "AI query processing is disabled"
    }
    let enhanced_query = if ($context | is-empty) {
        $query
    } else {
        $"Context: ($context | to json)\n\nQuery: ($query)"
    }
    ai_complete $enhanced_query --system_prompt $system_prompt
}
# Process webhook/chat message
export def ai_process_webhook [
    message: string
    user_id: string = "unknown"
    channel: string = "webhook"
] {
    let system_prompt = $"You are a cloud infrastructure assistant integrated via webhook/chat.
Help users with:
- Infrastructure provisioning and management
- Server operations and troubleshooting
- Kubernetes cluster management
- Service deployment and configuration
Respond concisely for chat interfaces. Provide actionable commands when possible.
Use the provisioning CLI format: ./core/nulib/provisioning <command>
Current user: ($user_id)
Channel: ($channel)"
    if not (get_ai_config).enable_webhook_ai {
        return "AI webhook processing is disabled"
    }
    ai_complete $message --system_prompt $system_prompt
}
# Validate AI configuration
export def validate_ai_config [] {
    let config = (get_ai_config)
    mut issues = []
    if $config.enabled {
        # API keys come from the environment, not from settings (see build_headers)
        let api_key = match $config.provider {
            "openai" => $env.OPENAI_API_KEY?
            "claude" => $env.ANTHROPIC_API_KEY?
            _ => $env.LLM_API_KEY?
        }
        if ($api_key == null) {
            $issues = ($issues | append "API key not configured (set OPENAI_API_KEY, ANTHROPIC_API_KEY, or LLM_API_KEY)")
        }
        if $config.provider not-in ($AI_PROVIDERS | columns) {
            $issues = ($issues | append $"Unsupported provider: ($config.provider)")
        }
        if $config.max_tokens < 1 {
            $issues = ($issues | append "max_tokens must be positive")
        }
        if $config.temperature < 0.0 or $config.temperature > 1.0 {
            $issues = ($issues | append "temperature must be between 0.0 and 1.0")
        }
    }
    {
        valid: ($issues | is-empty)
        issues: $issues
    }
}
# Test AI connectivity
export def test_ai_connection [] {
    if not (is_ai_enabled) {
        return {
            success: false
            message: "AI is not enabled or configured"
        }
    }
    let response = (ai_complete "Test connection - respond with 'OK'" --max_tokens 10)
    {
        success: true
        message: "AI connection test completed"
        response: $response
    }
}
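# Example usage (a sketch; assumes AI is enabled in settings and an API key is
# exported in the environment):
#   use lib.nu *
#   validate_ai_config    # {valid: ..., issues: [...]}
#   test_ai_connection    # round-trips a tiny completion request
#   ai_generate_template "3-node Kubernetes cluster with Ceph storage"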


@ -0,0 +1 @@
export use lib.nu *