TECHNICAL GUIDANCE
AIBridges Security Division
Air-Gapped Local AI Deployment
Ollama Configuration Guide
DOCUMENT NO: OAG-2026-0218-001
Date: 2026-02-18
Classification: UNCLASSIFIED
Platforms: Linux, Windows, macOS
Compliance: Attorney-Client Privilege
Author: AIBridges Technical Team
Version: 1.0

1. Executive Summary

This document provides technical guidance for deploying Ollama, an open-source local AI inference engine, in an air-gapped configuration. Air-gapping ensures that after initial setup the AI system has zero internet connectivity, so that sensitive client data never leaves the local machine.

Key Benefits for Legal Practice

  - Client data is processed entirely on local hardware; nothing is transmitted to external services.
  - Zero internet connectivity after initial setup eliminates the risk of inadvertent disclosure.
  - Supports attorney-client privilege and confidentiality obligations.

2. System Requirements
Component   Minimum        Recommended      Notes
RAM         8 GB           16-32 GB         More RAM = larger models
Storage     20 GB free     50+ GB free      Models range 2-40 GB each
CPU         4 cores        8+ cores         Apple Silicon preferred
GPU         Optional       NVIDIA 8 GB+     10-50x faster inference
OS          Windows 10/11, macOS 12+, Ubuntu 20.04+
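The RAM column maps roughly onto model size. As a planning heuristic (our assumption, not an official Ollama figure), a 4-bit quantized model needs about 0.75 GB of RAM per billion parameters, plus roughly 1 GB of overhead:

```shell
# Rough RAM estimate for a 4-bit quantized model.
# Assumption: ~0.75 GB per billion parameters + ~1 GB overhead.
model_ram_gb() {
  awk -v b="$1" 'BEGIN { printf "%.0f\n", b * 0.75 + 1 }'
}
```

For example, `model_ram_gb 8` suggests about 7 GB for an 8B model, which is why 8 GB is the practical floor in the table above.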
Recommended Hardware for Law Firms

Mac Mini M4 Pro (24GB): $1,399 — Runs 7-13B models smoothly. Silent, low power, fits in server closet. Recommended for small-to-medium firms.

Mac Mini M4 Pro (64GB): $2,199 — Runs 70B models. For larger firms or complex document analysis.


3. Installation — Linux (Ubuntu/Debian)

Step 3.1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Step 3.2: Download Models (While Online)

# Recommended for legal document analysis
ollama pull llama3.2        # default general-purpose model
ollama pull llama3.2:3b     # explicit 3B tag, lighter on RAM

# For code/contract analysis
ollama pull codellama:7b
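Before cutting network access, it is worth confirming that every model you need is actually on disk. A small helper that compares a required list against `ollama list` output (a sketch; the model names are the ones pulled above):

```shell
# missing_models: read `ollama list` output on stdin, print each
# required model (passed as arguments) that is not found.
missing_models() {
  installed="$(cat)"
  for m in "$@"; do
    printf '%s\n' "$installed" | grep -qF "$m" || echo "MISSING: $m"
  done
}

# Usage (run while still online, before Step 3.4):
#   ollama list | missing_models llama3.2 llama3.2:3b codellama:7b
```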

Step 3.3: Verify Installation

echo "Summarize attorney-client privilege" | ollama run llama3.2:3b

Step 3.4: Enable Air-Gap (Block Internet)

# Get ollama user ID
OLLAMA_UID=$(id -u ollama)

# Allow localhost only, block all external
sudo iptables -I OUTPUT 1 -m owner --uid-owner $OLLAMA_UID -d 127.0.0.0/8 -j ACCEPT
sudo iptables -I OUTPUT 2 -m owner --uid-owner $OLLAMA_UID -j DROP

Step 3.5: Persist Firewall Rules

cat << 'EOF' | sudo tee /etc/network/if-pre-up.d/ollama-firewall
#!/bin/bash
OLLAMA_UID=$(id -u ollama 2>/dev/null || echo 997)
iptables -I OUTPUT 1 -m owner --uid-owner $OLLAMA_UID -d 127.0.0.0/8 -j ACCEPT
iptables -I OUTPUT 2 -m owner --uid-owner $OLLAMA_UID -j DROP
EOF
sudo chmod +x /etc/network/if-pre-up.d/ollama-firewall

Note: if-pre-up.d hooks only run on systems using ifupdown. On netplan-based Ubuntu releases, install iptables-persistent instead and run sudo netfilter-persistent save after applying the rules in Step 3.4.
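On systemd-based distributions, the rules can also be re-applied at boot by a oneshot unit that runs the script above (a sketch; the unit name is our choice, and ExecStart points at the script created in this step):

```ini
# /etc/systemd/system/ollama-firewall.service
[Unit]
Description=Apply Ollama air-gap firewall rules
After=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
ExecStart=/etc/network/if-pre-up.d/ollama-firewall
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable with: sudo systemctl daemon-reload && sudo systemctl enable --now ollama-firewall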

Step 3.6: Verify Air-Gap

# Should FAIL (blocked)
sudo -u ollama curl -s --connect-timeout 5 https://ollama.com || echo "BLOCKED ✓"

# Should WORK (local)
curl http://localhost:11434/api/tags
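The firewall state can also be checked by script against `iptables -S OUTPUT`, which prints one rule per line. A sketch (the patterns assume the exact rules from Step 3.4; adjust them if you changed the rules):

```shell
# airgap_rules_present: read `iptables -S OUTPUT` output on stdin and
# confirm both air-gap rules are loaded. Exits 0 if both are present.
airgap_rules_present() {
  rules="$(cat)"
  printf '%s\n' "$rules" | grep -q -- '-d 127\.0\.0\.0/8 -m owner --uid-owner [0-9]* -j ACCEPT' &&
  printf '%s\n' "$rules" | grep -q -- '-m owner --uid-owner [0-9]* -j DROP'
}

# Usage: sudo iptables -S OUTPUT | airgap_rules_present && echo "air-gap rules loaded"
```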

4. Installation — Windows

Step 4.1: Download Installer

https://ollama.com/download/windows

Run the .exe installer. Ollama will install to C:\Users\[USERNAME]\AppData\Local\Ollama

Step 4.2: Download Models (PowerShell)

ollama pull llama3.2
ollama pull llama3.2:3b

Step 4.3: Enable Air-Gap via Windows Firewall

# PowerShell (Run as Administrator)
# NOTE: verify the ollama.exe path; on some installs it is under
# AppData\Local\Programs\Ollama rather than AppData\Local\Ollama.

# Block Ollama outbound
New-NetFirewallRule -DisplayName "Block Ollama Outbound" `
    -Direction Outbound `
    -Program "C:\Users\$env:USERNAME\AppData\Local\Ollama\ollama.exe" `
    -Action Block

# Allow localhost only
New-NetFirewallRule -DisplayName "Allow Ollama Localhost" `
    -Direction Outbound `
    -Program "C:\Users\$env:USERNAME\AppData\Local\Ollama\ollama.exe" `
    -RemoteAddress 127.0.0.1 `
    -Action Allow

Note: Windows Firewall gives Block rules precedence over Allow rules for overlapping traffic; loopback (127.0.0.1) traffic is generally not filtered, so the local API remains reachable.

Step 4.4: Alternative — Windows Firewall GUI

  1. Open Windows Defender Firewall with Advanced Security
  2. Click "Outbound Rules" → "New Rule"
  3. Select "Program" → Browse to ollama.exe
  4. Select "Block the connection"
  5. Apply to all profiles
  6. Create second rule allowing localhost (127.0.0.1) only
⚠ Windows Air-Gap Verification

Windows firewall rules are application-based, not user-based. Verify by checking Event Viewer → Windows Logs → Security for blocked connection attempts from ollama.exe (Event ID 5152); these events are recorded only after enabling drop auditing, e.g. auditpol /set /subcategory:"Filtering Platform Packet Drop" /failure:enable.


5. Installation — macOS

Step 5.1: Download from official site

https://ollama.com/download/mac

Or via Homebrew:

brew install ollama

Step 5.2: Download Models

ollama pull llama3.2
ollama pull llama3.2:3b

Step 5.3: Enable Air-Gap via PF Firewall

# Edit /etc/pf.conf (requires sudo)
sudo nano /etc/pf.conf

# Add these lines. Note: "quick" rules match first and stop evaluation,
# so the localhost pass rule must come BEFORE the block rules:
pass out quick proto tcp from any to 127.0.0.1 user _ollama
block out quick proto tcp from any to any user _ollama
block out quick proto udp from any to any user _ollama

# Reload firewall
sudo pfctl -f /etc/pf.conf
sudo pfctl -e

Alternative for macOS: Physical air-gap — simply disconnect from network after model download. Mac Mini can run headless with no network cable.
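The loaded pf ruleset can be inspected with `pfctl -sr`. A scripted check (a sketch; pfctl normalizes rule text when printing, so the patterns below match loosely on protocol and user):

```shell
# pf_airgap_present: read `pfctl -sr` output on stdin and confirm the
# _ollama block rules are loaded. Exits 0 if both are present.
pf_airgap_present() {
  rules="$(cat)"
  printf '%s\n' "$rules" | grep -q 'block.*proto tcp.*user _ollama' &&
  printf '%s\n' "$rules" | grep -q 'block.*proto udp.*user _ollama'
}

# Usage: sudo pfctl -sr | pf_airgap_present && echo "air-gap rules loaded"
```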


6. Integration with DraftBridge

Ollama can serve as the local AI backend for DraftBridge document processing:

# DraftBridge configuration (config.json)
{
  "ai_provider": "ollama",
  "ollama_url": "http://localhost:11434",
  "model": "llama3.2",
  "fallback_model": "llama3.2:3b"
}

This configuration ensures all document analysis occurs locally with no external API calls.
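Requests like DraftBridge's go to Ollama's standard /api/generate endpoint on localhost, and the exchange can be exercised by hand. This helper builds the JSON body (a sketch: it performs no JSON escaping, so the prompt must not contain double quotes or backslashes):

```shell
# ollama_payload MODEL PROMPT: emit the JSON body for a non-streaming
# /api/generate request. No escaping is applied to the prompt.
ollama_payload() {
  printf '{"model":"%s","prompt":"%s","stream":false}\n' "$1" "$2"
}

# Usage (local server only; no external call is made):
#   ollama_payload llama3.2 "Summarize this clause" |
#     curl -s http://localhost:11434/api/generate -d @-
```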


7. Verification Checklist
Check                   Command                            Expected Result
Ollama running          curl http://localhost:11434        "Ollama is running"
Models available        ollama list                        Lists downloaded models
Inference works         ollama run llama3.2:3b "Hello"     Returns response
Internet blocked        Attempt external connection        Connection refused/timeout
Firewall rules active   iptables -L OUTPUT (Linux)         Shows ACCEPT/DROP rules

8. Security Considerations
Data Flow Diagram
┌──────────────────────────────────────────────────────┐
│                    LOCAL MACHINE                     │
│                                                      │
│  ┌──────────┐     ┌──────────┐     ┌──────────┐      │
│  │   User   │ ──▶ │  Ollama  │ ──▶ │  Model   │      │
│  │  Input   │     │  Server  │     │ (Local)  │      │
│  └──────────┘     └──────────┘     └──────────┘      │
│        │                │                │           │
│        ▼                ▼                ▼           │
│  ┌────────────────────────────────────────────┐      │
│  │              STAYS ON MACHINE              │      │
│  └────────────────────────────────────────────┘      │
│                                                      │
├──────────────────────────────────────────────────────┤
│       FIREWALL: BLOCK ALL OUTBOUND FROM OLLAMA       │
├──────────────────────────────────────────────────────┤
│                                                      │
│                    ✗ INTERNET ✗                      │
│                   (No connection)                    │
│                                                      │
└──────────────────────────────────────────────────────┘

What is blocked:

  - All outbound internet traffic from the Ollama process, including telemetry, update checks, and remote model downloads
  - Any transmission of prompts, documents, or model output to external servers

What still works:

  - Local inference from the command line (ollama run)
  - The local HTTP API at localhost:11434, including DraftBridge integration
  - Listing and running models already on disk


9. Troubleshooting
Issue                  Cause                                 Solution
"Model not found"      Model not downloaded before air-gap   Temporarily disable firewall, download model, re-enable
Slow inference         CPU-only mode, no GPU                 Normal for CPU. Consider smaller model or GPU upgrade.
"Connection refused"   Ollama service not running            sudo systemctl start ollama (Linux)
Out of memory          Model too large for RAM               Use smaller model (3b instead of 7b)

10. Summary
Capability                 Status
Local model inference      ✓ ENABLED
API access (localhost)     ✓ ENABLED
DraftBridge integration    ✓ ENABLED
Internet connectivity      ✗ BLOCKED
Telemetry                  ✗ BLOCKED
Remote model downloads     ✗ BLOCKED
Data exfiltration          ✗ BLOCKED (no outbound path from the Ollama process)
Certification Statement

When properly configured per this guide, the Ollama process is blocked at the firewall from all external network access. All data processing occurs locally; no client data is transmitted to, stored by, or accessible to third parties. Firms should re-verify the configuration using the checklist in Section 7 after any operating system or firewall change.