
15 - Developer & Integration Guide

πŸ› οΈ Technical Documentation for Developers
⏱️ Time Estimate: 30 minutes
πŸ“‹ What You’ll Learn: Architecture, database schema, API integration, building from source



┌────────────────────────────────────────┐
│ Frontend (React)                       │
│  - UI Components                       │
│  - State Management                    │
│  - Visualization Rendering             │
└───────────────────┬────────────────────┘
                    │ Tauri IPC
┌───────────────────▼────────────────────┐
│ Backend (Rust)                         │
│  - Database Operations                 │
│  - File System Access                  │
│  - API Integrations                    │
│  - Audio Processing                    │
└───────────────────┬────────────────────┘
                    │
          ┌─────────┴─────────┐
          │                   │
┌─────────▼───────┐   ┌───────▼─────────┐
│ SQLite Database │   │ File System     │
│  - Projects     │   │  - Audio Files  │
│  - Transcripts  │   │  - Models       │
└─────────────────┘   └─────────────────┘

Design Principles:

1. Local-First:

  • All data stored locally (SQLite + file system)
  • No cloud dependencies (except AI providers)
  • Offline capable with local models

2. Privacy-Focused:

  • No telemetry or tracking
  • Encrypted API key storage
  • User controls all data flow

3. Modular Architecture:

  • Clear separation: Frontend ↔ Backend
  • Pluggable AI providers
  • Extensible visualization system

4. Performance:

  • Async processing (Rust Tokio)
  • Debounced search (300 ms; see the debounce sketch below)
  • Efficient database queries (indexed)
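
As a hedged illustration of the debounce pattern behind the 300 ms search delay (searchTranscripts here is a stand-in for the app's real search call, not its actual name):

// Minimal debounce sketch; `searchTranscripts` is a hypothetical stand-in.
const searchTranscripts = (query: string): void => {
  console.log('searching for:', query); // placeholder for the real IPC call
};

function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>): void => {
    clearTimeout(timer); // cancel the pending call on every keystroke
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Fires only once typing pauses for 300 ms
const debouncedSearch = debounce(searchTranscripts, 300);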

Frontend Core:

  • React 18 - UI framework
  • TypeScript - Type safety
  • Vite - Build tool & dev server
  • Tailwind CSS - Utility-first styling

Key Libraries:

{
  "react": "^18.2.0",
  "typescript": "^5.0.0",
  "vite": "^4.3.0",
  "tailwindcss": "^3.3.0",
  "lucide-react": "^0.263.1",
  "@tauri-apps/api": "^2.0.0",
  "openai": "^6.3.0",
  "@google/generative-ai": "^0.24.1"
}

Visualization:

  • Mermaid.js - Flowcharts
  • D3.js - Mind maps
  • TanStack Table - Action matrices

Backend Core:

  • Rust - Systems programming
  • Tauri 2.0 - Desktop framework
  • Tokio - Async runtime
  • SQLx - Database access

Key Crates:

[dependencies]
tauri = "2.0"
tokio = { version = "1.28", features = ["full"] }
sqlx = { version = "0.7", features = ["sqlite", "runtime-tokio"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
reqwest = { version = "0.11", features = ["json"] }

AI Integration:

  • reqwest - HTTP client for API calls
  • Custom service layers for each provider

Database Schema:

Core Tables:

1. PROJECTS

CREATE TABLE projects (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  description TEXT,
  color_tag TEXT,
  created_at TEXT NOT NULL,
  last_accessed TEXT NOT NULL
);

CREATE INDEX idx_projects_name ON projects(name);
CREATE INDEX idx_projects_created ON projects(created_at DESC);

2. TRANSCRIPTS

CREATE TABLE transcripts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  project_id INTEGER NOT NULL,
  filename TEXT NOT NULL,
  original_text TEXT NOT NULL,
  processed_json TEXT,
  processing_status TEXT DEFAULT 'pending',
  file_size INTEGER,
  upload_date TEXT NOT NULL,
  -- F011 Audio Recording Fields
  audio_file_path TEXT,
  recording_duration INTEGER,
  transcription_method TEXT DEFAULT 'file_upload',
  transcription_provider TEXT,
  transcription_model TEXT,
  audio_format TEXT,
  recording_type TEXT,
  transcription_status TEXT DEFAULT 'pending',
  -- F015 Temporal Data
  temporal_data_json TEXT,
  -- Processing Metadata
  tokens_used INTEGER,
  processing_cost REAL,
  model_used TEXT,
  provider TEXT,
  FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE
);

CREATE INDEX idx_transcripts_project ON transcripts(project_id);
CREATE INDEX idx_transcripts_upload ON transcripts(upload_date DESC);
CREATE INDEX idx_transcripts_audio ON transcripts(audio_file_path);

3. LLM_SETTINGS

CREATE TABLE llm_settings (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  -- Transcription Settings
  transcription_provider TEXT DEFAULT 'ollama',
  transcription_model TEXT DEFAULT 'whisper:base',
  transcription_api_key TEXT,
  transcription_api_url TEXT DEFAULT 'http://localhost:11434',
  -- Analysis Settings
  analysis_provider TEXT DEFAULT 'ollama',
  analysis_model TEXT DEFAULT 'llama3.1:latest',
  analysis_api_key TEXT,
  analysis_api_url TEXT,
  -- Automation
  auto_transcribe BOOLEAN DEFAULT 0,
  auto_analyze BOOLEAN DEFAULT 0,
  -- Custom Models
  custom_transcription_models TEXT, -- JSON array
  custom_analysis_models TEXT,      -- JSON array
  updated_at TEXT NOT NULL
);

-- Single settings record
INSERT INTO llm_settings (id, updated_at) VALUES (1, datetime('now'));
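
Because the table holds exactly one row (id = 1), the frontend can treat settings as a singleton. A hedged sketch of that pattern, assuming the Tauri 2.x import path and hypothetical get_llm_settings / update_llm_settings command names (not confirmed against the actual backend):

import { invoke } from '@tauri-apps/api/core';

// Hypothetical command names, shown only to illustrate the singleton-row pattern.
interface LlmSettings {
  analysis_provider: string;
  analysis_model: string;
  auto_transcribe: boolean;
  auto_analyze: boolean;
}

async function enableAutoAnalyze(): Promise<void> {
  const settings = await invoke<LlmSettings>('get_llm_settings');
  await invoke('update_llm_settings', { ...settings, auto_analyze: true });
}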

4. LICENSE_SETTINGS

CREATE TABLE license_settings (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  license_key TEXT UNIQUE,
  activation_status TEXT DEFAULT 'inactive',
  instance_id TEXT,
  plan_name TEXT,
  activation_date TEXT,
  activation_response_json TEXT,
  updated_at TEXT NOT NULL
);

5. VISUALIZATIONS (Planned)

CREATE TABLE visualizations (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  transcript_id INTEGER NOT NULL,
  visualization_type TEXT NOT NULL,
  svg_data TEXT,
  mermaid_code TEXT,
  export_format TEXT,
  created_at TEXT NOT NULL,
  FOREIGN KEY (transcript_id) REFERENCES transcripts(id) ON DELETE CASCADE
);

Format stored in transcripts.processed_json:

interface ProcessedTranscript {
  meeting_metadata: {
    title: string;
    date: string; // ISO 8601
    participants: string[];
    meeting_type: "standup" | "planning" | "decision" | "brainstorm" | "other";
    transcript_type: "single_speaker" | "multi_speaker";
  };
  decisions: Array<{
    id: string; // "D1", "D2", etc.
    text: string;
    timestamp?: string;
    flows_to: string[]; // Action IDs
  }>;
  actions: Array<{
    id: string; // "A1", "A2", etc.
    task: string;
    owner: string;
    deadline: string | null; // ISO 8601 date
    priority: "high" | "medium" | "low";
    source_decision: string; // Decision ID
  }>;
  concepts: Array<{
    id: string; // "C1", "C2", etc.
    topic: string;
    importance: number; // 1-5
    sub_topics: string[];
    related_concepts: string[]; // Concept IDs
  }>;
  processing_metadata: {
    model_used: string;
    provider: "openai" | "gemini" | "ollama";
    timestamp: string; // ISO 8601
    tokens_used?: number;
    cost_estimate?: number;
  };
}
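
Because processed_json is a plain TEXT column, the frontend must parse and minimally validate it before rendering. A hedged sketch of that step:

// Parse the TEXT column into the interface above, guarding against rows
// that were never processed or whose processing run failed part-way.
function parseProcessedTranscript(raw: string | null): ProcessedTranscript | null {
  if (!raw) return null; // transcript not yet processed
  try {
    const parsed = JSON.parse(raw) as ProcessedTranscript;
    // Minimal shape check before the UI trusts the data
    if (!parsed.meeting_metadata || !Array.isArray(parsed.decisions)) return null;
    return parsed;
  } catch {
    return null; // malformed JSON
  }
}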

Frontend Structure:

src/
├── components/
│   ├── Dashboard/                 # Main app layout
│   ├── Header/                    # Global header with search
│   ├── Sidebar/                   # Project navigation (F001)
│   ├── AudioRecorder/             # Recording UI (F011)
│   ├── TranscriptUpload/          # File upload (F002)
│   ├── LLMProcessing/             # AI processing modal
│   ├── TranscriptVisualization/   # Results display
│   ├── Settings/                  # Settings page (F007)
│   ├── GlobalSearch/              # Search dropdown (F020)
│   ├── EditableText/              # Inline editing (F004)
│   ├── SentimentArc/              # F015 visualization
│   └── ParticipationHeatmap/      # F017 visualization
│
├── contexts/
│   ├── ProjectContext.tsx         # Global project state
│   └── SettingsContext.tsx        # App settings
│
├── utils/
│   ├── llmService.ts              # LLM integration
│   ├── transcriptApi.ts           # Backend API calls
│   └── searchApi.ts               # Search functions
│
├── types/
│   ├── transcript.ts              # TypeScript interfaces
│   ├── project.ts
│   └── settings.ts
│
└── App.tsx                        # Root component

Backend Structure:

src-tauri/src/
├── main.rs              # Tauri app entry
├── commands.rs          # Tauri IPC commands
├── database.rs          # Database operations
├── audio_commands.rs    # Audio recording (F011)
├── ollama_service.rs    # Ollama integration
├── license_service.rs   # License management (F014)
└── lib.rs               # Module exports
Key Tauri Commands:

// Project Management
#[tauri::command]
async fn create_project(pool: State<'_, SqlitePool>, name: String, description: String, color: String) -> Result<i64, String>

#[tauri::command]
async fn get_projects(pool: State<'_, SqlitePool>) -> Result<Vec<Project>, String>

// Transcript Management
#[tauri::command]
async fn upload_transcript(pool: State<'_, SqlitePool>, project_id: i64, filename: String, content: String) -> Result<i64, String>

#[tauri::command]
async fn get_transcripts(pool: State<'_, SqlitePool>, project_id: i64) -> Result<Vec<Transcript>, String>

// Audio Recording (F011)
#[tauri::command]
async fn save_audio_recording(pool: State<'_, SqlitePool>, project_id: i64, audio_data: Vec<u8>) -> Result<i64, String>

#[tauri::command]
async fn transcribe_audio_ollama(pool: State<'_, SqlitePool>, transcript_id: i64) -> Result<String, String>

// Search (F020)
#[tauri::command]
async fn search_transcripts(pool: State<'_, SqlitePool>, query: String) -> Result<Vec<SearchResult>, String>

// Data Export (F013)
#[tauri::command]
async fn export_app_data(pool: State<'_, SqlitePool>) -> Result<String, String>

#[tauri::command]
async fn import_app_data(pool: State<'_, SqlitePool>, zip_content: Vec<u8>, overwrite: bool) -> Result<ImportResult, String>
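
On the frontend, these commands are reached over Tauri IPC. A minimal sketch, assuming the Tauri 2.x import path; argument values are illustrative:

import { invoke } from '@tauri-apps/api/core';

// Tauri exposes snake_case Rust parameters as camelCase keys by default.
async function createProjectAndUpload(): Promise<void> {
  const projectId = await invoke<number>('create_project', {
    name: 'Q3 Planning',
    description: 'Quarterly planning meetings',
    color: '#3b82f6',
  });

  await invoke<number>('upload_transcript', {
    projectId,
    filename: 'kickoff.txt',
    content: 'Raw transcript text...',
  });
}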

Build Prerequisites:

# Node.js 18+
node --version # v18.0.0+
# Rust 1.70+
rustc --version # 1.70.0+
# Cargo (comes with Rust)
cargo --version
# Platform-specific:
# Windows: Visual Studio Build Tools
# macOS: Xcode Command Line Tools
# Linux: build-essential, libwebkit2gtk-4.1-dev

Clone the Repository:

git clone https://github.com/shobankr/selfoss.git
cd selfoss

Frontend:

npm install

Backend (Rust):

# Tauri CLI
cargo install tauri-cli
# Dependencies auto-installed via Cargo.toml

Development:

# Start dev server (hot reload)
npm run tauri dev
# Runs:
# 1. Vite dev server (frontend)
# 2. Tauri app (backend)
# 3. Opens desktop app window

Production Build:

# Build optimized app
npm run tauri build
# Outputs:
# Windows: src-tauri/target/release/selfoss.exe
# macOS: src-tauri/target/release/bundle/macos/Selfoss.app
# Linux: src-tauri/target/release/selfoss

Windows:

# Install Visual Studio Build Tools
# https://visualstudio.microsoft.com/downloads/
# Or via winget
winget install Microsoft.VisualStudio.2022.BuildTools
# Then build
npm run tauri build

macOS:

# Install Xcode Command Line Tools
xcode-select --install
# Then build
npm run tauri build
# Code signing (for distribution)
codesign -s "Developer ID" src-tauri/target/release/bundle/macos/Selfoss.app

Linux (Ubuntu/Debian):

# Install dependencies
sudo apt update
sudo apt install -y \
  libwebkit2gtk-4.1-dev \
  build-essential \
  curl \
  wget \
  libssl-dev \
  libgtk-3-dev \
  libayatana-appindicator3-dev \
  librsvg2-dev
# Then build
npm run tauri build

VS Code (Recommended):

.vscode/extensions.json
{
  "recommendations": [
    "tauri-apps.tauri-vscode",
    "rust-lang.rust-analyzer",
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode"
  ]
}

VS Code Settings:

.vscode/settings.json
{
  "editor.formatOnSave": true,
  "rust-analyzer.cargo.allFeatures": true,
  "eslint.format.enable": true
}

Running Tests:

Frontend (Jest/Vitest):

# Run tests
npm test
# Watch mode
npm test -- --watch
# Coverage
npm test -- --coverage
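
A minimal Vitest example (formatDuration is a hypothetical utility, defined inline so the test is self-contained):

import { describe, expect, it } from 'vitest';

// Hypothetical utility under test
function formatDuration(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = seconds % 60;
  return `${m}:${s.toString().padStart(2, '0')}`;
}

describe('formatDuration', () => {
  it('formats whole minutes', () => {
    expect(formatDuration(120)).toBe('2:00');
  });

  it('pads single-digit seconds', () => {
    expect(formatDuration(65)).toBe('1:05');
  });
});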

Backend (Rust):

# Run tests
cargo test
# With output
cargo test -- --nocapture
# Specific test
cargo test test_name

Debugging:

Frontend:

# Chrome DevTools built-in
# Ctrl+Shift+I (Windows/Linux)
# Cmd+Option+I (macOS)

Backend (Rust):

# Use VS Code debugger
# Or rust-lldb
rust-lldb src-tauri/target/debug/selfoss

Creating Custom Ollama Models:

1. Write Modelfile:

# Modelfile
FROM llama3.1:latest
PARAMETER temperature 0.7
PARAMETER top_p 0.9
SYSTEM """
You are an expert meeting analyzer specializing in technical discussions.
Extract decisions, action items, and concepts with high precision.
Focus on technical terminology and specifications.
"""

2. Create Model:

ollama create technical-meeting -f Modelfile

3. Test Model:

ollama run technical-meeting "Test prompt"

4. Add to Selfoss:

Settings → LLM & Processing
→ Manage Custom Models
→ Add "technical-meeting"

Examples:

Medical Meetings:

FROM llama3.1:70b
SYSTEM """
You are a medical meeting analyst.
Understand medical terminology, procedures, and diagnoses.
Extract patient-related actions and clinical decisions.
Maintain HIPAA compliance in outputs.
"""

Legal Meetings:

FROM llama3.1:latest
SYSTEM """
You are a legal meeting analyst.
Understand legal terminology, case references, and procedures.
Extract legal actions, deadlines, and decisions.
Note document references and jurisdictions.
"""

Sales Calls:

FROM llama3.1:latest
SYSTEM """
You are a sales call analyzer.
Extract: objections, commitments, next steps, pricing discussions.
Identify buying signals and deal status.
Track follow-up actions and decision-makers.
"""

Direct Ollama API Calls:

Example: Custom transcription call

// Frontend
async function transcribeWithOllama(audioPath: string): Promise<string> {
  const endpoint = 'http://localhost:11434/api/generate';
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'whisper:base',
      prompt: '', // Audio handling varies
      stream: false
    })
  });
  const data = await response.json();
  return data.response;
}

Rust Backend:

src-tauri/src/ollama_service.rs
pub async fn transcribe_audio(
    audio_path: &str,
    model: &str,
    endpoint: &str,
) -> Result<String, String> {
    let client = reqwest::Client::new();

    // Read audio file
    let audio_data = tokio::fs::read(audio_path)
        .await
        .map_err(|e| e.to_string())?;

    // Call Ollama API
    let response = client
        .post(format!("{}/api/generate", endpoint))
        .json(&serde_json::json!({
            "model": model,
            "audio": base64::encode(audio_data),
            "stream": false
        }))
        .send()
        .await
        .map_err(|e| e.to_string())?;

    let result: serde_json::Value = response
        .json()
        .await
        .map_err(|e| e.to_string())?;

    // Avoid panicking on a missing field; surface it as an error instead
    result["response"]
        .as_str()
        .map(str::to_string)
        .ok_or_else(|| "missing 'response' field in Ollama reply".to_string())
}

Adding a Custom AI Provider:

1. Define Provider Interface:

src/types/llm.ts
interface LLMProvider {
  name: string;
  transcribe(audio: File): Promise<string>;
  analyze(text: string): Promise<ProcessedTranscript>;
  testConnection(): Promise<boolean>;
}

2. Implement Provider:

src/utils/providers/customProvider.ts
class CustomProvider implements LLMProvider {
  name = 'custom';
  private apiKey: string;
  private endpoint: string;

  constructor(apiKey: string, endpoint: string) {
    this.apiKey = apiKey;
    this.endpoint = endpoint;
  }

  async transcribe(audio: File): Promise<string> {
    // Implementation: call your provider's transcription endpoint
    throw new Error('not implemented');
  }

  async analyze(text: string): Promise<ProcessedTranscript> {
    // Implementation: send text plus extraction prompt, parse JSON response
    throw new Error('not implemented');
  }

  async testConnection(): Promise<boolean> {
    // Implementation: lightweight ping against this.endpoint
    throw new Error('not implemented');
  }
}

3. Register Provider:

src/utils/llmService.ts
const providers = {
  'ollama': OllamaProvider,
  'openai': OpenAIProvider,
  'gemini': GeminiProvider,
  'custom': CustomProvider // Add here
};

Contribution Workflow:

  1. Fork repository
  2. Create feature branch: git checkout -b feature/my-feature
  3. Follow code style (Prettier, ESLint, Rustfmt)
  4. Write tests for new features
  5. Update documentation as needed
  6. Submit pull request

Code Style:

TypeScript:

  • Use Prettier for formatting
  • Follow ESLint rules
  • Use TypeScript strict mode
  • Document complex functions

Rust:

  • Use rustfmt for formatting
  • Follow Clippy lints
  • Document public APIs
  • Write unit tests

Commit Message Format:

type(scope): subject

body (optional)

footer (optional)

Types:

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation
  • refactor: Code refactoring
  • test: Adding tests
  • chore: Maintenance

Examples:

feat(audio): add support for MP3 files

Implements MP3 transcoding to WAV before processing.
Uses ffmpeg for conversion.

Closes #123

New features must include:

  • Unit tests (Jest/Vitest or Rust)
  • Integration tests (if applicable)
  • Manual testing steps in PR

Update relevant docs:

  • README.md (if user-facing)
  • USER_GUIDES/ (if new feature)
  • Code comments (for complex logic)
  • API documentation (if backend changes)

πŸ› οΈ Ready to contribute!

  1. 🔧 Set up dev environment - Follow build instructions
  2. 📚 Read architecture docs - Understand codebase
  3. 🐛 Fix a bug - Start with "good first issue"
  4. 💡 Propose feature - Open discussion first
  5. 🤝 Join community - GitHub Discussions

Resources:

  • Tauri Docs: tauri.app
  • React Docs: react.dev
  • Rust Book: doc.rust-lang.org/book
  • SQLite Docs: sqlite.org/docs.html

πŸ› οΈ Build the future of meeting intelligence.