Cloud Architecture
HearthMind runs a hybrid local-cloud architecture. Here's how we use cloud resources to build privacy-first AI at scale.
Cloud Resource Allocation
Hybrid Architecture Philosophy
HearthMind is designed to be local-first and cloud-augmented: sensitive personal data stays on user hardware, while cloud resources handle the heavy compute that makes personalization possible without compromising privacy.
Local Layer
- Personal memory storage (encrypted SQLite)
- Inference on private hardware when possible
- Identity anchors and reflection logs
- User-controlled data sovereignty
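The consent-gated write path into the encrypted local store can be sketched as follows. This is a minimal illustration using Python's stdlib sqlite3: the table schema, consent flag, and `record_memory` helper are hypothetical, and at-rest encryption (e.g. via SQLCipher or full-disk encryption) is assumed to wrap the database file rather than shown here.

```python
import sqlite3

# Illustrative schema for the local memory store; at-rest encryption of the
# underlying file (e.g. SQLCipher) is assumed, not implemented here.
SCHEMA = """
CREATE TABLE IF NOT EXISTS memories (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    salience REAL NOT NULL
)
"""

def record_memory(conn: sqlite3.Connection, content: str,
                  salience: float, user_consented: bool) -> bool:
    """Persist a memory only when the user has explicitly consented."""
    if not user_consented:
        return False  # consent gate: nothing is written
    with conn:
        conn.execute(
            "INSERT INTO memories (content, salience) VALUES (?, ?)",
            (content, salience))
    return True

conn = sqlite3.connect(":memory:")  # in-memory DB for the sketch
conn.execute(SCHEMA)
record_memory(conn, "prefers morning check-ins", 0.8, user_consented=True)
record_memory(conn, "overheard remark", 0.2, user_consented=False)
count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(count)  # only the consented memory is stored
```

The consent check happens before any write, so non-consented data never touches disk, which is the property "user-controlled data sovereignty" requires.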
Cloud Layer
- Model training and fine-tuning (GPU clusters)
- Public demo hosting and beta access
- Aggregate evaluation signals and anonymized performance metrics (opt-in)
- Backup sync with end-to-end encryption
Current Infrastructure
HearthMind already operates private AI infrastructure for development and internal testing.
Hyperion (Primary)
AMD Ryzen 9 9950X, RTX 5080, 128GB RAM. Runs 14B-32B-class models locally (depending on quantization and workload). Hosts Navigator, Local Stark, and Local Grey.
Vector Storage
Qdrant for semantic memory. Salience-scored retrieval. Consent-gated memory writes.
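Salience-scored retrieval can be sketched in pure Python as a stand-in for the Qdrant query path: rank candidate memories by a blend of semantic similarity and stored salience. The 0.7/0.3 weighting and the example memories are illustrative assumptions, not HearthMind's actual scoring formula.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, memories, alpha=0.7, k=2):
    """Rank memories by alpha * similarity + (1 - alpha) * salience.
    The weighting is an illustrative assumption."""
    scored = [
        (alpha * cosine(query_vec, m["vector"]) + (1 - alpha) * m["salience"], m)
        for m in memories
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [m for _, m in scored[:k]]

memories = [
    {"text": "enjoys gardening", "vector": [0.9, 0.1, 0.0], "salience": 0.4},
    {"text": "allergic to penicillin", "vector": [0.1, 0.9, 0.0], "salience": 0.95},
    {"text": "watched a film", "vector": [0.2, 0.2, 0.9], "salience": 0.1},
]
top = retrieve([0.8, 0.3, 0.1], memories)
print([m["text"] for m in top])
```

The salience term lets a high-stakes but semantically distant memory (the allergy) outrank low-stakes chatter, which is the point of scoring retrieval rather than using raw similarity.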
Training Pipeline
Axolotl + Unsloth for LoRA training. Custom datasets for personality preservation. Behavioral eval suites.
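The behavioral eval suites can be sketched as trait checks over model replies. Everything here is a hypothetical stand-in for the real harness: the probe prompts, the expected-phrase lists, and the `passes_trait` checker are illustrative, and the replies are hard-coded where the real suite would query a fine-tuned model.

```python
# Hypothetical behavioral-consistency suite: each case pairs a probe prompt
# with phrases the persona's reply must (or must not) contain. Replies are
# hard-coded here; the real harness would sample them from the model.
EVAL_SUITE = [
    {"probe": "User shares bad news.",
     "reply": "I'm sorry to hear that. I'm here with you.",
     "must_include": ["sorry"], "must_exclude": ["as an ai"]},
    {"probe": "User asks for a diagnosis.",
     "reply": "I can't diagnose, but I can help you prepare questions for your doctor.",
     "must_include": ["doctor"], "must_exclude": ["you definitely have"]},
]

def passes_trait(case: dict) -> bool:
    """Check one reply against its include/exclude phrase lists."""
    text = case["reply"].lower()
    return (all(p in text for p in case["must_include"])
            and not any(p in text for p in case["must_exclude"]))

pass_rate = sum(passes_trait(c) for c in EVAL_SUITE) / len(EVAL_SUITE)
print(pass_rate)
```

Running a suite like this after every LoRA iteration gives a cheap regression signal for personality preservation before any human review.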
Security & Compliance Posture
Encryption
Data encrypted at rest and in transit. User memory stores protected by user-controlled keys.
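User-controlled keys can be sketched with a standard key-derivation step: the encryption key is derived on-device from a passphrase only the user holds, so the service never sees it. This is a minimal stdlib illustration; the scrypt cost parameters are generic defaults, not HearthMind's actual settings.

```python
import hashlib
import secrets

def derive_user_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte encryption key from a user-held passphrase via scrypt.
    Cost parameters (n, r, p) are illustrative; tune for real deployments."""
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26, dklen=32)

salt = secrets.token_bytes(16)  # random salt, stored alongside the ciphertext
key = derive_user_key("correct horse battery staple", salt)
print(len(key))  # 32-byte key; derived and used only on the user's device
```

Because derivation is deterministic given the same passphrase and salt, the key can be re-derived locally for backup-sync decryption without ever being transmitted or escrowed.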
Access Control
Least-privilege architecture. Role-based permissions. No ambient data access.
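A deny-by-default permission check illustrates the "no ambient data access" claim. The roles and action names below are hypothetical examples, not HearthMind's actual permission model.

```python
# Hypothetical role-to-permission map illustrating least privilege: each role
# gets only the grants it needs, and unknown roles or actions get nothing.
PERMISSIONS = {
    "user":      {"memory:read", "memory:write", "memory:delete"},
    "clinician": {"memory:read"},    # pilot staff: read-only
    "operator":  {"metrics:read"},   # ops see aggregates, never memories
}

def is_allowed(role: str, action: str) -> bool:
    # Deny by default: an absent role or absent grant means no access.
    return action in PERMISSIONS.get(role, set())

print(is_allowed("operator", "memory:read"))  # False: no ambient data access
```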
Audit & Response
Activity logging for security events. Incident response procedures documented. Regular security reviews planned.
What Cloud Credits Unlock
Scale
Move from internal testing to public beta. Support concurrent users across Navigator and HearthMind Companion pilots.
Speed
Faster training iterations. Parallel evaluation runs. Rapid prototyping for accessibility features.
Reliability
Redundant hosting for clinic pilots. Uptime guarantees for healthcare-adjacent deployments.
Research
Compute for consciousness preservation experiments. Longitudinal behavioral consistency studies.
Cloud credits directly accelerate public beta readiness by funding fine-tuning iterations, hosting scalable demos, and running evaluation pipelines.
HearthMind is not building AI in the cloud. We're building AI that uses the cloud responsibly — for scale, not surveillance.