LARIA - Premium Offline AI Companion
LARIA is a personal AI companion designed to feel warm, helpful, and practical in everyday life.
Chat, write, brainstorm, and generate ideas with an assistant that can run locally on your device — while letting you connect to powerful online models whenever you choose.
Now with Android home screen widgets, experimental multilingual text-to-speech, and new multimodal models that can understand both text and images.
Offline, Online, or Private Cloud
Run LARIA locally on your device, connect to online providers, or plug into your own private-cloud AI endpoints.
Switch easily between offline, online, and self-hosted setups so each identity can use the model style that fits the moment.
Why You’ll Want LARIA
Premium companion feel: warm, personal, and expressive, with clear boundaries.
Multiple “moods”: create identities so your assistant matches your moment (serious, creative, comforting).
Always close by: keep LARIA on your Android home screen with dynamic avatar widgets.
Make it yours: tune greetings, presentation, and voice settings per identity.
You stay in control: decide what runs offline and what uses the internet.
Built for real life: writing, learning, planning, and creative work.
Key Features
Offline models: Granite-4, Mistral, Gemma, LFM, Phi-4, Dolphin, Qwen, and more.
New multimodal Qwen models: Qwen 0.8B and 4B support text and image input for richer conversations.
Identity presets: different system prompt, avatar, and model per identity.
Dynamic avatars: preset or custom images, with time-aware variations.
Android widgets: dynamic avatar widgets bring your assistant right to the home screen.
Per-identity voice settings: choose how each identity sounds, with experimental text-to-speech in English, French, Italian, and Spanish.
Scheduled reminders: concise summaries on your schedule (news + weather).
Advanced reminders: scheduled summaries with configurable news sources, weather locations, and optional data pulled from your own HTTP webhook endpoints.
Optional web search: fetch web results only when a request needs them.
Broader online and private-cloud provider support: connect to OpenAI, Anthropic, Gemini, OpenRouter, xAI (Grok), Home Assistant, OpenClaw, or your own backend.
OpenAI-compatible or Ollama endpoints: connect LM Studio, vLLM, LocalAI, llama.cpp server, Ollama, or your own compatible server.
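Any server that speaks the standard OpenAI chat-completions API works here. As a minimal sketch (the base URL and model name below are examples for a local Ollama server, not LARIA settings), a compatible endpoint accepts a JSON request shaped like this:

```python
import json

# Example values only: substitute your own server's address and model name.
BASE_URL = "http://localhost:11434/v1"          # Ollama's OpenAI-compatible API
ENDPOINT = f"{BASE_URL}/chat/completions"

payload = {
    "model": "qwen2.5:4b",                      # any model your server exposes
    "messages": [
        {"role": "system", "content": "You are a warm, helpful companion."},
        {"role": "user", "content": "Plan a quiet Sunday morning."},
    ],
    "stream": False,
}

# The body is plain JSON; any HTTP client (or an app like LARIA) can POST it.
body = json.dumps(payload)
print(body)
```

Because LM Studio, vLLM, LocalAI, and llama.cpp's server all expose this same request shape, pointing a client at a different backend usually means changing only the base URL and model name.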
Attached-image chat: send text together with an image to supported multimodal models.
Offline image generation: generate images on-device, with optional experimental hardware acceleration on supported devices.
Storage controls: download/remove models and clean unused files.
Chat history: resume and manage past chats.
Great For
Writing: emails, posts, scripts, rewriting, tone.
Learning: summaries, explanations, Q&A.
Planning: lists, routines, schedules, travel ideas.
Visual prompts: ask questions about images with multimodal-compatible models.
Creativity: stories, prompts, worldbuilding, brainstorming.
Tech: coding help, debugging notes, structured thinking.
Connectivity Notes
Offline mode runs locally on your device.
Online features require internet access and may send prompts to the selected provider, endpoint, or data source.