
Atomic

Website · Docs · Discord · X

Open-source local AI — agents, chat, and inference. Private by default.


Atomic Agent


A local-first operator agent for llama.cpp. Ships as a standalone SEA (single executable application) binary, tuned for small local models. Data, traces, browser profile, memory, and model traffic all stay on your machine.

Install

curl -fsSL https://api.atomicbot.ai/agent-install | sh

Run

atomic-agent

Capabilities

  • System browser via ARIA snapshots, shell, filesystem, documents (PDF/DOCX/XLSX), git, clipboard, HTTP, notifications
  • GBNF grammar-constrained tool calls, parallel tool batches, cache-hot prompt prefix, externalized state in SQLite
  • Local Markdown skills loaded on demand, FTS5 note recall, durable cron and webhook-triggered tasks
  • TUI, CLI, OpenAI-compatible HTTP server, and a Tauri sidecar speaking newline-delimited JSON
  • Policy-gated dangerous actions, append-only NDJSON traces with prompt-drift replay
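The sidecar transport mentioned above can be sketched in TypeScript. This is a minimal illustration of consuming a newline-delimited JSON (NDJSON) stream; the `type`/`payload` message shape is an assumption for illustration, not Atomic Agent's actual wire schema.

```typescript
// Hypothetical sketch of parsing an NDJSON stream like the one the Tauri
// sidecar speaks. The SidecarMessage shape is an assumed example schema.
interface SidecarMessage {
  type: string;
  payload: unknown;
}

// Split a buffered chunk of stream output into parsed messages:
// one JSON object per line, blank lines ignored.
function parseNdjson(chunk: string): SidecarMessage[] {
  return chunk
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as SidecarMessage);
}

const sample = '{"type":"token","payload":"Hel"}\n{"type":"done","payload":null}\n';
const messages = parseNdjson(sample);
```

NDJSON's appeal for a sidecar protocol is that each line is an independent, parseable unit, so a consumer can process messages incrementally without framing logic beyond splitting on newlines.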

Atomic Chat


An open-source desktop AI app. Run local LLMs from Hugging Face or connect cloud models (OpenAI, Anthropic, Mistral, Groq, MiniMax, others). Available on macOS, Windows, and iOS.

Download

Download for macOS  Download for Windows  Download for iOS

Local OpenAI-compatible server

curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "llama-3.2-3b-instruct",
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }]
}'

Highlights

  • Run LLMs (Llama, Gemma, Qwen, others) from Hugging Face — fully offline
  • Connect cloud providers: OpenAI, Anthropic, Mistral, Groq, MiniMax
  • Custom assistants for specialized tasks
  • MCP integration for agentic capabilities
  • Native iOS app, not a wrapper

Atomic Hermes


A native autonomous AI agent for desktop. Built on the Hermes Agent core by Nous Research, with computer use, time-travel file history, and offline operation.

Download

Download for macOS

Highlights

  • Computer use with native OCR (Apple Vision / Windows.Media.Ocr) — pixel-accurate click coordinates, no guessing
  • Time-travel file history: every file the agent touches is snapshotted before and after, one-click diff or restore
  • Self-improving skills and memory: the agent writes its own procedures and decides what to remember across sessions
  • Bundled inference engine, or 20+ cloud providers (OpenRouter, Anthropic, OpenAI, Gemini, DeepSeek, others)
  • One agent across 16+ messengers — Telegram, Discord, Slack, WhatsApp, Signal, iMessage, Email, Matrix, Teams
  • 40+ tools, MCP-native: file ops, web search, code execution, subagents, cron, browser automation, agentskills.io Skills Hub
  • Approval modals for dangerous shell commands and writes

Atomic Bot


A native desktop app that turns OpenClaw (330k+ stars) into a personal AI assistant. No terminal, no config, no Docker.

Download

Download for macOS  Download for Windows  Linux — coming soon

Highlights

  • Drafts emails, schedules meetings, summarizes docs, automates the browser
  • 13,000+ skills from ClawHub
  • Multi-model: Claude, GPT, Gemini — switch on the fly with your own API keys
  • One AI across Telegram, Slack, Discord, WhatsApp
  • Built-in Whisper transcription, local or cloud
  • Persistent memory across sessions and tasks
  • Auto-updates to the latest OpenClaw release

Atomic LLaMA


A llama.cpp fork with TurboQuant KV cache compression and Gemma 4 MTP speculative decoding. ~30-50% throughput gains on the same hardware, drop-in compatible with upstream tools and GGUF.

TurboQuant KV cache

A WHT (Walsh-Hadamard transform) rotated 2/3/4-bit KV cache with backend-native kernels (Metal TurboFlash, CUDA, Vulkan, HIP). turbo3 is the default: 3-bit, ~4.3× compression vs F16.

llama-server -m model.gguf -c 32768 -ngl 99 -fa on \
  -ctk turbo3 -ctv turbo3

Gemma 4 MTP speculative decoding

Pair any gemma4 target with the official gemma4_assistant head; it loads into the target's context, so no second tokenizer or KV cache is needed. Expect +30-50% short-prompt throughput on Gemma 4 26B-A4B / 31B at an 85-88% accept rate. Pre-built assistant heads are available on Hugging Face.

llama-server -m gemma-4-target.gguf -c 16384 -ngl 99 -ngld 99 -fa on \
  --mtp-head gemma-4-assistant-Q4_K_M.gguf \
  --spec-type mtp --draft-block-size 3
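As a back-of-the-envelope check on why a 3-token draft block helps, the expected number of tokens emitted per target forward pass can be estimated from the accept rate. This sketch assumes the quoted 85-88% figure is a per-draft-token acceptance probability with independent accepts, which is a simplification; real throughput also depends on the assistant head's cost and verification overhead.

```typescript
// Expected tokens emitted per target verification pass in speculative
// decoding: the accepted draft prefix plus one token from the target model.
// Assumes each of the k draft tokens is accepted independently with
// probability p (a simplification of the real acceptance process).
function expectedTokensPerPass(p: number, k: number): number {
  let expected = 1; // the target always contributes one token per pass
  for (let i = 1; i <= k; i++) {
    expected += Math.pow(p, i); // P(first i draft tokens all accepted)
  }
  return expected;
}

// With --draft-block-size 3 and an assumed 85% per-token accept rate:
const tokensPerPass = expectedTokensPerPass(0.85, 3);
```

With perfect acceptance this tops out at k + 1 = 4 tokens per pass; the gap between that idealized ceiling and the quoted +30-50% gain reflects drafting and verification overhead.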

Also included

  • TQ3_1S / TQ4_1S weight quantization via llama-quantize — 25-35% smaller than Q8_0, single-digit % PPL delta
  • Regularly synced with ggml-org/llama.cpp
  • Powers local inference in Atomic Chat, Atomic Hermes, and Atomic Agent

Atomic Computer Use


Framework-agnostic desktop automation for AI agents. Screenshot, click, type, scroll, drag, OCR — works with any tool-calling LLM, MCP server, or custom pipeline.

Install

npm install @atomicbotai/computer-use

Use

import { screenshot, click, type } from "@atomicbotai/computer-use";

// Capture the screen plus OCR-derived UI anchors.
const { image, anchors } = await screenshot();

// Find the "Send" button by its recognized text; it may be absent.
const send = anchors.find(a => a.text === "Send");
if (!send) throw new Error('No "Send" anchor on screen');

await click(send.x, send.y); // pixel-accurate coordinates from OCR
await type("hello");


Highlights

  • Zero-dependency native OCR (Apple Vision on macOS, Windows.Media.Ocr on Windows) — no Tesseract, no cloud, no API keys
  • Pixel-accurate UI anchors: "Send" at (1450, 890) instead of guessing from a downscaled screenshot
  • Full action set: click / double / triple, type, press, scroll, drag, hold key, clipboard, app switch, list displays
  • Native overlay (Swift on macOS, PowerShell on Windows) shows when the agent is driving the mouse and keyboard
  • File-based session lock prevents two agents from fighting over the desktop
  • Guardrails against misclicks in dock/launcher and submit zones
  • Per-action debug artifacts: screenshots, OCR results, tool outputs
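Because the package is framework-agnostic, wiring it into a tool-calling LLM mostly means describing the actions as tool schemas. This sketch uses OpenAI-style function definitions as one common convention; the schemas themselves are illustrative, and only screenshot/click/type come from the example shown above.

```typescript
// Illustrative OpenAI-style tool definitions wrapping the computer-use
// actions from the example above. The schema format is one common
// convention for tool-calling LLMs, not something the package mandates.
const tools = [
  {
    type: "function",
    function: {
      name: "screenshot",
      description: "Capture the screen and return OCR text anchors",
      parameters: { type: "object", properties: {} },
    },
  },
  {
    type: "function",
    function: {
      name: "click",
      description: "Click at absolute pixel coordinates",
      parameters: {
        type: "object",
        properties: { x: { type: "number" }, y: { type: "number" } },
        required: ["x", "y"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "type",
      description: "Type text at the current keyboard focus",
      parameters: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
      },
    },
  },
];
```

An agent loop would pass `tools` to the model, dispatch each returned tool call to the matching package function, and feed the result (e.g. the anchors from `screenshot`) back as the tool response.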

ClawHub Layer API


A complete REST API for ClawHub. The official ClawHub API exposes a subset of the data; this layer aggregates everything into clean endpoints, no Convex knowledge required.

Highlights

  • Full coverage: skills, metadata, and everything the official API doesn't expose
  • Auto-syncing: periodically pulls and caches the latest data from Convex
  • Standard REST, no upstream dependency at query time
  • Drop-in for apps, bots, and workflows

Community

Discord  X  LinkedIn

© 2026 Atomic · atomicbot.ai

Popular repositories

  1. Atomic-Chat (Public, forked from janhq/jan)

     Atomic-Chat is an open-source alternative to ChatGPT that runs 100% offline on your computer.

     TypeScript · 621 stars · 54 forks

  2. atomicbot (Public, forked from openclaw/openclaw)

     The Fastest Way to Run OpenClaw 🦞

     TypeScript · 305 stars · 45 forks

  3. atomic-hermes (Public, forked from NousResearch/hermes-agent)

     The agent that grows with you

     Python · 130 stars · 7 forks

  4. atomic-llama-cpp-turboquant (Public, forked from TheTom/llama-cpp-turboquant)

     llama.cpp fork with TurboQuant WHT-rotated KV cache & weight compression + Gemma 4 MTP speculative decoding for ~30-50% throughput gains

     C++ · 81 stars · 9 forks

  5. Atomic-Chat-HQ (Public, forked from janhq/jan)

     TypeScript · 53 stars · 12 forks

  6. clawhub-layer-api (Public)

     🐾 Complete REST API for ClawHub skills marketplace data

     TypeScript · 11 stars · 2 forks
