Chat Console is a comprehensive terminal-based interface for interacting with various Large Language Models (LLMs) directly from your command line. This application provides an intuitive TUI (Text User Interface) for conducting AI conversations with multiple model providers.
- Interactive terminal UI built with the Textual library
- Support for multiple AI providers:
  - OpenAI (GPT-3.5, GPT-4)
  - Anthropic (Claude 3 Opus, Sonnet, Haiku, Claude 3.7 Sonnet)
  - Ollama (local models like Llama, Mistral, CodeLlama)
- Conversation history with search functionality
- Customizable response styles (concise, detailed, technical, friendly)
- Code syntax highlighting
- Markdown rendering
- SQLite database for persistent storage
pip install chat-console
git clone https://github.com/wazacraftrfid/chat-console.git
cd chat-console
pip install -e .
Create a .env file in your project directory with your API keys:
# OpenAI API key for GPT models
OPENAI_API_KEY=your_openai_api_key_here
# Anthropic API key for Claude models
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# Ollama base URL (optional, defaults to http://localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434
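chat-console reads these variables at startup. The app's actual loader is internal, but the .env format it expects can be sketched with a minimal stdlib parser (the `load_env` helper below is illustrative, not part of the package):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over .env entries
            os.environ.setdefault(key.strip(), value.strip())

load_env()
api_key = os.environ.get("OPENAI_API_KEY")
```

Keys already set in your shell environment take precedence over the .env file, which is the usual convention for tools of this kind.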
Start the application with:
chat-console
Or use the short alias:
c-c
You can also start it with an initial prompt:
chat-console "Explain quantum computing"
The ask command provides quick AI assistance with terminal output and errors. It can automatically capture your recent terminal activity and send it to an AI model for analysis and suggestions.
Basic Usage:
ask "Why did this command fail?"
ask "What does this error mean?"
Automatic Terminal Capture:
The ask command will automatically try to capture recent terminal output using various methods:
- tmux sessions: Captures pane content if you're using tmux
- GNU screen: Captures screen content if you're using screen
- Kitty terminal: Uses kitty's built-in capture API
- Clipboard detection: Checks clipboard for terminal-like content
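The detection order above can be sketched as a chain of fallbacks. The commands shown (`tmux capture-pane`, `kitty @ get-text`) are the standard CLIs for those terminals, but the ask command's real capture logic may differ in detail; kitty's remote control additionally requires `allow_remote_control` to be enabled:

```python
import os
import shutil
import subprocess

def capture_terminal_output(lines=50):
    """Try capture backends in order; return captured text or None.

    Illustrative sketch -- not the ask command's actual implementation.
    """
    # tmux sets the TMUX env var inside a session
    if os.environ.get("TMUX") and shutil.which("tmux"):
        result = subprocess.run(
            ["tmux", "capture-pane", "-p", "-S", f"-{lines}"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return result.stdout
    # kitty exposes a window id and a remote-control CLI
    if os.environ.get("KITTY_WINDOW_ID") and shutil.which("kitty"):
        result = subprocess.run(
            ["kitty", "@", "get-text", "--extent", "screen"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return result.stdout
    return None  # nothing captured: fall back to --paste or --history
```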
Manual Input Options:
ask --paste "What's wrong here?" # Manually paste terminal output
ask --history "Explain recent commands" # Include command history
ask --model gpt-4 "Help with this error" # Use specific model
Examples:
# After getting an error, just run:
ask "help me fix this"
# For installation issues:
ask "why did the installation fail?"
# Include recent command history:
ask --history "what went wrong with my setup?"
# Use a specific model:
ask --model claude-3-opus "explain this error"
Tips:
- Works best in tmux, screen, or kitty terminal for automatic capture
- If auto-capture fails, use `--paste` to manually provide output
- The ask command uses your last-used model from the main chat interface
- For Ollama models, make sure they're downloaded first:
ollama pull model-name
- `q` - Quit the application
- `n` - Start a new conversation
- `s` - Open settings panel (when input is not focused)
- `h` - View chat history
- `Escape` - Cancel current generation or close settings
- `Ctrl+C` - Quit the application
The Chat Console interface consists of:
- A conversation display area showing message history
- A text input area for entering prompts
- Model and style selectors in the settings panel
- Chat history browser
Chat Console supports different response styles that can be selected from the settings menu:
- Default: Standard assistant responses
- Concise: Brief and to-the-point responses
- Detailed: Comprehensive and thorough explanations
- Technical: Technical language with precise terminology
- Friendly: Warm, conversational tone
All conversations are stored in a SQLite database at `~/.terminalchat/chat_history.db`.
Configuration is stored at `~/.terminalchat/config.json`.
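Because the history is plain SQLite, you can inspect it with any SQLite client or a few lines of Python. The table and column names below are illustrative assumptions, not the app's documented schema, so check the real database with `.schema` first:

```python
import os
import sqlite3

# Default history location; expanduser resolves "~"
DB_PATH = os.path.expanduser("~/.terminalchat/chat_history.db")

def search_history(db_path, term):
    """Search stored messages with a simple LIKE query.

    NOTE: 'messages', 'conversation_id', 'role', and 'content' are
    assumed names -- verify against the actual schema before relying on this.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT conversation_id, role, content FROM messages "
            "WHERE content LIKE ? ORDER BY id DESC LIMIT 20",
            (f"%{term}%",),
        ).fetchall()
    finally:
        conn.close()
    return rows
```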
To use local models with Ollama:
- Install Ollama from https://ollama.ai
- Start the Ollama service: `ollama serve`
- Pull models you want to use: `ollama pull mistral`
- Launch Chat Console and select an Ollama model from the settings panel
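Under the hood, talking to a local Ollama instance is a plain HTTP call. The `/api/generate` endpoint with `model`, `prompt`, and `stream` fields is Ollama's documented REST API; chat-console's own client code may structure this differently. A stdlib-only sketch:

```python
import json
import urllib.request

# Matches the OLLAMA_BASE_URL default from the .env section
OLLAMA_BASE_URL = "http://localhost:11434"

def build_generate_request(model, prompt, base_url=OLLAMA_BASE_URL):
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def ask_ollama(model, prompt):
    """Send the request and return the model's text response."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_ollama("mistral", "Hello")` requires `ollama serve` to be running and the model to be pulled first.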
- `app/` - Main application code
- `api/` - API clients for different LLM providers
- `ui/` - Textual UI components
- `config.py` - Configuration management
- `database.py` - SQLite database operations
- `models.py` - Data models
- `utils.py` - Utility functions
- `main.py` - Application entry point
The included Makefile provides helpful commands:
- `make install` - Install in development mode
- `make dev` - Install development dependencies
- `make run` - Run the application
- `make demo` - Run with sample data
- `make clean` - Clean Python cache files
- `make format` - Format code with isort and black
- `make lint` - Run linter
- `make test` - Run tests
This project is licensed under the MIT License - see the LICENSE file for details.