Open Source · Self-Hosted · v1.0

Your AI Agent. Your Infrastructure.

Helium Bees is an open-source autonomous AI agent framework that runs entirely on your hardware. Connect any LLM, automate workflows, control browsers, and chat across every platform — all without giving up your data.

helium-bees — bash
$ git clone https://github.com/helium-bees/core
$ cd core && pip install -r requirements.txt
✓ Installing helium-core dependencies...
$ python agent.py --provider openrouter --channel telegram
🐝 Helium Bees agent started on port 8080
⚡ Connected to Telegram · Memory loaded · Skills active
→ Awaiting instructions...
Supports OpenRouter · Groq · Gemini · Alibaba + more
4 LLM Providers · 4 Messaging Channels · 100% Self-Hosted

Everything an agent needs.
Nothing it doesn't.

A complete toolkit for building autonomous AI agents that actually work in production — on your terms.

🧠
Multi-LLM Flexibility
Switch between providers without changing your code. OpenRouter, Groq, Gemini, and Alibaba supported out of the box with unified API abstraction.
OpenRouter · Groq · Gemini · Alibaba
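The "unified API abstraction" can be pictured as a small provider interface. This is a hedged sketch, not the framework's actual API — the class names, registry, and model string below are illustrative:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface every backend implements."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class GroqProvider(LLMProvider):
    def __init__(self, model: str = "llama-3.1-70b-versatile"):
        self.model = model

    def complete(self, prompt: str) -> str:
        # A real implementation would call the Groq HTTP API here.
        return f"[{self.model}] reply to: {prompt}"

PROVIDERS = {"groq": GroqProvider}  # register openrouter, gemini, ... the same way

def get_provider(name: str) -> LLMProvider:
    return PROVIDERS[name]()

llm = get_provider("groq")  # swapping providers means changing this one string
print(llm.complete("hello"))
```

Because agent code only ever sees `LLMProvider`, adding a backend is a new subclass plus one registry entry.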
💬
Omnichannel Messaging
Deploy your agent across Telegram, Discord, Slack, and WhatsApp simultaneously. One agent, every platform, unified conversation context.
Telegram · Discord · Slack · WhatsApp
🗄️
Memory & Skills System
Persistent vector memory lets your agent remember context across sessions. Modular skills extend capabilities — load only what you need.
Vector DB · Long-term Memory · Skill Modules
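The idea behind vector memory is simple: embed what the agent learns, then recall the nearest stored item by cosine similarity. A toy sketch (the real system would use a proper vector DB and an embedding model — the two-dimensional vectors here stand in for real embeddings):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VectorMemory:
    """Toy long-term memory: store (embedding, text), recall by similarity."""
    def __init__(self):
        self.items = []

    def remember(self, embedding, text):
        self.items.append((embedding, text))

    def recall(self, query, k=1):
        ranked = sorted(self.items, key=lambda it: cosine(query, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

mem = VectorMemory()
mem.remember([1.0, 0.0], "user prefers Telegram")
mem.remember([0.0, 1.0], "weekly report due Friday")
print(mem.recall([0.9, 0.1]))  # → ['user prefers Telegram']
```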
⏰
Cron Jobs & Automation
Schedule tasks with cron expressions. Your agent wakes up, executes workflows, sends reports, and goes back to sleep — fully autonomous.
Cron Scheduler · Workflows · Event Triggers
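At its core, cron scheduling means checking a five-field expression ("minute hour day-of-month month day-of-week") against the clock each minute. A minimal matcher, supporting only `*` and comma lists (a real scheduler also handles ranges, steps, and standard cron's Sunday-based weekday numbering):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_due(expr: str, now: datetime) -> bool:
    """True if the 5-field cron expression fires at this minute."""
    minute, hour, dom, month, dow = expr.split()
    return (
        _field_matches(minute, now.minute)
        and _field_matches(hour, now.hour)
        and _field_matches(dom, now.day)
        and _field_matches(month, now.month)
        and _field_matches(dow, now.weekday())  # note: 0 = Monday here
    )

# "Every day at 09:00" — fires at 09:00, not at 09:01
print(cron_due("0 9 * * *", datetime(2024, 5, 6, 9, 0)))  # → True
print(cron_due("0 9 * * *", datetime(2024, 5, 6, 9, 1)))  # → False
```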
🌐
Browser Control
Full Playwright-powered browser automation. Navigate, click, fill forms, extract data, take screenshots — your agent sees and interacts with the web.
Playwright · Web Scraping · Form Automation
💰
Cost Tracking
Real-time token usage and cost monitoring per provider, per conversation, per task. Set budgets, get alerts, and optimize your LLM spend intelligently.
Token Counting · Budget Alerts · Usage Reports
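Per-request cost tracking reduces to a price table and an accumulator. A sketch of the pattern — the prices and model keys below are placeholders, not actual provider rates:

```python
# Hypothetical (input, output) prices in USD per 1M tokens — check real rate cards.
PRICES = {
    "groq/llama-3.1-70b": (0.59, 0.79),
    "gemini/1.5-pro": (1.25, 5.00),
}

class CostTracker:
    """Accumulate per-call costs and warn when a budget is exceeded."""
    def __init__(self, budget_usd: float):
        self.budget = budget_usd
        self.spent = 0.0

    def record(self, model: str, tokens_in: int, tokens_out: int) -> float:
        p_in, p_out = PRICES[model]
        cost = (tokens_in * p_in + tokens_out * p_out) / 1_000_000
        self.spent += cost
        if self.spent > self.budget:
            print(f"budget exceeded: ${self.spent:.4f} > ${self.budget:.2f}")
        return cost

tracker = CostTracker(budget_usd=1.00)
cost = tracker.record("groq/llama-3.1-70b", tokens_in=1200, tokens_out=400)
print(f"${cost:.6f}")  # ≈ $0.001024
```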
💻
Code Execution Engine
Sandboxed Python and JavaScript execution. Your agent can write code, run it, inspect the output, and iterate — all in a secure isolated environment. Build data pipelines, generate reports, automate complex multi-step tasks with real computation.
Python Sandbox · JS Runtime · File I/O · Package Install · Secure Isolation
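The simplest form of sandboxed execution is a subprocess with a hard timeout. This sketch shows the shape of the write–run–inspect loop only; a production sandbox like the one described here would add containers, seccomp filters, and resource limits on top:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python in an isolated subprocess, capped by wall-clock time."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no user site dirs
        capture_output=True,
        text=True,
        timeout=timeout,  # raises TimeoutExpired if the code hangs
    )
    # Hand the agent stdout on success, stderr on failure, so it can iterate.
    return result.stdout if result.returncode == 0 else result.stderr

print(run_sandboxed("print(2 + 2)"))  # → 4
```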
🔒
100% Self-Hosted
Your data never leaves your infrastructure. Deploy on any Linux server, Docker container, or Raspberry Pi. Full control, zero vendor lock-in.
Docker · On-Premise · Air-Gapped

Built for the real world.

A layered architecture that separates concerns cleanly — swap any component without breaking the rest.

01
Configure Your Stack
Choose your LLM provider, set API keys, pick your messaging channels. A single YAML config file controls everything.
02
Load Skills & Memory
Enable the skill modules you need — browser, code execution, web search. Memory initializes from your vector store automatically.
03
Connect Your Channels
Webhook handlers spin up for each platform. Your agent is live on Telegram, Discord, and Slack simultaneously within seconds.
04
Automate & Scale
Set cron schedules, define workflows, monitor costs. Your agent runs 24/7, learns from interactions, and gets smarter over time.
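Step 01's single config file might look something like this — illustrative keys only, not the project's actual schema; check the repo for the real one:

```yaml
# config.yaml (hypothetical layout)
llm:
  provider: groq
  model: llama-3.1-70b
  api_key: ${GROQ_API_KEY}

channels:
  - telegram
  - discord

skills:
  - browser
  - code_execution
  - web_search

schedule:
  daily_report: "0 9 * * *"
```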
// System Architecture
🌐 Channel Layer
Telegram · Discord · Slack
🧠 Agent Core
Planner · Router · Memory
⚡ Skills Engine
Browser · Code · Search
🔌 LLM Providers
OpenRouter · Groq · Gemini

Meet your users
where they are.

Deploy once, reach everywhere. Helium Bees maintains unified conversation context across all platforms — your agent remembers who it's talking to.

✈️
Telegram
Bot API with inline keyboards, file sharing, groups
Live
🎮
Discord
Slash commands, embeds, server-wide deployment
Live
💼
Slack
Workspace bots, channel monitoring, app actions
Live
📱
WhatsApp
Business API integration, media messages
Beta
// Live conversation
👤
Hey, can you scrape the top 10 AI papers from arxiv today and summarize them?
09:41 · Telegram
🐝
On it! Launching browser, navigating to arxiv.org/cs.AI...
09:41 · Helium Bees
🐝
✅ Found 10 papers. Summarizing with Groq (llama-3.1-70b)... Cost so far: $0.0012
09:42 · Helium Bees
👤
Also post the summary to our #research Discord channel
09:42 · Discord
🐝
Done! Posted to #research. Total cost: $0.0031 🎯
09:43 · Helium Bees

Pick your model.
Control your costs.

Helium Bees tracks every token, every cent. Switch providers mid-conversation or route different tasks to different models automatically.
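Routing different tasks to different models can be as plain as a lookup table with a fallback. A sketch — the task types, provider names, and model strings here are illustrative, not the framework's built-in routes:

```python
# Hypothetical task-type → (provider, model) routing table.
ROUTES = {
    "fast_chat": ("groq", "llama-3.1-8b-instant"),
    "vision": ("gemini", "gemini-1.5-pro"),
    "long_context": ("gemini", "gemini-1.5-pro"),
}
DEFAULT = ("openrouter", "auto")

def route(task_type: str) -> tuple[str, str]:
    """Pick a (provider, model) pair for a task; unknown tasks use the default."""
    return ROUTES.get(task_type, DEFAULT)

print(route("fast_chat"))  # → ('groq', 'llama-3.1-8b-instant')
print(route("unknown"))    # → ('openrouter', 'auto')
```

Because routing is data, a budget alert or latency spike can rewrite `ROUTES` at runtime without touching agent logic.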

⚡
Groq
Ultra-Fast
Inference at 500+ tokens/sec
  • LLaMA 3.1 70B / 8B
  • Mixtral 8x7B
  • Gemma 2 9B
  • Real-time streaming
💎
Gemini + Alibaba
Multimodal
Vision, audio, long context
  • Gemini 1.5 Pro (1M ctx)
  • Qwen 2.5 series
  • Image & video input
  • Cost-optimized routing
// Cost Dashboard — Today
Auto-refreshes every 60s
TOTAL SPEND
$0.47
TOKENS USED
1.2M
REQUESTS
847
AVG LATENCY
312ms

Deploy your agent
in under 5 minutes.

Clone the repo, set your API keys, and your autonomous AI agent is live. No cloud accounts, no subscriptions, no data leaving your server.

Open source under MIT License · View on GitHub · Join Discord