Akmatori Blog
I Run OpenClaw as My AI Personal Assistant 24/7. Here Is What Actually Works (And What Does Not)

There is a gap between "I asked ChatGPT to help me write an email" and "I have an AI agent running 24/7 that manages my finances, calendar, work reports, and personal journal." This post is about the second one.
I have been running a self-hosted AI assistant using OpenClaw for several months now. It connects to my Telegram, has access to my APIs and calendars, runs cron jobs on a schedule, and acts as something between a personal secretary and a sysadmin daemon.
Here is what the actual setup looks like, what genuinely saves time, and where AI agents still fall apart.
The Setup
The assistant runs on a VPS, connected to Telegram as the primary interface. It has access to:
- Google Calendar (personal + work) via CalDAV
- ZenMoney (personal finance app) API
- Work task management systems
- News and RSS feeds
- File system for notes, journals, and memory files
Everything is orchestrated through cron jobs that fire at specific times, plus an interactive chat interface for ad-hoc requests.
What Runs Daily
Morning: News Digest
Every morning, the assistant compiles an AI and tech news digest. It pulls from multiple sources, filters for relevance, and sends a summary to Telegram.
This is one of the most reliable automations. News aggregation is a well-defined task: fetch, filter, summarize, send. The LLM is good at this because it does not need to maintain state or interact with complex APIs.
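The fetch-filter-summarize-send pipeline can be sketched as below. This is a minimal illustration, not OpenClaw's actual implementation: the item shape and keyword list are my own assumptions.

```python
# Hypothetical sketch of the digest pipeline: fetch -> filter -> summarize -> send.
# Item fields ("title", "url") and the keyword set are illustrative assumptions.

KEYWORDS = {"ai", "llm", "agent", "agents", "model"}

def filter_relevant(items, keywords=KEYWORDS):
    """Keep items whose title contains at least one keyword (case-insensitive)."""
    return [it for it in items if keywords & set(it["title"].lower().split())]

def build_digest(items, limit=5):
    """Collapse the top items into a short bullet list ready for Telegram."""
    return "\n".join(f"- {it['title']} ({it['url']})" for it in items[:limit])
```

The constrained shape is the point: the LLM only has to summarize, while selection and formatting stay deterministic.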
Verdict: Works well. Saves 15-20 minutes of morning scrolling.
Morning: Financial Analytics
The assistant connects to ZenMoney, pulls transaction data, and runs analytics. It also updates brokerage account balances and makes corrections where needed.
This works, but it requires careful prompt engineering. Financial APIs return structured data, and the LLM has to do math, categorize transactions, and sometimes make API calls to update balances. Errors occasionally creep in around currency conversion, or when the API response format changes.
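One way to avoid silent failures here is a deterministic sanity-check layer in front of any balance update, so format drift fails loudly. A minimal sketch; the field names mirror typical transaction payloads, not ZenMoney's documented schema:

```python
# Hypothetical validation layer run before any balance update. Field names
# ("currency", "amount") are assumptions, not ZenMoney's actual schema.

KNOWN_CURRENCIES = {"USD", "EUR", "RUB"}

def validate_transaction(tx):
    """Return a list of problems; an empty list means the record looks sane."""
    problems = []
    if tx.get("currency") not in KNOWN_CURRENCIES:
        problems.append(f"unknown currency: {tx.get('currency')!r}")
    amount = tx.get("amount")
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        problems.append("amount is not numeric (response format may have changed)")
    return problems
```

Anything that fails validation gets flagged to Telegram instead of written back, which turns a silent miscategorization into a visible question.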
Verdict: Works, but needs monitoring. Financial data is not where you want silent failures.
Morning: Calendar Check
The assistant checks both personal and work CalDAV calendars. When a new meeting appears, it sends an inline confirmation button in Telegram. This is useful for meetings added by others that you might miss.
The CalDAV integration is straightforward and reliable. Calendar data is structured and predictable.
Verdict: Reliable. The inline buttons for confirming meetings feel genuinely useful.
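The "new meeting appeared" detection reduces to a set difference over event UIDs. A sketch under the assumption that events carry a stable `uid` field, as CalDAV events do:

```python
# Hypothetical diff behind the "new meeting" notification: compare current
# CalDAV event UIDs against the set seen on the previous check and report
# only the additions. The event dict shape is an assumption.

def find_new_events(current_events, seen_uids):
    """Return events whose UID was not present on the previous check."""
    return [ev for ev in current_events if ev["uid"] not in seen_uids]
```

Each new event then becomes one Telegram message with an inline confirm button.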
Evening: Journaling
This is the most interesting automation. In the evening, the assistant writes a journal entry based on the day's activities: what tasks were completed, what conversations happened, what decisions were made. It maintains both my journal and its own "agent journal" about its performance and learnings.
The quality varies. On days with lots of clear activity (commits, messages, completed tasks), the journal is useful. On quiet days, it sometimes invents activities or writes generic filler.
Verdict: Useful but not fully trustworthy. Best as a draft that you review, not a final record.
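One guard against filler on quiet days is to pick the journal mode from the day's activity count before the model is ever asked to write. A hypothetical sketch; the threshold is arbitrary and would be tuned to your own signal volume:

```python
# Hypothetical guard against invented activities: decide how much journal to
# generate from the amount of real signal, instead of always requesting a
# full entry. The threshold of 3 is an arbitrary illustration.

def journal_mode(activity_count, threshold=3):
    """Decide how much journal to generate for the day."""
    if activity_count == 0:
        return "skip"    # nothing happened; do not let the model invent a day
    if activity_count < threshold:
        return "brief"   # one-line factual note only
    return "full"        # enough real signal for a proper entry
```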
Weekly/Monthly/Quarterly: Work Reports
The assistant generates structured reports from work task systems. Weekly reports summarize completed tasks and blockers. Monthly reports add trends and metrics. Quarterly reports provide higher-level analysis.
For anyone who has ever scrambled to remember what they did last quarter during a performance review, this is genuinely valuable. The raw data is already in the task management system; the assistant just structures it into a readable format.
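Because the facts already live in the task system, the structuring step can be mostly deterministic, with the LLM only polishing prose on top. A sketch with illustrative status values:

```python
# Hypothetical structuring step for a weekly report: group task titles by
# status. The status values ("done", "blocked") are illustrative, not the
# schema of any particular task management system.

from collections import defaultdict

def weekly_report(tasks):
    """Group task titles into Completed / Blockers sections."""
    sections = defaultdict(list)
    for t in tasks:
        sections[t["status"]].append(t["title"])
    lines = ["Completed:"] + [f"- {title}" for title in sections["done"]]
    lines += ["Blockers:"] + [f"- {title}" for title in sections["blocked"]]
    return "\n".join(lines)
```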
Verdict: One of the highest-value automations. Saves hours of report writing per quarter.
Night: Self-Improvement
The assistant has a late-night cron that attempts to improve its own configuration, prompts, and workflows. It reviews what went wrong during the day and tries to fix it.
In practice, this is hit-or-miss. Sometimes it makes useful improvements to its system prompts or memory files. Other times it makes changes that break things or optimize for the wrong metric. In the end, I request improvements manually more often than the automated runs succeed on their own.
Verdict: Ambitious idea. Reality is that most self-improvement still needs human direction.
The Honest Problems
Context Loss Within a Single Session
This is the biggest issue. The assistant can lose context between adjacent messages in the same conversation. You discuss topic A and it responds coherently. Then you follow up with a related question, and it acts as if the previous message never happened.
This is not a theoretical concern. It happens regularly and is the single most frustrating aspect of the setup. Modern LLMs have large context windows, but the way sessions are managed means context does not always flow cleanly between messages.
Cron Jobs Run in Separate Sessions
Each scheduled cron job runs in its own isolated session. This means:
- The morning news digest does not know what the calendar check found
- The financial analytics cannot reference the work tasks
- The evening journal does not have full context of what the cron jobs discovered
You end up with an assistant that knows things in fragments but cannot synthesize across its own automated tasks. The interactive chat session is separate from the cron sessions, so even asking "what did you find in my calendar this morning?" requires re-fetching rather than recalling.
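One workaround I find plausible for this fragmentation is a shared scratch file: every cron job appends its findings, and the chat session reads that file instead of re-fetching. The file name and entry format below are my own convention, not an OpenClaw feature:

```python
# Hypothetical shared scratch file bridging isolated sessions: cron jobs
# append findings as JSON lines; any later session can recall them. The
# file name and entry fields are my convention, not OpenClaw's.

import json
import pathlib
from datetime import datetime, timezone

SCRATCH = pathlib.Path("daily_findings.jsonl")

def record_finding(job, summary, path=SCRATCH):
    """Append one job's finding as a JSON line."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "job": job,
        "summary": summary,
    }
    with path.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def recall(job, path=SCRATCH):
    """Return everything the given job recorded, oldest first."""
    if not path.exists():
        return []
    entries = (json.loads(line) for line in path.read_text().splitlines())
    return [e["summary"] for e in entries if e["job"] == job]
```

With this in place, "what did you find in my calendar this morning?" becomes a file read rather than a fresh CalDAV fetch.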
Unpredictability
The same prompt, the same data, the same time of day can produce wildly different results. Sometimes the financial analytics are precise and actionable. Sometimes the same cron job hallucinates transactions or miscategorizes expenses.
For recreational automations like news digests, this is fine. For anything involving money or work commitments, it means you cannot fully trust the output without verification.
What I Learned
Structure Your Automations by Trust Level
Not all tasks deserve the same level of autonomy:
| Trust Level | Task Type | Example |
|---|---|---|
| High autonomy | Read-only aggregation | News digest, calendar check |
| Medium autonomy | Structured data + human review | Work reports, journal drafts |
| Low autonomy | Write operations | Financial corrections, calendar confirmations |
| Manual trigger only | External communications | Emails, messages to others |
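A trust scheme like this can be enforced mechanically rather than left to the prompt. A minimal sketch of such a gate; the task names and levels are illustrative:

```python
# Hypothetical gate enforcing the trust table: read-only and reviewed tasks
# run freely, while write operations and external communications require an
# explicit human approval flag. Task names are illustrative.

TRUST_LEVELS = {
    "news_digest": "high",
    "calendar_check": "high",
    "work_report": "medium",
    "balance_update": "low",
    "send_email": "manual",
}

def may_run(task, human_approved=False):
    """Return True if the task is allowed to execute right now."""
    level = TRUST_LEVELS.get(task, "manual")  # unknown tasks default to manual
    if level in ("high", "medium"):           # medium output is reviewed afterwards
        return True
    return human_approved                     # "low" and "manual" need a human
```

Defaulting unknown tasks to manual is the important design choice: anything the table does not explicitly trust waits for a person.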
Memory Is the Hard Problem
LLMs are stateless. Every session starts fresh. The assistant uses markdown files as external memory (daily notes, long-term memory, tool configurations), but this is a workaround, not a solution.
The real limitation is not the LLM's intelligence but its continuity. A human assistant who forgot everything every morning but was brilliant for 8 hours would still be frustrating to work with.
Cron Jobs Are Better Than Chat for Routine Tasks
Counterintuitively, scheduled automations work better than asking the assistant to do things interactively. Cron jobs have:
- Fixed, well-tested prompts
- Consistent data sources
- Predictable timing
- No conversational context to lose
Interactive chat is better for ad-hoc questions and one-off tasks. But for anything you do daily, write a cron job with a specific prompt and let it run.
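The properties above suggest a fixed shape for each scheduled automation. A hypothetical job definition; none of these field names come from OpenClaw itself:

```python
# Hypothetical shape of one scheduled automation: fixed cron schedule, fixed
# prompt, fixed source, fixed output channel. Field names are illustrative.

MORNING_DIGEST_JOB = {
    "schedule": "0 7 * * *",  # standard cron syntax: every day at 07:00
    "prompt": "Summarize today's AI and tech news into five bullets.",
    "source": "rss_feeds",
    "output": "telegram",
}

def is_valid_job(job):
    """Minimal check that a job definition carries every required field."""
    required = {"schedule", "prompt", "source", "output"}
    return required <= set(job)
```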
The 80% Assistant
The honest assessment: this setup handles about 80% of what a human personal assistant would do, at maybe 60% quality. The remaining 20% (judgment calls, nuanced communication, cross-referencing information across contexts) is where it falls apart.
But that 80% at 60% quality is still valuable because it runs 24/7, costs a fraction of what a human assistant would, and handles the boring stuff you would procrastinate on anyway.
The Stack
For anyone who wants to replicate this:
- Runtime: OpenClaw on a VPS
- Interface: Telegram bot
- LLM: Configurable (OpenAI, Anthropic, Google, etc.)
- Calendar: CalDAV integration
- Finance: ZenMoney API
- Memory: Markdown files (MEMORY.md, daily notes, learnings)
- Scheduling: Built-in cron system with isolated sessions
The key insight is that this is not one big AI agent. It is a collection of small, focused automations (cron jobs) plus an interactive fallback (chat). Each automation has a specific prompt, specific data sources, and a specific output format. The more constrained the task, the more reliable the output.
Would I Recommend It?
Yes, with caveats.
If you are comfortable self-hosting, debugging prompt failures, and treating the output as "first draft" rather than "final answer," an always-on AI assistant is genuinely useful. The work reports alone justify the setup cost.
If you expect it to work like a reliable human assistant from day one, you will be disappointed. The technology is impressive but inconsistent. Every week brings a mix of moments where the assistant feels like magic and moments where it feels like a very confident intern who just started.
The trajectory is clearly upward. Each model generation gets more reliable, context windows get larger, and tool use gets better. The setup I described will work significantly better in a year. But right now, it is a power user tool, not a consumer product.
Akmatori builds AI agents for SRE incident management. The same principles apply: constrain the task, structure the data, keep humans in the loop for critical decisions. The agents that work best are not the most autonomous ones. They are the ones with the clearest boundaries.
