2026-02-03

Experiments with OpenClaw and running it in a VM for better security.

Tidied script

Below is a concise, structured script outline suitable for a 5–10 minute microcast. It keeps close to your language, tightens it, and is designed to be read or lightly improvised.


AI Learn Today – Microcast #1

OpenClaw: What It Is, Why It Matters, and What We’re Learning

1. Opening (30–45s)

  • Today we’re sharing a quick update on what we’ve been experimenting with.
  • Specifically: OpenClaw, autonomous AI agents, and some early lessons—especially around security and deployment.
  • This is not a deep dive or tutorial—just a snapshot of what’s live and interesting right now.

2. What Is OpenClaw? (1–2 min)

  • OpenClaw is an open-source framework for autonomous AI agents.

  • At a basic level:

    • You already have LLMs that generate text.
    • Agents go one step further: they act, not just respond.
  • An agent:

    • Uses an LLM for reasoning
    • Can control tools (files, APIs, browsers, CLIs, services)
    • Can plan, loop, and execute multi-step tasks
  • OpenClaw provides the scaffolding to wire all that together in a flexible, extensible way.
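The reason-act-observe loop described above can be sketched in a few lines. This is a toy illustration with a stubbed LLM and a stubbed filesystem tool; none of these names are OpenClaw's real API.

```python
# Toy sketch of an agent loop: an LLM "reasons" (stubbed here), picks a
# tool, the loop executes it, and the result feeds back in until the model
# decides it is finished. All names are illustrative, not OpenClaw's API.

def stub_llm(history):
    """Stand-in for a real LLM call: choose the next action from history."""
    if not any(step[0] == "list_files" for step in history):
        return ("list_files", ".")
    return ("finish", "done")

TOOLS = {
    # Stubbed tool; a real agent would touch the actual filesystem here.
    "list_files": lambda path: ["notes.md", "script.txt"],
}

def run_agent(llm, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = llm(history)
        if action == "finish":
            return history
        result = TOOLS[action](arg)  # the "act" step: the agent uses a tool
        history.append((action, arg, result))
    return history
```

The key structural point is the feedback loop: each tool result re-enters the model's context, which is what lets an agent plan and execute multi-step tasks rather than answer once.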


3. Why Agents Matter (1–2 min)

  • Most people already “have AI,” but:

    • It’s usually locked into a single interface (a web app, a chat window).
    • It has limited memory and limited agency.
  • Agents change that by:

    • Turning AI into a process, not just a chat
    • Allowing it to operate continuously or semi-autonomously
    • Letting it interact with real systems and workflows
  • This is the shift from:

    • AI as a tool → AI as an assistant / operator

4. Interfaces: Meeting People Where They Are (1 min)

  • Most people interact with AI via:

    • Browser tabs
    • Mobile apps
  • But in practice, many of us live in:

    • WhatsApp
    • Telegram
    • Discord
    • Slack
  • A big advantage of agent frameworks like OpenClaw:

    • You can connect the same agent to multiple familiar interfaces
    • The agent travels to your workflow, not the other way around

5. Memory, Personality, and Ownership (1–2 min)

  • Another core issue: memory

  • A real assistant should:

    • Remember things about you
    • Develop continuity over time
  • Today, memory often lives:

    • Inside proprietary platforms
    • Locked into a specific vendor’s database
  • What’s interesting about agents:

    • Memory can live with the agent itself
    • Potentially portable, inspectable, and user-controlled
  • This opens questions about:

    • Data ownership
    • Long-term assistants
    • Trust and continuity
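"Memory that lives with the agent" can be as simple as a plain file the agent owns and the user can open, edit, or delete. A minimal sketch, assuming a JSON file as the store (illustrative only, not OpenClaw's actual memory format):

```python
import json
import pathlib

# Hypothetical agent-owned memory: a plain JSON file that travels with the
# agent, so it is portable, inspectable, and user-controlled by construction.
MEMORY_PATH = pathlib.Path("agent-memory.json")

def remember(key, value):
    """Persist a fact to the agent's own memory file."""
    data = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else {}
    data[key] = value
    MEMORY_PATH.write_text(json.dumps(data, indent=2))

def recall(key, default=None):
    """Read a fact back; missing file or key falls back to the default."""
    if not MEMORY_PATH.exists():
        return default
    return json.loads(MEMORY_PATH.read_text()).get(key, default)
```

The design choice worth noting: because the store is a readable file next to the agent rather than a row in a vendor's database, switching tools or auditing what the assistant knows about you is trivial.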

6. Why OpenClaw, and Why Now? (1 min)

  • People have been building agents for several years.

  • But OpenClaw has seen a recent surge in attention, especially in the last few weeks.

  • Reasons include:

    • Maturing LLM capabilities
    • Better tooling
    • Strong open-source momentum
  • We’ve played with similar systems before, but this feels like an inflection point.


7. Security and Risk: The Hard Part (1–2 min)

  • The more power you give an agent, the more dangerous it can be.

  • Especially if:

    • It runs on your local machine
    • It has filesystem, network, or credential access
  • Obvious red flags:

    • Access to personal files
    • Access to financial systems
    • Unclear permission boundaries
  • This is not hypothetical—it’s a real design constraint.


8. What We’ve Been Experimenting With (1–2 min)

  • Our focus so far has been:

    • Deployment isolation
    • Permission boundaries
  • In particular:

    • Running agents inside virtual machines
    • Being explicit about what they can and cannot access
    • Treating agents more like semi-trusted processes than “magic helpers”
  • Early takeaway:

    • Architecture and containment matter as much as prompts.
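As a concrete sketch of what "explicit about what they can and cannot access" means in practice, here is roughly what a locked-down container invocation could look like. This is a hypothetical setup, not OpenClaw's documented deployment; the image name and entrypoint script are placeholders.

```shell
# Hypothetical containment sketch: the agent runs with no network, a
# read-only root filesystem, all Linux capabilities dropped, a bounded
# process count, and exactly one writable scratch directory from the host.
docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  --pids-limit 128 \
  --mount type=bind,src="$PWD/agent-scratch",dst=/work \
  example/agent-sandbox:latest /work/run-agent.sh
```

The point of starting from "deny everything" is that each permission the agent actually needs (say, outbound network to one API) becomes a deliberate, visible decision rather than an inherited default.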

9. Closing (30–45s)

  • This is an early snapshot, not a conclusion.

  • We’ll keep sharing:

    • What works
    • What feels risky
    • What’s genuinely useful versus hype
  • If you’re experimenting with agents too, this is a space worth watching closely.


If you want, next steps could be:

  • A tight 60–90s version of this
  • A follow-up episode purely on security models for agents
  • Or a hands-on “how we set it up” walkthrough (conceptual, not tutorial)

Raw

I wanted to quickly create a script for our first AI Learn Today short microcast, like what we've been playing around with today. I think we want to talk about OpenClaw, setting it up, and about the fact that there's security stuff. First of all, just what OpenClaw is. So, yeah, we should just explain that for people. What is it? So, yeah, so it's open source autonomous artificial intelligence processes. Yeah, so just one minute. So the point here that's going on is people already have AI. You know, tooling, but you want to really turn AI into an agent and something, an artificial, you know, you've got the LLM, you've got something, large language model, you've got something that can generate this text, but you want it to actually control other tooling and learn how to kind of, you know, connect it, but also give it the instructions to do so. And that's what agents are. And then you also want to connect it to interfaces that are convenient. You know, I'm sure you can go to ChatGPT in your browser or on an app on your phone, but what happens if you just wanted to chat with it through your default, you know, through WhatsApp or Telegram or Discord or whatever other tool that you just conveniently use? And the other issue that AI systems and agents often have is about memory and kind of what I say personality, but that, you know, a proper assistant and an agent would actually know stuff about you and remember things. And we started to have things like memory in AI tools, but what I'd you'd kind of prefer that memory to not be locked into a particular tool, but rather independent as part of the agent rather than as part of ChatGPT's, you know, database and what they have about you. And that's what people have been playing around creating agents now for several years at the moment, LLMs really broke through and before, but this open source project, OpenClaw has really taken off in the last few months and particularly in the last week, really, for a variety of reasons. 
And so we've been playing around with it. We've been playing around with things like this before, but we've been playing around with this now and installing it. Of course, the first thing is the more power you give an AI agent, obviously the more dangerous it is, particularly people who are running this on their local machine and in which case has access to kind of everything or connected to lots of things, you know, into your bank account. That would not maybe be that wise. And so one of the things we were interested in is, like, how do we deploy this in a virtual machine? What kind of access we give it and things like that. And we'll talk briefly about our experiences so far.