OpenClaw installation and gateway bootstrap
The best way to understand OpenClaw is to try it. The safest way to do that is to run it inside a container.
OpenClaw is an open-source, self-hosted AI agent framework that turns your machine into a persistent, autonomous personal assistant.
Some tool calls are too consequential to let an agent run unsupervised. Function approvals pause the agent run when a destructive or sensitive tool is about to fire, surface the call to a human, and resume only after explicit approval. This article looks at how to mark a tool as approval-required, how the run pauses, and how to wire approve and reject back into the agent.
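As a rough, framework-agnostic sketch of that pause-and-resume flow (the names `PendingApproval`, `REQUIRES_APPROVAL`, `run_tool`, and `resume` below are illustrative, not Microsoft Agent Framework APIs):

```python
# Framework-agnostic sketch of the approval-gate pattern: consequential tools
# pause the run, surface the call, and only execute after explicit approval.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PendingApproval:
    """A paused tool call waiting for a human decision (illustrative type)."""
    tool_name: str
    arguments: dict


def delete_customer_record(customer_id: str) -> str:
    return f"deleted {customer_id}"


# Tools flagged as too consequential to run unsupervised.
REQUIRES_APPROVAL = {"delete_customer_record"}


def run_tool(tool: Callable, arguments: dict) -> str | PendingApproval:
    """Execute the tool, or pause and surface the call if it needs approval."""
    if tool.__name__ in REQUIRES_APPROVAL:
        return PendingApproval(tool.__name__, arguments)
    return tool(**arguments)


def resume(pending: PendingApproval, approved: bool) -> str:
    """Resume the paused call after an explicit human decision."""
    if not approved:
        return "tool call rejected by user"
    return delete_customer_record(**pending.arguments)


result = run_tool(delete_customer_record, {"customer_id": "42"})
if isinstance(result, PendingApproval):
    print(f"Awaiting approval for {result.tool_name}{result.arguments}")
    print(resume(result, approved=True))
```

The framework's own version of this adds serialization of the paused state so the run can be resumed in a later process, but the shape of the flow is the same.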
A workflow is a graph that runs in discrete supersteps. That structure makes two things easy that are usually hard: drawing the graph, and pausing the run cleanly so you can resume it later. This article looks at WorkflowViz for visualization and the CheckpointStorage system for persistence.
Sequential, concurrent, handoff, group chat, and magentic workflows are all built on the same four primitives: executors, edges, events, and a superstep-based execution model. This article looks at those primitives directly, so you know what is happening underneath when you reach for an orchestration builder, and so you can build workflows the builders do not cover.
AgentSession keeps one conversation alive for as long as the Python process runs. Context providers let an agent remember things across sessions, across users, and across long timescales. This article looks at the provider abstraction, the built-in history and memory providers, and how to write your own.
We have spent the last two articles consuming MCP servers from a MAF agent. This one flips the picture: how to publish a MAF agent as an MCP server so that other agents and applications, in any framework, can call it as a tool.
Foundry can host the MCP connection on the agent's behalf. This article looks at how a hosted MCP tool differs from the local MCP setup we built last time, where authentication moves, and how to wire one up via FoundryChatClient.get_mcp_tool.
Function tools cover code you write yourself. MCP servers cover tools that other people write. This article looks at how to connect a Microsoft Agent Framework agent to a local MCP server, what the agent actually sees when you do, and how MCP tools coexist with the function tools we have already used.
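For a sense of what "the agent actually sees", here is a hedged sketch that uses the official `mcp` Python SDK's stdio client to list a local server's tools; the server script name is a placeholder, and the MAF wiring itself is left to the article:

```python
# Sketch: listing the tools a local MCP server exposes, using the `mcp`
# Python SDK. "my_server.py" is a placeholder for your own server script.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server as a local subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Essentially what an agent is handed: named tools with schemas.
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```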
Free-text replies are fragile to parse, especially when downstream code depends on the answer's shape. This article looks at how Microsoft Agent Framework constrains an agent's reply to a Pydantic model, when to use it, and how it interacts with tool calls and streaming.
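To make the idea concrete, here is a minimal sketch of the Pydantic side only; the agent call that produces the JSON is framework-specific and omitted, and `WeatherReport` is a made-up example model:

```python
# The target shape for a structured reply, plus how a JSON reply validates
# into it. Downstream code gets typed fields instead of string parsing.
from pydantic import BaseModel


class WeatherReport(BaseModel):
    city: str
    temperature_c: float
    summary: str


# Imagine this JSON is the agent's constrained reply.
raw_reply = '{"city": "Oslo", "temperature_c": -3.5, "summary": "Light snow"}'

report = WeatherReport.model_validate_json(raw_reply)
print(report.city, report.temperature_c)
```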
MAF middleware lets you intercept agent runs, tool calls, and the underlying chat-client requests. This article looks at the three middleware types, when each one fires, and how to use them for logging, redaction, blocking, and result overrides without touching your agent code.
Real applications rarely call an agent once and walk away. They hold conversations, and they often stream the response as it is generated. This article looks at AgentSession, streaming responses, and how the two work together.
Function tools turn ordinary Python functions into capabilities your agent can call. This article looks at how MAF converts type hints into schemas, what return types it accepts, how async tools work, and what happens when a tool raises an error.
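As a framework-agnostic illustration of the underlying idea (the `rough_schema` helper below is hypothetical, not a MAF API), type hints and a docstring already carry enough information to derive a tool schema:

```python
# A plain Python function with type hints, and a minimal JSON-schema-like
# description derived from nothing but its signature and docstring.
import inspect
from typing import get_type_hints


def get_order_status(order_id: str, include_history: bool = False) -> str:
    """Look up the current status of an order."""
    return f"Order {order_id}: shipped (history={include_history})"


def rough_schema(fn) -> dict:
    """Build a rough tool description from the function's hints alone."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {str: "string", bool: "boolean", int: "integer", float: "number"}
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: {
                "type": type_map.get(hints.get(name), "string"),
                "required": sig.parameters[name].default is inspect.Parameter.empty,
            }
            for name in sig.parameters
        },
    }


print(rough_schema(get_order_status))
```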
Across this series, we have used several Microsoft Agent Framework client classes to talk to different model surfaces. This article maps them out, explains when each one fits, and shows the same simple agent built three different ways.
The latest release of Microsoft Agent Framework adds support for Agent Skills. We will explore implementing a practical use case for Skills with MAF.
Dive into a powerful new feature in Google ADK: Agent Skills. We will explore what Agent Skills are, why they matter, and how to implement a practical use case.
Learn how to extend Hugo to generate llms.txt and clean Markdown versions of your content, making your site easily consumable by AI agents and LLMs.
GitHub Agentic Workflows let you write automation in natural language. Instead of stitching together YAML steps, shell scripts, and third-party Actions, you describe what you want an AI agent to accomplish and give it the tools to do it.
Learn what you need to make your knowledge consumable by AI — from static files to live APIs.
Agent Skills and MCP are a hot topic. Understanding where these standards genuinely overlap, where they serve fundamentally different purposes, and where each falls short is essential for anyone building agent-powered systems.
Microsoft Agent Framework is an open framework for building enterprise-grade agents. It offers support for using the A2A protocol for cross-agent communication.
Google Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. It offers native support to interact with other agents via the agent-to-agent protocol.
Automating content creation with a team of AI agents that research, write, edit, and publish
An open and free learning platform for the technical community.
Your agent needs to talk to the outside world. It needs to call REST APIs, delegate complex work to specialized agents, authenticate with protected services, and kick off operations that take minutes instead of milliseconds. Google ADK provides four powerful patterns to make all of this possible.
An agent that can only use the tools you hard-code into it has a ceiling. The Model Context Protocol (MCP) shatters that ceiling by giving your ADK agents a standardized way to discover and use tools hosted anywhere — local processes, remote servers, or cloud services.
You have agents, tools, callbacks, and sessions. But what actually runs them? The Runner is the central orchestrator that powers every agent interaction in Google ADK, driving the event loop that connects all these pieces together.
Callbacks are user-defined functions that hook into an agent's execution pipeline at predefined checkpoints. They let you observe, customize, and control agent behavior without modifying the ADK framework code.
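A hedged, minimal sketch of what such a hook can look like, assuming ADK's `before_model_callback` parameter on `Agent`; consult the ADK docs for the exact context and request types:

```python
# Hedged sketch of an ADK-style callback: log every model request before it
# goes out. Assumes the `before_model_callback` hook on Agent; exact
# signatures and context attributes may differ from this sketch.
from google.adk.agents import Agent


def log_model_request(callback_context, llm_request):
    # Returning None lets the request proceed unchanged; returning a response
    # object instead would short-circuit the model call.
    print(f"[{callback_context.agent_name}] model request about to fire")
    return None


assistant = Agent(
    name="assistant",
    model="gemini-2.0-flash",          # any model your project has access to
    instruction="Answer briefly.",
    before_model_callback=log_model_request,
)
```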
LLMs are stateless. Every API call to an LLM is independent. The model does not inherently remember what was said before. Yet meaningful conversations are inherently multi-turn, contextual, and stateful.
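A tiny illustration of what that means in practice: the caller keeps the transcript and resends it on every turn (`call_llm` below is a stand-in for any chat-completion API):

```python
# "Memory" lives outside the model: the caller owns the transcript and sends
# the whole thing with every call.
def call_llm(messages: list[dict]) -> str:
    # Placeholder: a real call would send `messages` to a model endpoint.
    return f"(reply based on {len(messages)} prior messages)"


history: list[dict] = [{"role": "system", "content": "You are a helpful assistant."}]

for user_turn in ["Hi, I'm Ada.", "What is my name?"]:
    history.append({"role": "user", "content": user_turn})
    reply = call_llm(history)           # full transcript goes with every call
    history.append({"role": "assistant", "content": reply})
    print(user_turn, "->", reply)
```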
Google Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. It offers different types of agents.
Google Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents.
If you need a workflow that can adapt its routing decisions based on what agents discover during execution, look no further. Magentic is for you!
Building dynamic multi-agent discussions where AI agents collaborate, debate, and converge on solutions with GroupChatBuilder
Learn how to orchestrate AI agents in a handoff workflow using Microsoft Agent Framework's HandoffBuilder
Docker cagent is an open-source, multi-agent AI runtime that lets you build, orchestrate, and share teams of specialized AI agents — all defined declaratively in YAML.
Learn how to orchestrate AI agents in a concurrent workflow using Microsoft Agent Framework's ConcurrentBuilder
Learn how to orchestrate AI agents in a step-by-step pipeline using Microsoft Agent Framework's SequentialBuilder
The Microsoft Agent Framework represents a significant improvement in developer experience over the raw Foundry SDK. While the native SDK gives you complete control, the Agent Framework provides less boilerplate, type-safe tools, consistent patterns, and resource safety.
The world of AI agents is evolving rapidly, and Microsoft's Agent Framework provides a powerful, unified foundation for building intelligent agents that can reason, take actions, and interact with users naturally. In this blog post, I'll walk you through a series of practical examples that demonstrate how to create and use Azure AI agents using MAF and Python.
Microsoft Foundry (formerly Azure AI Foundry) is a unified platform for building intelligent agents. It provides models, tools, frameworks, observability, guardrails, and enterprise-ready governance for creating AI agents.
Microsoft Agent Framework is an open-source development kit for building AI agents and multi-agent workflows.
Part 2 of our Azure AI Search Python SDK series goes deeper: faceted navigation, custom scoring profiles, semantic ranking with captions and answers, vector search with embeddings, and hybrid search that combines keyword and vector retrieval for maximum relevance.
In this first part of our Azure AI Search Python SDK series, we install the azure-search-documents library, configure authentication, create and manage indexes programmatically, upload documents, and run your first full-text search queries — all from Python.
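As a taste of where that first part ends up, here is a minimal query sketch with `azure-search-documents`; the endpoint, key, index name, and field names are placeholders (the fields assume the hotels sample index):

```python
# Sketch of a first full-text query against an existing index. Replace the
# endpoint, query key, index name, and field names with your own.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<your-query-key>"),
)

results = client.search(search_text="budget hotel near the beach", top=5)
for doc in results:
    print(doc["HotelName"], doc["@search.score"])
```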
In this article, we cross into vector territory. We will chunk documents using the Text Split skill, generate embeddings with Azure OpenAI, store them in a vector-enabled index, and run both vector and hybrid queries.
An AI enrichment pipeline extends the indexer with a skillset, an ordered collection of skills that transform content during indexing.
To make use of the power of Azure AI Search, we need to create an index, load data into it, and run queries.
Creating an enterprise AI application? Does your application need access to enterprise data and web content? Azure AI Search is the answer.
There are several options available for running Large Language Model (LLM) inference locally. Foundry Local by Microsoft is a new entrant.
Docker Model Runner is a faster, simpler way to run and test AI models locally, right from your existing workflow. Whether you're experimenting with the latest LLMs or deploying to production, Model Runner brings the performance and control you need, without the friction.
There are several options available for running Large Language Model (LLM) inference locally. LM Studio is one such option. It is more comprehensive and offers some great features.
There are several options available for running Large Language Model (LLM) inference locally. Ollama is one such option and my favorite among all. Ollama offers access to a wide range of models and has recently enabled cloud-hosted models as well. It offers both CLI and GUI (chat interface) to interact with the loaded models.
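As a quick smoke test of a locally pulled model, Ollama's REST API on its default port can be called from Python; the model name below is just an example:

```python
# Call a locally running Ollama instance over its REST API (default port
# 11434). Assumes the model has already been pulled, e.g. `ollama pull llama3.2`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "In one sentence, what is Kubernetes?",
        "stream": False,            # return a single JSON object, not chunks
    },
    timeout=120,
)
print(resp.json()["response"])
```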
Learn what went into AutoGen 0.5.1, the open framework for creating multi-agent systems.
Extending what you learned in the previous article on the basics of kro, this article demonstrates a few more features of kro with the help of a sample voting application.
Kube Resource Orchestrator (KRO) introduces a new Kubernetes-native and cloud-agnostic way to group Kubernetes resources.
Extend your knowledge of creating MCP servers to build more practical applications
Development containers are a great way to develop modern applications. Cloud-native applications usually implement more than one service to provide the application functionality. Using dev containers for microservices-based application development requires more than one container. This is where using Docker Compose with dev containers is useful. This article explores creating a multi-container development environment using VS Code dev containers and Docker Compose.
Bicep simplifies provisioning Azure Red Hat OpenShift clusters. This article explains how!
Kubeadm is the go-to tool for Kubernetes administrators. Understand how to use this tool combined with Bicep to provision a virtualized K8s cluster.
Kubeadm is a handy tool to configure Kubernetes clusters. This article explains how to create a K8s cluster using Ubuntu VMs.