Exploring NemoClaw — NVIDIA's Local AI Agent Sandbox

Posted on Fri 01 May 2026 in GenAI • Tagged with GenAI, LLM, NVIDIA, NemoClaw, Ollama, Docker

NemoClaw is NVIDIA's agent sandbox for running AI assistants locally against your own inference backend, whether that's Ollama, llama.cpp, or a cloud provider. It bundles OpenShell as a gateway and OpenClaw as the agent runtime, both orchestrated through Docker containers.
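
The excerpt doesn't show NemoClaw's actual interface, but as a rough sketch of what talking to the sandbox could look like, here is a minimal Python snippet. It assumes the OpenShell gateway exposes an OpenAI-compatible chat endpoint on localhost:8000; the port, route, and model name are all illustrative placeholders, not taken from the post.

```python
# Hypothetical sketch, not from the post: assumes the OpenShell gateway
# serves an OpenAI-compatible chat endpoint on localhost:8000.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed route and port
    json={
        "model": "openclaw-agent",  # placeholder agent/model name
        "messages": [{"role": "user", "content": "Hello, sandbox!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```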

Here's a walkthrough of setting it up from scratch.

Setup …


Continue reading

Using Free Cloud-Based LLMs via Ollama on Ubuntu

Posted on Sun 19 April 2026 in GenAI Engineering • Tagged with ollama, llm, ubuntu, cloud-llm, local-ai, kactii, linux

Ollama is a lightweight, open-source LLM runner. Its :cloud model suffix routes prompts to free-tier hosted models, so you need neither a GPU nor a paid API key. It's useful for learning, prototyping, and small projects on modest hardware.
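
As a quick taste of what the post builds toward, here is a minimal Python sketch using the official `ollama` package (`pip install ollama`). The model tag is an assumption for illustration; substitute whichever :cloud model your installation lists.

```python
# Minimal sketch: chat with a hosted model through a local Ollama service.
# Assumes `pip install ollama` and that the Ollama service is running.
from ollama import chat

# Illustrative model tag; swap in any :cloud model your install supports.
response = chat(
    model="kimi-k2:cloud",
    messages=[{"role": "user", "content": "In one sentence, what is Ollama?"}],
)

print(response["message"]["content"])
```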

This post covers the full Ubuntu setup: manual install, service startup, chatting with Kimi …


Continue reading