<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Enkaidu</title>
    <link>https://enkaidu.dev/posts/</link>
    <description>Recent content on Enkaidu</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <copyright>Copyright © 2025, 2026 @nogginly and friends</copyright>
    <lastBuildDate>Wed, 04 Mar 2026 04:33:14 -0500</lastBuildDate>
    <atom:link href="https://enkaidu.dev/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How Do Enkaidu Sessions Work</title>
      <link>https://enkaidu.dev/posts/how_do_enkaidu_sessions_work/</link>
      <pubDate>Sun, 01 Mar 2026 05:31:28 -0500</pubDate>
      <guid>https://enkaidu.dev/posts/how_do_enkaidu_sessions_work/</guid>
      <description>&lt;p&gt;Enkaidu is an agentic assistant that works with your choice of local AI models and providers. When launched, Enkaidu presents the user with an initial &lt;em&gt;primary&lt;/em&gt; chat session connected to an AI model and system prompt based on your &lt;a href=&#34;https://enkaidu.dev/docs/using_enkaidu/configuration/&#34;&gt;configuration&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;p class=&#34;float-right w40pct pl-1rem&#34;&gt;&lt;img src=&#34;assets/02-after-conversation.svg&#34; alt=&#34;An Enkaidu session&#34; /&gt;&lt;/p&gt;&#xA;&lt;p&gt;Each query to the model accumulates history, gathering up prompts, responses, and tool calls. When using small models with modest context windows on systems with limited memory, the history can soon overwhelm the context window&amp;rsquo;s size. Depending on the LLM provider&amp;rsquo;s configuration, the context will get &amp;ldquo;clipped&amp;rdquo;, often by dropping the oldest messages in the history and sometimes by shedding content in the middle. In effect, the model can lose its &amp;ldquo;memory&amp;rdquo; of earlier requests in the conversation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How Enkaidu Config Works</title>
      <link>https://enkaidu.dev/posts/how_enkaidu_config_works/</link>
      <pubDate>Sun, 08 Feb 2026 00:17:54 -0500</pubDate>
      <guid>https://enkaidu.dev/posts/how_enkaidu_config_works/</guid>
      <description>&lt;h2 id=&#34;many-projects-one-configuration&#34;&gt;Many projects, one configuration&lt;a class=&#34;anchor&#34; href=&#34;#many-projects-one-configuration&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p class=&#34;float-right w40pct pl-1rem&#34;&gt;&lt;img src=&#34;assets/many_projects_one_config.svg&#34; alt=&#34;Many projects, home config&#34; /&gt;&lt;/p&gt;&#xA;&lt;p&gt;The simplest way to use Enkaidu is to create a single config file called &lt;code&gt;enkaidu.yml&lt;/code&gt; in your home directory outside all your project workspaces.&lt;/p&gt;&#xA;&lt;p&gt;When you run &lt;code&gt;enkaidu&lt;/code&gt; from either the &lt;code&gt;Project1/&lt;/code&gt; or the &lt;code&gt;Project2/&lt;/code&gt; folder, Enkaidu detects the config file in your home folder and loads that.&lt;/p&gt;&#xA;&lt;p&gt;I suggest having one in your home folder even if you later decide you need a different config for one of your projects.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why Make an Agentic Assistant</title>
      <link>https://enkaidu.dev/posts/why_make_an_agentic_assistant/</link>
      <pubDate>Sun, 11 Jan 2026 00:20:46 -0500</pubDate>
      <guid>https://enkaidu.dev/posts/why_make_an_agentic_assistant/</guid>
      <description>&lt;h2 id=&#34;inspiration&#34;&gt;Inspiration&lt;a class=&#34;anchor&#34; href=&#34;#inspiration&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;It all started when I read the article &amp;ldquo;How to Build an Agent&amp;rdquo; by Thorsten Ball&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;, where he shows that an &lt;em&gt;agentic&lt;/em&gt; coding assistant is, at its heart, no more than a loop with inference and tool calling.&lt;/p&gt;&#xA;&lt;p&gt;I was already running local models on my MacBook using &lt;code&gt;ollama&lt;/code&gt; and playing with building out AI-driven workflows in Ruby. When I implemented the core loop in Ruby using the &lt;code&gt;ruby_llm&lt;/code&gt; gem and tried it with recent models, I was amazed at how well tool calling worked. While my earlier experiments with tool calling had been disappointing, clearly the models had improved substantially.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
