
My Agent Kit: How I Manage My Essential Code & Prompts

📖 11 min read · 2,163 words · Updated Apr 2, 2026

Hey everyone, Riley Fox here, back in the digital trenches with another dive into what makes our agent lives a little easier (or at least, less chaotic). It’s April 2nd, 2026, and I’ve been wrestling with a particular beast lately: the ever-growing, ever-shifting pile of “stuff” we rely on. You know what I’m talking about, right? Those bits of code, those configuration files, those carefully crafted prompts that you just can’t live without. For a long time, I just called it my “kit,” a catch-all term for whatever I needed to get the job done.

But recently, especially with the rapid evolution of large language models (LLMs) and the agents built on top of them, I’ve started thinking more critically about the concept of a “starter kit.” Not just any starter kit, mind you, but a *minimalist* starter kit for building resilient, adaptable AI agents. We’re past the point of just throwing a few API calls together and calling it an agent. The demands for introspection, self-correction, and robust error handling are pushing us to think differently. And frankly, the bloat is real.

I remember a project last year, a fairly complex data extraction agent for a client in the financial sector. I started with my usual go-to libraries, a few custom utilities I’d built over the years, and a couple of shiny new LLM orchestration frameworks. By the time I was halfway through, the dependency tree looked like a particularly dense rainforest, and the initial setup for anyone else on the team took a solid afternoon of troubleshooting. It worked, yes, but at what cost? The cognitive load of understanding all those moving parts, the constant fear of a breaking change in one obscure dependency… it was exhausting. That experience really hammered home the idea that sometimes, less truly is more, especially when you’re trying to build something that needs to be both powerful and maintainable.

So, today, I want to talk about distilling our agent-building process down to its absolute essentials. We’re not aiming for a barebones, unusable setup. We’re aiming for a lean, mean, agent-building machine that gives us maximum flexibility with minimal overhead. Think of it as the ultimate “go bag” for AI agents.

Why a Minimalist Starter Kit? The Bloat Problem

Let’s be honest: it’s easy to get carried away. The LLM ecosystem is exploding with new tools, frameworks, and abstractions every week, each one promising to solve your problems, clean up your code, or give you superpowers. Some of them deliver! But the cumulative effect can be detrimental.

  • Increased Complexity: Every added dependency introduces another layer of abstraction, another set of configurations, and another potential point of failure.
  • Slower Development: While frameworks aim to speed things up, a heavily layered approach can actually slow down debugging and understanding the core logic.
  • Dependency Hell: Version conflicts, security vulnerabilities in obscure packages, and the sheer effort of keeping everything updated can be a nightmare.
  • Higher Resource Usage: More code often means more memory, more CPU cycles, and ultimately, higher operational costs.
  • Steeper Learning Curve: Onboarding new team members or even revisiting old projects becomes a heavier lift when the foundation is a tangled mess of libraries.

My goal with a minimalist starter kit is to cut through that noise. To provide just enough structure and utility to get an agent off the ground, leaving the choice of advanced features and specific frameworks to be added only when a clear need arises. It’s about being intentional with every single component we include.

Core Components of My Minimalist Agent Starter Kit

After much experimentation and more than a few headaches, I’ve boiled down my ideal agent starter kit to these fundamental pieces. We’re talking Python here, because that’s where the majority of the action happens for me.

1. The LLM Orchestrator: Directly to the API (mostly)

This might be controversial, but for a starter kit, I’m advocating for direct API calls to your chosen LLM provider (OpenAI, Anthropic, Google, etc.) with a thin wrapper, rather than a full-blown framework like LangChain or LlamaIndex right out of the gate. Why? Because these frameworks, while incredibly powerful, come with a lot of baggage. For a basic agent, you often only need two things: sending a prompt and getting a response.

My wrapper typically handles:

  • API key management (from environment variables, always!)
  • Basic retry logic for transient errors
  • Standardizing input/output formats (e.g., always returning a string or a parsed JSON object)
  • Simple token counting (useful for cost estimation and prompt engineering)

Here’s a simplified example of what I mean:


import json
import os
import time

from openai import APIError, OpenAI


class LLMClient:
    def __init__(self, model="gpt-4o", api_key=None, max_retries=3, initial_delay=1):
        self.model = model
        api_key = api_key or os.getenv("OPENAI_API_KEY")
        if not api_key:
            raise ValueError("OpenAI API key not provided or found in environment variables.")
        # The v1+ OpenAI SDK uses a client object, not a module-level api_key
        self.client = OpenAI(api_key=api_key)
        self.max_retries = max_retries
        self.initial_delay = initial_delay

    def _call_api(self, messages, temperature=0.7, json_output=False):
        response_format = {"type": "json_object"} if json_output else {"type": "text"}
        return self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=temperature,
            response_format=response_format,
        )

    def generate(self, system_prompt: str, user_prompt: str, temperature=0.7, json_output=False):
        """Returns a string, or a parsed dict when json_output=True."""
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]

        for attempt in range(self.max_retries):
            try:
                response = self._call_api(messages, temperature, json_output)
                content = response.choices[0].message.content
                return json.loads(content) if json_output else content
            except (APIError, json.JSONDecodeError) as e:
                print(f"Error (attempt {attempt + 1}/{self.max_retries}): {e}")
                if attempt < self.max_retries - 1:
                    time.sleep(self.initial_delay * (2 ** attempt))  # Exponential backoff
                else:
                    raise


# Usage example:
# client = LLMClient()
# response = client.generate("You are a helpful assistant.", "What is the capital of France?")
# print(response)

This gives me control and transparency. If I later decide I need LangChain’s agents or LlamaIndex’s retrieval capabilities, I can integrate them. But I start simple.
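One wrapper duty from the list above that deserves a word is token counting. For exact counts you would use your provider's tokenizer (tiktoken, in OpenAI's case), but for a starter kit a crude character-based heuristic gives ballpark cost estimates with zero extra dependencies. This is a sketch under that assumption, not a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: OpenAI-style tokenizers average roughly 4 characters
    # per token for English prose. Fine for ballpark cost estimates; swap in
    # tiktoken (or your provider's tokenizer) when you need exact counts.
    return max(1, len(text) // 4)

print(estimate_tokens("What is the capital of France?"))  # rough estimate, not exact
```

The `max(1, ...)` guard just ensures even a tiny prompt counts as at least one token, which keeps downstream cost math from dividing by zero.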

2. Configuration Management: Dotenv and Pydantic

Hardcoding values is a cardinal sin. For configuration, I swear by a combination of python-dotenv and Pydantic. Dotenv handles loading environment variables from a .env file, which is perfect for API keys and other sensitive data that shouldn’t be checked into version control.

Pydantic, on the other hand, is fantastic for defining and validating configuration schemas. It ensures that my agent starts with all the necessary parameters and that they are of the correct type. This catches a lot of silly mistakes before they become runtime errors.


from dotenv import load_dotenv
from pydantic_settings import BaseSettings  # In Pydantic v2, settings live in the pydantic-settings package

load_dotenv()  # Load environment variables from a .env file


class AgentConfig(BaseSettings):
    # Populated from the OPENAI_API_KEY environment variable
    # (BaseSettings matches field names to env vars case-insensitively)
    openai_api_key: str
    agent_name: str = "MyCoolAgent"
    log_level: str = "INFO"
    max_tokens_per_response: int = 2048


# Example .env file content:
# OPENAI_API_KEY="sk-..."
# AGENT_NAME="MyCustomAgent"

try:
    config = AgentConfig()
    # print(config.openai_api_key)  # Never print real API keys!
    print(f"Agent Name: {config.agent_name}")
    print(f"Log Level: {config.log_level}")
except Exception as e:
    print(f"Error loading configuration: {e}")
    # Handle missing/invalid config gracefully, perhaps exit.

This setup means I can easily swap out environments (dev, staging, prod) by just changing the .env file or system environment variables, and I get robust validation for free.

3. Logging: The Standard Library

Another area where people often reach for heavy frameworks: logging. Python’s built-in logging module is incredibly powerful and flexible. For a minimalist starter, it’s more than enough. You can configure different handlers (console, file), set levels, and format messages without adding a single external dependency.


import logging
import os

# Define log file path
log_dir = "logs"
os.makedirs(log_dir, exist_ok=True)
log_file_path = os.path.join(log_dir, "agent.log")

# Configure logging
logging.basicConfig(
    level=logging.INFO,  # Can be set from config.log_level
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(log_file_path),
        logging.StreamHandler()  # Also log to console
    ]
)

logger = logging.getLogger(__name__)

# Example usage
logger.info("Agent started successfully.")
logger.debug("This is a debug message, won't show with INFO level.")
logger.warning("Something might be going wrong here.")
logger.error("Critical error encountered!")

This gives me visibility into my agent’s operations, which is crucial for debugging and understanding its behavior. And it’s baked right into Python.

4. Tooling: A Simple Function Registry

Agents often need to interact with external tools (APIs, databases, local file system). Instead of relying on a framework’s tool abstraction layer, I start with a simple, decorator-based function registry. This allows the LLM to “see” and call functions, but I retain full control over how those functions are defined and executed.


class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register_tool(self, func):
        self._tools[func.__name__] = func
        return func

    def get_tool(self, name):
        return self._tools.get(name)

    def get_tool_specs(self):
        # This is where you'd generate OpenAPI-like specs for the LLM.
        # For simplicity, we'll just return function names and docstrings.
        specs = []
        for name, func in self._tools.items():
            specs.append({
                "name": name,
                "description": func.__doc__,
                # You'd add parameter definitions here for real-world use
            })
        return specs


tool_registry = ToolRegistry()


@tool_registry.register_tool
def get_current_weather(location: str):
    """
    Fetches the current weather for a given location.
    Parameters:
    - location (str): The city and state, e.g., "San Francisco, CA".
    """
    # In a real agent, this would call a weather API
    if "San Francisco" in location:
        return {"location": location, "temperature": "15C", "conditions": "cloudy"}
    return {"location": location, "temperature": "25C", "conditions": "sunny"}


@tool_registry.register_tool
def search_web(query: str):
    """
    Performs a web search for a given query and returns relevant snippets.
    Parameters:
    - query (str): The search query.
    """
    # This would call a search API like Google Search
    return f"Searched for '{query}'. Found information about {query}."


# Example of how an LLM might interact with this (simplified)
# tool_specs = tool_registry.get_tool_specs()
# print(tool_specs)

# tool_to_call = "get_current_weather"  # LLM decides this
# args = {"location": "San Francisco, CA"}  # LLM decides these
# result = tool_registry.get_tool(tool_to_call)(**args)
# print(result)

This allows me to easily define functions that my agent can call, and I can later implement sophisticated tool calling logic (like parsing JSON from the LLM to determine which tool and arguments to use) on top of this simple registry. It keeps the core agent logic clean and focused.
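To make that dispatch step concrete, here is a minimal sketch. The JSON shape ({"tool": ..., "arguments": {...}}) is my own convention for illustration, not a standard, and a stripped-down registry is inlined so the snippet stands on its own:

```python
import json


# Minimal stand-in for the ToolRegistry above so this snippet runs on its own
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register_tool(self, func):
        self._tools[func.__name__] = func
        return func

    def get_tool(self, name):
        return self._tools.get(name)


registry = ToolRegistry()


@registry.register_tool
def get_current_weather(location: str):
    # Simplified stub; a real tool would call a weather API
    return {"location": location, "temperature": "15C"}


def dispatch_tool_call(registry, llm_reply: str):
    """Parse an LLM reply of the (assumed) shape
    {"tool": "<name>", "arguments": {...}} and invoke the matching tool."""
    try:
        call = json.loads(llm_reply)
    except json.JSONDecodeError:
        return {"error": "LLM reply was not valid JSON"}
    tool = registry.get_tool(call.get("tool", ""))
    if tool is None:
        return {"error": f"Unknown tool: {call.get('tool')}"}
    try:
        return tool(**call.get("arguments", {}))
    except TypeError as e:
        # The LLM supplied wrong or missing arguments
        return {"error": str(e)}


reply = '{"tool": "get_current_weather", "arguments": {"location": "San Francisco, CA"}}'
print(dispatch_tool_call(registry, reply))
```

Note that every failure mode (bad JSON, unknown tool, bad arguments) returns an error payload instead of raising; that payload can be fed straight back to the LLM so the agent can self-correct.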

Actionable Takeaways for Your Agent Development

So, what does this all mean for you and your next AI agent project? Here are my distilled thoughts:

  1. Challenge Every Dependency: Before adding a new library or framework, ask yourself: "Do I absolutely need this right now, or can I achieve 80% of what I need with Python's standard library or a few lines of my own code?" The answer might surprise you.
  2. Start Small, Grow Organically: Begin with the absolute minimum. Get your core LLM interaction working, add basic configuration, and implement essential logging. Only introduce more complex abstractions (like full-blown agent frameworks, vector databases, or complex orchestration tools) when your agent’s requirements clearly demand them.
  3. Embrace Environment Variables: Never hardcode sensitive information. Use python-dotenv or your system’s environment variables for API keys and other secrets from day one.
  4. Validate Your Inputs (and Configs): Pydantic (or a similar library) is your friend. Define clear data models for your configuration and any complex inputs your agent receives. It prevents a ton of baffling runtime errors.
  5. Master the Standard Library: Python's standard library is a treasure trove. For logging, file operations, basic networking, and data structures, you often don't need external packages.
  6. Build Your Own Thin Wrappers: Especially for LLM APIs, a simple wrapper gives you immense control over retry logic, error handling, and standardizing inputs/outputs. It also makes it easier to swap providers if needed.
  7. Document Your "Why": If you do decide to include a larger framework, make a note of *why* you needed it. What specific problem did it solve that couldn't be addressed simply? This helps you and your team understand the architectural choices later on.
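As an illustration of takeaway 4, here is a sketch of validating structured LLM output before trusting it. The `ExtractedInvoice` schema and its fields are invented for the example; the pattern, not the schema, is the point:

```python
import json

from pydantic import BaseModel, ValidationError


class ExtractedInvoice(BaseModel):
    # Illustrative schema: declare whatever fields your agent actually extracts
    invoice_id: str
    total: float
    currency: str = "USD"


def parse_llm_json(raw: str):
    """Validate a raw LLM JSON reply against the schema before using it."""
    try:
        return ExtractedInvoice.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError) as e:
        # A failed parse is a signal to re-prompt or retry, not to crash
        print(f"Rejected LLM output: {e}")
        return None


good = parse_llm_json('{"invoice_id": "INV-7", "total": 19.99}')
bad = parse_llm_json('{"invoice_id": "INV-7", "total": "not a number"}')
```

Returning `None` on failure keeps the decision of what to do next (retry, re-prompt, escalate) in the agent loop, where it belongs.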

Building AI agents is already complex enough without piling on unnecessary abstractions. By adopting a minimalist starter kit approach, we can create agents that are not only powerful and effective but also easier to understand, maintain, and adapt as the AI landscape continues its relentless march forward. Give it a try on your next project – you might find yourself breathing a little easier.

That's all from me for today. Happy coding, and may your agents be lean and mighty!

✍️
Written by Jake Chen

AI technology writer and researcher.
