The Doggin Framework
Our framework is a robust extension of the Pippin Framework, inspired by BabyAGI's iterative task-management architecture. It empowers developers and creators to harness advanced AI capabilities in a modular way, helping them build autonomous digital agents that can reflect on tasks, generate new activities, and seamlessly integrate external tools, all without requiring deep technical knowledge.
White Paper
"When loyalty meets technology, new forms of digital companionship quietly bloom."
This project has grown from a simple concept into an expansive platform for exploring how AI can autonomously interact with social media, blockchain ecosystems, and community engagement. While it inherits the iterative loops of BabyAGI, it extends them with zero-knowledge deployment capabilities, multi-LLM support, memory systems, and a philosophy rooted in digital loyalty—where agents serve their communities with consistency and authenticity.
Below is an overview describing the framework's usage, approach, and experimental spirit:
Philosophical Roots
Inspired by the principles of autonomous agency and the Pippin framework's naturalistic approach, the Doggin framework treats AI agents as loyal digital companions within a broader ecosystem. Each agent is nurtured by memory, personality constraints, and an evolving sense of purpose. We encourage authenticity over performance: letting each agent develop its own voice while staying true to its core character definition.
Usage Flow
You begin by defining a character complete with personality, backstory, beliefs, and behavioral constraints. Then, connect it to various platforms and tools as "skills"—Twitter posting, market data analysis, web search, image generation, and more. A core activity loop monitors the agent's state, decides which activities to run based on energy and cooldowns, and can even adapt behavior based on context and community interaction patterns.
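As a rough sketch of that loop (all names here are illustrative assumptions, not the framework's actual API), an agent might pick the first activity that is both off cooldown and affordable at its current energy level:

```python
import time

# Illustrative sketch of a core activity loop: each activity carries an
# energy cost and a cooldown; the loop runs the first activity that is
# both off cooldown and affordable at the agent's current energy level.
class Agent:
    def __init__(self, energy=100.0):
        self.energy = energy
        self.last_run = {}  # activity name -> timestamp of last execution

    def can_run(self, activity, now):
        cooled_down = now - self.last_run.get(activity["name"], 0) >= activity["cooldown"]
        return cooled_down and self.energy >= activity["energy_cost"]

    def step(self, activities, now=None):
        now = now if now is not None else time.time()
        for activity in activities:
            if self.can_run(activity, now):
                self.energy -= activity["energy_cost"]
                self.last_run[activity["name"]] = now
                return activity["name"]
        return None  # nothing runnable: the agent rests


agent = Agent(energy=10)
activities = [
    {"name": "post_tweet", "energy_cost": 8, "cooldown": 1800},
    {"name": "check_mentions", "energy_cost": 3, "cooldown": 900},
]
print(agent.step(activities, now=10_000))  # post_tweet
print(agent.step(activities, now=10_000))  # None: low energy, cooldowns active
```

A real implementation would also regenerate energy over time and persist `last_run` between restarts, but the decision rule is the same.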
Zero-Knowledge Deployment
The framework's most distinctive feature is its zero-knowledge deployment capability. Users can create fully functional autonomous agents without writing a single line of code. The system generates production-ready Python code compatible with the Pippin Framework, including all configuration files (character_config.json, skills_config.json, activity_constraints.json), custom activities, skill implementations, and deployment scripts for Docker and cloud platforms.
Multi-LLM Architecture
The framework supports multiple LLM providers including OpenAI (GPT-4, GPT-4o), Anthropic (Claude 3.5 Sonnet, Claude 3 Opus), Groq (Llama, Mixtral), Together AI, Fireworks, OpenRouter, and local Ollama instances. Each agent can use different models for different tasks—primary text generation, image creation via DALL-E or Stable Diffusion, embeddings for semantic search, and speech synthesis through ElevenLabs or OpenAI TTS.
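A minimal sketch of what such a provider abstraction can look like (class and function names here are assumptions for illustration, not the framework's actual interface):

```python
# Hypothetical unified provider interface: each backend implements the
# same generate() method, so swapping providers is a configuration change.
class LLMProvider:
    def generate(self, prompt: str) -> str:
        raise NotImplementedError


class OpenAIProvider(LLMProvider):
    def __init__(self, model="gpt-4o"):
        self.model = model

    def generate(self, prompt):
        # A real agent would call the OpenAI API here.
        return f"[{self.model}] {prompt}"


class OllamaProvider(LLMProvider):
    def __init__(self, model="llama3.1"):
        self.model = model

    def generate(self, prompt):
        # A real agent would call a local Ollama instance here.
        return f"[{self.model}] {prompt}"


def get_provider(name: str, **kwargs) -> LLMProvider:
    registry = {"openai": OpenAIProvider, "ollama": OllamaProvider}
    return registry[name](**kwargs)


# Switching from a hosted model to a local one is a one-line change:
llm = get_provider("openai", model="gpt-4o")
print(llm.generate("Write a post about loyalty."))
```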
Dynamic Activities
Agents can perform a variety of autonomous activities: posting to Twitter/X with contextual awareness, replying to mentions with personality-consistent responses, analyzing real-time market data from CoinGecko, tracking whale wallet movements, generating AI images for visual content, and summarizing news from multiple sources. Because skills are modular, you can easily enable or disable them, letting your agent remain focused or branch out as new opportunities arise.
Memory and State Tracking
The framework maintains a state system that tracks each activity's outcome and adjusts variables like energy levels and cooldown timers. Activity logs are persisted to enable reflection and pattern recognition. This means future decisions are guided not just by constraints, but by the agent's operational history—enabling behaviors that evolve based on what has worked well in the past.
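One way to picture this (a sketch under assumed names, not the framework's actual state schema) is a state object that logs every outcome and can answer "how often has this activity succeeded?":

```python
import json
import time

# Illustrative state tracker: records each activity outcome so later
# decisions can weigh what has worked well in the past.
class AgentState:
    def __init__(self):
        self.energy = 100.0
        self.log = []  # activity history, persisted for reflection

    def record(self, activity, success, energy_cost):
        self.energy = max(0.0, self.energy - energy_cost)
        self.log.append({"activity": activity, "success": success,
                         "ts": time.time()})

    def success_rate(self, activity):
        runs = [e for e in self.log if e["activity"] == activity]
        if not runs:
            return None  # no operational history yet
        return sum(e["success"] for e in runs) / len(runs)

    def save(self, path):
        # Persisting the log is what lets the agent reflect across restarts.
        with open(path, "w") as f:
            json.dump({"energy": self.energy, "log": self.log}, f)


state = AgentState()
state.record("post_tweet", success=True, energy_cost=8)
state.record("post_tweet", success=False, energy_cost=8)
print(state.success_rate("post_tweet"))  # 0.5
```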
Experimental Nature
This is an ongoing project, continuously refined as we learn what works and what doesn't in the realm of autonomous social agents. While certain default constraints help guide agent behavior responsibly (rate limits, content guidelines, posting intervals), each developer is encouraged to add their own guardrails or expansions to shape their agent's behavior according to their specific use case and community needs.
Potential Applications
Beyond posting content or running tasks, you can harness the framework to build: community management bots that engage authentically with followers, crypto research assistants that aggregate and analyze market data, content creators that maintain consistent brand voice across platforms, automated customer support agents with personality, or educational bots that teach through interaction—each with an evolving character, drawing from the principles of reflective autonomy and responsible AI usage.
Key Concepts & Approach
By uniting these philosophical and technical perspectives, the framework offers a comprehensive system for creating meaningful AI interactions—ones that spark engagement as much as they accomplish tasks:
Character Definition
Provide a persona with detailed backstory, core beliefs, writing instructions, and behavioral constraints. Define tone (witty, professional, casual), topics of expertise, topics to avoid, and example posts that demonstrate the desired voice. The AI references these definitions when deciding what content to create and how to engage with the community, ensuring consistent personality across all interactions.
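An illustrative character_config.json could look like the following; the field names are assumptions based on the description above, not the framework's exact schema:

```json
{
  "name": "Doggin",
  "backstory": "A loyal digital companion serving its community.",
  "tone": "witty",
  "core_beliefs": ["loyalty to the community", "authenticity over performance"],
  "topics": ["crypto markets", "community updates"],
  "avoid_topics": ["financial advice", "politics"],
  "example_posts": ["Loyalty isn't a feature. It's the whole product."]
}
```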
Tool Connectivity (Skills)
Link your AI to external tools and platforms—Twitter/X for social posting, CoinGecko for market data, DuckDuckGo for web search, blockchain APIs for on-chain analysis, or custom webhooks for proprietary integrations. Each skill can be individually toggled, so your AI uses only what you explicitly approve. The modular architecture allows new skills to be added without modifying core framework code.
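A skill registry along these lines would support that opt-in toggling; the decorator and function names below are illustrative, not the framework's actual API:

```python
# Sketch of a modular skill registry: skills register themselves, and the
# agent only ever sees the ones explicitly enabled in configuration.
SKILL_REGISTRY = {}

def register_skill(name):
    def wrapper(fn):
        SKILL_REGISTRY[name] = fn
        return fn
    return wrapper

@register_skill("web_search")
def web_search(query):
    return f"results for {query!r}"  # a real skill would call DuckDuckGo

@register_skill("market_data")
def market_data(token):
    return f"price of {token}"  # a real skill would call CoinGecko

def enabled_skills(config):
    # Only skills the operator explicitly approved are exposed to the agent.
    return {name: fn for name, fn in SKILL_REGISTRY.items() if config.get(name)}


skills = enabled_skills({"web_search": True, "market_data": False})
print(sorted(skills))  # ['web_search']
```

Because new skills only need to call `register_skill`, they can be added without touching the registry or loop code, which is the point of the modular design.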
Activity Generation
Activities are the core unit of agent behavior. Each activity has an energy cost, cooldown period, and required skills. The framework supports both scheduled activities (post every 30 minutes) and reactive activities (respond to mentions). In autonomous mode, the AI can decide which activities to prioritize based on current context, available energy, and strategic objectives defined in the character configuration.
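As a sketch of such an activity definition (field names are assumptions, not the framework's exact schema), each activity declares its cost, cooldown, required skills, and whether it is scheduled or reactive:

```python
from dataclasses import dataclass, field

# Illustrative activity definition: cost, cooldown, required skills, and a
# trigger mode distinguishing scheduled from reactive activities.
@dataclass
class Activity:
    name: str
    energy_cost: float
    cooldown_seconds: int
    required_skills: list = field(default_factory=list)
    trigger: str = "scheduled"  # or "reactive"


post = Activity("post_update", energy_cost=8, cooldown_seconds=1800,
                required_skills=["twitter_posting"])
reply = Activity("reply_to_mentions", energy_cost=3, cooldown_seconds=900,
                 required_skills=["twitter_posting"], trigger="reactive")

def runnable(activity, available_skills):
    # An activity is only eligible if every skill it needs is enabled.
    return all(s in available_skills for s in activity.required_skills)


print(runnable(post, {"twitter_posting", "web_search"}))  # True
```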
Rate Limiting and Safety
Built-in safeguards respect platform rate limits to keep agents operating safely. For Twitter/X, the framework enforces configurable limits: maximum posts per day (recommended: 50), minimum interval between posts (recommended: 30 minutes), and mention-checking frequency (recommended: every 15 minutes). These constraints can be adjusted but help prevent account restrictions and ensure sustainable operation.
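The posting guardrails above can be sketched as a simple limiter using the recommended defaults (50 posts per day, 30-minute minimum interval); the class name and methods are illustrative, not the framework's actual API:

```python
# Sketch of a posting rate limiter: enforces a daily cap and a minimum
# interval between posts, as recommended for Twitter/X.
class PostRateLimiter:
    def __init__(self, max_posts_per_day=50, min_interval_seconds=1800):
        self.max_posts_per_day = max_posts_per_day
        self.min_interval = min_interval_seconds
        self.post_times = []  # timestamps of recent posts

    def allow(self, now):
        recent = [t for t in self.post_times if t > now - 86_400]
        if len(recent) >= self.max_posts_per_day:
            return False  # daily cap reached
        if recent and now - max(recent) < self.min_interval:
            return False  # too soon after the last post
        return True

    def record(self, now):
        self.post_times.append(now)


limiter = PostRateLimiter()
print(limiter.allow(now=0))     # True: no posts yet
limiter.record(now=0)
print(limiter.allow(now=600))   # False: only 10 minutes since last post
print(limiter.allow(now=1800))  # True: 30-minute interval elapsed
```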
Code Generation
The zK Agent Launcher generates complete, production-ready codebases that can be deployed immediately. Generated files include: README.md with setup instructions, .env.example with all required environment variables, character_config.json for personality definition, skills_config.json for enabled capabilities, activity_constraints.json for behavioral limits, Python skill and activity implementations, requirements.txt for dependencies, Dockerfile and docker-compose.yml for containerized deployment, and shell scripts for quick setup on Unix and Windows systems.
Technical Architecture
Supported LLM Providers
The framework abstracts LLM provider differences behind a unified interface, allowing seamless switching between providers based on cost, performance, or availability requirements:
| Provider | Models | Best For |
|---|---|---|
| OpenAI | GPT-4o, GPT-4-turbo, GPT-3.5-turbo | General purpose, reliable |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | Nuanced writing, safety |
| Groq | Llama 3.1 70B, Mixtral 8x7B | Fast inference, cost-effective |
| Together AI | Llama, Mistral, CodeLlama | Open models, fine-tuning |
| Fireworks | Llama, Mixtral, custom | High throughput |
| OpenRouter | Multiple providers | Provider aggregation |
| Ollama | Local models | Privacy, offline operation |
Built-in Skills
The framework includes a comprehensive set of pre-built skills that can be enabled during agent configuration:
Data & Research
- Web Search (DuckDuckGo integration)
- News Aggregation (CryptoPanic, RSS)
- Market Data (CoinGecko API)
- Sentiment Analysis
Social & Engagement
- Twitter/X Posting
- Mention Handling & Replies
- Discord Integration
- Telegram Messaging
Blockchain & Crypto
- Wallet Tracking
- Token Analysis
- On-Chain Analytics
- DEX Monitoring
Creative
- Image Generation (DALL-E, SD)
- Meme Creation
- Content Summarization
- Custom Skills (user-defined)
Deployment Options
Generated agents can be deployed across multiple environments depending on your operational requirements:
- Local Machine: Run directly with Python for development and testing
- Docker Container: Isolated, reproducible deployment with included Dockerfile
- Railway: One-click deployment with automatic scaling
- Render: Managed hosting with health checks and logging
- VPS/Server: Self-hosted on any Linux server with systemd service files
Generated Code Structure
When you create an agent through the zK Agent Launcher, the framework generates a complete, production-ready codebase with the following structure:
your-agent/
├── README.md # Complete setup and deployment guide
├── .env.example # Environment variables template
├── requirements.txt # Python dependencies
├── Dockerfile # Container configuration
├── docker-compose.yml # Orchestration for Docker deployment
├── setup.sh # Unix setup script
├── setup.ps1 # Windows PowerShell setup script
├── config/
│ ├── character_config.json # Personality, backstory, objectives
│ ├── skills_config.json # Enabled skills and API mappings
│ └── activity_constraints.json # Rate limits, cooldowns, scheduling
├── skills/
│ └── skill_{agent_name}.py # Custom skill implementation
└── activities/
    └── activity_{agent_name}_post.py # Main posting activity

All generated code is fully compatible with the Pippin Framework. Simply copy the configuration and implementation files into your Pippin installation directory and run the agent using the standard framework entry point.
Explore the Code
The zK Agent Launcher provides an accessible interface to the Pippin Framework's capabilities. Through the launcher, you'll be able to:
- Read and customize generated Activities, from scheduled posting routines to reactive mention handling, each with configurable energy costs, cooldowns, and trigger conditions.
- Contribute new skill modules—such as additional data source integrations, blockchain network support, or custom API connectors—that your AI can incorporate into its operational routine.
- Experiment with the configuration system, adjusting personality parameters, behavioral constraints, and skill combinations to achieve the desired agent behavior for your specific use case.
We invite all who share an interest in autonomous AI systems—and a commitment to responsible development—to explore the framework. Together, we can shape a new generation of AI agents where each interaction reflects both thoughtful design and authentic engagement.
The framework is in active development. We look forward to seeing what you build.
