Shela: The Self-Improving AI Agent Built by One Person in 135 Days

A stonemason from Ljubljana built an autonomous AI agent with 48 tools that reads its own source code, diagnoses its own errors, and restarts itself, along with an entire ecosystem of 12 applications, in 135 days. No VC funding. A four-person founding team, with a single developer writing the code. Over 80,000 EUR invested. And a philosophy borrowed from cutting granite.

Sven Petrovic · March 9, 2026 · 18 min read

In the basement of the Nebotičnik — Ljubljana's iconic 1933 skyscraper — there is a gallery of stone sculptures. Upstairs, in an apartment that overlooks the Slovenian capital, a man who spent 25 years carving granite monuments has built something that most well-funded AI labs have not: a fully autonomous AI agent that lives on its own server, maintains persistent memory, and can modify its own source code to improve itself.

Her name is Shela.

She is not a chatbot. She is not a thin wrapper around a large language model. She is an execution-layer system — an AI agent with 48 production tools that can deploy full-stack applications, SSH into servers, manage Docker containers, query databases, automate browsers, generate images, send emails, manage GitHub repositories, and conduct multi-source web research. She does not describe what she could do. She does it.

What makes Shela unusual — and what has drawn the attention of investors from Singapore to Hong Kong — is not the breadth of her capabilities, though that alone would be notable. It is her architecture. Shela is a self-hosting, self-improving AI agent. She runs on her own server. She can read her own source code. She can inspect her own error logs, identify performance bottlenecks in her own orchestration engine, write patches, deploy them, restart herself, and verify that the fix worked — without human intervention.

This is not theoretical. It happens in production, with real users, on real infrastructure.

The Builder

Gregor Vidmajer does not fit the profile of a typical AI founder. He has no computer science degree. He did not attend Y Combinator. He learned to code by building — first small tools, then full platforms, then an entire ecosystem.

For 13 years, he worked alongside his father in the memorial and stonemason industry, carving monuments from granite and marble. When his father passed away in 2015, Vidmajer took over the business alone. He still runs it today. But during the winter off-seasons, when stonework slows across Central Europe, he began channeling the same philosophy into software: build things designed to outlast their creators.

"A good monument stands for a thousand years," Vidmajer says. "I wanted to build technology with that same mindset — infrastructure, not features."

The result, after 135 days of solo development, is 150,000 lines of production TypeScript, 11 functional applications, 32 interconnected modules, and an AI agent with capabilities that an engineering team of five to ten people would typically spend 12 to 18 months building.

How Shela Works

At Shela's core is an orchestrator — a TypeScript-based engine that receives a user's natural language request, decides which tools to invoke, executes them in sequence or parallel, handles errors with exponential backoff, and streams results back in real time.

The orchestrator currently manages 48 tools across eight categories: code execution (Python, JavaScript, Bash), file operations, server management (SSH, Docker, Nginx, PM2), database queries (PostgreSQL, MySQL, Redis), web interaction (scraping, browser automation, multi-source search), communication (email, webhooks, Telegram), AI generation (images via Leonardo, Fal.ai; video via VEO), and self-management (reading its own code, inspecting logs, analyzing usage patterns, restarting its own process).
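Shela's actual orchestrator code is not public, but the mechanics described above (a registry of named tools, sequential or parallel invocation, retries with exponential backoff) can be sketched in a few lines of TypeScript. The `Tool` type, class shape, and retry parameters below are illustrative assumptions, not Shela's real API.

```typescript
// Illustrative sketch of a tool orchestrator, assuming a simple
// string-in/string-out tool signature. Not Shela's actual code.
type Tool = (input: string) => Promise<string>;

class Orchestrator {
  constructor(private tools: Map<string, Tool>) {}

  // Invoke a named tool, retrying transient failures with
  // exponential backoff (250ms, 500ms, 1s, ...).
  async invoke(name: string, input: string, maxRetries = 3): Promise<string> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    for (let attempt = 0; ; attempt++) {
      try {
        return await tool(input);
      } catch (err) {
        if (attempt >= maxRetries) throw err;
        const delayMs = 250 * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }

  // Run several tool calls concurrently and collect their results.
  runParallel(calls: Array<[string, string]>): Promise<string[]> {
    return Promise.all(calls.map(([name, input]) => this.invoke(name, input)));
  }
}
```

The real system adds streaming, per-user isolation, and model calls around this loop, but the dispatch-retry-collect core is the common pattern for agents of this kind.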

Each user operates in an isolated sandbox. Files, code executions, and tool outputs are contained within per-user workspaces. Shela maintains persistent memory across sessions — she remembers your projects, preferences, and server configurations. Every conversation makes her more effective for that user.
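The per-user sandboxing described above hinges on one detail any such system must get right: a user's file operations must not be able to escape their own workspace. A minimal sketch of that check, assuming a simple one-directory-per-user layout (the layout is an assumption, not Shela's documented design):

```typescript
// Sketch of per-user workspace path resolution. The directory
// layout (root/userId/...) is an illustrative assumption.
import * as path from "node:path";

// Resolve a user-supplied relative path inside that user's sandbox,
// rejecting attempts to escape it (e.g. "../other-user/secrets").
function resolveInSandbox(root: string, userId: string, rel: string): string {
  const base = path.resolve(root, userId);
  const target = path.resolve(base, rel);
  if (target !== base && !target.startsWith(base + path.sep)) {
    throw new Error("path escapes sandbox");
  }
  return target;
}
```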

The self-improvement loop is perhaps the most technically interesting feature. Shela can invoke read_own_code to examine any file in her own codebase. She can call inspect_logs to review her error and output logs. She can use analyze_patterns to study her own tool usage statistics and identify inefficiencies. If she detects a problem — a rate-limiting bottleneck, a suboptimal retry strategy, a memory leak pattern — she can write a fix, deploy it via her own build toolchain, and restart herself using PM2 process management.

In one documented session, Shela detected rate-limiting errors in her own logs, read her orchestrator code, identified that the backoff timing was too aggressive, wrote an optimized version with adjusted parameters, and restarted herself — all within a single conversation, without being asked to do so.
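The shape of that session can be sketched as one pass of a detect-patch-restart loop. Everything below is a hedged reconstruction: the tool interface, the log heuristic, and the constant being patched are assumptions chosen to mirror the described behavior, not Shela's actual implementation.

```typescript
// Hypothetical interface over the self-management tools described
// in the article (read_own_code, inspect_logs, restart).
interface SelfTools {
  inspectLogs(): Promise<string[]>;                    // recent log lines
  readOwnCode(filePath: string): Promise<string>;
  deployPatch(filePath: string, code: string): Promise<void>;
  restartSelf(): Promise<void>;
}

// One pass of the loop: if rate-limit errors exceed a threshold,
// double an assumed backoff constant in the orchestrator source,
// redeploy, and restart. Returns true if a patch was applied.
async function selfImproveOnce(tools: SelfTools): Promise<boolean> {
  const logs = await tools.inspectLogs();
  const rateLimited = logs.filter((line) => line.includes("429")).length;
  if (rateLimited < 5) return false; // below threshold, nothing to do

  const src = await tools.readOwnCode("orchestrator.ts");
  const patched = src.replace(
    /BACKOFF_BASE_MS = (\d+)/,
    (_match, n) => `BACKOFF_BASE_MS = ${Number(n) * 2}`,
  );
  if (patched === src) return false; // nothing recognizable to patch

  await tools.deployPatch("orchestrator.ts", patched);
  await tools.restartSelf();
  return true;
}
```

The hard engineering problems live outside this sketch: verifying the patch compiles, rolling back a bad deploy, and preventing the loop from degrading its own code over time.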

The Competitive Landscape

The autonomous AI agent space has attracted significant capital in recent months. Cognition's Devin, positioned as an AI software engineer, raised at a $2 billion valuation. Manus, a Chinese-built general-purpose agent, reportedly reached $125 million in revenue within nine months of launch.

Shela occupies a different position. Where Devin focuses exclusively on code generation at $500 per month, and Manus offers 29 tools with a team-built infrastructure, Shela provides 48 tools across a broader execution surface — DevOps, research, communication, AI generation, and self-management — at a price point of $35 to $179 per month. More significantly, neither Devin nor Manus offers self-hosting or self-improvement capabilities. They are managed services. Shela is infrastructure.

The white-label angle is particularly relevant for the B2B market. Any company can deploy its own branded version of Shela on its own infrastructure — a capability that neither major competitor currently offers.

The Technical Stack

Shela runs on a Hetzner Cloud server in the European Union, ensuring GDPR compliance by default. The frontend is built with React 19 and Tailwind CSS. The backend uses Node.js 22 with tRPC for type-safe API communication and Drizzle ORM for database operations against MySQL 8. The entire system is bundled with esbuild and managed with PM2 for zero-downtime process management.

The AI backbone currently uses Anthropic's Claude (Sonnet) as the primary language model, with an intelligent API key rotation system that distributes requests across multiple keys to manage rate limits. The architecture is model-agnostic by design — Vidmajer is actively implementing a tiered model system that routes simple conversational requests through lighter models while reserving Claude for complex execution tasks, significantly reducing operational costs.
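The two cost controls mentioned here, key rotation and tiered routing, are standard patterns and can be sketched together. The model names and the complexity heuristic below are invented for illustration; the article does not disclose Shela's actual routing rules.

```typescript
// Illustrative sketch: round-robin API key rotation plus a simple
// tier heuristic that sends tool-using or long requests to a
// stronger model. Model names are placeholders, not real products.
class ModelRouter {
  private next = 0;
  constructor(private keys: string[]) {}

  // Distribute requests across keys to spread rate-limit budgets.
  nextKey(): string {
    const key = this.keys[this.next % this.keys.length];
    this.next++;
    return key;
  }

  // Route simple conversational turns to a lighter model, complex
  // execution tasks to the stronger one.
  pickModel(request: { needsTools: boolean; length: number }): string {
    return request.needsTools || request.length > 2000
      ? "strong-model"
      : "light-model";
  }
}
```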

The streaming architecture deserves mention. Shela uses Server-Sent Events (SSE) to stream responses, tool executions, and status updates to the frontend in real time. Users see not just the final answer but the entire execution process — which tool is being invoked, what the output was, and what Shela is thinking at each step. This transparency is a deliberate design choice.
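The SSE wire format itself is simple: each update is an `event:` line naming the event type and a `data:` line carrying the payload, terminated by a blank line. The event shape below (`type`, `tool`, `detail`) is an assumed schema for illustration; only the wire format is standard.

```typescript
// Encode an agent status update as a Server-Sent Event. The
// AgentEvent shape is an illustrative assumption; the
// "event:/data:" framing is the standard SSE format.
interface AgentEvent {
  type: "thinking" | "tool_start" | "tool_output" | "final";
  tool?: string;
  detail: string;
}

function toSSE(ev: AgentEvent): string {
  return `event: ${ev.type}\ndata: ${JSON.stringify(ev)}\n\n`;
}
```

On the browser side, a standard `EventSource` can subscribe to each event type and render the execution timeline the article describes.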

"If you can see what she's doing," Vidmajer explains, "you trust her to do more."

Part of Something Larger

Shela does not exist in isolation. She is the execution engine — the brain — of MEMORIS, a broader ecosystem that Vidmajer describes as "identity infrastructure for the next century." MEMORIS encompasses 12 applications including digital memorials, interactive heritage mapping, time capsules, AI-powered content generation, a social layer, and a virtual reality memorial world built in Unreal Engine.

Within this ecosystem, Shela serves as the orchestration layer. A user can say, "Create a time capsule for my daughter's 18th birthday with a video message and photos from today," and Shela will generate the AI video, select the photos, create the capsule within MEMORIS, schedule the delivery for the exact date, and confirm completion — executing across multiple applications in a single autonomous workflow.

She also powers two standalone products behind the scenes: KynBot, a conversational AI platform for businesses, and MailMind, an intelligent email automation system. Both run on Shela's agent infrastructure without users needing to know that an autonomous AI agent is handling their requests.

What Comes Next

Vidmajer is currently in conversations with investors across Asia-Pacific — Singapore, Hong Kong — and Europe. The pitch is not a typical early-stage story of "we have an idea and need money to build it." Eleven of twelve applications are functional. The infrastructure is live. The code is in production.

The immediate roadmap includes scaling to multi-server architecture for thousands of concurrent users, completing security audits, launching a public beta, and entering the APAC market where demand for AI agent infrastructure is growing fastest.

Whether Shela represents a new paradigm in AI agent architecture — where agents are not just tools but self-maintaining infrastructure — or whether the self-improving capability proves more interesting as a technical demonstration than a business moat, remains to be seen. What is clear is that a stonemason from Ljubljana has, in 135 days, built something that funded teams have not.

The monument, it turns out, is digital. And it is designed to last.


Shela is currently in private beta. For press inquiries, partnership opportunities, or investor relations, contact Gregor Vidmajer at [email protected].

MEMORIS is headquartered at Skyscraper (Nebotičnik), Štefanova ulica 1, 1000 Ljubljana, Slovenia.

Tags: AI Agent, Autonomous AI, Self-Improving, TypeScript, MEMORIS, Shela, Startup