• Joined on 2026-03-31
A caching, resizing image proxy written in Go
Updated 2026-04-24 20:13:19 +02:00
The context development platform. Store, enrich, and retrieve structured knowledge with graph-native infrastructure, semantic retrieval, and portable context cores.
Updated 2026-04-24 15:10:24 +02:00
Cognitive memory for AI agents — FSRS-6 spaced repetition, 29 brain modules, 3D dashboard, single 22MB Rust binary. MCP server for Claude, Cursor, VS Code, Xcode, JetBrains.
Updated 2026-04-24 09:00:00 +02:00
An open-source, privacy-focused alternative to NotebookLM for teams, with no data limits. Join our Discord: https://discord.gg/ejRNvftDp9
Updated 2026-04-24 04:38:33 +02:00
Plano is an AI-native proxy and data plane for agentic apps — with built-in orchestration, safety, observability, and smart LLM routing so you stay focused on your agent's core logic.
Updated 2026-04-24 00:55:07 +02:00
📑 PageIndex: Document Index for Vectorless, Reasoning-based RAG
Updated 2026-04-23 18:00:08 +02:00
Fast, local-first web content extraction for LLMs. Scrape, crawl, extract structured data — all from Rust. CLI, REST API, and MCP server.
Updated 2026-04-23 15:26:31 +02:00
Updated 2026-04-17 22:59:06 +02:00
FlakeStorm — Automated Robustness Testing for AI Agents. Stop guessing whether your agent really works. FlakeStorm generates adversarial mutations and exposes failures that your manual tests and evals miss.
Updated 2026-04-16 03:21:38 +02:00
A vector search SQLite extension that runs anywhere!
Updated 2026-04-08 16:54:33 +02:00
🛡️ The only Open-Source RLM Memory Server with Mathematically Proven Safety. Persistent context, MCP tools & Sentinel Lattice protection for autonomous AI agents.
Updated 2026-04-01 14:52:01 +02:00
A GPU-accelerated general-purpose metaheuristic framework for combinatorial optimization
Updated 2026-03-30 14:42:32 +02:00
I replicated Ng's RYS method and found that duplicating 3 specific layers in Qwen2.5-32B boosts reasoning by 17%, and that duplicating layers 12–14 in Devstral-24B improves logical deduction from 0.22 → 0.76 on BBH — no training, no weight changes, just routing hidden states through the same circuit twice. Tools included. Two AMD GPUs, one evening.
Updated 2026-03-20 02:51:23 +01:00
Hypernetworks that update LLMs to remember factual information
Updated 2026-03-02 05:27:33 +01:00