Feed aggregator

Open Cloud Coalition survey, commissioned ahead of the CMA’s decision on measures against the two hyperscale giants, finds competing cloud providers demand regulation

Computer Weekly Feed - Fri, 03/06/2026 - 11:28am
Categories: Computer Weekly

Show HN: Anchor Engine – Deterministic Semantic Memory for LLMs Local (<3GB RAM)

Hacker News - Fri, 03/06/2026 - 11:27am

Anchor Engine is ground truth for personal and business AI. A lightweight, local-first memory layer that lets LLMs retrieve answers from your actual data—not hallucinations. Every response is traceable, every policy enforced. Runs in <3GB RAM. No cloud, no drift, no guessing. Your AI's anchor to reality.

We built Anchor Engine because LLMs have no persistent memory. Every conversation is a fresh start—yesterday's discussion, last week's project notes, even context from another tab—all gone. Context windows help, but they're ephemeral and expensive. The STAR algorithm (Semantic Traversal And Retrieval) takes a different approach. Instead of embedding everything into vector space, STAR uses deterministic graph traversal. But before traversal comes atomization—our lightweight process for extracting just enough conceptual structure from text to build a traversable semantic graph.

*Atomization, not exhaustive extraction.* Projects like Kanon 2 are doing incredible work extracting every entity, citation, and clause from documents with remarkable precision. That's valuable for document intelligence. Anchor Engine takes a different path: we extract only the core concepts and relationships needed to support semantic memory. For example, "Apple announced M3 chips with 15% faster GPU performance" atomizes to nodes for [Apple, M3, GPU] and edges for [announced, has-performance]. Just enough structure for retrieval, lightweight enough to run anywhere.
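The atomization step described above can be sketched in a few lines. This is an illustrative toy, not the actual Anchor Engine code: the `atomize` helper and adjacency-list representation are assumptions for the example sentence.

```python
# Hypothetical sketch of atomization: keep only the core concepts
# (nodes) and relationships (labeled edges) needed for retrieval.
def atomize(nodes, edges):
    """Build an adjacency-list semantic graph from nodes and (src, label, dst) edges."""
    graph = {node: [] for node in nodes}
    for src, label, dst in edges:
        graph[src].append((label, dst))
    return graph

# "Apple announced M3 chips with 15% faster GPU performance"
graph = atomize(
    nodes=["Apple", "M3", "GPU"],
    edges=[("Apple", "announced", "M3"), ("M3", "has-performance", "GPU")],
)
print(graph["Apple"])  # [('announced', 'M3')]
```

The point is the size of the result: three nodes and two edges stand in for the whole sentence, which is what keeps the graph small enough to traverse offline.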

The result is a graph that's just rich enough for an LLM to retrieve relevant context, but lightweight enough to run offline in <3GB RAM—even on a Raspberry Pi or in a browser via WASM.

*Why graph traversal instead of vector search?*

- Embeddings drift over time and across models
- Similarity scores are opaque and nondeterministic
- Vector search often requires GPUs or cloud APIs
- You can't inspect why something was retrieved

STAR gives you deterministic, inspectable results. Same graph, same query, same output—every time. And because the graph is built through atomization, it stays small and portable.
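"Same graph, same query, same output" boils down to removing every source of nondeterminism from the walk. A minimal sketch, assuming an adjacency-list graph like the atomization example and a breadth-first strategy with sorted neighbor order (the real STAR traversal is specified in the whitepaper, not here):

```python
from collections import deque

def traverse(graph, start, max_hops=2):
    """Deterministic BFS: neighbors are visited in sorted order,
    so the same graph and start node always yield the same list."""
    seen, order = {start}, [start]
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for _label, neighbor in sorted(graph.get(node, [])):
            if neighbor not in seen:
                seen.add(neighbor)
                order.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return order

graph = {"Apple": [("announced", "M3")], "M3": [("has-performance", "GPU")], "GPU": []}
print(traverse(graph, "Apple"))  # ['Apple', 'M3', 'GPU'] -- every time
```

Because the order is fixed by the data rather than by floating-point similarity scores, you can inspect exactly why each node was retrieved: it is reachable from the query node within `max_hops` edges.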

*Key technical details:*

- Runs entirely offline in <3GB RAM. No API calls, no GPUs.
- Compiled to WASM – embed it anywhere, including browsers.
- Recursive architecture – we used Anchor Engine to help write its own code. The dogfooding is real: what would have taken months of context-switching became continuous progress. I could hold complexity in my head because the engine held it for me.
- AGPL-3.0 – open source, always.

*What it's not:* It's not a replacement for LLMs or vector databases. It's a memory layer—a deterministic, inspectable substrate that gives LLMs persistent context without cloud dependencies. And it's not a competitor to deep extraction models like Kanon 2; they could even complement each other (Kanon 2 builds the graph, Anchor Engine traverses it for memory).

*The whitepaper* goes deep on the graph traversal math and includes benchmarks vs. vector search: https://github.com/RSBalchII/anchor-engine-node/blob/d9809ee...

If you've ever wanted LLM memory that fits on a Raspberry Pi and doesn't hallucinate what it remembers—check it out, and I'd love your feedback on where graph traversal beats (or loses to) vector search.

We're especially interested in feedback from people who've built RAG systems, experimented with symbolic memory, or worked on graph-based AI.

Comments URL: https://news.ycombinator.com/item?id=47277084

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Decidel – A Hacker News client for iOS with smart summaries and filtering

Hacker News - Fri, 03/06/2026 - 11:23am

I've been reading HN every day for months and always wished the experience was a bit smarter: less noise, more signal, without losing the depth that makes HN worth reading. Decidel is what I built to fix that. It's an iOS client with AI-powered thread summaries, semantic topic filtering (mute topics you don't care about), threaded discussions, offline reading, and export to Markdown, Notion, or Obsidian. You bring your own API key. This is a rapid first release; a web version is in the works. Happy to answer any questions, and I'd genuinely appreciate any feedback, especially from daily HN readers.

App Store https://apps.apple.com/app/decidel/id6759561178

Comments URL: https://news.ycombinator.com/item?id=47277018

Points: 1

# Comments: 1

Categories: Hacker News

Don't Get Distracted

Hacker News - Fri, 03/06/2026 - 11:23am
Categories: Hacker News

Shell Basics

Hacker News - Fri, 03/06/2026 - 11:14am

Article URL: https://shell.nuts.services/

Comments URL: https://news.ycombinator.com/item?id=47276863

Points: 1

# Comments: 0

Categories: Hacker News
