Hacker News

Ask HN: OpenClaw vs. Claude Cowork – local skills vs. MCP integrations?

Hacker News - Sun, 02/08/2026 - 9:49am

Been using both OpenClaw and Claude Cowork for automating workflows and noticed they take fundamentally different approaches to extensibility. OpenClaw relies on local skills — scripts that run on your machine, read files, control browsers, execute shell commands. Powerful for local automation, but everything runs in your environment and you're limited to what someone has written as a skill. Claude Cowork supports MCP (Model Context Protocol) servers, which opens up a completely different model. With something like Composio/Rube, Cowork can directly interact with 500+ apps — Slack, GitHub, Google Workspace, Twitter, Notion, CRMs — all through authenticated API connections. No scraping, no brittle browser automation, just native tool calls. It can also chain these together: read a GitHub PR, summarize it in Slack, create a follow-up task in Asana, all in one workflow. The gap feels significant. OpenClaw gives you a self-hosted Swiss Army knife for local tasks. Claude Cowork with MCP gives you an orchestration layer that talks to your entire SaaS stack natively. For those using either or both — is the MCP approach as much of a leap forward as it seems? Or does the self-hosted flexibility of OpenClaw still win for certain use cases?
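The chained workflow described above (read a GitHub PR, summarize it in Slack, file a follow-up task) can be sketched as plain sequenced tool calls. Everything here is illustrative: the function names and payloads are hypothetical stand-ins, not Composio's or any MCP server's real API.

```python
# Minimal sketch of the tool-chaining pattern: each stub stands in for an
# authenticated connector an MCP server would expose as a native tool call.

def get_pr(repo, number):          # stand-in for a GitHub connector tool
    return {"title": "Fix race in worker pool", "body": "Adds a mutex around the queue."}

def post_message(channel, text):   # stand-in for a Slack connector tool
    return {"ok": True, "channel": channel, "text": text}

def create_task(project, name):    # stand-in for an Asana connector tool
    return {"project": project, "name": name}

def review_workflow(repo, number):
    """Chain three native tool calls: read a PR, summarize to Slack, file a task."""
    pr = get_pr(repo, number)
    summary = f"PR #{number} ({pr['title']}): {pr['body'][:80]}"
    post_message("#eng", summary)
    return create_task("Follow-ups", f"Review feedback for PR #{number}")

task = review_workflow("acme/app", 42)
```

In a real MCP setup the model emits these calls itself; the point is that each step is a typed API call rather than a scraped page or a scripted browser.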

Comments URL: https://news.ycombinator.com/item?id=46934662

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Curated collection of 70+ papers on computational morphology

Hacker News - Sun, 02/08/2026 - 9:46am

I've put together a curated collection of computational morphology papers, organized by venue and year, with BibTeX entries for each paper. PRs are welcome!

https://github.com/akki2825/computational-morphology-lit

Comments URL: https://news.ycombinator.com/item?id=46934636

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: How to get started with robotics as a hobbyist?

Hacker News - Sun, 02/08/2026 - 9:44am

I wanted to find a new hobby, something that involves more physical work than just code. How did you start your journey with robotics, and what's handy to learn first? I know only the basics of embedded programming, and I'd need to brush up on my physics. I don't have a set goal in mind; I'm just exploring for the time being.

Comments URL: https://news.ycombinator.com/item?id=46934622

Points: 1

# Comments: 0

Categories: Hacker News

Tauri

Hacker News - Sun, 02/08/2026 - 9:42am

Article URL: https://v2.tauri.app/

Comments URL: https://news.ycombinator.com/item?id=46934603

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Sediment – Local semantic memory for AI agents (Rust, single binary)

Hacker News - Sun, 02/08/2026 - 9:41am

I've been increasingly relying on AI coding assistants. I recently had my first child, and my coding hours look different now. I prompt between feedings, sketch out ideas while he naps, and pick up where I left off later. AI lets me stay productive in fragmented time. But every session starts from zero.

Claude doesn't remember the product roadmap we outlined last week. It doesn't know the design decisions we already made. It forgot the feature spec we iterated on across three sessions. I kept re-explaining the same things.

I looked at existing memory solutions but never got past the door. Mem0 wants Docker + Postgres + Qdrant; I just want memory, not infrastructure. mcp-memory-service has 12 tools, which is just a complexity tax on every LLM call. And anything cloud-hosted means my codebase context leaves my machine. The setup cost was always too high and privacy was never guaranteed, so I stuck with CLAUDE.md files. They work for a handful of preferences, but each is a flat file injected into context every time. No semantic search, no cross-project memory, no decay, no dedup. It doesn't scale.

So I built Sediment. The entire API is 4 tools: store, recall, list, forget.

I deliberately kept it small. I tried adding tags, metadata, and expiration dates, but every parameter I added made the LLM worse at using it. With just store(content), it just works. The assistant stores things naturally when they seem worth remembering and recalls them when context would help.
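The four-tool surface can be sketched as follows. The signatures are illustrative only (the tool names come from the post, but the real schema lives in the repo), and a substring match stands in for Sediment's actual semantic search so the sketch stays self-contained.

```python
class Memory:
    """Toy in-memory stand-in for Sediment's four tools: store, recall, list, forget."""

    def __init__(self):
        self.items = {}
        self.next_id = 1

    def store(self, content):
        """Save a memory; returns its id."""
        mid, self.next_id = self.next_id, self.next_id + 1
        self.items[mid] = content
        return mid

    def recall(self, query):
        """Real Sediment does semantic search; substring match keeps this runnable."""
        return [c for c in self.items.values() if query.lower() in c.lower()]

    def list(self):
        """Return every stored memory."""
        return list(self.items.values())

    def forget(self, memory_id):
        """Delete a memory by id; returns whether it existed."""
        return self.items.pop(memory_id, None) is not None

mem = Memory()
mid = mem.store("Roadmap: ship offline sync before the billing rework")
hits = mem.recall("roadmap")
```

The design point is that every call has at most one required argument, so the LLM has nothing to get wrong.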

It's made a noticeable difference. My assistant remembers product ideas I brainstormed at 2am, the coding guidelines for each project, feature specs we refined over multiple sessions, and the roadmap priorities I set last month. It remembers across projects too.

I benchmarked it against 5 alternatives to make sure I wasn't fooling myself. 1,000 memories, 200 queries. Sediment returns the correct top result 50% of the time (vs 47% for the next best). When I update a memory, it always returns the latest version. Competitors get this right only 14% of the time. And it's the only system that auto-deduplicates (99% consolidation rate).

Everything runs locally. Single Rust binary, no Docker, no cloud, no API keys.

A few things I expect pushback on:

"4 tools is too few." I tested 8, 12, and more. Every parameter is a decision the LLM makes on every call. Tags alone create a combinatorial explosion. Semantic search handles categorization better because it doesn't require consistent manual labeling.

"all-MiniLM-L6-v2 is outdated." I benchmarked 4 models including bge-base-en-v1.5 (768-dim) and e5-small-v2. MiniLM tied with bge-base on quality but runs 2x faster. The model matters less than you'd think when you layer memory decay, graph expansion, and hybrid BM25 scoring on top.
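One way the layering described above could combine: blend the embedding similarity with a lexical BM25 score, then discount by age. This is a sketch of the general technique; the weights, half-life, and the simplified BM25 (IDF omitted) are made-up illustrations, not Sediment's actual values.

```python
def bm25_score(query_terms, doc_terms, avg_len, k1=1.2, b=0.75):
    """BM25 term-frequency component for one document (IDF omitted for brevity)."""
    score = 0.0
    for t in query_terms:
        tf = doc_terms.count(t)
        denom = tf + k1 * (1 - b + b * len(doc_terms) / avg_len)
        score += (tf * (k1 + 1)) / denom if denom else 0.0
    return score

def decay(age_days, half_life=30.0):
    """Exponential memory decay: weight halves every `half_life` days."""
    return 0.5 ** (age_days / half_life)

def hybrid(semantic_sim, query_terms, doc_terms, avg_len, age_days, alpha=0.7):
    """Blend embedding similarity with lexical BM25, then apply recency decay."""
    lexical = bm25_score(query_terms, doc_terms, avg_len)
    return (alpha * semantic_sim + (1 - alpha) * lexical) * decay(age_days)

# Same content and same embedding similarity, different ages:
fresh = hybrid(0.8, ["roadmap"], ["roadmap", "q3", "billing"], 5.0, age_days=1)
stale = hybrid(0.8, ["roadmap"], ["roadmap", "q3", "billing"], 5.0, age_days=90)
```

With layers like these doing most of the ranking work, swapping MiniLM for a larger embedding model moves the needle less than the raw embedding benchmarks suggest.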

"Mem0 supports temporal reasoning too." Mem0's graph variant handles conflicts via LLM-based resolution (ADD/UPDATE/DELETE) on each store, which requires an LLM call on every write. Their benchmarks use LOCOMO, a conversational memory dataset that tests a different use case than developer memory retrieval. The bigger issue is that there's no vendor-neutral, open benchmark for comparing memory systems. Every project runs its own evaluation on its own dataset. That's why I open-sourced the full benchmark suite: same dataset, same queries, reproducible by anyone. I'd love to see other tools run it too.

Benchmark methodology: 1,000 developer memories across 6 categories, 200 ground-truth queries, 50 temporal update sequences, 50 dedup pairs.

Landing page: https://sediment.sh

GitHub: https://github.com/rendro/sediment

Benchmark suite: https://github.com/rendro/sediment-benchmark

Comments URL: https://news.ycombinator.com/item?id=46934582

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Readability API – Unrender

Hacker News - Sun, 02/08/2026 - 9:39am

Article URL: https://unrender.page/

Comments URL: https://news.ycombinator.com/item?id=46934567

Points: 2

# Comments: 1

Categories: Hacker News

Show HN: I built a free, open-source macOS screen recorder with modern features

Hacker News - Sun, 02/08/2026 - 9:32am

I built a screen recorder for macOS because the one I used before broke with macOS 26 and no longer seems well maintained.

I think it turned out pretty cool, so I'm sharing it here:

It uses ScreenCaptureKit and SwiftUI, supports ProRes 4444/HEVC/H.264 including alpha channel and HDR, and records system audio and mic at the same time. No accounts, no analytics. Recordings stay local. MIT licensed.

Feedback and contributions are welcome!

Comments URL: https://news.ycombinator.com/item?id=46934504

Points: 1

# Comments: 0

Categories: Hacker News
