Hacker News


Show HN: Self-Healing AI Agents with Claude Code as Doctor

Tue, 02/10/2026 - 1:16am

I built a 4-tier self-healing system for OpenClaw (AI agent platform running on my Mac Mini 24/7). The interesting part is Level 3: when health checks fail repeatedly, the system spawns Claude Code in a tmux PTY session to autonomously diagnose and repair issues.

Recovery escalation:
- Level 0-1: LaunchAgent KeepAlive + Watchdog
- Level 2: Automated "doctor --fix" (config validation, port checks)
- Level 3: Claude Code spawns in tmux, reads logs, attempts repairs
- Level 4: Discord alert if all automation fails
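For readers curious what such an escalation ladder might look like in code, here is a minimal Python sketch of a tiered recovery loop. All names and checks are hypothetical stand-ins, not the project's actual implementation:

```python
def healthy() -> bool:
    # Placeholder health check; a real system might probe a port or HTTP endpoint.
    return False

def level_2_doctor() -> bool:
    # Level 2: automated "doctor --fix" pass (config validation, port checks).
    return healthy()

def level_3_ai_repair() -> bool:
    # Level 3: a real system might spawn an AI CLI in a detached tmux session,
    # e.g. subprocess.run(["tmux", "new-session", "-d", "claude", "..."]),
    # then re-run the health check after it finishes.
    return healthy()

def level_4_alert() -> bool:
    # Level 4: page a human (e.g. a Discord webhook); never "succeeds" on its own.
    print("alerting a human")
    return False

def recover() -> str:
    """Walk the escalation ladder; return the name of the level that restored health."""
    for name, step in [("doctor", level_2_doctor),
                       ("ai-repair", level_3_ai_repair),
                       ("alert", level_4_alert)]:
        if step():
            return name
    return "unrecovered"
```

Each rung only runs if the rung below it failed, which is what keeps the cheap fixes cheap and the AI repair rare.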

Production-tested in my homelab over 3 months: 99% recovery rate, recovery time reduced from 45min → 3min avg. Handled 17 consecutive crashes, config corruption, port conflicts.

Built for macOS (stable) with Linux systemd support (beta). MIT licensed.

Curious what others think about AI-powered infrastructure self-healing.

Comments URL: https://news.ycombinator.com/item?id=46956003

Points: 3

# Comments: 0

Categories: Hacker News

Show HN: Lacune, Go test coverage TUI

Tue, 02/10/2026 - 1:15am

I’ve been using Zed for a while and missed inline code coverage visualization. Since that feature doesn’t seem to be coming anytime soon, I built Lacune, a TUI for tracking uncovered code in real time.

Comments URL: https://news.ycombinator.com/item?id=46956000

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: MCP Orchestrator – Spawn parallel AI sub-agents from one prompt

Tue, 02/10/2026 - 12:47am

I built an open-source MCP server (TypeScript/Node.js) that lets you spawn up to 10 parallel sub-agents using Copilot CLI or Claude Code CLI.

Key features:
- Context passing to each agent (full file, summary, or grep mode)
- Smart timeout selection based on MCP servers requested
- Cross-platform (macOS, Linux, Windows)
- Headless & programmatic — designed for AI-to-AI orchestration

Example: give one prompt like "research job openings at Stripe, Google, and Meta" — the orchestrator fans it out to 3 parallel agents, each with their own MCP servers (e.g., Playwright for browser), and aggregates results.
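The fan-out/aggregate pattern described above can be sketched in a few lines of Python (the real project is TypeScript/Node.js, and `run_agent` here is a hypothetical stand-in for launching one CLI sub-agent):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(sub_prompt: str) -> str:
    # Hypothetical stand-in for invoking one CLI sub-agent (e.g. via subprocess).
    return f"result for: {sub_prompt}"

def orchestrate(prompt_template: str, targets: list, max_agents: int = 10) -> list:
    # Fan one templated prompt out to parallel workers, then gather the results.
    sub_prompts = [prompt_template.format(t) for t in targets]
    with ThreadPoolExecutor(max_workers=min(max_agents, len(sub_prompts))) as pool:
        return list(pool.map(run_agent, sub_prompts))

results = orchestrate("research job openings at {}", ["Stripe", "Google", "Meta"])
```

The `max_agents` cap mirrors the 10-agent limit mentioned above; `pool.map` preserves target order when aggregating.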

Install: npm i @ask149/mcp-orchestrator

This is a solo side project. Would love feedback on:
- What CLI backends to support next (Aider, Open Interpreter, local LLM CLIs?)
- Ideas for improving the context-passing system
- What MCP server integrations would be most useful

PRs and issues welcome — check CONTRIBUTING.md in the repo.

Comments URL: https://news.ycombinator.com/item?id=46955848

Points: 2

# Comments: 0

Categories: Hacker News

Show HN: Agx – A Kanban board that runs your AI coding agents

Tue, 02/10/2026 - 12:44am

agx is a kanban board where each card is a task that AI agents actually execute.

agx new "Add rate limiting to the API"

That creates a card. Drag it to "In Progress" and an agent picks it up. It works through stages — planning, coding, QA, PR — and you watch it move across the board.
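The stage progression can be pictured as a tiny state machine. This is a Python sketch with hypothetical names, not the project's actual code:

```python
# The columns a card moves through, in order.
STAGES = ["planning", "coding", "qa", "pr"]

def next_stage(current):
    # Advance a card one column; None means the task is finished.
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```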

The technical problems this solves:

The naive approach to agent persistence is replaying conversation history. It works until it doesn't:

1. Prompt blowup. 50 iterations in, you're stuffing 100k tokens just to resume. Costs explode. Context windows overflow.

2. Tangled concerns. State, execution, and orchestration mixed together. Crash mid-task? Good luck figuring out where you were.

3. Black box execution. No way to inspect what the agent decided or why it's stuck.

agx uses clean separation instead:

- Control plane (PostgreSQL + pg-boss): task state, stage transitions, job queue

- Data plane (CLI + providers): actual execution, isolated per task

- Artifact storage (filesystem): prompts, outputs, decisions as readable files

Agents checkpoint after every iteration. Resuming loads state from the database, not by replaying chat. A 100-iteration task resumes at the same cost as a 5-iteration one.
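The checkpoint-and-resume idea can be illustrated with a small Python sketch. A JSON file stands in for the PostgreSQL row, and all names are hypothetical:

```python
import json
import os
import tempfile

def checkpoint(path, iteration, stage, summary):
    # Persist compact state after each iteration (a DB row in a real system).
    with open(path, "w") as f:
        json.dump({"iteration": iteration, "stage": stage, "summary": summary}, f)

def resume(path):
    # Resuming reads the latest checkpoint: constant cost, no chat replay.
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "task-42.json")
for i in range(100):                                  # 100 iterations, but each
    checkpoint(path, i, "coding", f"done step {i}")   # overwrite stays tiny

state = resume(path)
```

The point of the sketch: the resume cost depends on the checkpoint size, not the iteration count, which is why a 100-iteration task resumes as cheaply as a 5-iteration one.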

What you get:

- Constant-cost resume, no context stuffing

- Crash recovery: agent wakes up exactly where it left off

- Full observability: query the DB, read the files, tail the logs

- Provider agnostic: Claude Code, Gemini, Ollama all work

Everything runs locally. PostgreSQL auto-starts via Docker. The dashboard is bundled with the CLI.

Comments URL: https://news.ycombinator.com/item?id=46955833

Points: 2

# Comments: 0

Categories: Hacker News

Why Every Business Must Engage with AI – and How to Do It Right

Tue, 02/10/2026 - 12:43am


AI is no longer an experimental technology. It’s becoming a baseline capability for modern businesses. The real question most teams should be asking is not “should we use AI?” but “how deeply should we engage with it?”

I’ve talked to many founders, CTOs, and operators over the past couple of years. The hesitation around AI usually comes from two places:

Teams that haven’t really tried AI and feel comfortable sticking with existing workflows.

Teams that rushed into AI, spent money, got disappointing results, and walked away.

Both often conclude: “AI isn’t for us.” That conclusion is understandable — but increasingly risky.

Many organizations still rely on manual or semi-manual processes: document handling, internal knowledge search, reporting, customer support triage. Everything appears to “work,” but it’s slow, hard to scale, and dependent on headcount rather than leverage.

AI isn’t magic, but it is a force multiplier. Ignoring it means accepting structural inefficiency while competitors gradually improve speed, quality, and decision-making.

One misconception I see a lot: that engaging with AI means building custom models or hiring a large ML team. In practice, AI today is closer to what spreadsheets or search once were — general-purpose tools that most teams can benefit from without deep specialization.

Instead of treating AI adoption as a yes/no decision, it’s more useful to think in levels.

Level 1: AI literacy Every company should be here. This is about enabling people, not systems: using tools like ChatGPT for research, drafting, summarization, and analysis; teaching teams how to verify outputs; and setting clear rules around sensitive data. Low risk, high return.

Level 2: AI-assisted workflows Here AI becomes part of everyday processes without replacing humans. Examples include internal AI assistants over documentation, AI-supported customer support, content generation, or analytics help. This is where many teams see the best ROI with relatively low complexity.

Level 3: AI-driven systems At this level, AI is embedded into products or core operations: RAG systems, agent workflows, forecasting, personalization. This requires clean data, evaluation, and operational discipline. Many failures happen here not because AI doesn’t work, but because teams skip the earlier foundations.

The biggest risk isn’t “doing AI wrong.” It’s not building AI fluency at all while the rest of the market moves forward.

Once AI systems are in production, new problems appear: cost control, reliability, hallucinations, latency, silent regressions. At that point, AI stops being a demo and becomes infrastructure.

For teams already dealing with production AI systems, we’ve been thinking a lot about observability and reliability in this space. Some of that work is shared here: https://optyxstack.com/ai

Curious how others on HN think about the “depth” question when it comes to AI adoption.

Comments URL: https://news.ycombinator.com/item?id=46955823

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: PicoClaw – lightweight OpenClaw-style AI bot in one Go binary

Tue, 02/10/2026 - 12:38am

I’m building PicoClaw: a lightweight OpenClaw-style personal AI bot that runs as a single Go binary. OpenClaw (Moltbot / Clawdbot) is a great product. I wanted something with a simpler, more “single-binary” architecture that’s easy to read and hack on.

Repo: https://github.com/mosaxiv/picoclaw

Comments URL: https://news.ycombinator.com/item?id=46955793

Points: 2

# Comments: 0

Categories: Hacker News

Flood Fill vs. The Magic Circle

Tue, 02/10/2026 - 12:33am
Categories: Hacker News

Show HN: A CLI tool to automate Git workflows using AI agents

Tue, 02/10/2026 - 12:29am

Hi HN,

I built a CLI tool to automate common git workflows using AI agents (e.g. creating branches, summarizing context, and preparing PRs).

Supported platforms:
- GitHub (via gh)
- GitLab (via glab)

Supported AI agents:
- Claude Code
- Gemini CLI
- Cursor Agent
- Codex CLI

Design goals:
- Agent-agnostic (same commands across different AI agents)
- No MCP or custom prompts required
- Minimal setup (from install to first PR in minutes)

Repo: https://github.com/leochiu-a/git-pr-ai

Feedback and questions welcome.

Comments URL: https://news.ycombinator.com/item?id=46955761

Points: 2

# Comments: 0

Categories: Hacker News

Use AI to find movies and TV shows on your streaming services

Tue, 02/10/2026 - 12:28am

Article URL: https://pickalready.com

Comments URL: https://news.ycombinator.com/item?id=46955757

Points: 2

# Comments: 0

Categories: Hacker News

GenAI Go SDK for AI

Tue, 02/10/2026 - 12:25am
Categories: Hacker News

Show HN: I built an AI-powered late-night call-in radio show from my RV

Tue, 02/10/2026 - 12:22am

I live in an RV in the desert and I built a system that generates AI callers who phone into my late-night talk show. Each caller has a unique voice, name, backstory, job, vehicle, and opinions. They know the local weather, road conditions, and what's happening in the towns around southern New Mexico. Some are recurring characters who call back with updates on their lives.

The stack:
- FastAPI backend running the show control panel
- OpenRouter for LLM (caller personalities, dialog, topics) — mostly Grok and MiniMax
- ElevenLabs / Inworld for TTS with 25+ distinct voices
- Caller personality system with memory — regulars remember past conversations
- Live phone integration via SignalWire so real people can call in too
- Post-production pipeline: stem recording, gap removal, voice compression, music ducking, EBU R128 loudness normalization
- Self-hosted on Castopod, episodes served from BunnyCDN

The callers aren't scripted. The LLM generates their personality and topic, then we have a real conversation. I respond as the host, the AI generates their replies in real time with TTS. The result sounds like actual late-night radio — someone calls at 2 AM to argue about Pluto's planetary status, another calls about their divorce, another has a conspiracy theory about fusion energy. Real callers can dial in live and get mixed in with the AI characters. Nobody knows who's real.

Listen: https://lukeattheroost.com
RSS: Spotify, Apple Podcasts, YouTube
Call in: 208-439-LUKE

The code is a solo project — happy to answer questions about the architecture.

Comments URL: https://news.ycombinator.com/item?id=46955730

Points: 1

# Comments: 0

Categories: Hacker News

An emotional app to figure out your next step

Tue, 02/10/2026 - 12:22am

Article URL: https://www.heyecho.app/

Comments URL: https://news.ycombinator.com/item?id=46955722

Points: 2

# Comments: 0

Categories: Hacker News

Show HN: I built a macOS tool for network engineers – it's called NetViews

Tue, 02/10/2026 - 12:20am

Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:
* ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
* Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.
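The "ARP table first" trick can be sketched in Python: the OS ARP cache (e.g. the output of `arp -a`) already lists neighbors the kernel has resolved, so parsing it finds many hosts with zero probe traffic. The regex and helper below are illustrative only, not NetViews code (which is Swift):

```python
import re

# Matches resolved entries in typical `arp -a` output, e.g.
#   "router (192.168.1.1) at aa:bb:cc:dd:ee:ff on en0 ifscope [ethernet]"
ARP_LINE = re.compile(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-fA-F:]+)")

def parse_arp(arp_output):
    # Map IP -> MAC for every resolved entry; unresolved ones don't match.
    return dict(ARP_LINE.findall(arp_output))

sample = "router (192.168.1.1) at aa:bb:cc:dd:ee:ff on en0 ifscope [ethernet]"
hosts = parse_arp(sample)
# Addresses missing from the cache would then get an active ICMP probe.
```

Note that `arp -a` formatting varies by platform, so a real implementation needs per-OS parsing (or raw sockets, as the post describes).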

Wireless Visibility:
* Detailed Wi-Fi connection performance and signal data.
* Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:
* Connection and ping timelines over 1, 2, 4, or 8 hours.
* Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):
* Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
* mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.

I’d love feedback from anyone who builds or uses network diagnostic tools:
- Does this fill a gap you’ve personally hit on macOS?
- Are there better approaches to scan speed or event visualization that you’ve used?
- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app
There’s a free trial and paid licenses; I’m funding development directly rather than through ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

Comments URL: https://news.ycombinator.com/item?id=46955712

Points: 3

# Comments: 0

Categories: Hacker News
