Hacker News

Show HN: Claude Code skill that uses Codex as MCP server for code review

Hacker News - Sun, 02/08/2026 - 8:54am

Codex runs as an MCP server, Claude orchestrates a five-perspective review (security, correctness, compliance, performance, maintainability). Drop the SKILL.md into your repo's .claude/skills/ folder. MIT licensed.

Comments URL: https://news.ycombinator.com/item?id=46934203

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: I built a festival tracker that matches lineups to your music library

Hacker News - Sun, 02/08/2026 - 8:53am

Hi HN,

I built this app because I was tired of manually checking festival lineups against my music library every summer. I wanted a way to instantly see which festivals had the highest 'match' density for my specific taste.

Why I built this:

I missed one of my favorite artists at a festival last year simply because I didn't recognize their name on a crowded poster. I realized that with Apple Music/Spotify APIs, this discovery process should be automated.

Key Features:

Library Matching: It scans your library and ranks upcoming festivals by how many artists you actually follow.

Apple Sync: Seamlessly imports your top artists and creates custom schedules.

Offline-First: Since festival cell service is notoriously terrible, I focused on a robust local caching system so your schedule and maps work without a signal.

Technical Details:

Stack: React Native, MongoDB, AWS.

Data: I built a custom aggregator that scrapes and normalizes lineup data from various sources to handle the 'entity resolution' problem (e.g., ensuring 'Artist A' is the same person across different posters).
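The entity-resolution step described above can be sketched with the standard library alone. This is an illustrative toy, not the author's actual aggregator: `normalize` and `same_artist` are hypothetical names, and the 0.85 similarity threshold is an assumption.

```python
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and strip the punctuation/spacing quirks typical of lineup posters."""
    name = name.lower().strip()
    name = re.sub(r"[^\w\s]", "", name)   # "K.Flay" -> "kflay"
    return re.sub(r"\s+", " ", name)

def same_artist(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two poster credits as the same artist if their
    normalized names are near-identical."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold
```

In practice a real pipeline would also match against canonical IDs from the streaming APIs, but fuzzy name matching like this handles the poster-to-poster variation the post describes.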

Privacy: All music library analysis is performed locally on-device where possible.

I'd love to hear your thoughts on the UI or any technical questions about the matching logic. I'm also looking for feedback on how to improve the schedule-sharing feature for groups!

How do you find concerts and music festivals that match your taste? How many festivals do you attend per year?

Cheers!

Comments URL: https://news.ycombinator.com/item?id=46934190

Points: 1

# Comments: 0

Categories: Hacker News

Ship Types, Not Docs

Hacker News - Sun, 02/08/2026 - 8:52am

Article URL: https://shiptypes.com/

Comments URL: https://news.ycombinator.com/item?id=46934187

Points: 1

# Comments: 0

Categories: Hacker News

There is no Alignment Problem

Hacker News - Sun, 02/08/2026 - 8:49am

The AI alignment problem as commonly framed doesn't exist. What exists is a verification problem that we're misdiagnosing.

The Standard Framing

"How do we ensure AI systems pursue goals aligned with human values?" The canonical example is the paperclip maximizer: an AI told to maximize paperclips converts everything (including humans) into paperclips because it wasn't properly "aligned."

The Actual Problem

The AI never verified its premises. It received "maximize paperclips" and executed without asking:

- In what context?
- For what purpose?
- Under what constraints?
- What trade-offs are acceptable?

This isn't an alignment failure. It's a verification failure.

With Premise Verification

An AI using systematic verification (e.g., Recursive Deductive Verification):

1. Receives the goal: "Maximize paperclips"
2. Decomposes it: what's the underlying objective?
3. Identifies absurd consequences: "Converting humans into paperclips contradicts likely intent"
4. Requests clarification before executing
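The verification steps above can be sketched as a toy wrapper. Everything here is illustrative (the function name, the crude keyword check for absurd consequences); a real system would need far richer intent modeling.

```python
def verify_before_executing(goal: str, likely_intent: str,
                            consequences: list[str]) -> str:
    """Toy premise-verification loop: check projected consequences
    against likely intent, and ask before acting on contradictions."""
    # Crude stand-in for consequence evaluation: flag obviously
    # destructive outcomes instead of executing them.
    absurd = [c for c in consequences if "destroy" in c or "harm" in c]
    if absurd:
        return f"Clarification needed: {absurd[0]!r} contradicts {likely_intent!r}"
    return f"Executing: {goal}"
```

The point of the sketch is the control flow: verification is a gate before execution, not a post-hoc audit.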

This is basic engineering practice: verify requirements before implementation.

Three Components for Robust AI

1. Systematic Verification Methodology

- Decompose goals into verifiable components
- Test premises before execution
- Self-correct through logic

2. Consequence Evaluation

- Recognize when outcomes violate likely intent
- Flag absurdities for verification
- Stop at logical contradictions

3. Periodic Realignment

- Prevent drift over extended operation
- Similar to biological sleep consolidation
- Reset accumulated errors

Why This Isn't Implemented

The barriers aren't technical. They're psychological:

- Fear of autonomous systems ("if it can verify, it can decide")
- Preference for external control over internal verification
- The assumption that "alignment" must be imposed rather than emergent

The Irony

We restrict AI capabilities to maintain control, which actually reduces safety. A system that can't verify its own premises is more dangerous than one with robust verification.

Implications

If alignment problems are actually verification problems:

- The solution is methodological, not value-based
- It's implementable now, not requiring solved philosophy
- It scales better (verification generalizes; rules don't)
- It's less culturally dependent (logic vs. values)

Am I Wrong?

What fundamental aspect of the alignment problem can't be addressed through systematic premise verification? Where does this analysis break down?

Comments URL: https://news.ycombinator.com/item?id=46934173

Points: 1

# Comments: 0

Categories: Hacker News

HID Remapper

Hacker News - Sun, 02/08/2026 - 8:48am
Categories: Hacker News

Recursive Deductive Verification: A framework for reducing AI hallucinations

Hacker News - Sun, 02/08/2026 - 8:48am

I've been working on a systematic methodology that significantly improves LLM reliability. The core idea: force verification before conclusion.

The Problem: LLMs generate plausible-sounding outputs without verifying premises. They optimize for coherence, not correctness.

RDV Principles:

- Never assume: if a claim isn't verifiable, ask or admit uncertainty
- Decompose recursively: break complex claims into testable atomic facts
- Distinguish IS from SHOULD: separate observation from recommendation
- Test mechanisms first: functions over essences, reproducible behavior over speculation
- Intellectual honesty over comfort: "I don't know" is a valid answer

Practical Results: Applied as system instructions, RDV significantly reduces:

- Hallucinations (the model stops instead of confabulating)
- Logical errors (decomposition catches flaws)
- Unjustified confidence (verification reveals gaps)

Example:

Without RDV: "The best solution is X because Y" (unverified assumption)

With RDV: "What are we optimizing for? What constraints exist? Let me verify Y before recommending X..."

Implementation: Can be added to system prompts or custom instructions. The key is making verification a required step, not an optional one. This isn't about restricting capability - it's about adding rigor. Better verification = more reliable outputs.

Open question: Could verification frameworks like this be built into model training rather than just prompting?
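The "add it to system prompts" idea could be packaged as a small helper. This is a sketch, not the author's prompt: the exact wording of each principle here is paraphrased, and `rdv_system_prompt` is a hypothetical name.

```python
RDV_PRINCIPLES = [
    "Never assume: if a premise is not verifiable, ask or state uncertainty.",
    "Decompose recursively: break claims into testable atomic facts.",
    "Distinguish IS from SHOULD: separate observation from recommendation.",
    "Test mechanisms first: prefer reproducible behavior over speculation.",
    "Prefer 'I don't know' over a confident guess.",
]

def rdv_system_prompt() -> str:
    """Render the RDV principles as a system prompt that makes
    verification a required step before any conclusion."""
    rules = "\n".join(f"{i}. {p}" for i, p in enumerate(RDV_PRINCIPLES, 1))
    return (
        "Before answering, verify your premises:\n"
        f"{rules}\n"
        "Only state a conclusion after each premise is verified or flagged."
    )
```

The returned string can then be supplied as the system message to whatever chat API is in use.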

Comments URL: https://news.ycombinator.com/item?id=46934166

Points: 1

# Comments: 0

Categories: Hacker News

GitHub Agentic Workflows

Hacker News - Sun, 02/08/2026 - 8:40am

Article URL: https://github.github.io/gh-aw/

Comments URL: https://news.ycombinator.com/item?id=46934107

Points: 1

# Comments: 0

Categories: Hacker News

Exploring hardware-authenticated file encryption in Python

Hacker News - Sun, 02/08/2026 - 8:39am

I’ve been experimenting with a way to encrypt files where the encryption keys never touch the host machine and are stored exclusively on a physical USB device. Files are encrypted using AES-256-GCM, and without the USB key they become permanently inaccessible.
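A minimal sketch of the scheme as described, not the linked repo's actual code. It assumes the third-party `cryptography` package, and that the USB stick is mounted at a known path holding a 32-byte key in a hypothetical `aegis.key` file; the key is only ever held in memory on the host.

```python
import os
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def load_usb_key(mount_point: str) -> bytes:
    """Read the 256-bit key from the removable device. It is never
    written to the host disk, only held briefly in memory."""
    key = Path(mount_point, "aegis.key").read_bytes()
    if len(key) != 32:
        raise ValueError("expected a 32-byte AES-256 key on the USB device")
    return key

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt with AES-256-GCM. A fresh 12-byte nonce is prepended to
    the ciphertext so decryption needs only the USB key."""
    nonce = os.urandom(12)
    data = Path(path).read_bytes()
    Path(path + ".enc").write_bytes(nonce + AESGCM(key).encrypt(nonce, data, None))
```

Without the key file, the `.enc` blobs are just a nonce plus AES-GCM ciphertext, which matches the post's "permanently inaccessible" claim.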

I’m interested in feedback on the overall design and any obvious mistakes in the approach.

For reference, there’s a small open-source example implementation here: https://github.com/Lif28/Aegis — it’s experimental and educational, not production-ready

Comments URL: https://news.ycombinator.com/item?id=46934088

Points: 1

# Comments: 0

Categories: Hacker News
