Feed aggregator

Show Music Without Platforms: From Napster to Web3

Hacker News - 7 hours 36 min ago

Article URL: https://fony.space

Comments URL: https://news.ycombinator.com/item?id=44275461

Points: 1

# Comments: 1

Categories: Hacker News

Show HN: CountermarkAI – Protect your website from AI Bots

Hacker News - 8 hours 1 min ago

Hi HN,

I built CountermarkAI, a lightweight anti-scraping and bot-detection tool for content creators and website owners. It’s designed to help protect your work from unauthorized scraping and from AI training that repurposes it without permission.

How It Works:

Use Hashtag – Creators add a unique hashtag to their content as a declaration of ownership.

Protect Website – If you run your own site, simply add a small snippet to your site's <head>. The protect.js script works asynchronously, sending metadata from every page load back to our servers, logging requests, and flagging known AI-training bot user-agents (such as CCBot, GPTBot, ClaudeBot). If a bot is detected, the script outright replaces the page content with “AI bots are forbidden on this site.” The snippet also contains a hidden image that no browser will load but that some bots can be tricked into requesting.

Dashboard – Once you sign up, you can see every bot that entered your domain along with its IP, so you can block them by IP.
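The user-agent flagging described above can be sketched in a few lines. This is a hypothetical illustration, not CountermarkAI's actual code; only the bot names (CCBot, GPTBot, ClaudeBot) and the refusal message come from the post.

```python
import re

# Hypothetical sketch: flag requests whose User-Agent matches the
# AI-training crawlers named in the post.
AI_BOT_PATTERNS = re.compile(r"CCBot|GPTBot|ClaudeBot", re.IGNORECASE)

def is_ai_training_bot(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI crawler."""
    return bool(AI_BOT_PATTERNS.search(user_agent or ""))

def page_body(user_agent: str, original_html: str) -> str:
    """Serve the real page to browsers, the refusal notice to flagged bots."""
    if is_ai_training_bot(user_agent):
        return "AI bots are forbidden on this site."
    return original_html
```

In practice this check would run server-side or in the protect.js snippet; the hidden-image honeypot is a separate signal, since a request for an invisible resource strongly suggests a non-browser client.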

Looking ahead, I'm planning an API rollout that will let you automate bot and IP inquiries and push request metadata directly from your server for seamless integration with your existing systems.

I’d love to hear your feedback on usability and any features you think might help further secure creative work against bot scraping. Thanks for taking the time to check it out!

Cheers, Setas

Comments URL: https://news.ycombinator.com/item?id=44275375

Points: 7

# Comments: 0

Categories: Hacker News

Show HN: Vishu – Model Context Protocol (MCP) Suite

Hacker News - 8 hours 4 min ago

I'm thrilled to introduce Vishu (MCP) Suite, an open-source application I've been developing that takes a novel approach to vulnerability assessment and reporting by deeply integrating Large Language Models (LLMs) into its core workflow.

What's the Big Idea? Instead of just using LLMs for summarization at the end, Vishu (MCP) Suite employs them as a central reasoning engine throughout the assessment process. This is managed by a robust Model Context Protocol (MCP) agent scaffolding designed for complex task execution.

Core Capabilities & How LLMs Fit In:

1. Intelligent Workflow Orchestration: The LLM, guided by the MCP, can:

- Plan and Strategize: Using a SequentialThinkingPlanner tool, the LLM breaks down high-level goals (e.g., "assess example.com for web vulnerabilities") into a series of logical thought steps. It can even revise its plan based on incoming data!

- Dynamic Tool Selection & Execution: Based on its plan, the LLM chooses and executes appropriate tools from a growing arsenal. Current tools include:
  - Port Scanning (PortScanner)
  - Subdomain Enumeration (SubDomainEnumerator)
  - DNS Enumeration (DnsEnumerator)
  - Web Content Fetching (GetWebPages, SiteMapAndAnalyze)
  - Web Searches for general info and CVEs (WebSearch, WebSearch4CVEs)
  - Data Ingestion & Querying from a vector DB (IngestText2DB, QueryVectorDB, QueryReconData, ProcessAndIngestDocumentation)
  - Comprehensive PDF Report Generation from findings (FetchDomainDataForReport, RetrievePaginatedDataSection, CreatePDFReportWithSummaries)

- Contextual Result Analysis: The LLM receives tool outputs and uses them to inform its next steps, reflecting on progress and adapting as needed. The REFLECTION_THRESHOLD in the client ensures it periodically reviews its overall strategy.
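The plan-then-execute loop described above can be sketched as follows. This is a minimal illustration of the pattern, not Vishu's actual code: the tool names come from the post, but the registry, the decorator, and the stub result are my own assumptions.

```python
from typing import Callable

# Hypothetical tool registry; Vishu's real registration happens via
# @mcpServer.tool() in Rizzler.py.
TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as an LLM-callable tool."""
    def wrap(fn: Callable[[str], str]):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("PortScanner")
def port_scanner(target: str) -> str:
    # Stub standing in for a real scan.
    return f"open ports on {target}: 80, 443"

def execute_plan(steps: list[tuple[str, str]]) -> list[str]:
    """Run each (tool, argument) step the planner produced, collecting outputs."""
    return [TOOLS[name](arg) for name, arg in steps]
```

In the real system, each collected output would be fed back to the LLM so it can revise the remaining steps, which is what the reflection threshold governs.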

2. Unique MCP Agent Scaffolding & SSE Framework:

- The MCP-Agent scaffolding (ReConClient.py): This isn't just a script runner. The MCP scaffolding manages "plans" (assessment tasks), maintains conversation history with the LLM for each plan, handles tool execution (including caching results), and manages the LLM's thought process. It's built to be robust, with features like retry logic for tool calls and LLM invocations.

- Server-Sent Events (SSE) for Real-Time Interaction (Rizzler.py, mcp_client_gui.py): The backend (FastAPI based) communicates with the client (including a Dear PyGui interface) using SSE. This allows for:
  - Live Streaming of Tool Outputs: Watch tools like port scanners or site mappers send back data in real time.
  - Dynamic Updates: The GUI reflects the agent's status, new plans, and tool logs as they happen.
  - Flexibility & Extensibility: The SSE framework makes it easier to integrate new streaming or long-running tools and have their progress reflected immediately. The tool registration in Rizzler.py (@mcpServer.tool()) is designed for easy extension.
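The live-streaming behavior described above rests on the SSE wire format, which is simple enough to sketch directly. The event names below are illustrative, not Rizzler.py's actual schema; only the standard `event:`/`data:` framing is taken from the SSE specification.

```python
import json

def sse_event(event: str, payload: dict) -> str:
    """Encode one SSE frame: an event name, a JSON data line, and a blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

def stream_tool_output(lines):
    """Yield each chunk of a long-running tool's output as its own SSE frame."""
    for line in lines:
        yield sse_event("tool_log", {"line": line})
    yield sse_event("done", {})
```

A FastAPI backend would return such a generator via a streaming response, and the GUI client updates its log panel as each frame arrives.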

We Need Your Help to Make It Even Better! This is an ongoing project, and I believe it has a lot of potential. I'd love for the community to get involved:

- Try it Out: Clone the repo, set it up (you'll need a GOOGLE_API_KEY and potentially a local SearXNG instance, etc. – see .env patterns), and run some assessments!
- GitHub Repo: https://github.com/seyrup1987/ReconRizzler-Alpha

Comments URL: https://news.ycombinator.com/item?id=44275368

Points: 1

# Comments: 0

Categories: Hacker News

Stuxnet

Hacker News - 8 hours 36 min ago
Categories: Hacker News

Show HN: VerbaScan – Context-aware image translation

Hacker News - 8 hours 39 min ago

Hi HN,

I built Verbascan.com because I was tired of image translation tools that constantly failed me.

The usual problems:

- Incorrect translations

- Phrases taken out of context

- No support for handwritten text

Verbascan takes a smarter approach — it focuses on meaning, not just swapping words. You upload an image and get a translation that actually makes sense.

Would love your feedback!

Comments URL: https://news.ycombinator.com/item?id=44275235

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: A tool to make AI text undetectable

Hacker News - 8 hours 50 min ago

Hey HN, I wanted to share a project I've been working on, born out of a personal frustration. While working on an essay, I used an AI assistant to help polish my writing and refine some points. The result was good, but I immediately ran into a problem: my university uses AI detection software, and I was worried about my work getting flagged.

My first thought was to find an "AI humanizer" to make the text sound more natural and bypass detection. I tried a bunch of popular online tools, but the results were consistently disappointing. The output often felt clunky, distorted the original meaning, or, ironically, still got flagged by AI detectors.

Convinced there had to be a better way, I started digging into how these detectors work and what linguistic patterns they typically identify. I spent a significant amount of time researching and experimenting with different techniques to rewrite AI-generated content to be more... well, human. After a lot of trial and error, I developed a methodology that I found to be quite effective. It goes beyond simple synonym swaps, focusing on restructuring sentences, varying word choice, and adjusting the cadence to sound more natural, all while carefully preserving the core message.

To make this process easier for myself and others, I've wrapped my method into a simple web tool. It's called Best Humanizer, and you can try it here: https://besthumanizer.net

I built this to solve my own problem, but I'm sharing it today because I think it could be useful to others facing the same issue. I would love to hear your honest feedback and criticism. What do you think of the results? How could it be improved? Thanks for checking it out!
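One pattern AI detectors are commonly said to measure is low "burstiness", i.e. unusually uniform sentence lengths, which the post's focus on varying cadence would counteract. The metric below is my own illustrative assumption, not the tool's actual method.

```python
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Population std deviation of sentence lengths in words.

    Higher variation is one (assumed) signal of human-sounding cadence.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0
```

Perfectly uniform sentences score 0.0; mixing short and long sentences raises the score.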

Comments URL: https://news.ycombinator.com/item?id=44275198

Points: 1

# Comments: 0

Categories: Hacker News
