Malwarebytes Security

Cyber Security Software & Anti-Malware

Developer creates app to detect nearby smart glasses

Wed, 02/25/2026 - 10:48am

An independent developer, moved by reports of smart glasses being abused to film people without their consent, has created an app that detects nearby smart glasses.

Smart glasses are wearable devices built into ordinary-looking eyewear that add functions like audio, cameras, sensors, and sometimes a small display. They can let you listen to music, take calls, capture photos or video from your point of view, or see simple information overlaid in your field of vision, depending on the model. To do this, they pack components such as microphones, touch controls, motion sensors, and sometimes a camera and tiny projector into the frame and arms of the glasses.

Nearby Glasses is an Android hobbyist app that continuously scans for Bluetooth Low Energy “advertising frames”—a type of data—to recognize devices from manufacturers linked to smart glasses, specifically Meta, Luxottica (Meta Ray-Bans), and Snap.

When it sees a matching Bluetooth signature, it sends a notification like “Smart Glasses are probably nearby,” though the developer explicitly warns about false positives, for example from Meta Quest VR headsets. Users install it from Google Play or GitHub, enable foreground scanning, start the scan, and then decide how to respond if an alert appears.

The developer built the app in deliberate defiance of modern surveillance after reading reports about people using Meta’s Ray-Ban smart glasses to secretly film others in massage parlors and during immigration raids. Stalkers and harassers have also misused smart glasses to target people.

In speaking with the outlet 404 Media about the project, developer Yves Jeanrenaud said: “I consider it to be a tiny part of resistance against surveillance tech.”

This kind of app matters most in contexts where covert recording or automated identification has real consequences:

  • For people in vulnerable or stigmatized workplaces (e.g., massage parlors, clinics, shelters) where non-consensual filming can lead to harassment, doxxing, or professional harm.​
  • During law-enforcement or immigration actions, protests, or political gatherings, where smart glasses could be used for evidentiary recording, intimidation, or bulk identification.​
  • In any setting where bystanders reasonably expect not to be recorded or profiled, either because of a sense of privacy or because of the law (public transport, bathrooms, gyms, support groups).

In these scenarios it makes sense to want an extra signal that someone nearby may be using surveillance-capable wearables.

As observed by the reporters at 404 Media, this app is an imperfect, tech-based mitigation to a social and legal problem: it can misfire, it can’t tell you who is being recorded, and it risks giving a false sense of safety. The developer frames it not as a solution but as a small, user-controlled countermeasure in an environment where surveillance devices are becoming more invisible and more AI-augmented.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Categories: Malware Bytes

Reddit, porn sites fined by UK regulators over children’s safety and privacy

Tue, 02/24/2026 - 10:48am

The UK’s online safety and privacy regulators are targeting companies that violate new age verification laws at both ends: porn sites that did not keep children out, and mainstream platforms that profited from children coming in.

On February 23, media regulator Ofcom fined porn operators that failed to put “highly effective” age checks in place. Within the same 24-hour time frame, the Information Commissioner’s Office (ICO)—which is a separate, independent regulatory office—hit Reddit with an $18.2 million fine for unlawfully using children’s personal data for targeted advertising and recommendation systems.​​

Together, the cases show how quickly the UK is moving from guidance and “codes” to very real enforcement for services that don’t take children’s rights seriously.

Porn sites punished for weak age checks

Under the UK’s Online Safety Act 2023, services that publish or host pornography must use “highly effective” age assurance to stop children from accessing pornographic content. That means the classic splash-page warning or an “I am over 18” tick box is no longer enough.

Porn companies have reacted in different ways as the rules began to take hold: Some big players embraced more intrusive checks; others, like Pornhub, geo‑restricted or partially withdrew from the UK, and a minority effectively called the regulator’s bluff and carried on with token measures. Ofcom is now going after that last group.​

On Monday, Ofcom fined US‑based porn operator 8579 LLC around $1.8 million for failing to implement proper age verification on its sites and for dragging its heels on compliance. The company has also been ordered to hand over a complete list of all websites it operates, with an extra daily penalty if it fails to comply.​

Ofcom said it opened investigations into dozens of adult sites and will impose fines and daily penalties until proper age checks are in place, and reports that more than 6,000 porn sites have now moved to “highly effective” age checks. It can also employ certain business‑disruption and blocking powers for stubborn operators that continue to violate the law.​

Reddit fined for using children’s data

Meanwhile, the Information Commissioner’s Office (ICO) took aim at something different, but very much related: how mainstream online platforms treat the children they already have, rather than how they keep children out.

The Office’s latest decision imposes a substantial fine on Reddit for “children’s privacy failures,” after the regulator concluded the platform unlawfully used UK children’s personal data for targeted advertising and profiling. The decision follows a multi‑year push by the ICO to enforce its Age Appropriate Design Code (also known as the Children’s code) and to crack down on platforms that treat under‑18s like just another audience segment.​

The ICO said its investigation found that Reddit:

  • Failed to effectively identify and protect under‑18s on the platform, despite knowing that children were present in large numbers.
  • Used children’s personal information in recommender systems and targeted advertising, without adequate safeguards or a lawful basis under UK data protection law.
  • Did not ensure that the “best interests of the child” were a primary consideration in its design and data‑use decisions, as required by the Children’s code.

The regulator’s concerns about Reddit are not new. In 2025 it announced investigations into TikTok, Reddit, and Imgur that focused on how the companies use UK children’s data and what age‑assurance tools they rely on. By late 2025, the ICO had issued a notice of intent to fine Reddit, signaling that it believed serious breaches had occurred. The new $18.2 million fine is the outcome of that process.

The message from Information Commissioner John Edwards is blunt: Social media and video‑sharing platforms are welcome in the UK, but “this cannot be at the expense of children’s privacy,” and the responsibility to keep children safe “lies firmly at the door of the companies offering these services.”

Two regulators, one child‑safety push

Although Ofcom and the ICO have different remits, their actions line up neatly.

  • Ofcom enforces the Online Safety Act, which focuses on harmful content and requires robust age assurance for porn and other high‑risk services.​​
  • The ICO enforces UK GDPR and the Children’s code, which focus on how children’s personal data is collected, used, and shared.

The regulators have explicitly said they will work closely together to coordinate their efforts on children’s safety. In practical terms, that means:

  • A porn site that implements “highly effective” age checks to satisfy Ofcom may also find itself under ICO scrutiny if its identity checks or data sharing do not respect data protection law.
  • A social platform that complies with the Children’s code by turning off profiling for children and tightening privacy defaults may still need Ofcom‑compliant age assurance if it hosts adult or otherwise high‑risk content.

The overlap is already visible. The ICO investigated how Reddit and other platforms use age assurance and recommender systems, while Ofcom set out specific guidance on acceptable age‑verification methods under the Online Safety Act.

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​
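
To make the double-blind idea concrete, here is a hedged Python sketch of the token flow. It uses a shared-secret HMAC purely for illustration; a real scheme would use public-key or blind signatures so that sites can verify tokens without holding the issuer's key, and so the issuer cannot link tokens to the sites where they are used. All names and fields are illustrative.

```python
# Illustrative double-blind age token: the issuer signs a bare "over18"
# claim with no identity attached and no record of the destination site.
# VERIFIER_KEY is a placeholder; real systems would not share this secret.
import base64, hashlib, hmac, json

VERIFIER_KEY = b"demo-secret-key"

def issue_token(issued_at: int) -> str:
    """Sign a bare 'over18' claim: no name, no document, no target site."""
    claim = json.dumps({"claim": "over18", "iat": issued_at}).encode()
    sig = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claim).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str, now: int, max_age: int = 600) -> bool:
    """A site checks signature and freshness; it learns only '18+'."""
    try:
        claim_b64, sig_b64 = token.split(".")
        claim = base64.urlsafe_b64decode(claim_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return False
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(claim)
    return payload.get("claim") == "over18" and 0 <= now - payload["iat"] <= max_age
```

The short expiry window matters: a token that proves only “18+, issued recently” cannot be replayed indefinitely or tied back to a browsing history.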

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that sensitive data ends up in the wrong hands.​



Roblox gives predators “powerful tools” to target children, says LA County

Tue, 02/24/2026 - 10:22am

Los Angeles County has sued online gaming company Roblox, adding to a series of suits that accuse the virtual worlds platform of misleading parents into thinking it’s safe while leaving children exposed to predators and sexually explicit content. The February 19 filing makes LA County the first California government body to take the company to court over child safety.

Roblox claims over 151 million daily users, most of whom are kids. The company said it disputes the claims and will defend itself vigorously.

What the suit tells us about how predators operate

According to the complaint, Roblox violated California’s Unfair Competition Law and False Advertising Law. County Counsel Dawyn R. Harrison, who filed the lawsuit, said that the gaming platform has repeatedly exposed kids to sexually explicit material, grooming, and exploitation because it has chosen profit over safety.

“This is not about a minor lapse in safety,” Harrison said in a prepared press release. “It is about a company that gives pedophiles powerful tools to prey on innocent and unsuspecting children.”

Until November 2024, anyone could friend and message a child on the platform, the suit said. When Roblox changed those rules it was allegedly still possible for accounts registered with ages over 13 to message each other without having previously been connected, meaning that adults could still message teens who didn’t know them.

The suit also alleged that it’s easy for predators to masquerade as children on the site, because age has historically been self-reported with no enforcement of parental approval when kids sign up.

But Roblox’s approach to age verification changed last September, when the company announced plans to use age estimation on all users who wanted to use the platform’s communication features. It then introduced the third-party Persona system, which requires a facial age check to use chat features. But Persona itself has become a problem.

Researchers recently discovered an exposed frontend revealing the tool does far more than check ages, including running facial recognition against watchlists. It can also hold on to personal data including government IDs, device fingerprints, and biometric information for up to three years. Discord has already walked away from Persona, but Roblox hasn’t.

Even setting the vendor aside, the safeguards aren’t working as advertised. When Malwarebytes researchers created an account for a child under 13 on Roblox in December 2025, they found that the child account could find communities linked to cybercrime and fraud-related keywords.

The complaint contains many allegations about the type of behavior that has occurred on Roblox, including:

  • The simulated rape of a seven year-old’s avatar in a digital playground environment
  • “Diddy” games that recreated some events from the imprisoned rap star’s parties
  • The creation of Jeffrey Epstein-themed accounts, and the operation of a game called “Escape to Epstein Island”
  • Virtual strip clubs where avatars can disrobe and give lap dances

The LA County complaint also mentioned a report from financial forensic research company Hindenburg Research published in October 2024. The firm, a short seller that profits by betting against the stock of companies it investigates, said that it had found multiple groups on the site trading child sexual abuse material and soliciting sexual favors. The report also alleged that Roblox was cutting safety spending even as problems mounted.

A former senior product designer allegedly told Hindenburg the trade-off was deliberate. “If you’re limiting users’ engagement, it’s hurting your metrics…in a lot of cases, the leadership doesn’t want that,” the product designer allegedly said, according to the lawsuit.

A cacophony of cases

This is not the only case Roblox has had to defend. In 2022, the Social Media Victims Law Center filed suit against the company for allegedly touting child safety while allowing the exploitation of a young girl. The following year, multiple families filed suit against the gaming company for allegedly misleading them about content harmful to children. Last year, the mother of a 15-year-old boy from Texas sued Roblox after he died by suicide. The complaint alleged that he was groomed and subsequently blackmailed over nude pictures he’d been persuaded to send a predator on the site.

Another lawsuit filed against the company in San Mateo in February 2025 claimed that a 27-year-old predator reached a 13-year-old boy through the platform’s “whisper” messaging system. That case described the platform as “a digital and real-life nightmare for children.”

The California suit joins an expanding pile of government cases against Roblox. Louisiana sued the company in August 2025, followed by Kentucky (October 2025), Texas (November 2025), and Florida (December 2025). Georgia’s Attorney General is also investigating the company. And a collection of separate private suits against the company have been consolidated into a single multi-district litigation.

What parents can do

So, what can parents do? Interestingly, one potential answer came last year when the company’s CEO Dave Baszucki spoke with the BBC:

“My first message would be, if you’re not comfortable, don’t let your kids be on Roblox.”

If you do want to let your children use Roblox (or any other site), then close monitoring is important. Restrict friend requests and disable open chat to the extent that the platform allows. Anonymize your children’s profiles: in an earlier lawsuit, one family claimed they had to move across the country after a predator reportedly tracked down their child’s address via Roblox.

Child education is key. Tell your children not to reveal personal information and not to take conversations off-platform, because that’s where exploitation escalates. And keep the conversation going, not as a one-time lecture, but as a regular part of talking about their day.

For more information about child safety, check out Malwarebytes’ research on the topic, which also offers useful advice.

LA County is seeking civil penalties of up to $2,500 per violation per day, plus injunctive relief that could force structural changes to how the platform operates.

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.


Fake Zoom meeting “update” silently installs surveillance software

Tue, 02/24/2026 - 4:47am

A fake Zoom meeting website is silently pushing surveillance software onto Windows machines. Visitors land on a convincing imitation of a Zoom video call. Moments later, an automatic “Update Available” countdown downloads a malicious installer—without asking for permission.

The software being installed is a covert build of Teramind, a commercial monitoring tool companies use to record what employees do on work computers. In this campaign, it is being quietly dropped onto the machines of ordinary people who thought they were joining a meeting.

You clicked a Zoom link but there was no meeting

The whole operation starts at uswebzoomus[.]com/zoom/, a website that opens as a Zoom waiting room. The moment it loads, it quietly sends a message back to the attackers letting them know someone has arrived.

Three scripted fake participants—“Matthew Karlsson,” “James Whitmore,” and “Sarah Chen”—appear to join the call one by one, each announced by a genuine-sounding Zoom join chime. Their conversation audio loops on repeat in the background.

The page behaves differently if no one interacts with it. The audio and meeting sequence only begin once a real person clicks or types. Automated security tools that scan suspicious pages without interacting may see nothing unusual.

A permanent “Network Issue” warning is displayed over the main video tile. This is not a glitch: the page is hardcoded to always show it. The choppy audio and lagging video are entirely deliberate, and they serve a specific psychological purpose. A visitor sitting through a broken call will naturally assume something is wrong with the app. When an “Update Available” prompt appears moments later, it feels like the fix.

The countdown nobody asked for

Ten seconds after the meeting screen appears, a pop-up takes over: “Update Available — A new version is available for download.” A spinner turns and a counter ticks from five to zero. There is no close button.

By this point the visitor has already sat through a frustrating, glitchy call—and a software update is exactly what they have been primed to want. The pop-up arrives not as a surprise, but as an answer.

When the counter hits zero, the browser is instructed to silently download a file. At the exact same moment, the page switches to what looks like the Microsoft Store showing “Zoom Workplace” mid-installation, spinning and all. While the visitor watches what appears to be a legitimate install resolving the problem, the real installer has already landed in their Downloads folder—and it didn’t ask for permission at any point.

A Zoom update with Teramind inside

The downloaded file is called zoom_agent_x64_s-i(__941afee582cc71135202939296679e229dd7cced) (1).msi. It’s a standard Windows installer format. Its unique digital fingerprint is 644ef9f5eea1d6a2bc39a62627ee3c7114a14e7050bafab8a76b9aa8069425fa.

The filename itself is telling: the string s-i(__) is Teramind’s own naming convention for a stealth instance installer, with the hash after it identifying the specific attacker-controlled Teramind account the agent will report back to.

Security analysis of the file’s contents revealed two particularly telling pieces of text hidden inside it: Agent version 26.3.3403 and a field labelled Server IP or host name. These fields confirm the installer was preconfigured to connect to an attacker-controlled Teramind server.

The installer executes through Windows Installer without presenting a typical interactive consumer installation interface. The person being set up for surveillance has no idea it is happening.

Built to be invisible

Inside the installer’s internal build files—notes left over from the development process that are normally only seen by the software’s authors—the folder name out_stealth appears in the build path. This is not a coincidence. Teramind sells a dedicated “stealth mode” deployment option, specifically designed so the agent runs with no visible presence: no icon in the taskbar, no entry in the system tray, no trace in the list of installed programs.

In this version of the Windows agent, Teramind’s MSI defaults to naming the agent binary dwm.exe and installs it under a ProgramData\{GUID} directory. This behavior is documented by the vendor and can be changed using the TMAGENTEXE installer parameter.

During installation, the software assembles itself in stages. Several Teramind components are unpacked into temporary directories during installation. These intermediate files are not individually signed, which can sometimes trigger security tooling during analysis. The installation chain first confirms whether Teramind is already on the machine, then collects the computer’s name, the current user account, the keyboard language, and the system locale. These are the details Teramind needs to identify the device and begin reporting activity back to whoever deployed it.

The agent is configured to communicate with a remote Teramind server instance, consistent with enterprise monitoring deployments.

Designed to fool the tools that would catch it

One of the most deliberate aspects of this installer is how hard it works to avoid being analysed. Security researchers examine suspicious software in controlled “sandbox” environments (essentially isolated virtual machines where the software can run safely while being watched). This installer is built to detect exactly that situation and behave differently.

Runtime analysis flags indicate the presence of debug and environment detection logic (DETECT_DEBUG_ENVIRONMENT). The installer performs checks consistent with identifying analysis or sandbox environments and may alter its behavior under those conditions.

Once installation completes, the installer removes its temporary files and staging folders. That means by the time someone checks the machine, obvious traces of the installer may already be gone. The monitoring agent itself, however, continues running in the background.

Why Teramind makes this campaign unusually dangerous

Teramind is a legitimate product. Businesses pay for it to monitor staff on company-owned devices: it logs every keystroke, takes screenshots at regular intervals, records which websites were visited and which applications were opened, captures clipboard contents, and tracks email and file activity.

In a corporate context, with employees informed and policies in place, this is legal. That same capability, installed secretly on a personal machine, is something else entirely.

The attackers did not write custom malware. They deployed a professionally developed commercial product that is designed to run reliably and persist through restarts. That makes it more durable than many traditional malware strains.

Because the files themselves belong to legitimate software, traditional antivirus tools that look only for known malicious code may not flag them. Context matters. When monitoring software is installed without consent on a personal device, it fits the category often described as stalkerware—software used to monitor someone without their knowledge.

What to do if you may have been affected

If you visited uswebzoomus[.]com/zoom/ and a file with the name above was downloaded:

Do not open it.

If you already ran it, treat your device as compromised.

Check for the installation folder:

  1. Open File Explorer.
  2. Navigate to C:\ProgramData.
  3. Look for a folder named {4CEC2908-5CE4-48F0-A717-8FC833D8017A}.

ProgramData is hidden by default. In File Explorer, select View and enable “Hidden items.”

Check whether the service is running:

  1. Open Command Prompt as administrator.
  2. Type: sc query tsvchst
  3. Press Enter.

If it shows STATE: 4 RUNNING, the agent is active. If the service does not exist, it was not installed using the default configuration.
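
If you prefer to script the check, the following Python sketch wraps the same `sc query` call and looks for a RUNNING state line. It is a convenience based on the manual steps above, not part of any official removal tool, and it only works on Windows.

```python
# Helper for the manual check above: run `sc query` for the suspected
# service and report whether its STATE line shows RUNNING.
import subprocess

def state_is_running(sc_output: str) -> bool:
    """Return True if an `sc query` STATE line reports RUNNING."""
    for line in sc_output.splitlines():
        parts = line.strip().split(":", 1)
        if len(parts) == 2 and parts[0].strip() == "STATE":
            return "RUNNING" in parts[1]
    return False

def check_service(name: str = "tsvchst") -> bool:
    """Run the query (Windows only) and parse the result."""
    result = subprocess.run(["sc", "query", name],
                            capture_output=True, text=True)
    return state_is_running(result.stdout)
```

If `check_service()` returns False because the service does not exist, that only rules out the default configuration, not a renamed agent.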

Change passwords for important accounts—email, banking, and work—from a different, clean device.

If this happened on a work computer, contact your IT or security team immediately.

To avoid similar attacks in the future:

  • Open Zoom directly from the app on your device.
  • Type zoom.us into your browser yourself instead of clicking unexpected links.
  • Be cautious with meeting links you were not specifically expecting.

Closing thoughts

There is a quiet but growing trend of attackers reaching for legitimate commercial software rather than building their own. Tools like Teramind arrive on a machine carrying the credibility of a real company’s product—and that credibility is exactly what makes them useful to someone deploying them without permission.

This campaign does not rely on technical sophistication. No new hacking technique was used. The attacker built a convincing fake Zoom page, set an automatic download to fire before any visitor has a reason to be suspicious, and used a fake Microsoft Store screen to explain it all away. From click to install takes less than thirty seconds. Someone who was expecting a Zoom invite and saw what looked like a Microsoft installation in progress could easily walk away believing nothing unusual had happened.

Zoom is frequently impersonated because people receive meeting links through email, text, Slack, and calendar invites—and click quickly. Taking five seconds to confirm a link really leads to zoom.us is a simple habit that can prevent a serious problem.

Indicators of Compromise (IOCs)

File Hashes (SHA-256)

  • 644ef9f5eea1d6a2bc39a62627ee3c7114a14e7050bafab8a76b9aa8069425fa

Domains

  • uswebzoomus[.]com

Teramind Instance ID

  • 941afee582cc71135202939296679e229dd7cced

Refund scam impersonates Avast to harvest credit card details

Tue, 02/24/2026 - 3:28am

A fraudulent website dressed in Avast’s brand is tricking French-speaking users into handing over their full credit card details—card number, expiry date, and three-digit security code—under the cover story of processing a €499.99 refund that was never owed to them.

The operation combines live chat “support,” a hardcoded alarming transaction amount, and a convincing replica of Avast’s visual identity to create urgency and harvest payment data at scale.

“You were charged €499.99 today”

The phishing page opens with what appears to be a legitimate Avast web portal. The Avast logo is loaded directly from Avast’s own content delivery network—a deliberate touch that ensures the orange-and-white shield renders perfectly and passes a casual visual check. The page header offers links to “Home,” “My Account,” and “Help,” all styled to match Avast’s real interface.

Below the header, a warning box in Avast’s signature orange catches the eye: cancellation requests must be filed within 72 hours, it says. Then, in the same breath, it warns that transactions older than 48 hours “can no longer be cancelled.” The internal contradiction is easy to miss when your attention is fixed on the larger claim just below it.

That claim is a transaction record showing today’s date and a debit of -€499.99. The date is not hardcoded. A single line of JavaScript reads the visitor’s local system clock and writes the current date into the page at load time. Whenever a victim arrives, whether on a Tuesday in February or a Friday in August, the charge appears to have happened that very morning.

The amount, however, is fixed. Every visitor sees exactly -€499.99, a sum carefully chosen to be large enough to provoke immediate action but not so large as to strain credibility for a software subscription.

There is no real transaction. No Avast account has been accessed. The number exists solely to make the visitor feel robbed.

What the form asks for and where the data goes

The cancellation form below asks for a reason for the refund (a dropdown offers “Avast refund,” “Fraudulent transaction,” “Duplicate transaction,” and “Other”), followed by a full set of personal information: first name, last name, email address, phone number, street address, city, region, and postal code. Filling in this section is framed as routine identity verification—necessary, the page implies, before any refund can be processed.

Once the form is submitted, a modal dialogue appears titled “Card Information.” The page asks for the victim’s credit card number, expiry date, and CVV security code, supposedly so the refund can be credited back to the original payment method.

Fake Avast site: Request for victim’s card information

This is the moment the operation has been building toward.

The page even implements Luhn algorithm validation (the mathematical check banks use to verify card numbers) so test numbers or accidental typos are rejected before submission. Only structurally valid card numbers are accepted.
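
For reference, the Luhn checksum the page replicates can be written in a few lines of Python. Note that passing it only means the digits are self-consistent; it says nothing about whether a card is real or active.

```python
# The Luhn checksum: double every second digit from the right, subtract
# 9 from any doubled digit above 9, and require the total to be a
# multiple of 10. Card issuers use it to catch typos, not fraud.

def luhn_valid(card_number: str) -> bool:
    """True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

This is exactly why the scam page bothers with it: rejecting mistyped numbers before submission means the operators harvest only structurally valid cards.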

When the Confirm button is clicked, the browser sends a POST request to send.php, a backend file that receives the entire payload as a JSON object. That payload contains every field the victim filled in: name, address, contact details, card number, expiry, and CVV.

After the data is dispatched, the victim is redirected to a confirmation page that reads:

“Your application is being processed — Thank you for your inquiry.”

Below that reassuring message sits a button labeled “Uninstalling Avast”: a final social engineering nudge encouraging the victim to remove the very security software that might otherwise alert them to what has just happened.

Fake Avast site: Your application is being processed

Why is there live chat on a phishing page?

What sets this campaign apart from many phishing pages is the presence of a real-time live chat widget embedded in the bottom-right corner of the screen. The widget is provided by Tawk.to, a legitimate customer support platform, and carries the account identifier 689773de2f0f7c192611b3bf with widget code 1j27pp82q.

This means someone (almost certainly the operators of the phishing site) can see when a visitor is on the page and engage them in live conversation.

The tactical value is significant. A confused visitor who notices the timing mismatch (“72 hours” vs “48 hours”), or who hesitates before entering card details, can be nudged forward by a “support agent” offering reassurance in real time.

It transforms a static phishing page into an interactive fraud operation.

Who is this page designed to catch?

What makes this page unusually effective is that it does not need to target a specific type of person. It is built to catch four entirely different kinds of visitor with the same form, each with a different reason to comply.

  • The Avast customer: someone who bought a license, did not want the renewal, and sees this page as a legitimate route to dispute the charge. Their existing relationship with the brand makes the interface feel familiar.
  • The forgotten subscriber: They have an Avast account but do not remember signing up. Perhaps it was years ago or bundled with another product. They see -€499.99, conclude they are being wrongly charged, and never think to log into an account they barely remember having.
  • The alarmed non-customer: They have never used Avast. They see the charge and assume their card details have been stolen and used without their knowledge. They arrive already convinced that a crime has been committed against them, already in a hurry, and already primed to trust any official-looking process that offers to fix it. This is the most dangerous profile: to them, the 72-hour window is not just a detail, it feels like a deadline.
  • The opportunist: someone who knows they were not charged but believes they have found €499.99 waiting to be claimed. They attempt to collect money that was never theirs, only to lose their own card details in the process.

The page never has to distinguish between these visitors. It asks no questions that would reveal which profile a person belongs to. No account login, no license key, no proof of purchase. Just a charge, a form, and a card field.

How to tell if a refund page is a scam

Refund scams like this are not limited to Avast. Any brand can be impersonated. Here are the warning signs to watch for:

  • A charge you don’t recognize that appears “today”: Scammers often insert the current date automatically to make the transaction feel urgent and real.
  • Urgent cancellation windows: Messages claiming you have limited time to act are designed to pressure you into rushing.
  • Requests for full credit card details to “process” a refund: Legitimate refunds do not require you to re-enter your full card number and CVV on a random page.
  • No login, license key, or proof of purchase required: Real companies verify your account. Scam pages skip verification and go straight to payment details.
  • Live chat pushing you to complete the process: Real-time reassurance from a “support agent” can be part of the scam, not proof the site is legitimate.
  • Instructions to uninstall your security software: No genuine refund process will ever require you to remove your protection.
  • Lookalike domains: Slightly altered website names are a major red flag. Always type the official company website directly into your browser instead of clicking links.

Spotting even one of these signs should make you stop. Do not enter personal or financial information on a page you reached through an unsolicited message or suspicious link.

What to do if you entered your details

If you submitted your card details:

  • Contact your bank or card issuer immediately and cancel the card
  • Dispute any unauthorized charges
  • Do not wait for fraud to appear — stolen card data is often used quickly
  • Change passwords for accounts linked to the email address you provided
  • Run a full scan with a reputable security product

Other ways to stay safe:

  • Keep your device and software up to date
  • Use active anti-malware protection with web protection enabled
  • If you’re unsure whether something is a scam, Malwarebytes users can submit suspicious messages to Scam Guard for review

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Categories: Malware Bytes

OpenClaw: What is it and can you use it safely?

Mon, 02/23/2026 - 4:10pm

An AI tool with a funny name has caused quite a commotion as of late—including some allegations of machine consciousness—so here is a breakdown on OpenClaw.

Launched in November 2025, OpenClaw is an open-source, autonomous artificial intelligence (AI) agent that was made to run locally on your own computer, allowing it to manage tasks, interact with applications, and read and write files directly. It acts as a personal digital assistant, integrating with chat apps like WhatsApp and Discord to automate emails, scan calendars, and browse the internet for information. 

OpenClaw was formerly known as ClawdBot, but the project ran afoul of the large AI developer Anthropic because the name was too close to Anthropic’s own product, Claude. In response, OpenClaw’s developer quickly renamed the project “Moltbot,” which attracted impersonation campaigns from cybercriminals. The trademark trouble and the abuse that followed put a dent in OpenClaw’s reputation.

Another dent followed when Hudson Rock published an article about the first observed case of an infostealer grabbing a complete OpenClaw configuration from an infected system, effectively looting the “identity” of a personal AI agent rather than just browser passwords.

The case underlines an impending danger—and not just for OpenClaw, but for other AI agents as well. Infostealers are starting to harvest not just credentials but entire AI personas plus their cryptographic “skeleton keys,” turning one compromised agent into a pivot point for full‑blown account takeover and long‑term profiling.

As I stated before in a broader context, adversaries are starting to target AI systems at the supply‑chain level, quietly poisoning training data and inserting backdoors that only surface under specific conditions. OpenClaw sits squarely in this emerging risk zone: open source, moving fast, and increasingly wired into mailboxes, cloud drives, and business workflows while its security model is still being improvised.

At this stage of its development, treating OpenClaw as a hardened productivity tool is wishful thinking, since it behaves more like an over‑eager intern with an adventurous nature, a long memory, and no real understanding of what should stay private.

Researchers and regulators have already documented prompt injection risks, log poisoning, and exposed instances that hand attackers plaintext credentials or tokens via poisoned emails, websites, or logs that the agent dutifully processes.

How to use OpenClaw safely

For anyone thinking about using OpenClaw in production, the bigger picture is even less comforting. OpenClaw runs locally but is designed to be adventurous: it can browse, run shell commands, read and write files, and chain “skills” together without a human checking every step. Misconfigured permissions, over‑privileged skills, and a culture of “just give it access so it can help” mean the agent often sits at the center of your accounts, tokens, and documents, with very few guardrails.

In fact, an employee at Meta who works in AI safety and alignment recently shared on the social media platform X that she was unable to prevent ClawdBot from deleting a major portion of her email inbox.

Further, the Dutch data protection authority (Autoriteit Persoonsgegevens) warned organizations not to deploy experimental agents like OpenClaw on systems that handle sensitive or regulated data at all, flagging the combination of privileged local access, immature security engineering, and a rapidly growing ecosystem of dubious third‑party plugins as a kind of Trojan horse on the endpoint.

Microsoft provided a list of recommendations in this field that make a lot of sense. They are not specifically aimed at OpenClaw, but provide a conservative baseline for self‑hosted, Internet‑connected agents with durable credentials. (If these recommendations feel overly technical, it’s because safely using an AI agent with broad access is still an experimental and technical process.)

  • Run OpenClaw (or similar agents) in a sandboxed VM or container on isolated hosts, with default‑deny egress and tightly scoped allow‑lists.
  • Give the runtime its own non‑human service identities, least privilege, short token lifetimes, and no direct access to production secrets or sensitive data.
  • Treat skill/extension installation as introducing new code into a privileged environment: restrict registries, validate provenance, and monitor for rare or newly seen skills.
  • Log and periodically review agent memory/state and behavior for durable instruction changes, especially after ingesting untrusted content or shared feeds.
  • Plan for the event where you may need to nuke‑and‑pave: keep non‑sensitive state snapshots handy, document a rebuild and credential‑rotation playbook, and rehearse it.
  • Run an up-to-date, real-time anti-malware solution that can detect information stealers and other malware.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

Password managers keep your passwords safe, unless…

Mon, 02/23/2026 - 7:45am

I’m a big advocate of password managers. Granted, there are better alternatives to passwords, such as passkeys, but many providers still offer nothing but password options, and you can’t do much about that. So, for the time being, we seem to be stuck with passwords.

Every reputable password manager claims that they can’t see your passwords, even if they wanted to. But researchers have found that these “zero‑knowledge” cloud password managers are more vulnerable than their marketing suggests.

The researchers also warn that this is no immediate cause for panic. Full‑on password leakage would require rare, high‑end failures, such as a malicious or fully compromised server combined with specific design weaknesses and certain features being enabled.

The underlying “problem” is that most of these password managers are cloud-based. That’s very handy if you’re working on another device and need access, but it also enlarges the attack surface: sharing your passwords with another device or another user opens the door to unwanted access.

The researchers tested a number of different vendors, including LastPass, Bitwarden, and Dashlane, and devised several attack scenarios that could allow the recovery of passwords.

Weaknesses

Password managers with groups of users

In groups, recovery keys, group keys, and admin public keys are often fetched from the server without an authenticity guarantee, meaning that under the right circumstances an attacker could gain access.

When a group admin has enabled policies such as “auto or manual recovery,” it’s possible to silently change them using a compromised server if there’s no integrity protection on the org “policy blob” (a small configuration file).
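A toy sketch of why integrity protection on the policy blob matters: if the client verifies an HMAC computed with a key the server never sees (derived from the master password, say), a server-side swap of the blob becomes detectable. This is an illustrative minimal design, not any vendor's actual scheme:

```python
import hashlib
import hmac
import json

# Toy client-side key; in a real design this would be derived from the master password.
client_key = hashlib.sha256(b"derived-from-master-password").digest()

def sign(blob: dict) -> str:
    """Produce an integrity tag over a canonical serialization of the policy blob."""
    data = json.dumps(blob, sort_keys=True).encode()
    return hmac.new(client_key, data, hashlib.sha256).hexdigest()

def verify(blob: dict, tag: str) -> bool:
    return hmac.compare_digest(sign(blob), tag)

policy = {"recovery": "manual"}
tag = sign(policy)

# A compromised server silently flips the policy to auto-recovery...
tampered = {"recovery": "auto"}

assert verify(policy, tag)        # the genuine blob passes
assert not verify(tampered, tag)  # the tampered blob is rejected by the client
```

Without such a tag, the client has no way to tell a legitimate policy update from a malicious one.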

Weak encryption on compromised server

Your password manager takes your master password and runs it through PBKDF2 many times (e.g., 600,000 rounds) before storing a hash. But on a compromised server, an attacker could turn down the number of iterations to, say, 2, which makes the master password easy to guess or brute-force.
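The impact of the iteration count can be shown with Python's standard library. This is a deliberately low-entropy toy: with the count downgraded to 2, a dictionary attack on the derived value is essentially free:

```python
import hashlib

def derive(master_password: str, salt: bytes, iterations: int) -> bytes:
    """PBKDF2-HMAC-SHA256 key derivation, as used (with high counts) by real vaults."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

salt = b"per-user-salt"
# With a downgraded iteration count, the stored value falls to a dictionary attack fast.
stored = derive("hunter2", salt, 2)

guessed = None
for candidate in ["letmein", "password", "hunter2", "qwerty"]:
    if derive(candidate, salt, 2) == stored:
        guessed = candidate
        break

assert guessed == "hunter2"
# At 600,000 iterations each guess costs ~300,000x more work, which is the whole point.
```

This is why clients should refuse to accept an iteration count from the server that is lower than what they previously negotiated.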

Account recovery options

On a compromised server an attacker could change the policy blob and change the settings to “auto recovery” and send it to the clients. Switching to auto‑recovery helps the attacker because it lets the system hand over your vault keys without anyone having to click “approve” or even notice it happening.

So, the attacker can turn what should be a rare, user‑visible emergency process into a silent, routine mechanism they can abuse to pull out vault keys at scale or in a stealthy, targeted way.

Backwards compatibility

To avoid locking out users on old clients, providers keep supporting deprecated key hierarchies and non‑AEAD (Authenticated Encryption with Associated Data) modes such as CBC (Cipher Block Chaining) without robust integrity checks. This opens the door to classic downgrade attacks where the server coaxes a client into using weaker schemes and then gradually recovers plaintext.
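The danger of unauthenticated encryption can be demonstrated with a toy XOR stream cipher standing in for any non-AEAD mode (this is deliberately not CBC; it only shows that without an integrity tag, targeted ciphertext edits go unnoticed):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. For illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 12
plaintext = b"amount=0010"
ciphertext = xor(plaintext, keystream(key, nonce, len(plaintext)))

# An attacker who can guess the plaintext layout flips bits without knowing the key:
delta = xor(b"amount=0010", b"amount=9910")
forged = xor(ciphertext, delta)

# The forged ciphertext decrypts to the attacker's chosen value, undetected.
assert xor(forged, keystream(key, nonce, len(forged))) == b"amount=9910"
```

AEAD modes such as AES-GCM close this gap by rejecting any ciphertext whose authentication tag no longer matches, which is exactly what a downgrade to legacy CBC support takes away.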

How to stay safe

We want to emphasize that these attacks would be very targeted and require a high level of compromise. So, cloud password managers are still much safer than password reuse and spreadsheets, but their “zero‑knowledge” claims don’t hold up against nation-state-level attacks.

After responsible disclosure, many of the issues have already been patched or mitigated, reducing the number of possible attacks.

Many of the demonstrated attacks require specific enterprise‑style features (account recovery, shared vaults, org membership) or older/legacy clients to be in use. So be extra careful with those.

Enable multi-factor authentication for important accounts, so that your password alone is not enough for an attacker.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Categories: Malware Bytes

Fake Huorong security site infects users with ValleyRAT

Mon, 02/23/2026 - 7:18am

A convincing lookalike of the popular Huorong Security antivirus has been used to deliver ValleyRAT, a sophisticated Remote Access Trojan (RAT) built on the Winos4.0 framework, to users who believed they were improving their security.

The campaign, attributed to the Silver Fox APT group—a Chinese-speaking threat group known for distributing trojanized versions of popular Chinese software—uses a typosquatted domain to serve a trojanized NSIS installer that deploys a full-featured backdoor with advanced user-mode stealth and injection capabilities.

A fake site built to catch security-conscious users

Huorong Security—known in Chinese as 火绒—is a free antivirus product developed by Beijing Huorong Network Technology Co., Ltd., and widely used across mainland China.

The attackers registered huoronga[.]com—note the extra “a” at the end—as a near-perfect imitation of the legitimate huorong.cn. This typosquatting technique catches users who mistype the address or arrive via search engine poisoning or phishing links. The fake site looks convincing enough that most visitors would have no obvious reason to suspect anything is wrong.
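Near-miss domains like this can be caught with a simple edit-distance check against the domains you actually trust. The sketch below uses the classic Levenshtein distance; real detection pipelines also handle homoglyphs, keyword stuffing, and new TLDs:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (single-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_typosquatted(domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """Flag domains within a couple of edits of a trusted name, excluding exact matches."""
    base = domain.split(".")[0]  # compare the registrable label, ignoring the TLD
    return any(0 < edit_distance(base, t.split(".")[0]) <= max_dist for t in trusted)

assert looks_typosquatted("huoronga.com", ["huorong.cn"])   # one extra character
assert not looks_typosquatted("huorong.cn", ["huorong.cn"]) # the real site passes
```

A check like this is most useful in a proxy or DNS log pipeline, where it can surface lookalikes before users ever reach the download button.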

Fake Huorong Security site

Another fake Huorong Security site

When a visitor clicks the download button, the request is silently routed through an intermediary domain (hndqiuebgibuiwqdhr[.]cyou) before the final payload is served from Cloudflare R2 storage—a legitimate cloud service chosen for its trusted reputation and availability. The file is named BR火绒445[.]zip, using the Chinese name for Huorong to maintain the disguise up to the moment of execution.

What happens after you click download

Inside the ZIP archive is a trojanized NSIS installer (Nullsoft Scriptable Install System), a legitimate open-source framework used by many real applications. Its use here is deliberate: an NSIS-built executable raises fewer red flags than a custom packer, and the installation experience feels normal.

When executed, the installer drops a desktop shortcut named 火绒.lnk (Huorong.lnk), reinforcing the illusion that the antivirus installed successfully.

At the same time, it extracts a cluster of files into the user’s Temp directory. Most are genuine supporting libraries or decoy executables meant to mimic a real installation, including copies of FFmpeg multimedia DLLs, a file posing as a .NET repair tool, and another mimicking a Huorong diagnostic utility.

The malicious components include:

  • WavesSvc64.exe: the main loader, disguised as a Waves audio service process
  • DuiLib_u.dll: a hijacked DirectUI library used for DLL sideloading
  • box.ini: an encrypted file containing shellcode

How Windows is tricked into loading malware

The core technique is DLL sideloading, a technique attackers use to trick Windows into loading a malicious file instead of a legitimate one.

WavesSvc64.exe appears legitimate—its PDB path references a gaming application code directory—so Windows loads it without complaint. When it runs, Windows automatically loads DuiLib_u.dll alongside it. That DLL has been replaced with a malicious version that reads encrypted shellcode from box.ini, decrypts it, and executes it directly in memory.

Rather than dropping a single monolithic backdoor executable, the chain culminates in in-memory shellcode execution loaded from files dropped to disk (e.g., box.ini) via DLL sideloading. The shellcode-based chain is consistent with the Catena loader pattern documented by Rapid7, where signed or legitimate-looking executables bundle attack code in .ini configuration files and use reflective injection to execute it while leaving a minimal forensic footprint.

How the backdoor becomes permanent

Behavioral analysis shows a methodical infection chain:

1. Defender exclusions
The malware spawns PowerShell at high integrity level and instructs Windows Defender to ignore its persistence directory (AppData\Roaming\trvePath) and its main process (WavesSvc64.exe). After these commands execute, Windows Defender is less likely to scan the malware’s chosen path/process, materially reducing native detection.

2. Persistence
It creates a scheduled task named Batteries (observed as C:\Windows\Tasks\Batteries.job). On every subsequent boot, the task launches WavesSvc64.exe /run from the persistence directory, reapplies Defender exclusions, and reconnects to command and control (C2).

3. File refresh
To evade signature-based detection, the malware deletes and re-writes WavesSvc64.exe, DuiLib_u.dll, libexpat.dll, box.ini, and vcruntime140.dll. Deletion of these files alone may not fully remediate the infection, as the malware demonstrates the ability to re-write core components during execution.

4. Registry storage
Configuration data, including the encoded C2 domain yandibaiji0203[.]com, is written to HKCU\SOFTWARE\IpDates_info. A secondary key at HKCU\Console\0\451b464b7a6c2ced348c1866b59c362e stores encrypted binary data likely used for malware configuration or payload staging.

How it avoids detection

Beyond disabling Defender, ValleyRAT takes steps to avoid detection and analysis.

It checks for debuggers and forensic tools by looking for characteristic window titles. It probes BIOS version, display adapters, and VirtualBox registry keys to detect virtual machines—the sandboxes researchers use to analyze malware safely. It also checks available memory and disk capacity, and inspects locale and language settings, likely as a geofencing measure to confirm it is running on a Chinese-language system before fully deploying.

Command-and-control communications

The Winos4.0 stager connects to its C2 server at 161.248.87.250 over TCP port 443. Using TCP 443 provides camouflage at the port level; however, inspection revealed a custom binary protocol rather than standard TLS-encrypted HTTPS.
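One way defenders spot this mismatch is by checking whether a flow on port 443 actually begins with a TLS record header: a TLS handshake starts with content type 0x16 and major version byte 0x03. A first packet that doesn't is worth flagging. A minimal sketch:

```python
def looks_like_tls_client_hello(first_bytes: bytes) -> bool:
    """TLS records start with content-type 0x16 (handshake) and version byte 0x03."""
    return len(first_bytes) >= 3 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03

# Genuine HTTPS: the record header of a TLS ClientHello
assert looks_like_tls_client_hello(bytes([0x16, 0x03, 0x01, 0x02, 0x00]))

# A custom binary protocol on port 443 fails the check and deserves an alert.
# (The sample bytes here are made up; the article does not publish the protocol.)
assert not looks_like_tls_client_hello(b"\x01\x00\x10\xab\xcd")
```

Many IDS rulesets encode exactly this heuristic: traffic on 443 that never completes a TLS handshake is a classic sign of a custom C2 channel hiding behind a well-known port.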

Network intrusion detection systems triggered Critical-severity alerts for Winos4.0 CnC login and server-response messages, and a high-severity alert for ProcessKiller C2 initialization.

C2 traffic was observed originating from rundll32.exe, which executed with the command line “rundll32.exe”—lacking the typical <DLL>,<Export> argument structure. In environments with command-line and parent-child process monitoring, this execution pattern is a high-confidence anomaly. Sandbox analysis extracted multiple WinosStager plugin DLLs from the rundll32 process, confirming the modular architecture that makes ValleyRAT particularly dangerous: capabilities are not bundled in a single monolithic binary but downloaded on demand.

The ProcessKiller component is particularly concerning. Network telemetry indicates ProcessKiller C2 initialization, consistent with a module associated in prior reporting with terminating security software. Previous ValleyRAT/Winos4.0 campaigns targeted security products from Qihoo 360, Huorong, Tencent, and Kingsoft—indicating the potential to terminate security software, including the product it impersonated as a lure.

Post-compromise capabilities

In short, once it’s installed, attackers can monitor the victim, steal sensitive information, and remotely control the system. Sandbox analysis confirmed the following behaviors once the malware has a foothold:

  • Keylogging via a system-wide keyboard hook installed through SetWindowsHookExW in the rundll32 process, capturing every keystroke.
  • Process injection: WavesSvc64.exe creates suspended processes and writes to the memory of other processes for stealth code execution.
  • Credential access: the malware reads credential-related registry keys and touches browser cookie files.
  • System reconnaissance: queries hostname, username, keyboard layout, locale, running processes, and physical drives.
  • RWX memory regions created inside rundll32.exe consistent with in-memory execution, reducing reliance on additional dropped payload executables.
  • Self-cleanup: deletes its own executed files and performs deletion of 10 or more additional files to obstruct forensic recovery.
  • The malware creates mutexes including the dated string 2026. 2. 5 and the path C:\ProgramData\DisplaySessionContainers.log, and writes a log file at that location.

Who’s behind this campaign?

This campaign fits the established pattern of Silver Fox operations. The group has repeatedly used trojanized installers of widely trusted Chinese software to distribute ValleyRAT and the Winos4.0 framework. Previous lures included QQ Browser, LetsVPN, and gaming applications.

Impersonating a security product raises the stakes. The victims are not just casual users—they are actively looking for protection.

The targeting remains consistent. Chinese-language filenames, the Huorong lure, and built-in locale checks all point to a geographically focused campaign.

However, the public leak of the ValleyRAT builder on GitHub in March 2025 significantly lowered the barrier to entry. Researchers identified approximately 6,000 related samples between November 2024 and November 2025, with 85% appearing in the latter half of that period. That increase suggests the tooling is spreading beyond a single operator.

How to stay safe

This campaign shows how easily trust can be turned against users. The attackers didn’t need a zero-day exploit. They needed a convincing website, a realistic installer, and the knowledge that many people will search for a product name and click the first result.

When the lure is a security product, the deception is even more effective.

Here’s what to check:

  • Verify download sources. The legitimate Huorong Security website is huorong.cn. Always double-check the domain before downloading security software—a single extra character can lead to a malicious site.
  • Monitor Windows Defender exclusions. Any Add-MpPreference command you did not initiate is a strong indicator of compromise. Audit exclusions regularly.
  • Hunt for persistence artifacts. Search endpoints for a scheduled task or job named Batteries (artifact observed as C:\Windows\Tasks\Batteries.job), the %APPDATA%\trvePath\ directory, and the registry key HKCU\SOFTWARE\IpDates_info.
  • Block outbound connections to 161.248.87.250 at the firewall and deploy IDS rules for Winos4.0 C2 signatures (ET SIDs 2052875, 2059975, and 2052262).
  • Alert on process anomalies. Rundll32.exe without a legitimate DLL argument, and WavesSvc64.exe outside a genuine Waves Audio installation, are high-confidence indicators.
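Two of the checklist items above lend themselves to simple command-line telemetry rules. The matching logic below is illustrative; field names and patterns would need tuning to your EDR's data model:

```python
import re

def is_suspicious(cmdline: str) -> bool:
    """Flag two high-confidence anomalies from this campaign's behavior."""
    c = cmdline.strip().lower()
    # rundll32 invoked with no <DLL>,<Export> argument at all
    if re.fullmatch(r'"?rundll32(\.exe)?"?', c):
        return True
    # Windows Defender exclusions being added from a command line
    if "add-mppreference" in c and "-exclusion" in c:
        return True
    return False

assert is_suspicious('"rundll32.exe"')
assert not is_suspicious("rundll32.exe shell32.dll,Control_RunDLL")
assert is_suspicious(
    "powershell Add-MpPreference -ExclusionPath C:\\Users\\x\\AppData\\Roaming\\trvePath"
)
```

Rules like these generate very little noise precisely because legitimate software almost never runs rundll32 bare or adds Defender exclusions from an ad hoc PowerShell session.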

Malwarebytes detects and blocks known variants of ValleyRAT and its associated infrastructure.

Indicators of Compromise (IOCs)

Infrastructure

  • Fake websites:
    • huoronga[.]com
    • huorongcn[.]com
    • huorongh[.]com
    • huorongpc[.]com
    • huorongs[.]com
  • Redirect domain: hndqiuebgibuiwqdhr[.]cyou
  • Payload host: pub-b7ce0512b9744e2db68f993e355a03f9.r2[.]dev
  • C2 IP: 161.248.87[.]250 (TCP 443, custom binary protocol)
  • Encoded C2 domain: yandibaiji0203[.]com

File hashes (SHA-256)

  • 72889737c11c36e3ecd77bf6023ec6f2e31aecbc441d0bdf312c5762d073b1f4  (NSIS installer)
  • db8cbf938da72be4d1a774836b2b5eb107c6b54defe0ae631ddc43de0bda8a7e  (WavesSvc64.exe)
  • d0ac4eb544bc848c6eed4ef4617b13f9ef259054fe9e35d9df02267d5a1c26b2  (DuiLib_u.dll)
  • 07aaaa2d3f2e52849906ec0073b61e451e0025ef2523dafbd6ae85ddfa587b4d  (WinosStager DLL #1)
  • 66e324ea04c4abbad6db4f638b07e2e560613e481ff588e0148e33e23a5052a9  (WinosStager DLL #2)
  • 47df12b0b01ddca9eb116127bf84f63eb31e80cec33e4e6042dff1447de8f45f  (WinosStager DLL #3)

Host-based indicators

  • Scheduled task named Batteries at C:\Windows\Tasks\Batteries.job
  • Persistence directory: %APPDATA%\trvePath\
  • Registry key: HKCU\SOFTWARE\IpDates_info
  • Registry key: HKCU\Console\0\451b464b7a6c2ced348c1866b59c362e
  • Log file: C:\ProgramData\DisplaySessionContainers.log
  • Processes: WavesSvc64.exe, rundll32.exe (without DLL argument)

MITRE ATT&CK

  • T1189 — Drive-by Compromise (Initial Access)
  • T1059.001 — PowerShell (Execution)
  • T1053.005 — Scheduled Task (Persistence)
  • T1562.001 — Impair Defenses: Disable or Modify Tools (Defense Evasion)
  • T1574.002 — DLL Side-Loading (Defense Evasion)
  • T1027 — Obfuscated Files or Information (Defense Evasion)
  • T1218.011 — Rundll32 (Defense Evasion)
  • T1555 — Credentials from Password Stores (Credential Access)
  • T1082 — System Information Discovery (Discovery)
  • T1057 — Process Discovery (Discovery)
  • T1056.001 — Keylogging (Collection)
  • T1071 — Application Layer Protocol (Command and Control)
  • T1070.004 — Indicator Removal: File Deletion (Defense Evasion)

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Categories: Malware Bytes

What can’t you say on TikTok?

Sun, 02/22/2026 - 6:08pm

This week on the Lock and Code podcast…

A funny thing happened on TikTok last month, and it has brought allegations of censorship, manipulation, and control.

It was the week of January 22, and after a long legal battle, TikTok had finally—for the first time in its company history—moved its ownership to new, American stewards. But with the American restructuring, TikTok users immediately reported that something had changed: videos would sometimes fail to record any views, and even direct messages would fail to send. But, according to user complaints, the flaws weren’t random. Instead, they befell users who spoke openly about topics that have become political lightning rods in the US, including Immigration and Customs Enforcement and the actions of sex offender Jeffrey Epstein.

To some aggrieved users, the flaws looked like censorship. But, according to TikTok, the error messages and missing video count tallies were part of a larger power outage.

“Since yesterday we’ve been working to restore our services following a power outage at a US data center impacting TikTok and other apps we operate,” TikTok wrote on the social media platform X (formerly Twitter). “We’re working with our data center partner to stabilize our service. We’re sorry for this disruption and hope to resolve it soon.”

While TikTok reportedly has more than 200 million users in the US alone, it’s far from a universal app. But the changes made to TikTok hint at a bigger sea change in social media and the internet today, in which online spaces are increasingly being altered, shut down, or even controlled—if not through government plot then certainly through corporate influence.

Oddly, the ownership change of TikTok was supposed to solve many of these problems.

Since TikTok’s 2017 founding in China, American lawmakers and government officials claimed that American users were vulnerable to Chinese surveillance. All the data that Americans hand over when using TikTok—their names and email addresses, but also their viewing habits, interests, behaviors, political inclinations, and approximate locations—all of that, the argument went, should not belong in the hands of a foreign power.

As FBI Director Christopher Wray said in 2022, the risk of TikTok was:

“The possibility that the Chinese government could use [TikTok] to control data collection on millions of users or control the recommendation algorithm, which could be used for influence operations.”

But the rocky start to the new American TikTok has only drawn renewed scrutiny: Have the past concerns about foreign manipulation now become current concerns about domestic manipulation?

Today on the Lock and Code podcast with host David Ruiz, we speak with Zach Hinkle, senior social media manager for Malwarebytes, and MinJi Pae, social media content creator for Malwarebytes, about what they personally experienced during TikTok’s transition to American owners, why the changes matter for the delivery of news and information, and how the internet appears to be shrinking from its earlier promises.

As Hinkle said on the podcast:

“ The idea of the internet being a private, free space that was ingrained in its creation, and every platform since then sort of carried that spirit with it… those spaces are disappearing.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Categories: Malware Bytes

Age verification vendor Persona left frontend exposed

Fri, 02/20/2026 - 9:08am

Three researchers investigating Discord’s age-verification checks say they discovered an exposed frontend belonging to Persona, the identity-verification vendor used by Discord. It revealed a far more expansive surveillance and financial intelligence stack than a simple “teen safety” tool.

A short while ago we reported that Discord will limit profiles to teen-appropriate mode until you verify your age. That means anyone who wants to continue using Discord as before would have to let it scan their face—and the internet was far from happy.

To analyze these scans, Discord uses the biometric identity verification start-up Persona Identities, Inc., a venture that offers Know Your Customer (KYC) and Anti-Money Laundering (AML) solutions that rely on biometric identity checks to estimate a user’s age.

To demonstrate the privacy implications, researchers took a closer look and found a publicly exposed Persona frontend on a US government–authorized server, with 2,456 accessible files.

You read that right. According to the researchers, the exposed code, which has now been removed, sat at a US government-authorized endpoint that appears to have been isolated from its regular work environment.

In those files, the researchers found details about the extensive surveillance Persona software performs on its users. Beyond checking their age, the software performs 269 distinct verification checks, runs facial recognition against watchlists and politically exposed persons, screens “adverse media” across 14 categories (including terrorism and espionage), and assigns risk and similarity scores.

Persona collects—and can retain for up to three years—IP addresses, browser and device fingerprints, government ID numbers, phone numbers, names, faces, plus a battery of “selfie” analytics like suspicious-entity detection, pose repeat detection, and age inconsistency checks.

At a time when age verification is very much a hot topic, this is not the kind of news to persuade privacy advocates that age verification is in our best interest. Sending data obtained during age verification checks to data brokers and foreign governments—reportedly Persona was tested by Discord in the UK—will not instill the level of trust needed for users to feel comfortable submitting to this kind of scrutiny.

This comes amid broader questions about whether age verification is actually doing what it’s supposed to do. Euronews looked at the effect of Australia’s world-leading ban on social media for under-16s. Australia’s new rules have only been in force for six weeks, but while the country’s internet regulator says it has shut down about 4.7 million accounts held by under‑16s on platforms like TikTok, Instagram, Snapchat, YouTube, X, Twitch, Reddit, and Threads, children and parents describe a very different reality. Interviews with teenagers, parents and researchers indicate that many children are still accessing banned apps through simple workarounds.

According to The Rage, Discord has stated it will not continue to use Persona for age verification. However, other platforms reported to use Persona include:

  • Roblox: Uses Persona’s facial age estimation and ID verification as the core of its “age checks to chat” system.
  • OpenAI / ChatGPT: OpenAI’s help center explains that if you need to verify being 18+, “Persona is a trusted third-party company we use to help verify age,” and that Persona may ask for a live selfie and/or government ID.
  • Lime: The shared e-scooter and bike service deploys custom age verification flows with Persona to meet each region’s unique requirements.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Categories: Malware Bytes

Facebook ads spread fake Windows 11 downloads that steal passwords and crypto wallets

Fri, 02/20/2026 - 5:00am

Attackers are running paid Facebook ads that look like official Microsoft promotions, then directing users to near-perfect clones of the Windows 11 download page. Click Download Now and instead of a Windows update, you get a malicious installer—one that silently steals saved passwords, browser sessions, and cryptocurrency wallet data.

“I just wanted to update Windows”

The attack starts with something completely ordinary: a Facebook ad. It looks professional, uses Microsoft branding, and promotes what appears to be the latest Windows 11 update. If you have been meaning to keep your PC current, it feels like a convenient shortcut.

Click the ad and you land on a site that looks almost identical to Microsoft’s real Software Download page. The logo, layout, fonts, and even the legal text in the footer are copied. The only obvious difference is in the address bar. Instead of microsoft.com, you’ll see one of these lookalike domains:

  • ms-25h2-download[.]pro
  • ms-25h2-update[.]pro
  • ms25h2-download[.]pro
  • ms25h2-update[.]pro

The “25H2” in the domain names is deliberate. It mimics the naming convention Microsoft uses for Windows releases—24H2, the current version, was on everyone’s lips when this campaign launched, making the fake domains look plausible at a glance.

Geofencing: only the right targets get the payload

This campaign does not blindly infect everyone who visits the site.

Before delivering the malware, the fake page checks who you are. If you connect from a data center IP address—often used by security researchers and automated scanners—you get redirected to google.com. The site looks harmless.

Only visitors who appear to be regular home or office users receive the malicious file.

This traffic-filtering technique, combined with sandbox detection and geofencing, is what allowed the campaign to run for as long as it did: the infrastructure is configured specifically to evade automated security analysis.
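The gating logic described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration only: the ranges, names, and structure are assumptions, not code recovered from the campaign; real operations check visitor IPs against large curated lists of cloud and hosting ranges.

```python
import ipaddress

# Illustrative placeholder ranges, NOT the attackers' actual blocklist.
DATACENTER_RANGES = [
    ipaddress.ip_network("13.104.0.0/14"),  # example cloud provider range
    ipaddress.ip_network("34.64.0.0/10"),   # example cloud provider range
]

def serve_payload(visitor_ip: str) -> bool:
    """True only for IPs outside known data-center ranges; everyone else
    would be bounced to google.com instead of receiving the installer."""
    ip = ipaddress.ip_address(visitor_ip)
    return not any(ip in net for net in DATACENTER_RANGES)
```

Researchers and scanners typically connect from exactly such data-center ranges, which is why the site looked harmless to them.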

When a targeted user clicks Download now, the site triggers a Facebook Pixel “Lead” event—the same tracking method legitimate advertisers use to measure conversions. The attackers are monitoring which victims take the bait and optimizing their ad spend in real time.

A 75 MB “installer” served straight from GitHub

If you pass the checks, the site downloads a file named ms-update32.exe. At 75 MB, it feels like a legitimate Windows installer.

The file is hosted on GitHub, a trusted platform used by millions of developers. That means the download arrives over HTTPS with a valid security certificate. Because it comes from a reputable domain, browsers do not automatically flag it as suspicious.

The installer was built using Inno Setup, a legitimate tool often abused by malware authors because it creates professional-looking installation packages.

What happens when you run it

Before doing anything damaging, the installer checks whether it is being watched. It looks for virtual machine environments, debugger software, and analysis tools. If it finds any of them, it stops. This is the same evasion logic that lets it slip past many automated security sandboxes—those systems run inside virtual machines by design.

On a real user’s machine, the installer proceeds to extract and deploy its components.

The most significant component is a full Electron-based application installed to C:\Users\<USER>\AppData\Roaming\LunarApplication\. Electron is a legitimate framework used by apps like Slack and Visual Studio Code. That makes it a useful disguise.

The choice of name is not accidental. “Lunar” is a brand associated with cryptocurrency tooling, and the application comes bundled with Node.js libraries specifically designed to create ZIP archives—suggesting it collects data, packages it up, and sends it out. Likely targets include cryptocurrency wallet files, seed phrases, browser credential stores, and session cookies.

At the same time, two obfuscated PowerShell scripts with randomised filenames are written to the %TEMP% folder and executed with a command line that deliberately disables Windows script-signing protections:

powershell.exe -NoProfile -NoLogo -InputFormat Text -NoExit -ExecutionPolicy Unrestricted -Command -

Hiding in the registry, covering its tracks

To survive reboots, the malware writes a large binary blob to the Windows registry under: HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\TIP\AggregateResults.

The TIP (Text Input Processor) registry path is a legitimate Windows component, which makes it less likely to raise suspicion.

Telemetry also shows behavior consistent with process injection. The malware creates Windows processes in a suspended state, injects code into them, and resumes execution. This allows the malicious code to run under the identity of a legitimate process, reducing the chance of detection.

Once execution is established, the installer deletes temporary files to reduce its forensic footprint. It can also initiate system shutdown or reboot operations, potentially to interfere with analysis.

The malware uses multiple encryption and obfuscation techniques, including RC4, HC-128, XOR encoding, and FNV hashing for API resolution. These methods make static analysis more difficult.
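The FNV hashing mentioned above is simple enough to show. This is a standard 32-bit FNV-1a sketch, not code lifted from the sample: malware stores only hash constants like these and resolves Windows API functions by hashing exported names at runtime, so readable strings such as "LoadLibraryA" never appear in the binary.

```python
# Standard 32-bit FNV-1a hash.
def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5                         # FNV-1a offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF  # multiply by FNV prime, wrap to 32 bits
    return h
```

Analysts can apply the trick in reverse: hash every known API name and flag binaries that contain the resulting constants.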

The Facebook ads angle

The use of paid Facebook advertising to distribute malware is worth pausing on. This is not a phishing email that lands in a spam folder, or a malicious result buried in a search page. These are paid Facebook ads appearing alongside posts from friends and family.

The attackers ran two parallel ad campaigns, each pointing to its own phishing domain and using its own Facebook Pixel ID and tracking parameters. The redundancy is deliberate: if one domain is taken down or one ad account is suspended, the other keeps running.

What to do if you think you’ve been affected

This campaign is technically polished and operationally aware. The infrastructure demonstrates awareness of common security research and sandboxing techniques, and the attackers understand how people download software. They chose Facebook advertising as their delivery vector precisely because it reaches real users in a context where trust is high.

Remember: Windows updates come from Windows Update inside your system settings—not from a website and never from a social media ad. Microsoft does not advertise Windows updates on Facebook.

And a pro tip: Malwarebytes would have detected and blocked the identified payload and associated infrastructure.

If you downloaded and ran a file from either of these sites, treat the system as compromised and act quickly.

  • Do not log into any accounts from that computer until it has been scanned and cleaned.
  • Run a full scan with Malwarebytes immediately.
  • Change passwords for important accounts like email, banking, and social media from a different, clean device.
  • If you use cryptocurrency wallets on that machine, move funds to a new wallet with a new seed phrase generated on a clean device.
  • Consider alerting your bank and enabling fraud monitoring if any financial credentials were stored on or accessible from that device.

For IT and security teams:

  • Block the phishing domains at DNS and web proxy
  • Alert on PowerShell execution with -ExecutionPolicy Unrestricted in non-administrative contexts
  • Hunt for the LunarApplication directory and randomized .yiz.ps1 / .unx.ps1 files in %TEMP%
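The PowerShell alerting item above can be prototyped as a simple command-line filter. The function name and input format here are assumptions; feed it process command lines exported from your EDR or SIEM.

```python
import re

# Flag PowerShell launches that disable script execution policy.
SUSPICIOUS = re.compile(r"-ExecutionPolicy\s+(Unrestricted|Bypass)", re.IGNORECASE)

def is_suspicious_cmdline(cmdline: str) -> bool:
    return "powershell" in cmdline.lower() and bool(SUSPICIOUS.search(cmdline))
```

Legitimate admin tooling sometimes uses these flags too, so treat matches as leads to triage, not verdicts.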
Indicators of Compromise (IOCs)
File hash (SHA-256)
  • c634838f255e0a691f8be3eab45f2015f7f3572fba2124142cf9fe1d227416aa (ms-update32.exe)
Domains
  • ms-25h2-download[.]pro
  • ms-25h2-update[.]pro
  • ms25h2-download[.]pro
  • ms25h2-update[.]pro
  • raw.githubusercontent.com/preconfigured/dl/refs/heads/main/ms-update32.exe (payload delivery URL)
File system artifacts
  • C:\Users\<USER>\AppData\Roaming\LunarApplication\
  • C:\Users\<USER>\AppData\Local\Temp\[random].yiz.ps1
  • C:\Users\<USER>\AppData\Local\Temp\[random].unx.ps1
Registry
  • HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\TIP\AggregateResults (large binary data — persistence)
Facebook advertising infrastructure
  • Pixel ID: 1483936789828513
  • Pixel ID: 955896793066177
  • Campaign ID: 52530946232510
  • Campaign ID: 6984509026382
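To compare a suspect download against the file hash listed above, compute its SHA-256 locally. A minimal sketch:

```python
import hashlib

# SHA-256 of ms-update32.exe, as published in the IOC list above.
IOC_SHA256 = "c634838f255e0a691f8be3eab45f2015f7f3572fba2124142cf9fe1d227416aa"

def matches_ioc(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() == IOC_SHA256
```

Remember that a non-match proves nothing on its own; attackers rebuild payloads frequently, so hash checks complement rather than replace behavioral detection.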
Categories: Malware Bytes

AI-generated passwords are a security risk

Thu, 02/19/2026 - 9:46am

Using Artificial Intelligence (AI) to generate your passwords is a bad idea. The AI is likely to hand that same password to a criminal, who can then use it in a dictionary attack—which is when an attacker runs through a prepared list of likely passwords (words, phrases, patterns) with automated tools until one of them works, instead of trying every possible combination.

AI cybersecurity firm Irregular tested ChatGPT, Claude, and Gemini and found that the passwords they generate are “highly predictable,” and not truly random. When they tested Claude, 50 prompts produced just 23 unique passwords. One string appeared 10 times, while many others shared the same structure.

This could turn out to be a problem.

Traditionally, attackers build or download wordlists made of common passwords, real‑world leaks, and patterned variants (words plus numbers and symbols) to use in dictionary attacks. Adding the thousand or so passwords that AI chatbots most commonly produce takes almost no effort.

AI chatbots are trained to provide answers based on what they’ve learned. They are good at predicting what comes next based on what they already have, not at inventing something completely new.

As the researchers put it:

“LLMs work by predicting the most likely next token, which is the exact opposite of what secure password generation requires: uniform, unpredictable randomness.”

In the past, we explained why computers are not very good at randomness in the first place. Password managers get around this fact by using dedicated cryptographic random number generators that mix in real‑world entropy, instead of the pattern‑based text generation you see with LLMs.

In other words, a good password manager doesn’t “invent” your password the way an AI does. It asks the operating system for cryptographic random bits and turns those directly into characters, so there’s no hidden pattern for attackers to learn.
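The difference is easy to see in code. A sketch of the password-manager approach using Python's `secrets` module, which draws its randomness from the operating system's cryptographic random number generator:

```python
import secrets
import string

# Characters come straight from the OS CSPRNG via the secrets module,
# so there is no linguistic pattern for a dictionary to capture.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Every character is chosen independently and uniformly, which is exactly the property next-token prediction cannot provide.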

A website or platform where you submit such passwords may tell you they’re strong, but the same basic reasoning as to why you shouldn’t reuse passwords applies. What use is a strong password if cybercriminals already have it?

As always, we prefer passkeys over passwords, but we realize this isn’t always an option. If you have to use a password, don’t let an AI make one up for you. It’s just not safe. And if you already did, consider changing it and adding two-factor authentication (2FA) to make the account more secure.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Categories: Malware Bytes

Intimate products producer Tenga spilled customer data

Thu, 02/19/2026 - 6:48am

Tenga has confirmed reports published by several outlets that the company notified customers of a data breach.

The Japanese manufacturer of adult products appears to have fallen victim to a phishing attack targeting one of its employees. Tenga reportedly wrote in the data breach notification:

“An unauthorized party gained access to the professional email account of one of our employees.”

This unauthorized access exposed the contents of said account’s inbox, potentially including customer names, email addresses, past correspondence, order details, and customer service inquiries.

In its official statement, Tenga said a “limited segment” of US customers who interacted with the company were impacted by the incident. Regarding the scope of the stolen data, it stated:

“The information involved was limited to customer email addresses and related correspondence history. No sensitive personal data, such as Social Security numbers, billing/credit card information, or TENGA/iroha Store passwords were jeopardized in this incident.”

From the wording of Tenga’s online statement, it seems the compromised account was used to send spam emails that included an attachment.

“Attachment Safety: We want to state clearly that there is no risk to your device or data if the suspicious attachment was not opened. The risk was limited to the potential execution of the attachment within the specific ‘spam’ window (February 12, 2026, between 12am and 1am PT).”

See if your personal data has been exposed.

SCAN NOW

We reached out to Tenga about this “suspicious attachment” but have not heard back at the time of writing. We’ll keep you posted.

Tenga proactively contacted potentially affected customers. It advises them to change passwords and remain vigilant about any unusual activity. We would add that affected customers should be on the lookout for sextortion-themed phishing attempts.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.
  • Use our free Digital Footprint scan to see whether your personal information has been exposed online.

What do cybercriminals know about you?

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.

SCAN NOW

Categories: Malware Bytes

Meta patents AI that could keep you posting from beyond the grave

Thu, 02/19/2026 - 6:16am

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. In 2020, Microsoft patented a chatbot that would let you talk to AI versions of deceased individuals; its own AI general manager called it disturbing, and it never went into production. In 2022, Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The legal minefield

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Categories: Malware Bytes

Betterment data breach might be worse than we thought

Wed, 02/18/2026 - 12:09pm

Betterment LLC is an investment advisor registered with the US Securities and Exchange Commission (SEC). The company disclosed a January 2026 incident in which an attacker used social engineering to access a third‑party platform used for customer communications, then abused it to send crypto‑themed phishing messages and exfiltrate contact and identity data for more than a million people.

What makes this particularly concerning is the depth of the exposed information. This isn’t just a list of email addresses. The leaked files include retirement plan details, financial interests, internal meeting notes, and pipeline data. It’s information that gives cybercriminals real context about a person’s finances and professional life.

What’s worse, the ShinyHunters extortion group claims that, because Betterment refused to pay its ransom demand, it is publishing the stolen data.

While Betterment has not revealed the number of affected customers in its online communications, reporting generally puts the figure at 1.4 million. And now, every cybercriminal can download this information at their leisure.

We analyzed some of the data and found one particularly worrying CSV file with detailed data on 181,487 people. This file included information such as:

  • Full names (first and last)
  • Personal email addresses (e.g., Gmail)
  • Work email addresses
  • Company name and employer info
  • Job titles and roles
  • Phone numbers (both mobile and work numbers)
  • Addresses and company websites
  • Plan details—company retirement/401k plans, assets, participants
  • Survey responses, deal and client pipeline details, meeting notes
  • Financial needs/interests (e.g., requesting a securities-backed line of credit for a house purchase)

See if your personal data has been exposed.

SCAN NOW

This kind of data is a gold mine for phishers, who can use it in targeted attacks. It has enough context to craft convincing, individually tailored phishing emails. For example:

  • Addressing someone by their real name, company, and job title
  • Referencing the company’s retirement or financial plans
  • Impersonating Betterment advisors or plan administrators
  • Initiating scam calls about financial advice

Combined with data from other breaches, this information could do even more damage and lead to identity theft.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.

SCAN NOW

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Categories: Malware Bytes

Job scam uses fake Google Forms site to harvest Google logins

Wed, 02/18/2026 - 7:22am

As part of our investigation into a job-themed phishing campaign, we came across several suspicious URLs that all looked like this:

https://forms.google.ss-o[.]com/forms/d/e/{unique_id}/viewform?form=opportunitysec&promo=

The subdomain forms.google.ss-o[.]com is a clear attempt to impersonate the legitimate forms.google.com. The “ss-o” is likely meant to evoke “single sign-on” (SSO), an authentication method that lets users securely log in to multiple independent applications or websites with a single set of credentials (username and password).
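The trick works because people read hostnames left to right, while browsers and certificates only trust the registrable domain on the right. A simplified check (it assumes a plain two-label suffix, so production code should consult the Public Suffix List):

```python
# Return the registrable domain: the last two labels of the hostname.
# Simplification: real code must handle multi-label suffixes like .co.uk
# via the Public Suffix List.
def registrable_domain(hostname: str) -> str:
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])
```

For forms.google.ss-o[.]com the registrable domain is ss-o[.]com, not google.com; everything to the left is a subdomain the phishers control.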

Unfortunately, when we tried to visit the URLs we were redirected to the local Google search page. This is a common phishing tactic designed to stop victims from sharing their personalized links with researchers or automated analysis tools.

After some digging, we found a file called generation_form.php on the same domain, which we believe the phishing crew used to create these links. The landing page for the campaign was: https://forms.google.ss-o[.]com/generation_form.php?form=opportunitysec

The generation_form.php script does what the name implies: It creates a personalized URL for the person clicking that link.

With that knowledge in hand, we could check what the phish was all about. Our personalized link brought us to this website:

Fake Google Forms site

The greyed out “form” behind the prompt promises:

  • We’re Hiring! Customer Support Executive (International Process)
  • Are you looking to kick-start or advance your career…
  • The fields in the form: Full Name, Email address, and an essay field “Please describe in detail why we should choose you”
  • Buttons: “Submit” and “Clear form.”

The whole web page emulates Google Forms, including logo images, color schemes, a notice about not “submitting passwords,” and legal links. At the bottom, it even includes the typical Google Forms disclaimer (“This content is neither created nor endorsed by Google.”) for authenticity.

Clicking the “Sign in” button took us to https://id-v4[.]com/generation.php, which has now been taken down. The domain id-v4[.]com has been used in several phishing campaigns for almost a year. In this case, it asked for Google account credentials.

Given the “job opportunity” angle, we suspect links were distributed through targeted emails or LinkedIn messages.

How to stay safe

Lures that promise remote job opportunities are very common these days. Here are a few pointers to help keep you safe from targeted attacks like this:

  • Do not click on links in unsolicited job offers.
  • Use a password manager, which would not have filled in your Google username and password on a fake website.
  • Use an up-to-date, real-time anti-malware solution with a web protection component.

Pro tip: Malwarebytes Scam Guard identified this attack as a scam just by looking at the URL.

IOCs

id-v4[.]com

forms.google.ss-o[.]com

Blocked by Malwarebytes

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Categories: Malware Bytes

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

Wed, 02/18/2026 - 5:10am

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered questions about the investment, projected returns, and ultimately steered victims toward sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But “Google Coin” has appeared in scams before, so anyone searching for it will find mentions and might conclude it’s real. And the chatbot was very convincing.

AI as the closer

The chatbot introduced itself as:

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”
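Taking the bot's quoted prices at face value, its projection is just one multiplication; the internal consistency is part of the sales pitch, not evidence that the token is real.

```python
# The bot's quoted figures, reproduced for arithmetic's sake only.
presale_price = 3.95
listing_price = 27.55
multiple = listing_price / presale_price                    # about 6.97, the "approximately 7x"
projected_value = round(100 * presale_price * multiple, 2)  # the claimed $2,755 for 100 coins
```

Scam projections are often arithmetically tidy precisely because the numbers were invented backwards from the promised return.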

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot didn’t get confused or break character. It looped back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.
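The upsell mechanic is easy to sketch. Note that only the endpoints come from the scam site (5% at 100 tokens, 30% at 100,000); the intermediate tiers below are hypothetical placeholders for illustration:

```python
# Tiered-bonus upsell sketch. Only the endpoints (5% at 100 tokens,
# 30% at 100,000) are taken from the scam site; the middle tiers are
# hypothetical placeholders.
TIERS = [
    (100_000, 0.30),
    (10_000, 0.20),   # hypothetical
    (1_000, 0.10),    # hypothetical
    (100, 0.05),
]

def bonus_rate(tokens: int) -> float:
    """Return the advertised bonus percentage for a purchase size."""
    for threshold, rate in TIERS:
        if tokens >= threshold:
            return rate
    return 0.0

# The "bonus" only ever rewards spending more -- the classic upsell hook.
for n in (50, 100, 1_000, 100_000):
    print(n, bonus_rate(n))
```

However the tiers are actually spaced, the design goal is the same: every step up makes the next, larger purchase look like the "smart" one.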

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions; scam bots try to avoid them (and if one does answer, verify the details independently).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like “stage 5 ends soon,” “listing date approaching,” and “limited presale” are designed to push you into making fast decisions.

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claims on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY
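The addresses above span several blockchains, recognizable from their prefixes and character sets. A small format-based classifier can sort them by shape; this is a heuristic sketch only (prefix and charset patterns, not checksum validation):

```python
import re

# Heuristic wallet-address classification by prefix/charset.
# Shape checks only -- no checksum validation. Order matters: chain-specific
# prefixes are tried before the generic base58 (Solana) fallback.
PATTERNS = [
    ("Ethereum", re.compile(r"^0x[0-9a-fA-F]{40}$")),
    ("Bitcoin (bech32)", re.compile(r"^bc1[0-9a-z]{25,60}$")),
    ("Dogecoin", re.compile(r"^D[1-9A-HJ-NP-Za-km-z]{25,33}$")),
    ("Tron", re.compile(r"^T[1-9A-HJ-NP-Za-km-z]{33}$")),
    ("XRP", re.compile(r"^r[1-9A-HJ-NP-Za-km-z]{24,34}$")),
    ("Solana", re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")),
]

def classify(address: str) -> str:
    """Best-effort guess at which chain an address belongs to."""
    for chain, pattern in PATTERNS:
        if pattern.match(address):
            return chain
    return "unknown"

print(classify("0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA"))  # Ethereum
```

Spreading payment addresses across many chains is typical of these operations: whatever wallet a victim already holds, the scam has a matching address ready.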

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Categories: Malware Bytes

Chrome “preloading” could be leaking your data and causing problems in Browser Guard

Tue, 02/17/2026 - 1:25pm

This article explains why Chrome’s “preloading” feature can cause scary-looking blocks in Malwarebytes Browser Guard and how to turn it off.

Modern browsers want to provide content instantly. To do that, Chrome includes a feature called page preloading. When this is enabled, Chrome doesn’t just wait for you to click a link. It guesses what you’re likely to click next and starts loading those pages in the background—before you decide whether to visit them.

That guesswork happens in several places. When you type a search into the address bar, Chrome may start preloading one or more of the top search results so that, if you click them, they open almost immediately. It can also preload pages that are linked from the site you’re currently on, based on Google’s prediction that they’re “likely next steps.” All of this happens quietly, without any extra tabs opening, and often without any obvious sign that more pages are being fetched.

From a performance point of view, that’s clever. From a privacy and security point of view, it’s more complicated.

Those preloaded pages can run code, drop cookies, and contact servers, even if you never actually visit them in the traditional sense. In other words, your browser can talk to a site you didn’t consciously choose to open.

Malwarebytes Browser Guard inspects web traffic and blocks connections to domains it considers malicious or suspicious. So, if Chrome decides to preload a search result that leads to a site on our blocklist, Browser Guard will still do its job and stop that background connection. The result can be confusing: You see a warning page (called a block page) for a site you don’t recognize and are sure you never clicked.

Nothing unusual is happening there, and it does not mean your browser is “clicking links by itself.” It simply means Chrome’s preloading feature made a behind-the-scenes request, and Browser Guard intercepted it as designed. Other privacy tools take a similar approach. Some popular content blockers disable preloading by default because it leaks more data and can contact unwanted sites.

For now, the simplest way to stop these unexpected block pages is to turn off preloading in Chrome’s settings, which prevents those speculative background requests.

How to manage Chrome’s preloading setting

We recommend turning off page preloading in Chrome to protect your browsing privacy and to stop seeing unexpected block pages when searching the web. If you don’t want to turn off page preloading, you can try using a different browser and repeating your search.

To turn off page preloading:

  1. In your browser search bar, enter: chrome://settings
  2. In the left sidebar, click Performance.
  3. Scroll down to Speed, then toggle Preload pages off.
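If you manage Chrome across many machines, the same setting can be enforced centrally. Chrome’s NetworkPredictionOptions enterprise policy controls preloading (a value of 2 means “never preload”). As a sketch, a managed-policy file on Linux could look like this (the file name is illustrative; see Chrome’s enterprise policy documentation for your platform):

```json
{
  "NetworkPredictionOptions": 2
}
```

On Windows, the equivalent is a registry value under Chrome’s policy key rather than a JSON file.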

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.


Scam Guard for desktop: A second set of eyes for suspicious moments 

Tue, 02/17/2026 - 8:50am

Scams aren’t so obvious anymore. They’re well written, grammatically clean, and can lead victims to very convincing branded webpages. Scammers increasingly use AI tools to clone sites and create highly sophisticated scams at scale, so don’t rely on spotting typos anymore!

That’s why we’ve expanded Scam Guard, our free, AI-powered scam detection assistant, to Windows and Mac. Previously mobile-only, Scam Guard helps you quickly figure out whether something you’re looking at is risky, all before you click, reply, or share. 

When something feels off but you’re not sure what

Scams show up everywhere: emails, texts, pop-ups, messages, and websites that look legitimate. But when you’re moving fast, it’s easy to slip up. 

Scam Guard is designed for exactly those moments. If you’re unsure about a message or link, you can ask Scam Guard to take a look. It uses AI to analyze the content and give you a clear, fast assessment so you can decide what to do next with more confidence. 

How Scam Guard helps 

Scam Guard provides a quick reality check just when you need it: 

  • Real-time threat intelligence: An AI-powered chat companion backed by decades of Malwarebytes threat intelligence and cybersecurity expertise. Get instant verdicts you can trust, plus clear next steps.
  • Comprehensive scam detection: It flags suspicious messages and links, spots common scam tactics, and explains why something is risky. It covers romance, phishing, financial, text, robocall, and shipping scams, and more.
  • Built for where scams start: Works right on your desktop, where many scams begin. No more wondering, “Is this legit?” With Scam Guard, you get an answer you can trust.
  • 24/7 support: Available around the clock, so you can get help anytime you need it.


Now on desktop, right where scams happen 

Many scams target users while they’re on their computers—checking email, browsing the web, or managing accounts. Bringing Scam Guard to Windows and Mac helps where it’s most useful. 

Whether you’re reviewing an unexpected message, a pop-up that feels urgent, or a deal that sounds a little too good, Scam Guard gives you a smarter way to pause and check before reacting.  Here’s how to share a scam with Scam Guard on your computer.

Extra protection, without extra stress 

Staying safe online shouldn’t mean becoming suspicious of everything. It should mean having the right tools when something doesn’t add up. Scam Guard helps you slow down, spot warning signs, and avoid costly mistakes.


