Malwarebytes

A week in security (August 12 – 18)

Malwarebytes Security - Mon, 08/19/2019 - 1:55pm

Last week on Malwarebytes Labs, we took a look at the potential pitfalls of facial recognition technology, looked at ways domestic abuse survivors can secure their data, and explored the education threat landscape. We also kicked off a series looking at the Hidden Bee infection chain, and put QxSearch installs under the spotlight.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 12 – 18) appeared first on Malwarebytes Labs.

Categories: Malwarebytes

How much personalization is too much?

Malwarebytes Security - Mon, 08/19/2019 - 11:00am

This story originally ran in The Parallax on January 25, 2019, and was written by Dan Tynan.

In 2012, when Target used data analytics to identify customers who were expecting a baby, then mailed them coupons for maternity clothing and nursery furniture, it inadvertently revealed a teenage girl’s pregnancy to her parents.

Back then, the revelation caused an uproar. Today, that kind of artificial intelligence-assisted profiling is rapidly becoming routine. Personalization is the new mantra of marketers. And most people are perfectly OK with that.

According to a 2018 survey by Accenture Interactive, 91 percent of consumers said they’d prefer to shop with brands that know their preferences and offer personal recommendations. Three-fourths of them said they wanted brands to deliver a curated experience. And only 27 percent complained about companies being too invasive.

Personalization can be a boon. It’s helpful when a retailer remembers past purchases so you can easily reorder them. It’s a plus when Netflix recommends shows you want to binge on. And you may appreciate receiving a personally curated box of clothing from StitchFix.

But how much personalization is too much? And how do you control what happens to this highly personal information? The answers aren’t always clear.

How marketers get to you

For decades, marketers have relied on generic personas to customize their advertising: He’s a stay-at-home dad who watches basketball and drives a minivan; she’s a mother who shops at Whole Foods and goes running on weekends.

Now, thanks to the data explosion generated by Internet-connected devices, and the ability to rapidly analyze this tsunami of information using AI, marketers are on the cusp of crafting offers specific to individual consumers, at scale.

“One-to-one marketing is really the holy grail,” says Patrick Tripp, vice president of product strategy for RedPoint Global, which offers a customer data platform to help brands personalize their marketing campaigns. “Not simply knowing your name, background, or interests, but also recommending the right path of personalized experiences, delivered at the right moment.”

By analyzing data from smart appliances, fitness trackers, and grocery purchases, for example, a marketer could figure out that you’re trying to avoid gluten. In response, it might recommend a wheat-free pasta recipe or a fat-burning exercise regimen, Tripp says.

The challenge is doing it in a way that’s helpful but not creepy.

“Marketers need to be explicit about how they ask consumers for permission and capture data, but implicit about how they’re actually delivering these experiences,” he says. “There are subtle ways to recommend products that are in line with the clues you’ve been giving but aren’t invasive.”

Where’s the data coming from?

But this level of personalization requires lots of data—much of it collected, aggregated, and shared without most users ever being aware of it. In addition to information they collect in the course of doing business with you, many brands also augment your profiles with data acquired from third-party brokers and web-tracking companies.

A December 2017 study by web browser privacy add-on maker Ghostery found that three out of four web pages contain some kind of tracking technology, and one in six sites use them to collect and share personal information. (Trackers for the biggest collectors of personal info, Google and Facebook, were respectively found on 60 percent and 27 percent of all sites surveyed.) Some trackers can uniquely identify individuals, such as when a URL request contains the user’s email address, says Jeremy Tillman, Ghostery’s director of product.

The information can get very personal. For example, he says, if you search a site for information about HIV, or schedule an appointment with a clinician, that information could be shared among other companies that use the same tracking technology.

A recent report by Privacy International revealed that 20 popular Android apps—including those by Kayak, Spotify, TripAdvisor, and Yelp—are automatically transmitting data to Facebook, even if their users don’t have a Facebook account.

“If combined, data from different apps can paint a fine-grained and intimate picture of people’s activities, interests, behaviors, and routines, some of which can reveal special-category data, including information about people’s health or religion,” the report notes.

What could go wrong?

Any large collection of data is vulnerable to breaches and serves as a rich target for malicious actors, notes Paul Bischoff, a privacy advocate for Comparitech. The more personal the information, the more valuable it is. Companies may also share this data indiscriminately—as Facebook did when it allowed Cambridge Analytica to access personal data related to 87 million of its members, he adds.

“The same information used to personalize apps and websites can also be used to target you with political ads, and in more extreme cases can be used for harassment or discrimination,” Bischoff says.

And if a company goes out of business or is acquired, that highly personal data is almost always an asset that can be sold or transferred.

Personalization can also come back to bite you in the wallet. Life insurance giant John Hancock will soon require your Fitbit data, for example, to determine how much it charges you for coverage. Orbitz and Hotel Tonight already show different prices for flights and hotels, respectively, depending on the kind of device you use or the location of your phone. One-to-one personalized pricing is the next logical step, writes Neil Howe, a demographer and author credited with coining the term “millennials.”

What can you do?

If you’d rather not get personalization-themed offers from brands—or at least have more control over the data used to generate them—your options are pretty limited.

The Ghostery browser extension allows you to manage and block tracking technologies on each website. Android users can reset the unique advertising ID number on their phones, which essentially erases your previous tracking history and starts over. Google and Facebook let you opt out of seeing personalized ads, though they’ll continue to track you.

And while even Silicon Valley giants Amazon, Apple, and Google support some kind of overarching federal privacy regulation, it’s unlikely to go as far as GDPR’s “right to be forgotten,” which gives consumers control over the data companies generate about them.

“I definitely think we’ll see regulation in the US and other places beyond the European Union,” says Mike Herrick, senior vice president of Urban Airship, which helps brands engage with their customers using first-party data. “The key thing about GDPR is that it takes a privacy-by-design approach. Every company should be getting in front of that, being thoughtful about the data they use, and avoid doing anything sketchy.”

For now, though, the price of privacy remains eternal vigilance, Bischoff says.

“Any time you get a new device, sign up for a new account, or install a new app, take a moment to adjust your privacy settings,” he advises. “Often, it’s possible to opt out of a lot of data collection schemes, but most people never bother to do it.”

The post How much personalization is too much? appeared first on Malwarebytes Labs.

Categories: Malwarebytes

QxSearch hijacker fakes failed installs

Malwarebytes Security - Fri, 08/16/2019 - 5:06pm

Recently, one of the more dominant search hijacker families on our radar has started to display some curious behavior. The family in question is delivered by various Chrome extensions and classified as PUP.Optional.QxSearch because of its description in listings of installed extensions, which tells us that “QxSearch configures your default search settings.”

This branch of the search hijacker family is a clear descendant of SearchPrivacyPlus, which is referenced in our removal guide for a Chrome extension called SD App. The Chrome Web Store entries and websites that promote both QxSearch and SearchPrivacyPlus are almost identical. What’s different is that QxSearch tells users that the installation failed or that an extra step is required.

However, despite the message asking users to try again, the extension has already been installed. Curious.

How can we recognize QxSearch extensions?

QxSearch can be found in more than one Chrome extension in the Web Store. We can recognize them by spotting the QxSearch description, which also shows up in the overview section of the store.

QxSearch configures your default search settings

At the moment, these extensions are installed from the Web Store after a redirect from sites that are served up by ad-rotators. The sites all look similar, showing a prompt that tells users, “Flash SD App required to proceed” and a button marked “Browse Safely” that leads to the extension in the Web Store.

In the Web Store, another common denominator so far has been the “Offered by: AP” subhead.

During the installation, the “Permissions prompt” will show that the extension reads and changes your data on a number of websites:

Using the “Show Details” link will show users that the sites the extension wants to read and change belong to some of the most commonly-used search engines, including Google, Bing, and Yahoo; those are the domains targeted by this search hijacker.

The hijacker intercepts searches performed on these domains and redirects the user to a domain of their own, showing the search results while adding some sponsored results at the top.
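One defender-side way to spot extensions with this capability is to scan installed extension manifests for host permissions broad enough to read and change data on every site. Below is a minimal Python sketch; the directory layout, extension names, and manifests are synthetic stand-ins, not real Chrome profile data:

```python
import json
import pathlib
import tempfile

def broad_permission_extensions(extensions_dir):
    """Scan Chrome-style extension manifests and report those with host
    permissions broad enough to read and change data on any site --
    the capability these hijacker extensions request."""
    hits = []
    for manifest in pathlib.Path(extensions_dir).glob("**/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        perms = data.get("permissions", []) + data.get("host_permissions", [])
        if any(isinstance(p, str) and ("<all_urls>" in p or "://*" in p)
               for p in perms):
            hits.append(data.get("name", "<unnamed>"))
    return hits

# Synthetic demo: one benign and one over-permissioned manifest.
with tempfile.TemporaryDirectory() as tmp:
    for name, perms in [("NotesApp", ["storage"]),
                        ("SearchThing", ["<all_urls>", "tabs"])]:
        d = pathlib.Path(tmp) / name / "1.0"
        d.mkdir(parents=True)
        (d / "manifest.json").write_text(json.dumps(
            {"name": name, "permissions": perms}))
    found = broad_permission_extensions(tmp)

print(found)  # ['SearchThing']
```

Note that broad host permissions alone don’t prove malice, since many legitimate extensions request them; this is only a triage filter.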

We are not sure whether the behavior of showing a failed install notification is by design or just sloppy programming, but given the fact that the “error” hasn’t been corrected after a few weeks, this leads us to believe it might be on purpose.

Looking at the installation process, it looks as if the failure occurs when the extension is due to add an icon to the browser’s menu bar. As a result, these hijackers do not display an icon—a handy way to make them more difficult to remove.

Protection against QxSearch

Malwarebytes removes these extensions and blocks the known sites that promote these extensions.

It is useless to blacklist the extensions, because a new one is pushed out at least once every day. So instead, we’ll show you some telltale traits they have in common so you can recognize and avoid them.

IOCs of QxSearch

Search result domains:


Landing pages:

  • … /chrome/new/2/?v=500#sdapp93
Similar but not the same

Another family of hijackers displays slightly similar behavior by showing an installation failed notification.

Only in this case the “interrupted installation” refers to the installation of a second extension that the first one tried to trigger. In this family, the first extension is a search hijacker and the second one is a “newtab” hijacker. The search hijackers in this family are detected as PUP.Optional.Safely and the Newtab hijacker is called Media New Tab.

Why would search hijackers do this?

Search hijackers don’t generate large amounts of cash for threat actors the way ransomware or banking Trojans do. So, the publishers are always looking for ways to get installed on large numbers of systems and stay installed for as long as possible.

This “installation failed” tactic could have been invented to make users think nothing was installed, so there is no reason to check for or suspect suspicious behavior. This does not explain why they opted to redirect to their own domain rather than simply adding the sponsored results as we have seen in the past.

So, it remains a bit of a mystery and reason enough to keep an eye on this family.

Search hijackers in general

Search hijackers come in different flavors. Basically, they can be divided into three main categories if you look at their methodology:

  • The hijacker redirects victims to the best paying search engine.
  • The hijacker redirects victims to their own site and shows additional sponsored ads.
  • The hijacker redirects victims to a popular search engine after inserting or replacing sponsored ads.

By far the most common vehicle is the browser extension, whether they are called extensions, add-ons, or browser helper objects. But you will see different approaches here as well:

  • The extension lets the hijacker take over as the default search engine.
  • The extension takes over as “newtab” and shows a search field in that tab.
  • The extension takes permission to read and change your data on websites. It uses these permissions to alter the outcome of the victim’s searches.

Especially in the last case of each list, it helps the hijacker to stay hidden from plain sight, as the user might not notice that their search results are “off”—which seems to be exactly what this branch of the QxSearch family is doing.

A short lesson

The lesson we can take away from these search hijackers is that a notification claiming an install has failed is not reason enough to assume that nothing was installed. Stay vigilant so that, even if the culprit isn’t readily visible, you’ll know what to do.

Stay safe, everyone!

The post QxSearch hijacker fakes failed installs appeared first on Malwarebytes Labs.

Categories: Malwarebytes

The Hidden Bee infection chain, part 1: the stegano pack

Malwarebytes Security - Thu, 08/15/2019 - 11:26am

About a year ago, we described the Hidden Bee miner delivered by the Underminer Exploit Kit.

Hidden Bee has a complex and multi-layered internal structure that is unusual among cybercrime toolkits, making it an interesting phenomenon on the threat landscape. That’s why we’re dedicating a series of posts to exploring particular elements and updates made during one year of its evolution.

Recently, we decided to revisit this interesting miner, describing its loader that starts the infection from a single malicious executable. This post will present an alternative loader that is deployed when the infection starts from the Underminer Exploit Kit. It is analogous to the loader we described in the following posts from 2018: [1] and [2].

The dropped payloads: an overview

The first time we spotted Hidden Bee, it started the infection from a flash exploit. It downloaded and injected two elements with WASM extensions that in reality were executable modules in a custom format. We described them in detail here.

The files with WASM extensions, observed a year ago

Those elements were the initial loaders, responsible for initiating the infection chain that at the end installed the miner.

Nowadays, those elements have changed. If we take a look at the elements dropped by the same EK today, we will no longer find those WASM extensions. Instead, we encounter various multimedia files: a WAV (alternatively two WAVs), a JPEG, and a PNG.

The elements downloaded nowadays: WAV, JPG, PNG

The WAV files are downloaded by iexplore.exe, the browser where the exploit is run. In contrast, the images are downloaded at later stages of infection. For example, the JPG is always downloaded from the dllhost.exe process. The PNG is often downloaded from yet another process.

In some runs, we observed the PNG to be downloaded instead of the JPG:

Alternative: PNG being downloaded after WAV

We will start our journey of Hidden Bee analysis by looking at these files. Then, we will move to see the code responsible for processing them in order to reveal their hidden purpose.

The roadmap of the full described package:

Diagram showing the transitions between the elements

The downloaded WAV

The WAV file sounds like grey noise, and we suspect that it is meant to hide some binary belonging to the malware.

An oscillogram of the WAV file

The data is unreadable, probably encrypted or obfuscated:

We also found a repeating pattern inside, which looks like encrypted padding. The size of the chunk is 8 bytes.

The repeating pattern inside the file: 8 bytes long

This time, using the repeating pattern as an XOR key didn’t help in getting a readable result, so probably some more complex block cipher was used.
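Hunting for such encrypted padding can be automated: split the data into fixed-size blocks and look for one that repeats far more often than high-entropy ciphertext should allow. A small sketch of the idea (the input below is synthetic, not the actual WAV):

```python
from collections import Counter

def most_common_block(data: bytes, block_size: int = 8):
    """Return the most frequent fixed-size block and its count; a block
    repeated many times in otherwise random-looking data is a good
    candidate for encrypted padding."""
    blocks = [data[i:i + block_size]
              for i in range(0, len(data) - block_size + 1, block_size)]
    return Counter(blocks).most_common(1)[0]

# Synthetic demo: unique "ciphertext" blocks followed by repeated padding.
data = bytes(range(240)) + b"\xDE\xAD\xBE\xEF\x01\x02\x03\x04" * 12
block, count = most_common_block(data)
print(block.hex(), count)                  # deadbeef01020304 12
```

A repeated block near the end of the file is the typical signature of zero-padding run through an 8-byte block cipher in ECB mode, which fits what is described above.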


The downloaded JPG

Below is a sample JPG, downloaded from a URL in the format: /views/[unique_string].jpg

In contrast to the WAV content, the JPG always looks like a valid image. (Interestingly, all the JPGs we observed have a consistent theme of manga-styled girls.) However, if we take a closer look at the image, we can see that some data is appended at the end.

Let’s analyze the JPG and try to extract the payload.

First, I opened the image in a hex editor (e.g., HxD). The size of the full image is 156,005 bytes. The last 118,762 bytes belong to the malware. So, we need to remove the first 37,243 bytes (156,005-118,762=37,243) in order to get the payload.

The appended part of the JPG

The payload does not look like a valid code, so it is probably obfuscated. Let’s try the easiest option first and see if there are any candidates for the XOR key. We can see that the payload has padding at the end:

Let’s try to apply the repeating character (in the given example it is 0xE5) as an XOR key. This is the result (1953032199142ea8c5872107da8f2297):

Repeating the experiment on various payloads, we can see that the result always starts with the keyword !rcx. As we know from analyzing other elements of Hidden Bee, the authors of this malware decided to use various custom formats named after 64-bit Intel registers. We also encountered packages starting with !rbx and !rsi at different layers. So, this is the first element in the chain that uses this convention.
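The carving and decoding steps described above are easy to reproduce. In this sketch, the payload size and the 0xE5 key come from the sample discussed here and would differ per sample, and the “image” is synthetic:

```python
def decode_appended_payload(jpg_bytes: bytes, payload_size: int) -> bytes:
    """Carve the payload appended to the end of a JPG and XOR-decode it,
    using the trailing padding byte as the key (plaintext padding is
    assumed to be zero bytes, so the encoded padding byte IS the key)."""
    encoded = jpg_bytes[-payload_size:]
    key = encoded[-1]                       # e.g. 0xE5 in the sample above
    return bytes(b ^ key for b in encoded)

# Synthetic demo: a fake image with an XOR-encoded '!rcx' module appended.
module = b"!rcx" + b"\x00" * 12
appended = bytes(b ^ 0xE5 for b in module)
fake_jpg = b"\xFF\xD8\xFF\xE0" + b"J" * 64 + appended
decoded = decode_appended_payload(fake_jpg, len(appended))
print(decoded[:4])                          # b'!rcx'
```

The same single-byte XOR trick generalizes: if the decoded buffer doesn’t begin with !rcx, the key candidate was wrong.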

When we load the !rcx module into IDA, we can confirm that it contains valid code. More detailed explanation about the !rcx format will be given later on in this article.


The downloaded PNG

Let’s have a look at a sample PNG, downloaded from “captcha.png” (URL format: /images/captcha.png?mod=attachment&u=[unique_id]):

Although it is a PNG in a valid format, it looks like noise. It probably represents bytes of some encrypted data. An attempt at converting the PNG to raw bytes didn’t give any readable results. We need to analyze the code in order to discover what it hides.

Code analysis: the initial SWF file

The initial SWF file is embedded on the website and responsible for serving the exploit. If we look inside it, we will not find anything malicious at first. However, among the binary data we can find another suspicious WAV as an audio asset:

The beginning of the file:

This SWF file also contains a decoder for it:

The function “decode” takes four parameters. The first of them is the byte array containing the WAV asset: that is the content to be decoded. The second argument is an MD5 hash (the “setup” function is an MD5 implementation) of the concatenated AppId and AppToken: that is probably the encryption key. The third parameter is a salt (probably the initialization vector of the cipher).

The salt is fetched from the HTML page, where the Flash component is embedded:

Alternative case: two WAV files

Sometimes, rather than embedding the WAV containing the Flash exploit, the authors use another delivery model: they store the URL of the WAV and retrieve the file at runtime.

In the below example, we can see how this model is applied to Hidden Bee. The salt, along with the WAV URL, are both stored in the Javascript embedded in the HTML:

The Flash file first loads it and then decodes it as the next step:

Looking at the traffic capture, we can see that in this case, not one, but two WAV files are downloaded:

A case when two WAV files were downloaded (and none embedded in the Flash)

The algorithms used to encrypt the content of the first WAV may vary, and sometimes the algorithm is supplied as one of the parameters. After the content is fetched, the data from the WAV files is decoded using one of the available algorithms:

We can see that the expected content is a Flash file that is then loaded:

The “decode” function

The function “decode” is imported from the package “”:

The full decompiled code is available here.

When we look inside, we see that the code is slightly obfuscated:

Looking at the decompiled code, we see some interesting constants. For example, –889275714 in hex is 0xCAFEBABE. As we found during analysis of other Hidden Bee elements, this DWORD was used by the same authors before as a magic number identifying one of the custom formats.

Internally, there are references to a function from another module: E_ENCRYPT_process_bytes(). Inside this function, we see calls suggesting that the Rabbit Cipher has been used:

Rabbit uses a 128-bit key (the same length as the MD5 hash that was mentioned before) and a 64-bit initialization vector. (In different runs, a different encryption algorithm may be selected.)

After the decoding process is complete, the revealed content is loaded:

The first WAV: a Flash exploit

The decoded WAV contains a package with two elements embedded: a Flash file (movies.swf) and the configuration file (config.cfg). The decrypted data starts from the magic DWORD 0xCAFEBABE, which we noticed in the code of the previous SWF.

The Flash file (movies.swf) contains an embedded exploit. In the analyzed case, the exploit used is CVE-2015-5122; however, a different exploit may be used on a different machine:

The payload (shellcode) is stored in the form of an array (binary version available here: 9aec11ff93b9df14f060f78fbb1b47a2):

The configuration file (config.cfg) contains the URL to another WAV file.

The payload is padded with NOP (0x90) bytes, and the parameters, including the configuration, are filled there before the payload runs.

The fragment of the code feeding the configuration into the payload

The shellcode: downloading the second WAV

The second WAV, in contrast to the first one, is always downloaded and never embedded. It is retrieved by the “PayloadWin32” shellcode (9aec11ff93b9df14f060f78fbb1b47a2), deployed after the successful exploitation.

Looking inside this shellcode, we find the function that is responsible for downloading and decrypting another WAV. The shellcode uses parameters that were filled by the previous layer. This buffer contains the URL that will be queried and the key that will be used for decryption of the payload. It loads functions from wininet.dll using their checksums. After the initialization steps, it queries the supplied URL. The expected result is a buffer with a header typical for WAV files.

As we already suspected, the data of the WAV (starting from the offset 0x2C) contains the encrypted content. Indeed, blocks that are 8 bytes long are decrypted in a loop:

After the decryption is complete, the next module will be revealed. It is interesting to take a look at the expected header of the payload to learn which format is used for the output element. This time, the decoded data is supposed to start with the following magic numbers: 0x01, 0x04, …, 0x10.

The second WAV: an executable in proprietary format

In the illustration below, we can see how the data of the WAV looks after being decrypted (9b37c9ec19a53007d450b9b9c8febbe2):

This is an executable component that is loaded into Internet Explorer. After it decodes the imports, it starts to look much more familiar:

We can see that it follows a structure analogous to the one described in last year’s article.

This module is first executed within Internet Explorer. Then, it creates another process (dllhost.exe) in a suspended state:

It injects its original copy there (769a05f0eddd6ef2ebdd13618b244758):

Then it redirects execution to its loading function. Below, we can see the Entry Point of the implanted module within dllhost.exe.

A detailed analysis of the execution flow of this module and its format will be given later in the article.

At this point, it is important to note that this dllhost.exe process is the one that further downloads the aforementioned images.

The modules with the custom format

The module with the custom format is analogous to the one described before. However, we can see that it has significantly evolved.

There are changes in the header, as well as improvements in the implementation.

Changes in the custom format

The new header is similar to the previous one. The few details that have changed are: the magic number at the beginning (from 0x10000301 to 0x10000401), and the format in which the DLLs are stored (the length of a DLL name has been added). That’s why we will refer to this format as “0x10000401 format.”

Another change is that the names of the DLLs are now obfuscated by a simple XOR with a one-byte key. They are deobfuscated just before being loaded.
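Because the key is a single byte, such obfuscated DLL names fall to brute force: try all 256 keys and keep the one that yields a printable, “.dll”-terminated string. A sketch of that analyst-side shortcut (the 0x5A key below is hypothetical; the malware itself simply applies its known key):

```python
def brute_force_xor_name(blob: bytes):
    """Try all 256 single-byte XOR keys; return (key, name) for the
    first key producing a printable ASCII string ending in '.dll'."""
    for key in range(256):
        candidate = bytes(b ^ key for b in blob)
        try:
            text = candidate.decode("ascii")
        except UnicodeDecodeError:
            continue
        if text.isprintable() and text.lower().endswith(".dll"):
            return key, text
    return None

# Demo: a DLL name obfuscated with the (hypothetical) key 0x5A.
obfuscated = bytes(b ^ 0x5A for b in b"Cabinet.dll")
print(brute_force_xor_name(obfuscated))     # (90, 'Cabinet.dll')
```

This is why single-byte XOR hides strings only from casual `strings` output, not from analysis.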

Summing up, we can visualize the new format in the following way:

Obfuscation used

This time, the authors decided to obfuscate all the strings used inside the module. Now all the strings are decoded just before use.

Example: decoding a string before use

The decoding algorithm is simple, based on XOR:

The string-decoding algorithm

Inside the images downloader

Let’s look inside the first module in the 0x10000401 format that we encountered. This module is an initial stage, and its role is to download and unpack the other components. One such component is in a CAB format (that’s why we can see the Cabinet.dll among the imported DLLs).

The role of this module is similar to the first “WASM” mentioned in our post a year ago. However, the current version is not only better protected, but also comes with some improvements. This time, the downloaded content is hidden in the images. So, analyzing this element can help us understand how the steganography in use works.

First, we can see that the URLs are retrieved from their Base64 form:

This string decodes to a list containing URLs of the PNG and JPG files that are going to be downloaded. For each sample, this set is unique. None of the URLs can be reused: the server gives a response only once. An example of a URL set:

So, we can confirm that this module is the one responsible for downloading and processing the observed images. Indeed, inside we can find the functions responsible for their decoding.

Decoding the JPG

After the payload is retrieved, the JPG header is validated.

Then, the payload is decoded by simply using an XOR with the last byte. The decoded content is expected to start from the !rcx magic ID.

After decoding the content, the !rcx module is validated with the help of a SHA-256 hash. The valid hash is stored in the module’s header and compared with the calculated hash of the file content.

If the validation passes, the shellcode stored in the !rcx module is loaded. More details about the execution flow will be given later.

The !rcx package has a simple header:

Decoding the PNG

Retrieving the content from the PNG is more complex.

“captcha.png” – the encrypted CAB file

First, after downloading, the PNG header is checked:

The function decoding the PNG has the following flow:

It converts the PNG into byte content and decrypts it with the help of the ARIA cipher. The result should be a file in CAB format. The unpacked CAB is supposed to contain a module “bin/i386/core.sdb” that also occurred in our previous encounters with Hidden Bee.

The authors are careful not to reuse URLs as well as encryption keys. That’s why the ARIA key is different for every unique payload. It is stored just after the end of the 0x10000401 module:

Key format: WORD key length; BYTE key_bytes[];

During the module’s loading, the key is rewritten into another memory area, from which it is used to decrypt the downloaded module.
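The trailer layout above (“WORD key length; BYTE key_bytes[]”) is trivial to parse. A sketch assuming a little-endian WORD, with a synthetic module blob standing in for the real one; the ARIA decryption itself requires a third-party crypto library and is not shown:

```python
import struct

def parse_trailing_key(blob: bytes, module_size: int) -> bytes:
    """Read the decryption key stored right after the 0x10000401 module:
    a WORD (assumed little-endian) key length, then the key bytes."""
    (key_len,) = struct.unpack_from("<H", blob, module_size)
    start = module_size + 2
    return blob[start:start + key_len]

# Synthetic demo: a fake 64-byte "module" followed by a 16-byte key.
module = b"\x01\x04\x00\x10" + b"\x00" * 60
key = bytes(range(16))
blob = module + struct.pack("<H", len(key)) + key
print(parse_trailing_key(blob, len(module)).hex())
# 000102030405060708090a0b0c0d0e0f
```

Storing the key outside the module body is consistent with the loader rewriting it into a separate memory area, as described above.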

The CAB file retrieved from the PNG is available here: 001bdc26b2845dcf839f67a8760c6839

It contains core.sdb (d1a2fdc79c154b120a0e52c46a73478d). That is a second module in Hidden Bee’s custom format.

Inside core.sdb

This module (retrieved from the PNG) is a second downloader component in the 0x10000401 format. This time, it uses a custom TCP-based protocol, referenced by the authors as SLTP. (This protocol was also used by the analogous component seen one year ago.) The embedded links:

sltp:// sltp://

Execution flow
  1. Checks for blacklisted processes. If any are detected, exits.
  2. Removes functions: DbgBreakPoint, DbgUserBreakPoint by overwriting their beginning with the RET instruction.
  3. Checks if the malware is already installed. If yes, exits.
  4. Creates an installation mutex {71BB7F1C-D700-4487-B9C6-6DD9863DFE91}-ins.
  5. If the module was run with the flag==1:
    1. Connects to the first address: sltp://
    2. Sets an environment variable INSTALL_SOURCE to the value given as an argument.
    3. Runs the downloaded next stage module.
  6. If the module was run with the flag!=1:
    1. Performs checks against VM. If detected, exits.
    2. Connects to the second address: sltp:// This time, appends the victim’s fingerprint to the URL. Format: <URL>&sid=<INSTALL_SID>&sz=<unique machine ID: 16 bytes hex>&os=<Windows version number>&ar=<architecture>
    3. Runs the downloaded next stage module.
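The fingerprinting step in the flow above builds a URL in the documented `&sid=..&sz=..&os=..&ar=..` format. A sketch with invented placeholder values (the real SLTP hosts are not reproduced here):

```python
def build_fingerprint_url(base_url, install_sid, machine_id, os_ver, arch):
    """Append the victim fingerprint in the documented format:
    <URL>&sid=..&sz=..&os=..&ar=.. (machine_id is 16 bytes, hex-encoded)."""
    return (f"{base_url}&sid={install_sid}"
            f"&sz={machine_id.hex()}&os={os_ver}&ar={arch}")

# Demo with placeholder values; the host name is invented.
url = build_fingerprint_url("sltp://example.invalid/setup.bin?id=1",
                            42, bytes(16), "6.1", "x86")
print(url)
```

Such per-victim query strings are a useful detection artifact: the parameter names are stable even when the host and path rotate.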
Defensive checks

At this stage, many anti-analysis checks are deployed. First, there are checks to detect if any of the blacklisted processes are running. The enumeration of the processes is implemented using a low-level function: NtQuerySystemInformation with a parameter 5 (SystemProcessInformation).

The blacklist contains popular debuggers and sniffers:

“devenv.exe” , “wireshark.exe”, “vmacthlp.exe”, “procmon.exe”, “ollydbg.exe”, “idag.exe”, “ImmunityDebugger.exe”, “windbg.exe”
“EHSniffer.exe”, “iris.exe”, “procexp.exe”, “filemon.exe”, “fiddler.exe”

The names of the processes are obfuscated, so they are not visible on the strings list. If any of those processes are detected, the execution of the module terminates.
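Reimplemented at a high level, the check is just a case-insensitive membership test over the running process names (the malware gathers that list via NtQuerySystemInformation; in this sketch the list is simply passed in):

```python
BLACKLIST = {
    "devenv.exe", "wireshark.exe", "vmacthlp.exe", "procmon.exe",
    "ollydbg.exe", "idag.exe", "immunitydebugger.exe", "windbg.exe",
    "ehsniffer.exe", "iris.exe", "procexp.exe", "filemon.exe",
    "fiddler.exe",
}

def has_blacklisted(process_names):
    """Return True if any running process matches the blacklist,
    compared case-insensitively."""
    return any(name.lower() in BLACKLIST for name in process_names)

print(has_blacklisted(["explorer.exe", "Wireshark.exe"]))  # True
print(has_blacklisted(["explorer.exe", "svchost.exe"]))    # False
```

For an analyst, the flip side of this check is handy: renaming the analysis tool’s executable is usually enough to sidestep a name-based blacklist.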

Another function deploys a set of anti-VM checks. The anti-VM checks include:

CPUID with EAX=40000000 (a check for Hypervisor’s Brand):

The VMware I/O port (more details [here]):

VPCEXT instruction (more details [here])

Checking the list of common VM vendors:

Checking the BIOS versions typical for virtual environments:

Detection of any of the features suggesting a VM results in termination of the component.

Downloading new modules

The next elements of Hidden Bee are downloaded over the custom “SLTP” protocol.

The raw TCP socket created to communicate using the SLTP protocol:

The communication is encrypted. We can see that the expected output is a shellcode that is loaded and executed:

The way in which it is loaded reminds me of the elements we described recently in “Hidden Bee: Let’s go down the rabbit hole”. The current module loads a list of functions that will be passed to the next module. It is a minimalistic, custom version of an import table. It also passes the memory with the downloaded filesystem to be used for further loading of components.

The !rcx package

This element retrieves the custom filesystem used by this malware. As we know from previous analysis, Hidden Bee uses its own, custom filesystems that are mounted in the memory of the malware and passed to its components. This filesystem is important for the execution flow because it contains many other components that are supposed to be installed on the attacked system in order to continue the infection.

As mentioned before, unpacking the JPG gave us an !rcx package. After this package is downloaded and its SHA-256 checksum is validated, it is repackaged. First, at the end of the !rcx package, the list of URLs (JPG, PNG) from the previous module is copied. Then, the ARIA key is copied. The size of the module and its SHA-256 hash are updated. Then, the execution is redirected to the first stage shellcode fetched from the !rcx.

This shellcode was the one we saw at first, after decoding the !rcx package from the JPG. Yet, looking at this part alone, we do not see anything overtly malicious. The more important elements are well protected and revealed only at later execution stages.

The shellcode from the !rcx package is executed in two stages. The first one unpacks and prepares the second. First, it loads its own imports using hardcoded names of libraries.

The checksums of the functions that are going to be used are stored in the module and compared with the names calculated by the function:

The checksum calculation algorithm

It uses functions from kernel32.dll (GetProcessHeap, VirtualAlloc, VirtualFree) and from ntdll.dll (RtlAllocateHeap, RtlFreeHeap, NtQueryInformationProcess).
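The general technique of resolving imports by checksum, i.e. hashing each export name and comparing it against a hardcoded value, is common in shellcode. The ROR-13 hash below is a classic stand-in; Hidden Bee's actual checksum algorithm differs and is not reproduced here:

```python
def api_checksum(name: str) -> int:
    """Rotate-right-13 hash over the function name, a classic shellcode
    technique. This stands in for Hidden Bee's real (different) algorithm."""
    h = 0
    for byte in name.encode("ascii"):
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # ROR 13 on a 32-bit value
        h = (h + byte) & 0xFFFFFFFF
    return h

def resolve_import(export_names, wanted_checksum):
    """Walk an export name list and return the name whose checksum matches."""
    for name in export_names:
        if api_checksum(name) == wanted_checksum:
            return name
    return None
```

Storing only checksums keeps the API names out of the strings list, which is exactly why they do not appear during static analysis.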

The repackaged !rcx module is supposed to be supplied as one of the arguments at the entry point of the first shellcode. This argument is essential because the second-stage shellcode will be unpacked from the supplied !rcx package.

Checking the !rcx magic (first stage shellcode)

A new memory area is allocated, and the second stage shellcode is unpacked there.

Decoding and calling next module

Inside the second shellcode, we see strings referencing further components of the Hidden Bee malware:


The role of the second stage is unpacking another part from the !rcx: an !rdx package.

Checking the !rdx magic (second stage shellcode)

From our previous experience, we know that the !rdx package is a custom filesystem containing modules. Indeed, after the decryption is complete, the custom filesystem is revealed:

So the part that was hidden in the JPG is, in reality, a package that decrypts the custom filesystem and deploys the next stage modules: /bin/i386/preload and /bin/i386/coredll.bin. This filesystem has even more elements that are loaded at later stages of the infection. Their full functionality will be described in the next article in our series.

Even more hidden

From the beginning, Hidden Bee malware has been well designed and innovative. Looking at one year of its evolution, we can be sure that the authors are serious about making it even more stealthy—and they don’t stop improving it.

Although the initial dropper uses components analogous to ones observed in the past, revealing their encrypted content now takes many more steps and much more patience. Analysis is made more difficult by the fact that the URLs and encryption keys are never reused, and work only for a single session.

The team behind this malware is skilled and determined. We expect that the Hidden Bee malware won’t be going extinct anytime soon.

The post The Hidden Bee infection chain, part 1: the stegano pack appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Trojans, ransomware dominate 2018–2019 education threat landscape

Malware Bytes Security - Wed, 08/14/2019 - 9:00am

Heading into the new school year, we know educational institutions have a lot to worry about. Teacher assignments. Syllabus development. Gathering supplies. Readying classrooms.

But one issue should be worrying school administrators and boards of education more than most: securing their networks against cybercrime.

In the 2018–2019 school year, education was the top target for Trojan malware, the most-detected (and therefore most pervasive) threat category for all businesses in 2018 and early 2019. Adware and ransomware were also particularly drawn to the education sector last year, which ranked as their first- and second-most targeted industry, respectively.

To better analyze these threats, we pulled telemetry on educational institutions from our business products, as well as from IP ranges connecting from .edu domains to our consumer products. What we found was that from January to June 2019, adware, Trojans, and backdoors were the three most common threats for schools. In fact, 43 percent of all education detections were adware, while 25 percent were Trojans. Another 3 percent were backdoors.

So what does this tell us to expect for the 2019–2020 school year? For one, educational institutions must brace themselves for a continuing onslaught of cyberattacks, as the elements that made them attractive to criminals have not changed. However, more importantly, by examining trends in cybercrime and considering solutions to weaknesses that made them susceptible to attack, schools may be able to expel troublesome threat actors from their networks for good.

Why education?

Surely there are more profitable targets for cybercriminals than education. Technology and finance have exponentially bigger budgets that could be tapped into via large ransom demands. Healthcare operations and data are critical to patient care—loss of either could result in lost lives.

But cybercriminals are opportunistic: If they see an easy target ripe with valuable data, they are going to take advantage. Why spend the money and time developing custom code for sophisticated attack vectors when they can practically walk through an open door onto school networks?

There are several key factors that combine to make schools easy targets. The first is that most institutions belonging to the education sector—especially those in public education—struggle with funding. Therefore, the majority of their budget goes to core curriculum rather than security. Hiring IT and security staff, training on best practices, and purchasing robust security tools and programs are often an afterthought.

The second is that the technological infrastructure of educational institutions is typically outdated and easily penetrated by cybercriminals. Legacy hardware and operating systems that are no longer supported with patches. Custom school software and learning management systems (LMSes) that are long overdue for updates. Wi-Fi routers that are operating on default passwords. Each of these makes schools even more vulnerable to attack.

Adding insult to injury, school networks are at risk because students and staff connect from personal devices (that they may have jailbroken) both on-premises and at home. With a rotating roster of new students and sometimes personnel each year, there’s a larger and more open attack surface for criminals to infiltrate. In fact, we found that devices plugging into the school network (vs. school-owned devices) represented 1 in 3 compromises detected in H1 2019.

To complicate matters, students themselves often hack school software out of sheer boredom or run DDoS attacks so they can shut down Internet access and disrupt the school day. Each infiltration only widens the defense perimeter, making it nearly impossible for those in education to protect their students and themselves from the cyberattacks that are sure to come.

And with such easy access, what, exactly, are criminals after? In a word: data. Schools collect and store valuable, sensitive data on their children and staff members, from allergies and learning disorders to grades and social security numbers. This information is highly sought-after by threat actors, who can use it to hold schools for ransom or to sell for high profit margins on the black market (data belonging to children typically garners a higher price).

School threats: a closer look

Adware represented the largest percentage of detections on school devices in H1 2019. Many of the families detected, such as SearchEncrypt, Spigot, and IronCore, advertise themselves as privacy-focused search engines, Minecraft plugins, or other legitimate teaching tools. Instead, they bombard users with pop-up ads, toolbars, and website redirects. While not as harmful as Trojans or ransomware, adware weakens an already feeble defense system.

Next up are Trojans, which took up one quarter of the threat detections on school endpoints in H1 2019. In 2018, Trojans were the talk of the town, and detections of this threat in organizations increased by 132 percent that year.

While still quite active in the first half of 2019, we saw Trojan detections decrease a bit over the summer, giving way to a landslide of ransomware attacks. In fact, ransomware attacks against organizations increased a shocking 365 percent from Q2 2018 to Q2 2019. Whether this is an indication of a switch in tactics as we head into the fall or a brief summer vacation from Trojans remains to be seen.

The top two families of Trojans in education are the same two that have been causing headaches for organizations worldwide: Emotet and TrickBot. Emotet leads Trojan detections in every industry, but has grown at an accelerated pace in education. In H1 2019, Emotet was the fifth-most predominant threat identified in schools, moving up from 11th position in 2018. Meanwhile, TrickBot, Emotet’s bullying cousin, represents the single largest detection type in education among Trojans, pulling in nearly 6 percent of all identified compromises.

Emotet and TrickBot often work together in blended attacks on organizations, with Emotet functioning as a downloader and spam module, while TrickBot infiltrates the network and spreads laterally using stolen NSA exploits. Sometimes the buck stops there. Other times, TrickBot has one more trick up its sleeve: Ryuk ransomware.

Fortunately for schools but unfortunately for our studies, Malwarebytes stops these Emotet-drops-TrickBot-drops-Ryuk attacks much earlier in the chain, typically blocking Emotet or TrickBot with its real-time protection engine or anti-exploit technology. The attack never progresses to the Ryuk stage, but our guess is that many more of these potential breaches would have been troublesome ransomware infections for schools if they hadn’t had the proper security solutions in place.

Class of 2020 threats

The class of 2020 may have a whole lot of threats to contend with, as some districts are already grappling with back-to-school attacks, according to The New York Times. Trojans such as Emotet and TrickBot had wildly successful runs last year—expect them, or other multi-purpose malware like them, to make a comeback.

In addition, ransomware has already made waves for one school district in Houston County, Alabama, which delayed its return to classes by 12 days because of an attack. Whether it’s delivered via Trojan/blended attack or on its own, ransomware and other sophisticated threats can bring lessons to a halt if not dealt with swiftly.

In 2019, Malwarebytes assisted the East Irondequoit Central School District in New York during a critical Emotet outbreak that a legacy endpoint security provider failed to stop. Emotet ran rampant across the district’s endpoint environment, infecting 1,400 devices and impacting network operations. Thankfully Malwarebytes was able to isolate, remediate, and recover all infected endpoints in 20 days without completely disrupting the network for students or staff.

If school IT teams do their research, pitch smart security solutions to their boards for funding, and help students and staff adopt best practices for online hygiene, they can help make sure our educational institutions remain functional, safe places for students to learn.

The post Trojans, ransomware dominate 2018–2019 education threat landscape appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Data and device security for domestic abuse survivors

Malware Bytes Security - Tue, 08/13/2019 - 12:33pm

For more than a month, Malwarebytes has worked with advocacy groups, law enforcement, and cybersecurity researchers to deliver helpful information in fighting stalkerware—the disturbing cyber threat that enables domestic abusers to spy on their partners’ digital and physical lives.

While we’ve ramped up our detections, written a safety guide for those who might have stalkerware on their devices, and analyzed the technical symptoms of stalkerware, we wanted to take a few steps back.

Many individuals need help before stalkerware even strikes. They need help with the basics, like how to protect a device from an abuser, how to secure sensitive data on a device, and how to keep their private conversations private.

At the end of July, we presented on data and device security to more than 150 audience members at the National Network to End Domestic Violence’s Technology Summit. Malwarebytes broke down several simple, actionable device guidelines for domestic abuse survivors.

Let’s take a look.

Device security

Similar to our guide for domestic abuse survivors who suspected stalkerware was implanted on their devices, the following safety tips are options—not every tip can be used by every survivor. The survivor who just escaped their home has different options available to them than the survivor who still lives with their partner, or the survivor being supported at a domestic abuse shelter’s safe house.

Taken together, we hope these guidelines will give individuals the information they need to stay safe in the ways that help them most.

Create and use a device passcode

A device passcode is the first line of defense in device security. With it, you can prevent unwanted third parties from rummaging through your apps, reading your notes, viewing your messages and emails, and looking through your search history. It’s simple, it’s effective, and it’s available on nearly every single modern smart device available today.

When creating a passcode, choose a string of at least six digits that has no immediate connection to you. That way, another person can’t guess your passcode by inputting a few important numbers, like your birthdate, your zip code, or select digits from your phone number.

If you own a newer iPhone or Android device, you can also choose to lock your device by using your biometric information, like a scan of your face or thumbprint.

Finally, while it may seem like an annoyance, you should set your device to require a passcode for every attempted unlock. Some devices let their users keep a device unlocked if the passcode was entered within the past 10 minutes, or even an hour. Don’t do that. It leaves your device unnecessarily vulnerable to someone simply picking it up and accessing the important data inside.

Install an antivirus

Now that more cybersecurity companies are taking stalkerware seriously, you have a few options in best protecting yourself and your device.

The free-to-download version of Malwarebytes, available for iOS and Android devices, as well as Mac and PC computers, both detects and removes thousands of malware strains that we’ve identified as stalkerware. Even if you don’t recognize the names of popular stalkerware products, we’re still helping you find and remove them.

The premium version of Malwarebytes, which has a paid subscription, runs a 24-hour defense shield on your device, preventing stalkerware from being implanted on your device.

Practice good link hygiene

Stalkerware typically gets delivered onto a device when someone clicks on a link that has been sent through an email or text. Because of this easy installation process, you should be careful with the links you click.

Do not open any email attachments from unknown senders, and do not click on links sent in text messages from phone numbers you do not recognize.

Check your notification settings

While helpful, the notifications that pop up on your device to tell you about the latest news alert or your most recent email come with a security trade-off. Depending on your phone settings, these notifications could reveal—even on your phone’s lock screen—the subject line of an email, the sender, and even the first few words of a text message.

To protect yourself, you can navigate to your device’s system settings and find the specific settings for notifications, blocking them from revealing any details on your lock screen.

On iPhones, you have the option of hiding all notifications, or choosing how notifications are shown: on the Lock Screen, on a pull-down menu, or in banners across the top of your device’s screen.

On Android devices, depending on the model, you can again go into your system settings, find the notification settings, and choose whether your notifications will “Show Content” or “Hide Content.”

Update your software

This last step for securing your device has two huge upsides: It’s easy to remember, and it’s extremely useful. Updating your software is the simplest, most efficient way to protect your device from known security vulnerabilities. The longer your software goes without an update, the longer it stays open to cyber threats.

Don’t take the risk.

Securing your sensitive data

Next, we’re going to explain the many ways to secure the data that lives on your device. Maybe you’re thinking of protecting photos that you will eventually send to law enforcement as evidence of abuse. Maybe you want to keep your private conversations private. Or maybe you’re trying to protect the online accounts that you access on your device.

Here are a few ways to protect yourself.

Use a secure messaging app

Secure messaging apps do just that—they keep messages secure. By implementing a feature called end-to-end encryption, these apps prevent third parties from eavesdropping on your conversations. For many of these apps, even the companies that develop them cannot access users’ messages, because those companies do not have the keys necessary to decrypt that data.

There are several options today for both iOS and Android devices, and you may have heard their names—Signal, WhatsApp, iMessage, Wire, and more.

Because there are several options, the important thing to remember is that there is no one, perfect secure messaging app—there is only the right secure messaging app for you. When choosing an app, think about what you need. The domestic abuse survivor who still lives with their partner might need to have their messages erased if their partner somehow finds a way into their device. The domestic abuse survivor with a new device might need to hide their phone number from new contacts.

Here is a brief rundown of some popular secure messaging apps that offer end-to-end encryption:

  • iMessage
    • Available only on iOS devices
    • Requires a text messaging plan through your phone provider
    • Apple has fought requests to reveal iMessages
  • Signal
    • “Ephemeral” messages that disappear entirely after a user’s chosen time limit
    • Secondary lock screen to enter the application
    • Users can “name” a conversation to obscure contact details. For example, a message thread with “Jane Doe” can be renamed “Mom”
  • WhatsApp
    • Users can manually clear chats
    • Secondary lock screen based on an iPhone’s Face ID credentials
  • Wire
    • Easiest option for users who want to hide their phone numbers
    • Relies on Wire account names to talk to other users

Securing data and online accounts—methods and challenges

There are several methods for protecting the data stored on your device, and each method comes with its own challenges. Below, we look at what you should know when using these methods.

Encrypting your device’s data

Encrypting the data stored on your device means that, if your device is stolen or lost, almost anyone who gets their hands on it cannot read the data in any legible form without knowing your passcode. Even if a third party tries to copy the entire contents of your device onto a laptop or desktop computer, your data will still be encrypted and unreadable without your passcode.

Your photos, videos, notes, screenshots, and audio recordings will all be protected this way, so long as a third party does not have access to the very high-tech, currently questionable forensic devices sold only to law enforcement.

On iPhones, your data is encrypted by default, and on Android devices, users can go into their settings and choose to encrypt their data.

The one caveat to remember here is that, with this method of protecting sensitive data, if you lose your device, you also lose that data.

Using a secure folder app

On the Apple App store and the Google Play Store, several apps let users choose specific pieces of data that they can encrypt and protect behind a separate passcode that is required to access the app itself.

On the Google Play Store, the app Secure Folder (easy enough name to remember, right?) lets users hide the Secure Folder app itself from appearing in a device’s app menu. This could provide a level of clandestine coverage to domestic abuse survivors who may not have the agency to have a secret, unshared device passcode.

But, if you use a secure folder app that does not have a cloud backup option, the same wrinkle applies—if you lose your device, you lose your data.

Uploading data to the cloud

Let’s talk about cloud storage.

Uploading your data to the cloud has become a popular option for both individuals and businesses that want to access data across multiple devices, removing the concern of losing a specific document, spreadsheet, or presentation because it only rests on one device.

Several popular cloud storage platforms today include iCloud, Google Drive, Dropbox, Box, and Amazon Drive.

Malwarebytes Labs has previously told readers that uploading their data to an encrypted cloud database is a secure model for protecting data, and we stand by that. But cloud storage presents another challenge: Privacy.

Uploading your data to the cloud is inherently not private because, rather than relying on only your device’s storage—whether it’s your phone’s memory or your laptop’s hard drive—you are relying on a separate company to hold your data.  

What this means is that using cloud storage is a balancing act of your needs. If your data is for your eyes only, that need won’t be met with cloud storage. But if you want to protect your sensitive data outside of just one device, cloud storage fits that need.

If you do use cloud storage, remember to choose a provider that encrypts users’ data, create and use a secure and complex password that can’t be guessed, and enable two-factor authentication.

What’s two-factor authentication? Oh, right.

Two-factor authentication

Two-factor authentication (2FA), or multi-factor authentication, is a feature you can enable on the important online accounts that hold your banking information, health care info, emails, and social media presence.

By turning on 2FA, you will be telling an online service provider, like Facebook or Gmail, that when you sign in to the service from a new device, you’ll need more than just your password to access your account. You’ll need a second authenticator, which, for many platforms, is delivered to you as a multi-digit code sent in a text message to your mobile device. After you’ve entered the code, only then can you access your online account.

But, because so many 2FA schemes rely on sending a text message with a second code, you have to remember the importance of securing your notification settings. When an online service texts you to verify your account, the 2FA code might show up on your device’s lock screen if you haven’t hidden your notifications. This means that a third party could still log into your account if they are simply near your device and able to read the notifications that pop up.


Protecting your device, and the data on it, can be a long, complicated process, but we hope that some of the tips above help you start that process. If you need help understanding your own safety—and thus, your own available device security options—you can call the domestic abuse advocates at the National Domestic Violence Hotline at 1-800-799-7233.

You’re not alone in this. Stay safe.

The post Data and device security for domestic abuse survivors appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (August 5 – 11)

Malware Bytes Security - Mon, 08/12/2019 - 11:38am

Last week on Malwarebytes Labs, we explained how brain-machine interface (BMI) technology could usher in a world of Internet of Thoughts, why having backdoors is problematic, and how we can improve the security of our smart homes.

To cap off Hacker Summer Camp week, the Labs team released a special ransomware edition of its quarterly cybercrime tactics and techniques report, which you can download here.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 5 – 11) appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Facial recognition technology: force for good or privacy threat?

Malware Bytes Security - Mon, 08/12/2019 - 11:00am

All across the world, governments and corporations are looking to invest in or develop facial recognition technology. From law enforcement to marketing campaigns, facial recognition is poised to make a splashy entrance into the mainstream. Biometrics are big business, and third party contracts generate significant profits for all. However, those profits often come at the expense of users.

There’s much to be said for ethics, privacy, and legality in facial recognition tech—unfortunately, not much of it is pretty. We thought it was high time we take a hard look at this burgeoning field to see exactly what’s going on around the world, behind the scenes and at the forefront.

As it turns out…quite a lot.

The next big thing in tech?

Wherever you look, government bodies, law enforcement, protestors, campaigners, pressure and policy groups, and even the tech developers themselves are at odds. Some want an increase in biometric surveillance, others highlight flaws due to bias in programming.

One US city has banned facial tech outright, while some nations want to embrace it fully. Airport closed-circuit TV (CCTV)? Fighting crime with shoulder-mounted cams? How about just selling products in a shopping mall using facial tracking to find interested customers? It’s a non-stop battlefield with new lines being drawn in the sand 24/7.

Setting the scene: the 1960s

Facial recognition tech is not new. It was first conceptualised and worked on seriously in the mid ’60s by pioneers such as Helen Chan Wolf and Woodrow Bledsoe. They did what they could to account for variances in imagery caused by degrees of head rotation, using RAND tablets to map 20 distances based on facial coordinates. From there, a name was assigned to each image. The computer then tried to remove the effect of the head’s changing angle from the distances it had already calculated, and recognise the correct individual placed before it.

Work continued throughout the ’60s, and was by all accounts successful. The computers used consistently outperformed humans where recognition tasks were concerned.

Moving on: the 1990s

By the mid to late ’90s, airports, banks, and government buildings were making use of tech essentially built on its original premise. A new tool, ZN-face, was designed to work with less-than-ideal angles of faces. It ignored obstructions, such as beards and glasses, to accurately determine the identity of the person in the lens. Previously, this type of technology could flounder without clear, unobstructed shots, which made it difficult for software operators to determine someone’s identity. ZN-face could determine whether it had a match in 13 seconds.

You can see a good rundown of these and other notable moments in early facial recognition development on this timeline. It runs from the ’60s right up to the mid ’90s.

The here and now

Looking at the global picture for a snapshot of current facial recognition tech reveals…well, chaos to be honest. Several distinct flavours inhabit various regions. In the UK, law enforcement rallies the banners for endless automated facial recognition trials. This despite test results so bad the universal response from researchers and even Members of Parliament is essentially “please stop.”

Reception in the United States is a little frostier. Corporations jostle for contracts, and individual cities either accept or totally reject what’s on offer. As for Asia, Hong Kong experiences something akin to actual dystopian cyberpunk. Protestors not only evade facial recognition tech but attempt to turn it back on the government.

Let’s begin with British police efforts to convince everyone that seemingly faulty tech is as good as they claim.

All around the world: The UK

The UK is no stranger to biometrics controversy, having made occasional forays into breach of privacy and stolen personal information. A region averse to identity cards and national databases, it still makes use of biometrics in other ways.

Here’s an example of a small slice of everyday biometric activity in the UK. Non-European residents pay for Biometric Residence Permits every visa renewal—typically every 30 months. Those cards contain biometric information alongside a photograph, visa conditions, and other pertinent information linked to several Home Office databases.

This Freedom of Information request reveals that information on one Biometric Residence Permit card is tied to four separate databases:

  • Immigration and Asylum Biometric System (Combined fingerprint and facial image database)
  • Her Majesty’s Passport Office Passports Main Index (Facial image only database)
  • Caseworking Immigration Database Image Store (Facial image only database)
  • Biometric Residence Permit document store (Combined fingerprint and facial image database)

It’s worth noting that these are just the ones they’re able to share. On top of this, the UK’s Data Protection Act contains an exemption that prevents immigrants from accessing their data, or indeed preventing others from processing it, as is their right under the General Data Protection Regulation (GDPR). In practice, this results in a two-tier system for personal data, and it means people can’t access their own case histories when challenging what they feel to be a bad visa decision.

UK: Some very testing trials

It is against this volatile backdrop that the UK government wants to introduce facial recognition to the wider public, and residents with biometric cards would almost certainly be the first to feel any impact or fallout should a scheme get out of hand.

British law enforcement have been trialling the technology for quite some time now, but with one problem: All the independent reports claim what’s been taking place is a bit of a disaster.

Big Brother Watch has conducted extensive research into the various trials, and found that an astonishing 98 percent of automated facial recognition matches at 2018’s Notting Hill Carnival were misidentifications, wrongly flagging innocent people as criminals. Faring slightly (but not much) better than the Metropolitan Police were the South Wales Police, who managed to get it wrong 91 percent of the time—yet, just like other regions, continue to promote and roll out the technology. On top of that, no fewer than 2,451 people had their biometric photos taken and stored without their knowledge.

Those are some amazing numbers, and indeed the running theme here appears to be: “This doesn’t work very well and we’re not getting any better at it.”

Researchers at the University of Essex Human Rights Centre essentially tore the recent trials to pieces in a comprehensive rundown of the technology’s current failings.

  • Across six trials, 42 matches were made by the Live Facial Recognition (LFR) technology, but only eight of those were considered a definite match.
  • Approaching the tests as if the LFR tech was simply some sort of CCTV device didn’t account for its invasive-by-design nature, or indeed the presence of biometrics and long-term storage without clear disclosure.
  • An absence of clear guidance for the public and the general assumption of legality for this tech used by police, versus a lack of explicit legal use in current law leaves researchers thinking this would indeed be found unlawful in the courts.
  • The public might naturally be confounded, considering that if someone didn’t want to be included in the trial, law enforcement would assume that the person avoiding this technology may be suspect. There’s no better example of this than a man who was fined £90 (US$115) for avoiding the LFR cameras for “disorderly behaviour” (covering his face) because they felt he was up to no good.

A damning verdict

The UK’s Science and Technology Committee (made up of MPs and Lords) recently produced their own findings on the trials, and the results were pretty hard hitting. Some highlights from the report, somewhat boringly called “The work of the Biometrics Commissioner and the Forensic Science Regulator” (PDF):

  • Concerns were raised that UK law enforcement is either unaware of or “struggling to comply” with a 2012 High Court ruling that the indefinite retention of innocent people’s custody images was unlawful—yet the practice still continues. Those concerns are exacerbated when considering those images would potentially be included in matching watchlists for any LFR technology making use of custodial images. There is, seemingly, no money available for investing in the manual review and deletion of said images. There are currently some 21 million images of faces and tattoos on record, which will make for a gargantuan task. [Page 3]
  • From page 4, probably the biggest hammer blow for the trials: “We call on the Government to issue a moratorium on the current use of facial recognition technology and no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established”.
  • The Forensic Science Regulator isn’t included on the prescribed lists it needs to be on where whistleblowing is concerned, so whistleblowers in (say) the LFR sector wouldn’t be as protected by legislation as they would be in other sectors. [Page 10]

There’s a lot more in there to digest but essentially, we have a situation where facial recognition technology is failing any and all available tests. We have academics, protest groups, and even MP committees opposing the trials, saying “The error rate is nearly 100 percent” and “We need to stop these trials.” We have a massive collection of images, many of which need to be purged instead of being fed into LFR testing. And to add insult to injury, there’s seemingly little scope for whistleblowers to call time on bad behaviour for technology potentially deployed to a nation’s police force by the government.

UKGOV: Keep on keeping on

This sounds like quite the recipe for disaster, yet nobody appears to be listening. Law enforcement insists human checks and balances will help address those appalling trial numbers, but so far it doesn’t appear to have helped much. The Home Office claims there is public support for the use of LFR to combat terrorism and other crimes, but will “support an open debate” on uses of the technology. What form this debate takes remains to be seen.

All around the world: the United States

The US experience with facial recognition tech is fast becoming a commercial one, as big players hope to roll out their custom-made systems to the masses. However, many of the same concerns that haunt UK operations are present here as well. Lack of oversight, ethics, failure rate of the technology, and bias against marginalised groups are all pressing concerns.

Corporate concerns

Amazon, potentially one of the biggest players in this space, has their own custom tech called Rekognition. It’s being licensed to businesses and law enforcement, and it’s entirely possible someone may have already experienced it without knowing. The American Civil Liberties Union weren’t exactly thrilled about this prospect, and said as much.

Amazon’s plans to roll out its custom tech to law enforcement, and ICE specifically, were met with pushback from multiple groups, including the company’s own employees. As with many objections to facial recognition technology, the issue was one focused on human rights. From the open letter:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used.”

Even some shareholders have cold feet over the potential uses for this powerful AI-powered recognition system. However, the best response you’ll probably find to some of these concerns from Amazon is a blogpost from February called “Some thoughts on facial recognition legislation.”

And in the blue corner

Not everyone in US commercial tech is fully on board with facial technology, and it’s interesting to see some of the other tech giant responses to working in this field. In April, Microsoft revealed they’d refused to sell facial tech to Californian law enforcement. According to that article, Google flat out refused to sell it to law enforcement too, but they do have other AI-related deals that have caused backlash.

The overwhelming concerns were (again) anchored in possible civil rights abuses. Additionally, the already high error rates in LFR married to potential bias in gender and race played a part.

From city to city, the battle rages on

In a somewhat novel turn of events, San Francisco became the first US city to ban facial recognition technology entirely. Police, transport authorities, and anyone else who wishes to make use of it will need approval by city administrators. Elsewhere, Orlando passed on Amazon’s Rekognition tech after some 15 months of—you guessed it—glitches and technical problems. Apparently, things were so problematic that they never reached a point where they were able to test images.

Over in Brooklyn, NY, the pressure has started to bear down on facial tech on a much smaller, more niche level. The No Biometric Barriers to Housing act wants to:

…prohibit the use of biometric recognition technology in certain federally assisted dwelling units, and for other purposes.

This is a striking development. A growing number of landlords and building owners are inserting IoT/smart technology into people’s homes, whether residents want it or not, and regardless of how secure it may or may not be.

While I accept I may be sounding like a broken record, these concerns are valid. Perhaps, just perhaps, privacy isn’t quite as dead as some would like to think. Error rates, technical glitches, exploitation of certain communities and using them as guinea pigs for emerging technology are all listed as reasons for the great United States LFR pushback of 2019.

All around the world: China

China is already a place deeply wedded to multiple tracking/surveillance systems.

There are 170 million CCTV cameras currently in China, with plans to add an additional 400 million between 2018 and 2021. This system is intended to be matched with facial recognition technology tied to multiple daily activities—everything from getting toilet roll in a public restroom to opening doors. Looping it all together will be 190 million identity cards, with an intended facial recognition accuracy rate of 90 percent.

People are also attempting to use “hyper realistic face molds” to bypass biometric authentication payment systems. There’s certainly no end of innovation taking place from both government and the population at large.

Hyper-realistic face molds capable of tricking face recognition payment authentication systems. High chance of being outlawed in China I feel (subtitles mine)

— Matthew Brennan (@mbrennanchina) 5 August 2019

Hong Kong

Hong Kong has already experienced a few run-ins with biometrics and facial technology, but mostly for promotional/marketing purposes. For example, in 2015, a campaign designed to raise awareness of littering across the region made use of DNA and technology produced in the US to shame litterbugs. Taking samples from rubbish found in the streets, they extracted DNA and produced facial reconstructions. Those face mockups were placed on billboards across Hong Kong in high traffic areas and places where the litter was originally recovered.

Mileage will vary drastically on how accurate these images were because, as has been noted, “DNA alone can only produce a high probability of what someone looks like,” and the idea was to generate debate, not point fingers.

All the same, wind forward a few years and the tech is being used to dispense toilet paper and shame jaywalkers. More seriously, we’re faced with daily protests in Hong Kong over the proposed extradition bill. With the ability to protest safely at the forefront of people’s minds, facial recognition technology steps up to the plate. Sadly, all it manages to achieve is to make the whole process even more fraught than it already is.

Protestors cover their faces, and phone owners disable facial recognition login technology. Police remove identification badges, so people on Telegram channels share personal information about officers and their families. Riot police carry cameras on poles because wall-mounted devices are hampered by laser pens and spray paint.

These lasers Hong Kong protesters are pointing at riot police through billowing tear gas, it’s like something out of a sci-fi movie. #AntiELAB

— Alejandro Alvarez (@aletweetsnews) 28 July 2019

Rules and (bending) regulations

Hong Kong itself has a strict set of rules for Automatic Facial Recognition. One protestor attempted to make a home-brew facial recognition system using online photos of police officers. The project was eventually shelved because of lack of time, but the escalation of recognition tech development by a regular resident is quite unique.

This may all sound a little bit out there or over the top. Even so, with 1,000 rounds of tear gas being fired alongside hundreds of rubber bullets, protestors aren’t taking chances. For now, we’re getting a bird’s-eye view of what it would look like if LFR were placed front-and-center in a battle between government oversight and civil rights. Whether it tips the balance one way or the other remains to be seen.

Watching…and waiting

Slow, relentless legal rumblings in the UK are one thing. Cities embracing or rejecting the technology in the US are quite another, especially when stances range from organizations and policy-makers all the way down to individual housing blocks. On the opposite side of the spectrum, seeing LFR in the Hong Kong protests is an alarming insight into where the state of biometrics and facial recognition could lead if concerns aren’t addressed head on before implementation.

It seems technology, as it so often does, has raced far ahead of our ability to define its ethical use.

The question is: How do we catch up?

The post Facial recognition technology: force for good or privacy threat? appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Backdoors are a security vulnerability

Malware Bytes Security - Fri, 08/09/2019 - 12:10pm

Last month, US Attorney General William Barr resurrected a government appeal to technology companies: Provide law enforcement with an infallible, “secure” method to access, unscramble, and read encrypted data stored on devices and sent across secure messaging services.

Barr asked, in more accurate, yet unspoken terms, for technology companies to develop encryption backdoors to their own services and products. Refusing to endorse any single implementation strategy, the Attorney General instead put the responsibility on cybersecurity researchers and technologists.  

“We are confident that there are technical solutions that will allow lawful access to encrypted data and communications by law enforcement without materially weakening the security provided by encryption,” Attorney General Barr said.

Cybersecurity researchers, to put it lightly, disagreed. To many, the idea of installing backdoors into encryption is antithetical to encryption’s very purpose—security.

Matt Blaze, cybersecurity researcher and University of Pennsylvania Distributed Systems Lab director, pushed back against the Attorney General’s remarks.

“As someone who’s been working on securing the ‘net for going on three decades now, having to repeatedly engage with this ‘why can’t you just weaken the one tool you have that actually works’ nonsense is utterly exhausting,” Blaze wrote on Twitter. He continued:

“And yes, I understand why law enforcement wants this. They have real, important problems too, and a magic decryption wand would surely help them if one could exist. But so would time travel, teleportation, and invisibility cloaks. Let’s stick to the reality-based world.”

Blaze was joined by a chorus of other cybersecurity researchers online, including Johns Hopkins University associate professor Matthew Green, who said plainly: “there is no safe backdoor solution on the table.”

The problem with backdoors is known—any alternate channel devoted to access by one party will undoubtedly be discovered, accessed, and abused by another. Cybersecurity researchers have repeatedly argued for years that, when it comes to encryption technology, the risk of weakening the security of countless individuals is too high.
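
The researchers’ core objection can be sketched in a few lines of toy code. This is a deliberately simplified one-time-pad illustration, not a real protocol: the point is that an escrowed “lawful access” key is mathematically indistinguishable from the user’s own key, so whoever obtains it can decrypt everything.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad cipher; an illustration only, not a real protocol."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
user_key = secrets.token_bytes(len(message))
ciphertext = xor(message, user_key)

# A "backdoor" is just an escrowed copy of the key. Anyone who gets
# that copy -- law enforcement, an insider, or a thief -- decrypts
# equally well. The math cannot tell them apart.
escrowed_copy = user_key
recovered = xor(ciphertext, escrowed_copy)
print(recovered)
```

The cipher has no way to check the intentions of whoever holds `escrowed_copy`, which is exactly why a channel built for one party ends up usable by any party.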

Encryption today

In 2014, Apple pushed privacy to a new standard. With the launch of its iOS 8 mobile operating system that year, no longer would the company be able to access the encrypted data stored on its consumer devices. If the company did not have the passcode to a device’s lock screen, it simply could not access the contents of the device.

“On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode,” the company said.

The same standard holds today for iOS devices, including the latest iPhone models. Data that lives on a device is encrypted by default, and any attempts to access that data require the device’s passcode. For Android devices, most users can choose to encrypt their locally-stored data, but the feature is not turned on by default.

Within two years of the iOS 8 launch, Apple had a fight on its hands.

Following the 2015 terrorist shooting in San Bernardino, Apple hit an impasse with the FBI, which was investigating the attack. Apple said it was unable to access the messages sent on an iPhone 5C device that was owned by one of the attackers, and Apple also refused to build a version of its mobile operating system that would allow law enforcement to access the phone.

Though the FBI eventually relied on a third-party contractor to crack into the iPhone 5C, since then, numerous messaging apps for iOS and Android have provided users with end-to-end encryption that locks even third-party companies out from accessing sent messages and conversations.

Signal, WhatsApp, and iMessage all provide this feature to users.

Upset by their inability to access potentially vital evidence for criminal investigations, the federal government has, for years, pushed a campaign to convince tech companies to build backdoors that will, allegedly, only be used by law enforcement agencies.

The problem, cybersecurity researchers said, is that those backdoors do not stay reserved for their intended use.

Backdoor breakdown

In 1993, President Bill Clinton’s Administration proposed a technical plan to monitor Americans’ conversations. Installed within government communications networks would be devices called “Clipper Chips,” which, if used properly, would only allow law enforcement agencies to listen in on certain phone calls.

But there were problems, as revealed by Blaze (the same cybersecurity researcher who criticized Attorney General Barr’s comments last month).  

In a lengthy analysis of the Clipper Chip system, Blaze found glaring vulnerabilities, such that the actual, built-in backdoor access could be circumvented.   

By 1996, adoption of the Clipper Chip was abandoned.

Years later, cybersecurity researchers witnessed other backdoor failures, and not just in encryption.

In 2010, the cybersecurity expert Steven Bellovin—who helped Blaze on his Clipper Chip analysis—warned readers of a fiasco in Greece in 2005, in which a hacker took advantage of a mechanism that was supposed to only be used by police.

“In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called ‘lawful intercept’ mechanisms in the switch—that is, the features designed to permit the police to wiretap calls easily—was abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister’s,” Bellovin wrote. “This attack would not have been possible if the vendor hadn’t written the lawful intercept code.”

In 2010, cybersecurity researcher Bruce Schneier placed blame on Google for suffering a breach from reported Chinese hackers who were looking to see which of their government’s agents were under surveillance from US intelligence.

According to Schneier, the Chinese hackers were able to access sensitive emails because of a fatal flaw by Google—the company put a backdoor into its email service.

“In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts,” Schneier said. “This feature is what the Chinese hackers exploited to gain access.”

Interestingly, the insecurity of backdoors is not a problem reserved for the cybersecurity world.

In 2014, The Washington Post ran a story about where US travelers’ luggage goes once it gets checked into the airport. More than 10 years earlier, the Transportation Security Administration had convinced luggage makers to install a new kind of lock on consumer bags—one that could be unlocked through a physical backdoor, accessible by using one of seven master keys which only TSA agents were supposed to own. That Washington Post story, though, revealed a close-up photograph of all seven keys.

Within a year, that photograph of the keys had been analyzed and converted into 3D printing files that were quickly shared online. The keys had leaked, and the security of nearly every single US luggage bag had been compromised. All it took was a single human error.

Worth the risk?

Attorney General Barr’s comments last month are part of a long-standing tradition in America, in which a representative of the Department of Justice (last year it was then-Deputy Attorney General Rod Rosenstein) makes a public appeal to technology companies, asking them to install backdoors as a means of preventing potential crime.

The arguments on this have lasted literal decades, invoking questions of the First Amendment, national security, and the right to privacy. That argument will continue, as it has today, but encryption may pick up a few surprising defenders along the way.

On July 23 on Twitter, the chief marketing officer of a company called SonicWall posted a link to a TechCrunch article about the Attorney General’s recent comments. The CMO commented on the piece:

“US attorney general #WilliamBarr says Americans should accept security risks of #encryption #backdoors”

Michael Hayden, the former director of the National Security Agency—the very agency responsible for mass surveillance around the world—replied:

“Not really. And I was the director of national security agency.”

The fight to install backdoors is a game of cat-and-mouse: The government will, for the most part, want its own way to decrypt data when investigating crimes, and technologists will push back on that idea, calling it dangerous and risky. But as more major companies take the same stand as Apple—designing and building an incapability to retrieve users’ data—the public might slowly warm up to the idea, and value, of truly secure encryption.

The post Backdoors are a security vulnerability appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Labs quarterly report finds ransomware’s gone rampant against businesses

Malware Bytes Security - Thu, 08/08/2019 - 10:00am

Ransomware’s back—so much so that we created an entire report on it.

For 10 quarters, we’ve reported on cybercrime tactics and techniques, covering a wide range of threats lodged against consumers and businesses, as observed through our product telemetry, honeypots, and threat intelligence. We’ve looked at dangerous Trojans such as Emotet and TrickBot, the explosion and subsequent downfall of cryptomining, trends in Mac and Android malware, and everything in between.

But this quarter, we noticed one threat dominating the landscape so much that it deserved its own hard look over a longer period than a single quarter. Ransomware, which many researchers have noted took a long breather after its 2016 and 2017 heyday, is back in a big way—targeting businesses with fierce determination, custom code, and brute force.

Over the last year, we’ve witnessed an almost constant increase in business detections of ransomware, rising a shocking 365 percent from Q2 2018 to Q2 2019.

Therefore, this quarter, our Cybercrime Tactics and Techniques report is a full ransomware retrospective, looking at the top families causing the most damage for consumers, businesses, regions, countries, and even specific US states. We examine increases in attacks lodged against cities, healthcare organizations, and schools, as well as tactics for distribution that are most popular today. We also look at ransomware’s tactical shift from mass blanket campaigns against consumers to targeted attacks on organizations.

To dig into the full report, including our predictions for ransomware of the future, download the Cybercrime Tactics and Techniques: Ransomware Retrospective here.

The post Labs quarterly report finds ransomware’s gone rampant against businesses appeared first on Malwarebytes Labs.

Categories: Malware Bytes

8 ways to improve security on smart home devices

Malware Bytes Security - Wed, 08/07/2019 - 11:00am

Every so often, a news story breaks that hackers have made their way into a smart home device and stolen personal data. Or that vulnerabilities in smart tech have been discovered that allow their producers (or other cybercriminals) to spy on customers. We’ve seen it play out over and over with smart home assistants and other Internet of Things (IoT) devices, yet sales numbers for these items continue to climb.

Let’s face it: No matter how often we warn about the security concerns with smart home devices, they do make life more convenient—or at the very least, are a lot of fun to play with. It’s pretty clear this technology isn’t going away. So how can those who’ve embraced smart home technology do so while staying as secure as possible?

Here are eight easy ways to tighten up security on smart home devices so that users are as protected as possible while using the new technologies they love.

1. Switch up your passwords

Most smart home devices ship with default passwords. Some companies require that you change the default password before integrating the technology into your home—but not all. The first thing users can do to ensure hackers can’t brute force their way into their smart home device is change up the password, and make it something that is unique to that device. Once a hacker finds out one password, they’ll try to use it on every other account.

A few ways to create less hackable passwords:

  • Making them longer than eight characters
  • Creating passwords that are unrelated to pets, kids, birthdays, or other obvious combinations
  • Using a password manager so you don’t have to remember 27 different passwords
  • Using a password generator to create random combinations
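
For the last two points, a password generator doesn’t need to be anything exotic. As a minimal sketch, Python’s standard-library secrets module can produce strong random passwords (a dedicated password manager remains the better tool for storing them):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation
    using the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

Because `secrets` draws from the operating system’s secure random source, the result is resistant to the guessing patterns that make pet names and birthdays so weak.
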

2. Enable two-step authentication

Many online sites and smart devices now allow users to opt into two-step authentication, a process that requires a second form of verification before allowing someone access to your account.

If you use a Chromecast or any other Google device, you can turn on this verification and receive email alerts. While you may be the only person to try logging into your account, it helps to know you’ll be notified if someone does try to hack in and get your information.

3. Disable unused features

Smart home tech, like the Amazon Echo or Google Nest, have made headlines for invasively recording users without their knowledge or shipping with unknown features, such as a microphone, that are later enabled. This makes trusting those devices implicitly a bit of a hazard.

While your voice assistant won’t be recording you all the time, it can be triggered by words used in a conversation between two people. Check your home assistant’s logs and delete your voice recordings if you find any you don’t approve of.

You can always turn off the voice control features on these devices. While their purpose is to respond to vocal commands, they can also be accessed through an app, remotely, or through a website instead.

4. Upgrade your devices

When was the last time you purchased smart tech for your home? If it was long enough ago that software updates are no longer compatible with the operating system, it might be time to upgrade.

Upgraded tech will always have new features, fewer malfunctions of previously cutting-edge but now standard innovations, plus more advanced ways to secure the device that may not be available on earlier models.

Another benefit of upgrading is that there are far more players on the market today than there were just a couple years ago. For example, many people use smart plugs to control their electricity usage, but not every brand considers security a top priority. Keep an eye out for those that have received positive reviews from tech and science publications. They’ll have fewer security issues than older models.

5. Check for software updates

Don’t worry if you don’t have the money for a big smart home upgrade right now. Many times, keeping software updated will do the trick—especially because security issues are most often fixed in periodic software updates, and not necessarily addressed in brand-new releases. Each new version of software released includes not only new functionality, but fixes to bugs and security patches.

These patches work to plug any known vulnerabilities in the smart device that allow for hackers to drop malware or steal valuable data. To make sure your device’s software is always updated, go into your settings to make sure to select automatic software updates. If that’s not possible, set reminders to check for updates yourself at least once per month.

6. Use a VPN

If you have concerns about the security of your ISP’s Wi-Fi network, you might consider using a VPN. A virtual private network (VPN) creates an encrypted tunnel for your traffic over any Internet connection, including public ones where you’re most at risk.

A VPN keeps your Internet protocol (IP) address from being discovered. This prevents hackers from knowing your location and makes your Internet activity much harder to trace.

Perhaps the most important benefit of using a VPN is that it creates secured, encrypted connections. No matter where you access Wi-Fi—say if you wanted to turn on the air-conditioning at home from the airport—a VPN keeps that traffic secure.

7. Monitor your data

Are your devices sending you reports on energy usage or the top songs you played at home this month? Are you storing or backing up smart home data on the cloud? If not, where does that data go and how is it secured?

Smart home devices may not have easy instructions for determining whether data produced from their usage is stored on the cloud or on private, corporate-facing servers. Rest assured that, whether it’s visible or not, smart device companies are collecting data, be it to improve their marketing efforts or simply to demonstrate their own value.

So how can users monitor how their data is collected, stored, and transmitted? Some devices may allow you to back up info to the cloud in settings. If so, you should create strong passwords with two-factor authentication in order to access that data and protect it from hackers. If not, you might need to dig through a device’s EULA or even contact the company to find out how they store the data at rest and in transit, and whether that data is encrypted to ensure anonymity.

If you’d prefer not to let your smart tech back up information to the cloud, you can often manually turn this off in settings. The question still remains: What happens to your data if it’s not in the cloud? That’s where poking around the company’s website or calling them to learn how personal information is stored can hopefully calm your fears.

8. Limit smart home device usage

The only way to guarantee your privacy and security at home is to avoid using devices that connect to the Internet—including your phone. Obviously, in today’s world, that’s a difficult task. Therefore, the second-best option is to consider which devices are absolutely necessary for work, pleasure, and convenience, and slim down the list of smart-enabled devices.

Perhaps it makes sense for an energy-conscious person to use a Nest device to regulate temperatures, but do they need Internet-connected smoke detectors? Maybe some folks couldn’t live without streaming, but could get by using a traditional key over a smart lock.

There’s no such thing as 100 percent protection from cybercrime—even if you don’t use the Internet. So if you want to embrace the wonders of smart home technology, be sure you’re smart about how and when you use it.

The post 8 ways to improve security on smart home devices appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (July 29 – August 4)

Malware Bytes Security - Mon, 08/05/2019 - 11:44am

Last week on Malwarebytes Labs we discussed the security and privacy changes in Android Q, how to get your Equifax money and stay safe doing it, and we looked at the strategy of getting a board of directors to invest in government cybersecurity. We also reviewed how a Capital One breach exposed over 100 million credit card applications, analyzed the exploit kit activity in the summer of 2019, and warned users about a QR code scam that can clean out your bank account.

The busy week in security continued with looks at Magecart and others intensifying web skimming, ATM attacks and fraud, and an examination of the Lord Exploit Kit.

Other cybersecurity news
  • The Georgia State Patrol was reportedly the target of a July 26 ransomware attack that has necessitated the precautionary shutdown of its servers and network. (Source: SC Magazine)
  • Houston County Schools in Alabama delayed the school year’s opening scheduled for August 1st due to a malware attack. (Source: Security Affairs)
  • Over 95% of the 1,600 vulnerabilities discovered by Google’s Project Zero were fixed within 90 days. (Source: Techspot)
  • Researchers who discovered several severe vulnerabilities now uncovered two more flaws that could allow attackers to hack WPA3 protected WiFi passwords. (Source: The Hacker News)
  • Germany’s data protection commissioner investigates revelations that Google contract-workers were listening to recordings made via smart speakers. (Source: The Register)
  • Experts tend to recommend anti-malware protection for all mobile device users and platforms, but 47% of Android Anti-Malware apps are flawed. (Source: DarkReading)
  • Many companies don’t know the depth of their IoT-related risk exposure. (Source: Help Net Security)
  • Apple’s Siri follows Amazon Alexa and Google Home in facing backlash for its data retention policies. (Source: Threatpost)
  • There has been a 92% increase in the total number of vulnerabilities reported in the last year, while the average payout per vulnerability increased this year by 83%. (Source: InfoSecurity magazine)
  • Multiple German companies were off to a rough start last week when a phishing campaign pushing a data-wiping malware dubbed GermanWiper targeted them and asked for a ransom. (Source: BleepingComputer)

Stay safe, everyone!

The post A week in security (July 29 – August 4) appeared first on Malwarebytes Labs.

Categories: Malware Bytes

How brain-machine interface (BMI) technology could create an Internet of Thoughts

Malware Bytes Security - Mon, 08/05/2019 - 11:00am

She plugged the extension for car transportation into the brain-machine interface connectors at the right side of her head, and off she went. The traffic was relatively slow, so there was no need to stop working. She answered a few more emails, then unplugged her work extension. Weekend mode could now be initiated. “How about we play a game?” her AI BrainPal companion, Phoenix, suggested. “Or would you rather sit back and enjoy the ride?”

Too futuristic? A Scalzi rip-off? Sci-fi that’s really more of a fantasy? Sure, it’s not technology that’s ready to ship, but driving a car while writing emails or playing video games—all while being physically paralyzed—is a future not-too-far-off. And no, we’re not talking about self-driving cars.

Brain-machine interface (BMI) technology is a field in which dedicated, big-name players are looking to develop a wide variety of applications for establishing a direct communication pathway between the brain (wired or enhanced) and an external device. Some of these players are primarily interested in healthcare-centric implementations, such as enabling paralyzed humans to use a computer, but for others, improving the lives of the disabled is simply a short-term goal on the road to much broader and more far-reaching accomplishments.

One such application of BMI, for example, is the development of a Human Brain/Cloud Interface (B/CI), which would enable people to directly access information from the Internet, store their learnings on the cloud, and work together with other connected brains, whether they are human or artificial. B/CI, often referred to as the Internet of Thoughts, imagines a world where instant access to information is possible without the use of external machinery, such as desktop computers or Internet cables. Search and retrieval of information will be initiated by thought patterns alone.

So exactly how does brain-machine interface technology work? And how far off are we from seeing it applied in the real world? We take a look at where the technology stands today, our top concerns—both for security and ethical reasons—and how BMI could be implemented for optimal results in the future.

Brain-machine interface technology today

At some level, brain-machine interface technology already exists today.

For example, there are hearing aids that take over the function of the ears for people that are deaf or hard of hearing. These hearing aids connect to the nerves that transmit information to the brain, helping people translate sound they’d otherwise be unable to process. There are also several methods that allow mute or paralyzed people to communicate with others, although those methods are still crude and slow.

However, organizations are moving quickly to transform BMI technology from theoretical to practical. Many of the methods we’ll discuss below have already been tested on animals and are waiting for approval to be tested on humans.

One company working on technology to link the brain to a computer is Elon Musk’s startup Neuralink, which expects to be testing a system that feeds thousands of electrical probes into the human brain around 2020. Neuralink’s initial goal is to help people deal with brain and spinal cord injuries or congenital defects; such a link would, for example, enable paralyzed patients to use an exoskeleton. But the long-term goal is to accomplish a brain-to-machine interface that could achieve a symbiosis of human and artificial intelligence.

Working from a different angle are companies like Intel, IBM, and Samsung. Intel is trying to mimic the functionality of a brain by using neuromorphic engineering. This means they are building machines that work in the same way a biological brain works. Where traditional computing works by running numbers through an optimized pipeline, neuromorphic hardware performs calculations using artificial “neurons” that communicate with each other.

These are two wildly different techniques that are both optimized for different methods of computing. Neural networks, for example, excel at recognizing visual objects, so they would be better at facial recognition and image searches. Neuromorphic design is still in the research phase, but this and similar projects from competitors such as IBM and Samsung should break ground for eventual commoditization and commercial use. These projects might be able to provide a faster and more efficient interface between a real brain and a binary computer.

Using a technique called “neuralnanorobotics,” neuroscientists expect far more advanced connections to become possible within the coming decades. While the technology is mainly being developed to facilitate accurate diagnoses and, eventually, cures for the hundreds of different conditions that affect the human brain, it also offers options in a more technological direction.

The human brain is an amazing computer

At a possible transmission speed of up to ∼6 × 10^16 bits per second, the human brain is able to relay an incredible amount of information extremely fast. To compare, that is 60,000 terabits per second, or 7,500 terabytes per second, far faster than the fastest stable long-distance Internet connection recorded to date (1.6 terabits per second). This means that in our coveted brain-to-Cloud connection, the Internet would be the speed-limiting factor.
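
A quick sanity check of those figures, sketched in Python (the brain and link numbers are the ones cited above):

```python
# Back-of-the-envelope check of the bandwidth comparison above.
# Figures taken from the article: brain throughput ~6e16 bits/s,
# fastest stable long-distance internet link 1.6 Tbit/s.

BRAIN_BITS_PER_SEC = 6e16        # ~6 x 10^16 bits per second
INTERNET_TBITS_PER_SEC = 1.6     # record long-distance link, Tbit/s

brain_tbits = BRAIN_BITS_PER_SEC / 1e12   # convert bits/s -> terabits/s
brain_tbytes = brain_tbits / 8            # 8 bits per byte -> terabytes/s
ratio = brain_tbits / INTERNET_TBITS_PER_SEC

print(f"{brain_tbits:,.0f} Tbit/s = {brain_tbytes:,.0f} TB/s "
      f"(~{ratio:,.0f}x the fastest internet link)")
```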

However, it’s most likely that the devices we are going to need to transform one kind of data into another will determine the speed at which BMI technology operates.

Beyond speed, there are other limiting factors that result in a technological mismatch for pairing brains and computers. Neuromorphic engineering is based on aligning the differences between the computers we are used to working with and biological brains. Neuromorphic engineers try to build computers that resemble a more human-like brain with a special type of chip. Of course, it is possible to mimic the functioning of the brain by using regular chips and special software, but this process is inefficient.

The main difference between logical and biological computers is in the number of possible connections. Simply put, if you want to match the thousands of possible connections that neurons can make, it takes a huge number of transistors. Enter: specially-crafted chips whose architecture resembles the human brain.

Yet, for all the brain’s marvels, speed, and thousands of neurons making connections and relaying information, we are human after all, which means multiple flawed outcomes can result from those connections.

Know that frustrating feeling when you see a familiar face you’ve passed by hundreds of times, but can’t remember her name? Or hear a tune you’ve hummed endlessly for weeks, but don’t remember the lyrics? The number of connections a neuron can make does not always lead directly to the right answer. Sometimes, we are distracted or overwrought or we simply cannot retrieve known information from whichever location it’s been stored.

What if you could put a computer to work at such a moment? Computers do not forget information unless you delete it—and even then, it can sometimes be found. Now add the cloud and Internet connectivity, and suddenly, you’ve got an eidetic memory. It’d be like being able to Google something instantaneously in your head.

Should humans be wired to machines?

As we have shown, researchers are following several paths to determine applications for connecting our brains to the digital world, and they are considering the strengths and weaknesses of each as they attempt to achieve a symbiotic relationship. Whether it’s an exoskeleton that allows a paralyzed person to walk, Artificial Intelligence-powered computers that can ramp up on speed and visual capabilities, or connecting our brains to the Internet and the Cloud for storing and sharing information, the applications for BMI technology are nearly endless.

I’m sure this research can be of great benefit to handicapped people, enabling them to easier use appliances and devices, move around more freely, or communicate easier. And maybe one or more of these technologies could even bring relief to those suffering from mental health diseases or learning disabilities. In those cases, you will hear no argument from me, and I will applaud every step of progress made. But before I have my brain connected to the Internet, a lot of other requirements will have to be met.

For one, there are countless concerns about the ethical development of this technology. What is happening to the animals that are being tested? How would we determine the best way to move forward on testing humans? Is there a point of no return where once we hit a certain threshold, we lose control—and the computer or AI gains it? At which point do we stop and think: Okay, we know we can do this, but should we?

From a practical standpoint alone, there are some questions that need answering. For example, Bluetooth would be sufficient to control the medical applications, so why would we have to be hardwired to the Internet?

What is stopping brain-machine interface technology development today?

From where we are now in technology’s development, we see a few hurdles that will need to be jumped to move these techniques into a fully functional BMI. At a high level:

  • Progress needs to be made on developing smaller specialized computer chips that are capable of a multitude of connections. Remember, the first computers were the size of a whole room. Now, they fit in our pocket.
  • The research conducted in these fields will undoubtedly teach us more about the human brain, but there is so much we still don’t know. Will what we uncover about the brain be enough to successfully connect it to a machine? Or will what we don’t know hinder us or put us in danger in the end?
  • Approval from regulatory bodies like the FDA (Food and Drug Administration), lawmakers, and human rights organizations will be necessary to start testing on humans and expanding development into commercially viable products.

But there are more reasons that would stop me from using a BMI, even when the above points have been addressed:

  • Not everything you find on the Internet is true, so we would need some type of filter beyond search ranking to determine which information gets “downloaded” into people’s brains. How would we do so objectively? How could we simplify this without looking at a screen of search results, headlines, sources, and meta descriptions? Where does advertising come into play?
  • The combination of healthcare and cybersecurity has never been one that favors the security side. How will BMI integrate with hospital systems that use legacy software? What are the implications of someone actually hacking your brain?
  • Privacy will be a huge issue, since a cloud-connected brain could accidentally transmit information we’d rather keep to ourselves. I cannot control my thoughts, but I do like to control which ones I speak out loud, and which are published on the Internet.
  • The good old fear of the unknown, I will readily admit. We just don’t know what we don’t know. But who knows, maybe someday it will be as normal as having a smartphone.

What could stop us a few decades from now?

Let’s suppose we are able to work out all the high-level issues with privacy, security, filtering fact from fiction, and even learning all there is to know about the human brain. In a couple decades, we might have BMI in a place where we can conceivably release it to the public. There would still be kinks to iron out before this technology is ready for mass adoption. They include:

  • The cost of development will weigh heavily on the first use-cases. That means, quite simply, that the first people with access to BMI tech will likely be those with considerable wealth. With a widening gap between the haves and the have-nots today, how much further will this divide civilization when the top 1 percent not only control the majority of money, land, and media on the planet, but now they have super-powered brains?! Only mass production will make this sort of technology available to a larger part of the population.
  • The early adopters would be equipped with a super power, for all intents and purposes. Imagine interacting with a person that actually has all of human knowledge readily available. What will this do to working relationships? Friendships, marriages, or families? Outside of the economic imbalance mentioned above, what sort of sociological impact will result from BMI being unleashed?
  • Physical dangers are inherent when we directly connect devices of any sort to our bodies, and especially to our fragile brains. What are the possible effects of a discharge of static electricity directly into our brain?

Security concerns

Given that the path to the Internet of Thoughts seems destined to include medical research, discoveries, and applications, we fear that security will be implemented as an afterthought, at best. Healthcare has struggled as an industry to keep up with cybersecurity, with hospital devices or computers often running legacy software or institutions leaking sensitive patient data on the open Internet.

Where we already see healthcare technologies with the capability to improve quality of life (or even save lives) hurried through the development process without properly implementing security best practices, we shiver at the prospect of inheriting these poor programming and implementation habits when we start creating connections between the brain and the Internet.

Consider the implications if cybercriminals could hack into a high-ranking official’s BMI because of security vulnerabilities left unattended. One missed update could mean national security is now compromised. Imagine what an infected BMI might look like. Could criminals launch zombie-like attacks against communities, controlling people’s actions and words? Could they hold important information like passwords or your child’s birthday for ransom, locking you out of those memories forever if you don’t pay up? Could they extort celebrities or politicians for their private thoughts?

As a minor note and at the very least, I would certainly recommend using short-range connections like Bluetooth to develop medical applications for brain-machine interface. That might improve the chances of establishing a secure B/CI protocol for applications that require an Internet connection.

However, the main concern with BMI technology is not whether we’re capable of producing it, or even which applications of BMI deserve our attention. It’s that the Internet of Thoughts will become a dangerous and dark experiment that forever alters the way we humans communicate and interact. How can we be civil when our peers have access to our very thoughts—abstract or grim or judgmental or otherwise? What happens when those who cannot afford BMI attempt to compete with those that do? Will they simply get left behind?

When we connect our brain to a machine, are we even still human?

Questions to consider as this fairly new technology gains traction. In the meantime, stay safe everyone!

The post How brain-machine interface (BMI) technology could create an Internet of Thoughts appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Say hello to Lord Exploit Kit

Malware Bytes Security - Fri, 08/02/2019 - 2:15pm

Just as we had wrapped up our summer review of exploit kits, a new player entered the scene. Lord EK, as it is calling itself, was caught by Virus Bulletin‘s Adrian Luca while replaying malvertising chains.

In this blog post, we do a quick review of this exploit kit based on what we have collected so far. Malwarebytes users were already protected against this attack.

Exploit kit or not?

Lately there has been a trend of what we call pseudo-exploit kits, where a threat actor essentially grabs a proof of concept for an Internet Explorer or Flash Player vulnerability and crafts a very basic page to load it. It is probably more accurate to describe these as drive-by download attacks, rather than exploit kits.

With an exploit kit we expect to see certain feature sets that include:

  • a landing page that fingerprints the machine to identify client side vulnerabilities
  • dynamic URI patterns and domain name rotation
  • one or more exploits for the browser or one of its plugins
  • logging of the victim’s IP address
  • a payload that may change over time and that may be geo-specific
Quick glance at Lord EK

The first tweet from @adrian__luca about Lord EK came out in the morning of August 1st and shows interesting elements. It is part of a malvertising chain via the PopCash ad network and uses a compromised site to redirect to a landing page.

We can see a very rudimentary landing page in clear text with a comment at the top left by its author that says: <!– Lord EK – Landing page –>. By the time we checked it, it had been obfuscated but remained essentially the same.

There is a function that checks for the presence and version of the Flash Player, which will ultimately be used to push CVE-2018-15982. The second part of the landing page collects information that includes the Flash version and other network attributes about the victim.

Interesting URI patterns

One thing we immediately noticed was how the exploit kit’s URLs were unusual. We see the threat actor is using the ngrok service to craft custom hostnames (we informed ngrok of this abuse of their service by filing a report).

This is rather unusual, at least from what we have observed with exploit kits in recent history. As per ngrok’s documentation, it exposes a local server to the public internet. The free version of ngrok generates random subdomains, which is almost perfect (and reminds us of Domain Shadowing) for the exploit kit author.
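
For defenders, those throwaway subdomains are also a detection opportunity. A minimal, illustrative sketch of flagging ngrok-style hostnames in proxy logs — the regex is our assumption about the free-tier naming scheme, not a complete or official signature:

```python
import re

# Illustrative detector for ngrok-style tunnel hostnames in proxy logs.
# Assumption: free-tier ngrok hostnames look like a long alphanumeric
# label directly under ngrok.io (e.g. 3acd0e80.ngrok.io).
NGROK_HOST = re.compile(r"^[a-z0-9]{8,}\.ngrok\.io$", re.IGNORECASE)

def flag_ngrok_hosts(hostnames):
    """Return the subset of hostnames matching the tunnel pattern."""
    return [h for h in hostnames if NGROK_HOST.match(h)]

hosts = ["3acd0e80.ngrok.io", "www.example.com", "cdn.ngrok.io.evil.test"]
print(flag_ngrok_hosts(hosts))  # → ['3acd0e80.ngrok.io']
```

In practice you would feed this from DNS or proxy logs and treat a hit as a lead to investigate, not proof of compromise — ngrok has plenty of legitimate uses.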

Flash exploit and payload

At the time of writing, Lord EK only goes for Flash Player, and not Internet Explorer vulnerabilities. Nao_Sec quickly studied the exploit and pointed out it is targeting CVE-2018-15982.

After exploiting the vulnerability, it launches shellcode to download and execute its payload:

The initial payload was njRAT; however, the threat actors swapped it the next day for the ERIS ransomware, as spotted by @tkanalyst.

We also noticed another change where after exploitation happens, the exploit kit redirects the victim to the Google home page. This is a behavior that was previously noted with the Spelevo exploit kit.

Active development

It is still too early to say whether this exploit kit will stick around and make a name for itself. However, it is clear that its author is actively tweaking it.

This comes at a time when exploit kits are full of surprises and regaining attention among the research community. Even though the vulnerabilities for Internet Explorer and Flash Player have been patched and both have a very small market share, usage of the old Microsoft browser continues in many countries.

Brad Duncan from Malware Traffic Analysis has posted some traffic captures for those interested in studying this exploit kit.

Indicators of Compromise

Compromised site


Network fingerprinting


Lord EK URI patterns

Eris ransomware


The post Say hello to Lord Exploit Kit appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Capital One breach exposes over 100 million credit card applications

Malware Bytes Security - Fri, 08/02/2019 - 12:00pm

Just as we were wrapping up the aftermath of the Equifax breach—how was that already two years ago?—we are confronted with yet another breach of about the same order of magnitude.

Capital One was affected by a data breach in March. The hacker gained access to information related to credit card applications from 2005 to early 2019 for consumers and small businesses. According to the bank, the breach affected around 100 million people in the United States and about 6 million in Canada.

What’s very different in this breach is that a suspect has already been apprehended. On top of that, the suspect admitted she acted illegally and disclosed the method she used to get hold of the data. From the behavior of the suspect you would almost assume she wanted to get caught. She put forth only a minimal effort to hide her identity when she talked about having access to the data, almost bragging online about how much she had been able to copy.

What happened?

A former software engineer who had previously worked at Amazon Web Services (AWS), the cloud hosting company Capital One uses, stored the information she obtained from the breach in a publicly accessible repository. From the court filings, we may conclude that Paige Thompson combined her hands-on knowledge of how AWS works with the exploitation of a misconfigured web application firewall. As a result, she was able to copy large amounts of data from Capital One’s AWS buckets. She posted about having this information on several platforms, which led to someone reporting the fact to Capital One. This led to the investigation of the breach and the arrest of the suspect.

How should Capital One customers proceed?

Capital One has promised to reach out to everyone potentially affected by the breach and to provide free credit monitoring and identity protection services. While Capital One stated that no log-in credentials were compromised, it wouldn’t hurt to change your password if you are a current customer or you recently applied for a credit card with the company. For other useful tips, you can read our blog post about staying safe in the aftermath of the Equifax breach. You will find a wealth of tips to stay out of the worst trouble. Also be wary of the usual scams that will spin off from this breach.

What can other companies learn from this incident?

While the vulnerability has been fixed, there are other lessons to be learned from this incident.

Even though it is impractical for companies the size of Capital One to run their own web services, we can ask ourselves whether all of this sensitive information needs to be stored in a place where we do not have full control. Companies like Capital One use these hosting services for scalability, redundancy, and protection. One of the perks is that employees all over the world can access, store, and retrieve any amount of data. This can also be a downside in cases of disgruntled employees or misconfigured Identity & Access Management (IAM) services. Anyone who can successfully impersonate an employee with access rights can use the same data for their own purposes. Amazon Elastic Compute Cloud (EC2) is a web-based service that allows businesses to run application programs in the AWS public cloud. When you run the AWS Command Line Interface from within an Amazon EC2 instance, you can simplify providing credentials to your commands. From the court filings, it looks as if this is where the vulnerability was exploited.
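
To illustrate the class of server-side flaw at play: a request-forwarding service that fails to reject link-local addresses can be tricked into fetching temporary credentials from the EC2 instance metadata endpoint (169.254.169.254). A hedged sketch of a basic guard follows — the helper name and policy are ours, and real defenses must also resolve hostnames and re-check targets after redirects:

```python
import ipaddress
from urllib.parse import urlparse

# 169.254.169.254 is the link-local address of the EC2 instance
# metadata service; the rest of 169.254.0.0/16 is link-local too,
# and loopback should never be a forwarding target either.
BLOCKED_NETS = [ipaddress.ip_network("169.254.0.0/16"),
                ipaddress.ip_network("127.0.0.0/8")]

def is_ssrf_risky(url):
    """Crude illustrative check: reject URLs whose host is a literal
    link-local or loopback IP. A real guard must also resolve DNS
    names and re-validate after every redirect."""
    host = urlparse(url).hostname
    try:
        addr = ipaddress.ip_address(host)
    except (ValueError, TypeError):
        return False  # not an IP literal; would need DNS resolution
    return any(addr in net for net in BLOCKED_NETS)

print(is_ssrf_risky("http://169.254.169.254/latest/meta-data/iam/"))  # True
```

AWS has since introduced IMDSv2, which requires a session token for metadata requests, precisely to blunt this style of attack.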

Companies using AWS and similar cloud hosting services should pay attention to:

  • IAM provisioning: Be restrictive when assigning IAM roles so access is limited to those that need it and taken away from those that no longer need it.
  • Instance metadata: Limit access to EC2 metadata as these can be abused to assume an IAM role with permissions that do not belong to the user.
  • Comprehensive monitoring: While monitoring is important for every server and instance that holds important data, it is imperative to apply extra consideration to those that are accessible via the internet. Alarms should have gone off as soon as Tor was used to access the EC2 instance.
  • Misconfigurations: If you do not have the in-house knowledge or want to doublecheck, there are professional services that can scan for misconfigured servers.
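
On the IAM provisioning point, here is a minimal sketch of the kind of automated check such a review might include — flagging Allow statements with wildcard actions or resources. The policy document is a made-up example, and a real audit would cover many more conditions:

```python
import json

def overly_permissive(policy):
    """Return Allow statements that grant wildcard actions or an
    unrestricted resource. Illustrative only: real audits also check
    NotAction, conditions, and resource ARN patterns."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if ("*" in actions
                or any(a.endswith(":*") for a in actions)
                or "*" in resources):
            flagged.append(stmt)
    return flagged

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}""")
print(len(overly_permissive(policy)))  # → 1
```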

Ironically, Capital One has released some very useful open source tools for AWS compliance and has proven to have the in-house knowledge. Capital One has always been more on the fintech side than most traditional banks, and they were early adopters of the cloud. So, the fact that this breach happened to them is rather worrying, as we would expect other banks to be even more vulnerable.

Stay posted

Even though we already know a lot more details about this data breach than usual, we will follow this one as it further unravels.

If you want to follow up directly with a few resources, you can click below:

Official Capital One statement

Paige Thompson Criminal Complaint

The post Capital One breach exposes over 100 million credit card applications appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Everything you need to know about ATM attacks and fraud: part 2

Malware Bytes Security - Fri, 08/02/2019 - 11:00am

This is the second and final installment of our two-part series on automated teller machine (ATM) attacks and fraud.

In part 1, we identified the reasons why ATMs are vulnerable—from inherent weaknesses of its frame to its software—and delved deep into two of the four kinds of attacks against them: terminal tampering and physical attacks.

Terminal tampering has many types, but it involves either physically manipulating components of the ATM or introducing other devices to it as part of the fraudulent scheme. Physical attacks, on the other hand, cause destruction to the ATM and to the building or surrounding area where the machine is situated.

We have also supplied guidelines for users—before, during, and after—that will help keep them safe when using the ATM.

For part 2, we’re going to focus on the final two types of attacks: logical attacks and the use of social engineering.

Logical ATM attacks

As ATMs are essentially computers, fraudsters can and do use software as part of a coordinated effort to gain access to an ATM’s computer along with its components or its financial institution’s (FI’s) network. They do this, firstly, to obtain cash; secondly, to retrieve sensitive data from the machine itself and from cards’ magnetic stripes or chips; and lastly, to intercept data they can use to conduct fraudulent transactions.

Enter logical attacks—a term synonymous with jackpotting or ATM cash-out attacks. Logical attacks involve the exploitation and manipulation of the ATM’s system using malware or another electronic device called a black box. Once cybercriminals gain control of the system, they direct it to essentially spew cash until the safe empties as if it were a slot machine.

The concept of “jackpotting” became mainstream after the late renowned security researcher Barnaby Jack presented and demoed his research on the subject at the Black Hat security conference in 2010. Many expected ATM jackpotting to become a real-world problem since then. And, indeed, it has—in the form of logical attacks.

In order for a logical attack to be successful, access to the ATM is needed. A simple way to do this is to use a tool, such as a drill, to make an opening to the casing so criminals can introduce another piece of hardware (a USB stick, for example) to deliver the payload. Some tools can also be used to pinpoint vulnerable points within the ATM’s frame or casing, such as an endoscope, which is a medical device with a tiny camera that is used to probe inside the human body.

If you think that logical attacks are too complex for the average cybercriminal, think again. For a substantial price, anyone with cash to spare can visit Dark Web forums and purchase ATM malware complete with easy how-to instructions. Because the less competent ATM fraudsters can use malware created and used by the professionals, the distinction between the two blurs.

Logical attack types

To date, there are two sub-categories of logical attacks fraudsters can carry out: malware-based attacks and black box attacks.

Malware-based attacks. As the name suggests, this kind of attack can use several different types of malware, including Ploutus, Anunak/Carbanak, Cutlet Maker, and SUCEFUL, which we’ll profile below. How they end up on the ATM’s computer or on its network is a matter we should all familiarize ourselves with.

Installed at the ATM’s PC:

  • Via a USB stick. Criminals load up a USB thumb drive with malware and then insert it into a USB port of the ATM’s computer. The port is either exposed to the public or behind a panel that one can easily remove or punch a hole through. As these ATM frames are neither sturdy nor secure enough to counter this type of physical tampering, infecting via USB and external hard drive will remain an effective attack vector. In a 2014 article, SecurityWeek covered an ATM fraud that successfully used a malware-laden USB drive.
  • Via an external hard drive or CD/DVD drive. The tactic is similar to the USB stick but with an external hard drive or bootable optical disk.
  • Via infecting the ATM computer’s own hard drive. The fraudsters either disconnect the ATM’s hard drive to replace it with an infected one or they remove the hard drive from its ATM, infect it with a Trojan, and then reinsert it.

Installed at the ATM’s network:

  • Via an insider. Fraudsters can coerce or team up with a bank employee with ill-intent against their employer to let them do the dirty work for them. The insider gets a cut of the cashed-out money.
  • Via social engineering. Fraudsters can use spear phishing to target certain employees in the bank to get them to open a malicious attachment. Once executed, the malware infects the entire financial institution’s network and its endpoints, which include ATMs. The ATM then becomes a slave machine. Attackers can send instructions directly to the slave machine for it to dispense money and have money mules collect.

    Note that as criminals are already inside the FI’s network, a new opportunity to make money opens its doors: They can now break into sensitive data locations to steal information and/or proprietary data that they can further abuse or sell in the underground market.

Installed via Man-in-the-Middle (MiTM) tactics:

  • Via fake updates. Malware could be introduced to ATM systems via a bogus software update, as explained by Benjamin Kunz-Mejri, CEO and founder of Vulnerability Lab, after he discovered (by accident) that ATMs in Germany publicly display sensitive system information during their software update process. In an interview, Kunz-Mejri said that fraudsters could potentially use the information to perform a MiTM attack to get inside the network of a local bank, run malware made to look like a legitimate software update, and then control the infected ATM.

Black box attacks. A black box is an electronic device—either another computer, mobile phone, tablet, or even a modified circuit board linked to a USB wire—that issues ATM commands at the fraudster’s bidding. The act of physically disconnecting the cash dispenser from the ATM computer to connect the black box bypasses the need for attackers to use a card or get authorization to confirm transactions. Off-premise retail ATMs are likely targets of this attack.

A black box attack could involve social engineering tactics, like dressing up as an ATM technician, to allay suspicions while the threat actor physically tampers with the ATM. At times, fraudsters use an endoscope to locate and disconnect the cash dispenser’s wire from the ATM computer and connect it to their black box. This device then issues commands to the dispenser to push out money.

As this type of attack does not use malware, a black box attack usually leaves little to no evidence—unless the fraudsters left behind the hardware they used, of course.

Experts have observed that as reports of black box attacks have dropped, malware attacks on ATMs are increasing.

ATM malware families

As mentioned in part 1, there are over 20 strains of known ATM malware. We’ve profiled four of those strains to give readers an overview of the diversity of malware families developed for ATM attacks. We’ve also included links to external references you can read in case you want to learn more.

Ploutus. This is a malware family of ATM backdoors that was first detected in 2013. Ploutus is specifically designed to force the ATM to dispense cash, not steal card holder information. An earlier variant was introduced to the ATM computer by inserting an infected boot disk into its CD-ROM drive. An external keyboard was also used, as the malware responds to commands executed by pressing certain function keys (the F1 to F12 keys on the keyboard). Newer versions also use mobile phones, are persistent, target the most common ATM operating systems, and can be tweaked to make them vendor-agnostic.

Daniel Regalado, principal security researcher for Zingbox, noted in a blog post that a modified Ploutus variant called Piolin was used in the first ATM jackpotting crimes in North America, and that the actors behind these attacks are not the same actors behind the jackpotting incidents in Latin America.

References on Ploutus:

Anunak/Carbanak. This advanced persistent malware was first encountered in the wild affecting Ukrainian and Russian banks. It’s a backdoor based on Carberp, a known information-stealing Trojan. Carbanak, however, was designed to siphon off data, perform espionage, and remotely control systems.

The Anunak/Carbanak admin panel (Courtesy of Kaspersky)

It arrives on financial institution networks as an attachment to a spear phishing email. Once in the network, it looks for endpoints of interest, such as those belonging to administrators and bank clerks. As the APT actors behind Carbanak campaigns don’t have prior knowledge of how their target’s system works, they surreptitiously video record how the admin or clerk uses it. The knowledge gained can then be used to move money out of the bank and into criminal accounts.

References on Anunak/Carbanak:

Cutlet Maker. This is one of several ATM malware families being sold in underground hacking forums. It is actually a kit made up of (1) the malware file itself, which is named Cutlet Maker; (2) c0decalc, a password-generating tool that criminals use to unlock Cutlet Maker; and (3) Stimulator, another benign tool designed to display information about the target ATM’s cash cassettes, such as the type of currency, the value of the notes, and the number of notes in each cassette.

Cutlet Maker’s interface (Courtesy of Forbes)

References on Cutlet Maker:

SUCEFUL. Hailed as the first multi-vendor ATM malware, SUCEFUL was designed to capture bank cards in the infected ATM’s card slot, read the card’s magnetic stripe and/or chip data, and disable ATM sensors to prevent immediate detection.

The malware’s name is derived from a typo—supposed to be ‘successful’—by its creator, as you can see from this testing interface (Courtesy of FireEye)

References on SUCEFUL:

Social engineering

Directly targeting ATMs by compromising their weak points, whether on the surface or on the inside, isn’t the only effective way for fraudsters to score easy cash. They can also take advantage of the people using the ATMs. Here are the ways users can be socially engineered into handing over hard-earned money to criminals, often without knowing it.

Defrauding the elderly. This has become a trend in Japan. Fraudsters posing as relatives in need of emergency money or government officials collecting fees target elderly victims. They then “help” them by providing instructions on how to transfer money via the ATM.

Assistance fraud. Someone somewhere at some point in the past may have been approached by a kindly stranger in the same ATM queue, offering a helping hand. Scammers use this tactic to memorize their target’s card number and PIN, which they then use to initiate unlawful money transactions.

The likely targets for this attack are also the elderly, as well as confused new users who are likely first-time ATM card owners.

Shoulder surfing. This is the act of being watched by someone while you punch in your PIN using the ATM’s keypad. Stolen PIN codes are particularly handy for a shoulder surfer, especially if their target absent-mindedly leaves the area after retrieving their cash but hasn’t fully completed the session. Some ATM users walk away before they can even answer the machine when it asks if they have another transaction. And before the prompt disappears, the fraudster enters the stolen PIN to continue the session.

Eavesdropping. Like the previous point, the goal of eavesdropping is to steal the target’s PIN code. This is done by listening and memorizing the tones the ATM keys make when someone punches in their PIN during a transaction session.

Distraction fraud. This tactic swept through Britain a couple of years ago. The scenario goes like this: An unknowing ATM user is distracted by the sound of dropping coins behind them while taking out money. They turn around to help the person who dropped the coins, not knowing that someone else is already stealing the cash the ATM just spewed out or swapping a fake card for their real one. The ATM user looks back at the terminal, content that everything looks normal, then goes on their way. The person they helped, meanwhile, either receives the stolen card or tells an accomplice the stolen card’s PIN, which they memorized when their target punched it in, just before deliberately dropping the coins.

A still taken from Barclay’s public awareness campaign video on distraction fraud (Courtesy of This is Money) Continued vigilance for ATM users and manufacturers

Malware campaigns, black box attacks, and social engineering are problems that are actively being addressed by both ATM manufacturers and financial institutions. However, that doesn’t mean ATM users should let their guard down.

Keep in mind the social engineering tactics we outlined above when using an ATM, and don’t forget to keep a lookout for something “off” with the machine you’re interacting with. While it’s quite unlikely a user could tell if an information-stealer had compromised her ATM (until she saw the discrepancies in her transaction records later), there are some malware types that can physically capture cards.

If this happens, do not leave the ATM premises. Instead, record every detail related to what happened, such as the time the card was captured, the ATM location, and which transactions you made prior to realizing the card would not eject. Take pictures of the surroundings and the ATM itself, and discreetly snap photos of anyone lingering about. Finally, call your bank and/or card issuer to report the incident and request card termination.

We would also like to point you back to part 1 of this series again, where we included a useful guideline for reference on what to look out for before dropping by an ATM outlet.

As always, stay safe!

The post Everything you need to know about ATM attacks and fraud: part 2 appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Making the case: How to get the board to invest in government cybersecurity

Malware Bytes Security - Thu, 08/01/2019 - 12:00pm

Security leaders are no longer simply expected to design and implement a security strategy for their organization. As a key member of the business—and one that often sits in the C-suite—CISOs and security managers must demonstrate business acumen. In fact, Gartner estimates that by 2020, 100 percent of large enterprise CISOs will be asked to report to their board of directors on cybersecurity and technology risk at least annually.

Presenting to the board, demonstrating ROI, and designing security as a business enabler are all in the job description for CISOs today.  But when it comes to communicating with the board and executive management, CISOs in different verticals will have disparate challenges to address.

In a series of posts about CISO communication, we look at these varying issues and concerns across verticals. This month, we examine what security leaders in government positions need to be mindful of when working to get buy-in from higher levels.

For perspective, I tapped Dan Lohrmann. Lohrmann is a security veteran who led Michigan government’s cybersecurity and technology infrastructure teams from May 2002 to August 2014. Now, as CSO and Chief Strategist for Security Mentor, he still writes and comments regularly on the requirements for government cybersecurity officials.

What unique challenges do CISOs working in government have that differ from their peers in the private sector? 

There are many, but I’ll mention three.

First, in government, the people closest to the top executive are almost always political friends/allies of the governor or mayor or other top public sector leader. The majority of these most trusted people were “on the bus” when they ran for office. This means that many top executives literally campaigned with them through primaries and long days of political rallies, gave financially to their campaigns and more. These are the people who are in the “inner circle” and who are listened to the most by government leaders. They have unique access and long-term relationships which are very hard to gain if you were not “on the bus.” There is nothing equivalent in the private sector, because there are not open public elections.

Second, while building trust takes time and skill in both the public and private sectors, the timelines for projects are often different. In government, there are set cycles which tend to follow election calendars, which often run for four years, but can range from two up to six. Investments and priorities with the board—often the cabinet or committee or council—also follow unique budget cycles that include getting legislative and perhaps other support. The timing of requests is paramount. Learn the lingo and metrics of these groups. How do they measure success?

Third, government rules, procedures, processes, approvals, oversight, and audits are often very complex and unique. It can take years to fully understand all the fiefdoms and side deals that occur in government silos. In the private sector, financial or staff support from the top leaders is generally acted upon swiftly. But, in contrast, I have seen government leaders make clear decisions, only to see the “government bureaucracy” kill projects through a long list of internal maneuvers and delaying tactics.

What do government CISOs need to keep in mind when they communicate with either the board or other governing body in their organization?

Know where you stand, not just on the org chart, but in the pecking order of “trust circles” in government. If you are not in the inner circle—and you probably are not if you were not on the bus—ask who is. Also, strive to at least be in the middle circle of career professionals who are trusted to “get things done” with a track record of career success. Build trusted relationships with those in the inner circle (or at least in the middle circle), where possible. Have lunch with government leaders. Learn the top government leaders’ priorities and campaign promises. Get invited to the strategy sessions and priority-setting meetings that impact technology and security. Make your case in different ways (from elevator pitches to formal cybersecurity presentations).

Second, gain a good understanding of how things get done in government. Read case studies of successful projects. Learn budget timelines for official (and unofficial) proposals. Always have a list of current needs when “fallout money” becomes available. Side note: I was often told “no money for that project” for months or even years, only to have a budget person come up to me at the end of the fiscal year saying, “I need the spending details now.” Lesson: Be ready with your hot needs list.

Third, get to know the business leaders in the agencies who may be more sympathetic to your cause, even if/when the top elected leaders are not. Find a business champion in your organization who is backing cyber change in powerful ways and get behind that snowplow. Surprisingly, this may not be an IT manager. For example, I’ve seen security champions in the transportation and treasury departments. The senior execs in treasury were in charge of credit cards and needed payment card industry compliance. They pushed for extensive improvements in our network controls by demonstrating the penalties of noncompliance.  

Fourth, do regular cyber roadshows, at least annually, to business areas throughout government. Build a regular cadence for updates on what’s happening, and don’t assume this is a one-time deal. Go over the good, bad, and ugly, as well as action items in security. Talk about what is working and where improvements are needed, backed by metrics.

Fifth, form a cyber committee (or better, utilize an existing technology sub-committee) to get executive buy-in from middle management in business areas. Get security ambassadors to help make the case through front-line non-IT leaders who are respected.

What tips for effective communication would you offer CISOs in government agencies?

Two tips and a word of caution.

I often hear CISOs and other government leaders say there is no money or hiring, and that their projects never get funded. My response is to “get on the boats leaving the dock.” That is, what projects are getting funding? Are you, or your top deputies, in those important meetings? For example, a new tax database is a top priority, but you are not invited to participate. Why? Make sure security is built into all strategic projects. Build trust through getting involved in top priorities—or, if you can’t beat them, join them.

Another tip is to strategically partner with others. This means building bridges through grants, other government groups like the MS-ISAC, police, FBI, DHS, etc. Many of these groups usually have the reputations and a level of trust associated with them, even when new leaders don’t. If you study what has worked and not worked in the past, you can benefit greatly from these relationships. This can also include relationships with the private sector.    

One final word of caution: When a new top leader is elected, the inner circle will inevitably change. Staying effective during this transition, especially if political parties change, is a huge challenge. Nevertheless, cybersecurity is one of the few high-priority topics which tends to be nonpartisan. Stay focused on protecting data and critical infrastructure, and you can survive, even during very difficult administration changes.    

The post Making the case: How to get the board to invest in government cybersecurity appeared first on Malwarebytes Labs.

Categories: Malware Bytes

No summer break for Magecart as web skimming intensifies

Malware Bytes Security - Thu, 08/01/2019 - 11:00am

This summer, you are more likely to find the cybercriminal groups known collectively as Magecart client-side rather than poolside.

Web skimming, which consists of stealing payment information directly from within the browser, is one of today’s top web threats. Magecart, the group behind many of these attacks, gained worldwide attention with the British Airways and TicketMaster breaches, costing the former £183 million ($229 million) in GDPR fines.

Skimmers, sniffers, or swipers (all valid terms used interchangeably over the years) have been around for a long time and fought against mostly on the server side by security companies like Sucuri that perform website remediation.

Today, web skimming is a booming business comprised of numerous different threat groups, ranging from mere copycats to more advanced actors. During the past few months, we have witnessed a steady increase in the number of hacked e-commerce sites and skimming scripts. In this post, we share some statistics on web skimming based on our telemetry, as well as what Malwarebytes is doing to protect online shoppers from this threat.

65K theft attempts blocked in July

During the past few months, we have been observing a growing number of blocks related to skimmer domains and exfiltration gates. This activity drastically increased as the summer rolled out, most notably with peaks around July 4 (Figure 1).

Figure 1: Web blocks for skimmer domains and gates recorded in our telemetry

In the month of July alone, Malwarebytes blocked over 65,000 attempts to steal credit card numbers via compromised online stores. Fifty-four percent of those shoppers were from the United States, followed by Canada with 16 percent and Germany with 7 percent, as seen in Figure 2.

Figure 2: Top 10 countries for Magecart activity in July

In addition to a greater number of compromised e-commerce sites (which have often been injected with more than one skimmer), we also documented large and ongoing spray-and-pray attacks on Amazon S3 buckets.

Many skimmers, too many groups

Skimmer code can help to identify the groups behind them, but it is becoming increasingly difficult to do so. For instance, the Inter kit that is sold underground is used by different threat actors, and there are many copycats reusing existing code for their own purpose as well.

Figure 3: Fragments from different skimmer scripts

Having said that, skimmers typically have a similar set of functionalities:

  • Looking at the current page to see if it’s the checkout
  • Making sure developer tools are not in use
  • Identifying form fields by their ID
  • Doing some validation of the data
  • Encoding the data (Base64 or AES)
  • Exfiltrating the data to their external gate or on the compromised store
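As a rough illustration of how analysts triage scripts for these traits, the sketch below scans a script’s source for a few indicator strings drawn from the list above. The indicator set and function names are hypothetical examples of ours, not taken from any particular detection product, and real skimmers vary far more widely:

```python
import re

# Hypothetical indicator patterns, one per trait listed above.
INDICATORS = {
    "checkout page check": re.compile(r"checkout|onepage|payment", re.I),
    "devtools evasion": re.compile(r"devtools|debugger|outerWidth", re.I),
    "form field harvesting": re.compile(r"getElementById|querySelector", re.I),
    "base64 encoding": re.compile(r"\bbtoa\b|base64", re.I),
}

def triage_script(js_source: str) -> list:
    """Return the names of skimmer-like traits found in a script's source."""
    return [name for name, rx in INDICATORS.items() if rx.search(js_source)]

sample = "if(location.href.indexOf('checkout')>-1){var d=btoa(document.getElementById('cc').value);}"
print(triage_script(sample))
# → ['checkout page check', 'form field harvesting', 'base64 encoding']
```

String matching like this is trivially defeated by obfuscation, which is one reason blocking known skimmer domains and exfiltration gates remains a more reliable layer of defense.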

While some skimmers are simple and easily readable JavaScript code, more and more are using some form of obfuscation. This is an effort to thwart detection attempts, and it also serves to hide certain pieces of information, such as the gates (criminal-controlled servers) that are used to collect the stolen data. Fellow researchers also noted the same for the data exfiltration process, although strange encryption may actually raise suspicions.
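One coarse signal analysts sometimes use to flag obfuscated or packed scripts is character-level Shannon entropy, since packed or encoded code tends to score higher than hand-written JavaScript. The helper below is a minimal sketch; any threshold you would apply in practice is an assumption that varies by corpus:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character in a string."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

readable = "document.getElementById('card-number').value"
packed = "eval(atob('dmFyIGs9J2NhcmQnO3NlbmQoayk7'))"
print(round(shannon_entropy(readable), 2))
print(round(shannon_entropy(packed), 2))
```

Note that legitimately minified JavaScript also scores high, so entropy alone should only prioritize scripts for manual review, not convict them.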

Magecart protection, client-side

Combating skimmers ought to start server-side with administrators remediating the threat and implementing a proper patching, hardening, and mitigation regimen. However, based on our experience, a great majority of site owners are either oblivious or fail to prevent re-infections.

A more effective approach consists of filing abuse reports with CERTs and working with partners to take a more global approach by tackling the criminal infrastructure. But even that is no guarantee, especially when threat actors rely on bulletproof services.

We often get asked how consumers can protect themselves from Magecart threats. Generally speaking, it’s better to stick to large online shopping portals rather than smaller ones. But, this piece of advice hasn’t always held true in the past.

At Malwarebytes, we identify those skimmer domains and exfiltration gates. This means that by blocking one malicious hostname or IP address, we can protect shoppers from dozens, if not hundreds, of malicious or compromised online stores at once.

In Figure 4, we see how Malwarebytes intercepts a skimmer that had been injected into the website for Pelican Products before the customer entered their information. (We reported this breach to Pelican and it appears that the site is now clean).

Figure 4: Magecart theft attempt blocked in realtime

The recent headlines about data breaches have eroded people’s trust in entering personal information online. And yet, there are still many myths that persist and give a false sense of security. For example, the trust seals many merchants proudly display or even their use of digital certificates (HTTPS) will not protect you from a Magecart attack.

There is no doubt that Magecart threat actors, despite their diversity, are in it for the long game and because the attack surface is quite vast, we are bound to observe new schemes in the near future.

The post No summer break for Magecart as web skimming intensifies appeared first on Malwarebytes Labs.

Categories: Malware Bytes

QR code scam can clean out your bank account

Malware Bytes Security - Wed, 07/31/2019 - 12:05pm

“Excuse me sir, can I ask you for a favor? I want to pay for parking my car in this spot, but there are no machines around that accept cash. If I give you five dollars in cash, can you pay the parking for me? All you need to do is scan this QR code with your banking app.”

Of course, John felt the need to help this person, but since no good deed goes unpunished, he came home only to find that every penny he had in his bank account had vanished. So, all he had to last him through the rest of the month was the fiver in his wallet.

A week ago, one of the Netherlands’ local police departments issued a warning that this type of scam was making the rounds. Meanwhile, two suspects have been apprehended after robbing dozens of people and amassing tens of thousands of Euros.

As far as the police know, these scammers have been active in two cities so far. They left the first city behind when the police started to hand out flyers about the parking scam, and they were caught red-handed in the second city. It may have helped that some of the potential victims had read the warnings by police on social media. These were issued along with warning signs posted in the parking lots and flyers that provided details about the scammers and a request to call the police if anyone saw them at work.

But in case criminals using this tactic are active in other European or US cities, we wanted to bring this particular scam to light.

What is a QR code exactly?

A QR (Quick Response) code is nothing more than a two-dimensional barcode. This type of code was designed to be read by robots that keep track of produced items in a factory. As a QR code takes up a lot less space than a legacy barcode, its usage soon spread.

Modern smartphones can easily read QR codes, as a camera and a small piece of software is all it takes. Some apps, like banking apps, have QR code-reading software incorporated to make it easier for users to make online payments. In other cases, QR codes are used as part of a login procedure.

QR codes are easy to generate and hard to tell apart from one another. To most human eyes, they all look the same. More or less like this:

URL to my contributor profile here

Even if we can spot any differences, we are unable to see what they stand for, exactly. And that is exactly what this scam banks on. To us, they all look the same—one payment instruction for five dollars looks just like any other.

How does this scam work?

Basically, it works the same as entering your login credentials on a banking phishing site. The scammers used social engineering to con victims into allowing them to scan the QR code on their own phone. By doing so, the victims provided the scammers with the login credentials to their banking environment.

With those in hand, it’s easy for the threat actors to make some payments on your behalf—into accounts under their control, obviously. It is likely that they used money mules to convert those payments into cash they could then spend freely without raising suspicion.

Other QR scams

Besides the fake banking environment scam, there have been reports of QR codes that were rigged to download malware onto the victim’s device. Also, criminals have been known to replace public and unguarded QR codes with their own so that payments would flow into their pockets.

For example, in China where bike-sharing is immensely popular and you pay in advance to unlock the bike, it can be profitable for criminals to replace the QR codes on a large number of bikes with some of their own. This could bring in a lot of (small) payments into the threat actor’s account, and many potential bike renters would shrug it off when the bike fails to unlock and move on to the next one to try their luck.

How can I protect myself?

There are a few things users can do to keep safe from QR code scams:

  • If you are using QR codes to make a payment, pay close attention to the details shown to you before you confirm the payment.
  • Use QR code payments only in circumstances that you consider normal. Don’t be rushed or talked into paying in a way that you are not completely familiar with.
  • Alert your bank and work with them to change your credentials as soon as you suspect foul play.
  • Treat a QR code like any other link. Don’t follow it if you don’t know where it originated from, or if you don’t fully trust the source.
  • If you are using a QR code scanner or thinking about installing one, consider using one that uses built-in filters. Or, you can use it in combination with security software that blocks malicious sites, because every QR code scanner I have seen automatically takes users to the link it reads from the QR code.

To users, QR codes offer an advantage over having to type out a full URL in the browser address bar on their device. To advertisers, this results in higher turnaround and forgoes the need for URL shorteners.

But QR codes have one problem in common with the shortened URLs: Users cannot immediately see where the link is going to lead them. And that is where the problem lies and what offers criminals the chance to abuse the technology.
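As a sketch of the “treat a QR code like any other link” advice, a careful scanner could summarize where a decoded payload actually points before opening it. The wording and checks below are illustrative, not the behavior of any real scanner app:

```python
from urllib.parse import urlparse

def describe_qr_payload(payload: str) -> str:
    """Summarize where a decoded QR code points before following it."""
    parsed = urlparse(payload)
    if parsed.scheme not in ("http", "https"):
        # QR codes can also carry Wi-Fi configs, phone numbers, etc.
        return f"non-web payload ({parsed.scheme or 'no scheme'}) - inspect manually"
    note = "" if parsed.scheme == "https" else " (unencrypted!)"
    return f"link to host '{parsed.hostname}'{note}"

print(describe_qr_payload("https://example.com/pay?id=123"))
print(describe_qr_payload("http://198.51.100.7/login"))
```

Even a summary like this only shifts the decision back to the user; a lookalike domain will still read as a plausible host, so pairing it with a blocklist of known-malicious sites remains worthwhile.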

Luckily for John, his bank reimbursed him for the damages, but you can imagine the hassle he had to go through and how stupid he felt for falling for such a scam. But not every bank in every country will reimburse you fully for being scammed, so other victims may end up drawing the short straw.

Stay safe, everyone!

The post QR code scam can clean out your bank account appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Exploit kits: summer 2019 review

Malware Bytes Security - Tue, 07/30/2019 - 12:20pm

In the months since our last spring review, there has been some interesting activity from several exploit kits. While the playing field remains essentially the same with Internet Explorer and Flash Player as the most-commonly-exploited, it is undeniable that there has been a marked effort from exploit kit authors to add some rather cool tricks to their arsenal.

For example, several exploit kits are using session-based keys to prevent “offline” replays. This mostly affects security researchers who might want to test the exploit kit in the lab under different scenarios. In other words, a saved network capture won’t be worth much when it comes to attempting to reenact the drive-by in a controlled environment.

The same is true for better detection of virtual machines and network tools (something known as fingerprinting). Combining these evasion techniques with geofencing and VPN detection makes exploit kit hunting more challenging than in previous quarters.

Threat actors continue to buy traffic from ad networks and use malvertising as their primary delivery method. Leveraging user profiling (their browser type and version, country of origin, etc.) from ad platforms, criminals are able to maintain decent load rates (successful infection per drive-by attempts).

Summer 2019 overview
  • Spelevo EK
  • Fallout EK
  • Magnitude EK
  • RIG EK
  • GrandSoft EK
  • Underminer EK
  • GreenFlash EK

Internet Explorer’s CVE-2018-8174 and Flash Player’s CVE-2018-15982 are the most common vulnerabilities, while the older CVE-2018-4878 (Flash) is still used by some EKs.

Spelevo EK

Spelevo EK is the youngest exploit kit, originally discovered in March 2019, but by no means is it behind any of its competitors.

Payloads seen: PsiXBot, IcedID

Fallout EK

Fallout EK is perhaps one of the more interesting exploit kits. Nao_Sec did a thorough writeup on it recently, showing a number of new features in its version 4 iteration.

Payloads seen: AZORult, Osiris, Maze ransomware

Magnitude EK

Magnitude EK continues to target South Korea with its own Magniber ransomware in steady malvertising campaigns.

Payload seen: Magniber ransomware

RIG EK

RIG EK is still kicking around via various malvertising chains and perhaps offers the most diversity in terms of the malware payloads it serves.

Payloads seen: ERIS, AZORult, Phorpiex, Predator, Amadey, Pitou

GrandSoft EK

GrandSoft EK remains the weakest exploit kit of the bunch and continues to drop Ramnit in Japan.

Payload seen: Ramnit

Underminer EK

Underminer EK is a rather complex exploit kit delivering an equally complex payload, which we continue to observe via the same delivery chain.

Payload seen: Hidden Bee

GreenFlash Sundown EK

The elusive GreenFlash Sundown EK marked a surprise return via its ShadowGate in a large malvertising campaign in late June.

Payloads seen: Seon ransomware, Pony, coin miner

Other drive-bys

A few other drive-bys were caught during the past few months, although it might be a stretch to call them exploit kits.

  • azera drive-by used the PoC for CVE-2018-15982 (Flash) to drop the ERIS ransomware
  • Radio EK leveraged CVE-2016-0189 (Internet Explorer) to drop AZORult

Three years since Angler EK left

June 2016 is an important date for the web threat landscape, as it marks the fall of Angler EK, perhaps one of the most successful and sophisticated exploit kits. Since then, exploit kits have never regained their place as the top malware delivery vector.

However, since our spring review, we can say there have been some notable events and interesting campaigns. It may be hard to believe that users are still running machines with outdated Internet Explorer and Flash Player versions, but this renewed activity proves that many are.

Although we have not mentioned router-based exploit kits in this edition, they are still a valid threat that we expect to grow in the coming months. Also, if exploit kit developers start branching out of Internet Explorer more, we could see far more serious attacks.

Malwarebytes users are protected against the aforementioned drive-by download attacks thanks to our products’ anti-exploit layer of technology.

Indicators of Compromise (URI patterns)

Spelevo EK


Fallout EK


Magnitude EK




GrandSoft EK


Underminer EK


GreenFlash Sundown EK


The post Exploit kits: summer 2019 review appeared first on Malwarebytes Labs.

Categories: Malware Bytes