Malware Bytes

Is domain name abuse something companies should worry about?

Malware Bytes Security - Fri, 09/18/2020 - 12:57pm

Even though some organizations and companies may not realize it, their domain name is an important asset. A company’s web presence can make or break it, and “domain name abuse” is something that can ruin your reputation.

Losing control

There are several ways in which perpetrators can abuse your good name for their own profit, ruining your reputation in the process.

  • Domain name hijacking
  • Webserver takeovers
  • Domain name abuse

The first two are closely related and are usually the result of an attack or breach of some kind.

Domain name hijacking can happen when someone gets hold of your credentials and changes which server gets to display content when the domain is queried. Generally speaking, this is done by changing the DNS records for the domain. If the attackers plan to prolong their use of your domain, they will also move the registration to a different registrar, which makes it harder for the original owner to regain control. To pull this off they need your login credentials with the original registrar, obtained either by phishing or through a data breach at the registrar. Many registrars also ask for an Auth-Code when a domain holder wants to transfer a domain name from one registrar to another, so it is wise to store this code separately from your login credentials. Worst case scenario: the registrar cannot solve the issue for you, and even ICANN will not be able to remediate the illegal transfer if your requests to the original and new registrar fail to get your control back.
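One practical early-warning measure is to periodically compare a domain’s current DNS answers against a known-good baseline. The sketch below shows the idea; the lookup helper uses the local resolver, and the baseline addresses are illustrative stand-ins (RFC 5737 documentation addresses), not records for any real domain.

```python
# Sketch: detect unexpected changes to a domain's DNS answers by
# comparing against a stored known-good baseline.
import socket

def resolve_a_records(domain):
    """Return the set of IPv4 addresses the local resolver gives for a domain."""
    try:
        _, _, addresses = socket.gethostbyname_ex(domain)
        return set(addresses)
    except socket.gaierror:
        return set()

def detect_record_change(baseline, observed):
    """Compare a stored baseline against freshly observed records.

    Returns (added, removed) address sets; anything non-empty deserves
    a closer look -- it may be a legitimate migration, or a hijack.
    """
    return observed - baseline, baseline - observed

# The comparison itself needs no network access:
baseline = {"203.0.113.10"}                     # illustrative baseline
observed = {"203.0.113.10", "198.51.100.7"}     # illustrative fresh lookup
added, removed = detect_record_change(baseline, observed)
# added == {"198.51.100.7"}, removed == set()
```

In practice you would feed `resolve_a_records()` output into `detect_record_change()` on a schedule and alert on any difference, ideally resolving against more than one resolver.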

Webserver takeovers are more of a direct attack on your own servers, whether they are on premise, hosted, or in the cloud. This is what we often see when websites are defaced, and in other attacks with a shorter lifespan. The results are easier to remedy, as it usually only takes restoring a backup of the old website to return it to its old glory. Sometimes all you need to do is remove a few files the attacker added. The important part, though, is to find out how the attacker got access to the webserver(s) and how you can prevent a repeat in the future.

A whole different, but related, topic we have discussed before is the use of expired domains for malvertising. While the technique is totally different, the end goal—malvertising—is of common interest.

Domain name abuse

But the main topic of this post is domain name abuse, a subject that is much harder to grasp because it does not involve access to something that belongs to you. At best (or rather, at worst) the infringement is on your intellectual property.

Again, there are several possible scenarios.

  • Typosquatting
  • Domain name registration under another Top Level Domain (TLD)
  • Replacing country code TLDs (ccTLDs)
  • Using ccTLDs to replace .com or other generic TLDs

Depending on the objective of the domain name abuse, some strategies will make more sense than others. If the motive is email fraud, making the website look exactly like the one the perpetrator wants to mimic is more important than having a convincingly deceiving domain name, especially since spoofing is another option often used in email fraud.

Typosquatting is the method of using domain names that differ only slightly from the real one. They are usually only one typo away, hence the name. This technique is typically aimed at highly popular domain names to increase the chance of success. To use an example: goggle[.]com. (See? At first glance, it kind of works.)
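“One typo away” can be made precise with edit distance. The sketch below flags candidate domains that sit a single edit from a brand’s domain using a plain dynamic-programming Levenshtein distance; the candidate list is made up for illustration.

```python
# Sketch: flag candidate domains within one edit of a brand's domain --
# the classic typosquat pattern.
def edit_distance(a, b):
    """Levenshtein distance between two strings (row-by-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def likely_typosquats(brand, candidates, max_distance=1):
    """Return candidates within max_distance edits of the brand domain."""
    return [c for c in candidates
            if 0 < edit_distance(brand, c) <= max_distance]

print(likely_typosquats("google.com", ["goggle.com", "google.com", "example.com"]))
# ['goggle.com']
```

A real monitoring setup would run this against newly registered domain feeds, and would also cover homoglyph swaps (e.g. “rn” for “m”) that plain edit distance scores the same as any other substitution.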

Changing the TLD means the holder of the new domain registered the same name under a different TLD, expecting the reader not to notice or be aware of the switch. Yet another example: whitehouse[.]com.

Replacing the country code TLD is basically the same method, but it is often used for banking fraud sites, where a national bank is impersonated by giving its domain a more international TLD.

The other way around happens as well: the international TLD gets replaced with a country code TLD. This also makes convincing bait, since many international companies use this method to direct traffic for local dealerships to a localized website. Chevrolet, for example, owns localized country-code domains besides its main international one.

What is the purpose of the abuse?

Before we look at how we can respond to domain name abuse, it is important to establish what the purpose of the abuser is. The motives can range from downright malicious and illegal to trying to grab some extra traffic using your brand, which is not necessarily illegal per se. There are some grey areas between the two where legal action may or may not have the desired result.

What is definitely not allowed is when the abuser tries to pretend to be a representative for your company or to act in your name without your consent. On the other hand, it is not illegal to hope that someone makes a typo. But like we said there is a big grey area between these two. Let’s look at an example.

In most countries it is not illegal to act as an intermediary between the public and an organization. Take, for example, the intermediaries that ask for money to do the necessary paperwork for a US Green Card. Probably every country has at least one that offers to assist you in applying for one. For a fee, of course. There is nothing illegal about most of them. The terrain gets shady, though, when the intermediary uses a domain name that could make the visitor think they are dealing with the U.S. Citizenship and Immigration Services (USCIS) directly. For example, by using the domain uscis[.]us. It gets downright illegal when the owner of the domain puts official logos of another organization on their website. At that point they are impersonating the USCIS and can expect a takedown. For commercial companies, such behavior can also be treated as an infringement of intellectual property.


How can you respond?

What are your options when you notice, or worse, get notified about domain name abuse? There are a few ways to deal with websites that cast a negative shadow over your own:

  • Ignore. In some cases, there are few other options, so your best strategy may be to not waste any time on the matter and hope it goes away.
  • Contact the owner. If you look up the domain name there will be a contact email or abuse email of the registrar provided. This method may help in cases of an honest mistake, but your efforts are likely to be futile when there is malicious intent.
  • Contact the registrar. You will need some luck, and decent evidence, to get a registrar to take down a website of one of their customers. Some registrars are known for their slowness and reluctance to help victims of domain name abuse.
  • In those cases you will have to resort to a so-called “notice and takedown” procedure. Many countries have an independent authority you can contact with complaints about their ccTLD. But these authorities will not be able to help you with international TLDs.
  • Take them to court. Easier said than done when you don’t know who you’re dealing with. And even if you do, court rulings can take a long time and are costly. But sometimes the threat of legal proceedings is enough, since they are costly for the other party as well.
  • Make sure nobody finds the offensive sites. When the owners rely on search engines to find their site you can counter them there. Often, new sites rely on paid advertisements with search engines to bring in the necessary traffic. Your options are to file a complaint with the search engine or to simply outbid the opponent by paying more for advertisements pointing to your own site.
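For the “contact the registrar” route, the registrar’s abuse contact usually appears in WHOIS output. The sketch below pulls it out of raw WHOIS text; the sample record is fabricated, but it follows the common “Registrar Abuse Contact Email:” key used in gTLD WHOIS output (real output varies by registrar and TLD).

```python
# Sketch: extract the registrar abuse contact from raw WHOIS output.
import re

def abuse_contact(whois_text):
    """Return the registrar abuse email from WHOIS output, or None."""
    match = re.search(r"Registrar Abuse Contact Email:\s*(\S+@\S+)",
                      whois_text, re.IGNORECASE)
    return match.group(1) if match else None

# Fabricated sample record for illustration:
sample = """\
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, Inc.
Registrar Abuse Contact Email: abuse@registrar.example
Registrar Abuse Contact Phone: +1.5555550100
"""
print(abuse_contact(sample))  # abuse@registrar.example
```
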

Required level of protection

There are very different levels of necessity for specialized services and systems that watch for and report possible abuse of your domain name. Banks and other financial institutions will probably have a whole department involved in takedowns, where your run-of-the-mill mom-and-pop shop will be satisfied if they can keep their own site updated. Some companies will have an in-house department keeping an eye on possible domain name abuse, backed by a legal department that gets called in when necessary. Others will hire a specialized company to do this for them, while the vast majority have taken no precautions at all and will respond whenever a problem arises.

And as long as your company’s approach fits its category, there is not much reason to make changes. Having a specialized department when there is absolutely no track record of domain name abuse, and none is expected, is a waste of time and money.

The post Is domain name abuse something companies should worry about? appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Charities and the advertising industry: data ecosystems and privacy risks

Malware Bytes Security - Thu, 09/17/2020 - 12:59pm

Data makes the world go round, more often than not via advertising and its tracking mechanisms. Whether you think making money from large volumes of PII to keep the web ticking over is a good thing, or a sleazy data-grab often encouraging terrible ad practices, it’s not going to go away anytime soon.

A detailed analysis of ad tracking mechanisms on popular charity websites has been released by ProPrivacy, and it explores the nuances of organisations balancing the need to stay in operation alongside ensuring personal data and privacy are top of the agenda. Unfortunately, it appears there’s still a lot of work to do in that regard.

The numbers game

Right off the bat, I think it’s important to pin down exactly what kind of numbers we’re talking about here. The report is incredibly long and detailed, and it’s quite easy to miss key points as a result. If you skimmed the report, or just glanced at bits and pieces, you might come away thinking 80,000 UK-based charities are harvesting data on a grand scale. That isn’t the case.

The domains were extracted by researchers from the Charity Commission’s database. Once potentially unrelated sites, such as publishing companies, subdomains, and dead URLs, were removed from the total, what was left was around 64,000 sites. That’s still a sizeable number of domains. Even so, that tally is about to drop further.

The study authors deduced that 42% of what remained used ad tracking technology. That’s around 27,000 sites. This is still a big number, but as you can see we’ve already lost a significant chunk of the original tally.

Adding shape to data

As for what lurked on those sites, I’ll stand back and let the researchers do the talking:

The majority of these trackers were related to social platforms. 33.8% of the sites analysed contained trackers belonging to: Facebook, Twitter, AddThis, YouTube, Instagram, LinkedIn, or Flickr.

DoubleClick, the Alphabet-owned programmatic advertising (RTB) platform was installed on 10,105 (15.6% of sites).

Outside of the Google advertising ecosystem, we found 330 (0.51%) charities with RTB trackers and 220 (0.34%) with data broker trackers installed.

Your mileage can (and will!) vary, but I don’t personally think people generally have issues with things like social plugins, especially when many of us use those tools daily. It’s also quite easy to find out what, exactly, those plugins do and how to avoid them if you really want to.

Some of the other elements could be cause for concern, however.

The study found that 90% of the top 100 popular charities in the UK used advertising methods via DoubleClick or similar technology. Again, Google’s DoubleClick is something you can at least find information on and make an informed decision as to whether you want your data to interact with it. With them taken out of the picture, 40% used third party elements belonging to either RTB players or data brokers.

This is where the story really kicks into gear. Before said gear can be kicked, it’s time for a brief “What is RTB” interlude.

What is Real Time Bidding (RTB)?

Back in the olden days of online advertising, ads were purchased in bulk and placed only on specific websites. It was all a bit cumbersome and not particularly sophisticated, at least compared to what’s now available. Real Time Bidding (RTB) is a system where advertisers compete in real time for impressions against specific audiences and targets.

It’s more agile than more traditional methods of ad unit placement, and usually a bit cheaper. Instead of the old bulk methods, you can assign whatever size budget you like and only “win” the bids you’re interested in. Anything unimportant to your overall strategy won’t factor into things.
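The auction mechanics can be sketched in a few lines. This is a toy second-price auction of the kind RTB exchanges historically ran (many have since moved to first-price); it ignores price floors, exchange fees, and targeting, and the bidder names and amounts are made up.

```python
# Sketch: a toy second-price auction, as historically used in RTB.
def run_auction(bids):
    """bids: dict of bidder -> bid amount. Returns (winner, price_paid).

    The highest bidder wins but pays the second-highest bid, which is
    what made bidding one's true value a sound strategy.
    """
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction({"adco": 2.50, "brandx": 1.80, "rogue": 0.90})
# winner == "adco", price == 1.80
```

The real-time part is what matters for the rest of this story: all of this happens in the milliseconds it takes a page to load, for every single impression.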

Think of it as an advertising sandwich, with the advertisers wanting to promote wares on one side, the website on the other, and the ad network filling in-between connecting the two. Within that ad network space, you’ve got the big players at the top of the…sandwich tree?…and an endless procession of ad agencies.

There are usually additional ad agencies filling the role of brokers liaising with said big guns. In amongst all of this, the rogue advertisers place their bids for impressions alongside legitimate buyers, and the real-time nature of things makes it tricky to sniff them out. Those bogus ads could be pushing malware, or redirects, or both.

That’s at the “definitely very bad” end of the scale. Elsewhere, we have simply “RTB working as intended”. That’s our sign to jump back to the story at hand.

Charity sites and RTB

According to the research, 21 charities are sharing data with brokers directly, and seven are sharing with more than one broker. As you can imagine, it’s important to comply with all relevant rules to keep site visitors safe from potential privacy intrusions. What the study found, however, was that in many cases charities simply had no idea what was happening on their own sites.

Daisy chains of third-party requests from the initially placed tracker mean visitor data could be shared with multiple companies. Who are they? What are they doing with it? Well, the charity may not know, and so neither would you. If the people running the ad tech don’t fully explain to the charities what’s going to take place, that leaves both site and visitors at risk.

Oh no, my cookie jar

Worse, cookie compliance is a mess. In theory, when you see one of those “Do you accept” notices, you’re supposed to be able to decide whether you accept cookies / tracking or not. Everything should pause under the hood and wait for you to make an informed decision. The reality is a little bit shocking: a whopping 92% of the top charity sites fail to pause cookie loading until a decision is made.

Going back to the data, only 8 charities paused third-party cookie loading until a decision was made. The rest were potentially sharing data with advertisers while the site visitor decided what to do next. 30% of those in the top tier gave no consent option either way. Some form of actual control was offered to visitors by just 32%, with 13% ensuring their cookies stayed inactive, waiting for the visitor to make a move.

This is, frankly, not great.
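The “paused before consent” property can be audited mechanically: capture which cookies a page sets before the visitor answers the banner, then flag the third-party ones. The hostnames and cookie domains below are illustrative; a real audit would capture them from a headless-browser session.

```python
# Sketch: flag third-party cookies that were set before any consent choice.
def is_third_party(page_host, cookie_domain):
    """True if the cookie domain is neither the page host nor a parent of it."""
    cookie_domain = cookie_domain.lstrip(".")
    return not (page_host == cookie_domain
                or page_host.endswith("." + cookie_domain))

def pre_consent_violations(page_host, cookies_seen_before_consent):
    """cookies: list of (name, domain) pairs observed before consent."""
    return [(name, dom) for name, dom in cookies_seen_before_consent
            if is_third_party(page_host, dom)]

violations = pre_consent_violations(
    "donate.charity.example",
    [("session", "donate.charity.example"),   # first party
     ("_ga", ".charity.example"),             # first party (parent domain)
     ("IDE", ".ads-network.example")])        # third party, set pre-consent
# violations == [("IDE", ".ads-network.example")]
```

This is a simplification: a thorough audit would also use the Public Suffix List to compare registrable domains rather than raw host suffixes.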

Scenes from a charitable donation

Many of us donate to charity organisations, whether it’s one-off payments, rolling subscriptions, bags of clothing, or more. To give one example, after a house move I handed over a lot of clothing and other items I no longer had a use for. The way it works is you fill in a few forms when you hand the items over, and a few months later a letter comes through the door. It encourages me to visit the website and “See what we’ve done with your items”.

There are a few different ways this can play out:

  1. The letter will be personalised to my items, for example with a unique printed code which I input on the site. From there, the website would attempt to tie me to the items given to begin the matching of personal data and advertising profiles. Who knows if the marketing tools under the hood do anything prior to me making cookie related decisions? If they’re connected to daisy-chained advertising firms?
  2. The letter includes a code tied to your name / address. This code may or may not be used on the website to update details in case of a house move. It’s possible this will be tied to marketing profiles when first entered or updated, and then you’re back to the same situation in example 1.

Time to make a choice

In my example, the site presents me with a popup the length of the page, telling me analytical / marketing cookies are set to off by default. Essential cookies are ticked, and there are two separately placed “accept recommended settings” boxes. Is there no way to disallow the essential cookies even if the site requires them to function? If I click the “accept recommended settings” next to the currently switched off marketing cookies, will it enable them? Or is “off” the recommended setting?

Does the “accept recommended settings” box next to the essential cookies tick related to those specifically, or does it do the same thing as the recommended settings box next to marketing cookies? Where do I click to find out?

These are just a few of the questions I had in mind as I browsed the page, and I’m not entirely sure what the correct answers are. It may well be a slightly excessive examination of the choices before me, but such scrutiny is required to figure out exactly what we’re agreeing to. Without it, the idea of granting consent seems somewhat meaningless.

Charitable ethics

As the report notes, many charities deal with very sensitive subjects. How prepared are we to become monetised for random third parties, in order to keep our favourite charities of choice ticking over? There are no easy answers to this question. The main requirement here is to ensure people’s data is treated with the same respect the charities give the recipients of their hard work. Donators are happy to keep these organisations ticking over, and it’s definitely in the long-term interests of the charities to keep them that way.

Full report: Exposing the hidden data ecosystem of the UK’s most trusted charities (Source: ProPrivacy)

The post Charities and the advertising industry: data ecosystems and privacy risks appeared first on Malwarebytes Labs.


Fintech industry developments, differences between Europe and the US

Malware Bytes Security - Tue, 09/15/2020 - 11:00am

“Put your money in the bank and you can watch it grow.” If there is a statement that shows us how much the financial world has changed it’s this one. With the introduction of negative interest, companies and consumers with a large amount of liquid assets are looking for a different way to handle those assets.

This is where the innovative fintech industry comes into play.

What is fintech?

The hardware and software used in the financial world are generally referred to as fintech. But the expression is also used to describe startups in the financial world. In this article it will be used to describe the technology, as many established financial institutions feel they need to adopt the same new technology that the startups offer their customers. Because of this, we can find these new features in banking and other financial applications, both in the apps of established firms and in those of the newcomers.

Differences in the leading markets

When you think about fintech, what everyone envisions may differ a great deal, depending largely on which part of the world you are in. Before 2017 the US was leading the way, making large investments in the development of new technologies related to online banking, mobile apps, and other innovations in this field.

From then on, investments in the US fintech industry started to dwindle, simply because the existing companies turned out to need much more investment before they could possibly become profitable. Also, there were too many horses to bet on, and the more horses, the harder it is to pick the winner. The playing field in Europe was easier to oversee, and many new branches were carried by older, more trustworthy trees. In other words, fintech firms in Europe have long been undervalued by the market while offering substantial added value and more interesting growth prospects than their American counterparts. Product advancements and a focus on regulation have made European fintech companies more attractive to investors.

Regulation is an important factor

Consumer fintech adoption in the United States seems to be well behind that in most of Europe, where forward-looking regulation has sparked a surge of innovation in digital banking services, along with the backend infrastructure on which products are built and operated.

That might seem counterintuitive, as regulation is often blamed for slowing innovation down. Instead, European regulators have focused on reducing barriers to fintech growth rather than clinging to the way things are. For example, the U.K.’s Open Banking regulation requires the country’s nine big high-street banks to share customer data with authorized fintech providers.

Importance of fintech

The financial industry is considered to be vital infrastructure and for good reason. When we lose trust in our financial institutions, it turns our society upside down.

Web skimmers are a potential hurdle when it comes to trust. Generally speaking, web skimmers insert code into legitimate websites to eavesdrop on payment details and attempt to gather enough information to steal from the buyer. These information sets are sold on the black market to the highest bidder and can turn out to be very costly for the victim, or for their bank if it reimburses its customers. But even if you get reimbursed, the experience of someone plundering your bank account could scare potential buyers away from online shopping.

Fintech as an industry is one of the possible caretakers when it comes to constructing tamper-proof websites and preventing the interception of re-usable payment details.
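One concrete anti-skimming control is Subresource Integrity (SRI): the page pins a cryptographic hash of each third-party script, so a tampered copy fails to load. The sketch below computes the `integrity` attribute for a script; the script content and CDN URL are stand-ins, not real resources.

```python
# Sketch: compute a Subresource Integrity (SRI) attribute for a script,
# so the browser refuses a script whose content no longer matches.
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Return an SRI integrity value: '<algo>-<base64 digest>'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

script = b"console.log('checkout widget');"  # stand-in for a vendor script
integrity = sri_hash(script)
print(f'<script src="https://cdn.example/widget.js" '
      f'integrity="{integrity}" crossorigin="anonymous"></script>')
```

Combined with a Content-Security-Policy that restricts where scripts may load from and where data may be sent, this meaningfully raises the bar for a skimmer injected via a compromised third party.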

PCI DSS compliance

One of the instruments that is already in place, but could be implemented better, is PCI DSS compliance. Providers that stay on top of effective and sustainable compliance will make a notable contribution to the level of trust that consumers have in online transactions and other fintech innovations.

The differences

The future of fintech is promising, but how fast we can reap the fruits depends on where we live and how our governments handle regulation of the sector. As we have said before, European laws are focused on enabling developments in the fintech industry, all the while keeping an eye on privacy issues under the flag of the GDPR. In the US, the industry has been under heavy scrutiny since the 2008 banking crisis. For fintech startups this has resulted in a complicated regulatory framework, but also in the absence of concrete legislation that takes the different nature of their activities into consideration. And if these companies want to be players at an international level, they still have to adhere to the GDPR as well.

A common request from the US fintech industry has been to implement legislation that supports startups and is tailormade for the specific industry. The feeling is that this will create a favorable environment for growth and give the industry a chance to catch up with their European counterparts.

Given that the largest share of European fintech startup activity is based in the UK, some analysts are still holding their breath as the implications of Brexit start to pan out. During the implementation period, EU law will continue to apply and firms and funds will continue to benefit, but this hasn’t brought much long-term certainty for the financial services sector. The more traditional banks are already preparing to move hundreds of billions of dollars from London to the continent after Brexit. The magnitude of the move will likely depend on the outcome of the negotiations for new trade deals between the UK and the EU. A report from think tank New Financial suggests that 332 financial services firms have already moved jobs out of London because of Brexit, up from 60 the last time they looked, in March 2019. And this was before the COVID-19 pandemic threw a wrench into the negotiations between the UK and the EU. Companies are considering moving as a consequence of the delay and uncertainty around Brexit.

Consumer security

Whichever route the legislators decide to take, it should be clear that consumer security is a priority. Without consumer trust, all fintech endeavors will be futile anyway. Obviously mistakes will be made, but they should be dealt with fairly and lessons should be learned from them.

One of the reasons why some fintech startups are so successful lies in their ability to offer alternatives to conventional financial solutions through cryptocurrencies, online loans, and P2P. Along with this comes a variety of challenges, one of which is cybersecurity. The huge growth in the number and size of online platforms makes this industry very vulnerable to security breaches and creates potential targets for DDoS attacks.

The post Fintech industry developments, differences between Europe and the US appeared first on Malwarebytes Labs.


Lock and Code S1Ep15: Safely using Google Chrome Extensions with Pieter Arntz

Malware Bytes Security - Mon, 09/14/2020 - 10:49am

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Pieter Arntz, malware intelligence researcher for Malwarebytes, about Google Chrome extensions.

These sometimes helpful online tools that work directly with the Google Chrome browser can pull off a variety of tricks—checking your grammar, scouring the web for coupon codes, and providing a dark mode for easier nighttime browsing.

But the Google Chrome extension landscape is enormous, and countless users trying to navigate it can run into trouble.

Tune in to hear about the history of web browser extensions, and how to spot and protect against malicious Google Chrome extensions, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research, as well as other cybersecurity news.

Stay safe, everyone!

The post Lock and Code S1Ep15: Safely using Google Chrome Extensions with Pieter Arntz appeared first on Malwarebytes Labs.


The informed voter’s guide to election cyberthreats

Malware Bytes Security - Fri, 09/11/2020 - 11:00am

Singapore held its most recent general election on July 10, 2020, and although it used the first-past-the-post (FPTP) electoral system, a scheme favored by the US, UK, and most English-speaking countries, the road to Election Day was not without challenges and obstacles.

While all voters used paper ballots (thus removing the exposure to risk that comes with using electronic voting machines), phishing attempts, disinformation, threats of hacking political parties, and other election cyberthreats were nonetheless ever-present.

In light of the ongoing COVID-19 pandemic and the election technologies we know could be abused in the wrong hands, we look at these and other potential election cyberthreats, past, present, and (if pervasive enough) future, that the security world has been observing for quite a while now, so that informed voters can stay informed and hopefully continue exercising their right to vote, whenever and wherever that may be.

Ask, and ye shall know

Avid readers of this blog will know that we stress the importance of asking the right questions when deciding what to do online, what to buy (and where), and which technologies to use. In this case, we’d like to know which election threats we’d be facing and the ways we can protect against them or otherwise address them.

And in attempting to answer these, we first need to know what or who could be targeted during an election. This way, we’d also know what to protect and how. So, we ask—

What are election assets?

To identify the threats we are likely to face during an election period, it is important to identify an election’s critical assets, which will in turn aid us in recognizing and categorizing threats to those assets and to the election as a whole.

Election assets are:

Infrastructure. This pertains to places where polling is held, including storage facilities for elections and locations where election systems are physically set up. If we want to be granular, the US Department of Homeland Security (DHS) listed some of these as: voter registration databases and their associated IT systems, election management systems (for counting, auditing, displaying election results, etc.), and voting systems.

Materials. Pretty self-explanatory. These pertain to the objects used during the election process. For example, paper ballots or voter registration cards, pens, and indelible inks.

As of this writing, there have been no real or theorized election cyberthreats lodged against such materials. However, this doesn’t mean that no one has attempted to circumvent their use.

Take, for example, the use of the electoral ink. Its indelibility is a means to prevent constituents from casting more than one vote. With the ongoing pandemic, however, hand sanitizing before and after casting votes could impact the product’s effectiveness. Not to mention that there have been cases in the past where the ink used washed off easily and completely.

People and information. Although they are two completely different entities, we cannot really separate them. The people are the political candidates, their staff, and the parties they represent; the third-party organizations that are directly or indirectly involved in the elections; electoral officials and staff; and, of course, the voters. Under third parties, we can also include software and hardware manufacturers. Information, of course, refers to all the content these people share with the public, amongst each other, and within election systems.

Technology. Voting machines aren’t the only technology countries use for and during an election. Political candidates freely use printers, mobile phones, and the Internet to reach out and communicate with their supporters. Voting technology also includes the systems used to efficiently manage elections: information management systems, election management systems, and, yes, the machines’ operating systems themselves.

With these in mind, we can start identifying potential threats to these assets. Note that such threats can happen before, during, or even after elections.

Why would cybercriminals do this?

Oftentimes, it is easy to pin down intent based on the kind of threat being carried out. A phishing email, for example, gives a clear indication that whoever is behind the campaign is aiming to gather something from their target—be it sensitive information or money.

In many cases, the phishing email carries with it a malware payload, or a link to its download location. The malware, once executed, then seeks and siphons out information back to its command center. But things don’t end here.

The malware could reside and remain undetected, allowing attackers to gain a foothold onto the target’s network. Sensitive data, credentials, and other information they may have gleaned can be used for their own benefit—from selling to the highest bidder, leaking to disrupt normal operations or call into question a victim’s reputation, and—if some of the files seem personal enough—used to blackmail the victim to extort money.

When it comes to elections, one would likely assume that nation-state actors would interfere to tip the political scales in their favor. Other actors would introduce physical disruptions into the otherwise normal election process, either to cover up their true intent—which is usually something more sinister than causing trouble—or “just because.”

There is, however, a solid belief among cybersecurity professionals that the main motivation behind election-fueled attacks is to sow distrust among voters against either their government, the electoral staff and their processes, or the technologies used in conducting the elections. It’s also possible that their aim is all of the above.

How do threat actors disrupt or interfere in elections?

Now that we have identified what threat actors would likely target and why they would target them, we move to answering how threat actors would likely attack these targets. Here’s a comprehensive list of (potential) adversarial activities that have been observed, or at least theorized, so far:

Discourse manipulation

We already mentioned earlier that election threats can happen months before election day arrives. Suffice to say that while politicians and election administrators are busy planning and preparing for election day, expect that threat actors are doing the same. One of the more subtle (yet still nefarious) ways of interfering with elections in the run-up to election day is to manipulate the discourse to favor one candidate over another, or simply to cast doubt on the entire election process.

When one manipulates the discourse, one asserts “The Good Me” while also highlighting “The Bad Them.” Unfortunately, we’re all too familiar now with the ways in which cybercriminals (and politicians alike) do this through disinformation, fake news, and computational propaganda.

Disinformation, fake news, and computational propaganda

Several forms of disinformation from all sides of the political spectrum have been around almost as long as politics itself, and certainly pre-date the Internet. Word of mouth and traditional media have helped amplify such campaigns.

With the Internet, and now especially social media, it is easier than ever to, say, cook up stories about what a certain politician purportedly said regarding a sensitive matter that would anger a certain group of people. This can even be accomplished in real time. According to The Global Disinformation Order [PDF], a paper from the Oxford Internet Institute, online disinformation campaigns have been on the uptick since 2017.

There are those who have no real affiliation with government and politics and merely want to earn easy money by targeting certain partisan groups. However, the cybersecurity industry has seen its share of serious organized criminals—often backed by their respective governments—that bank on disinformation campaigns and fake news to accomplish political subterfuge.

There are a number of reasons why governments and political parties spread disinformation, and the classic reasons are to:

  1. Discredit their opponents
  2. Bury views that oppose theirs
  3. Erode trust in the election/election systems
  4. Erode trust in democracy or other forms of government

It’s interesting to note that when it comes to conducting deliberate disinformation campaigns, our gut reaction is to point the finger at foreign nationals. There’s certainly precedent. For one, Russia has a known history of spinning phony political stories since the 1920s.

But nowadays, Russia isn’t the only country engaged in such campaigns—or what others have started calling computational propaganda, which the Oxford Internet Institute defines as “the use of algorithms, automation, and big data to shape public life.” Iran is expanding its online disinformation operations, as are at least 70 more countries. Among these, China has dethroned Russia to become the new king of disinformation.

And then there is social media manipulation, the deployment of engineered content on social networking platforms, and the proliferation of computational propaganda techniques themselves. Threat actors both domestic and foreign have been copying and honing Russia’s disinformation methods from the previous election to influence the outcome of this year’s US general elections.

The Oxford study also asserted that although we have countless social platforms currently in existence, Facebook remains the platform of choice to spread disinformation and fake news.

While governments and groups continue to either point fingers or deny allegations, let’s not forget that the recipients of such material can also amplify disinformation to their network, whether that’s their intention or not. Some may judge the veracity of the material as legitimate. Others may question it but circulate nonetheless to kick off discourse. Either way, both efforts serve to circulate baseless claims and call into question what is the truth.

Hacking

In one of our blogs about election systems in the vital infrastructure series, we touched on how vulnerable elections could be, highlighting that although voting machines have been hacked successfully on many occasions, doing so at a large scale can actually prove more challenging than initially thought. To further complicate matters for potential hackers, not all states use the same voting machine make, model, and supplier. But that doesn’t mean elections are safe from hacking.

Websites can be hacked, too. Remember the news about the website of the Florida Secretary of State being successfully infiltrated via a SQL injection by an 11-year-old? Okay, granted, the website was actually a replica of the real thing, but one can’t help but think how potentially vulnerable these supposedly critical websites are, especially when they are accessed and interacted with by all sorts of people—including those with ill intent.
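To illustrate the class of flaw involved, here is a minimal, hypothetical SQL injection sketch and its standard fix, using Python’s built-in sqlite3 and an invented voters table. This is not the code or schema of any actual election website:

```python
import sqlite3

# Invented table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, ssn TEXT)")
conn.executemany("INSERT INTO voters VALUES (?, ?)",
                 [("Jane Doe", "123-45-6789"), ("John Roe", "987-65-4321")])

def lookup_unsafe(name):
    # VULNERABLE: user input is pasted straight into the SQL string.
    return conn.execute("SELECT * FROM voters WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM voters WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"          # classic injection string
print(lookup_unsafe(payload))    # dumps every row in the table
print(lookup_safe(payload))      # returns []: no voter has that literal name
```

The fix is a single habit: never build queries by string formatting; always pass user input through the driver’s placeholder mechanism.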

In the last quarter of 2019, McAfee found that swing state election websites weren’t secure from cyberattacks. For starters, the connections to these websites are problematic, as they don’t use HTTPS by default, making them far easier to tamper with. Cybercriminals could make small but significant changes, like altering the websites’ content, that could cause confusion about the dates, locations, and times to vote, or otherwise disrupt the election process.

Perhaps, on an even more dangerous note, hackers could do as little as claim to have compromised a site to sow seeds of doubt, casting a shadow over the efficacy of the democratic voting system and eroding the integrity of election results in that state.

Hacking is one of the many attacks on infrastructure that a state or country may encounter during a sensitive time like general elections. The act of infiltrating systems and infrastructures is also usually just the first step of a much larger campaign aimed at destabilizing government or other organizations.

For example, once an election or government site has been hacked, cybercriminals could use the information and credentials they obtain to kick off a social engineering campaign. This could branch off into cyber espionage (which usually involves malware), data theft, distributed denial-of-service (DDoS) attacks, and extortion. Come election season, attack campaigns would widen to include the possible interception of ballots for modification, deletion, or blocking (preventing ballots from arriving at their intended destination).

Cyber espionage

Foreign spying has always been a challenge for governments, and it doesn’t just happen during the election season. Through the years, cybersecurity experts have investigated nation-state actors who had been involved in election-fueled espionage, particularly advanced persistent threat groups such as APT28, otherwise known as Fancy Bear; the Sandworm team; and APT40, otherwise known as Leviathan.

So far, persistent threat actors have infiltrated diverse targets during elections, from staffers for a particular candidate or campaign to those responsible for administering ballots or distributing election materials. They are targeted not only for the sensitive information they have access to, but also so that cybercriminals can familiarize themselves with the network’s infrastructure and identify the locations housing critical information they may use to their advantage.

Phishing

Data theft is a known and persistent problem during elections, since a huge exchange of information goes on within distinct groups: between constituents and the registration databases; in chat, email, or phone conversations among political staff and candidates; and in the credentials of electoral staff and administrators, among others. One common way of stealing such data is via phishing.

In 2017, then-French presidential candidate Emmanuel Macron confirmed that his staff and party, “En Marche!” (“Onwards!”), were targeted by several advanced phishing campaigns. However, no campaign data was stolen. And these were just a few of the “hundreds if not thousands” of attacks they received from locations inside Russia at that time.

Feike Hacquebord, a researcher at Trend Micro, confirmed these phishing campaigns, some of whose emails contained a malware payload. He also noted that these attacks had telltale signs connecting them to previous attacks targeting the campaigns of Hillary Clinton and Angela Merkel in 2016.

Perhaps the most notable political phishing story we can mention here is how President Macron’s campaign was able to turn the tide against the threat actors who were after their data. Led by Mounir Mahjoubi, the campaign’s digital director, they outplayed their attackers using a method called cyber-blurring or digital blurring.

This is a known diversionary tactic in the banking industry, and the Macron campaign used it to slow down and confuse hackers. They did this by deliberately creating fake documents, accounts, and credentials—some containing outright ridiculous information—and mixing them with real but otherwise uninteresting data. As a result, the attackers’ time was wasted, and the burden of justifying why they stole or leaked useless information was successfully shifted onto the attackers.
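As a rough illustration of the idea (not the Macron team’s actual tooling), a cyber-blurring script only needs to generate plausible fakes and shuffle them in with the real records. All names, hosts, and field layouts below are invented:

```python
import random
import secrets

# Hypothetical pools; a real deployment would use plausible,
# campaign-specific data instead.
FIRST = ["alice", "bruno", "claire", "diego", "emma"]
HOSTS = ["mail.example.org", "files.example.org"]

def decoy_credential():
    """Build one fake but plausible-looking credential record."""
    user = "%s.%02d" % (random.choice(FIRST), random.randint(1, 99))
    return {
        "login": "%s@%s" % (user, random.choice(HOSTS)),
        "password": secrets.token_urlsafe(12),  # random, never used anywhere
        "decoy": True,
    }

def blur(real_records, ratio=5):
    """Mix `ratio` decoys per real record and shuffle, so an attacker
    who exfiltrates the store cannot tell signal from noise."""
    mixed = list(real_records)
    mixed += [decoy_credential() for _ in range(ratio * len(real_records))]
    random.shuffle(mixed)
    return mixed

store = blur([{"login": "press@campaign.example", "password": "real-one", "decoy": False}])
print(len(store))  # 6 records: 1 real, 5 decoys
```

The value is asymmetric: generating noise is cheap for the defender, while separating it from real data is expensive for the attacker.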

Ransomware

Ransomware is seen as one of the big threats during election season. Some might believe that threat actors using ransomware must be motivated primarily by profit in the form of digital coins. In fact, security researchers have reason to believe that, just as with other attack methods and threats used during elections, threat actors are more motivated to undermine confidence in the results, whether at the local or state level.

“If a ransomware hits an election system, you can pay the ransom or pay a consultancy service. You can restore the data, but you still have the damage done. You cannot undo that damage,” said Lee Imrey, cybersecurity advisor at Splunk, in a candid webinar on election threats. “Because, even if you restore all the votes, you’ve lost confidence that the votes you’re restoring are valid votes.”

Possible scenarios also include poll workers being unable to access voter information databases because they are locked up, which could also happen with websites that post unofficial results on and after Election Day. Some files hit by ransomware may never be decrypted—remember that files affected by a ransomware attack are typically not 100 percent recovered. Finally, a ransomware attack could result in the outright deletion of voter databases and other sensitive data.

It is known that, like some organizations in the private sector, several local governments are ill-equipped to protect themselves from ransomware attacks. Not only do they lack the manpower, they also lack the in-house expertise needed to at least guide them on what to do before, during, and after such a devastating attack.

SIM swapping/SIM swap attack/SIM intercept attack

SIM swapping is a form of identity theft, and it isn’t a new attack. But because of that successful takeover of Jack Dorsey’s own Twitter account, SIM swapping is a thing again.

This is considered an electoral threat because, with simple social engineering tactics, attackers could take over high-profile accounts belonging to individuals involved in election proceedings, or to the politicians themselves.

Although we have yet to see SIM swap attacks against individuals related to the 2020 US general elections, there is the case of Euridice Pamela Sanchez, an Associated Students Incorporated (ASI) president and CEO candidate at California State University, East Bay (CSUEB), who was SIM swapped days before the ASI elections in March of this year. The still-unknown actor was able to delete all of her 3,000+ Instagram connections. Since all candidate campaigning was done via social media due to the ongoing pandemic, Sanchez couldn’t reach her supporters to continue her campaign. The attacker also compromised her family’s AT&T account and attempted to access her personal email.

We can only imagine the scale of damage this could cause if political candidates or poll officials—even those in lower positions who may not directly report to them—would become victims of SIM swapping at this crucial time.

DDoS attacks

DDoS attacks can, no doubt, hinder an election process at any point. Take, for example, the cyberattack that affected the UK Labour Party’s websites during the 2019 general election. The party was hit not once but twice. The first attack failed, as a National Cyber Security Centre (NCSC) spokesperson confirmed to the BBC.

A DDoS attack against websites also disrupted voting procedures in South Korea in 2011. Notably, a botnet attack from 200 devices happened in the morning, the window in which younger voters could cast their ballots before heading to work. Investigators concluded that the threat actors were aiming for a low voter turnout during this time, which in turn would have benefited South Korea’s conservative party.

Such attacks against this part of the election infrastructure could also hinder candidates and their staff from accessing data they could use to plan which campaigns to run in which areas, and whose minds they would attempt to change.

Deepfakes (and their other forms)

Deepfakes have come under the watchful eye of not just technology and cybersecurity experts but law enforcement as well. And why not? In the wrong hands—from nation-states with agendas to tip the scales in their favor to naughty miscreants who just want to disrupt—they could turn the tide of any election cycle.

Many see deepfakes as another way to make disinformation campaigns more impactful and believable. But perhaps the good thing here is that, at this point in time, people are already familiar with this technology and are actually expecting a form of a politically motivated deepfake to emerge before Election Day, especially at the last minute.

But what really worries Kathryn Harrison, founder and CEO of the DeepTrust Alliance, an alliance devoted to fighting deepfakes and misinformation in general, is the emergence of something that is not a deepfake—a video, for example—whose veracity cannot be verified. A candidate caught doing or saying something questionable could easily cry “Deepfake!” even if the video is truly legitimate. Researchers in deepfake circles call this conundrum the Liar’s Dividend.

Other foreseen worries include an authority figure reading out wrong election results, fake reports of disruptions at certain polling stations, or the miseducation of constituents on how they can stay safe on Election Day.

How can we protect ourselves from these threats?

For voters, it can be overwhelming to look at this lengthening list of election cyberthreats and feel there is so little time to prepare for, counter, or protect against them. Fret not—most challenges on this list can only be addressed by governments at the state, local, and national levels.

However, there are two things that voters must start doing (if they haven’t already):

Stay informed. Keeping up with current news, especially news touching on election cycles about to happen in your country or state, is one way of staying informed. And while it’s important to know what’s going on, it is equally if not more important to know which credible sources of news you can refer to. Disinformation could be anywhere, and we must become smarter. With the elections drawing near, expect such campaigns to pop up on social media and other platforms.

Vote. Whether you feel like going to the polling station—wearing proper protection and following the recommended guidelines, of course—or you prefer to cast your vote via mail, please vote. The more people exercise their democratic right, the less likely electoral fraud can influence the overall outcome of the elections. It’s a numbers game, and we better make sure that the numbers weigh heavy on the side of an honest election cycle.

Good luck and stay safe!

The post The informed voter’s guide to election cyberthreats appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Report: Pandemic caused significant shift in buyer appetite in the dark web

Malware Bytes Security - Thu, 09/10/2020 - 4:29pm

Last year, credentials for PayPal, Facebook, and Airbnb were among the goods in highest demand on the dark web, aka the Internet’s underground market. But due to the COVID-19 outbreak, with most of the worldwide population sheltering, working, and studying indoors, many facets of life have made a full 180-degree turn—including the criminal world.

Almost everything we do is not how we used to do it before, and this is true for public and private individuals, organizations, and governments. And it’s certainly true within the dark web.

According to a recent report, the most valuable data currently being peddled on the dark web comes from services that bring a little ease, relaxation, entertainment, and, admittedly, a little sanity to people sheltering in place.

Here are the findings

It’s no surprise to see a population on lockdown spending more time online than they normally would. And with nothing more important to do than keeping the house tidy, many have been busy binging on TV shows and movies, getting groceries delivered, and investing in mental health or learning something new. Because of this shift, data on these accounts fetches a high price tag on the dark web.

Table taken from the report, Dark Web Market Price Index 2020: Covid-19 Edition.

Data related to 72 percent of entries in the above table are noted as “New Item,” which means that they were never traded last year, and yet, they command the highest price tags in the underground market today. This not only gives us an idea of how profound the shift is in the dark web, but also solidifies what we already know: cybercriminals follow the money. And if these sources happen to be low-hanging fruit, even better.

Other data types continue to be in high demand during the pandemic. Below is a shortened list of items for sale on the dark web and what they are worth on average.

Table highlighting the top premium items for sale in the underground market, arranged per category. (Source data from the same report.)

More data on the dark web
  • Whether the world is in the middle of a pandemic or not, details related to plastic cards, such as debit and credit cards, and banking credentials remain sought-after commodities. It is, after all, practically effortless to pull off a heist when you use someone’s stolen credentials to open their account and empty it.
  • Underground vendors were also seen selling a fraud bundle comprising hacked debit card data, cryptocurrency accounts, and SIM cards. This allows criminal buyers to SIM jack accounts and siphon money into crypto accounts. Such a bundle sells for as much as $4,600.
  • Fraudsters kept their eyes on SMBs and consumers as they continue to sell details for Cash App (at $47) and Venmo ($14).
  • The price of hacked Verizon accounts ($102.50) is noted to have increased ten-fold, as they are now bundled to include customers’ personally identifiable information (PII), such as Social Security numbers (SSNs) and dates of birth. Not only can buyers use these accounts themselves, they can also use real-world data about someone to pose as them or create new, synthetic identities.
  • The lack of air travel during the pandemic has forced some people to settle for the next best thing to a vacation: a staycation. And Airbnb offers the perfect service for this. Airbnb accounts are now more prized than ever. Valued at $13.50, these accounts can be used to create fake listings or as part of a bigger phishing campaign.
  • Hacked accounts from health and wellness services like Peloton, Headspace, and Fitbit (all sold for $7) are used for identity theft and potential house burglary using GPS location data.
  • It’s interesting to note that Facebook continues to be the social media platform of choice for cybercriminals. With hacked accounts now valued at $7.79, the platform is still a potent avenue to find and reach targets for various social engineering campaigns. It’s likely that its value will increase as election day draws near.
  • Scammers love entertainment services as much as we do, so it’s no surprise for them to start asking for stolen accounts for Netflix (sold at $6), Disney+ (sold at $7), YouTube Premium (sold at $7.50), and Spotify (sold at $3.50).
  • Hacked student emails (sold at $6) are hot, presumably because of the “.edu” domain that comes with them. For a targeted campaign, this would be especially useful, as it lends legitimacy to the content of the email and the purported sender.
  • Perhaps what the security community should keep an eye out for are accounts related to new content platforms, OnlyFans (sold at $16) and MasterClass (sold at $6). It is still unclear why such accounts are in high demand and how they are used to commit crime.
Some points to ponder

Because there is an abundance of newly hacked data being peddled in the underground, one might wonder if this is just because of the pandemic, and whether such goods would eventually lose their value—if not kill the market entirely—once a vaccine is found and life goes back to normal. So we asked Simon Migliano, the head of research behind the report, and he thinks these accounts will continue to sell.

“The reality is that even if people do stop using services such as Instacart or Peloton as they return to picking up their own groceries or going to the gym, it’s unlikely that they will completely delete their accounts when they cancel their subscriptions,” Migliano said. “This abandonment of unused accounts is an aspect of consumer behavior online commonly exploited by cybercriminals to harvest personal data.”

Migliano also asserted that, if not for the pandemic, these peddled goods would have looked quite different. “Had there not been a pandemic, we would have seen many more travel brand account credentials for sale, such as for Uber, Expedia, and JetBlue. I would also have expected to see a much greater range of online retail beyond Amazon and big box stores like Walmart.”

Time to wise up on cybersecurity best practices

If you, dear reader, are worried about the data found for sale on the dark web or have accounts on any of these sites and services, it’s a good idea to start taking computer security hygiene seriously. If you’re not sure where to begin, here are some quick and helpful tips:

  • Use a password manager. Password managers can hold the many account passwords our memories cannot, encrypt them, and sometimes periodically change them to keep them away from prying criminal eyes. Another option is to use a hardware authentication device, or hardware security key, and there are plenty of them on the market for you to check out.
  • Always have two-factor authentication (2FA) enabled on all your accounts.
  • Spring clean online accounts you don’t use or rarely use. Much like the apps we install on our phones but never get around to using, forgotten accounts pile up. Make it a point to track down accounts you own and delete those you haven’t used for months or years. It’s a bit of a chore, yes, but like forgotten apps, these accounts could be unlocked doors just waiting for cybercriminals to open.
  • Keep an eye out for notifications of account breaches. Some of the services we use are responsible enough to let us know when something has gone wrong. But there are also services that neglect this important step. If you’re unsure whether your account for a certain service has been compromised, try visiting and sending a query to Have I Been Pwned.
  • Update software on all devices you use.
  • Install software that can protect you from malware and harmful sites.
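The Have I Been Pwned check mentioned above can also be scripted. The site’s Pwned Passwords range API uses k-anonymity: only the first five hex characters of the password’s SHA-1 hash ever leave your machine, and the suffix matching happens locally. A minimal Python sketch, using a canned sample instead of a live call (the real response from `api.pwnedpasswords.com/range/<prefix>` is lines of `SUFFIX:COUNT`):

```python
import hashlib

def hash_split(password):
    """SHA-1 the password and split into the 5-char prefix that is sent
    to the range API and the suffix that stays on your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Scan the API's 'SUFFIX:COUNT' lines for our suffix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_split("password")
print(prefix)  # 5BAA6 -- the only thing that would leave your machine

# A tiny, illustrative sample of what the range endpoint returns:
sample = "1E4C9B93F3F0682250B6CF8331B7EE68FD8:3861493\n0018A45C4D1DEF81644B54AB7F969B88D65:1"
print(breach_count(suffix, sample))  # a nonzero count: time to change that password
```

A nonzero count means the password has appeared in known breaches and should never be used again.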

Now is as good a time as any to start making a habit of practicing effective security methods, whether you are still sheltering at home or have ventured out into the world. Equip yourself with knowledge and common-sense online behaviors, and you can protect against threats from the dark web or anywhere else.

Stay safe!

The post Report: Pandemic caused significant shift in buyer appetite in the dark web appeared first on Malwarebytes Labs.


Malvertising campaigns come back in full swing

Malware Bytes Security - Wed, 09/09/2020 - 1:07pm

Malvertising campaigns leading to exploit kits are nowhere near as common these days. Indeed, a number of threat actors have moved on to other delivery methods instead of relying on drive-by downloads.

However, occasionally we see spikes in activity that are noticeable enough that they highlight a successful run. In late August, we started seeing a Fallout exploit kit campaign distributing the Raccoon Stealer via high-traffic adult sites. Shortly after we reported it to the ad network, the same threat actor came back again using the RIG exploit kit instead.

Then we saw possibly the largest campaign to date on top site xhamster[.]com from a malvertiser we have tracked for well over a year. This threat actor has managed to abuse practically all adult ad networks but this may be the first time they hit a top publisher.

Malvertising on popular ad network

The first malicious advertiser we observed was able to bid for ads on a number of adult sites by targeting users running Internet Explorer without any particular geolocation restriction, although the majority of victims were in the US.

Figure 1: Victims by country on the left, adult sites traffic on the right

In this campaign, the crooks abused the popular ad network ExoClick by using different redirection pages. However, each time we were able to notify the ad network and get them shut down quickly.

The first domain they used was inteca-deco[.]com, which was set up as a web design agency but was visibly a decoy page to the trained eye.

Figure 2: Decoy page used as a gate to exploit kit

Simple server-side cloaking performs the redirect to a Fallout exploit kit landing page, which attempts to exploit CVE-2019-0752 (Internet Explorer) and CVE-2018-15982 (Flash Player) before dropping the Raccoon Stealer.

Figure 3: Traffic for Fallout exploit kit

About 10 days later, another domain, websolvent[.]me, became active, using a different redirection technique: a 302 redirect, also known as 302 cushioning. This time we saw the RIG exploit kit, which also delivers Raccoon Stealer.

Figure 4: Traffic for RIG exploit kit

Beyond a common payload, those two domains are also related. A RiskIQ crawl confirms a relationship between these two domains, where the parent host was caught doing a meta refresh redirect to the child:
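Meta refresh redirects like the one that tied these domains together are easy to flag during a crawl. A simplified, hypothetical sketch follows; a production crawler would use a real HTML parser rather than a regex, since attackers obfuscate markup in ways a regex will miss:

```python
import re

# Simplified pattern for <meta http-equiv="refresh" content="N;url=...">.
META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*'
    r'content=["\']?\s*\d+\s*;\s*url=([^"\'>]+)',
    re.IGNORECASE,
)

def find_meta_refresh(html):
    """Return the redirect target if the page contains a meta refresh."""
    match = META_REFRESH.search(html)
    return match.group(1).strip() if match else None

# Invented example page; real gates hide this in otherwise benign markup.
page = '<html><head><meta http-equiv="refresh" content="0;url=https://gate.example/landing"></head></html>'
print(find_meta_refresh(page))  # https://gate.example/landing
```

Logging these targets across many crawled pages is one cheap way to surface parent/child gate relationships like the one shown in Figure 5.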

Figure 5: PassiveTotal’s host pairs

Malvertising on top adult site gets maximum reach

The second malvertiser (‘malsmoke’) is one that we have tracked diligently over the past several months and whose end payload is often the Smoke Loader malware. It is by far the most daring and successful one in that it goes after larger publishers and a variety of ad networks. However, up until now we had only seen them on publishers from the adult industry that are still relatively small in scale.

In this instance, the threat actor was able to abuse the Traffic Stars ad network and place their malicious ad on xhamster[.]com, a site with just over 1.06 billion monthly visits.

The gates used by this group also use a decoy site and over time they have registered domains mocking ad networks and cloud providers.

Figure 6: Malicious Popunder on xhamster (brought to the forefront)

The redirection mechanism is more sophisticated than those used in other malvertising campaigns. There is some client-side fingerprinting and connectivity checks to avoid VPNs and proxies, only targeting legitimate IP addresses.

Figure 7: Traffic for xhamster malvertising

Interestingly, this Smoke Loader instance also downloads Raccoon Stealer and ZLoader.

Malsmoke is probably the most persistent malvertising campaign we have seen this year. Unlike other threat actors, this group has shown that it can rapidly switch ad networks to keep its business uninterrupted.

Figure 8: Malvertising campaigns related to malsmoke

Still using Internet Explorer?

Threat actors still leveraging exploit kits to deliver malware is one thing, but end users browsing with Internet Explorer is another. Despite recommendations from Microsoft and security professionals, we can only witness that there are still a number of users (consumer and enterprise) worldwide that have yet to migrate to a modern and fully supported browser.

As a result, exploit kit authors are squeezing the last bit of juice from vulnerabilities in Internet Explorer and Flash Player (due to retire for good next year).

Malwarebytes customers have long been protected from malvertising and exploit kits. We continue to track and report the campaigns we run into to help do our part in keeping the Internet safer.

Indicators of compromise

Gates used in malvertising campaign pushing Raccoon Stealer


Raccoon Stealer


Raccoon Stealer C2s


Smoke Loader


Smoke Loader C2s


Gates used in the malsmoke campaign


Tweets referencing the malsmoke campaign

https://twitter[.]com/MBThreatIntel/status/1245791188281462784 https://twitter[.]com/FaLconIntel/status/1232475345023987713 https://twitter[.]com/nao_sec/status/1231149711517634560 https://twitter[.]com/tkanalyst/status/1229794466816389120 https://twitter[.]com/nao_sec/status/1209090544711815169

The post Malvertising campaigns come back in full swing appeared first on Malwarebytes Labs.


A week in security (August 31 – September 6)

Malware Bytes Security - Mon, 09/07/2020 - 10:24am

Last week on Malwarebytes Labs, we dug into security hubris on the Lock and Code podcast, explored ways in which Apple’s notarization process may not be hitting all the right notes, and detailed a new web skimmer. We also explained how to keep distance learners secure, talked about PCI DSS compliance, and revealed that SMB security posture is weakened by COVID-19.

Other cybersecurity news
  • School’s out for cyber attacker: Arrests made after multiple DDoS attacks target district networks (Source: Miami-Dade office of communications)
  • Long arm of the law: British citizen extradited to the US regarding $2m in scam charges (Source: The Register)
  • Warning signs: Your servers could be at risk should you spot cryptomining activity taking place (Source: Help Net Security)
  • Election threats: How ransomware could spell trouble for the upcoming US election (Source: GovTech)
  • Lloyds Bank phish warning: A scam SMS attack is the order of the day for this bank’s customers (Source: Computer Weekly)
  • COVID-19 scammers play on data breach fears: An interesting look at how old breach data is being repackaged to coax payment information from potential victims (Source: The Record)
  • Fake ASDA mails in circulation: Missives offering entry into a competition for a £1,000 gift card should be ignored (Source: My London)
  • Ad scams on TikTok: Researchers look at some of the ways bad ads make their way to the person holding the device (Source: Tenable)
  • I can’t dance to this: Warner music group stores compromised by hackers (Source: Bleeping Computer)
  • Fakes on Facebook: The social media giant takes down fake content run by a US-based PR firm (Source: BuzzFeed)

Stay safe, everyone!

The post A week in security (August 31 – September 6) appeared first on Malwarebytes Labs.

SMB cybersecurity posture weakened by COVID-19, Labs report finds

Malware Bytes Security - Fri, 09/04/2020 - 11:00am

In August, Malwarebytes Labs analyzed the damage caused by COVID-19 to business cybersecurity. Because of immediate, mandated transitions to working from home (WFH), businesses across the United States suffered more data breaches, lost more dollars, and increased their overall attack surfaces, all while experiencing a worrying lack of cybersecurity awareness among workers and IT and security directors alike.

Today, we have parsed the data to understand the pandemic’s effect on, specifically, small- and medium-sized businesses (SMBs).

The data on SMB cybersecurity is troubling.

Despite smart maneuvering by some SMBs—like those that provided cybersecurity trainings focused on WFH threats, or those that refrained from rolling out a new software tool because of its security or privacy risks—28 percent of SMBs still paid unexpected expenses to address a malware attack, and 22 percent suffered a security breach due to a remote worker.

Those numbers are higher than the averages we found for companies of all sizes in August—by a respective 4 percent and 2 percent.

The numbers don’t look good. But perhaps more worrying than the actions that befell our respondents are the actions they might fail to take themselves. For example, while a majority of SMBs said that they planned to install a more permanent WFH model for employees in the future, the same number of SMBs said they did not plan to deploy an antivirus solution that can specifically protect those distributed workforces.

Further, while SMBs widely agreed that they were using more video conferencing, online communication, and cloud storage platforms during WFH—thus expanding their online attack surface—a worrying number of respondents said they did not complete any cybersecurity or online privacy reviews of those software tools before making them available to employees.

The cybersecurity posture of organizations of all sizes, including SMBs, can and should be taken seriously—especially as WFH becomes the new normal.

A closer look at SMB cybersecurity

Today’s data represents a follow-up to our August report, Enduring from Home: COVID-19’s Impact on Business Security, in which we surveyed more than 200 IT and cybersecurity executives, directors, and managers from businesses of all sizes. Our analysis today takes a magnifying glass to the more than 100 respondents who work for companies that have between 100 and 1,249 employees.

We separated the data into three bands according to company size: companies with 100–349 employees; companies with 350–699 employees; and companies with 700–1,249 employees.

At times, certain patterns or unique findings emerged within those bands.

For example, larger SMBs had far greater concerns about the effectiveness of a remote IT workforce. When asked about their biggest cybersecurity concerns with employees now working remotely, 50 percent of respondents working at companies with 700–1,249 employees said “our IT support may not be as effective in supporting remote workers.”

Respondents from smaller organizations, however, were not as concerned. Only 27.3 percent of respondents from the smallest businesses we surveyed (100–349 employees) and 21.6 percent of midsized companies (350–699 employees) answered the same.

Intuitively, this makes sense—larger companies have more employees and more potential opportunities for ad-hoc cybersecurity and IT issues that should be addressed. But without an office, those issues might be ignored by employees. Similarly, those issues might become so frequent that they overwhelm remote IT workers.

Elsewhere in the data, in at least one situation, we found a potential correlation between company size and pandemic impact.

Like we said above, across all SMBs, 28 percent said they paid unexpected expenses to address a malware attack.

But that percentage increased depending on the size of the company affected. Surprise malware expenses hit 21.2 percent of companies with 100–349 employees, 29.7 percent of companies with 350–699 employees, and 30.4 percent of companies with 700–1,249 employees.

Maybe, then, there is some truth to the age-old saying: The bigger they are, the harder they fall.

Not every discovered trend was worrying, though.

Good trends in SMB cybersecurity

The immediate transition to WFH hit businesses everywhere, no matter their size. With no preparation time and sometimes lacking clarity from local and state governments for what was considered safe, businesses were forced to chart their own paths.

Despite these pressures, many SMBs rose to the occasion to protect their businesses and their employees, while also providing their workers with the tools and software necessary to succeed in their roles.

For example, 58.2 percent of respondents said their business provided work-issued devices as needed, and 41.4 percent said their business deployed previously unused software tools to maintain communication and productivity. Further, 56.9 percent of respondents said their business performed a cybersecurity and online privacy analysis of newly deployed software tools, while 21.6 percent said that those reviews led to a decision to not deploy a software tool.

Finally, 55.2 percent of respondents said their business provided cybersecurity trainings focused on the specific cybersecurity threats of WFH, with information on the importance of secured home networks, strong passwords, and unauthorized device access.

As SMBs showed promising action in the immediate transition to WFH, they also responded with encouraging preparations for the future.

More than half—56.9 percent—of respondents said their business would “develop stronger remote security policies,” 50 percent said their business would “host more cybersecurity trainings tailored for working from home,” and 48.2 percent said their business would “develop cybersecurity and online privacy reviews for new, necessary software in the transition to working from home.”

That last point is a welcome one. Though, as we showed, 56.9 percent of respondents said their business “performed a cybersecurity and online privacy analysis of any newly-deployed software tools,” those reviews may have been ad-hoc. Codifying these types of reviews into a broader set of policies is a good practice.

While all of these are encouraging trends, we cannot neglect some of the more worrying data points. In fact, one of our survey respondents accurately described some of the same risks that we uncovered.

“Employees are not as vigilant as they would be working from home about potential cyber attacks,” said a Florida IT director at a company of 100–349 employees. “We’ve seen some lax efforts from some of our better, more observant employees in the last few months.”

Conflicting postures in SMB cybersecurity

In our main report in August, we found potential cases of security hubris—the simple phenomenon in which a business believes it is more secure than it actually is. In our deeper analysis of SMB cybersecurity, similar trends emerged.

For example, when we asked SMB respondents to rank their preparedness to transition to WFH on a scale from 1–10, a majority ranked themselves highly—62 percent gave their business an 8 or higher, and 74.1 percent gave their business a 7 or higher.

However, our respondents’ actual transition to WFH did not involve the type of preparation and cybersecurity protection that would typically warrant such high evaluations.

Yes, 55.2 percent said they provided cybersecurity trainings focused on the specific cybersecurity threats of WFH, but think about the 44.8 percent who did not respond that way. Yes, 57 percent said they performed a cybersecurity and online privacy analysis of new software tools, but that likely means that more than 40 percent did not. Also, only 34.5 percent of respondents said they deployed a new antivirus tool for devices provided by the organization, which leaves us scratching our heads about the roughly 65 percent who did not say the same. What gives?

Amidst the transition to WFH, our SMB respondents entirely agreed on one aspect—they are using more tools, more frequently.

We found that 81.9 percent of SMB respondents said that their usage of video conferencing platforms, like Zoom and Microsoft Teams, had increased “slightly more” or “significantly more,” 75 percent said the same about their increased use of online instant messaging platforms, and 69.8 percent said the same about their increased use of cloud storage platforms. Relatedly, 33 percent of respondents said they are using personal devices for work more often than their work-issued device, compared to the time before the pandemic.

Put into perspective, more software tools being used more frequently, with some employees reporting more frequent personal device usage, all points to one big problem—an increased attack surface.

And yet, even with this hard data showing an increased attack surface, 65.5 percent of respondents said their organizations were at least “equally secure” as they were before the pandemic; within those numbers, 35.4 percent went further, saying their business was actually “slightly more” or “significantly more” secure.

On our podcast Lock and Code, security evangelist and Malwarebytes Labs director Adam Kujawa explained why these positions are likely impossible to square.

“For the most part, I don’t see how people can actually say they’re more secure,” Kujawa said about the results from our broader COVID-19 report in August. “There may be an idea that, because folks are distributed—because remote workers are no longer located in a single, physical space—that they are somehow decentralized, and therefore harder to gain access to by cybercriminals.”

Kujawa continued: “The reality is that that is complete baloney.”

The clearest discrepancy between the words and the actions of SMBs came in the responses to their future. When asked about future plans to protect their businesses, 54.3 percent of SMB respondents said they would “install a more permanent work-from-home model for employees who do not need to be in the office every day.” However, just 38.8 percent said they would “deploy an antivirus solution that can better handle a more dispersed, remote workforce.”

This is disappointing because it seems so obvious. Any plans to install a more permanent workforce must include plans to protect that workforce.

Future proof

The advice that we offer to bolster SMB cybersecurity is similar to the advice we had for businesses of all sizes that were hit by the pandemic. Companies can come in many, many sizes, but none of those sizes are too small to care about cybersecurity.

You can read the full report to get a better understanding of those steps. In the meantime, though, if you’re really stumped, seriously consider an antivirus solution.

The post SMB cybersecurity posture weakened by COVID-19, Labs report finds appeared first on Malwarebytes Labs.

PCI DSS compliance: why it’s important and how to adhere

Malware Bytes Security - Thu, 09/03/2020 - 4:57pm

PCI DSS is short for Payment Card Industry Data Security Standard. Every party involved in accepting credit card payments is expected to comply with the PCI DSS. The PCI Standard is mandated by the card brands, but administered by the Payment Card Industry Security Standards Council (PCI SSC). The standard was created to increase controls around cardholder data to reduce credit card fraud.

The PCI Security Standards Council’s mission is to enhance global payment account data security by developing standards and supporting services that drive education, awareness, and effective implementation by stakeholders.

Compliance will ensure that a company can uphold a positive image and build consumer trust. This also helps build consumer loyalty, since customers are more likely to return to a service or product from a company they consider to be trustworthy.

What exactly is PCI DSS?

PCI DSS is an international security standard that was developed in cooperation between several credit card companies. The PCI DSS tells companies how to keep their card and transaction data safe.

When the PCI DSS was published in 2004, it was expected that organizations would achieve effective and sustainable compliance within about five years. Some 15 years later, less than half of organizations maintain programs that prevent PCI DSS security controls from falling out of place within a few months after formal compliance validation. According to the 2019 Verizon Payment Security Report, PCI compliance sustainability has been trending downward since 2017.

An increase in online transactions

One of the side effects of the COVID-19 pandemic has been an increase in online transactions. As more people worldwide have started to work from home and practice social distancing to combat the spread of COVID-19, businesses must prepare to handle a higher percentage of online transactions.

After all, it is likely that these online customers will continue to shop online when they learn to appreciate the ease of use, especially if they are confident about the security of their online transactions. However, with this rise in the frequency of digital payments comes the increased threat of data breaches and digital fraud.

The elements of compliance

A recent Bank of America report states that small businesses are protecting themselves by implementing industry security standards, like PCI compliance. Specifically, PCI Compliance Requirement 5 indicates that you must protect all systems against malware and regularly update anti-malware software. PCI DSS Requirement 5 has four distinct elements that, in practice, need to be addressed daily:

  • 5.1: For a sample of system components, including all operating system types commonly affected by malicious software, verify that anti-malware software is deployed.
  • 5.2.b: Examine anti-malware configurations, including the master installation of the software, to verify anti-malware mechanisms are configured to perform automatic updates and periodic scans.
  • 5.2.d: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware’s software log generation is enabled, and logs are retained per PCI DSS Requirement 10.7.
  • 5.3.b: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware software cannot be disabled or altered by users.

Basically, this boils down to our regular advice pillars:

  • Make sure software (including anti-malware) is updated.
  • Perform automatic and/or periodic scans for malware.
  • Log and retain the results of those scans.
  • Make sure protection software (especially anti-malware) can’t be disabled.
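These pillars lend themselves to automated auditing. As a minimal, hypothetical sketch—the record fields such as av_installed are illustrative stand-ins for whatever an endpoint management tool actually reports, not any real product's schema—an endpoint record might be checked against them like this:

```python
from datetime import date, timedelta

def req5_findings(endpoint, today):
    """Return the Requirement 5 pillars an endpoint record fails.

    Field names (av_installed, last_definition_update, scans_logged,
    tamper_protected) are illustrative, not from a real product.
    """
    findings = []
    if not endpoint.get("av_installed"):
        findings.append("5.1: anti-malware not deployed")
    last_update = endpoint.get("last_definition_update")
    if last_update is None or today - last_update > timedelta(days=1):
        findings.append("5.2.b: automatic updates not current")
    if not endpoint.get("scans_logged"):
        findings.append("5.2.d: scan log generation disabled")
    if not endpoint.get("tamper_protected"):
        findings.append("5.3.b: users can disable anti-malware")
    return findings

# Example: a server whose definitions are a week stale and whose
# protection users can switch off.
server = {
    "av_installed": True,
    "last_definition_update": date(2020, 8, 27),
    "scans_logged": True,
    "tamper_protected": False,
}
print(req5_findings(server, today=date(2020, 9, 3)))
```

In practice, the actual testing procedures in the standard govern what an assessor checks; a script like this only helps you spot drift between formal validations.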
Common problems and objections

The first requirement (5.1) requires an organization to maintain an accurate inventory of its devices and the operating systems on those devices. However, configuration management database (CMDB) solutions are notorious for not being completely implemented. As a result, it can be quite an exercise to determine whether anti-malware software is installed on every system that needs it. If that is the case, look for a solution that provides an inventory of protected endpoints for you. You can then use that inventory to audit your CMDB and verify compliance.
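At its core, that audit is a simple set comparison. A sketch, with entirely made-up hostnames standing in for a CMDB export and an anti-malware console export:

```python
# Hypothetical hostname inventories: one exported from the CMDB, one from
# the anti-malware console. All names are made up for illustration.
cmdb_hosts = {"web01", "web02", "db01", "hr-laptop-17", "dev-mac-03"}
protected_hosts = {"web01", "db01", "hr-laptop-17"}

unprotected = sorted(cmdb_hosts - protected_hosts)  # in CMDB, but no agent
unknown = sorted(protected_hosts - cmdb_hosts)      # protected, missing from CMDB

print("Needs anti-malware:", unprotected)  # ['dev-mac-03', 'web02']
print("Missing from CMDB:", unknown)       # []
```

The second set difference is just as useful as the first: hosts that are protected but absent from the CMDB point to gaps in the inventory itself.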

The next hurdle with requirement 5.1 is that we still run into pushback from macOS and Linux users/administrators over their need to run an antivirus solution. Yet, a review of the CVE database debunks those claims.

Yes, these OSes have fewer vulnerabilities than Windows. However, they would still be “commonly affected,” given the number of vulnerabilities and the frequency with which those vulnerabilities are published. And as we have reported in the past, Mac threat detections are on the rise and actually outpace Windows in sheer volume. Using a solution that can cover all the operating systems in use in your organization can help you organize and control all your devices without adding extra software.

Sometimes, you will get pushback from server administrators who swear that any antivirus solution takes too much CPU to run and adversely affects server performance. While it’s getting better, we still regularly encounter people who make this claim but then fail to provide documented proof. (Not that we don’t believe them, as there are several legacy antivirus programs that can adversely affect performance.)

However, in most cases, the person is making these claims based on past experiences and not on trials of a more contemporary solution. No matter how you look at this, you will have to deploy anti-malware on Windows, macOS, and Linux Server endpoints to meet the PCI DSS.

Why compliance matters

Data from the Verizon Threat Research Advisory Center (VTRAC) demonstrates that a compliance program without the proper controls to protect data has a more than 95 percent probability of not being sustainable and is more likely to be the potential target of a cyberattack.

The costs of a successful cyberattack are not limited to liabilities and loss of reputation. There are also repairs to be made and reorganizations may be necessary, especially when you are dealing with ransomware or a data breach.

A data breach also involves lost opportunities and competitive disadvantages that are near impossible to quantify. The 2019 IBM/Ponemon Institute study calculated the cost of a data breach at $242 per stolen record, and more than $8 million for an average breach in the US. Ransomware is the biggest financial threat of all cyberattacks, causing an estimated $7.5 billion in damage in 2019 for the US alone.
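A back-of-the-envelope sketch using the figures cited above (the 50,000-record scenario is invented purely for illustration):

```python
COST_PER_RECORD = 242       # 2019 IBM/Ponemon figure, USD per stolen record
AVG_US_BREACH = 8_000_000   # rounded average cost of a US breach, USD

# How many stolen records the average US breach cost implies:
implied_records = AVG_US_BREACH / COST_PER_RECORD
print(f"~{implied_records:,.0f} records")  # ~33,058 records

# Cost of a hypothetical 50,000-record card data leak at that rate:
print(f"${50_000 * COST_PER_RECORD:,}")    # $12,100,000
```

Even modest leaks quickly dwarf the cost of the compliance controls that could have prevented them.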

For those companies engaged in online transactions, reputational damage can be fatal. Imagine customers shying away from the payment portal as soon as they spot your logo. PCI compliance, then, is not just a regulation—it could quite literally save your company’s bacon.

So stay safe (which in this case means staying compliant)!

The post PCI DSS compliance: why it’s important and how to adhere appeared first on Malwarebytes Labs.

How to keep K–12 distance learners cybersecure this school year

Malware Bytes Security - Wed, 09/02/2020 - 2:03pm

With the pandemic still in full swing, educational institutions across the US are kicking off the 2020–2021 school year in widely different ways, from re-opening classrooms to full-time distance learning. Sadly, as schools embracing virtual instruction struggle with compounding IT challenges on top of an already brittle infrastructure, they are nowhere near closing the K-12 cybersecurity gap.

Kids have no choice but to continue their studies within the current social and health climate. On top of this, they must get used to new learning setups—possibly multiple ones—whether they’re full-on distance learning, homeschooling, or a hybrid of in-class and home instruction.

Regardless of which of these setups school districts, parents, or guardians decide are best suited for their children, one thing should remain a priority: the overall security of students’ learning experience during the pandemic. For this, many careful and considerable preparations are needed.

New term, new terms

Parents in the United States are participating in their children’s learning like never before—and that was before the pandemic forced their hand. Now more than ever, it’s important to become familiar with the different educational settings to consider which is best suited for their family.

Full-on distance learning

Classes are held online while students are safe in their own homes. Teachers may offer virtual classes out of their own homes as well, or they may be using their empty classrooms for better bandwidth.

This setup requires families to have, ideally, a dedicated laptop or computer students can use for class sessions and independent work. In addition, a strong Internet connection is necessary to support both students and parents working from home. However, children in low-income families may have difficulties accessing this technology, unless the school is handing out laptops and hot spot devices for Wi-Fi. Often, there are delays distributing equipment and materials—not to mention a possible learning curve thanks to the Digital Divide.

Full-on distance learning provides children with the benefit of teacher instruction while being safe from exposure to the coronavirus.

Homeschool learning or homeschooling

Classes are held at home, with the parent or guardian acting as teacher, counselor, and yes, even IT expert to their kids. Nowadays, this setup is often called temporary homeschooling or emergency homeschooling. Although this is a viable and potentially budget-friendly option for some families, note that unavoidable challenges may arise along the way. This might be especially true for older children who are more accustomed to using technology in their studies.

This isn’t to say that the lack of technology use when instructing kids would result in low quality of learning. In fact, a study from Tilburg University [PDF] on the comparison between traditional learning and digital learning among kids ages 6 to 8 showed that children perform better when taught the traditional way—although, the study further noted, they are more receptive to digital learning methods. But perhaps the most relevant implication from the study is this: The role of teachers (in this article’s context, the parents and guardians) in achieving desirable learning outcomes continues to be a central factor.

Parents and guardians may be faced with the challenge of out-of-the-box thinking when it comes to creating valuable lessons for their kids that target their learning style while keeping them on track for their grade level.

Hybrid learning

This is a combination of in-class and home instruction, wherein students go to school part-time with significant social distancing and safety measures, such as wearing masks, regular sanitizing of facilities and properties, and regular cleaning of hands. Students may be split into smaller groups, have staggered arrival times, and spend only a portion of their week in the classroom.

For the rest of students’ time, parents or guardians are tasked with continuing instruction at home. During these days or hours, parents or guardians must grapple with the same stressors on time, creativity, patience, and digital safety as those in distance learning and homeschooling models.

New methods of teaching and learning might be borne out of the combination of any or all three setups listed above. But regardless of how children must continue their education—with the worst or best of circumstances in mind—supporting their emotional and mental well-being is a priority. To achieve peace of mind and keep students focused on instruction, parents must also prioritize securing their children’s devices from online threats and the invasion of privacy.

Old threats, new risks

It’s a given that the learning environments that expose children to online threats and risk their privacy the most involve the use of technology. Some are familiar, and some are born from the changes introduced by the pandemic. Let’s look at the risk factors that make K-12 cybersecurity essential in schools and in homes.

Zoombombing. This is a cyberthreat that recently picked up steam due to the increased use of Zoom, a now-popular web conference tool. Employees, celebrities, friends, and family have used this app (and apps like it) to communicate in larger groups. Now it’s commonly adopted by schools for virtual instruction hours.

Since shelter-in-place procedures were enforced, stories of Zoombombing incidents have appeared left and right. Take, for example, the case of the unknown man who hacked into a Berkeley virtual class over Zoom to expose himself to high school students and shout obscenities. What made this case notable was the fact that the teacher of that class followed the recommended procedures to secure the session, yet a breach still took place.

Privacy issues. When it comes to children’s data, privacy is almost always the top issue. And there are many ways such data can be compromised: from organizational data breaches—something we’re all too familiar with at this point—to accidental leaking, to unconsented data gathering by tools and/or apps introduced in a rush.

An accidental leaking incident happened in Oakland when administrators inadvertently posted hundreds of access codes and passwords used in online classes and video conferences to the public, allowing anyone with a Gmail account to not only join these classes but access student data.

In April 2020, a father filed a case against Google on behalf of his two kids for violating the Children’s Online Privacy Protection Act (COPPA) and the Biometric Information Privacy Act (BIPA) of Illinois. The father, Clinton Farwell, alleges that Google’s G Suite for Education service collects the data—their PII and biometrics—of children, who are aged 13 and below, to “secretly and unlawfully monitor and profile children, but to do so without the knowledge or consent of those children’s parents.”

This happened two months after Hector Balderas, the attorney general of New Mexico, filed a case against the company for continuing to track children outside the classroom.

Ransomware attacks. Educational institutions aren’t immune to ransomware attacks. Panama-Buena Vista Union School. Fort Worth Independent. Crystal Lake Community High School. These are just some of the districts—284 schools in all—that were affected by ransomware from the start of 2020 until the first week of April. Unfortunately, the pandemic won’t make them less of a target—only more.

With a lot of K-12 schools adjusting to the pandemic—often introducing tools and apps that cater to remote learning without conducting security audits—it is almost expected that something bad is going to happen. The mad scrambling to address the sudden change in demand only shows how unprepared these school districts were. It’s also unfortunate that administrative staff have to figure things out and learn by themselves how to better protect student data, especially if they don’t have a dedicated IT team. And, often, that learning curve is quite steep.

Phishing scams. In the context of the education industry, phishing scams have always been an ever-present threat. According to Doug Levin, the founder and president of the K-12 Cybersecurity Resource Center, schools are subjected to “drive-by” phishing, in particular.

“Scammers and criminals really understand the human psyche and the desire for people to get more information and to feel in some cases, I think it’s fair to say in terms of coronavirus, some level of panic,” Levin said in an interview with EdWeek. “That makes people more likely to suspend judgment for messages that might otherwise be suspicious, and more likely to click on a document because it sounds urgent and important and relevant to them, even if they weren’t expecting it.”

Security tips for parents and guardians

To ensure distance learning and homeschooled students have an uninterrupted learning experience, parents or guardians should make sure that all the tools and gadgets their kids use to start school are prepared. In fact, doing so is similar to keeping work devices secure while working from home. For clarity’s sake, let’s flesh out some general steps, shall we?

Secure your Wi-Fi
  • Make sure that the router or the hotspot is using a strong password. Not only that, switch up the password every couple of months to keep it fresh.
  • Make sure that all firmware is updated.
  • Change the router’s admin credentials.
  • Turn on the router’s firewall.
Secure their device(s)
  • Make sure students’ computers or other devices are password-protected and lock automatically after a short period of time. This way, work won’t be lost by a pet running wild or a curious younger sister smashing some buttons.

    For schools that issue student laptops, the most common operating system is ChromeOS (Chromebooks). Here’s a simple and quick guide on how parents and guardians can lock Chromebooks. The password doesn’t need to be complicated, as you and your child should be able to remember it. Decide on a pass phrase together, but don’t share it with the other kids in the house.
  • Ensure that the firewall is enabled in the device.
  • Enforce two-factor authentication (2FA).
  • Ensure that the device has end-point protection installed and running in real time.
Secure your child’s data
  • Schools use a learning management system (LMS) to track children’s activities. It is also what kids use to access resources that they need for learning.

    Make sure that your child’s LMS password follows the school’s guidelines on how to create a high-entropy password. If the school doesn’t specify strong password guidelines, create a strong password yourself. Password managers can usually do this for you if you feel that thinking up a complicated one and remembering it is too much of a chore.
  • It also pays to limit the use of the device your child uses for studying to only schoolwork. If there are other devices in the house, they can be used to access social media, YouTube, video games, and other recreational activities. This will lessen their chances of encountering an online threat on the same device that stores all their student data.
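For the curious, here is a minimal sketch of what “high entropy” looks like in practice, using Python’s cryptographic random number generator. The word list is purely illustrative; a real passphrase should draw from a much larger list:

```python
import secrets
import string

def strong_password(length=16):
    """Build a random high-entropy password with a cryptographic RNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words, count=4):
    """Join randomly chosen words into a memorable passphrase."""
    return "-".join(secrets.choice(words) for _ in range(count))

print(strong_password())                                  # e.g. 'k2V!x9QmR8#wLp4a'
print(passphrase(["maple", "rocket", "otter", "banjo"]))  # e.g. 'otter-maple-banjo-rocket'
```

The passphrase style is usually the better fit for a shared parent-and-child password, since it trades a little entropy per character for memorability.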
Secure your child’s privacy

There was a case in which a school accidentally turned on the cameras of school-issued devices the students were using. It blew up in the news because it greatly violated students’ privacy. Although this may be considered a rare incident, assume that you can’t be too careful when the device your kid uses has a built-in camera.

Students are often required to show their faces on video conference software so teachers know they are paying attention. But for all the other time spent on assignments, it’s a good idea to cover up built-in cameras. There are laptop camera covers parents or guardians can purchase to slide across the lens when it’s not in use.

New challenges, new opportunities to learn

While education authorities have had their hands full for months now, parents and guardians can do their part, too, by keeping their transition to a new learning environment as safe and frictionless as possible. As you may already know, some states have relaxed their lockdown rules, allowing schools to re-open. However, the technology train has left the station.

Even as in-person instruction continues, educational tech will become even more integral to students’ learning experiences. Keeping those specialized software suites, apps, communication tools, and devices safe from cyberthreats and privacy invasions will be imperative for all future generations of learners.

Safe, not sorry

While IT departments in educational institutions continue to wrestle with current cybersecurity challenges, parents and guardians have to step up their efforts and contribute to K-12 cybersecurity as a whole. Lock down your children’s devices, whether they use them in the classroom or at home. True, it will not guarantee 100 percent protection from cybercriminals, but at the very least, you can be assured that your kids and their devices will remain far out of reach.

Stay safe!

The post How to keep K–12 distance learners cybersecure this school year appeared first on Malwarebytes Labs.

Categories: Malware Bytes

New web skimmer steals credit card data, sends to crooks via Telegram

Malware Bytes Security - Tue, 09/01/2020 - 10:15am

The digital credit card skimming landscape keeps evolving, often borrowing techniques used by other malware authors in order to avoid detection.

As defenders, we look for any kind of artifacts and malicious infrastructure that we might be able to identify to protect our users and alert affected merchants. These malicious artifacts can range from compromised stores to malicious JavaScript, domains, and IP addresses used to host a skimmer and exfiltrate data.

One such artifact is a so-called “gate,” which is typically a domain or IP address where stolen customer data is being sent and collected by cybercriminals. Typically, we see threat actors either stand up their own gate infrastructure or use compromised resources.

However, there are variations that involve abusing legitimate programs and services, thereby blending in with normal traffic. In this blog, we take a look at the latest web skimming trick, which consists of sending stolen credit card data via the popular instant messaging platform Telegram.

An otherwise normal shopping experience

We are seeing a large number of e-commerce sites attacked either through a common vulnerability or stolen credentials. Unaware shoppers may visit a merchant that has been compromised with a web skimmer and make a purchase while unknowingly handing over their credit card data to criminals.

Skimmers insert themselves seamlessly within the shopping experience and only those with a keen eye for detail or who are armed with the proper network tools may notice something’s not right.

Figure 1: Credit card skimmer using Telegram bot

The skimmer will become active on the payment page and surreptitiously exfiltrate the personal and banking information entered by the customer. In simple terms, things like name, address, credit card number, expiry, and CVV will be leaked via an instant message sent to a private Telegram channel.

Telegram-based skimmer

Telegram is a popular and legitimate instant messaging service that provides end-to-end encryption. A number of cybercriminals abuse it for their daily communications but also for automated tasks found in malware.

Attackers have used Telegram to exfiltrate data before, for example via traditional Trojan horses, such as the Masad stealer. However, security researcher @AffableKraut shared, in a Twitter thread, the first publicly documented instance of a credit card skimmer using Telegram for exfiltration.

The skimmer code keeps with tradition in that it checks for the usual web debuggers to prevent being analyzed. It also looks for fields of interest, such as billing, payment, credit card number, expiration, and CVV.

Figure 2: First part of the skimmer code
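To make the idea concrete, here is a minimal, hypothetical sketch of how a defender might flag scripts that probe for such payment fields and debuggers. The patterns and sample strings below are invented for illustration; real detections rely on far richer signals than keyword hits.

```python
import re

# Hypothetical field-name and anti-debugging patterns of the kind skimmers
# target and defenders scan for (illustrative only).
SUSPICIOUS_FIELDS = re.compile(r"card[_-]?number|cc[_-]?num|cvv|cvc|expir|billing", re.I)
DEBUGGER_CHECKS = re.compile(r"devtools|debugger|firebug", re.I)

def suspicion_score(js_source: str) -> int:
    """Count pattern hits in a JavaScript source string; higher = more suspect."""
    return len(SUSPICIOUS_FIELDS.findall(js_source)) + len(DEBUGGER_CHECKS.findall(js_source))

benign = "console.log('checkout page loaded');"
skimmerish = "if(window.Firebug){return} grab('#cc_number'); grab('#cvv');"
print(suspicion_score(benign), suspicion_score(skimmerish))  # 0 3
```

A score threshold like this would only be a first-pass triage filter; production scanners combine it with domain reputation and behavioral analysis.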

The novelty is the presence of the Telegram code to exfiltrate the stolen data. The skimmer’s author encoded the bot ID and channel, as well as the Telegram API request with simple Base64 encoding to keep it away from prying eyes.

Figure 3: Skimming code containing Telegram’s API
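Decoding that kind of obfuscation is trivial for an analyst. The sketch below uses a made-up placeholder string rather than the actual skimmer’s values, but the Base64 round trip is the same technique:

```python
import base64

# Made-up stand-in for the skimmer's obfuscated string; the real sample
# embeds its own Base64-wrapped bot ID, channel, and API request.
obfuscated = base64.b64encode(b"https://api.telegram.org/bot<token>/sendMessage").decode()

def deobfuscate(b64_blob: str) -> str:
    """Decode a Base64-obfuscated string recovered from skimmer code."""
    return base64.b64decode(b64_blob).decode("utf-8", errors="replace")

print(deobfuscate(obfuscated))  # reveals the Telegram API endpoint
```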

The exfiltration is triggered only if the browser’s current URL contains a keyword indicative of a shopping site and when the user validates the purchase. At this point, the browser will send the payment details to both the legitimate payment processor and the cybercriminals.

Figure 4: A purchase where credit card data is stolen and exfiltrated

The fraudulent data exchange is conducted via Telegram’s API, which posts payment details into a chat channel. That data was previously encrypted to make identification more difficult.
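For reference, a sendMessage call against Telegram’s public Bot API boils down to a single HTTP request. The sketch below only builds the request URL a defender would see on the wire; the token, chat ID, and payload are placeholders, not values from the real sample:

```python
from urllib.parse import urlencode

# Endpoint format is Telegram's documented Bot API; all values are invented.
def send_message_url(bot_token: str, chat_id: str, text: str) -> str:
    query = urlencode({"chat_id": chat_id, "text": text})
    return f"https://api.telegram.org/bot{bot_token}/sendMessage?{query}"

url = send_message_url("1234567:AAExampleToken", "-1001234567890", "exfiltrated-form-data")
print(url)
```

Because this traffic targets api.telegram.org over HTTPS, it blends in with legitimate messaging activity, which is exactly what makes it attractive to skimmer operators.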

For threat actors, this data exfiltration mechanism is efficient and doesn’t require them to keep up infrastructure that could be taken down or blocked by defenders. They can even receive a notification in real time for each new victim, helping them quickly monetize the stolen cards in underground markets.

Challenges with network protection

Defending against this variant of a skimming attack is a little more tricky since it relies on a legitimate communication service. One could obviously block all connections to Telegram at the network level, but attackers could easily switch to another provider or platform (as they have done before) and still get away with it.

Malwarebytes Browser Guard will identify and block this specific skimming attack without disabling or interfering with the use of Telegram or its API. So far we have only identified a couple of online stores that have been compromised with this variant, but there are likely several more.

Figure 5: Malwarebytes blocking this skimming attack

As always, we need to adapt our tools and methodologies to keep up with financially-motivated attacks targeting e-commerce platforms. Online merchants also play a huge role in derailing this criminal enterprise and preserving the trust of their customer base. By being proactive and vigilant, security researchers and e-commerce vendors can work together to defeat cybercriminals standing in the way of legitimate business.

The post New web skimmer steals credit card data, sends to crooks via Telegram appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Apple’s notarization process fails to protect

Malware Bytes Security - Mon, 08/31/2020 - 12:54pm

In macOS Mojave, Apple introduced the concept of notarization, a process that developers can go through to ensure that their software is malware-free (and must go through for their software to run on macOS Catalina). This is meant to be another layer in Apple’s protection against malware. Unfortunately, it’s starting to look like notarization may be less security and more security theater.

What is notarization?

Notarization goes hand-in-hand with another security feature: code signing. So let’s talk about that first.

Code signing is a cryptographic process that enables a developer to provide authentication to their software. It both verifies who created the software and verifies the integrity of the software. By code signing an app, developers can (to some degree) prevent it from being modified maliciously—or at the very least, make such modifications easily detectable.
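The integrity half of that idea can be sketched with a plain hash comparison. This is a simplified stand-in, not Apple’s actual code signature format, which additionally signs the recorded hashes with the developer’s key:

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256 stands in for the hashes embedded in a real code signature.
    return hashlib.sha256(data).hexdigest()

original = b"#!/bin/sh\necho 'legitimate installer'\n"
recorded = digest(original)                # recorded at signing time

tampered = original.replace(b"legitimate", b"malicious")
print(digest(original) == recorded)        # True: untouched code verifies
print(digest(tampered) == recorded)        # False: any modification shows up
```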

The code signing process has been integral to Mac software development for years. The user has to jump through hoops to run unsigned software, so little mainstream Mac software today comes unsigned.

However, Mac software that is distributed outside the App Store never had to go through any kind of checks. This meant that malware authors could simply obtain a code signing certificate from Apple (for a mere $99) and use it to sign their malware, enabling it to run without trouble. Of course, once such malware is discovered, Apple can revoke the certificate, thus neutralizing it. However, malware can often go undiscovered for years, as best illustrated by the FruitFly malware, which went undetected for at least 10 years.

In light of this problem, Apple created a process they call “notarization.” This process involves developers submitting their software to Apple. That software goes through some kind of automated scan to ensure it doesn’t contain malware, and then is either rejected or notarized (i.e., certified as malware-free by Apple—in theory).

In macOS Catalina, software that is not notarized is prevented from running at all. If you try, you will simply be told “do not pass Go, do not collect $200.” (Or in Apple’s words, it can’t be opened because “Apple cannot check it for malicious software.”)

The message displayed by Catalina for older versions of Spotify

There are, of course, ways to run software that is not signed or not notarized, but there’s no indication as to how this is done from the error message, so as far as legitimate developers are concerned, it’s not an option.

So how’s that working out so far?

The big question on everyone’s minds when notarization was announced at Apple’s WWDC conference in 2019 was: “How effective is this going to be?” Many were quite optimistic that this would spell the end of Mac malware once and for all. However, those of us in the security industry did not drink the Kool-Aid. Turns out, our skepticism was warranted.

There are a couple of tricks the bad guys are using in light of the new requirements. One is simple: don’t sign or notarize the apps at all.

We’re seeing quite a few cases where malware authors have stopped signing their software, and have instead been shipping it with instructions to the user on how to run it.

As can be seen from the above screenshot, the malware comes on a disk image (.dmg) file with a custom background. That background image shows instructions for opening the software, which is neither signed nor notarized.

The irony here is that we see lots of people getting infected with this malware—a variant of the Shlayer or Bundlore adware, depending on who you ask—despite the minor difficulty of opening it. Meanwhile, the installation of security software on macOS has gotten to be so difficult that we get a fair number of support cases about it.

The other option, of course, is for threat actors to get their malware notarized.

Notarize malware?! Say it ain’t so!

In theory, the notarization process is supposed to weed out anything malicious. In practice, nobody really understands exactly how notarization works, and Apple is not inclined to share details. (For good reason—if they told the bad guys how they were checking for malware, the bad guys would know how to avoid getting caught by those checks.)

All developers and security researchers know is that notarization is fast. I’ve personally notarized software quite a few times at this point, and it usually takes less than a couple of minutes between submission and receipt of the e-mail confirming success of notarization. That means there’s definitely no human intervention involved in the process, as there is with App Store reviews. Whatever it is, it’s solely automated.

I’ve assumed since notarization was first introduced that it would turn out to be fallible. I’ve even toyed with the idea of testing this process, though the risk of getting my developer account “Charlie Millered” has prevented me from doing so. (Charlie Miller is a well-known security researcher who created a proof-of-concept malware app and got it into the iOS App Store in 2011. Even though he notified Apple after getting the app approved, Apple still revoked his developer account and he has been banned from further Apple development activity ever since.)

It turns out, though, that all I had to do was wait for the bad guys to run the test for me. According to new findings, Mac security researcher Patrick Wardle has discovered samples of the Shlayer adware that are notarized. Yes, that’s correct. Apple’s notarization process has allowed known malware to pass through undetected, and to be implicitly vouched for by Apple.

How did they do that?

We’re still not exactly sure what the Shlayer folks did to get their malware notarized, but increasingly, it’s looking like they did nothing at all. On the surface, little has changed.

The above screenshot shows a notarized Shlayer sample on the left, and an older one on the right. There’s no difference at all in the appearance. But what about when you dive into the code?

This screenshot is hardly a comprehensive look into the code. It simply shows the entry point, and the names of a number of the functions found in the code. Still, at this level, any differences in the code are minor.

It’s entirely possible that something in this code, somewhere, was modified to break any detection that Apple might have had for this adware. Without knowing how (if?) Apple was detecting the older sample (shown on the right), it would be quite difficult to identify whether any changes were made to the notarized sample (on the left) that would break that detection.

This leaves us facing two distinct possibilities, neither of which is particularly appealing. Either Apple was able to detect Shlayer as part of the notarization process, but breaking that detection was trivial, or Apple had nothing in the notarization process to detect Shlayer, which has been around for a couple years at this point.

What does this mean?

This discovery doesn’t change anything from my perspective, as a skeptical and somewhat paranoid security researcher. However, it should help “normal” Mac users open their eyes and recognize that the Apple stamp does not automatically mean “safe.”

Apple wants you to believe that their systems are safe from malware. Although they no longer run the infamous “Macs don’t get viruses” ads, Apple never talks about malware publicly, and loves to give the impression that its systems are secure. Unfortunately, the opposite has been proven to be the case with great regularity. Macs—and iOS devices like iPhones and iPads, for that matter—are not invulnerable, and their built-in security mechanisms cannot protect users completely from infection.

Don’t get me wrong, I still use and love Mac and iOS devices. I don’t want to give the impression that they shouldn’t be used at all. It’s important to understand, though, that you must be just as careful with what you do with your Apple devices as you would be with your Windows or Android devices. And when in doubt, an extra layer of anti-malware protection goes a long way in providing peace of mind.

The post Apple’s notarization process fails to protect appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa

Malware Bytes Security - Mon, 08/31/2020 - 11:26am

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Adam Kujawa, security evangelist and director of Malwarebytes Labs, about “security hubris,” the simple phenomenon in which businesses are less secure than they believe themselves to be.

Ask yourself, right now, on a scale from one to ten, how cybersecure are you? Now, do you have any reused passwords for your online accounts? Does your home router still have its default password? If your business rolled out new software for you to use for working from home (WFH), do you know if those software platforms are secure?

If your original answer is now looking a little shaky, don’t be surprised. That is security hubris.

Tune in to hear about the dangers of security hubris to a business, how to protect against it, and about how Malwarebytes found it within our most recent report, “Enduring from home: COVID-19’s impact on business security,” on the latest episode of Lock and Code, with host David Ruiz.

We cover our own research on:

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

Other cybersecurity news:
  • The US government issued a warning about North Korean hackers targeting banks worldwide. (Source: BleepingComputer)
  • A team of academics from Switzerland has discovered a security bug that can be abused to bypass PIN codes for Visa contactless payments. (Source: ZDNet)
  • For governments and armed forces around the world, the digital domain has become a potential battlefield. (Source: Public Technology)
  • A new hacker-for-hire group is targeting organizations worldwide with malware hidden inside malicious 3ds Max plugins. (Source: Security Affairs)
  • The Qbot trojan evolves to hijack legitimate email threads. (Source: BetaNews)

Stay safe, everyone!

The post Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Missing person scams: what to watch out for

Malware Bytes Security - Thu, 08/27/2020 - 11:00am

Social media has a long history of people asking for help or giving advice to other users. One common feature is the ubiquitous “missing person” post. You’ve almost certainly seen one, and may well have amplified such a post on Facebook, Twitter, or even a blog.

The sheer reach and virality of social media is perfect for alerting others. It really is akin to climbing onto a rooftop with a foghorn and blasting out your message to the masses. However, the flipside is an ugly one.

Social media is also a breeding ground for phishers, scammers, trolls, and domestic abusers working themselves into missing person narratives. When this happens, there can be serious consequences.

“My friend is missing, please retweet…”

Panicked, urgent requests for information are how these missing person scams spread. They’re very popular on social media and can quickly reach the specific geographic demographic the message is aimed at.

If posted to platforms other than Twitter, they may well also come with a few links which offer additional information. The links may or may not be official law enforcement resources.

Occasionally, links lead to dedicated missing person detection organisations offering additional services.

You may well receive a missing person notice or request through email, as opposed to something posted to the wider world at large.

All useful ways to get the word out, but also very open to exploitation.

How can this go wrong?

The ease of sharing on social media is also the biggest danger where missing person requests are concerned. If someone pops up in your timeline begging for help to find a relative who went missing overnight, the impulse to share is very strong. It takes less than a second to hit Retweet or share, and you’ve done your bit for the day.


If you’re not performing due diligence on who is doing the sharing, this could potentially endanger the person in the images. Is the person sharing the information directly a verified presence on the platform you’re using, or a newly created throwaway account?

If they are verified, are they sharing it from a position of personal interest, or simply retweeting somebody else? Do they know the person they’re retweeting, or is it a random person? Do they link to a website, and is it an official law enforcement source or something else altogether?

Even if the person sharing it first-hand is verified or they know the person they’re sharing content from, that doesn’t mean what you’re seeing is on the level.

What if the non-verified person is a domestic abuser, looking for an easy way to track down someone who’s escaped their malign presence? What if the verified individual is the abuser? We simply don’t know, but by the time you may have considered this the Tweet has already been and gone.

When maliciousness is “Just a prank, bro”

Even if the person asking to find somebody isn’t some form of domestic abuser, there’s a rapidly sliding scale of badness waiting to pounce. Often, people will put these sorts of requests out for a joke, or as part of a meme. They’ll grab an image most likely well known in one geographic region but not another, and then share asking for information. This can often bleed into other memes.

“Have you seen this person? They stole my phone and didn’t realise it took a picture” is a popular one, often at the expense of a local D-list celebrity. In the same way, people will often make bad-taste jokes related to missing children. To avoid the gag being punctured early, they may avoid using imagery from actual abduction cases and instead grab a still from a random YouTube clip or an image resource.

A little girl, lost in Doncaster?

One such example of this happened in the last few weeks. A still image appeared to show a small child in distress, bolted onto a “missing” plea for help.

Well, she really was in distress…but as a result of an ice hockey player leaving his team in 2015, not because she’d gone missing or been abducted. A link was provided claiming to offer CCTV footage of the non-existent abduction, though reports don’t say where it took eager clickers.

A panic-filled message supplied with a link is a common tactic in these realms. The same thing happened with a similar story in April of 2019. Someone claimed their 10-year-old sister had gone missing outside school after an argument with her friend. However, it didn’t take long for the thread to unravel. Observant Facebook users noted that schools would have been closed on the day it supposedly happened.

Additionally, others mentioned that they’d seen the same missing sister message from multiple Facebook profiles. As with the most recent fake missing story, we don’t know where the link wound up. People understandably either steered clear or visited but didn’t take a screenshot and never spoke of it again.

“My child is missing”: an eternally popular scam

There was another one doing the rounds in June this year, once more claiming a child was missing. The page appeared for British users in Lichfield, Bloxwich, Wolverhampton, and Walsall, yet its seemingly US-centric language gave the game away: mentions of “police captains” and “downtown” hinted at its generic cut-and-paste origins. The fact that it cites multiple conflicting dates for when the kidnapping took place is also a giveaway.

This one was apparently a Facebook phish, and it was quite successful in 2020: it first appeared in March, resurfaced in May, and then put in its June performance. Scammers continue to use it because it’s easy to throw together, and it works.

Exploiting a genuine request

It’s not just scammers taking the lead and posting fake missing person scam posts. They’ll also insert themselves into other people’s misery and do whatever they can to grab some ill-gotten gains. An example of this dates back to 2013, when someone mentioned that they’d tried to reunite with their long-lost sister via a “Have you seen this person” style letter.

The letter was published in a magazine, and someone got in touch. Unfortunately, that person claimed they were holding the sister hostage and demanded a ransom. The cover story quickly fell apart after they claimed certain relatives were dead who were in fact alive, and the missing person scam was foiled.

Here’s a similar awful scam from 2016, where Facebook scammers claimed someone’s missing daughter was a sex worker in Atlanta. They said she was being trafficked and could be “bought back” for $70,000. A terrible thing to tell someone, but then these people aren’t looking to play fair.

Fake detection agencies

Some of these fakes will find you via your post-box, as opposed to merely lurking online. There have been cases where so-called “recovery bureaus” drop you a note claiming to be able to lead you to missing people. When you meet up with the arranged contacts, though, the demands for big slices of cash start coming. What information they do have is likely publicly sourced or otherwise easily obtainable (and not worth the asking price).

Looking for validation

Helping people on social media is a good thing. We just need to be careful we’re aiding the right people. While it may not always be possible for a missing person alert to come directly from an official police source, it is worth taking a little bit of time to dig into the message, and the person posting it, before sharing further.

The issue of people going missing is bad enough; we shouldn’t look to compound misery by unwittingly aiding people up to no good.

The post Missing person scams: what to watch out for appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Good news: Stalkerware survey results show majority of people aren’t creepy

Malware Bytes Security - Wed, 08/26/2020 - 11:00am

Back in July, we sent out a survey to Malwarebytes Labs readers on the subject of stalkerware—the term used to describe apps that can potentially invade someone’s privacy. We asked one question: “Have you ever used an app to monitor your partner’s phone?” 

The results were reassuring.

We received 4,578 responses from readers all over the world to our stalkerware survey and the answer was a resounding “NO.” An overwhelming 98.23 percent of respondents said they had not used an app to monitor their partner’s phone.

For our part, Malwarebytes takes stalkerware seriously. We’ve been detecting apps with monitoring capabilities for more than six years—now Malwarebytes for Windows, Mac, or Android detects, and allows users to block, applications that attempt to monitor their online behavior and/or physical whereabouts without their knowledge or consent. Last year, we helped co-found the Coalition Against Stalkerware with the Electronic Frontier Foundation, the National Network to End Domestic Violence, and several other AV vendors and advocacy groups.

It stands to reason that a readership comprised of Malwarebytes customers and people with a strong interest in cybersecurity would say “no” to stalkerware—we’ve spoken up about the potential privacy concerns associated with using these apps and the danger of equipping software with high-grade surveillance capabilities for a long time. We didn’t want to assume everyone agreed with us, but the data from our stalkerware survey shows our instincts were right.

No to stalkerware

Beyond a simple yes or no, we also asked our survey-takers to explain why they answered the way they did. The most common answer by far was a mutual respect and trust for their partner. In fact, “respect,” “trust,” and “privacy” were the three most commonly-used words by our participants in their responses:

“My partner and I share our lives … To monitor someone else’s phone is a tragic lack of trust.”

Many of those surveyed cited the Golden Rule (treat others the way you want to be treated) as their reason for not using stalkerware-type apps:

“I wouldn’t want anyone to monitor me, so therefore I would not monitor them.”

Others saw it as a clear-cut issue of ethics:

“People are entitled to their privacy as long as they do not do things that are illegal. Their rights end at the beginning of mine.”

Some respondents shared harrowing real-life accounts of being a victim of stalkerware or otherwise having their privacy violated:

“I have been a victim of stalking several times when vicious criminals used my own surveillance cameras to spy on my activity then used it to break into my apartment.”

Stalkerware vs. location sharing vs. parental monitoring

Many of those surveyed, answering either yes or no, made a distinction between stalkerware-type apps writ large and location-sharing apps like Apple’s Find My iPhone and Google Maps. Location sharing was generally considered acceptable because users volunteered to share their own information and sharing was limited to their current location.

“My wife & myself allow Apple Find My Phone to track each other if required. I was keen that should I not arrive home from a run, she could find out where I was in the case of a health issue or accident.”

Also considered okay by our respondents were the types of parental controls packaged in by default with their various devices. Many respondents specifically mentioned tracking their child’s location:

“It would not be ok with me if someone was monitoring me and I would never do it to anyone else, the only thing I would like is be able to track my child if kidnapped.”

Some parents admitted to using monitoring of some kind with their children, but it wasn’t clear how far they were willing to go and if children were aware they were being monitored:

“The only reason I have set up parental control for my son is for his safety most importantly.”

This is the murky world of parental-monitoring apps. On one end of the spectrum there are the first-party parental controls like those built into the iPhone and Nintendo Switch. These controls allow parents to restrict screen time and approve games and additional content on an ad hoc basis. Then there are third-party apps, which provide limited capabilities to track one thing and one thing only, like, say, a child’s location, or their screen time, or the websites they are visiting.

On the other end of the spectrum, there are apps in the same parental monitoring category that can provide a far broader breadth of monitoring, from tracking all of a child’s interactions on social media to using a keylogger that might even reveal online searches meant to stay private. 

You can hear more about our take on these apps in our latest podcast episode, but the long and the short of it is that Malwarebytes doesn’t recommend them, as they can feature much of the same high-tech surveillance capabilities of nation-state malware and stalkerware, but often lack basic cybersecurity and privacy measures.

Who said ‘yes’ to stalkerware?

Of course, our stalkerware survey analysis would not be complete without taking a look at the 81 responses from those who said “yes” to using apps to monitor their partners’ phones.

Again, the majority of respondents made a distinction between consensual location-sharing apps and the more intrusive types of monitoring that stalkerware can provide. Many of those who answered “yes” to using an app to monitor their partner’s phone said things like:

“My wife and I have both enabled Google’s location sharing service. It can be useful if we need to know where each other is.”


“Only the Find My iPhone app. My wife is out running or hiking by herself quite often and she knows I want to know if she is safe.”

Of the 81 people who said they use apps to monitor their partners’ phones, only nine cited issues of trust, cheating, “being lied to” or “change in partner’s behavior.” Of those nine, two said their partner agreed to install the app.

NortonLifeLock’s online creeping study

The results of the Labs stalkerware survey are especially interesting when compared to the Online Creeping Survey conducted by NortonLifeLock, another founding member of the Coalition Against Stalkerware.

This survey of more than 2,000 adults in the United States found that 46 percent of respondents admitted to “stalking” an ex or current partner online “by checking in on them without their knowledge or consent.”

Twenty-nine percent of those surveyed admitted to checking a current or former partner’s phone. Twenty-one percent admitted to looking through a partner’s search history on one of their devices without permission. Nine percent admitted to creating a fake social media profile to check in on their partners.

When compared to the Labs stalkerware survey, it would seem that online stalking is considered more acceptable when couched under the term “checking in.” For perspective, if one were to swap the word “diary” for “phone,” we don’t think too many people would feel comfortable admitting, “Hey, I’m just ‘checking in’ on my girlfriend/wife’s diary. No big deal.”

Stalkerware in a pandemic

Finally, we can’t end this piece without at least acknowledging the strange and scary times we’re living in. Shelter-in-place orders at the start of the coronavirus pandemic became de facto jail sentences for victims of stalkerware and domestic violence, imprisoning them with their abusers. No surprise, then, that The New York Times reported an increase in the number of domestic violence victims seeking help since March.

For some users, however, the pandemic has brought on a different kind of suffering. One survey respondent best summed up the current malaise of anxiety, fear, and depression: 

“No partner to monitor lol.”

We like to think, dear reader, that they’re not laughing at themselves and the challenges of finding a partner during COVID. Rather, they’re laughing at all of us.

Stalkerware resources

As mentioned earlier, Malwarebytes for Windows, Mac, or Android will detect and let users remove stalkerware-type applications. And if you think you might have stalkerware on your mobile device, be sure to check out our article on what to do when you find stalkerware or suspect you’re the victim of stalkerware.

Here are a few other important reads on stalkerware:

Stalkerware and online stalking are accepted by Americans. Why?

Stalkerware’s legal enforcement problem

Awareness of stalkerware, monitoring apps, and spyware on the rise

How to protect against stalkerware

The post Good news: Stalkerware survey results show majority of people aren’t creepy appeared first on Malwarebytes Labs.

Categories: Malware Bytes

The cybersecurity skills gap is misunderstood

Malware Bytes Security - Tue, 08/25/2020 - 11:00am

Nearly every year, a trade association, a university, an independent researcher, or a large corporation—and sometimes all of them and many in between—push out the latest research on the cybersecurity skills gap, the now decade-plus-old idea that the global economy suffers from a large and growing shortage of cybersecurity professionals.

It is, as one report said, a “state of emergency.” It would be nice, then, if the numbers made more sense.

In 2010, according to one study focused on the United States, the cybersecurity skills gap included at least 10,000 individuals. In 2015, according to a separate analysis, that number was 209,000. Also, in 2015, according to yet another report, that number was more than 1 million. Today, that number is both a projected 3.5 million by 2021 and a current 4.07 million, worldwide.

PK Agarwal, dean of the University of California Santa Cruz Silicon Valley Extension, has followed these numbers for years. He followed the data out of personal interest, and he followed it more deeply when building programs at Northeastern University Silicon Valley, the educational hub opened by the private Boston-based university, where he most recently served as regional dean and CEO. During that research, he uncovered something.

“In terms of actual numbers, if you’re looking at the supply and demand gap in cybersecurity, you’ll see tons of reports,” Agarwal said. “They’ll be all over the map.”

He continued: “Yes, there is a shortage, but it is not a systemic shortage. It is in certain sweet spots. That’s the reality. That’s the actual truth.”

Like Agarwal said, there are “sweet spots” of truth to the cybersecurity skills gap—there can be difficulty filling immediate needs on deadline-driven projects, or finding professionals trained in a crucial software tool that a company cannot spare the time to train current employees on.

But more broadly, the cybersecurity skills gap, according to recruiters, hiring managers, and academics, is misunderstood. Rather than a lack of talent, there is sometimes, on the part of companies, a lack of understanding of how to find and hire that talent.

By posting overly restrictive job requirements, demanding contradictory skillsets, refusing to hire remote workers, offering non-competitive rates, and failing to see minorities, women, and veterans as viable candidates, businesses could miss out on the very real, very accessible cybersecurity talent out there.

In other words, if you are not able to find a cybersecurity expert for your company, that doesn’t mean they don’t exist. It means you might need help in finding them.

Number games

In 2010, the Center for Strategic & International Studies (CSIS) released its report “A Human Capital Crisis in Cybersecurity.” According to the paper, “the cyber threat to the United States affects all aspects of society, business, and government, but there is neither a broad cadre of cyber experts nor an established cyber career field to build upon, particularly within the Federal government.”

Further, according to Jim Gosler, a then-visiting NSA scientist and the founding director of the CIA’s Clandestine Information Technology Office, only 1,000 security experts were available in the US with the “specialized skills to operate effectively in cyberspace.” The country, Gosler said in interviews, needed 10,000 to 30,000.

Though the cybersecurity skills gap was likely spotted before 2010, the CSIS paper partly captures a theory that still draws support today—that the skills gap is a lack of talent.

Years later, the cybersecurity skills gap reportedly grew into a chasm. It would soon span the world.  

In 2016, the Enterprise Strategy Group called the cybersecurity skills gap a “state of emergency,” unveiling research showing that 46 percent of senior IT and cybersecurity professionals at midmarket and enterprise companies described their departments’ lack of cybersecurity skills as “problematic.” The same year, separate data compiled by the professional IT association ISACA predicted that the entire world would be short 2 million cybersecurity professionals by the year 2019.

But by 2019, that prediction had already come true, according to a survey published that year by the International Information System Security Certification Consortium, or (ISC)2. The world, the group said, employed 2.8 million cybersecurity professionals, but it needed 4.07 million.

At the same time, a recent study projected that the skills gap in 2021 would be lower than the (ISC)2 estimate for today—instead predicting a need of 3.5 million professionals by next year. Throughout the years, separate studies have offered similarly conflicting numbers.

The variation can be dizzying, but it can be explained by a variation in motivations, said Agarwal. He said these reports do not exist in a vacuum, but are rather drawn up for companies and, perhaps unsurprisingly, for major universities, which rely on this data to help create new programs and to develop curriculum to attract current and prospective students.

It’s a path Agarwal went down years ago when developing a Master’s program in computer science at the Northeastern University Silicon Valley extension. The data, he said, supported the program, showing some 14,000 Bay Area jobs that listed a Master’s degree as a requirement, while neighboring Bay Area schools were on track to produce fewer than 500 Master’s graduates that year.

“There was a massive gap, so we launched the campus,” Agarwal said. The program garnered interest, but not as much as the data suggested.

Agarwal remembered thinking at the time: “What the hell is going on?” 

It turns out, a lot was going on, Agarwal said. For many students, the prospect of more student debt for potentially higher pay was not enough to get them into the program. Further, the salaries for Bachelor’s graduates and Master’s graduates were close enough that students had a difficult time seeing the value in getting the advanced degree.

That wariness toward a Master’s degree in computer science also plagues cybersecurity education today, Agarwal said, comparing it to an advanced degree in Biology.

“Cybersecurity at the Master’s level is about the same as in Biology—it has no market value,” Agarwal said. “If you have a BA [in Biology], you’re a lab rat. If you have an MA, you’re a senior lab rat.”

So, imagine the confusion for cybersecurity candidates who, when applying for jobs, find Master’s degrees listed as requirements. And yet, that is far from uncommon. The requirement, like many others, can drive candidates away.

Searching different

For companies that feel like the cybersecurity talent they need simply does not exist, recruiters and strategists have different advice: Look for cybersecurity talent in a different way. That means more lenient degree and certification requirements, more openness to working remotely, and hiring for the aptitude of a candidate, rather than going down a must-have wish list.

Jim Johnson, senior vice president and Chief Technology Officer for the international recruiting agency Robert Half, said that, when he thinks about client needs in cybersecurity, he often recalls a conference panel he watched years ago. A panel of hiring experts, Johnson said, was asked a simple question: How do you find people?

One person, Johnson recalled, said, “You need to be okay hiring people who know nothing.”

The lesson, Johnson said, was that companies should hire for aptitude and the ability to learn.

“You hire the personality that fits what you’re looking for,” Johnson said. “If they don’t have everything technically, but they’re a shoo-in for being able to learn it, that’s the person you bring up.”

Johnson also explained that restrictive job requirements can actually scare some candidates away. His advice for companies: understand what you’re looking for, but don’t make the requirements for the job so restrictive that they give potential candidates pause.

“You might miss a great hire because you required three certifications and they had one, or they’re in the process of getting one,” Johnson said.

Similarly, Thomas Kranz, a longtime cybersecurity consultant who now advises organizations on cybersecurity strategy, called job requirements that specifically demand degrees “the biggest barrier companies face when trying to hire cybersecurity talent.”

“This is an attitude that belongs firmly in the last century,” Kranz wrote. “‘Must have a [Bachelor of Science] or advanced degree’ goes hand in hand with ‘Why can’t we find the candidates we need?’”

This thinking has caught on beyond the world of recruiters.

In February, more than a dozen companies, including Malwarebytes, pledged to adopt the Aspen Institute’s “Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce.”

The very first principle requires companies to “widen the aperture of candidate pipelines, including expanding recruitment focus beyond applicants with four-year degrees or using non-gender biased job descriptions.”

At Malwarebytes, the practice of removing strict degree requirements from job descriptions has been in place for cybersecurity hires for at least a year and a half.

“I will never list a BA or BS as a hard requirement for most positions,” said Malwarebytes Chief Information Security Officer John Donovan. “Work and life experience help to round out candidates, especially for cybersecurity roles.” Donovan added that, for more junior positions, there are “creative ways to broaden the applicant pool,” such as using the recruiting programs YearUp, NPower, and others.

The two organizations, like many others, help transition individuals to tech-focused careers, offering training classes, internships, and access to a corporate world that was perhaps beyond reach.

These types of career development groups can also help a company looking to broaden its search to include typically overlooked communities, including minorities, women, disabled people, and veterans.

Take, for example, the International Consortium of Minority Cybersecurity Professionals, which creates opportunities for women and minorities to advance in the field, or the nonprofit Women in CyberSecurity (WiCyS), which recently developed a veterans’ program. WiCyS primarily works to cultivate the careers of women in cybersecurity by offering training sessions, providing mentorship, granting scholarships, and working with interested corporate partners.

“In cybersecurity, there are challenges that have never existed before,” said Lynn Dohm, executive director for WiCyS. “We need multitasking, diversity of thought, and people from all different backgrounds, all genders, and all ethnicities to tackle these challenges from all different perspectives.”

Finally, for companies still having trouble finding cybersecurity talent, Robert Half’s Johnson recommended broadening the search—literally. Cybersecurity jobs no longer need to be filled by someone located within a 40-mile radius, he said, and if anything, the current pandemic has reinforced this idea.

“The effect of the pandemic, which has shifted how people do their jobs, has made us now realize that the whole working remote thing isn’t as scary as we thought,” Johnson said.

But companies should understand that remote work is as much a boon to them as it is to potential candidates. No longer are qualified candidates limited by what they can physically commute to—now they can apply for more appealing jobs much farther from where they live.

And that, of course, will have an impact on salary, Johnson said.

“While Bay Area salaries or a New York salary, while those might not change dramatically, what is changing is the folks that might be being recruited in Des Moines or in Omaha or Oklahoma City, who have traditionally been limited [regionally], now they’re being recruited by companies on the coast,” Johnson said.

“That’s affecting local companies, which are paying those $80,000 salaries. Now those candidates are being offered $85,000 to work remotely. Now I’ve got to compete with that.”

Planning ahead

The cybersecurity skills gap need not frighten a company or a senior cybersecurity manager looking to hire. There are many actionable steps that a business can take today to help broaden their search and find the talent that perhaps other companies are ignoring.

First, stop including hard degree requirements in job descriptions. The same goes for cybersecurity certifications. Second, start accepting the idea of remote work for these teams. The value of “butts in seats” means next to nothing right now, so get used to it. Third, understand that remote work means potentially better pay for the candidates you’re trying to hire, so look at the market data and pay appropriately. Fourth, connect with a recruiting organization, like WiCyS, if you want some extra help in creating a diverse and representative team. Fifth, also consider looking inward, as your next cybersecurity hire might actually be a cybersecurity promotion.

And the last piece of advice, at least according to Robert Half’s Johnson? Hire a recruiter.

The post The cybersecurity skills gap is misunderstood appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (August 17 – 23)

Malware Bytes Security - Mon, 08/24/2020 - 12:12pm

Last week on Malwarebytes Labs, we looked at the impact of COVID-19 on healthcare cybersecurity, dug into some pandemic stats in terms of how workforces coped with going remote, and served up a crash course on malware detection. Our most recent Lock and Code podcast explored the safety of parental monitoring apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 17 – 23) appeared first on Malwarebytes Labs.

Categories: Malware Bytes