Category Archives: Security

Those are NOT your grandchildren! FTC warns of new scam

Grandkid imposters are managing to finagle a skyrocketing amount of money out of people, the Federal Trade Commission (FTC) warned on Monday.

The FTC says that its Consumer Sentinel Network has noticed a “striking” increase in the median dollar amount that people 70 and older report losing to fraud. When they started to peel back the layers, the Commission found a number of stories that involve people of that age group having mailed “huge” amounts of cash to people who pretended to be their grandchildren.

People of all ages report falling for phony family and friend imposters: the reported median loss for individuals is about $2,000, more than four times the $462 median reported across all fraud types.

But that’s nothing compared with how much money is being bled out of the elderly: those who sent cash reported a median loss of a whopping $9,000. About one in four older victims who reported losing money to a family or friend imposter said they sent cash, a far higher rate than the one in 25 who sent cash across all other fraud types.

CBS News talked to one man who got scammed in a way that the FTC says is a common ploy.

Slick scripts

It started with a phone call one morning in April, Franc Stratton told the station. The caller pretended to be a public defender from Austin, Texas, who was calling to tell Stratton that his grandson had been in a car wreck, had been driving under the influence, and was now in jail.

Don’t be afraid, the imposter told Stratton: you can bail out your grandson by sending $8,500 in cash via FedEx. It didn’t raise flags for a good reason: Stratton had done exactly that for another family member in the past.

The cherry on top: the “attorney” briefly put Stratton’s “grandson” on the phone. The fake kid sounded injured, so Stratton drove to the bank to get the cash.

Stratton went so far as to go to a local FedEx to overnight the money to an Austin address. But later that night, he said, he and his wife looked at each other and said, Scam!

Fortunately, they came to their senses in time to call FedEx to have the package returned. He got his money back, but Stratton is still frustrated. Of all people, he should know better, he says: he’s retired now, after a career spent working in intelligence, first for the Air Force and later as a cybersecurity programmer.

That’s how slick the scammers are, with their meticulously prepared scripts, and it shows that they know exactly how to put people into a panicked state, where they’re likely to make bad decisions. Stratton said he fell for it “because of the way that they scripted it.”

I’m the last person, I thought, would ever fall for a scam like this.

The FTC says that Americans have lost $41 million in the scam this year: nearly twice as much as the $26 million lost the year before.

Self-defense for grandparents

These scams are growing more sophisticated as fraudsters do their homework, looking you and/or your grandkids up on social media to lace their scripts with personal details that make them all the more convincing.

Grandparents, no matter how savvy you are, you’ve got an Achilles heel: your love for your grandchildren. The fakers know exactly how to milk that for all it’s worth.

The FTC warns that they’ll pressure you into sending money before you’ve had time to think it through. The Commission offers this advice to keep the shysters from wringing your heart and your wallet:

  • Stop. Breathe. Check it out before you send a dime. Look up your grandkid’s phone number yourself, or call another family member.
  • Don’t overshare. Whatever you share publicly on social media becomes a weapon in the arsenals of scammers: the more personal details they know about you, the more convincing they can sound.
  • Pass the information on to a friend. Even if you haven’t been targeted yourself, you probably know somebody who’s either already gotten a call like this or who will.
  • Report it. The FTC asks us all to please report these scams. US residents can do so online to the FTC. If you’re in the UK, report scams to ActionFraud.

Please report these scams. Doing so helps the authorities nail these imposters before they can victimize others.

Actions for Internal Audit on Cybersecurity, Data Risks

 

Cybersecurity and other data-related issues top the list of risks for heads of audit in 2019; here are key actions audit must take.

The number of cyberattacks continues to increase significantly as threat actors become more sophisticated and diversify their methods. It’s hardly surprising, then, that cybersecurity preparedness tops the list of internal audit priorities for 2019.

Other data and IT issues are also on the radar for internal audit, according to the Gartner Audit Plan Hot Spots. Cybersecurity topped the list of 2019’s 12 emerging risks, followed by data governance, third parties and data privacy.

“These risks, or hot spots, are the top-of-mind issues for business leaders who expect heads of internal audit to assess and mitigate them, as well as communicate their impact to organizations and stakeholders,” said Malcolm Murray, VP, Team Manager at Gartner.

What audit can do on cyberpreparedness

The Gartner 2019 Audit Key Risks and Priorities Survey shows that 77% of audit departments definitely plan to cover cybersecurity detection and prevention in audit activities during the next 12-18 months. Only 5% have no such activities planned. And yet, only 53% of audit departments are highly confident in their ability to provide assurance over cybersecurity detection and prevention risks.

Here are some steps audit can take to tackle cybersecurity preparedness:

  • Review device encryption on all devices, including mobile phones and laptops. Assess password strength and the use of multifactor authentication.
  • Review access management policies and controls, and set user access and privileges by defined business needs. Swiftly amend access when roles change.
  • Review patch management policies, evaluating the average time from patch release to implementation and the frequency of updates. Make sure patches cover IoT devices.
  • Evaluate employee security training to ensure that the breadth, frequency and content is effective. Don’t forget to build awareness of common security threats such as phishing.
  • Participate in cyber working groups and committees to develop cybersecurity strategy and policies. Help determine how the organization identifies, assesses and mitigates cyberrisk and strengthens current cybersecurity controls.
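The patch-latency metric mentioned in the third bullet is straightforward to compute once release and deployment dates are tracked. A minimal sketch with made-up data:

```python
from datetime import date

# Hypothetical patch records: (release date, deployment date)
patches = [
    (date(2019, 1, 2), date(2019, 1, 20)),
    (date(2019, 2, 5), date(2019, 2, 12)),
    (date(2019, 3, 1), date(2019, 4, 1)),
]

# Average time from patch release to implementation, in days
latencies = [(deployed - released).days for released, deployed in patches]
avg_days = sum(latencies) / len(latencies)
print(f"Average release-to-implementation time: {avg_days:.1f} days")
```

Tracking this number over time, and per system class (servers, endpoints, IoT devices), gives audit a concrete way to evaluate whether patch management is improving.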

Data governance

Big data increases the strategic importance of effective mechanisms to collect, use, store and manage organizational data, but many organizations still lack formal data governance frameworks and struggle to establish consistency across the organization. Few scale their programs effectively to meet the growing volume of data. Left unsolved, these governance challenges can lead to operational drag, delayed decision making and unnecessary duplication of efforts.

What audit can do:

  • Review the data assets inventory, which must include, at a minimum, the highest-value data assets of the organization. Assess the extent of both structured and unstructured data assets.
  • Review the classification of data and associated process and policies. Analyze how data will be retained and destroyed, encryption requirements and whether relevant categories of use have been established.
  • Participate in relevant working groups and committees to stay abreast of governance efforts and provide advisory input when frameworks are built.
  • Review data analytics training and talent assessments, identify skill gaps and plan how to fill them. Evaluate the content and availability of existing training.
  • Review the analytics tools inventory across the organization. Determine if IT has an approved vendor list for analytics tools and what efforts are being made to educate the business on the use of approved tools.

Third parties

Efforts to digitalize systems and processes add new, complex dimensions to third-party challenges that have been a perennial concern for organizations. Nearly 70% of chief audit executives reported third-party risk as one of their top concerns, but organizations still struggle to manage this risk. What audit can do:

  • Evaluate scenario analysis for strategic initiatives to analyze potential risks and outcomes associated with interdependent partners in the organization’s business ecosystem. Consider enterprise risk appetite and identify trigger events that would cause the organization to take corrective action.
  • Assess third-party contracts and compliance efforts; ensure contracts adequately stipulate information security, data privacy and nth-party requirements, and that third-party adherence to contracts is monitored.
  • Investigate third-party regulatory requirements; assess how effectively senior management communicates regulatory updates across the business and how clearly it articulates requirements for third parties.
  • Evaluate the classification of third-party risk and confirm that the business conducts random checks of third parties to ensure classifications properly account for actual risk levels.

Data privacy

Companies today collect an unprecedented amount of personal information, and the costs of managing and protecting that data are rising. Seventy-seven percent of audit departments say data privacy will definitely be covered in audit activities in the next 12–18 months. What audit can do:

  • Review data protection training and ensure that employees at all levels complete the training. Include elements such as how to report a data breach and protocols for data sharing.
  • Assess current level of GDPR compliance and identify compliance gaps. Review data privacy policies to make sure the language is clear and customer consent is clearly stated.
  • Assess data access and storage. Make sure access to sensitive information is role-based and privileges are properly set and monitored.
  • Review data breach response plans. Evaluate how quickly the company identifies a breach and the mechanisms for notifying impacted consumers and regulators.
  • Assess data loss protection and review whether tools scan data at rest and in motion.
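As a toy illustration of the kind of scanning a data-loss-prevention tool performs on data at rest, the sketch below flags strings that look like 16-digit payment card numbers; the regex and sample text are illustrative only, and real DLP products use far more sophisticated detection:

```python
import re

# Simplistic pattern for 16 digits with optional space/dash separators,
# the shape of a payment card number (illustration only)
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan(text):
    # Return any card-like strings found in the text
    return CARD_RE.findall(text)

sample = "Invoice note: card 4111 1111 1111 1111 on file; ref 12345."
print(scan(sample))  # flags the card-like number, ignores the short ref
```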

Can the police search your phone?

Enterprises work hard to protect company secrets. Here’s why the biggest threat may be law enforcement.

Can the police search your phone?

The answer to that question is getting complicated.

But it’s an important thing to know. The reason is that your phone, and the phones of every employee at your company, almost certainly contain company secrets — or provide access to those secrets.

Phones can provide access to passwords, contact lists, emails, phone call metadata, photos, spreadsheets and other company documents, location histories and much more.

Proprietary data — including information that would enable systematic hacking of company servers for sabotage, industrial espionage and worse — is protected from legal exposure by a complex set of well-understood laws and norms in the United States. But that same data is accessible from company phones.

Can the police simply take that information?  Until recently, most professionals would have said no.

Why? Because business and IT professionals tend to believe that smartphones are covered by the Fourth Amendment’s strictures against “unreasonable searches and seizures,” a protection recently reaffirmed by the Supreme Court. And smartphones are also protected by the Fifth Amendment, many would say, because divulging a passcode is akin to being “compelled” to be a “witness” against yourself.

Unfortunately, these beliefs are wrong.

The trouble with passcodes

Apple last year quietly added a new feature to iPhones designed to protect smartphone data from police searches. When you quickly press the on/off button on an iPhone five times, it turns off Touch ID and Face ID.

The thinking behind the so-called cop button is that, because police can compel you to use biometrics, but not a passcode, to unlock your phone, the feature makes it impossible for the legal system to force you to hand over information.

Unfortunately, this belief has now been undermined.

We learned this week that a Florida man named William John Montanez was jailed for six months after claiming that he forgot the passcodes for his two phones.

Montanez was pulled over for a minor traffic infraction. Police wanted to search his car. He refused. The police brought in dogs, which found some marijuana and a gun. (Montanez said the gun was his mother’s.) During the arrest, his phone got a text that said, “OMG, did they find it,” prompting police to get a warrant to search his phones. That’s when Montanez claimed he didn’t remember the passcodes, and the judge sentenced him to up to six months in jail for civil contempt.

As a precedent, this cascading series of events changes what we thought we knew about the security of the data on our phones. What started as an illegal turn ended up with jail time over the inability or unwillingness to divulge what we thought was a constitutionally protected bit of information.

We’ve also learned a lot recently about the vulnerability of location data on a smartphone.

The solution for individual users who want to keep location and other data private is to simply switch off the feature, such as the Location History feature in Google’s Android operating system. Right?

Not really. It turns out Google has been storing location data even after users turn off Location History.

The fiasco was based on false information that used to exist on Google’s site. Turning off Location History, the site said, meant that “the places you go are no longer stored.” In fact, they were stored, just not in the user-accessible Location History area.

Google corrected the false language, adding, “Some location data may be saved as part of your activity on other services, like Search and Maps.”

Stored data matters.

The FBI recently demanded from Google the data about all people using location services within a 100-acre area in Portland, Maine, as part of an investigation into a series of robberies. The request included the names, addresses, phone numbers, “session” times and duration, log-in IP addresses, email addresses, log files and payment information.

The order also said that Google could not inform users of the FBI’s demand.

Google did not comply with the request. But that didn’t keep the FBI from pushing for it.

In fact, police are evolving their methods, intentions and technologies for searching smartphones.

Police data-harvesting machines

A device called GrayKey, from a company called GrayShift, can unlock any iPhone or iPad.

GrayShift licenses the devices for $15,000 per year and up to 300 phone cracks.

It’s a turnkey system. Each GrayKey has two Lightning cables. Police need only plug in a phone, and eventually the phone’s passcode appears on the phone’s screen, giving full access.

That may be why Apple introduced in the fall a new “USB Restricted Mode” for iPhones. That mode makes it harder for police (or criminals) to crack a phone via the Lightning port.

The mode is activated by default, which is to say that the “switch” in settings for USB Accessories is turned off. With that switch off, the Lightning port won’t connect to anything after an hour of the phone being locked.

Unfortunately for iPhone users, “USB Restricted Mode” is easily defeated with a widely available $39 dongle.

And the U.S. isn’t the only country with police data-harvesting machines.

A world of trouble for smartphone data

Chinese authorities have their own technology for harvesting the data from phones, and that technology is now being deployed by police in the field. Police anywhere in the country can demand that anyone hand over a phone, which is then scanned by a device, the use of which is reportedly spreading across China.

Chinese authorities have both desktop and handheld scanner devices, which automatically extract and process emails, social posts, videos, photos, call histories, text messages and contact lists to aid them in looking for transgressions.

Some reports suggest that the devices, which are made by both Israeli and Chinese companies, are unable to crack newer iPhones but can access nearly every other kind of phone.

Another factor to be considered is that the protections of the U.S. Constitution end at the border — literally at the border.

As I’ve detailed here in the past, U.S. Customs is a “gray area” for Fifth Amendment constitutional protections.

And once abroad, all bets are off. Even in friendly, pro-privacy nations such as Australia.

The Australian government on Tuesday proposed a law called the Assistance and Access Bill 2018. If it becomes law, the act would require people to unlock their phones for police or face up to ten years in prison (the current maximum is two years).

It would empower police to legally bug or hack phones and computers.

The bill would force carriers, as well as companies such as Apple, Google, Microsoft and Facebook, to give police access to the private encrypted data of their customers if technically possible.

Failure to comply would result in fines of up to $7.3 million and prison time.

Police would need a warrant to crack, bug or hack a phone.


The bill may never become law. But Australia is just one of many nations affected by a new political will to end smartphone privacy when it comes to law enforcement.

If you take anything away from this column, please remember this: The landscape for what’s possible in the realm of police searches of smartphones is changing every day.

In general, smartphones are becoming less protected from police searches, not more protected.

That’s why the assumption of every IT department, every enterprise and every business professional — especially those of us who travel internationally on business — must be that the data on a smartphone is not safe from official scrutiny.

It’s time to rethink company policies, training, procedures and permissions around smartphones.

Employees Actively Seeking Ways to Bypass Corporate Security Protocols in 95% of Enterprises

Cyber incidents such as data theft, insider threats and malware attacks pose significant security risks, and many are caused by a company’s own employees, whether intentionally or unknowingly: insiders who already have access to corporate endpoints, data and applications.

Among the most alarming discoveries from these security assessments: 95 percent revealed employees actively researching, installing or executing security or vulnerability-testing tools in attempts to bypass corporate security.

They frequently use anonymity tools such as Tor and VPNs to hide who is trying to break corporate security.

Christy Wyatt, CEO at Dtex Systems, said: “Some of the year’s largest reported breaches are a direct result of malicious insiders or insider negligence.”

People are the weakest security link

A survey Dtex Systems published last year reported that 60 percent of all attacks are carried out by insiders. Of all insider breaches, 68 percent are due to negligence, 22 percent come from malicious insiders and 10 percent are related to credential theft. The data also shows that the first and last two weeks of employment are critical: 56 percent of organizations saw potential data theft from joining or departing employees during those windows.

Increased use of cloud services puts data at risk

64 percent of enterprises assessed found corporate information on the web that was publicly accessible, due in part to the increase in cloud applications and services.

To make matters worse, 87 percent of employees were using personal, web-based email on company devices. By completely removing data and activity from the control of corporate security teams, insiders are giving attackers direct access to corporate assets.

Inappropriate internet usage is driving risk

59 percent of organizations analyzed experienced instances of employees accessing pornographic websites during the work day.

43 percent had users who were engaged in online gambling activities over corporate networks, which included playing the lottery and using Bitcoin to bet on sporting events.

This type of user behavior is indicative of overall negligence and high-risk activities taking place.

Dtex Systems analyzed and prepared these risk assessments from 60 enterprises across North America, Europe and Asia, in industries including IT, finance, the public sector, manufacturing, pharmaceuticals, and media and entertainment.

Please consider your cybersecurity posture when it comes to your employees: again, people are the leading source of risk.

 

 

 

Vulnerable ship systems: Many left exposed to criminal hacking

Pen Test Partners’ Ken Munro and his colleagues – some of whom are former ship crew members who really understand bridge and propulsion systems – have been probing the security of ships’ IT systems for a while now, and the results are depressing: satcom terminals exposed on the Internet, admin interfaces accessible via insecure protocols, no firmware signing, easy-to-guess default credentials, and so on.

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” Pen Test Partners’ Ken Munro says, and points out that the advent of always-on satellite connections has exposed shipping to hacking attacks.

A lack of security hygiene

Potential attackers can take advantage of poor security hygiene on board, but also of the poor security of protocols and systems provided by maritime product vendors.

For example, the operational technology (OT) systems that are used to control the steering gear, engines, ballast pumps and so on, communicate using NMEA 0183 messages. But there is no message authentication, encryption or validation of these messages, and they are in plain text.

“All we need to do is man in the middle and modify the data. This isn’t GPS spoofing, which is well known and easy to detect, this is injecting small errors to slowly and insidiously force a ship off course,” Munro says.
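Munro’s point is easy to demonstrate: NMEA 0183 sentences carry only a simple XOR checksum, which detects transmission errors but authenticates nothing, so a man in the middle can alter a field and recompute the checksum. A minimal sketch (the GPGGA sentence uses standard-looking illustrative values, not data from an actual ship):

```python
def nmea_checksum(body):
    # NMEA 0183 "integrity" is a single XOR over the characters
    # between '$' and '*': an error check, not authentication
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def forge(sentence, old, new):
    # An attacker in the middle can edit any field and simply
    # recompute the checksum; no key or signature is required
    body = sentence[1:sentence.index('*')]
    tampered = body.replace(old, new)
    return f"${tampered}*{nmea_checksum(tampered)}"

# Illustrative GPGGA position sentence
body = "GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,"
original = f"${body}*{nmea_checksum(body)}"

# Nudge the latitude slightly -- the small, insidious error Munro
# describes, rather than obvious GPS spoofing
print(forge(original, "4807.038", "4807.138"))
```

The forged sentence is indistinguishable from a legitimate one to any receiver that only validates the checksum.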

They found other examples of poor security practices in a satellite communication terminal by Cobham SATCOM: things like admin interfaces accessible over telnet and HTTP, a lack of firmware signing and no rollback protection for the firmware, admin interface passwords embedded in the configuration (and hashed with unsalted MD5!), and the possibility to edit the entire web application running on the terminal.
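The unsalted-MD5 finding matters because, with no salt, a single precomputed table cracks every password at once. A small sketch of the attacker’s side (the wordlist and recovered hash are hypothetical):

```python
import hashlib

# Hypothetical wordlist an attacker might try against recovered hashes
wordlist = ["admin", "password", "1234", "letmein"]

# With no salt, one precomputed lookup table works against every account
rainbow = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

# An unsalted MD5 hash pulled from a device configuration (made up here)
stolen_hash = hashlib.md5(b"letmein").hexdigest()

print(rainbow.get(stolen_hash, "not in wordlist"))  # -> letmein
```

A per-password random salt would force the attacker to rebuild the table for each hash, which is exactly why unsalted MD5 in a readable configuration file is such a serious finding.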

They shared this with the public because all these flaws can be mitigated by setting a strong admin password, but they also found other issues that have to be fixed by the vendor (and so they disclosed them privately).

Electronic chart systems are full of flaws

ECDIS – electronic chart systems that are used for navigation – are also full of security flaws. They tested over 20 different ECDIS units and found things like old operating systems and poorly protected configuration interfaces. Attackers could ‘jump’ the boat by spoofing the position of the GPS receiver on the ship, or reconfigure the ECDIS to make the ship appear to be wider and longer than it is.

“This doesn’t sound bad, until you appreciate that the ECDIS often feeds the AIS [Automatic Identification System] transceiver – that’s the system that ships use to avoid colliding with each other,” Munro noted.

“It would be a brave captain indeed to continue down a busy, narrow shipping lane whilst the collision alarms are sounding. Block the English Channel and you may start to affect our supply chain.”

Tracking vulnerable ships

Pen Test Partners also created a vulnerable ship tracker by combining Shodan’s ship tracker, which uses publicly available AIS data, and satcom terminal version details.

The tracker does not show other details except the ship’s name and real-time position because they don’t want to help hackers, but it shows just how many vulnerable ships are out there.

Hacking incidents in the shipping industry

Hacking incidents affecting firms in the shipping industry are more frequent than the general public could guess by perusing the news. Understandably, the companies are eager to keep them on the down-low, if they can, as they could negatively affect their business competitiveness, Munro recently told me.

Some attacks can’t be concealed, though. For example, when A.P. Møller-Mærsk fell victim to the NotPetya malware, operations were disrupted and estimated losses reached several hundred million dollars.

That particular attack thankfully did not result in the company losing control of its vessels, but future attacks might lead to shipping security incidents and be more disruptive to that aspect of companies’ activities.

“Vessel owners and operators need to address these issues quickly, or more shipping security incidents will occur,” he concluded.

 

Amazon confirms that Echo device secretly shared user’s private audio [Updated]

This really should not be big news; I’ve been saying it since Alexa came out. The mic is open all the time unless you “mute” it, and data is saved and transmitted to Amazon. Make sure you understand the technology before you start adding these types of IoT devices to your home; I call them the “Internet of Threats.”

The call that started it all: “Unplug your Alexa devices right now.”

Amazon confirmed an Echo owner’s privacy-sensitive allegation on Thursday, after Seattle CBS affiliate KIRO-7 reported that an Echo device in Oregon sent private audio to someone on a user’s contact list without permission.

“Unplug your Alexa devices right now,” the user, Danielle (no last name given), was told by her husband’s colleague in Seattle after he received full audio recordings between her and her husband, according to the KIRO-7 report. The disturbed owner, who is shown in the report juggling four unplugged Echo Dot devices, said that the colleague then sent the offending audio to Danielle and her husband to confirm the paranoid-sounding allegation. (Before sending the audio, the colleague confirmed that the couple had been talking about hardwood floors.)

After calling Amazon customer service, Danielle said she received the following explanation and response: “‘Our engineers went through all of your logs. They saw exactly what you told us, exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes. ‘This is something we need to fix.'”

Danielle next asked exactly why the device sent recorded audio to a contact: “He said the device guessed what we were saying.” Danielle didn’t explain exactly how much time passed between the incident, which happened “two weeks ago,” and this customer service response.

When contacted by KIRO-7, Amazon confirmed the report and added in a statement that the company “determined this was an extremely rare occurrence.” Amazon didn’t clarify whether that meant such automatic audio-forwarding features had been built into all Echo devices up until that point, but the company added that “we are taking steps to avoid this from happening in the future.”

This follows a 2017 criminal trial in which Amazon initially fought to quash demands for audio captured by an Amazon Echo device related to a murder investigation. The company eventually capitulated.

Amazon did not immediately respond to Ars Technica’s questions about how this user’s audio-share was triggered.

Update, 5:06pm ET: Amazon forwarded an updated statement about KIRO-7’s report to Ars Technica, which includes an apparent explanation for how this audio may have been sent:
Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.

Amazon did not explain how so many spoken Alexa prompts could have gone unnoticed by the Echo owner in question. Second update: The company did confirm to Ars that the above explanation was sourced from device logs.

Ring Security Flaw Lets Unauthorized Parties Control Doorbell App

 

A security flaw found in Ring’s video doorbell can let others access camera footage even if homeowners have changed their passwords, according to media sources.

This can happen after a Ring device owner gives someone else access to the Ring app. If access was given to a partner, for example, and the relationship later turned sour, the ex-partner may still monitor the activity outside the front door using the camera, download video, and control the doorbell from the phone as an administrator.

It doesn’t matter how many times Ring device owners change their password: the Ring app never asks users to sign in again after a password change.
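The flaw is a classic failure to invalidate sessions when credentials change. One standard fix is to bind every issued token to a password “version” and bump the version on each change, which kills old sessions immediately. A toy sketch (all names hypothetical, not Ring’s actual design):

```python
import secrets

class SessionStore:
    """Toy session store: each token is bound to the password 'version'
    that was current when the token was issued."""

    def __init__(self):
        self.password_version = 1
        self.tokens = {}  # token -> password_version at issue time

    def issue_token(self):
        token = secrets.token_hex(16)
        self.tokens[token] = self.password_version
        return token

    def change_password(self):
        # Bumping the version implicitly invalidates every existing token
        self.password_version += 1

    def is_valid(self, token):
        return self.tokens.get(token) == self.password_version

store = SessionStore()
ex_partner_token = store.issue_token()
store.change_password()
print(store.is_valid(ex_partner_token))  # False: old sessions die with the old password
```

The version check costs one comparison per request, which avoids the slow bulk-revocation problem Ring’s CEO described.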

Ring was notified of the issue in early January and claimed to have removed users who were no longer authorized. However, in the test carried out by media outlet The Information’s staff, these ex-users could still access the app for several hours.

Jamie Siminoff, CEO of Ring, has acknowledged the issue and responded that kicking users off the platform apparently slows down the Ring app.

After the issue was reported, Ring made another statement, suggesting that Ring customers should never share their usernames or passwords. The company recommended that other family members or partners sign in via Ring’s “Shared Users” feature.

In this way, device owners have control over who has access and can immediately remove users if they want.

“Our team is taking additional steps to further improve the password change experience,” said Ring in a statement.

Ring was acquired by Amazon for US$1 billion at the beginning of this year. Amazon operates an in-home delivery service, Amazon Key, that relies on security devices at the front door such as smart doorbells, door locks and security cameras.

Any security flaws like the one found in Ring will make it difficult for the e-commerce giant to convince people that it’s safe for Amazon’s delivery people to enter their houses when nobody’s home.

Please make sure to secure all of your IoT devices as we know most of them are wide open to attacks.

IoT World

Honored to be speaking at IoT World May 14-17, 2018
Santa Clara Convention Center.
@MrMichaelReese #IOTWORLD #Cybersecurity

 

GitHub Survived the Biggest DDoS Attack Ever Recorded

On Wednesday, at about 12:15 pm EST, 1.35 terabits per second of traffic hit the developer platform GitHub all at once. It was the most powerful distributed denial of service attack recorded to date—and it used an increasingly popular DDoS method, no botnet required.

GitHub briefly struggled with intermittent outages as a digital system assessed the situation. Within 10 minutes it had automatically called for help from its DDoS mitigation service, Akamai Prolexic. Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets. After eight minutes, attackers relented and the assault dropped off.

The scale of the attack has few parallels, but a massive DDoS that struck the internet infrastructure company Dyn in late 2016 comes close. That barrage peaked at 1.2 terabits per second and caused connectivity issues across the US as Dyn fought to get the situation under control.

“We modeled our capacity based on five times the biggest attack that the internet has ever seen,” Josh Shaul, vice president of web security at Akamai, told WIRED hours after the GitHub attack ended. “So I would have been certain that we could handle 1.3 Tbps, but at the same time we never had a terabit and a half come in all at once. It’s one thing to have the confidence. It’s another thing to see it actually play out how you’d hope.”

Akamai defended against the attack in a number of ways. In addition to Prolexic’s general DDoS defense infrastructure, the firm had also recently implemented specific mitigations for a type of DDoS attack stemming from so-called memcached servers. These database caching systems work to speed networks and websites, but they aren’t meant to be exposed on the public internet; anyone can query them, and they’ll likewise respond to anyone. About 100,000 memcached servers, mostly owned by businesses and other institutions, currently sit exposed online with no authentication protection, meaning an attacker can access them and send them a special command packet that the server will respond to with a much larger reply.

Unlike the formal botnet attacks used in large DDoS efforts, like against Dyn and the French telecom OVH, memcached DDoS attacks don’t require a malware-driven botnet. Attackers simply spoof the IP address of their victim and send small queries to multiple memcached servers—about 10 per second per server—that are designed to elicit a much larger response. The memcached systems then return 50 times the data of the requests back to the victim.
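As a rough back-of-the-envelope illustration of why reflection plus amplification is so effective, the figures quoted above can be plugged into a short sketch. The per-query size here is an assumption for illustration only; the 10 queries per second and 50x amplification come from the article.

```python
# Back-of-the-envelope sketch of memcached reflection/amplification.
# QUERY_BYTES is an illustrative assumption; the other figures are
# the ones quoted in the article, not measurements of a real attack.

QUERY_BYTES = 1400       # assumed size of one spoofed UDP query (hypothetical)
AMPLIFICATION = 50       # responses return ~50x the request data
QUERIES_PER_SEC = 10     # spoofed queries per second, per server

def reflected_bps(num_servers: int) -> int:
    """Traffic reflected at the victim, in bits per second."""
    bytes_per_sec = num_servers * QUERIES_PER_SEC * QUERY_BYTES * AMPLIFICATION
    return bytes_per_sec * 8

# One exposed server reflects ~5.6 Mbps at the victim under these numbers;
# scale that across thousands of the ~100,000 exposed servers and the
# aggregate quickly reaches hundreds of gigabits per second.
print(f"1 server:        {reflected_bps(1):,} bps")
print(f"100,000 servers: {reflected_bps(100_000) / 1e12:.2f} Tbps")
```

The point of the sketch is that the attacker's own upstream bandwidth stays tiny: only the small spoofed queries leave the attacker's machine, while the reflectors do the heavy lifting.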

Known as an amplification attack, this type of DDoS has shown up before. But as internet service and infrastructure providers have seen memcached DDoS attacks ramp up over the last week or so, they’ve moved swiftly to implement defenses to block traffic coming from memcached servers.

“Large DDoS attacks such as those made possible by abusing memcached are of concern to network operators,” says Roland Dobbins, a principal engineer at the DDoS and network-security firm Arbor Networks who has been tracking the memcached attack trend. “Their sheer volume can have a negative impact on the ability of networks to handle customer internet traffic.”

The infrastructure community has also started attempting to address the underlying problem, by asking the owners of exposed memcached servers to take them off the internet, keeping them safely behind firewalls on internal networks. Groups like Prolexic that defend against active DDoS attacks have already added or are scrambling to add filters that immediately start blocking memcached traffic if they detect a suspicious amount of it. And if internet backbone companies can ascertain the attack command used in a memcached DDoS, they can get ahead of malicious traffic by blocking any memcached packets of that length.
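In practice, the two mitigations described above can be quite simple. A minimal sketch, assuming a Linux edge box with iptables and a memcached host you control (11211 is memcached's default port; adapt the syntax to your own firewall and init system):

```shell
# Victim/edge side: drop inbound UDP traffic sourced from memcached's
# default port -- this is what reflected amplification responses look like.
# (Illustrative iptables rule; real deployments filter upstream at scale.)
iptables -A INPUT -p udp --sport 11211 -j DROP

# Server side: bind memcached to localhost and disable its UDP listener
# entirely (-U 0), so the instance cannot be abused as a reflector.
memcached -l 127.0.0.1 -U 0
```

Disabling UDP outright is the safer default for most deployments, since the protocol's UDP support is rarely needed and is precisely what makes the amplification attack possible.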

“We are going to filter that actual command out so no one can even launch the attack,” says Dale Drew, chief security strategist at the internet service provider CenturyLink. And companies need to work quickly to establish these defenses. “We’ve seen about 300 individual scanners that are searching for memcached boxes, so there are at least 300 bad guys looking for exposed servers,” Drew adds.

Most of the memcached DDoS attacks CenturyLink has seen top out at about 40 to 50 gigabits per second, but the industry has increasingly noticed bigger attacks of up to 500 Gbps and beyond. On Monday, Prolexic defended against a 200 Gbps memcached DDoS attack launched against a target in Munich.

Wednesday’s onslaught wasn’t the first time a major DDoS attack targeted GitHub. The platform faced a six-day barrage in March 2015, possibly perpetrated by Chinese state-sponsored hackers. The attack was impressive for 2015, but DDoS techniques and platforms, particularly Internet of Things–powered botnets, have evolved and grown far more powerful since then. To attackers, though, the beauty of memcached DDoS attacks is that there’s no malware to distribute and no botnet to maintain.

The web monitoring and network intelligence firm ThousandEyes observed the GitHub attack on Wednesday. “This was a successful mitigation. Everything transpired in 15 to 20 minutes,” says Alex Henthorne-Iwane, vice president of product marketing at ThousandEyes. “If you look at the stats you’ll find that globally speaking DDoS attack detection alone generally takes about an hour plus, which usually means there’s a human involved looking and kind of scratching their head. When it all happens within 20 minutes you know that this is driven primarily by software. It’s nice to see a picture of success.”

GitHub continued routing its traffic through Prolexic for a few hours to ensure that the situation was resolved. Akamai’s Shaul says he suspects that attackers targeted GitHub simply because it is a high-profile service that would be impressive to take down. The attackers also may have been hoping to extract a ransom. “The duration of this attack was fairly short,” he says. “I think it didn’t have any impact so they just said that’s not worth our time anymore.”

Until memcached servers get off the public internet, though, it seems likely that attackers will give a DDoS of this scale another shot.

Can You Spot the Bait in a Phishing Attack?

Hackers are always trying to find creative and new ways to steal data and information from businesses. While spam (unwanted messages in your email inbox) has been around for a very long time, phishing emails have risen in popularity because they are more effective at achieving the desired endgame. How can you make sure that phishing scams don’t harm your business in the future?

Phishing attacks come in many different forms. We’ll discuss some of the most popular ways that hackers and scammers will try to take advantage of your business through phishing scams, including phone calls, email, and social media.

Phishing Calls
Do you receive calls from strange or restricted numbers? If so, chances are that they are calls that you want to avoid. Hackers will use the phone to make phishing phone calls to unsuspecting employees. They might claim to be with IT support, and in some cases, they might even take on the identity of someone else within your office. These types of attacks can be dangerous and tricky to work around, particularly if the scammer is pretending to be someone of authority within your organization.

For example, someone might call your organization asking about a printer model or other information about your technology. Sometimes they will be looking for specific data or information that might be in the system, while other times they are simply looking for a way into your network. Either way, it’s important that your company doesn’t give in to their requests, as there is no reason why anyone would ask for sensitive information over the phone. If in doubt, you should cross-check contact information to make sure that the caller is who they say they are.

Phishing Emails
Phishing emails aren’t quite as pressing as phishing phone calls because you’re not being pressured to make an immediate decision. Still, that doesn’t lessen the importance of being able to identify phishing messages. You might receive tailor-made phishing messages crafted with the sole intent of getting a specific user to hand over important information or click on a link or attachment. Either way, the end result is much the same as a phone-based phishing scam.

To avoid phishing emails, you should implement a spam filter and train your employees to identify the telltale signs of these messages: spelling errors, incorrect information, and anything that just doesn’t belong. Be aware, though, that phishing messages have become more elaborate and sophisticated over time.

Phishing Accounts
Social media makes it incredibly easy for hackers to assume an anonymous identity and use it to attack you, or, even more terrifying, to assume the identity of someone you know. It’s easy for a hacker to masquerade as someone they’re not, providing an avenue of attack that can be challenging to identify. A few key pointers: be wary of any messages that come out of the blue or seem random, and ask questions about past interactions whose answers will tip you off as to whether the sender is who they claim to be.

Ultimately, it all comes down to approaching any phishing incident intelligently and with a healthy dose of skepticism.