Category Archives: Cyber Security

Can the police search your phone?

Enterprises work hard to protect company secrets. Here’s why the biggest threat may be law enforcement.

Can the police search your phone?

The answer to that question is getting complicated.

But it’s an important thing to know. The reason is that your phone, and the phones of every employee at your company, almost certainly contain company secrets — or provide access to those secrets.

Phones can provide access to passwords, contact lists, emails, phone call metadata, photos, spreadsheets and other company documents, location histories and much more.

Proprietary data — including information that would enable systematic hacking of company servers for sabotage, industrial espionage and worse — is protected from legal exposure by a complex set of well-understood laws and norms in the United States. But that same data is accessible from company phones.

Can the police simply take that information? Until recently, most professionals would have said no.

Why? Because business and IT professionals tend to believe that smartphones are covered by the Fourth Amendment’s strictures against “unreasonable searches and seizures,” a protection recently reaffirmed by the Supreme Court. And smartphones are also protected by the Fifth Amendment, many would say, because divulging a passcode is akin to being “compelled” to be a “witness” against yourself.

Unfortunately, these beliefs are wrong.

The trouble with passcodes

Apple last year quietly added a new feature to iPhones designed to protect smartphone data from police searches. When you quickly press the on/off button on an iPhone five times, it turns off Touch ID and Face ID.

The thinking behind the so-called cop button is that, because police can compel you to use biometrics, but not a passcode, to unlock your phone, the feature makes it impossible for the legal system to force you to hand over information.

Unfortunately, this belief has now been undermined.

We learned this week that a Florida man named William John Montanez was jailed for six months after claiming that he forgot the passcodes for his two phones.

Montanez was pulled over for a minor traffic infraction. Police wanted to search his car. He refused. The police brought in dogs, which found some marijuana and a gun. (Montanez said the gun was his mother’s.) During the arrest, his phone got a text that said, “OMG, did they find it,” prompting police to get a warrant to search his phones. That’s when Montanez claimed he didn’t remember the passcodes, and the judge sentenced him to up to six months in jail for civil contempt.

As a precedent, this cascading series of events changes what we thought we knew about the security of the data on our phones. What started as an illegal turn ended up with jail time over the inability or unwillingness to divulge what we thought was a constitutionally protected bit of information.

We’ve also learned a lot recently about the vulnerability of location data on a smartphone.

The solution for individual users who want to keep location and other data private is to simply switch off the feature, such as the Location History feature in Google’s Android operating system. Right?

Not really. It turns out Google has been storing location data even after users turn off Location History.

The fiasco was based on false information that used to exist on Google’s site. Turning off Location History, the site said, meant that “the places you go are no longer stored.” In fact, they were stored, just not in the user-accessible Location History area.

Google corrected the false language, adding, “Some location data may be saved as part of your activity on other services, like Search and Maps.”

Stored data matters.

The FBI recently demanded from Google the data about all people using location services within a 100-acre area in Portland, Maine, as part of an investigation into a series of robberies. The request included the names, addresses, phone numbers, “session” times and duration, log-in IP addresses, email addresses, log files and payment information.

The order also said that Google could not inform users of the FBI’s demand.

Google did not comply with the request. But that didn’t keep the FBI from pushing for it.

In fact, police are evolving their methods, intentions and technologies for searching smartphones.

Police data-harvesting machines

A device called GrayKey, from a company called GrayShift, can unlock any iPhone or iPad.

GrayShift licenses the devices for $15,000 per year and up, which covers as many as 300 phone unlocks.

It’s a turnkey system. Each GrayKey has two Lightning cables. Police need only plug in a phone, and eventually the phone’s passcode appears on the phone’s screen, giving full access.
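Attacks like this ultimately come down to keyspace size versus attempt rate. Here is a rough sketch of the worst-case search time for numeric passcodes; the attempt rate is hypothetical, not GrayShift's actual throughput:

```python
# Rough worst-case search time for numeric passcodes, in days.
# The attempt rate is hypothetical, not GrayShift's actual throughput.
def worst_case_days(digits: int, attempts_per_second: float) -> float:
    keyspace = 10 ** digits            # e.g. 10,000 codes for 4 digits
    return keyspace / attempts_per_second / 86_400

# At a hypothetical 10 attempts per second: 4 digits fall in minutes,
# 6 digits in about a day, and a 10-digit passcode would take decades.
for digits in (4, 6, 10):
    print(digits, round(worst_case_days(digits, 10), 2))
```

The arithmetic explains why longer, alphanumeric passcodes remain a meaningful defense even against turnkey cracking hardware.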

That may be why Apple introduced in the fall a new “USB Restricted Mode” for iPhones. That mode makes it harder for police (or criminals) to crack a phone via the Lightning port.

The mode is activated by default, which is to say that the “switch” in settings for USB Accessories is turned off. With that switch off, the Lightning port won’t connect to anything after an hour of the phone being locked.

Unfortunately for iPhone users, “USB Restricted Mode” is easily defeated with a widely available $39 dongle.

And the U.S. isn’t the only country with police data-harvesting machines.

A world of trouble for smartphone data

Chinese authorities have their own technology for harvesting the data from phones, and that technology is now being deployed by police in the field. Police anywhere in the country can demand that anyone hand over a phone, which is then scanned by a device, the use of which is reportedly spreading across China.

Chinese authorities have both desktop and handheld scanner devices, which automatically extract and process emails, social posts, videos, photos, call histories, text messages and contact lists to aid them in looking for transgressions.

Some reports suggest that the devices, which are made by both Israeli and Chinese companies, are unable to crack newer iPhones but can access nearly every other kind of phone.

Another factor to be considered is that the protections of the U.S. Constitution end at the border — literally at the border.

As I’ve detailed here in the past, U.S. Customs is a “gray area” for Fifth Amendment constitutional protections.

And once abroad, all bets are off. Even in friendly, pro-privacy nations such as Australia.

The Australian government on Tuesday proposed a law called the Assistance and Access Bill 2018. If it becomes law, the act would require people to unlock their phones for police or face up to ten years in prison (the current maximum is two years).

It would empower police to legally bug or hack phones and computers.

The bill would force carriers, as well as companies such as Apple, Google, Microsoft and Facebook, to give police access to the private encrypted data of their customers if technically possible.

Failure to comply would result in fines of up to $7.3 million and prison time.

Police would need a warrant to crack, bug or hack a phone.


The bill may never become law. But Australia is just one of many nations affected by a new political will to end smartphone privacy when it comes to law enforcement.

If you take anything away from this column, please remember this: The landscape for what’s possible in the realm of police searches of smartphones is changing every day.

In general, smartphones are becoming less protected from police searches, not more protected.

That’s why every IT department, every enterprise and every business professional — especially those of us who travel internationally on business — must assume that the data on a smartphone is not safe from official scrutiny.

It’s time to rethink company policies, training, procedures and permissions around smartphones.

Employees Actively Seeking Ways to Bypass Corporate Security Protocols in 95% of Enterprises

Cyber incidents such as data theft, insider threats and malware attacks pose significant security risks, and many are caused by a company’s own employees, whether intentionally or unknowingly. Around 95 percent of these threats and activities involve employees with access to corporate endpoints, data and applications.

Among the most alarming discoveries from these security assessments: 95 percent revealed that employees were actively researching, installing or executing security or vulnerability-testing tools in attempts to bypass corporate security.

They frequently use anonymity tools such as Tor and VPNs to hide who is trying to break through corporate security.

Christy Wyatt, CEO at Dtex Systems, said, “Some of the year’s largest reported breaches are a direct result of malicious insiders or insider negligence.”

People are the weakest security link

A survey reported by Dtex Systems last year found that 60 percent of all attacks are carried out by insiders. Of insider breaches, 68 percent are due to negligence, 22 percent come from malicious insiders and 10 percent are related to credential theft. Current trends also show that an employee’s first and last two weeks on the job are critical: 56 percent of organizations saw potential data theft from joining or departing employees during those periods.

Increased use of cloud services puts data at risk

64 percent of enterprises assessed found corporate information on the web that was publicly accessible, due in part to the increase in cloud applications and services.

To make matters worse, 87 percent of employees were using personal, web-based email on company devices. By completely removing data and activity from the control of corporate security teams, insiders are giving attackers direct access to corporate assets.

Inappropriate internet usage is driving risk

59 percent of organizations analyzed experienced instances of employees accessing pornographic websites during the work day.

43 percent had users who were engaged in online gambling activities over corporate networks, which included playing the lottery and using Bitcoin to bet on sporting events.

This type of user behavior is indicative of overall negligence and high-risk activities taking place.

Dtex Systems compiled these risk assessments from 60 enterprises across North America, Europe and Asia, in industries including IT, finance, the public sector, manufacturing, pharmaceuticals, and media and entertainment.

When you assess your cybersecurity posture, consider your employees: people remain the leading source of risk.


Vulnerable ship systems: Many left exposed to criminal hacking

Pen Test Partners’ Ken Munro and his colleagues – some of whom are former ship crew members who really understand bridge and propulsion systems – have been probing the security of ships’ IT systems for a while now, and the results are depressing: satcom terminals exposed on the Internet, admin interfaces accessible via insecure protocols, no firmware signing, easy-to-guess default credentials, and so on.

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” Pen Test Partners’ Ken Munro says, and points out that the advent of always-on satellite connections has exposed shipping to hacking attacks.

A lack of security hygiene

Potential attackers can take advantage of poor security hygiene on board, but also of the poor security of protocols and systems provided by maritime product vendors.

For example, the operational technology (OT) systems that are used to control the steering gear, engines, ballast pumps and so on, communicate using NMEA 0183 messages. But there is no message authentication, encryption or validation of these messages, and they are in plain text.

“All we need to do is man in the middle and modify the data. This isn’t GPS spoofing, which is well known and easy to detect, this is injecting small errors to slowly and insidiously force a ship off course,” Munro says.
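The weakness Munro describes is visible in the message format itself. Below is an illustrative Python sketch (the sentence and the nudged value are textbook examples, not data from a real vessel): because an NMEA 0183 frame carries only a one-byte XOR checksum and no authentication, a man-in-the-middle can edit a field and re-emit a frame that still validates.

```python
# Illustrative tampering with an (unauthenticated) NMEA 0183 sentence.
# The frame's only integrity check is a one-byte XOR checksum, so a
# man-in-the-middle can edit a field and re-emit a "valid" frame.
from functools import reduce

def nmea_checksum(body: str) -> str:
    # XOR of every character between '$' and '*', as two hex digits
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), "02X")

def rewrite_field(sentence: str, index: int, value: str) -> str:
    body = sentence[1:sentence.index("*")]
    fields = body.split(",")
    fields[index] = value               # nudge one field, then re-sign
    new_body = ",".join(fields)
    return f"${new_body}*{nmea_checksum(new_body)}"

# A textbook RMC sentence (example data); field 8 is the track angle.
original = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
tampered = rewrite_field(original, 8, "086.4")   # a small 2-degree error
print(tampered)
```

A receiver that checks only the checksum accepts the tampered frame, which is exactly the "small, insidious error" scenario Munro warns about.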

They found other examples of poor security practices in a satellite communication terminal by Cobham SATCOM: things like admin interfaces accessible over telnet and HTTP, a lack of firmware signing and no rollback protection for the firmware, admin interface passwords embedded in the configuration (and hashed with unsalted MD5!), and the possibility to edit the entire web application running on the terminal.
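The unsalted-MD5 detail matters more than it may appear. A short illustration (the passwords here are made up, not Cobham's) of why unsalted hashes fall to a precomputed dictionary:

```python
# Why unsalted MD5 is weak: the same password always yields the same
# hash, so a precomputed dictionary reverses leaked hashes instantly.
# (Passwords here are illustrative, not Cobham's.)
import hashlib

def md5_hex(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# The attacker hashes common passwords once, ahead of time...
rainbow = {md5_hex(p): p for p in ["admin", "password", "1234"]}

# ...then reverses any leaked unsalted hash with a single lookup.
leaked = md5_hex("admin")                # e.g. pulled from a config file
print(rainbow.get(leaked))               # prints: admin
```

A per-password salt defeats this precomputation, which is why unsalted MD5 in a device configuration is considered a serious lapse.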

They shared this with the public because all these flaws can be mitigated by setting a strong admin password, but they also found other issues that have to be fixed by the vendor (and so they disclosed them privately).

Electronic chart systems are full of flaws

ECDIS – electronic chart systems that are used for navigation – are also full of security flaws. They tested over 20 different ECDIS units and found things like old operating systems and poorly protected configuration interfaces. Attackers could ‘jump’ the boat by spoofing the position of the GPS receiver on the ship, or reconfigure the ECDIS to make the ship appear to be wider and longer than it is.

“This doesn’t sound bad, until you appreciate that the ECDIS often feeds the AIS [Automatic Identification System] transceiver – that’s the system that ships use to avoid colliding with each other,” Munro noted.

“It would be a brave captain indeed to continue down a busy, narrow shipping lane whilst the collision alarms are sounding. Block the English Channel and you may start to affect our supply chain.”

Tracking vulnerable ships

Pen Test Partners also created a vulnerable ship tracker by combining Shodan’s ship tracker, which uses publicly available AIS data, and satcom terminal version details.

The tracker does not show other details except the ship’s name and real-time position because they don’t want to help hackers, but it shows just how many vulnerable ships are out there.

Hacking incidents in the shipping industry

Hacking incidents affecting firms in the shipping industry are more frequent than the general public could guess by perusing the news. Understandably, the companies are eager to keep them on the down-low, if they can, as they could negatively affect their business competitiveness, Munro recently told me.

Some attacks can’t be concealed, though. For example, when A.P. Møller-Mærsk fell victim to the NotPetya malware, operations got disrupted and estimated losses reached several hundred million dollars.

That particular attack thankfully did not result in the company losing control of its vessels, but future attacks might lead to shipping security incidents and be more disruptive to that aspect of companies’ activities.

“Vessel owners and operators need to address these issues quickly, or more shipping security incidents will occur,” he concluded.


Hack of DNA Website Exposes Data From 92 Million Accounts

Consumer genealogy website MyHeritage said that email addresses and password information linked to more than 92 million user accounts have been compromised in an apparent hacking incident.
MyHeritage said that its security officer had received a message from a researcher who unearthed a file named “myheritage” containing email addresses and encrypted passwords of 92,283,889 of its users on a private server outside the company.
“There has been no evidence that the data in the file was ever used by the perpetrators,” the company said in a statement late Monday.

MyHeritage lets users build family trees, search historical records and hunt for potential relatives. Founded in Israel in 2003, the site launched a service called MyHeritage DNA in 2016 that, like competitors Ancestry.com and 23andMe, lets users send in a saliva sample for genetic analysis. The website currently has 96 million users; 1.4 million users have taken the DNA test.

According to MyHeritage, the breach took place on Oct. 26, 2017, and affects users who signed up for an account through that date. The company said that it doesn’t store actual user passwords, but instead passwords encrypted with what’s called a one-way hash, with a different key required to access each customer’s data. Which raises the question: why did it take so long to disclose the breach?

In some past breaches, however, hashed passwords have been successfully cracked and converted back into the original passwords. A hacker able to crack the hashed passwords exposed in this breach could access the personal information available after logging into someone’s account, such as the identity of family members. But even if hackers were able to get into a customer’s account, it’s unlikely they could easily access raw genetic information, since a step in the download process includes email confirmation.
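As a sketch of what MyHeritage's description suggests (a per-user salt with a slow one-way hash), here is a minimal Python example; the function names and parameters are illustrative, not MyHeritage's actual implementation:

```python
# Illustrative per-user salted, one-way password hashing (names and
# parameters are examples, not MyHeritage's actual implementation).
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)               # unique "key" per customer
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                         # store both; neither reverses

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse")
print(verify("correct horse", salt, stored))    # True
print(verify("wrong guess", salt, stored))      # False
```

Because each user gets a fresh random salt, identical passwords produce different stored digests, so a single precomputed dictionary can't crack the whole file at once.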
In its statement, the company emphasized that DNA data is stored “on segregated systems and are separate from those that store the email addresses, and they include added layers of security.”

MyHeritage has set up a 24/7 support team to assist customers affected by the breach. It plans to hire an independent cybersecurity firm to investigate the incident and potentially beef up security. In the meantime, users are advised to change their passwords.

Why would criminals want to steal DNA data and then sell it back for ransom? Hackers could threaten to revoke access or post the sensitive information online if they aren’t paid. The data could also be valuable to medical and life insurance companies and to mortgage lenders. Why? In a world where such data is posted online, it could be used to genetically discriminate against people, for example by denying mortgages or increasing insurance costs. (It doesn’t help that interpreting genetics is complicated and many people misunderstand the probabilities anyway.) The data could be sold quietly or monetized through insurers. You can imagine the consequences: one day, I might apply for a long-term loan and get rejected because, deep in some corporate system, there is data indicating I am very likely to get Alzheimer’s and die before I would repay the loan. In the future, if genetic data becomes commonplace enough, people might be able to pay a fee and get access to someone’s genetic data, the way we can now access someone’s criminal background.

Case in point: Sacramento investigators tracked down East Area Rapist suspect Joseph James DeAngelo using genealogical websites that contained genetic information from a relative, the Sacramento County District Attorney’s Office confirmed Thursday.

The effort was part of a painstaking process that began by using DNA from one of the crime scenes from years ago and comparing it to genetic profiles available online through various websites that cater to individuals wanting to know more about their family backgrounds by accepting DNA samples, said Chief Deputy District Attorney Steve Grippi.


Amazon confirms that Echo device secretly shared user’s private audio [Updated]

This really should not be big news; I’ve been saying it since Alexa came out. The mic is open all the time unless you mute it, and data is saved and transmitted to Amazon. Make sure you understand the technology before you start adding these types of IoT devices to your home, or, as I call them, the “Internet of Threats.”

The call that started it all: “Unplug your Alexa devices right now.”

Amazon confirmed an Echo owner’s privacy-sensitive allegation on Thursday, after Seattle CBS affiliate KIRO-7 reported that an Echo device in Oregon sent private audio to someone on a user’s contact list without permission.

“Unplug your Alexa devices right now,” the user, Danielle (no last name given), was told by her husband’s colleague in Seattle after he received full audio recordings of conversations between her and her husband, according to the KIRO-7 report. The disturbed owner, who is shown in the report juggling four unplugged Echo Dot devices, said that the colleague then sent the offending audio to Danielle and her husband to confirm the paranoid-sounding allegation. (Before sending the audio, the colleague confirmed that the couple had been talking about hardwood floors.)

After calling Amazon customer service, Danielle said she received the following explanation and response: “‘Our engineers went through all of your logs. They saw exactly what you told us, exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes. ‘This is something we need to fix.'”

Danielle next asked exactly why the device sent recorded audio to a contact: “He said the device guessed what we were saying.” Danielle didn’t explain exactly how much time passed between the incident, which happened “two weeks ago,” and this customer service response.

When contacted by KIRO-7, Amazon confirmed the report and added in a statement that the company “determined this was an extremely rare occurrence.” Amazon didn’t clarify whether that meant such automatic audio-forwarding features had been built into all Echo devices up until that point, but the company added that “we are taking steps to avoid this from happening in the future.”

This follows a 2017 criminal trial in which Amazon initially fought to quash demands for audio captured by an Amazon Echo device related to a murder investigation. The company eventually capitulated.

Amazon did not immediately respond to Ars Technica’s questions about how this user’s audio-share was triggered.

Update, 5:06pm ET: Amazon forwarded an updated statement about KIRO-7’s report to Ars Technica, which includes an apparent explanation for how this audio may have been sent:
Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.

Amazon did not explain how so many spoken Alexa prompts could have gone unnoticed by the Echo owner in question. Second update: The company did confirm to Ars that the above explanation was sourced from device logs.

Ring Security Flaw Lets Unauthorized Parties Control Doorbell App


A security flaw found in Ring’s video doorbell can let others access camera footage even after homeowners have changed their passwords, according to media reports.

This can happen after a Ring device owner gives someone else access to the Ring app. If that access was given to a partner, for example, and the relationship later turns sour, the ex-partner can still monitor the activity outside the front door using the camera, download video and control the doorbell from their phone as an administrator.

It doesn’t matter how many times the device owner changes the password: the Ring app never asks users to sign in again after a password change.
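A minimal sketch of the safeguard Ring apparently lacked, using a hypothetical token-versioning design (none of these names are Ring's actual API): stamp each session with the password "generation" it was issued under, and reject any session minted before the latest change.

```python
# Hypothetical token-versioning sketch (not Ring's actual API):
# each session records the password "generation" it was issued under,
# and any session minted before the latest password change is rejected.
class Account:
    def __init__(self):
        self.password_generation = 0

    def issue_session(self) -> dict:
        return {"generation": self.password_generation}

    def change_password(self) -> None:
        self.password_generation += 1   # instantly invalidates old sessions

    def session_valid(self, session: dict) -> bool:
        return session["generation"] == self.password_generation

acct = Account()
ex_partner = acct.issue_session()       # app access shared before the breakup
acct.change_password()                  # owner changes the password
print(acct.session_valid(ex_partner))   # False: stale session is rejected
print(acct.session_valid(acct.issue_session()))  # True: fresh sign-in works
```

The design trades a tiny per-request check for the guarantee that a password change actually revokes access, which is the property users expect.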

Ring was notified of the issue in early January and claimed to have removed users who were no longer authorized. However, in the test carried out by media outlet The Information’s staff, these ex-users could still access the app for several hours.

Jamie Siminoff, CEO of Ring, has acknowledged the issue and responded that kicking users off the platform apparently slows down the Ring app.

After the issue was reported, Ring made another statement, suggesting that Ring customers should never share their usernames or passwords. The company recommended that other family members or partners sign in via Ring’s “Shared Users” feature.

In this way, device owners have control over who has access and can immediately remove users if they want.

“Our team is taking additional steps to further improve the password change experience,” said Ring in a statement.

Ring was acquired by Amazon for US$1 billion at the beginning of this year. Amazon operates an in-home delivery service, Amazon Key, which relies on security devices at the front door such as smart doorbells, door locks and security cameras.

Any security flaws like the one found in Ring will make it difficult for the e-commerce giant to convince people that it’s safe for Amazon’s delivery people to enter their houses when nobody’s home.

Please make sure to secure all of your IoT devices; as we know, most of them are wide open to attack.

IoT World

Honored to be speaking at IoT World May 14-17, 2018
Santa Clara Convention Center.
@MrMichaelReese #IOTWORLD #Cybersecurity


Hackers built a ‘master key’ for millions of hotel rooms

Security researchers have built a master key that exploits a design flaw in a popular and widely used hotel electronic lock system, allowing unfettered access to every room in the building.

The electronic lock system, known as Vision by VingCard and built by Swedish lock manufacturer Assa Abloy, is used in more than 42,000 properties in 166 countries, amounting to millions of hotel rooms — as well as garages and storage units.

These electronic lock systems are commonplace in hotels, used by staff to provide granular control over where a person can go in a hotel — such as their room — and even restricting the floor that the elevator stops at. And these keys can be wiped and reused when guests check out.

It turns out these key cards aren’t as secure as first thought.

F-Secure’s Tomi Tuominen and Timo Hirvonen, who carried out the work, said they could create a master key “basically out of thin air.”

Any key card will do. Even old and expired, or discarded keys retain enough residual data to be used in the attack. Using a handheld device running custom software, the researchers can steal data off of a key card — either using wireless radio-frequency identification (RFID) or the magnetic stripe. That device then manipulates the stolen key data, which identifies the hotel, to produce an access token with the highest level of privileges, effectively serving as a master key to every room in the building.

This wasn’t an overnight effort. It took the researchers over a decade of work to get here.

The researchers started their room key bypass efforts in 2003 when a colleague’s laptop was stolen from a hotel room. With no sign of forced entry or unauthorized access to the room, the hotel staff are said to have dismissed the incident. The researchers set out to find a popular brand of smart lock to examine. In their words, finding and building the master key was far from easy, and took “several thousand hours of work” on an on-off basis, and using trial and error.

“Developing [the] attack took considerable amount of time and effort,” said Tuominen and Hirvonen, in an email to ZDNet.

“We built a RFID demo environment in 2015 and were able to create our first master key for a real hotel in March 2017,” they said. “If somebody was to do this full time, it would probably take considerably less time.”

There was good news, the researchers said.

“We don’t know of anyone else performing this particular attack in the wild right now,” said the researchers, downplaying the risk to hotel customers.

Their discovery also prompted Assa Abloy to release a security patch to fix the flaws. According to the disclosure timeline, Assa Abloy was first told of the vulnerabilities in April 2017, a month after the first working master key was created, and the two sides met repeatedly over several months to fix the flaws.

The software is patched at the central server, but the firmware on each lock needs to be updated.


Cybersecurity for Executives


Looking forward to another local speaking event here in Sacramento:

By invitation only, DSA Technologies is hosting FBI expert Kurt Pipal and licensed Computer Forensics Investigator Michael Reese to discuss the current state of Cybercrime in the Northern California & Sacramento Area. Executives who are responsible for the public perception for their organizations should attend.
This event will feature several security topics frequently seen in the news today, including:
• Financial Fraud
• Intellectual Property Threats
• Ransomware
• Identity Theft
• Phishing/Social Engineering scams
• Attacks on Critical Infrastructure
Where: Morton’s Steakhouse
621 Capitol Mall, Sacramento, CA 95814
When: April 19th @ 11:30AM
Event Partners: FBI, Palo Alto Networks

https://info.dsatechnologies.com/cybersecurity-executives


GitHub Survived the Biggest DDoS Attack Ever Recorded

On Wednesday, at about 12:15 pm EST, 1.35 terabits per second of traffic hit the developer platform GitHub all at once. It was the most powerful distributed denial of service attack recorded to date—and it used an increasingly popular DDoS method, no botnet required.

GitHub briefly struggled with intermittent outages as a digital system assessed the situation. Within 10 minutes it had automatically called for help from its DDoS mitigation service, Akamai Prolexic. Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets. After eight minutes, attackers relented and the assault dropped off.

The scale of the attack has few parallels, but a massive DDoS that struck the internet infrastructure company Dyn in late 2016 comes close. That barrage peaked at 1.2 terabits per second and caused connectivity issues across the US as Dyn fought to get the situation under control.

“We modeled our capacity based on five times the biggest attack that the internet has ever seen,” Josh Shaul, vice president of web security at Akamai, told WIRED hours after the GitHub attack ended. “So I would have been certain that we could handle 1.3 Tbps, but at the same time we never had a terabit and a half come in all at once. It’s one thing to have the confidence. It’s another thing to see it actually play out how you’d hope.”

Akamai defended against the attack in a number of ways. In addition to Prolexic’s general DDoS defense infrastructure, the firm had also recently implemented specific mitigations for a type of DDoS attack stemming from so-called memcached servers. These database caching systems work to speed networks and websites, but they aren’t meant to be exposed on the public internet; anyone can query them, and they’ll likewise respond to anyone. About 100,000 memcached servers, mostly owned by businesses and other institutions, currently sit exposed online with no authentication protection, meaning an attacker can access them and send them a special command packet that the server will respond to with a much larger reply.

Unlike the formal botnet attacks used in large DDoS efforts, like against Dyn and the French telecom OVH, memcached DDoS attacks don’t require a malware-driven botnet. Attackers simply spoof the IP address of their victim and send small queries to multiple memcached servers—about 10 per second per server—that are designed to elicit a much larger response. The memcached systems then return 50 times the data of the requests back to the victim.
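The arithmetic behind such floods is simple. A back-of-the-envelope sketch using the article's roughly 50x amplification and ~10-queries-per-second figures, with an otherwise hypothetical server count and query size:

```python
# Back-of-the-envelope flood size: spoofed queries times amplification.
# Server count and query size below are hypothetical; the ~50x
# amplification and ~10 queries/sec figures come from the article.
def flood_gbps(servers, queries_per_sec, query_bytes, amplification):
    response_bits = query_bytes * amplification * 8   # bits sent per reply
    return servers * queries_per_sec * response_bits / 1e9

# 10,000 abused servers, 10 queries/sec each, 1,400-byte queries:
print(round(flood_gbps(10_000, 10, 1_400, 50), 1), "Gbps")  # 56.0 Gbps
```

Even these modest assumptions yield tens of gigabits per second from a trickle of spoofed traffic, and larger command packets or higher amplification factors scale the flood accordingly.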

Known as an amplification attack, this type of DDoS has shown up before. But as internet service and infrastructure providers have seen memcached DDoS attacks ramp up over the last week or so, they’ve moved swiftly to implement defenses to block traffic coming from memcached servers.

“Large DDoS attacks such as those made possible by abusing memcached are of concern to network operators,” says Roland Dobbins, a principal engineer at the DDoS and network-security firm Arbor Networks who has been tracking the memcached attack trend. “Their sheer volume can have a negative impact on the ability of networks to handle customer internet traffic.”

The infrastructure community has also started attempting to address the underlying problem, by asking the owners of exposed memcached servers to take them off the internet, keeping them safely behind firewalls on internal networks. Groups like Prolexic that defend against active DDoS attacks have already added or are scrambling to add filters that immediately start blocking memcached traffic if they detect a suspicious amount of it. And if internet backbone companies can ascertain the attack command used in a memcached DDoS, they can get ahead of malicious traffic by blocking any memcached packets of that length.

“We are going to filter that actual command out so no one can even launch the attack,” says Dale Drew, chief security strategist at the internet service provider CenturyLink. And companies need to work quickly to establish these defenses. “We’ve seen about 300 individual scanners that are searching for memcached boxes, so there are at least 300 bad guys looking for exposed servers,” Drew adds.

Most of the memcached DDoS attacks CenturyLink has seen top out at about 40 to 50 Gbps, but the industry has been noticing increasingly large attacks, up to 500 Gbps and beyond. On Monday, Prolexic defended against a 200 Gbps memcached DDoS attack launched against a target in Munich.

Wednesday’s onslaught wasn’t the first time a major DDoS attack targeted GitHub. The platform faced a six-day barrage in March 2015, possibly perpetrated by Chinese state-sponsored hackers. The attack was impressive for 2015, but DDoS techniques and platforms—particularly Internet of Things–powered botnets—have evolved and grown increasingly powerful when they’re at their peak. To attackers, though, the beauty of memcached DDoS attacks is there’s no malware to distribute, and no botnet to maintain.

The web monitoring and network intelligence firm ThousandEyes observed the GitHub attack on Wednesday. “This was a successful mitigation. Everything transpired in 15 to 20 minutes,” says Alex Henthorne-Iwane, vice president of product marketing at ThousandEyes. “If you look at the stats you’ll find that globally speaking DDoS attack detection alone generally takes about an hour plus, which usually means there’s a human involved looking and kind of scratching their head. When it all happens within 20 minutes you know that this is driven primarily by software. It’s nice to see a picture of success.”

GitHub continued routing its traffic through Prolexic for a few hours to ensure that the situation was resolved. Akamai’s Shaul says he suspects that attackers targeted GitHub simply because it is a high-profile service that would be impressive to take down. The attackers also may have been hoping to extract a ransom. “The duration of this attack was fairly short,” he says. “I think it didn’t have any impact so they just said that’s not worth our time anymore.”

Until memcached servers get off the public internet, though, it seems likely that attackers will give a DDoS of this scale another shot.