Daily Archives: February 22, 2016

When Ransomware Strikes, Should You Pay or Not?

2015 was a big year for ransomware exploits, and it looks like they aren’t slowing down in 2016. Kaspersky reported that CryptoLocker attacks doubled in 2015, and that most of those attacks targeted workplace PCs. The perpetrators of CryptoLocker attacks send Trojans, usually via email, that, once set loose, infect a user’s PC and encrypt any files they can access. The attacker then demands money, often in the form of bitcoins, to decrypt the locked files. Attackers threaten all sorts of havoc if their demands aren’t met. As the article in NetworkWorld points out, even if their demands are met, you can’t count on your attackers honoring their part of the bargain.
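Because the attack pattern described above amounts to a process silently rewriting every file it can reach, one common defensive heuristic is to plant a “canary” file and raise an alarm the moment it changes. The Python sketch below is a minimal illustration of that idea, not a product recommendation; the bait-file path and polling interval are assumptions, and a real tool would hook filesystem events rather than poll.

```python
import hashlib
import os
import time

# Hypothetical bait file; named so that ransomware walking the directory
# alphabetically is likely to hit it early.
CANARY = os.path.expanduser("~/Documents/000-canary.docx")
POLL_SECONDS = 5

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def watch_canary():
    baseline = file_digest(CANARY)
    while True:
        time.sleep(POLL_SECONDS)
        try:
            current = file_digest(CANARY)
        except FileNotFoundError:
            # Ransomware often renames files (e.g., appending .encrypted).
            print("ALERT: canary file missing -- possible ransomware activity")
            return
        if current != baseline:
            # The bait file should never legitimately change.
            print("ALERT: canary file modified -- possible ransomware activity")
            return

if __name__ == "__main__":
    watch_canary()
```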

I recently wrote a blog that covered a new ransomware attack on Hollywood Presbyterian Medical Center. The attackers are demanding 9,000 bitcoins, which translates into approximately $3.6 million, to return thousands of patient records intact. That’s a steep price tag for any organization, and although law enforcement agencies typically advise victims not to pay, some police departments have started succumbing to ransom demands. The more dangerous and alarming part is that, according to the FBI, which is working on this case, some attackers aren’t skilled enough to handle the malware they’ve delivered, and if that’s the case here, the hospital’s data will be lost forever. As the article points out, some criminal coders can mount an attack, but they don’t know how to handle encryption and decryption. Researchers have reported one ransomware strain that unintentionally locked files in a way that can never be decrypted.

The hospital has not yet decided whether it will pay the ransom, but it is forced to handle all of its records manually for the time being. According to cybersecurity experts, ransomware has proven to be a lucrative business: Kaspersky reports that one hacker group it researched is getting $2.5 million to $10 million for each successful attack.

In the meantime, organizations in every sector, particularly in highly regulated industries like healthcare and finance, need to strengthen their security postures as much as possible. Here are some quick tips that could keep you from becoming a victim:

  • Make sure your employees are security-aware and not prone to opening unfamiliar emails and attachments. If an email looks suspicious or an offer seems too good to be true, use caution. And since cyber criminals are now adept at researching employees via social media, treat any unknown sender with suspicion. (A simple attachment filter is sketched after this list.)
  • Get the technology you need. Evasive malware can be introduced in a variety of ways, including piggybacking on traffic over high, hidden ports. If your security tools can’t monitor those ports, you’re asking for trouble.
  • Be sure to update your software and applications as well as your operating system. Criminal hackers often exploit known vulnerabilities in applications and operating systems that haven’t been patched.
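To make the first tip concrete, the sketch below flags email attachments whose extensions are commonly abused to deliver Trojans. It parses a saved .eml message with Python’s standard email module; the quarantine path and the extension list are illustrative assumptions, not an exhaustive filter, and real mail gateways inspect content, not just filenames.

```python
import email
from email import policy

# Extensions commonly abused to smuggle executable payloads past users.
# Illustrative, not exhaustive.
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".jar", ".bat", ".docm"}

def risky_attachments(eml_path):
    """Return filenames of suspicious attachments in a saved .eml message."""
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    flagged = []
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in RISKY_EXTENSIONS):
            flagged.append(part.get_filename())
    return flagged

if __name__ == "__main__":
    # Hypothetical quarantined message.
    for name in risky_attachments("quarantine/suspect.eml"):
        print(f"WARNING: risky attachment {name!r}")
```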


Sneaky PII: What’s Hiding in Your Data?


It’s no secret that personally identifiable information (PII) and other privileged information must be removed from case data before it’s produced, to keep it from falling into the wrong hands. The amount of data to be reviewed prior to litigation continues to grow exponentially as more and more ESI enters into discovery requests, and with more data to review comes a greater risk of accidentally disclosing PII. As the past few years have shown, a breach of PII can have major consequences for a corporation or law firm. The problem, however, is that many companies don’t sufficiently protect employee information within their own environments. And because employees’ work and personal lives increasingly overlap, there are more opportunities for PII to creep into unexpected, easily overlooked places.

Think for a second about where you’d expect PII to show up. You’re probably thinking of HR records, where employees’ Social Security numbers, addresses, and phone numbers are stored. PII is easy to spot when you’re checking obvious places like HR files, but personal information can crop up elsewhere just as easily when data gets collected from a broad range of sources. If an employee has a payroll issue, they might email bank account information or Social Security numbers to the payroll department. Beyond company-related communications, they might even send scanned tax documents to their accountant, or a mortgage application for a new home, from their company email address rather than their personal one. If your case requires that you pull company emails between specified dates, you might inadvertently collect this information. Beyond email, employees might use the office scanner for personal documents that they then send from their personal accounts; if that file lives on the company server, it’s at risk of entering into discovery data. If no sweep is done for extraneous PII, these details will slip through the cracks and leak to opposing counsel. For this reason, it’s absolutely crucial to comb data not just for relevance and privilege, but also for PII.

It’s a slippery problem, not only because PII is ubiquitous and can easily hide in unexpected places, but because several contributing factors make it difficult to pin down and leave it at the mercy of human error. While many individuals are sensitive about their own private information, the average person has little awareness of exactly what data constitutes PII and how it can be compromised, meaning they’re probably revealing their company’s, and their own, private information unknowingly. And even if employees are hyper-aware of sensitive data, the definition of PII differs from state to state and changes constantly as new regulations are implemented. What wasn’t sensitive last year might be sensitive this year, and all of last year’s information is still sitting on your company servers.

PII laws are complicated and vary widely depending on which state and country you’re in, so it’s important to have processes in place to help eliminate extraneous data. Arguing for proportionality to narrow the scope of the case will reduce the amount of unnecessary PII gathered, and using technology-assisted review and the many e-discovery platforms that can quickly find specific data patterns will dramatically reduce the time it takes to comb through files for PII. There are also products that can assist in identifying and excluding PII hiding in your case data, including redaction automation tools. While there are many methods for securing PII, redaction is far and away the safest because it removes sensitive information completely. Redacted information cannot be recovered or decoded, so it is really the best way to eliminate risk.
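As a rough illustration of how redaction automation finds the patterns discussed above, the sketch below masks U.S. Social Security numbers in plain text with a regular expression. The pattern and the replacement token are assumptions for the example; commercial e-discovery tools use much richer pattern libraries, validation, and review workflows.

```python
import re

# Matches common SSN layouts such as 123-45-6789 or 123 45 6789.
# Illustrative only; real tools validate number ranges and surrounding
# context to cut down on false positives.
SSN_PATTERN = re.compile(r"\b\d{3}[- ]\d{2}[- ]\d{4}\b")

def redact_ssns(text: str) -> str:
    """Replace anything shaped like an SSN with a fixed redaction token."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

sample = "Re: payroll issue -- my SSN is 123-45-6789. Thanks, Dana"
print(redact_ssns(sample))
# Re: payroll issue -- my SSN is [REDACTED-SSN]. Thanks, Dana
```

Unlike masking a document’s display layer, rewriting the text itself means the original digits are simply gone from the produced copy, which is the property that makes true redaction irreversible.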

The sticky nature of PII means that securing it can’t be handled on a case-by-case basis. It should be part of company-wide best practices, backed by a well-vetted process that ensures data is properly protected, not just for individuals but for the company as a whole. Implementing security policies and investing in redaction technologies can help you stay compliant and save time, resources, and your reputation.

Former NSA Chief Michael Hayden Sides With Apple, Though Admits ‘No Encryption Is Unbreakable’

An attendee demonstrates the new Apple Inc. iPhone 6 Plus after a product announcement at Flint Center in Cupertino, California, U.S., on Tuesday, Sept. 9, 2014. Apple Inc. unveiled redesigned iPhones with bigger screens, overhauling its top-selling product in an event that gives the clearest sign yet of the company’s product direction under Chief Executive Officer Tim Cook.
David Paul Morris/Bloomberg via Getty Images

Tim Cook’s opinion that Apple should not develop a way to hack into the encrypted phone belonging to one of the San Bernardino shooters has earned an endorsement from an unlikely source, though it comes with a big “but.” Michael Hayden, the former NSA director and CIA chief (so, a bona fide spy guy), told the Wall Street Journal that America is “more secure with unbreakable end-to-end encryption,” calling it a “slam dunk” if you view it in the scope of the “broad health” of the United States.

Hayden said FBI director James Comey‘s demand for Apple to give them a tool to break into Syed Farook’s iPhone is “based on the belief that he remains the main body, and that you should accommodate your movements to the movements of him, which is the main body. I’m telling you, with regards to the cyber domain, he’s not — you are.”

Now for that “but,” which will surely disappoint all the (temporarily pleased) civil libertarians out there. Hayden said that following a setback in the mid-nineties, when the NSA failed to convince manufacturers to adopt a cryptographic device called the Clipper chip, “we then began the greatest 15 years in electronic surveillance.” The controversial chipset was an encryption device that had a built-in backdoor in case the government needed to take a lookie-loo. But, as Hayden notes, “we figured out ways to get around the quote-unquote unbreakable encryption. Number one, no encryption is unbreakable. It just takes more computing power. Number two, the way we worked around encryption is bulk collection and metadata.”


Since 2014, Apple’s iPhones have had built-in encryption that means the contents of a device can be accessed only with the phone’s passcode. The FBI’s order stipulates that Apple provide software to work only on the San Bernardino shooter’s iPhone. Cook said in an open letter that the U.S. government order would undermine encryption and potentially create a “master key, capable of opening hundreds of millions of locks” on private devices.
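To see why the contents are reachable only through the passcode, it helps to know that in designs like this the decryption key is derived by mixing the passcode with a secret bound to the device hardware, so guesses must run on the phone itself. The sketch below is a generic, simplified illustration of that idea using PBKDF2; it is not Apple’s actual scheme, and the parameters are assumptions.

```python
import hashlib
import os

def derive_file_key(passcode: str, device_uid: bytes) -> bytes:
    """Entangle the passcode with a per-device secret to produce a key.

    Because the device secret never leaves the hardware in a real design,
    an attacker cannot copy the data off and brute-force it on a fast cluster.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode(),
        device_uid,    # salt: stands in for a hardware-fused UID
        1_000_000,     # iterations: tuned so each guess costs real time
        dklen=32,
    )

# Example with a made-up device secret.
uid = os.urandom(32)
print(derive_file_key("123456", uid).hex())
```

The iteration count is the knob that makes each passcode guess slow on-device, which matters for the brute-force discussion later in this post.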

Cook wrote that “in the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession… The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a back door. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.”

On Wednesday, Cook’s position received support from a high-profile colleague in tech.

“Forcing companies to enable hacking could compromise users’ privacy,” wrote Google CEO Sundar Pichai in a series of Twitter posts. “We know that law enforcement and intelligence agencies face significant challenges in protecting the public against crime and terrorism. We build secure products to keep your information safe and we give law enforcement access to data based on valid legal orders. But that’s wholly different than requiring companies to enable hacking of customer devices & data. Could be a troubling precedent. Looking forward to a thoughtful and open discussion on this important issue.”


Apple Unlocked iPhones for the Feds 70 Times Before


Apple CEO Tim Cook declared on Wednesday that his company wouldn’t comply with a government search warrant to unlock an iPhone used by one of the San Bernardino killers, a significant escalation in a long-running debate between technology companies and the government over access to people’s electronically stored private information.

But in a similar case in New York last year, Apple acknowledged that it could extract such data if it wanted to. And according to prosecutors in that case, Apple has unlocked phones for authorities at least 70 times since 2008. (Apple doesn’t dispute this figure.)

In other words, Apple’s stance in the San Bernardino case may not be quite the principled defense that Cook claims it is. In fact, it may have as much to do with public relations as it does with warding off what Cook called “an unprecedented step which threatens the security of our customers.”

For its part, the government’s public position isn’t clear-cut, either. U.S. officials insist that they cannot get past a security feature on the shooter’s iPhone that locks out anyone who doesn’t know its unique passcode, which even Apple doesn’t have. But in that New York case, a government attorney acknowledged that one U.S. law enforcement agency has already developed the technology to crack at least some iPhones without the assistance from Apple that officials are demanding now.

The facts in the New York case, which involve a self-confessed methamphetamine dealer and not a notorious terrorist, tend to undermine some of the core claims being made by both Apple and the government in a dispute with profound implications for privacy and criminal investigations beyond the San Bernardino case.

In New York, as in California, Apple is refusing to bypass the passcode feature now found on many iPhones.

But in a legal brief, Apple acknowledged that the phone in the meth case was running version 7 of the iPhone operating system, which means the company can access it. “For these devices, Apple has the technical ability to extract certain categories of unencrypted data from a passcode locked iOS device,” the company said in a court brief.

Whether the extraction would be successful depended on whether the phone was “in good working order,” Apple said, noting that the company hadn’t inspected the phone yet. But as a general matter, yes, Apple could crack the iPhone for the government. And, two technical experts told The Daily Beast, the company could do the same with the phone used by deceased San Bernardino shooter Syed Rizwan Farook, a model 5C running version 9 of the operating system.

Still, Apple argued in the New York case that it shouldn’t have to, because “forcing Apple to extract data… absent clear legal authority to do so, could threaten the trust between Apple and its customers and substantially tarnish the Apple brand.” The argument didn’t explain why the company had been willing to comply with court orders in other cases.

“This reputational harm could have a longer term economic impact beyond the mere cost of performing the single extraction at issue,” Apple said.

Apple’s argument in New York struck one former NSA lawyer as a telling admission: that its business reputation is now an essential factor in deciding whether to hand over customer information.

“I think Apple did itself a huge disservice,” Susan Hennessey, who was an attorney in the Office of the General Counsel at the NSA, told The Daily Beast. The company acknowledged that it had the technical capacity to unlock the phone, but “objected anyway on reputational grounds,” Hennessey said. Its arguments were at odds with each other, especially in light of Apple’s previous compliance with so many court orders.

It wasn’t until after the revelations of former NSA contractor Edward Snowden that Apple began to position itself so forcefully as a guardian of privacy in the face of a vast government surveillance apparatus. Perhaps Apple was taken aback by the scale of NSA spying that Snowden revealed. Or perhaps it was embarrassed by its own role in it. The company had, since 2012, been providing its customers’ information to the FBI and the NSA via the PRISM program, which operated pursuant to court orders.

Apple has also argued, then and now, that the government is overstepping the authority of the All Writs Act, the 18th-century statute it invokes to compel Apple to conduct court-ordered iPhone searches. That’s where the “clear legal authority” question comes into play.

But that, too, is a subjective question that will have to be decided by higher courts. For now, Apple is resisting the government on multiple grounds, and putting its reputation as a bastion of consumer protection front and center in the fight.

None of this has stopped the government from trying to crack the iPhone, a fact that emerged unexpectedly in the New York case. In a brief exchange with attorneys during a hearing in October, Judge James Orenstein said he’d found testimony in another case that the Homeland Security Department “is in possession of technology that would allow its forensic technicians to override the pass codes security feature on the subject iPhone and obtain the data contained therein.”

That revelation, which went unreported in the press at the time, seemed to undercut the government’s central argument that it needed Apple to unlock a protected iPhone.

“Even if [Homeland Security] agents did not have the defendant’s pass code, they would nevertheless have been able to obtain the records stored in the subject iPhone using specialized software,” the judge said. “Once the device is unlocked, all records in it can be accessed and copied.”

A government attorney affirmed that he was aware of the tool. However, it applied only to one update of version 8 of the iPhone operating system, specifically 8.1.2. The government couldn’t unlock all iPhones, just phones running that software.

Still, it made the judge question whether other government agencies were also trying to break the iPhone’s supposedly unbreakable protections. And if so, why should he order the company to help?

There was, the judge told the government lawyer, “the possibility that on the intel side, the government has this capability. I would be surprised if you would say it in open court one way or the other.”

Orenstein was referring to the intelligence agencies, such as the NSA, which develop tools and techniques to hack popular operating systems, and have been particularly interested for years in trying to get into Apple products, according to documents leaked by Snowden.

There was no further explanation of how Homeland Security developed the tool, or of whether it is widely used. A department spokesperson declined to comment “on specific law enforcement techniques.” But the case nevertheless demonstrated that, at least in some cases, the government can get around the very wall that it now claims impedes lawful criminal investigations, and already has.

The showdown between Apple and the FBI will almost certainly not be settled soon. The company is expected to file new legal briefs within days. And the question of whether the All Writs Act applies in such cases is destined for an appeals court decision, legal experts have said.

But for the moment, it appears that the only thing certainly standing in the way of Apple complying with the government is its decision not to. And for its part, the government must be presumed to be searching for new ways to get the information it wants.

Technically, Apple probably can find a way to extract the information that the government wants from the San Bernardino shooter’s phone, Christopher Soghoian, the principal technologist for the American Civil Liberties Union, told The Daily Beast.

“The question is, does the law give the government the ability to force Apple to create new code?” he said. “Engineers have to sit down and create something that doesn’t exist” in order to meet the government’s demands. Soghoian noted that this would only be possible in the San Bernardino case because the shooter was using an iPhone model 5C, and that newer hardware versions would be much harder for Apple to bypass.

But even that’s in dispute, according to another expert’s analysis. Dan Guido, a self-described hacker and CEO of the cybersecurity company Trail of Bits, said that Apple can, in fact, eliminate the protections that keep law enforcement authorities from trying to break into the iPhone with a so-called brute force attack, using a computer to make millions of password guesses in a short period of time. New iPhones have a feature that stops users from making repeated incorrect guesses and can trigger a kind of self-destruct mechanism, erasing all the phone’s contents, after too many failed attempts.
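Some back-of-the-envelope arithmetic shows why those two features carry so much weight. Apple’s published iOS security documentation has said the passcode key derivation is calibrated to take roughly 80 milliseconds per attempt on the device; taking that figure as an assumption, the sketch below estimates worst-case brute-force times once the guess limits are out of the way.

```python
# Rough brute-force estimates, assuming ~80 ms per on-device guess
# (a calibration figure from Apple's public iOS security guide; approximate).
SECONDS_PER_GUESS = 0.080

for digits in (4, 6):
    guesses = 10 ** digits  # keyspace of an all-numeric passcode
    hours = guesses * SECONDS_PER_GUESS / 3600
    print(f"{digits}-digit passcode: {guesses:,} guesses, "
          f"worst case {hours:.1f} hours")

# 4-digit passcode: 10,000 guesses, worst case 0.2 hours
# 6-digit passcode: 1,000,000 guesses, worst case 22.2 hours
```

In other words, once the retry throttle and auto-erase are out of the picture, a short numeric passcode survives only minutes to hours, which is why the workaround Guido describes next is the crux of the dispute.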

In a detailed blog post, Guido described how Apple could work around and effectively disarm its own security protections. It wouldn’t be trivial. But it’s feasible, he said, even for the newest versions of the iPhone, which, unlike the ones in the New York and San Bernardino cases, Apple swears it cannot crack.

“The burden placed on Apple will be greater… but it will not be impossible,” Guido told The Daily Beast.