Category Archives: Cyber Security

Actions for Internal Audit on Cybersecurity, Data Risks

 

Cybersecurity and other data-related issues top the list of risks for heads of audit in 2019; here are key actions audit must take.

The number of cyberattacks continues to increase significantly as threat actors become more sophisticated and diversify their methods. It’s hardly surprising, then, that cybersecurity preparedness tops the list of internal audit priorities for 2019.

Other data and IT issues are also on the radar for internal audit, according to the Gartner Audit Plan Hot Spots. Cybersecurity topped the list of 2019’s 12 emerging risks, followed by data governance, third parties and data privacy.

“These risks, or hot spots, are the top-of-mind issues for business leaders who expect heads of internal audit to assess and mitigate them, as well as communicate their impact to organizations and stakeholders,” said Malcolm Murray, VP, Team Manager at Gartner.

What audit can do on cyberpreparedness

The Gartner 2019 Audit Key Risks and Priorities Survey shows that 77% of audit departments definitely plan to cover cybersecurity detection and prevention in audit activities during the next 12-18 months. Only 5% have no such activities planned. And yet, only 53% of audit departments are highly confident in their ability to provide assurance over cybersecurity detection and prevention risks.

Here are some steps audit can take to tackle cybersecurity preparedness:

  • Review device encryption on all devices, including mobile phones and laptops. Assess password strength and the use of multifactor authentication.
  • Review access management policies and controls, and set user access and privileges by defined business needs. Swiftly amend access when roles change.
  • Review patch management policies, evaluating the average time from patch release to implementation and the frequency of updates (a rough sketch of this metric appears after this list). Make sure patches cover IoT devices.
  • Evaluate employee security training to ensure that the breadth, frequency and content are effective. Don’t forget to build awareness of common security threats such as phishing.
  • Participate in cyber working groups and committees to develop cybersecurity strategy and policies. Help determine how the organization identifies, assesses and mitigates cyberrisk and strengthens current cybersecurity controls.
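To make the patch-management review above more concrete, here is a minimal sketch of how the release-to-implementation metric might be computed. The file name and column names (`patch_inventory.csv`, `released`, `installed`) are assumptions for illustration, not part of any particular audit or patching tool.

```python
# Minimal sketch: average time from patch release to installation.
# Assumes a hypothetical export "patch_inventory.csv" with ISO-format dates
# in "released" and "installed" columns; adapt to whatever your tooling exports.
import csv
from datetime import date

def average_patch_latency(path="patch_inventory.csv"):
    delays = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            released = date.fromisoformat(row["released"])
            installed = date.fromisoformat(row["installed"])
            delays.append((installed - released).days)
    return sum(delays) / len(delays) if delays else None

if __name__ == "__main__":
    avg = average_patch_latency()
    print(f"Average patch latency: {avg:.1f} days" if avg is not None else "No patch data found")
```

Tracking this number quarter over quarter gives audit a simple, defensible way to show whether patching is getting faster or slower.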

Data governance

Big data increases the strategic importance of effective mechanisms to collect, use, store and manage organizational data, but many organizations still lack formal data governance frameworks and struggle to establish consistency across the organization. Few scale their programs effectively to meet the growing volume of data. Left unsolved, these governance challenges can lead to operational drag, delayed decision making and unnecessary duplication of efforts.

What audit can do:

  • Review the data assets inventory, which must include, at a minimum, the highest-value data assets of the organization. Assess the extent of both structured and unstructured data assets.
  • Review the classification of data and associated process and policies. Analyze how data will be retained and destroyed, encryption requirements and whether relevant categories of use have been established.
  • Participate in relevant working groups and committees to stay abreast of governance efforts and provide advisory input when frameworks are built.
  • Review data analytics training and talent assessments, identify skill gaps and plan how to fill them. Evaluate the content and availability of existing training.
  • Review the analytics tools inventory across the organization. Determine if IT has an approved vendor list for analytics tools and what efforts are being made to educate the business on the use of approved tools.

Third parties

Efforts to digitalize systems and processes add new, complex dimensions to third-party challenges that have been a perennial concern for organizations. Nearly 70% of chief audit executives reported third-party risk as one of their top concerns, but organizations still struggle to manage this risk.

What audit can do:

  • Evaluate scenario analysis for strategic initiatives to analyze potential risks and outcomes associated with interdependent partners in the organization’s business ecosystem. Consider enterprise risk appetite and identify trigger events that would cause the organization to take corrective action.
  • Assess third-party contracts and compliance efforts; ensure contracts adequately stipulate information security, data privacy and nth-party requirements, and ensure there is monitoring of third-party adherence to contracts.
  • Investigate third-party regulatory requirements; assess how effectively senior management communicates regulatory updates across the business and how clearly it articulates requirements for third parties.
  • Evaluate the classification of third-party risk and confirm that the business conducts random checks of third parties to ensure classifications properly account for actual risk levels.

Data privacy

Companies today collect an unprecedented amount of personal information, and the costs of managing and protecting that data are rising. Seventy-seven percent of audit departments say data privacy will definitely be covered in audit activities in the next 12–18 months.

What audit can do:

  • Review data protection training and ensure that employees at all levels complete the training. Include elements such as how to report a data breach and protocols for data sharing.
  • Assess current level of GDPR compliance and identify compliance gaps. Review data privacy policies to make sure the language is clear and customer consent is clearly stated.
  • Assess data access and storage. Make sure access to sensitive information is role-based and privileges are properly set and monitored.
  • Review data breach response plans. Evaluate how quickly the company identifies a breach and the mechanisms for notifying impacted consumers and regulators.
  • Assess data loss protection and review whether tools scan data at rest and in motion.
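As a rough illustration of what “scanning data at rest” in the last item involves, the sketch below walks a directory tree and flags files containing patterns that look like email addresses or card numbers. The directory path and patterns are invented for illustration; real DLP tools do far more (validation, context analysis, fingerprinting and inspection of data in motion).

```python
# Minimal sketch of a data-at-rest scan: walk a directory and flag files that
# appear to contain email addresses or 16-digit card-like numbers.
import os
import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def scan_directory(root):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings

if __name__ == "__main__":
    for path, label in scan_directory("./shared_drive"):  # hypothetical path
        print(f"{path}: possible {label}")
```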

Phishing Report Shows Microsoft, PayPal, & Netflix as Top Targets

A new phishing report has been released that tracks the top 25 brands targeted by bad actors. Of these brands, Microsoft, PayPal, and Netflix are the brands most often impersonated in phishing attacks.

Email security provider Vade Secure tracks the 25 most spoofed brands in North America that are impersonated in phishing attacks. In its Q3 2018 report, a total of 86 brands are tracked, which account for 95% of all attacks detected by the company.

Overall, Vade Secure has stated that phishing attacks increased by 20.4% in the 3rd quarter with the most targeted being Microsoft, followed by PayPal, Netflix, Bank of America, and Wells Fargo.

Cloud-based services and financial companies remain the two most targeted industries, with Microsoft being the top targeted brand as attackers try to gain access to Office 365, OneDrive, and Azure credentials.

“The primary goal of Microsoft phishing attacks is to harvest Office 365 credentials,” stated Vade Secure’s report. “With a single set of credentials, hackers can gain access to a treasure trove of confidential files, data, and contacts stored in Office 365 apps, such as SharePoint, OneDrive, Skype, Excel, CRM, etc. Moreover, hackers can use these compromised Office 365 accounts to launch additional attacks, including spear phishing, malware, and, increasingly, insider attacks targeting other users within the same organization.”

Office 365 phishing emails typically indicate that the recipient’s account has been suspended or disabled and then prompt them to log in to resolve the issue. These phishing forms are almost identical to a legitimate Office 365 login page, and by creating a sense of urgency, the attackers hope the victims will be less vigilant as they enter their credentials.

Following Microsoft are PayPal phishing schemes, in which attackers try to gain access to victims’ money, and Netflix impersonations, which are used to steal credit card information.

Of particular interest is that attackers tend to follow a pattern as to which days they send the highest volume of phishing emails. According to the report, most work-related attacks tend to occur during the week, with Tuesday and Thursday being the highest-volume days. For Netflix, the most targeted day is Sunday, when people are taking a break to watch some TV.

Phishing attacks become more targeted

Vade Secure has also noticed that attackers are starting to decrease the number of times a particular URL is used in a phishing campaign. Instead, attackers are using unique URLs in each phishing email in order to bypass mail filters.

“What should be more concerning to security professionals is that phishing attacks are becoming more targeted,” continued Vade Secure’s report. “When we correlated the number of phishing URLs against the number of phishing emails blocked by our filter engine, we found that the number of emails sent per URL dropped more than 64% in Q3. This suggests that hackers are using each URL in fewer emails in order to avoid detection by reputation-based security defenses. In fact, we’ve seen sophisticated phishing attacks where each email contains a unique URL, essentially guaranteeing that they will bypass traditional email security tools.”
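The metric Vade Secure describes is simple to picture: divide the number of phishing emails blocked by the number of distinct URLs they contain, and watch that ratio fall as attackers rotate URLs. Here is a minimal sketch of that calculation, assuming a plain list of (email ID, URL) pairs with invented sample data rather than any particular vendor’s log format.

```python
# Minimal sketch: compute "emails per phishing URL" from blocked-email records.
# Each record is (email_id, url); the sample data below is invented for illustration.
from collections import Counter

def emails_per_url(records):
    url_counts = Counter(url for _email_id, url in records)
    total_emails = len(records)
    distinct_urls = len(url_counts)
    return total_emails / distinct_urls if distinct_urls else 0.0

blocked_q2 = [("e1", "http://a.example/x"), ("e2", "http://a.example/x"),
              ("e3", "http://a.example/x"), ("e4", "http://b.example/y")]
blocked_q3 = [("e5", "http://c.example/1"), ("e6", "http://c.example/2"),
              ("e7", "http://c.example/3"), ("e8", "http://c.example/4")]

print(emails_per_url(blocked_q2))  # 2.0 emails per URL
print(emails_per_url(blocked_q3))  # 1.0 -- each email uses a unique URL
```

A falling ratio, as in this toy example, is the pattern the report flags: each URL appears in fewer emails, so URL reputation lists have less to work with.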

Protecting yourself from phishing attacks

As phishing attacks become more sophisticated, they also become harder to detect. Using cloud services, attackers are now able to secure their phishing forms with SSL certificates from well-known and trusted companies such as Microsoft and Cloudflare. This allows the forms to look authentic to victims.

As you can see from the phishing attack below, the login form looks legitimate, the site is on a Microsoft-owned domain, and the page is secured. To many, this would appear to be a legitimate Microsoft form. In reality, the attacker is hosting the form on a Microsoft cloud service in order to create this sense of legitimacy.

Therefore, it is always important to scrutinize a site before entering any login credentials. If the URL looks strange, the spelling or grammar is incorrect, or something does not feel right, do not enter any account credentials. Instead, contact your administrator or the company itself if you are concerned your account has problems. If you don’t know the sender, don’t open the email.

Can the police search your phone?

Enterprises work hard to protect company secrets. Here’s why the biggest threat may be law enforcement.

Can the police search your phone?

The answer to that question is getting complicated.

But it’s an important thing to know. The reason is that your phone, and the phones of every employee at your company, almost certainly contain company secrets — or provide access to those secrets.

Phones can provide access to passwords, contact lists, emails, phone call metadata, photos, spreadsheets and other company documents, location histories and much more.

Proprietary data — including information that would enable systematic hacking of company servers for sabotage, industrial espionage and worse — is protected from legal exposure by a complex set of well-understood laws and norms in the United States. But that same data is accessible from company phones.

Can the police simply take that information?  Until recently, most professionals would have said no.

Why? Because business and IT professionals tend to believe that smartphones are covered by the Fourth Amendment’s strictures against “unreasonable searches and seizures,” a protection recently reaffirmed by the Supreme Court. And smartphones are also protected by the Fifth Amendment, many would say, because divulging a passcode is akin to being “compelled” to be a “witness” against yourself.

Unfortunately, these beliefs are wrong.

The trouble with passcodes

Apple last year quietly added a new feature to iPhones designed to protect smartphone data from police searches. When you quickly press the on/off button on an iPhone five times, it turns off Touch ID and Face ID.

The thinking behind the so-called cop button is that, because police can compel you to use biometrics, but not a passcode, to unlock your phone, the feature makes it impossible for the legal system to force you to hand over information.

Unfortunately, this belief has now been undermined.

We learned this week that a Florida man named William John Montanez was jailed for six months after claiming that he forgot the passcodes for his two phones.

Montanez was pulled over for a minor traffic infraction. Police wanted to search his car. He refused. The police brought in dogs, which found some marijuana and a gun. (Montanez said the gun was his mother’s.) During the arrest, his phone got a text that said, “OMG, did they find it,” prompting police to get a warrant to search his phones. That’s when Montanez claimed he didn’t remember the passcodes, and the judge sentenced him to up to six months in jail for civil contempt.

As a precedent, this cascading series of events changes what we thought we knew about the security of the data on our phones. What started as an illegal turn ended up with jail time over the inability or unwillingness to divulge what we thought was a constitutionally protected bit of information.

We’ve also learned a lot recently about the vulnerability of location data on a smartphone.

The solution for individual users who want to keep location and other data private is to simply switch off the feature, such as the Location History feature in Google’s Android operating system. Right?

Not really. It turns out Google has been storing location data even after users turn off Location History.

The fiasco was based on false information that used to exist on Google’s site. Turning off Location History, the site said, meant that “the places you go are no longer stored.” In fact, they were stored, just not in the user-accessible Location History area.

Google corrected the false language, adding, “Some location data may be saved as part of your activity on other services, like Search and Maps.”

Stored data matters.

The FBI recently demanded from Google the data about all people using location services within a 100-acre area in Portland, Maine, as part of an investigation into a series of robberies. The request included the names, addresses, phone numbers, “session” times and duration, log-in IP addresses, email addresses, log files and payment information.

The order also said that Google could not inform users of the FBI’s demand.

Google did not comply with the request. But that didn’t keep the FBI from pushing for it.

In fact, police are evolving their methods, intentions and technologies for searching smartphones.

Police data-harvesting machines

A device called GrayKey, from a company called GrayShift, can unlock any iPhone or iPad.

GrayShift licenses the devices for $15,000 per year and up to 300 phone cracks.

It’s a turnkey system. Each GrayKey has two Lightning cables. Police need only plug in a phone, and eventually the phone’s passcode appears on the phone’s screen, giving full access.

That may be why Apple introduced in the fall a new “USB Restricted Mode” for iPhones. That mode makes it harder for police (or criminals) to crack a phone via the Lightning port.

The mode is activated by default, which is to say that the “switch” in settings for USB Accessories is turned off. With that switch off, the Lightning port won’t connect to anything after an hour of the phone being locked.

Unfortunately for iPhone users, “USB Restricted Mode” is easily defeated with a widely available $39 dongle.

And the U.S. isn’t the only country with police data-harvesting machines.

A world of trouble for smartphone data

Chinese authorities have their own technology for harvesting the data from phones, and that technology is now being deployed by police in the field. Police anywhere in the country can demand that anyone hand over a phone, which is then scanned by a device, the use of which is reportedly spreading across China.

Chinese authorities have both desktop and handheld scanner devices, which automatically extract and process emails, social posts, videos, photos, call histories, text messages and contact lists to aid them in looking for transgressions.

Some reports suggest that the devices, which are made by both Israeli and Chinese companies, are unable to crack newer iPhones but can access nearly every other kind of phone.

Another factor to be considered is that the protections of the U.S. Constitution end at the border — literally at the border.

As I’ve detailed here in the past, U.S. Customs is a “gray area” for Fifth Amendment constitutional protections.

And once abroad, all bets are off. Even in friendly, pro-privacy nations such as Australia.

The Australian government on Tuesday proposed a law called the Assistance and Access Bill 2018. If it becomes law, the act would require people to unlock their phones for police or face up to ten years in prison (the current maximum is two years).

It would empower police to legally bug or hack phones and computers.

The bill would force carriers, as well as companies such as Apple, Google, Microsoft and Facebook, to give police access to the private encrypted data of their customers if technically possible.

Failure to comply would result in fines of up to $7.3 million and prison time.

Police would need a warrant to crack, bug or hack a phone.

The bill may never become law. But Australia is just one of many nations affected by a new political will to end smartphone privacy when it comes to law enforcement.

If you take anything away from this column, please remember this: The landscape for what’s possible in the realm of police searches of smartphones is changing every day.

In general, smartphones are becoming less protected from police searches, not more protected.

That’s why the assumption of every IT department, every enterprise and every business professional — especially those of us who travel internationally on business — must be that the data on a smartphone is not safe from official scrutiny.

It’s time to rethink company policies, training, procedures and permissions around smartphones.

Employees Actively Seeking Ways to Bypass Corporate Security Protocols in 95% of Enterprises

Cyber incidents such as data theft, insider threats and malware attacks are significant security risks, and many of them are caused by a company’s own employees, whether intentionally or unknowingly, since these are people with access to corporate endpoints, data, and applications.

Among the most alarming discoveries from these security assessments was that 95 percent revealed employees actively researching, installing or executing security or vulnerability-testing tools in attempts to bypass corporate security.

These users frequently rely on anonymity tools such as Tor and VPNs to hide who is trying to break corporate security.

Christy Wyatt, CEO at Dtex Systems, said, “Some of the year’s largest reported breaches are a direct result of malicious insiders or insider negligence.”

People are the weakest security link

A survey reported by Dtex Systems last year found that 60 percent of all attacks are carried out by insiders: 68 percent of all insider breaches are due to negligence, 22 percent are from malicious insiders and 10 percent are related to credential theft. The current trend also shows that the first and last two weeks of employment are critical, as 56 percent of organizations saw potential data theft from joining or departing employees during those periods.

Increased use of cloud services puts data at risk

64 percent of the enterprises assessed had corporate information publicly accessible on the web, due in part to the increase in cloud applications and services.

To make matters worse, 87 percent of employees were using personal, web-based email on company devices. By completely removing data and activity from the control of corporate security teams, insiders are giving attackers direct access to corporate assets.

Inappropriate internet usage is driving risk

59 percent of organizations analyzed experienced instances of employees accessing pornographic websites during the work day.

43 percent had users who were engaged in online gambling activities over corporate networks, which included playing the lottery and using Bitcoin to bet on sporting events.

This type of user behavior is indicative of overall negligence and high-risk activities taking place.

Dtex Systems analyzed and prepared these risk assessments from 60 enterprises across North America, Europe and Asia, in industries including IT, finance, the public sector, manufacturing, pharmaceuticals, and media & entertainment.

Please consider your cybersecurity posture when it comes to your employees; again, people are the leading source of risk.

Vulnerable ship systems: Many left exposed to criminal hacking

Pen Test Partners’ Ken Munro and his colleagues – some of whom are former ship crew members who really understand bridge and propulsion systems – have been probing the security of ships’ IT systems for a while now and the results are depressing: satcom terminals exposed on the Internet, admin interfaces accessible via insecure protocols, no firmware signing, easy-to-guess default credentials, and so on.

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” Pen Test Partners’ Ken Munro says, and points out that the advent of always-on satellite connections has exposed shipping to hacking attacks.

A lack of security hygiene

Potential attackers can take advantage of poor security hygiene on board, but also of the poor security of protocols and systems provided by maritime product vendors.

For example, the operational technology (OT) systems that are used to control the steering gear, engines, ballast pumps and so on, communicate using NMEA 0183 messages. But there is no message authentication, encryption or validation of these messages, and they are in plain text.

“All we need to do is man in the middle and modify the data. This isn’t GPS spoofing, which is well known and easy to detect, this is injecting small errors to slowly and insidiously force a ship off course,” Munro says.
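For context, an NMEA 0183 sentence is just comma-separated ASCII ending in a two-character XOR checksum, so anyone sitting in the middle of the link can alter a field and recompute a valid checksum. Below is a minimal sketch of that idea; the true-heading sentence and the specific field edited are invented examples, not data from any real vessel.

```python
# Minimal sketch: NMEA 0183 sentences carry no authentication, only an XOR
# checksum, so a man-in-the-middle can tweak a field and "re-sign" the sentence.
from functools import reduce

def nmea_checksum(body):
    # XOR of every character between '$' and '*'
    return reduce(lambda acc, ch: acc ^ ord(ch), body, 0)

def rewrite_heading(sentence, new_heading):
    body = sentence[1:sentence.index("*")]   # strip leading '$' and trailing '*hh'
    fields = body.split(",")
    fields[1] = f"{new_heading:.1f}"         # heading field of an HDT-style sentence
    new_body = ",".join(fields)
    return f"${new_body}*{nmea_checksum(new_body):02X}"

original = "$HEHDT,90.0,T*16"                # illustrative true-heading sentence
print(rewrite_heading(original, 92.5))       # small, hard-to-notice offset
```

The point of the sketch is the one Munro makes: a small, plausible-looking change to a heading or position field, repeated over time, is far harder to spot than crude GPS spoofing.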

They found other examples of poor security practices in a satellite communication terminal by Cobham SATCOM: things like admin interfaces accessible over telnet and HTTP, a lack of firmware signing and no rollback protection for the firmware, admin interface passwords embedded in the configuration (and hashed with unsalted MD5!), and the possibility to edit the entire web application running on the terminal.
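To see why unsalted MD5 password hashes in a configuration file are a problem, note that the same password always produces the same MD5 digest, so a simple precomputed dictionary of common passwords is enough to recover it. A minimal sketch follows; the password list and the “leaked” hash are invented for illustration.

```python
# Minimal sketch: recovering a password from an unsalted MD5 hash by dictionary
# lookup. With no salt, identical passwords always yield identical hashes, so
# precomputed tables of common passwords work directly against a leaked config.
import hashlib

common_passwords = ["admin", "password", "1234", "letmein"]  # illustrative list
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common_passwords}

# Hash as it might appear in a leaked configuration file (hash of "admin" here).
leaked_hash = hashlib.md5(b"admin").hexdigest()

print(lookup.get(leaked_hash, "not in dictionary"))  # prints: admin
```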

They shared this with the public because all these flaws can be mitigated by setting a strong admin password, but they also found other issues that have to be fixed by the vendor (and so they disclosed them privately).

Electronic chart systems are full of flaws

ECDIS – electronic chart systems that are used for navigation – are also full of security flaws. They tested over 20 different ECDIS units and found things like old operating systems and poorly protected configuration interfaces. Attackers could ‘jump’ the boat by spoofing the position of the GPS receiver on the ship, or reconfigure the ECDIS to make the ship appear to be wider and longer than it is.

“This doesn’t sound bad, until you appreciate that the ECDIS often feeds the AIS [Automatic Identification System] transceiver – that’s the system that ships use to avoid colliding with each other,” Munro noted.

“It would be a brave captain indeed to continue down a busy, narrow shipping lane whilst the collision alarms are sounding. Block the English Channel and you may start to affect our supply chain.”

Tracking vulnerable ships

Pen Test Partners also created a vulnerable ship tracker by combining Shodan’s ship tracker, which uses publicly available AIS data, and satcom terminal version details.

The tracker does not show other details except the ship’s name and real-time position because they don’t want to help hackers, but it shows just how many vulnerable ships are out there.
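The idea behind the tracker is essentially a join of two data sources: AIS position reports and satcom terminal version details. Here is a rough sketch of that join with entirely invented ship names, positions and firmware versions; the real tracker is built on Shodan’s AIS data and banner details, not on static dictionaries like these.

```python
# Minimal sketch: join AIS position records with satcom terminal data to flag
# ships running firmware versions assumed (hypothetically) to be vulnerable.
ais_positions = {
    "EXAMPLE VESSEL ONE": (51.0, 1.4),
    "EXAMPLE VESSEL TWO": (35.6, 139.7),
}
satcom_versions = {
    "EXAMPLE VESSEL ONE": "1.20",
    "EXAMPLE VESSEL TWO": "1.60",
}
VULNERABLE_VERSIONS = {"1.20"}  # hypothetical list of firmware versions with known flaws

for ship, version in satcom_versions.items():
    if version in VULNERABLE_VERSIONS and ship in ais_positions:
        lat, lon = ais_positions[ship]
        print(f"{ship}: vulnerable satcom firmware {version} at {lat}, {lon}")
```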

Hacking incidents in the shipping industry

Hacking incidents affecting firms in the shipping industry are more frequent than the general public could guess by perusing the news. Understandably, the companies are eager to keep them on the down-low, if they can, as they could negatively affect their business competitiveness, Munro recently told me.

Some attacks can’t be concealed, though. For example, when A.P. Møller-Mærsk fell victim to the NotPetya malware, operations were disrupted and estimated losses reached several hundred million dollars.

That particular attack thankfully did not result in the company losing control of its vessels, but future attacks might lead to shipping security incidents and be more disruptive to that aspect of companies’ activities.

“Vessel owners and operators need to address these issues quickly, or more shipping security incidents will occur,” he concluded.

 

Hack of DNA Website Exposes Data From 92 Million Accounts

Consumer genealogy website MyHeritage said that email addresses and password information linked to more than 92 million user accounts have been compromised in an apparent hacking incident.
MyHeritage said that its security officer had received a message from a researcher who unearthed a file named “myheritage” containing email addresses and encrypted passwords of 92,283,889 of its users on a private server outside the company.
“There has been no evidence that the data in the file was ever used by the perpetrators,” the company said in a statement late Monday.

MyHeritage lets users build family trees, search historical records and hunt for potential relatives. Founded in Israel in 2003, the site launched a service called MyHeritage DNA in 2016 that, like competitors Ancestry.com and 23andMe, lets users send in a saliva sample for genetic analysis. The website currently has 96 million users; 1.4 million users have taken the DNA test.

According to MyHeritage, the breach took place on Oct. 26, 2017, and affects users who signed up for an account through that date. The company said that it doesn’t store actual user passwords, but instead stores passwords protected with what’s called a one-way hash, with a different hash key used for each customer. So we ask: why did it take so long to declare a breach?

In some past breaches, however, hashed passwords have been successfully cracked and converted back into plain-text passwords. A hacker able to crack the hashed passwords exposed in this breach could access the personal information visible when logging into someone’s account, such as the identity of family members. But even if hackers were able to get into a customer’s account, it’s unlikely they could easily access raw genetic information, since a step in the download process includes email confirmation.
In its statement, the company emphasized that DNA data is stored “on segregated systems and are separate from those that store the email addresses, and they include added layers of security.”

MyHeritage has set up a 24/7 support team to assist customers affected by the breach. It plans to hire an independent cybersecurity firm to investigate the incident and potentially beef up security. In the meantime, users are advised to change their passwords.

Why would hackers want to steal DNA data and then sell it back for ransom? They could threaten to revoke access or post the sensitive information online if not given money. This data could also be very valuable to insurance companies (medical and life) and mortgage lenders. Why? In a world where data is posted online, it could be used to genetically discriminate against people, such as by denying mortgages or increasing insurance costs. (It doesn’t help that interpreting genetics is complicated and many people don’t understand the probabilities anyway.) The data could be sold on the down-low or monetized to insurers. You can imagine the consequences: one day, I might apply for a long-term loan and get rejected because, deep in some corporate system, there is data saying I am very likely to get Alzheimer’s and die before I would repay the loan. In the future, if genetic data becomes commonplace enough, people might be able to pay a fee and get access to someone’s genetic data, the way we can now access someone’s criminal background.

Case in point: Sacramento investigators tracked down East Area Rapist suspect Joseph James DeAngelo using genealogical websites that contained genetic information from a relative, the Sacramento County District Attorney’s Office confirmed Thursday.

The effort was part of a painstaking process that began with DNA from one of the crime scenes years ago, which investigators compared to genetic profiles available online through websites that cater to individuals wanting to know more about their family backgrounds by submitting DNA samples, said Chief Deputy District Attorney Steve Grippi.

Amazon confirms that Echo device secretly shared user’s private audio [Updated]

This really should not be big news; I’ve been saying it since Alexa came out. The mic is open all the time unless you mute it, and data is saved and transmitted to Amazon. Make sure you understand the technology before you start adding these kinds of IoT devices to your home. I call them the “Internet of Threats.”

The call that started it all: “Unplug your Alexa devices right now.”

Amazon confirmed an Echo owner’s privacy-sensitive allegation on Thursday, after Seattle CBS affiliate KIRO-7 reported that an Echo device in Oregon sent private audio to someone on a user’s contact list without permission.

“Unplug your Alexa devices right now,” the user, Danielle (no last name given), was told by her husband’s colleague in Seattle after he received full audio recordings between her and her husband, according to the KIRO-7 report. The disturbed owner, who is shown in the report juggling four unplugged Echo Dot devices, said that the colleague then sent the offending audio to Danielle and her husband to confirm the paranoid-sounding allegation. (Before sending the audio, the colleague confirmed that the couple had been talking about hardwood floors.)

After calling Amazon customer service, Danielle said she received the following explanation and response: “‘Our engineers went through all of your logs. They saw exactly what you told us, exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes. ‘This is something we need to fix.'”

Danielle next asked exactly why the device sent recorded audio to a contact: “He said the device guessed what we were saying.” Danielle didn’t explain exactly how much time passed between the incident, which happened “two weeks ago,” and this customer service response.

When contacted by KIRO-7, Amazon confirmed the report and added in a statement that the company “determined this was an extremely rare occurrence.” Amazon didn’t clarify whether that meant such automatic audio-forwarding features had been built into all Echo devices up until that point, but the company added that “we are taking steps to avoid this from happening in the future.”

This follows a 2017 criminal trial in which Amazon initially fought to quash demands for audio captured by an Amazon Echo device related to a murder investigation. The company eventually capitulated.

Amazon did not immediately respond to Ars Technica’s questions about how this user’s audio-share was triggered.

Update, 5:06pm ET: Amazon forwarded an updated statement about KIRO-7’s report to Ars Technica, which includes an apparent explanation for how this audio may have been sent:
Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.

Amazon did not explain how so many spoken Alexa prompts could have gone unnoticed by the Echo owner in question. Second update: The company did confirm to Ars that the above explanation was sourced from device logs.

Ring Security Flaw Lets Unauthorized Parties Control Doorbell App

 

A security flaw found in Ring’s video doorbell can let others access camera footage even if homeowners have changed their passwords, according to media sources.

This can happen after a Ring device owner gives someone else access to the Ring app. If access was given to an ex-partner, for example, and the relationship later turned sour, that person may still be able to monitor the activity outside the front door using the camera, download the video and control the doorbell from the phone as an administrator.

No matter how many times Ring device owners change the password, the Ring app never asks users to sign in again after the password is changed.

Ring was notified of the issue in early January and claimed to have removed users who were no longer authorized. However, in the test carried out by media outlet The Information’s staff, these ex-users could still access the app for several hours.

Jamie Siminoff, CEO of Ring, has acknowledged the issue and responded that kicking users off the platform apparently slows down the Ring app.

After the issue was reported, Ring made another statement, suggesting that Ring customers should never share their usernames or passwords. The company recommended that other family members or partners sign in via Ring’s “Shared Users” feature.

In this way, device owners have control over who has access and can immediately remove users if they want.

“Our team is taking additional steps to further improve the password change experience,” said Ring in a statement.

Ring was acquired by Amazon for US$1 billion at the beginning of this year. Amazon operates an in-home delivery service, Amazon Key, which relies on security devices at the front door such as smart doorbells, door locks and security cameras.

Any security flaws like the one found in Ring will make it difficult for the e-commerce giant to convince people that it’s safe for Amazon’s delivery people to enter their houses when nobody’s home.

Please make sure to secure all of your IoT devices as we know most of them are wide open to attacks.

IoT World

Honored to be speaking at IoT World May 14-17, 2018
Santa Clara Convention Center.
@MrMichaelReese #IOTWORLD #Cybersecurity

 

Hackers built a ‘master key’ for millions of hotel rooms

Security researchers have built a master key that exploits a design flaw in a popular and widely used hotel electronic lock system, allowing unfettered access to every room in the building.

The electronic lock system, known as Vision by VingCard and built by Swedish lock manufacturer Assa Abloy, is used in more than 42,000 properties in 166 countries, amounting to millions of hotel rooms — as well as garages and storage units.

These electronic lock systems are commonplace in hotels, used by staff to provide granular controls over where a person can go in a hotel — such as their room — and even restricting the floor that the elevator stops at. And these keys can be wiped and reused when guests check out.

It turns out these key cards aren’t as secure as first thought.

F-Secure’s Tomi Tuominen and Timo Hirvonen, who carried out the work, said they could create a master key “basically out of thin air.”

Any key card will do. Even old and expired, or discarded keys retain enough residual data to be used in the attack. Using a handheld device running custom software, the researchers can steal data off of a key card — either using wireless radio-frequency identification (RFID) or the magnetic stripe. That device then manipulates the stolen key data, which identifies the hotel, to produce an access token with the highest level of privileges, effectively serving as a master key to every room in the building.

This wasn’t an overnight effort. It took the researchers over a decade of work to get here.

The researchers started their room key bypass efforts in 2003 when a colleague’s laptop was stolen from a hotel room. With no sign of forced entry or unauthorized access to the room, the hotel staff are said to have dismissed the incident. The researchers set out to find a popular brand of smart lock to examine. In their words, finding and building the master key was far from easy, and took “several thousand hours of work” on an on-off basis, and using trial and error.

“Developing [the] attack took considerable amount of time and effort,” said Tuominen and Hirvonen, in an email to ZDNet.

“We built a RFID demo environment in 2015 and were able to create our first master key for a real hotel in March 2017,” they said. “If somebody was to do this full time, it would probably take considerably less time.”

There was good news, the researchers said.

“We don’t know of anyone else performing this particular attack in the wild right now,” said the researchers, downplaying the risk to hotel customers.

Their discovery also prompted Assa Abloy to release a security patch to fix the flaws. According to their disclosure timeline, Assa Abloy was first told of the vulnerabilities a month later in April 2017, and met again over several months to fix the flaws.

The software is patched at the central server, but the firmware on each lock needs to be updated.