Category Archives: Big Data

Data security – should you care?

Every day there are new articles about data breaches, cyber attacks, ransomware, and the like. We all panic when our networks are down for routine maintenance – “I don’t have anything to do!” Now imagine a world where everything is driven by data and machines.

Today a data breach can be uncomfortable – personal information is shared with the wrong people. In the years to come, however, a data breach could mean the difference between life and death. Imagine cruising down the freeway in your autonomous car when the system is hacked and your car stops abruptly while the others do not. Imagine being on the operating table with a robot operating on you when the system is breached, and instead of taking out your appendix the bad guys make it remove your spleen – or worse, kill you.

In the years to come we all need to get a lot more educated about data security and how to avoid breaches. This applies in both our personal and professional lives. We need to ask questions of the organizations we provide data to and consume data from: how well are they performing, and how invested are they in keeping us safe?

Data security needs to move into mainstream conversations and be an integral part of any security initiative.

It’s a bird, it’s a plane, no it’s a Perdix

What’s small, fast, and launched from the bottom of a fighter jet? Not missiles, but a swarm of drones.

I watched a 60 Minutes report on Tuesday night that left me intrigued by what the military is doing with new technology. This is not just about drones; it’s about where the future is going with the following technologies:

  • Unmanned ground vehicle (UGV), such as the autonomous car.
  • Unmanned aerial vehicle (UAV), an unmanned aircraft commonly known as a “drone.”
  • Unmanned surface vehicle (USV), for operation on the surface of the water.
  • Autonomous underwater vehicle (AUV) or unmanned undersea vehicle (UUV), for operation underwater.

U.S. military officials have announced that they’ve carried out their largest-ever test of a drone swarm released from fighter jets in flight. In the trials, three F/A-18 Super Hornets released 103 Perdix drones, which then communicated with each other and performed a series of formation flying exercises mimicking a surveillance mission.

But the swarm doesn’t know how, exactly, it will perform the task before it’s released. As William Roper of the Department of Defense explained in a statement:

Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature. Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team.

Releasing drones from a fast-moving jet isn’t straightforward, as high speeds and turbulence buffet them and can cause damage. But the Perdix drone, originally developed by MIT researchers and named after a Greek mythical character who was turned into a partridge, is now in its sixth iteration and able to withstand speeds of Mach 0.6 and temperatures of -10 °C during release.

A Washington Post report last year explained that they had been developed as part of a $20 million Pentagon program to augment the current fleet of military drones. It’s hoped that the small aircraft, which weigh around a pound each and are relatively inexpensive because they’re made from off-the-shelf components, could be dropped by jets to perform missions that would usually require much larger drones, like the Reaper.

Clearly, they’re well on the way to being that useful. Now the Pentagon is working with its own Silicon Valley-style innovation organization, the Defense Innovation Unit Experimental, to build fleets of the micro-drones.

I’ll be talking about some of the individual technologies in the future.

Let me know your thoughts and what you think of this type of technology.

Top Internet Outages of 2016

As we sip warm holiday beverages and watch the year wind down, it’s customary to reflect on the past and contemplate the future. Following tradition, we took a stroll down memory lane to analyze the state of the Internet and the “cloud”. In today’s blog post we discuss the most impactful outages of 2016, examine common trends, and evaluate some of the key lessons.

As we analyzed the outages that have hit us hard this year, we noticed four clear patterns emerge.

  • DDoS attacks took center stage and clearly dominated this past year. While the intensity and frequency of DDoS attacks have been increasing over time, the attacks that plagued 2016 exposed the vulnerability of the Internet and its dependency on critical infrastructure like DNS.
  • Popular services weren’t ready for a crush of visitors. Network and capacity planning is critical to addressing the needs of the business during elevated traffic. For example, the lack of a CDN (Content Delivery Network) frontend can prove costly if it isn’t factored into the network architecture before peak load.
  • Infrastructure redundancy is critical. Enterprises spend considerable time and money on internal data center and link-level failures; however, external services and vendors are often overlooked.
  • The Internet is fragile and needs to be handled with care. Cable cuts and routing misconfigurations can have global impact, resulting in service instability and blackholed traffic.

DDoS Attacks: Hitting where it Hurts

While 2016 saw a plethora of DDoS attacks in all shapes and sizes, four attacks had the highest impact. Three of the four targeted DNS infrastructure.

On May 16th, NS1, a cloud-based DNS provider, was the victim of a DDoS attack in Europe and North America. Enterprises relying on NS1 for DNS services, such as Yelp and Alexa, were severely impacted. While this started out as an attack on DNS, it slowly spread to NS1’s online-facing assets and their website hosting provider.

The second attack, on June 25th, came in the form of 10 million packets per second targeting all 13 of the DNS root servers. It was a large-scale attack on the most critical part of the Internet’s infrastructure and resulted in roughly 3 hours of performance issues. Even though all 13 root servers were impacted, we noticed varying levels of impact intensity and resilience. There was a strong correlation between anycast DNS architecture and the impact of the attack: root servers with more anycast locations saw diluted attack traffic and were relatively more stable than root servers with fewer locations.

The mother of all DNS DDoS attacks was single-handedly responsible for bringing down SaaS companies, social networks, media, gaming, music and consumer products. On October 21st, a series of three large-scale attacks were triggered against Dyn, a managed DNS provider. The attack impacted over 1,200 domains that our customers were monitoring and had global reach, with heavy effects in North America and Europe. We saw impacts on 17 of the 20 Dyn data centers around the world, for both free and paid managed DNS services. Customers who relied only on Dyn for DNS services were vulnerable and severely impacted, but those who load-balanced their DNS name servers across multiple providers could fall back on a secondary vendor during the attack. For example, Amazon.com had multiple DNS providers, UltraDNS and Dyn, and as a result did not suffer the same unavailability as many of Dyn’s other customers.

The DDoS attack on the Krebs on Security website on September 13th was record-breaking in terms of size, peaking at 555 Gbps. Both the Dyn and Krebs attacks were driven by the Mirai botnet of hacked consumer devices. While the Internet of Things is set to revolutionize the networking industry, security needs to be top of mind.

Targeting critical infrastructure like DNS is an efficient attack strategy. The Internet, for the most part, runs like a well-oiled machine; incidents like these, however, are a reality check on network architecture and monitoring. Consider monitoring not just your online-facing assets but also any critical service, like DNS, and set up alerts so that the moment instability appears you can trigger the right mitigation strategy for your environment. A simple redundancy check like the one below is a good place to start.
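
This minimal sketch uses the dnspython package (an assumption on my part – any resolver library would do) to flag a domain whose name servers all sit with a single provider. The domain and the zone-grouping heuristic are purely illustrative:

```python
# Flag domains whose name servers all sit in one provider's zone.
# Requires dnspython 2.x (pip install dnspython); the domain and the
# zone-based grouping heuristic are illustrative only.
import dns.resolver

def check_ns_redundancy(domain):
    answers = dns.resolver.resolve(domain, "NS")
    nameservers = sorted(str(rr.target).rstrip(".") for rr in answers)
    # Group name servers by parent zone as a rough proxy for provider.
    providers = {ns.split(".", 1)[1] for ns in nameservers}
    print(f"{domain}: {len(nameservers)} name servers, "
          f"{len(providers)} provider zone(s): {sorted(providers)}")
    if len(providers) < 2:
        print("WARNING: single DNS provider - consider a secondary.")

check_ns_redundancy("example.com")
```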

Application Popularity: Overloaded Networks

When it comes to application usage and websites, there is no such thing as too many visitors – until the network underneath begins to collapse. 2016 witnessed some popular services unable to keep up with demand.

January 13th witnessed one of the largest lottery jackpots in U.S. history. Unfortunately, it also witnessed the crumbling of the Powerball website, which serves up jackpot estimates and winning numbers. Increased packet loss and extended page load times indicated that neither the network nor the application could handle the uptick in traffic. In an attempt to recover, Powerball introduced Verizon’s Edgecast CDN right around the time of the drawing. Traffic was distributed across three different data centers (the Verizon Edgecast CDN, Microsoft’s data center and the Multi-State Lottery Association data center), but it was too late: the damage was done, and the user experience was substandard.

The summer of 2016 saw a gaming frenzy, thanks to Pokemon Go. On two separate occasions (July 16th and July 20th), Pokemon trainers were unable to catch and train their favorite characters. The first outage, characterized by elevated packet loss for 4 hours, was a combination of the network architecture and overloaded target servers unable to handle the uptick in traffic. The second, worldwide outage was caused by a software update that resulted in user login issues and incomplete game content.

November 8th was a defining moment in global politics. It was also the day the Canadian immigration webpage was brought down by scores of frantic Americans. As U.S. states closed their presidential polls and results began trickling in, the immigration website started choking before finally giving up. We noticed 94% packet loss at one of the upstream ISPs, an indication that the network could not keep up with the spike in traffic.

Benchmarking and capacity planning are critical for network operations. Best practices include testing your network prior to new software updates and large-scale events. Bolster your network architecture with CDN vendors and anycast architectures to maximize user experience, and monitor to make sure your vendors are performing as promised. Even a basic probe like the sketch below can catch a vendor falling short.
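
This bare-bones probe uses only the Python standard library. The URL and threshold are illustrative; real monitoring would probe from many vantage points and track trends rather than single samples:

```python
# Bare-bones availability/latency probe using only the standard library.
import time
import urllib.request

def probe(url, timeout=10, threshold_ms=2000):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # include transfer time in the measurement
            status = resp.status
    except Exception as exc:
        print(f"ALERT: {url} unreachable: {exc}")
        return
    elapsed_ms = (time.monotonic() - start) * 1000
    verdict = "ALERT: slow" if elapsed_ms > threshold_ms else "OK"
    print(f"{url} -> HTTP {status} in {elapsed_ms:.0f} ms [{verdict}]")

probe("https://example.com/")
```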

Fragile Infrastructure: Cable Cuts and Routing Outages

The network is not free from the occasional cable cut or user-induced misconfiguration. Let’s look at how a simple oversight can sometimes impact services across geographical boundaries.

On April 22nd, AWS experienced route leaks when more-specific /21 prefixes were advertised by Innofield (AS 200759) as belonging to a private AS and propagated through Hurricane Electric. This caused all Amazon-destined traffic transiting Hurricane Electric to be routed to the private AS rather than Amazon’s AS. While the impact of this route leak was minimal, it was rather tricky: the leaked prefixes were not identical to Amazon’s, but more specific, and thus preferred over Amazon’s routes. This was no malicious act, but rather a misconfiguration on a route optimizer at Innofield. The sketch below shows why more-specific prefixes win.
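
This toy longest-prefix-match lookup uses Python’s standard ipaddress module; the prefixes and AS numbers are made up for illustration:

```python
# Longest-prefix match: the most specific covering route wins, which is
# why a leaked /21 beats the legitimate /16. All values are invented.
import ipaddress

routes = {
    ipaddress.ip_network("10.20.0.0/16"): "AS64500 (legitimate origin)",
    ipaddress.ip_network("10.20.8.0/21"): "AS64511 (leaked, more specific)",
}

def best_route(dest):
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    return max(matches, key=lambda net: net.prefixlen)

chosen = best_route("10.20.9.1")
print(f"10.20.9.1 routed via {chosen} -> {routes[chosen]}")
# -> the leaked /21 wins, even though the /16 is the rightful route.
```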

Level 3 experienced some serious network issues across several locations in the U.S. and U.K. on May 3rd. The outage lasted about an hour and took down services including Cisco, Salesforce, SAP and Viacom. We were able to trace the issue to a possible misconfiguration or failure in one of the transcontinental links.

On May 17th, a series of network- and BGP-level issues were correlated to a possible cable fault in the cross-continental SEA-ME-WE-4 line. While the fault seemed to be located in the Western European region, it had ripple effects across half the globe, affecting Tata Communications in India and the TISparkle network in Latin America. While monitoring your networks, look for indicators of cable faults: dropped BGP sessions or peering failures, and multiple impacted networks with elevated loss and jitter, as in the sketch below.
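
This small sketch computes loss and jitter from a list of ping round-trip times. The samples are invented, and the jitter formula is a simplified take on the RFC 3550 notion:

```python
# Loss and jitter from ping round-trip times; None marks a lost probe.
import statistics

def loss_and_jitter(rtts_ms):
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    # Jitter as mean absolute difference between consecutive RTTs.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = statistics.mean(diffs) if diffs else 0.0
    return loss_pct, jitter

samples = [22.1, 23.0, None, 21.8, 95.4, None, 22.5, 90.2]
loss, jitter = loss_and_jitter(samples)
print(f"loss: {loss:.0f}%  jitter: {jitter:.1f} ms")
```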

On July 10th, JIRA, the SaaS-based project tracking tool, was offline for about an hour. From a BGP reachability perspective, all routes to the /24 prefix for JIRA were withdrawn from Level 3. This sent the self-adjusting routing algorithm searching for an alternate path. Unfortunately, the backup path funneled all the traffic to the wrong destination AS: due to a misconfiguration of the backup prefix, traffic terminated in NTT’s network instead of being routed to JIRA.

Looking Ahead

So, what have we learned? By their very nature, networks are bound to have outages and security threats. Smarter networks are not the ones built to be foolproof, but the ones that can quickly react to failures and inconsistencies. As the Internet becomes the glue that binds SaaS and service delivery, it is paramount to have visibility into its shortcomings, especially during a crisis. As you move into the new year, take stock of the past year’s events and prepare for the future. Bolster your network security, but at the same time monitor how your network performs under adverse conditions. Detect bottlenecks and common points of failure, and distribute dependencies across ISPs, DNS service providers and hosting providers. Wishing you a happy, outage-free New Year!

What is blockchain?


Blockchain is a term you see fairly often when browsing tech—and non-tech—sites these days. It is widely known as the technology underpinning Bitcoin (what’s bitcoin, BTW?), the cryptocurrency created by a mysterious scientist in 2009. Some even take it for a synonym of bitcoin. But the reality is that blockchain is a disruptive technology with the potential to transform a wide variety of business processes.

In this article, we will clarify what the blockchain is—and what it isn’t—its relation to bitcoin, and its applications beyond the realm of cryptocurrencies.

What is blockchain anyway?

At its essence, the blockchain is a distributed ledger—or list—of all transactions across a peer-to-peer network. Put simply, you can think of blockchain as a data structure containing transactions that is shared and synced among nodes in a network (but in fact it gets much more complicated than that). Each node has a copy of the entire ledger and works with others to maintain its consistency.

Changes to the ledger are made through consensus among the participants. When someone wants to add a new record to the blockchain ledger, it has to be verified by the participants in the network, all of whom have a copy of the ledger. If a majority of the nodes agree that the transaction looks valid, it will be approved and will be inserted in a new “block” which will be appended to the ledger at all the locations where it is stored.

Along with the use of cryptography and digital signatures, this approach addresses the issue of security while obviating the need for a central authority.

Each new block can store one or more transactions and is tied to previous ones through digital signatures or hashes. Transactions are stored indefinitely and can’t be modified after they’ve been validated and committed to the ledger. The toy example below shows the structure.
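
This is only a sketch of the chaining idea, in Python; a real blockchain adds consensus rules, signatures and far more:

```python
# A toy hash-linked ledger: each block stores transactions plus the
# hash of the previous block, so altering history breaks the chain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Recompute every link; tampering invalidates the chain from there on.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                         # True
chain[0]["transactions"][0]["amount"] = 500  # rewrite history
print(verify(chain))                         # False
```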

What makes blockchain unique?

Blockchain’s approach to dealing with transactions is a break from the usual centralized, broker-based model, in which a central server is responsible for processing and storing all transactions. This decentralization is one of the key features that makes blockchain attractive: it creates fault tolerance, so there’s no single point of failure, while providing security on par with what the centralized paradigm offers.

This enables companies, entities and individuals to make and verify transactions instantaneously without relying on a central authority. It is especially useful in the finance industry, where the transfer of money is usually tied to and controlled by clearing houses that maintain ledgers, take days to verify and execute a transaction, and collect considerable fees. The blockchain can verify and apply changes within seconds at a cost of next to nothing. In the blockchain model, each bank in a network would have its own copy of the ledger, and transactions would be verified and carried out through communications between the banks. This will cut costs and increase efficiency.

Another unique feature of the blockchain is its immutability: it is nearly impossible to tamper with records previously stored in a blockchain. Each new block is tied to previous ones through cryptographic algorithms and calculations, which means the slightest alteration will immediately disrupt and invalidate the entire chain. And with the ledger replicated across many nodes, it becomes even harder to falsify transactions and the ledger’s history.

What are the applications of blockchain?

Bitcoin was the first concrete application of blockchain. It was proposed in 2008 in a paper by a person—or a group of people, some say—called Satoshi Nakamoto. Bitcoin uses blockchain to digitally send bitcoins—its namesake currency—between parties without the need for a third-party broker.

But bitcoin isn’t the only application of blockchain. The distributed ledger makes it easier to create cost-efficient business networks where virtually anything of value can be tracked and traded—without requiring a central point of control.

For instance, blockchain can be used to keep track of assets and goods as they move down the supply chain. Other industries, such as stock exchanges, can use the blockchain mechanism to transfer ownership in a secure, peer-to-peer fashion.

In the IoT industry, blockchain can help connect billions of devices in a secure way that won’t require centralized cloud servers. It can also be the backbone that enables autonomous machines to buy and sell services from each other in the future. (Standards will have to be in place before such devices can be fully secured.)

Other promising industries include retail, healthcare and gaming, among many others.

Smart contracts will take the blockchain to the next level, enabling it to do more than just exchange information and to take part in more complex operations.

Different flavors of blockchain

Depending on the specific needs of the application making use of blockchain, several of its characteristics can change. In fact, implementations of blockchain, and the cryptocurrencies built on it, vary considerably across sectors.

Permission

Blockchains can be public or “permissionless,” in which everyone can participate and add transactions; this is the model used by bitcoin. Other organizations are exploring “permissioned” blockchains, in which the network is made up of known participants only. Security and authentication mechanisms vary across these different blockchains.

Anonymity

With ledgers being distributed among nodes, the level of anonymity is also a matter of importance. For instance, bitcoin does not require any personally identifiable information to send or receive payments on the blockchain. However, all transactions are recorded online for everyone to see, which lends a certain amount of transparency and makes total anonymity quite complicated. That’s why it’s known as pseudonymous.

Other implementations of blockchain, such as ZeroCoin, use other mechanisms (zero-knowledge proofs) to enable verification without publishing transaction data.

Consensus

Consensus is the mechanism nodes in a blockchain use to securely verify and validate transactions while maintaining the consistency and integrity of the ledger. The topic is a bit complicated, but the most prevalent form is the “proof of work” (PoW) consensus model used by bitcoin, in which nodes—called “miners”—spend computation cycles running intensive hashing algorithms to prove the authenticity of the block they’re proposing to add. The PoW mechanism also helps prevent DoS attacks and spam.

“Proof of stake” is another popular consensus model, in which nodes are required to prove ownership of a certain amount of currency (their “stake”) to validate transactions. A toy version of the idea is sketched below.
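
The names and balances here are invented, and real protocols add randomness beacons, slashing and much more; the point is only that the chance of validating is proportional to stake:

```python
# Stake-weighted validator selection, the core idea of proof of stake.
import random

stakes = {"alice": 50, "bob": 30, "carol": 20}  # currency units held

def pick_validator(stakes):
    nodes = list(stakes)
    return random.choices(nodes, weights=[stakes[n] for n in nodes])[0]

picks = [pick_validator(stakes) for _ in range(10_000)]
for node in stakes:
    print(node, round(picks.count(node) / len(picks), 2))  # ~0.5/0.3/0.2
```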

This is just the beginning

Blockchain is a new way of communicating and transferring data. We still don’t know quite how it will evolve, but we do know it is bound to change quite a few things. A look at the figures presented in this Business Insider article shows why we can call it a disruptive technology.

I don’t know about you, but I’m excited about the blockchain surprises waiting over the horizon, and I will be exploring its uses more in the coming months.

 

Part 2: So how does Bitcoin work?

In traditional money systems, governments simply print more money when they need to.  But in bitcoin, money isn’t printed at all – it is discovered.  Computers around the world ‘mine’ for coins by competing with each other.

How does mining take place?

People are sending bitcoins to each other over the bitcoin network all the time, but unless someone keeps a record of all these transactions, no one would be able to keep track of who had paid what. The bitcoin network deals with this by collecting all of the transactions made during a set period into a list, called a block. It’s the miners’ job to confirm those transactions and write them into a general ledger.

Making a hash of it

This general ledger is a long list of blocks, known as the ‘blockchain’. It can be used to explore any transaction made between any bitcoin addresses, at any point on the network. Whenever a new block of transactions is created, it is added to the blockchain, creating an increasingly lengthy list of all the transactions that ever took place on the bitcoin network. A constantly updated copy of the blockchain is given to everyone who participates, so that they know what is going on.

But a general ledger has to be trusted, and all of this is held digitally. How can we be sure that the blockchain stays intact, and is never tampered with? This is where the miners come in.

When a block of transactions is created, miners put it through a process. They take the information in the block, and apply a mathematical formula to it, turning it into something else. That something else is a far shorter, seemingly random sequence of letters and numbers known as a hash. This hash is stored along with the block, at the end of the blockchain at that point in time.

Hashes have some interesting properties. It’s easy to produce a hash from a collection of data like a bitcoin block, but it’s practically impossible to work out what the data was just by looking at the hash. And each hash is effectively unique to its input: if you change just one character in a bitcoin block, its hash will change completely.
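
You can see this “avalanche” effect with a few lines of Python and SHA-256, the hash family bitcoin builds on (bitcoin actually applies it twice):

```python
# One changed character produces a completely different SHA-256 hash.
import hashlib

print(hashlib.sha256(b"Send 10 BTC to Alice").hexdigest())
print(hashlib.sha256(b"Send 10 BTC to Alicf").hexdigest())  # one char off
```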

Miners don’t just use the transactions in a block to generate a hash. Some other pieces of data are used too. One of these pieces of data is the hash of the last block stored in the blockchain.

Because each block’s hash is produced using the hash of the block before it, it becomes a digital version of a wax seal. It confirms that this block – and every block after it – is legitimate, because if you tampered with it, everyone would know.

If you tried to fake a transaction by changing a block that had already been stored in the blockchain, that block’s hash would change. If someone checked the block’s authenticity by running the hashing function on it, they’d find that the hash was different from the one already stored along with that block in the blockchain. The block would be instantly spotted as a fake.

Because each block’s hash is used to help produce the hash of the next block in the chain, tampering with a block would also make the subsequent block’s hash wrong too. That would continue all the way down the chain, throwing everything out of whack.

Competing for coins

So, that’s how miners ‘seal off’ a block. They all compete with each other to do this, using software written specifically to mine blocks. Every time someone successfully creates a hash, they get a reward of 25 bitcoins, the blockchain is updated, and everyone on the network hears about it. That’s the incentive to keep mining, and keep the transactions working.
The problem is that it’s very easy to produce a hash from a collection of data. Computers are really good at this. The bitcoin network has to make it more difficult, otherwise everyone would be hashing hundreds of transaction blocks each second, and all of the bitcoins would be mined in minutes. The bitcoin protocol deliberately makes it more difficult, by introducing something called ‘proof of work’.

The bitcoin protocol won’t just accept any old hash. It demands that a block’s hash has to look a certain way; it must have a certain number of zeroes at the start. There’s no way of telling what a hash is going to look like before you produce it, and as soon as you include a new piece of data in the mix, the hash will be totally different.

Miners aren’t supposed to meddle with the transaction data in a block, but they must change the data they’re using to create a different hash. They do this using another, random piece of data called a ‘nonce’. This is combined with the transaction data to create a hash. If the hash doesn’t fit the required format, the nonce is changed and the whole thing is hashed again. It can take many attempts to find a nonce that works, and all the miners in the network are trying to do it at the same time. That’s how miners earn their bitcoins. In code, the search looks something like the sketch below.
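
Real mining double-SHA-256 hashes a binary block header against a vastly harder target; this Python toy only shows the shape of the idea:

```python
# Toy proof of work: grind nonces until the hash starts with enough
# zeroes. Difficulty 4 takes ~65,000 tries on average.
import hashlib

def mine(block_data, difficulty=4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("some transactions")
print(f"nonce={nonce} hash={digest}")
```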

Hope this helps explain how bitcoin mining works. Stay tuned for tomorrow’s post on bitcoin transactions.

Part 1: What is a Bitcoin and how does it work?

So I’ve been asked several times in the past couple of weeks, what is a Bitcoin and how does it work?

Bitcoin is a form of digital currency, created and held electronically. No one controls it. Bitcoins aren’t printed, like dollars or euros – they’re produced by people, and increasingly businesses, running computers all around the world, using software that solves mathematical problems.

It’s the first example of a growing category of money known as cryptocurrency.

What makes it different from normal currencies?

Bitcoin can be used to buy things electronically. In that sense, it’s like conventional dollars, euros, or yen, which are also traded digitally.

However, bitcoin’s most important characteristic, and the thing that makes it different from conventional money, is that it is decentralized. No single institution controls the bitcoin network. This puts some people at ease, because it means that a large bank can’t control their money.

Who created it?

A software developer called Satoshi Nakamoto proposed bitcoin, which was an electronic payment system based on mathematical proof. The idea was to produce a currency independent of any central authority, transferable electronically, more or less instantly, with very low transaction fees.

Who prints it?
No one. This currency isn’t physically printed in the shadows by a central bank that is unaccountable to the population and makes its own rules. Such banks can simply produce more money to cover the national debt, thus devaluing their currency.

Instead, bitcoin is created digitally, by a community of people that anyone can join. Bitcoins are ‘mined’, using computing power in a distributed network.

This network also processes transactions made with the virtual currency, effectively making bitcoin its own payment network.

So you can’t churn out unlimited bitcoins?

That’s right. The bitcoin protocol – the rules that make bitcoin work – says that only 21 million bitcoins can ever be created by miners. However, these coins can be divided into smaller parts (the smallest divisible amount is one hundred-millionth of a bitcoin, called a ‘satoshi’ after the founder of bitcoin). The arithmetic below shows what that means in practice.
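
The total supply in satoshis fits comfortably in a 64-bit integer, which is why bitcoin implementations count amounts in whole satoshis:

```python
# 21 million coins, 100 million satoshis each.
SATOSHIS_PER_BTC = 100_000_000
MAX_BTC = 21_000_000

print(f"{MAX_BTC * SATOSHIS_PER_BTC:,} satoshis, ever")
# -> 2,100,000,000,000,000 satoshis (about 2.1 quadrillion)
```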

What is bitcoin based on?

Conventional currency has been based on gold or silver. Theoretically, you knew that if you handed over a dollar at the bank, you could get some gold back (although this didn’t actually work in practice). But bitcoin isn’t based on gold; it’s based on mathematics.

Around the world, people are using software programs that follow a mathematical formula to produce bitcoins. The mathematical formula is freely available, so that anyone can check it.

The software is also open source, meaning that anyone can look at it to make sure that it does what it is supposed to.

What are its characteristics?

Bitcoin has several important features that set it apart from government-backed currencies.

1. It’s decentralized

The bitcoin network isn’t controlled by one central authority. Every machine that mines bitcoin and processes transactions makes up a part of the network, and the machines work together. That means that, in theory, one central authority can’t tinker with monetary policy and cause a meltdown – or simply decide to take people’s money away from them, as the European Central Bank decided to do in Cyprus in early 2013. And if some part of the network goes offline for some reason, the money keeps on flowing.

2. It’s easy to set up

Conventional banks make you jump through hoops simply to open a bank account. Setting up merchant accounts for payment is another daunting task, beset by bureaucracy. By contrast, you can set up a bitcoin address in seconds, no questions asked, and with no fees payable.

3. It’s anonymous

Well, kind of. Users can hold multiple bitcoin addresses, and they aren’t linked to names, addresses, or other personally identifying information. However…

4. It’s completely transparent

…bitcoin stores details of every single transaction that ever happened in the network in a huge version of a general ledger, called the blockchain. The blockchain tells all.

If you have a publicly used bitcoin address, anyone can tell how many bitcoins are stored at that address. They just don’t know that it’s yours.

There are measures that people can take to make their activities more opaque on the bitcoin network, though, such as not using the same bitcoin addresses consistently, and not transferring lots of bitcoin to a single address.

5. Transaction fees are minuscule

Your bank most likely charges you a fee for international transfers. Bitcoin doesn’t.

6. It’s fast

You can send money anywhere and it will arrive minutes later, as soon as the bitcoin network processes the payment.

7. It’s non-repudiable

When your bitcoins are sent, there’s no getting them back, unless the recipient returns them to you. They’re gone forever.

So, bitcoin has a lot going for it, in theory. But how does it work in practice? Stay tuned for more tomorrow.

 

Internet of Things (IOT), Big Data, Business Intelligence, Data Science, Digital Transformation: Hype or Reality? Facts and Figures


The Internet of Things (IoT) is the internetworking of physical devices, vehicles, buildings and other items – embedded with electronics, software, sensors, actuators and network connectivity – that enables these objects to collect and exchange data without requiring human-to-human or human-to-computer interaction.

According to IDC, worldwide IoT market spend will grow from $592 billion in 2014 to $1.3 trillion in 2019, while the installed base of IoT endpoints will grow from 9.7 billion in 2014 to 30 billion in 2020, by which point 40% of all data in the world will result from machine-to-machine (M2M) communication.

A Gartner survey shows that 43% of organizations are using or plan to implement the Internet of Things in 2016. Gartner also predicts $2.5M per minute in IoT spending and 1M new IoT devices sold every hour by 2021.

The industrial IoT market is estimated to reach $60 trillion by 2030.

By 2020, IoT will save consumers and businesses $1 trillion a year in maintenance, services and consumables.

By 2022, a blockchain-based business will be worth $10B. Blockchain is a digital platform that records and verifies transactions in a tamper- and revision-proof way that is public to all.

By 2018, cloud computing infrastructure and platforms are predicted to grow 30% annually. Many enterprises have failed to achieve success with cloud computing because they failed to develop a cloud strategy linked to business outcomes, and many companies are unsure how to initiate their cloud projects. The key success factors for cloud projects are good design of the business processes, a focus on the services delivered, and a well-designed transition from the “as is” to the “to be” application architecture.

By 2019, the global business intelligence market will exceed $23 billion, and the global predictive analytics market will reach $3.6 billion by 2020, driven by the growing need to replace uncertainty in business forecasting with probability, and by the increasing popularity of prediction as a key to improved decision making. Predictive analytics is the branch of advanced analytics used to make predictions about unknown future events; it applies techniques from data mining, statistics, modeling, machine learning and artificial intelligence to current data in order to make predictions about the future. At its heart is the increased need and desire among businesses to gain greater value from their data – over 80% of the data businesses generate and collect is unstructured or semi-structured and needs special treatment using big data analytics. The sketch below shows the basic predict-from-history pattern.
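
This minimal example uses scikit-learn (my choice of toolkit; any would do) with invented data: train on historical examples, then predict an outcome you haven’t seen:

```python
# Train on historical examples, then predict an unseen future outcome.
# Features: [monthly_logins, support_tickets]; label: churned (1) or not (0).
from sklearn.linear_model import LogisticRegression

X = [[2, 5], [40, 1], [3, 7], [35, 0], [1, 9], [50, 2]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)

print(model.predict([[4, 6]]))        # likely churn -> [1]
print(model.predict_proba([[4, 6]]))  # class probabilities
```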

Big data investments will account for over $46 billion in 2016, reaching $72 billion by the end of 2020.

A new breed of analysts called “data scientists” is emerging, and data science courses are being introduced into degrees ranging from computer science to business. Data scientists usually require a mix of skills spanning mathematics, statistics, computer science, algorithms, machine learning and, most importantly, business knowledge. Data scientists who lack business knowledge will almost certainly fail. They also need to communicate their findings to C-level management so the right strategic decisions can be made.

Data science needs to be a fundamental component of any digital transformation effort.

All sectors will have to hire and educate a significant number of data scientists.

Let’s take the example of the energy sector, where digital transformation is playing a crucial role in reaching global and European energy targets:

87% of CFOs agree that growth requires faster data analysis, and networked enterprises are 50% more likely to increase their market share.

With the 2020 energy climate package and the 2050 energy roadmap, Europe has engaged early in the transformation of its Energy system.

Just as the Industrial Revolution was the transition to new manufacturing processes between 1760 and 1840, the digital revolution will be the disruptive transformation of the 21st century, toward a new economy, a new society and a new era of low-emission energy.

Many large Energy players will appoint Chief Digital Officers to drive the digital transformation of their processes and create new businesses.

Here are four recommendations to boost customer-centric energy innovation, each of which depends heavily on adopting a digital transformation roadmap:

  1. Accelerate customer innovation by making data available to market participants
  2. Build massive energy services as downloadable apps through B2B, B2C and C2C energy exchange platforms
  3. Enable full customer participation by making usability as simple as one click
  4. Build a pan-European energy union of customer services by extending to cross-border energy management

With the enablement of IoT, BI, predictive analytics and data science, and with proven business models, we predict that 90% of commercial and industrial customers and 70% of residential customers will adopt smart energy technologies by 2025.

Let me ask you the following questions:

  • What are the top 3 priorities that justify digital transformation in your business?
  • Are you planning to set up a data science team?
  • Are you considering digital for existing business improvement or for creating new businesses?

 

The IT Guy Becomes a Player


Back in the days of mainframes, the ubiquitous “IT Guy” was responsible for planning, building and maintaining in-house infrastructure, as well as developing custom solutions to automate back-office functions. And while the role evolved some over the years, the first truly tectonic shift occurred when cloud computing emerged, combined with aftershocks in the form of mobile, social and Big Data. As technology became commoditized and consumerized, some analysts suggested in-house IT would become obsolete.

In reality, the role of the IT Guy is evolving into one of greater value and significance.

Recently, IDC and Forrester Research, two of the largest technology industry research firms, released predictions that IT is poised to take the lead as companies move toward their digital futures. The reason: While many companies outsourced their initial forays into cloud and mobile applications, they can’t continue to depend on external consultancies for much longer. Digital transformation is so critical to the future of businesses, the analysts say, that relying on external parties to provide solutions will be too dangerous. In-house IT will, of necessity then, become the core driver of “how business does business.”

Taking on a more important role

Even in today’s fast-moving environments, the role of the IT department has increased in value across the enterprise as it works with various internal teams and links its goals to the wider objectives of the business. A recent Forrester survey asked company executives to name the most important senior leader in driving or supporting business transformation and innovation, and one of the top answers was the CIO – ahead even of the CEO.

As masters of all things digital, talented CIOs are perfectly positioned to take the lead on leveraging new tech elements to help shape a business’ overall strategy – and on using high-performance networks to effectively pursue it.

This new, more challenging—but much more valuable—vision of the IT Guy’s role as an innovator and strategist also seems to be widely accepted, according to a survey by Gartner Research.

The CIO as chief innovator is trending up: The Gartner survey says more CIOs are adding value to their roles by leading boardroom discussions about using cloud, mobile, analytics and social technologies to drive new product development, online marketing and other customer-facing initiatives. The research firm concludes that the perception of the CIO has evolved from an IT service provider to an enabler of digital products that support business.

And that’s only the beginning. The next great leap for businesses will be the Internet of Things (IoT), and CIOs will have the opportunity to lead by solving the challenges that will come with IoT integration.

Three types of CIOs

“IoT requires the creation of a software platform that integrates the company’s IoT ecosystem with its products and services,” says Peter Sondergaard, senior vice president, Gartner Research, adding that CIOs will be the “builders” of the new digital platforms and high-performance networks that IoT projects will require. However, while the change of role might be adventurous for some, not every CIO wants to embrace the change from being operational to innovative, according to an IDC study, “The Changing Role of IT Leadership: CIO Perspectives for 2016.”

The study outlines three types of CIOs: operational (keeping the lights on and costs down); business services manager (providing an agile portfolio of business services); and chief innovation officer (business innovator).

Business innovator is the role CIOs must play in order to have a meaningful future, says Michael Jennett, vice president for enterprise mobile strategy at IDC.

“For these executives to stay relevant, they must shift their focus to transformation and innovation,” he adds. “CIOs who stay operational will find themselves further marginalized over the next three years.”

The big question for many businesses, then, is whether the IT Guy will be prepared to incorporate an understanding of the company’s mission and develop value-added strategies to generate, as Jennett says, “revenue out of what you do.”

Interestingly, the IDC study found that while more than 40 percent of line-of-business executives view the CIO as an innovator, only 25 percent of CIOs describe their own role that way, with more than 40 percent viewing themselves as primarily operational, and 34 percent as business service managers.

However, with global digital commerce revenue at over $1 trillion annually, CEOs see digital as fuel for growth, and expectations for IT departments are running high. To succeed in this environment, and bring value, the IT Guy needs to rise to the occasion and take on responsibility for digital innovation, as well as maintaining the infrastructure.

 

What Is Threat Intelligence? Definition and Examples


Key Takeaways

  • Threat intelligence is the output of analysis based on identification, collection, and enrichment of relevant data and information.
  • Always keep quantifiable business objectives in mind, and avoid producing intelligence “just in case.”
  • Threat intelligence falls into two categories. Operational intelligence is produced by computers, whereas strategic intelligence is produced by human analysts.
  • The two types of threat intelligence are heavily interdependent, and both rely on a skilled and experienced human analyst to develop and maintain them.

Everybody in the security world knows the term “threat intelligence.” At this point, even some non-security folks have started talking about it.

But it’s still very poorly understood.

Raw data and information is often mislabeled as intelligence, and the process and motives for producing threat intelligence are often misconstrued.

If you’re new to the field, or you think your organization could benefit from a carefully constructed threat intelligence program, here’s what you need to know first.

Defining Threat Intelligence

Although most people believe they intuitively understand the concept, it pays to work from a precise definition of threat intelligence.

Threat intelligence is the output of analysis based on identification, collection, and enrichment of relevant data and information.

As already alluded to, raw data and information do not constitute intelligence. Equally, analyzed data and information will only qualify as intelligence if the result is directly attributable to business goals.

A truly well-planned and executed threat intelligence initiative has the potential to provide enormous benefit to your organization. On the flip side, if you aren’t careful, it’s easy to sink huge amounts of resources into an intelligence program without really achieving anything.

It would be foolish, then, to invest heavily in threat intelligence without having a clear idea of what you’re trying to achieve and why.

Simply “keeping the business secure” is not a valid motive for threat intelligence, but it’s the only driver for many organizations. The issue here is that as a goal it’s spectacularly generic, and almost impossible to measure.

A threat intelligence program with this motive is at serious risk of failing to identify what is and isn’t relevant or important.

A much better business goal, which is both relevant and tangible, would be to reduce operational risk by a given margin within a specified time period. Operational risk is a regularly measured and monitored business metric, and the results (however they’re derived) are there for all to see.

As a result, a threat intelligence program designed to reduce operational risk will be far more focused on those aspects of security that can be clearly linked to the markers used to measure cyber risk. As an example, intelligence relating to recent attacks on similar organizations within the same industry would be highly relevant, whereas analysis of the most recent high-profile attack in a totally different industry would not.

Intelligence Typologies

Perhaps the single most important phase of the whole process is analysis. During this phase, large quantities of raw data and information are processed into relevant, actionable intelligence.

But the actual analysis process can vary enormously depending on the desired output. Broadly speaking, depending on the form of analysis used to produce it, threat intelligence falls into two categories: operational and strategic.

Operational intelligence is produced entirely by computers, from data identification and collection through to enrichment and analysis. A common example of operational threat intelligence is the automatic detection of distributed denial of service (DDoS) attacks, whereby a comparison between indicators of compromise (IOCs) and network telemetry is used to identify attacks much more quickly than a human analyst could, as in the sketch below.
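
Stripped to its essence, that operational pattern is just telemetry matched against a set of indicators. The IOC feed and flow records in this Python sketch are invented for illustration:

```python
# Match network flow telemetry against a feed of known-bad IPs (IOCs).
KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.77"}  # hypothetical feed

flows = [
    {"src": "10.0.0.5", "dst": "198.51.100.23", "bytes": 12_000},
    {"src": "10.0.0.8", "dst": "93.184.216.34", "bytes": 800},
    {"src": "10.0.0.5", "dst": "203.0.113.77", "bytes": 4_500},
]

for flow in flows:
    if flow["dst"] in KNOWN_BAD_IPS:
        print(f"ALERT: {flow['src']} contacted known-bad host {flow['dst']}")
```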

Strategic intelligence focuses on the much more difficult and cumbersome process of identifying and analyzing threats to an organization’s core assets, including employees, customers, infrastructure, applications, and vendors. To achieve this, highly skilled human analysts are required to develop external relationships and proprietary information sources; identify trends; educate employees and customers; study attacker tactics, techniques, and procedures (TTPs); and ultimately, make the defensive architecture recommendations necessary to combat identified threats.

A common example of strategic intelligence is the use of threat actor TTPs to inform proactive security measures such as enhanced vulnerability and patch management or comprehensive security awareness training.

And it’s natural at this stage to wonder …

Which Is Better?

This question is problematic for two reasons.

First, it’s the natural question to ask when presented with two options, and second, it totally misses the point.

The reality of threat intelligence is that both operational and strategic intelligence are required. More than that, though, they actively rely on each other.

For a start, the fact that the end-to-end process for producing operational intelligence involves no human analysts is misleading. As Levi Gundert points out in his threat intelligence white paper, achieving an automated operational workflow is highly dependent on the presence of at least one talented and experienced data architect, who is responsible for designing, creating, and calibrating tools capable of performing this vital intelligence function.

And the only reason that any analysts are available to produce strategic intelligence is because the operational “heavy lifting” is being done automatically by computers. If that weren’t the case, intelligence analysts would be totally bogged down with detail and false positives.

If this is starting to seem like a “chicken-and-egg” situation, let us help you out.

To build a world-class threat intelligence capability, the first thing you’ll need is at least one highly skilled and experienced human analyst. Once a person or team with the right skillset is in place, they will need to move through three stages:

  1. Develop or procure the systems needed to automate the identification, collection, and enrichment of threat data and information.
  2. Create and maintain the tools needed to produce operational threat intelligence.
  3. Focus their attentions on the production of highly targeted and valuable strategic intelligence.

Sadly, many organizations never make it past stage one. Once they have an intelligence feed in place, they take action to mitigate the most basic threats using simple information such as IOCs and vulnerability announcements, and never progress to a level that would enable them to address real business needs and objectives.

If your threat intelligence capability is stuck at this level, you’re leaving a huge proportion of the business value of your threat intelligence feed on the table.

Don’t Settle, and Don’t Get Lost in the Woods

So far in this article, we’ve presented two clear and major dangers of developing a threat intelligence capability:

  1. Settling for simple threat data and information, instead of fighting for intelligence.
  2. Wasting valuable time and resources on producing intelligence that doesn’t further business goals.

To avoid these mistakes, you’ll need to keep pushing your analysts for more and better intelligence, while also stressing the importance of keeping things relevant.

Losing sight of either of these fundamental considerations can undermine the value of your program. Keep them at the forefront, though, and over time you’ll develop a truly world-class threat intelligence capability.

Happy Birthday Internet: 25 years ago today the World Wide Web opened to the public

Above: Tim Berners-Lee, creator of the World Wide Web, speaks at LeWeb 2014
Image Credit: Chris O’Brien

On this day back in 1991, a British researcher working in Switzerland suddenly opened a little thing called the World Wide Web to the public.

And now, 25 years later, it’s safe to say that the WWW has changed just about every aspect of our lives — for better and for worse.

The brainchild of Tim Berners-Lee, who was then working at CERN, it has had an impact so profound and complicated that it’s difficult to even know how to make sense of it all. For some entrepreneurs, it has created vast wealth. It has toppled industries and given rise to others. It has created unprecedented power to publish and bolstered free speech, even as it has coarsened public dialogue and allowed hate groups to organize.

But one thing we can marvel at today is its sheer size.
Consider:
There are 1.07 billion websites, though an estimated 75 percent are not active, according to Internet Live Stats.

There are 4.73 billion webpages.

And while the internet is more than just the World Wide Web, it’s worth noting that there are 3.4 billion people on the internet.

Finally, if you really want to go all nostalgic, be sure to check out the very first website, which went live a couple of weeks earlier on August 6. Or look at cat GIFs.