In a typical day, an employee might access data from centrally controlled document repositories, from websites, emails, blogs and Twitter feeds, and from files scattered across multiple smart devices. Data from these varied sources and formats adds to the petabytes that organizations need to store, manage and protect. And while the sources of data grow every day, so do the people and applications that produce and consume it.

There is no dispute: data is growing exponentially, and it will continue to grow in both volume and complexity. By the economic laws of supply and demand, the more of a commodity that exists, the lower the price or value of any individual unit. Individual data points can be viewed the same way, much like oil or gold. And like fuel or precious metals, data has been used as a means of doing business, hedging risk, investing and even speculating for decades, if not centuries.

In the latest Gigaom Research report, “Unifying petabyte-scale unstructured data: enhancing enterprise data value”, John Webster of Gigaom Research examines a data-centric approach to storage and identifies what forward-thinking organizations need to consider to address the problems created by accelerated data growth and the proliferation of data silos.

Thinking about venturing into the cloud? Where would you start? What you should absolutely not do is approach your transition to the cloud the same way you planned your traditional storage. Instead, start by understanding the value of your data; take a ‘data-centric’ approach.

In today’s world, organizations generate huge volumes of data, volumes that double every two years on average. In data-intensive industries such as financial services, healthcare, education, high-performance computing, oil and gas, and life sciences, most of that data is unstructured.