Having been involved in information security for over a decade, I have often talked about data management – long before my least favourite acronym (DLP) came into use. As any good Information Assurance/Governance/Security professional will know, it is the data that we are most interested in protecting and making available (to the right people!) – not systems. Yet some organisations are still thinking in terms of protecting infrastructure, approaching matters from the outside in as opposed to the inside out (i.e. starting at the data).

Of those organisations that are thinking from the data out, many are wrestling with the masses of data they have accumulated over the years – unstructured data scattered everywhere, archives creaking, more and more money thrown at storage – in and amongst which they are trying to figure out how, and indeed whether, to address cloud, consumerisation and, somewhere in the mix, security.

I continue to be amazed at how many companies will happily throw significant money at storage (the myth that “storage is cheap” amuses me no end) yet put themselves at risk by cutting back on basic, common-sense security measures – all without a considered approach to data management.

This isn’t just about pushing users to shared drives – which will become less relevant as cloud services continue their march into the enterprise. It is about taking a proactive approach to classification right from the outset, in order then to decide how the data needs to be handled, by whom, where it should live, and for how long.
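
To make that concrete, below is a minimal sketch of what classification-driven handling might look like in code. The levels, locations, access rules and retention periods are illustrative assumptions of mine, not a standard scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandlingPolicy:
    """Illustrative handling rules attached to a classification level."""
    allowed_locations: tuple[str, ...]  # where the data may live
    access: str                         # who may handle it
    retention_years: int                # how long it is kept

# Hypothetical classification scheme - substitute your own taxonomy.
POLICIES = {
    "public":       HandlingPolicy(("cloud", "on-premise"), "anyone", 1),
    "internal":     HandlingPolicy(("cloud", "on-premise"), "staff", 3),
    "confidential": HandlingPolicy(("on-premise",), "named individuals", 7),
}

def policy_for(classification: str) -> HandlingPolicy:
    """Classify first; then the label drives every handling decision."""
    return POLICIES[classification]

print(policy_for("confidential"))
```

The point of the structure is simply that classification happens once, up front, and everything downstream – location, access, retention – is derived from it rather than decided ad hoc.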

I’ve mentioned already how this is imperative if organisations are to embrace cloud, but there is a further consideration. In the early days of the Internet, the average email was approximately 18 KB; depending on who you speak to, it is approximately 250-300 KB today – to say nothing of the number of times an email with an attached document can bounce to and fro, in and out of an organisation. Over ten years, an average of 50 emails a day – with many multiple copies of the same document, plus additional revisions – means that up to 80% of storage could be duplication (certainly based on projects I have seen recently).
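
A quick back-of-envelope calculation makes the point. This uses only the figures above, plus an assumed 250 working days a year (my assumption, not a measurement):

```python
# Back-of-envelope storage estimate using the figures from the post.
AVG_EMAIL_KB = 275    # midpoint of the 250-300 KB range
EMAILS_PER_DAY = 50
WORKING_DAYS = 250    # assumed working days per year
DUPLICATION = 0.80    # the 80% duplication rate cited above

per_user_gb = AVG_EMAIL_KB * EMAILS_PER_DAY * WORKING_DAYS / 1024 / 1024
print(f"Stored per user per year: {per_user_gb:.1f} GB")
print(f"Unique content, deduped:  {per_user_gb * (1 - DUPLICATION):.1f} GB")
```

That is roughly 3.3 GB stored per user per year, of which perhaps only 0.7 GB is unique content – multiplied across every mailbox in the organisation, every year.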

Cutting this down and getting it under management has several benefits: first, significantly reduced operational costs; second, security becomes rather easier to implement; and third, the costs of ediscovery fall significantly.
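
As an aside, finding that duplication is mechanically straightforward. Here is a sketch that groups byte-identical copies by content hash – the path and scale are hypothetical, and reading whole files into memory is fine only for a sketch:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by SHA-256 of their content.

    Any group with more than one path is byte-for-byte duplication -
    the same attachment saved, forwarded and archived repeatedly.
    """
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

# e.g. dupes = find_duplicates("/srv/shared-drive")  # hypothetical path
```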

Where the third point is concerned, the real risks around data management are yet to be felt. I have come across organisations whose email retention policy is two years – or who even allow their users to decide what to delete. There is a real risk of organisations entering, or being pulled into, litigation only to see a case fail – or worse, be fined – for missing information. Even putting these extreme scenarios aside, it cannot be sensible to pay counsel to trawl through a mass of data when they potentially need only be concerned with the 20% that matters.
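
For what it is worth, a defensible retention rule is also simple to express. The sketch below uses an illustrative seven-year period and a legal-hold flag of my own invention; the point is that a hold must override any schedule – something a blanket two-year purge, or user-driven deletion, cannot guarantee:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)  # illustrative period, not a recommendation

def may_delete(received: date, legal_hold: bool, today: date | None = None) -> bool:
    """An item is deletable only when past retention AND not under legal hold."""
    today = today or date.today()
    if legal_hold:
        return False  # litigation or discovery overrides any retention schedule
    return today - received > RETENTION

print(may_delete(date(2015, 3, 1), legal_hold=False))  # True: past retention
print(may_delete(date(2015, 3, 1), legal_hold=True))   # False: held, regardless of age
```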