Big Data has been advertised as a panacea for corporate problems. Everything from marketing and HR to logistics and the sales funnel can supposedly be solved by AI algorithms feeding on Big Data. Yet only a third of the respondents in a recent Forbes survey had seen revenue boosts of over 3% after implementing Big Data, showing there is still room to grow and that the processes are not yet fully understood. Why are the numbers so small, what makes projects fail, and what causes CEOs to lose faith in the predictive power of AI?
Reasons why data-driven projects fail
There are several causes of sub-par results. A short list would include unrealistic targets derived from a lack of strategy, a rigid corporate culture, and a shortage of both specialists and technology. While some of these issues can be solved by simply throwing more money at the problem, others require rethinking established positions.
No strategy and unrealistic targets
Joining the Big Data bandwagon just to keep up with the cool kids is costly and can take a real toll on the company’s finances, client relationships, and internal processes. Big Data can neither multiply your sales and valuation nor rescue a misguided enterprise strategy. According to a study, 70% of Big Data projects fail to fulfill the initial plan because of unrealistic targets that are not aligned with the company’s strategy and senior managers’ expectations. Big Data is, at the end of the day, just another tool that helps you achieve your goals, provided you already have a plan.
Inappropriate technology and no skills
Legacy servers and storage facilities cannot withstand the 4Vs of Big Data (volume, velocity, variety, and veracity). A company looking to harvest the advantages of AI powered by Big Data should be ready either to invest in its own hardware and software or to pay for them as a service from a reputable provider. Hoping to stretch current assets to this purpose is counterproductive.
Acquiring technology means nothing without training employees in this new way of thinking about data. Traditionally, data was siloed in the IT department and only surfaced in monthly or quarterly reports. Now, if information does not travel in real time, it is outdated and a cost, not an asset.
It is not only the lack of skills that keeps employees from leveraging Big Data, but also a lack of vision and the comfort of doing things the way they were done a few years ago. Corporate culture is one of the most significant obstacles to generating ROI. The mechanism is simple and needs to be broken: people do their jobs according to custom and gut feeling instead of learning from real input provided by customers. Repeating patterns that keep the business just above the line is not enough.
Challenges of user-level data
Even when the organization is ready to embrace Big Data, the culture supports it, and there is a dedicated champion behind the initiative, the situation is still far from smooth sailing. Data itself poses numerous obstacles before it is ready to be used and to generate revenue.
Data can be biased and full of noise
For a start, you only have access to data from people who interacted with your ads, website, or other points of contact. Even data from the same user can be recorded differently if they switch devices and platforms (just think of a Windows user with an iPhone). The sheer volume of noise in the data can be intimidating, and cleaning it is part of the job, sometimes the most complicated part.
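To make the cleaning step concrete, here is a minimal sketch in Python. The event records, field names, and normalization rules are all hypothetical; real pipelines would use a proper identity-resolution scheme rather than matching on a single identifier.

```python
# Hypothetical raw event records: the same person can appear under
# different devices, with inconsistent casing or missing identifiers.
raw_events = [
    {"user": "anna@example.com", "device": "windows", "page": "/pricing"},
    {"user": "anna@example.com", "device": "iphone",  "page": "/pricing"},
    {"user": None,               "device": "android", "page": "/home"},
    {"user": "BOB@example.com ", "device": "windows", "page": "/home"},
]

def clean(events):
    """Drop events with no identifiable user and normalize identifiers,
    so the same person on different devices collapses into one record."""
    seen = set()
    cleaned = []
    for e in events:
        if not e["user"]:
            continue  # noise: no way to attribute this event to a person
        key = (e["user"].strip().lower(), e["page"])
        if key in seen:
            continue  # same user viewed the same page on another device
        seen.add(key)
        cleaned.append({"user": key[0], "page": e["page"]})
    return cleaned

print(clean(raw_events))
```

Four raw events collapse to two usable records: one unattributable event is dropped, and the cross-device duplicate is merged.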
What constitutes real, actionable information and what is just ballast? This is usually the point where most companies decide that Big Data is too complex and is not generating the quick growth they were hoping for. To solve this, an organization must either hire at least a small team of data scientists or engage a Big Data consulting company like Itransition. The latter is the safer route, since you are only paying a contract fee rather than bringing new people on board.
Results are not always replicable
One of the biggest problems with results derived from Big Data is that they come from a black box. Deep learning models are difficult to deconstruct; understanding which inputs generate which outputs, and how all the factors interact, is sometimes impossible. The only way to find out is to introduce test data specifically designed to vary just one variable at a time and measure the results. This is especially important when it comes to recommendations.
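The one-variable-at-a-time probing described above can be sketched as follows. The `model` function here is a stand-in for a real black-box predictor (in practice you would call something like a trained model's predict method), and all feature names and coefficients are invented for illustration.

```python
def model(price, ad_spend, season):
    # Stand-in for a black-box predictor; the formula is purely illustrative.
    return 100 - 2.0 * price + 0.5 * ad_spend + (10 if season == "holiday" else 0)

def sweep(feature, values, baseline):
    """Vary one input while holding every other input at its baseline value,
    so any change in the output can be attributed to that single feature."""
    results = []
    for v in values:
        inputs = dict(baseline, **{feature: v})
        results.append((v, model(**inputs)))
    return results

baseline = {"price": 20, "ad_spend": 50, "season": "regular"}
for price, prediction in sweep("price", [10, 20, 30], baseline):
    print(price, prediction)
```

Sweeping each feature against a fixed baseline gives a crude but readable picture of how the black box responds to each input in isolation, even when its internals cannot be inspected.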
A company can ask itself how to replicate a past success. For example, if a particular blog article or Instagram post goes viral, which element made all the difference, and how can it be reused again and again? If in one scenario using AI to customize the experience yields a high increase in ROI, yet in another it is business as usual, how can you produce the best results every time?
The volume of data requires aggregation
Sometimes the sheer volume of data is itself an obstacle, from both a logical and a logistical perspective. Pattern recognition may require some degree of aggregation before the data can answer complex questions about correlation and causation. When different types of data are involved, this task becomes complicated. For example, linking the opinions clients express on social media to actual sales levels requires multiple tools, such as natural language processing, sentiment analysis, and financial analysis, and these are just the tip of the iceberg.
Changing the mindset
To be declared a winner in the Big Data game, a company should change its approach to business. Everything must be open and accessible to the employees who can use the data, yet at the same time protected from cybercriminals. Data and dashboards should be embedded in daily routines, and even frontline workers need to be able to access them in user-friendly environments such as tablets or smartphones, since this creates a synergy effect that increases ROI.