Big Data: Generation Next

The following was first published in Analytics Magazine. Dr. Vijay Mehrotra is an associate professor, Department of Finance and Quantitative Analytics, School of Business and Professional Studies, University of San Francisco.

We have all been hearing about both “the analytics revolution” and “the rise of Big Data” forever, or so it seems. I credit the book “Competing on Analytics” by Thomas H. Davenport and Jeanne G. Harris [Harvard Business School Press, 2007] with making “analytics” part of the mainstream business lexicon. Similarly, the McKinsey Global Institute (MGI) report entitled “Big Data: The next frontier for innovation, competition and productivity,” released in May 2011, has had the same effect for the term “Big Data.”

This MGI report formally defined Big Data as “datasets whose size is beyond the ability of typical database software tools to capture, manage and analyze,” while also identifying several vertical industries and classes of applications that can be improved by intelligent use of data for better decision-making, innovation and competitive advantage. In fact, many of the broad themes presented in this report echo the ideas laid out by Davenport and Harris in “Competing on Analytics.” As such, over the past year it has become natural to think of “analytics” and “Big Data” as virtually synonymous.

I caught up with Davenport by phone a couple of weeks ago. He was in the midst of a study on the human side of Big Data, sponsored by SAS Institute and EMC/Greenplum, and he was kind enough to share some of his findings with me. Over the past few months, he had interviewed a large number of data scientists working in Big Data roles in an effort to understand who they are, where they are working and what they are working on. I found some of his observations insightful and others downright surprising.

The data scientists whom Davenport had spoken with had academic backgrounds in many different disciplines, including physics, mathematics, computer science, statistics and operations research, as well as less obvious ones such as meteorology, ecology and several social science fields. Almost all had Ph.D.s, and in many cases their research had been a catalyst for the development of their deep data skills (Davenport cited one recent cohort of seven applied ecology Ph.D. students, six of whom had launched careers in Big Data, rather than academia, after finishing graduate school).

More surprising, however, was Davenport’s observation that “very few large companies are going to bother with ‘first generation’ data scientists.” While pointing to General Electric as a notable exception, he noted that the vast majority of the data scientists he had found worked either at platform companies such as Facebook, Twitter, Google, Yahoo and LinkedIn or at startups such as Splunk [1] that see exciting entrepreneurial opportunities [2] in creating tools to enable more efficient access, visualization and mining of large streams of data from multiple sources.

“Data management seems to dominate the world of Big Data right now,” Davenport explained. “There’s a huge focus on visualization and reporting among the data scientists I talked to. The statisticians are a little bit frustrated … One of the quips I heard was, ‘Big Data = Little Math.’ ”

His conclusion: today, data-driven managerial decision-making still relies almost exclusively on small-to-medium sized datasets stored in traditional data structures.

I heard some of these same themes at the recent INFORMS Analytics Conference, most notably in a panel discussion on “Innovation and Big Data.” The panelists included Diego Klabjan (Northwestern University), Thomas Olavson (Google), Blake Johnson (Stanford University), Daniel Graham (Teradata) and Michael Zeller (Zementis, Inc.).

Very early in the discussion, the panelists all agreed that there’s a huge amount of confusion about what is actually happening in this space today, and that this confusion is being amped up by the massive amount of hype about Big Data (a recent Google search on “Big Data” returns a cool 1,350,000,000 entries, and a quick query on Google Insights for Search reveals that the number of people searching on this term has grown exponentially in the past year [3]). However, as Northwestern’s Klabjan bluntly stated, “OK, with Hadoop we know how to store Big Data. But doing analytics on top of Big Data? We have a long way to go.”

The discussion often touched on the “volume, velocity and variety” [4] of today’s data and the accompanying high level of complexity that leads to a variety of challenges in extracting value from it. Teradata’s Graham acknowledged these risks explicitly when he encouraged executives in the audience to (in the words of Tom Peters) “fail forward fast,” while Google’s Olavson urged the audience to not get so caught up in the complexity of the data challenges and the power of the data management solutions that the key business problems slip out of sight.

The panelists often came back around to the human side of Big Data. Zementis’ Zeller envisioned a future in which the work done by the data scientist of today is broken up into a variety of emerging roles such as data technician and data analyst, while Stanford’s Johnson suggested that the democratization of data would create a need for a quality assurance function for not only the expanding mounds of data but also for the analytic models built on top of it. And Olavson’s final comment was that with or without Big Data, analytics is ultimately about enabling smart people to use data and tools to create business value.

Which brings me back to my earlier conversation with Davenport. At several points in our discussion, he drew a clear distinction between the data scientists of today and the “second generation” of tomorrow. Based on his research, Davenport anticipates that “as more and better data management tools come to market, less software development will be needed to work with Big Data.” In that world, the combination of unstructured data management skills and analytic modeling capabilities will be especially powerful.
It will, I suspect, be here before we know it.

Vijay Mehrotra (vmehrotra@usfca.edu), senior INFORMS member and chair of the ORMS Today and Analytics Committee for INFORMS, is an associate professor, Department of Finance and Quantitative Analytics, School of Business and Professional Studies, University of San Francisco. He is also an experienced analytics consultant and entrepreneur and an angel investor in several successful analytics companies.

REFERENCES, NOTES & FURTHER READING

  1. To read about Splunk’s recent successful IPO, see http://dealbook.nytimes.com/2012/04/19/splunk-soars-in-debut/.
  2. See for example http://www.gsb.stanford.edu/news/headlines/entrepreneur-conference-2012.html.
  3. See http://www.google.com/insights/search/#q=%22Big%20Data%22&cmpt=q.
  4. The three Vs are a popular foundation for Big Data – for more background on this, see http://radar.oreilly.com/2012/01/what-is-big-data.html.

Big Data is the greenest data of all

Green is the new black…or so you’d think from the incredible amount of attention paid to efficient energy production and consumption. With so much emphasis on building a green planet, and increased government and utility spending on energy efficiency, one would expect total energy spend in commercial and residential markets to be falling. It hasn’t happened.

Despite the recent push to implement energy efficiency programs, total energy consumption and average energy prices in both markets continue to rise. Consumers continue to face higher energy bills because prices are rising faster than demand is falling. These indicators might make you think we’re going in the wrong direction, but it is actually too early to say all of this effort isn’t working.

Give it time

The reason for optimism? Smart meters and a variety of sensors across the energy supply chain are creating the ability to collect and analyze massive amounts of energy production, transmission and consumption data. The arrival of Hadoop and other Big Data tools makes it possible for analysis to keep up with rapidly increasing data volumes. All of that means nothing, though, if it isn’t actionable. Let’s take a look at a few of the ways that Big Data can be Green Data.

  • Forecasting demand – We’re slowly moving toward homes and businesses having smart meters that report actual consumption back to utilities and allow decisions on how to supply energy more efficiently. Right now, those meters provide data in 15-minute increments, but down the road the frequency of reporting can increase as the ability to collect and crunch data grows. When we know more, we make better energy supply decisions (a sketch of what this looks like in code follows this list).
  • Conservation ‘signals’ – To make markets behave efficiently, there needs to be a way for energy use to change with availability. The most common way this is done is through price. Once energy providers can accurately forecast demand and in-the-moment use, pricing signals will cause consumers, individual or commercial, to lower usage. This proactive approach is mostly missing from today’s energy markets. When we know more, we make more efficient usage decisions.
  • Measuring efficiency – The US Department of Energy is currently developing the SEED database, meant to let building owners benchmark their facilities against others. While that may not seem so hard, there are big factors that need to be part of any efficiency algorithm, like weather, the number of occupants, and what types of machines are being operated and when. Once buildings have an ‘energy value’, decisions can be made on how much real estate is worth, where to retrofit and how to design new facilities. When we know more, we can make better design decisions (a second sketch below illustrates one simple weather adjustment).
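
To make the first two bullets concrete, here is a minimal Python sketch. Everything in it is an invented assumption: the 15-minute kWh feed is synthetic, and the price constants and capacity threshold are stand-ins, not any utility’s actual tariff or supply limits. It rolls interval readings up to hourly demand, builds a naive same-hour-of-day forecast from the trailing week, and derives a toy peak-pricing signal.

    import numpy as np
    import pandas as pd

    # Hypothetical input: four weeks of synthetic 15-minute kWh readings,
    # the granularity today's smart meters report.
    rng = pd.date_range("2012-01-01", periods=4 * 24 * 28, freq="15min")
    readings = pd.Series(np.random.gamma(2.0, 0.25, len(rng)), index=rng, name="kwh")

    # Roll the interval data up to hourly demand.
    hourly = readings.resample("h").sum()

    # Naive day-ahead forecast: average demand at each hour of the day
    # over the trailing week.
    week = hourly[hourly.index >= hourly.index.max() - pd.Timedelta(days=7)]
    forecast = week.groupby(week.index.hour).mean()

    # Toy conservation signal: when forecast demand for an hour exceeds a
    # supply threshold, publish a higher price to nudge usage downward.
    BASE_PRICE, PEAK_PRICE = 0.10, 0.25      # $/kWh, invented numbers
    capacity = forecast.quantile(0.8)        # stand-in for real supply limits
    price_signal = forecast.apply(lambda d: PEAK_PRICE if d > capacity else BASE_PRICE)
    print(price_signal)

A real utility would use a proper load-forecasting model and actual capacity data, but the shape of the pipeline (interval data in, aggregated forecast, price signal out) is the same.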

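For the benchmarking bullet, SEED will define its own methodology; the sketch below shows only one crude, commonly used weather adjustment, with invented monthly numbers: regress consumption on heating degree days (HDD, a standard weather-severity measure) and compare buildings on the weather-independent base load per square foot.

    import numpy as np

    # Hypothetical monthly data for one building (all numbers invented).
    kwh = np.array([41000, 38000, 31000, 24000, 18000, 15000,
                    14000, 15000, 19000, 26000, 33000, 39000], dtype=float)
    hdd = np.array([900, 780, 600, 380, 180, 60,
                    30, 40, 150, 400, 640, 820], dtype=float)
    floor_area_sqft = 50_000.0

    # Ordinary least squares fit: kwh ~= weather_slope * hdd + base_load.
    weather_slope, base_load = np.polyfit(hdd, kwh, 1)

    # The weather-independent base load per square foot is one crude
    # 'energy value' that can be compared across climates.
    eui = 12 * base_load / floor_area_sqft   # annualized kWh per square foot
    print(f"base load: {base_load:.0f} kWh/month, normalized EUI: {eui:.1f} kWh/sqft-yr")
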
Change needs to happen

This is all great and, in theory, will make our planet better for everyone. There are still a few things, however, that stand in the way. There need to be better standards for how information is collected, stored and shared, so that energy supply chains can be analyzed and operated more effectively. We also need to make sure energy providers aren’t reaping excessive benefit from more efficient consumption without passing that benefit on to energy consumers, which would ‘mute’ the signals that drive positive change. Balancing the right amount of regulation with allowing market forces to operate is an age-old challenge.

Assuming we overcome these challenges, the field is wide open. There are gamification possibilities (who’s the most green in the neighborhood/city/region?) and countless other strategies on the horizon. There’s no doubt Big Data will be driving us toward a greener planet.

Big data’s big requirements

There is no shortage of information on how to use parts of the most common big data solutions, like Hadoop. But what about the other pieces of the puzzle necessary to get real business value from this technology? For starters, there is a need to make decisions around:

  • Mobile strategy and its support
  • Web delivery
  • User interactivity/experience
  • Data support and operations
  • Security
  • Storage (for both big data and traditional SQL)
  • Scalability
  • Revenue generation models
  • Filtering knowledge and noise
  • Integration into existing applications and processes

For these areas, far less information is available, yet the need is just as important.

Fragmented solutions

In reality, there isn’t a single application development platform that covers a full solution. There are instead many choices, each having tradeoffs in usability and scalability.

Survivability

Also, some solutions have already come and gone in the short time big data has been in vogue. The question arises, “How does one know what will be around and still supported in two years’ time?” Predicting the future popularity and support of the many available tools is a significant challenge.

Maturity

Open Source is an excellent way to ramp up quickly and cheaply, but the solutions aren’t necessarily as mature as the market requires. As things stand today, it would be easy to get a few months into development before a particular tool’s shortcomings become apparent. For example, basic features like multi-language support are missing from some of the common solutions, and some lack authentication capabilities.

The UI

Lastly, user interfaces are no longer a given: in the haste to bring back-end capabilities to market, far less investment has been made in UI technologies. Avoiding these problems requires enough knowledge of the space to make sound choices.

Big Data means broad solutions to complex problems. There are enormous opportunities ahead for those who consider the ecosystem beyond the big names, like Hadoop. 

Big Data beyond the hype cycle

Big Data is more than hot. It is one of the most talked-about phenomena of the past year and will continue to be a hot topic going forward. Just as with social media, there is enormous pressure on organizations to get into the Big Data game, and quickly. Beyond the excitement and anxiety, there are good reasons to slow down and think about what you want to do.

Complexity

Environments are complex, and organizations want technology that is plug-and-play and can stand up easily in diverse infrastructures. What the market offers today, however, is many tools that are equally complex to understand, deploy and use. Standing them up has nuances that anyone considering a Big Data solution should understand first. The nuances fall into three categories: resources, timelines and tools.

Resources

Most new data analytics technologies were created for developers and require Java skills or SQL experience. The traditional data scientist who understands data modeling, on the other hand, doesn’t come from a coding background and can’t access the data they’d like to analyze. The data integration skills lie on one side of the technical fence and the data knowledge on the other. I guess you could say data scientists are from Mars, developers are from Venus.

Timelines

Data science is a challenging field. Data scientists are used to writing algorithms for others to develop, test and implement. The traditional cycle for doing that has been six months or more in most industries. This waterfall approach is methodical but far too slow. The world can change in six months. Time to market is both a barrier to getting started and a competitive differentiator for those who can shorten it.

Tools

Pieces of the solution exist. First and foremost, there is Hadoop, the premier platform for distributed computing, which spreads data and processing across clusters of servers to run large-scale analytics. Hadoop solves the problem of storage and parallel processing in an elegant way. While Hadoop is the rallying point for Big Data, by itself it isn’t a solution. It sometimes seems like one, because when data gets large there is nothing that can replace Hadoop. There’s a real expectation gap, however, between the engine that is Hadoop and the drive train that is required to do useful things with Big Data.
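
To make the engine-versus-drive-train point concrete, the canonical entry point is Hadoop Streaming, which lets any executable that reads stdin and writes stdout act as a mapper or reducer. Below is a minimal word-count sketch in Python, the standard toy example rather than a production job; Hadoop itself handles the shuffling, sorting mapper output by key so that the reducer sees equal keys contiguously.

    # mapper.py: emit "word<TAB>1" for every token read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(word.lower() + "\t1")

    # reducer.py: input arrives sorted by key, so a running total per
    # contiguous key is all a reducer needs.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(current + "\t" + str(count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(current + "\t" + str(count))

A job like this is submitted with the hadoop-streaming jar, passing the two scripts via the -mapper and -reducer options along with -input and -output paths. Everything around that core, from scheduling and security to user-facing delivery, is exactly the drive train this section says is still missing.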

So what have companies done to address these issues? That’s another story.