This article is the final in a three-part series in which author Drew West, product marketing director at Deltek, a Herndon, Va., provider of enterprise software and information solutions, draws from his experience helping research organizations such as Millward Brown and Chadwick Martin Bailey to explain how a research firm’s own internal “big data” can help it (1) accurately plan projects, (2) efficiently deliver the work and (3) effectively evaluate results.

The first article identified the operational big data inside the four walls of a research organization, with guidance for using that data to create better and more accurate project plans. The second article moved along to delivery of the actual research work, showing how internal big data should guide appropriate adjustments when even the best-planned research projects inevitably face change. After using internal big data first to plan projects and then to deliver them, this final segment looks once again to big data for the last phase of project delivery: evaluating results. — Joseph Rydholm, Quirk’s editor

Since profitable work hinges on accurate plans and efficient delivery, wouldn’t you like a highly repeatable formula to achieve both? Considering a project’s many variables – like the client, scope, resources or manager – you might think “highly repeatable” is unlikely and a “formula” also unrealistic. (You are, after all, conducting dynamic research – not manufacturing pencils.) While neither project planning nor delivery is an exact science, deep within your organization’s internal big data is insight for avoiding past mistakes and for repeating historical successes.

Looking back. Consider a past project that didn’t bring the results you’d hoped for. Perhaps the profit margin missed its goal, the work took too long, the client wasn’t delighted or resource utilization was off target. Surely you don’t want this happening again – but can you prevent it? Was the cause a scope change, unplanned expenses or misaligned resources? Among all these related contributing factors, isolating the true root cause is what keeps future work from being similarly affected.

Looking ahead. Now think ahead to work you might take on. Especially in today’s environment, you’d like the price competitive and the work profitable – and you need to plan for both. Being certain of either means knowing what has historically led to successful outcomes. Rather than merely copying a similar past engagement – its scope, pricing or perhaps the resources involved – exposing the patterns behind consistently successful combinations might unearth (yes) a formulaic approach that could be used in planning all new engagements.

Neither is easy. Many research organizations struggle to isolate the real issues of the past and, from them, learn repeatable approaches that drive successful future results. The answers lie deep within the organization’s operational big data: the nearly infinite isolated transactions resulting from countless individual tasks, which together accumulate thousands of worked hours by perhaps hundreds of resources across years of work, with each hour’s profitability hidden deep within the financial records. As we know, data is by definition big if its volume makes it difficult to aggregate and analyze – and many organizations face common challenges in evaluating their operational big data:

  • Disconnected. Big data, sure, but it’s usually spread among several locations – like business development, resource management, project delivery and financial management – creating information silos. Information loses context as workflow passes across the unnatural boundaries of multiple systems, so combining data is cumbersome, as is sustaining any relationships within it.
  • Duplicated. Lost in the good intentions of stand-alone systems is the fact that most overlap in some way. Perhaps superficially a good thing, these overlaps actually mean redundant data across different systems (like worked hours in a project management tool and also in the back-office financials). But redundant data isn’t always consistent – so information often must be reconciled before it can be trusted; a simple reconciliation sketch follows this list.
  • Difficult. Once combined and reconciled, actually interpreting the data can be exceptionally difficult. Analytical tools prevalent in most research organizations aren’t typically pointed at internal data. Without an efficient click-click-click path to root causes behind issues or the reasons behind success, project managers needing quick answers are unlikely to invest the time.
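
To make the reconciliation challenge concrete, here is a minimal sketch in Python (pandas) of comparing worked hours recorded in a project management tool against the back-office financials. The file and column names are hypothetical stand-ins, not a reference to any particular product.

```python
# A minimal reconciliation sketch. The CSV exports and their columns
# (project_id, hours) are hypothetical stand-ins for two real systems.
import pandas as pd

pm = pd.read_csv("pm_tool_hours.csv")      # hours from the project management tool
fin = pd.read_csv("financials_hours.csv")  # hours from back-office financials

# Total the worked hours per project in each system
pm_totals = pm.groupby("project_id")["hours"].sum()
fin_totals = fin.groupby("project_id")["hours"].sum()

# Line the totals up side by side and flag any project where the two
# systems disagree: these rows must be reconciled before the data can
# be trusted
compare = pd.concat({"pm_tool": pm_totals, "financials": fin_totals}, axis=1).fillna(0)
compare["difference"] = compare["pm_tool"] - compare["financials"]
print(compare[compare["difference"] != 0])
```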

Wouldn’t bringing together your organization’s internal big data lead to a more meaningful evaluation of the past – and position your firm for profitable future results? To evaluate the past and better guide the future, your project managers need to aggregate internal big data, analyze it and decide what to do more (or less) of.

Aggregate. Evaluating results starts with bringing data together. For this, migrating to a unified project and financial management system may work better than using a traditional centralized data warehouse. Why? Your managers need insight that goes beyond merely combining data. While a data warehouse can combine data from several stand-alone systems, only a unified management system can maintain the relationships among clients, projects, people, work and results – without complex data extraction or the risk of duplicate data. Since results are always connected to workflow, shouldn’t the data be? To keep this context, aggregate information, not just data – a subtle yet distinct difference that brings more meaningful analysis to evaluating results.
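
To illustrate aggregating information rather than just data, here is a minimal sketch in Python (pandas) that joins hypothetical client, project and timesheet tables so each worked hour keeps its relationship to a project and a client. All table and column names are assumptions made for the example, not any particular system’s schema.

```python
# A minimal sketch of keeping relationships intact while aggregating.
# All file and column names are hypothetical.
import pandas as pd

clients = pd.read_csv("clients.csv")    # client_id, client_name, industry
projects = pd.read_csv("projects.csv")  # project_id, client_id, service_line
hours = pd.read_csv("timesheets.csv")   # project_id, resource_id, hours, cost

# Each merge preserves context: every worked hour stays traceable to its
# project, and every project to its client
info = hours.merge(projects, on="project_id").merge(clients, on="client_id")

# With relationships intact, results can be evaluated along any connected
# dimension, such as total hours by client industry and service line
print(info.groupby(["industry", "service_line"])["hours"].sum())
```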

Analyze. Analysis starts at the end result, which managers must be able to identify easily along any of multiple dimensions – like utilization by resource or profit margin by client, service line or even project task, and any mix of these. Managers can then drill in to expose what combination of resources, tasks, work and decisions led to the result. Despite the complex relationships, you want simplicity: analysis in straightforward layman’s terms and easy, efficient operation. Again, a unified system may prove helpful here: managers won’t have to fumble with the complex analytical tools found atop most data warehouses, trip over boundaries between systems or decode incoherent database terms just to construct a simple query. Imagine a manager planning a project, able not only to ask but to answer: For market research projects, which tasks are typically over budget, what is the average tenure of the resources on those tasks, what do the clients have in common and what is the average budget overage?
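
The manager’s question above maps directly onto a drill-down query. Here is a minimal sketch in Python (pandas) of that analysis over a single task-level table; the column names (project_type, task, client, resource_tenure_years, budget_hours, actual_hours) are hypothetical.

```python
# A minimal drill-down sketch for the question posed above.
# The file and column names are hypothetical.
import pandas as pd

tasks = pd.read_csv("project_tasks.csv")

# Restrict to market research projects and compute each task's budget overage
mr = tasks[tasks["project_type"] == "market research"].copy()
mr["overage_hours"] = mr["actual_hours"] - mr["budget_hours"]

# For each task: average overage, average resource tenure and how many
# distinct clients the pattern spans
by_task = mr.groupby("task").agg(
    avg_overage_hours=("overage_hours", "mean"),
    avg_tenure_years=("resource_tenure_years", "mean"),
    distinct_clients=("client", "nunique"),
)

# Tasks that are typically over budget, worst first
print(by_task[by_task["avg_overage_hours"] > 0]
      .sort_values("avg_overage_hours", ascending=False))
```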

Decide. With data aggregated into meaningful information and organized for deep yet rapid analysis, decisions become the easy part. From the example above, the project manager sees that clients like hers have historically been problematic, so she decides to adjust the task duration, staff the work with more senior resources and set a budget alert to catch scope creep. Don’t you like the odds that her project will bring the profit the organization expects?

Moving ahead – look inside. If your organization seems stuck in a cycle of repeating mistakes or inconsistent success, perhaps you need better ways to evaluate your own internal big data. If your managers can’t seem to find truly meaningful information or leverage that information for better results, look for ways to pull it all together. Establish the connections between decisions and results and make it easy for your managers to expose those relationships. You’ll see continuous improvement in profitable results when managers at all levels can avoid repeating mistakes and foster consistent success.

Drew West is product marketing director in Deltek’s Woburn, Mass., office. He can be reached at 617-528-2331 or at andrewwest@deltek.com. For more information visit www.deltek.com.