Thursday, November 20, 2014

Application Analytics Innovation: Wolters Kluwer, CCH Gets It

I've been working on application analytics use cases and scenarios for going on nine years now, and I spend a good deal of my time supporting (and learning from) dev teams of all shapes and sizes. Even so, I'm pleased to say this week was a first for me: I had the good fortune to sit in on the final hours of a two-day Code Games inside Wolters Kluwer, CCH.

Coding competitions/events like this are nothing new, but running a good one is never easy. The required ingredients include a positive, nurturing culture, some serious organizational and editorial skills, and (of course) sharp developers. On this day, Wolters Kluwer, CCH had all three in spades.


POSITIVE NURTURING CULTURE? YES!


Now, I’m not the first to take notice (those of you who know me know that one of my favorite aphorisms is that ideas only have to be good, not original). Forbes covered Wolters Kluwer’s Code Games earlier this week in the article Top Women CEOs On How Bold Innovation Drives Business, in which Karen Abramson, CEO of Wolters Kluwer Tax and Accounting, highlights the “Code Games” as one of the three pillars in “a constant eco-system of innovation across the organization.”

SERIOUS ORGANIZATIONAL AND EDITORIAL SKILLS? YES! (… and here’s where it gets interesting)


I wish that I could share some of the presentations I heard that night (there were some truly awesome ones), but an executed NDA was a prerequisite for my attendance. What I CAN relay is that their Code Games was the first one I've seen where “most innovative use of application analytics” was an award category. At the end of the night, one of the two Code Games organizers, Elizabeth Weissman, Director of Innovation, iLab Solutions at Wolters Kluwer, CCH, said that she thought the application analytics category was a perfect complement to the other, wholly business-focused ones because it sent the dev teams an important message: “application analytics need to be a part of an application’s design from the very beginning – not an afterthought.”

It is worth noting that the other mainstream award categories (which were, in fact, more prestigious because they were focused squarely on core business impact) were judged on a) Innovation, b) Technical Achievement, and c) Potential Value Generation. As such, no team would have included application analytics at all if they did not believe upfront that it would contribute in some material way to one or more of these three criteria.

…but, for me, it’s bigger than that. As the teams that included app analytics presented to the Code Games judges, those judges (and the 150+ developers in the audience) also got the message that app analytics is not just for website forensics and user clicks. In this case, the judging panel included Wolters Kluwer, CCH executives Teresa Mackintosh, President & CEO; Mark Lawler, VP Software Development; Brian Diffin, Executive VP Global Technology; and some of Wolters Kluwer, CCH’s own VIP clients. Now they all get it too!
From right to left, Elizabeth, Teresa, Bernie, and me (photo-bombing this Wolters Kluwer, CCH "A-team")


How’d they do it? Working with Bernie Hirsch, Director of Software Development at Wolters Kluwer, CCH (the other half of the Code Games organizing dynamic duo), we set up a privately hosted PreEmptive Analytics endpoint in an Azure VM that matched their existing production analytics environment. That allowed dev teams to securely and easily add analytics to their projects, whether or not the apps ran on-premises, used client data, connected to internal systems, etc.
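For a sense of what that looks like from the app side, here is a minimal sketch. The class, endpoint URL, and payload shape below are hypothetical illustrations (the actual PreEmptive Analytics API is richer than this); the point is that the destination is configuration, not code, so pointing an app at a privately hosted endpoint instead of production requires no rebuild.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal telemetry sender: the endpoint is configuration, not code,
// so the same app can report to a production service or to a privately
// hosted endpoint (e.g., one running in an Azure VM) without changes.
public class Telemetry {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String endpoint; // hypothetical, e.g. "https://analytics.internal/track"

    public Telemetry(String endpoint) { this.endpoint = endpoint; }

    public void track(String event, String detail) {
        String json = String.format(
            "{\"event\":\"%s\",\"detail\":\"%s\",\"ts\":%d}",
            event, detail, System.currentTimeMillis());
        HttpRequest req = HttpRequest.newBuilder(URI.create(endpoint))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();
        // Fire-and-forget so instrumentation never blocks the app itself.
        http.sendAsync(req, HttpResponse.BodyHandlers.discarding());
    }

    public static void main(String[] args) {
        // The endpoint address is invented for illustration only.
        Telemetry t = new Telemetry("https://10.0.0.4/analytics/track");
        t.track("session.start", "code-games-demo");
    }
}
```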

SHARP PROGRAMMERS? SERIOUSLY?? (Of course YES!)


As I've already said, I can’t describe specifically what these teams built, but here are a few factoids:

  • Every team that decided to include app analytics succeeded. 
  • The teams instrumented apps built with .NET, JavaScript, and mobile surfaces, and the apps themselves were both customer-facing and internal, line-of-business apps. 
  • The teams collected session and usage data, exceptions, timing, and custom, app-specific telemetry too. 
  • While the applications ran the gamut from on-premises LoB to client-facing, all of the app telemetry was transmitted to Azure-hosted (private cloud) endpoints (and one app then pulled the data out and back into the very app that was being monitored! – but now I have to stop before I say too much). 
  • Not all teams incorporated analytics into their projects, but the most decorated team was one of those that did – NOT to track exceptions or page views, but as the backbone of one of their most powerful data-driven features for the business.
Developer presentations ran into the evening in front of a packed house, with 150+ employees watching remotely.
So there we have it: Wolters Kluwer, CCH brought together the culture, the organizational savvy, and the technical talent to pull off what was truly an exceptional event. …and I'm grateful that I had the chance to come along for the ride. Cheers!

Wednesday, November 5, 2014

Application protection – why bother?

(…and, no, this is not a rhetorical question)

Why should a developer (or parent organization) bother to protect their applications? Given what PreEmptive Solutions does, you might think I’m being snarky and rhetorical – but, I assure you, I am not. The only way to answer such a question is to first know what it is you need protection from.

If you’re tempted to answer with something like “to protect against reverse engineering or tampering,” that is not a meaningful answer. A meaningful answer considers what bad things happen if and when those events occur. Are you looking to prevent piracy? Intellectual property theft? AGAIN – not good enough. The real answer has to be tied to lost revenue, operational disruption resulting in financial or other damage, etc. Unless you can answer this question, it is impossible to appropriately prioritize your response to these risks.

If you think I’m being pedantic or too academic, then (and forgive me for saying this) you are not the person who should be making these kinds of decisions. If, on the other hand, you’re not sure how to answer these kinds of questions but you understand (even if only intuitively) the distinction between managing risks (damage) and preventing the events that can increase risk, then I hope the following distillation of how to approach the unique risks that stem from developing in .NET and/or Java (managed code) will be of value.

First point to consider: managed code is easy to reverse engineer and modify by design – and there are plenty of legitimate scenarios where this is a good thing.

Your senior management needs to understand that reverse engineering and executable manipulation are well understood and widely practiced. Therefore, if this common practice poses any material risk to your organization, they are compelled to take steps to mitigate that risk. Of course, if this basic characteristic of managed code does not pose a material risk, no additional steps are needed (nor should they be recommended).
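If you need to demonstrate the point to a skeptical manager, the JDK itself will do it. Below is a deliberately trivial, hypothetical class; compile it and the stock disassembler hands the “secret” right back:

```java
// A deliberately trivial class: compile it, then disassemble it with the
// JDK's own javap to see how much of your intent survives compilation.
public class LicenseCheck {
    public static boolean isLicensed(String key) {
        return "SECRET-KEY-2014".equals(key);
    }
}
```

Running javac LicenseCheck.java followed by javap -c -p LicenseCheck prints the method’s bytecode with the embedded key string in clear text, and freely available decompilers go further and reconstruct near-original source. No exploit and no special tooling required; this is managed code behaving exactly as designed.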

Second point to consider: reverse engineering tools don’t commit crimes – criminals do; but criminals have found many ways to commit crimes with reverse engineering (and other categories of) tools.

In order to recommend an appropriate strategy, a complete list of threats is required. Simply knowing that IP theft is ONE threat is not sufficient; if the circulation of counterfeit applications poses an incremental threat, you need to capture that too.

Third point to consider: Which of these incident types are relevant to your specific needs? How important are they? How can you objectively answer these kinds of questions?


Risk management is a mature discipline with well-defined frameworks for capturing and describing risk categories; DO NOT REINVENT THE WHEEL. How significant (material) a given risk may be is defined entirely by its relative impact on well-understood risk categories (operational, compliance, financial, reputational, etc.). The categories commonly associated with application reverse engineering and tampering are not universal, nor is any such list exhaustive.

Fourth point to consider: How much risk is too much? How much risk is acceptable (what is your tolerance for risk)? …and what options are available to manage (control) these various categories of risk to keep them within your organization’s “appetite for risk?”

Tolerance (or appetite) for risk is NOT a technical topic, nor are the underlying risks. For example, an Android app developed by four developers as a side project may be used by only a small percentage of your clients for relatively inconsequential tasks (the developers may even be external consultants), so the app itself has no real IP, generates no revenue, and is hardly visible to your customer base (let alone your investors). On the other hand, if a counterfeit version of that app results in client data loss, reputational damage in public markets, and regulatory penalties, the trivial nature of that Android app really won’t have mattered.

In other words, even when the technical scope of an application is narrow, the risk (and therefore the set of stakeholders) can be far-reaching.
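To make that concrete, one common (and deliberately simplified) framing is annualized exposure: estimated likelihood times estimated impact, compared against a stated appetite. Every figure below is invented purely for illustration:

```java
// Simplified risk-exposure arithmetic (illustrative figures only).
public class RiskExposure {
    public static void main(String[] args) {
        // The "trivial" Android app from the example above:
        double likelihoodOfCounterfeit = 0.05;      // estimated annual probability
        double impactIfCounterfeited   = 2_000_000; // data loss + penalties + reputation ($)
        double appetite                = 50_000;    // organization's stated tolerance ($)

        double exposure = likelihoodOfCounterfeit * impactIfCounterfeited; // $100,000
        System.out.printf("Annualized exposure: $%,.0f (appetite: $%,.0f)%n",
                exposure, appetite);
        // Exposure exceeds appetite, so a control is warranted even though
        // the app itself is technically trivial.
    }
}
```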

Risk management decisions must be made by risk management professionals, not developers (you wouldn't want risk managers doing code reviews, would you?).

Fifth point to consider: what controls are available specifically to help manage/control the risks that stem from managed code development?

Obfuscation is a portfolio of transformations that can be applied in any number of permutations – each with its own protective role and its own side effects.
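To give a feel for the simplest transformation in that portfolio, here is a hand-written sketch of what an identifier-renaming pass produces. It is illustrative, not the output of any particular tool; real obfuscators also rewrite control flow, encrypt strings, and more:

```java
// Before renaming: intent is legible to anyone with a decompiler.
class RoyaltyCalculator {
    double computeRoyalty(double netSales, double rate) {
        return netSales * rate;
    }
}

// After renaming: identical behavior, but intent must be reverse
// engineered rather than simply read.
class a {
    double a(double a, double b) {
        return a * b;
    }
}
```

The behavior is identical; only the legibility of intent has changed. The side effect is equally clear: obfuscated stack traces and debugging sessions get harder too, which is why each transformation has to earn its place.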

Tamper detection and defense, as well as regular feature and exception monitoring, also come in their own flavors and configurations.
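As a sketch of the tamper-detection flavor, the (very naive) check below hashes its own class bytes at startup and compares the digest to a value recorded at build time. The class, the injection step, and the placeholder digest are all hypothetical; commercial tamper defenses layer many such checks and vary their responses, but the principle is the same:

```java
import java.io.InputStream;
import java.security.MessageDigest;

// Naive tamper check: hash this class's own bytecode at startup and
// compare it to a digest recorded at build time.
public class TamperCheck {
    // Placeholder: a build step would compute and inject the real value.
    private static final String EXPECTED_SHA256 = "<injected-at-build-time>";

    public static boolean looksIntact() {
        try (InputStream in = TamperCheck.class
                .getResourceAsStream("/TamperCheck.class")) {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) sha.update(buf, 0, n);
            StringBuilder hex = new StringBuilder();
            for (byte b : sha.digest()) hex.append(String.format("%02x", b));
            return hex.toString().equals(EXPECTED_SHA256);
        } catch (Exception e) {
            return false; // treat any failure to verify as suspect
        }
    }
}
```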

Machine attacks, human attacks, attacks whose goal is to generate compilable code versus those designed to modify specific behaviors while leaving others intact: all call for different combinations of obfuscation, tamper defense, and analytics.

The goal is to apply the minimum levels of protection and monitoring required to bring identified risk levels down to an acceptable (tolerable) level. Any protection beyond that level is overkill; anything less is wasted effort. …and this is why mapping all activity to a complete list of risks is an essential first step.

Sixth point to consider: the cure (control) cannot be worse than the disease (the underlying risk). In other words, the obfuscation and tamper defense solutions cannot be more disruptive than the risks these technologies are designed to manage.

Focusing on the incremental risks that obfuscation, tamper defense, and analytics can themselves introduce, the following considerations are often important (this is a representative subset, not a complete list):
· Complexity of configuration
· Flexibility to support build scenarios across distributed development teams, build farms, etc.
· Debugging, patch scenarios, extending protection schemes across distinct components
· Marketplace, installation, and other distribution patterns
· Support for different OS and runtime frameworks
· Digital signing, runtime IL standards compliance, and watermarking workflows
· Mobile packaging (or other device specific requirements)
· For analytics there are additional issues around privacy, connectivity, bandwidth, performance, etc.
· For commercial products, vendor viability (will they be there for you in 3 years) and support levels (dedicated trained team? Response times?)

So why bother?
Only if you have well-defined risks that are unacceptably high (operational, compliance, …)
AND the control (technology + process + policy) reduces the risk to acceptable levels
WITHOUT unacceptable incremental risk or expense.