Wednesday, November 5, 2014

Application protection – why bother?

(…and, no, this is not a rhetorical question)

Why should a developer (or parent organization) bother to protect their applications? Given what PreEmptive Solutions does, you might think I’m being snarky and rhetorical – but, I assure you, I am not. The only way to answer such a question is to first know what it is you need protection from.

If you’re tempted to answer with something like “to protect against reverse engineering or tampering,” that is not a meaningful answer – it needs to consider what bad things happen if/when reverse engineering or tampering actually occurs. Are you looking to prevent piracy? Intellectual property theft? AGAIN – not good enough – the real answer has to be tied to lost revenue, operational disruption resulting in financial or other damage, etc. Unless you can answer this question, it is impossible to appropriately prioritize your response to these risks.

If you think I’m being pedantic or too academic, then (and forgive me for saying this) you are not the person who should be making these kinds of decisions. If, on the other hand, you’re not sure how to answer these kinds of questions – but you understand (even if only in an intuitive way) the distinction between managing risks (damage) versus preventing events that can increase risk – then I hope the following distillation of how to approach managing the unique risks that stem from developing in .NET and/or Java (managed code) will be of value.

First point to consider: managed code is easy to reverse engineer and modify by design – and there are plenty of legitimate scenarios where this is a good thing.

Your senior management needs to understand that reverse engineering and executable manipulation are well understood and widely practiced. Therefore, if this common practice poses any material risks to your organization, they are compelled to take steps to mitigate those risks – of course, if this basic characteristic of managed code does not pose a material risk, no additional steps are needed (nor should they be recommended).
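To make the first point concrete: everything needed to reconstruct a managed application ships inside the binary itself. The class below is purely hypothetical and used only as an illustration; compile it and the JDK's own javap tool (or any free decompiler such as CFR, Procyon, or Fernflower) will recover its names, constants, and control flow almost verbatim.

    // LicenseChecker.java – a made-up class used only to illustrate the point.
    // Compile it (javac LicenseChecker.java) and run "javap -p -c LicenseChecker":
    // every class, method, and field name reappears in the output, and free
    // decompilers (CFR, Procyon, Fernflower) reconstruct readable Java from it.
    public class LicenseChecker {

        // String constants survive compilation verbatim.
        private static final String EXPECTED_KEY = "ABC-123";

        public boolean isLicensed(String key) {
            // Branch logic like this is trivial to locate and invert in the bytecode.
            return EXPECTED_KEY.equals(key);
        }
    }

None of this requires exotic tooling – javap ships with every JDK, and the decompilers are free downloads.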

Second point to consider: reverse engineering tools don’t commit crimes – criminals do; but criminals have found many ways to commit crimes with reverse engineering (and other categories of) tools.

In order to recommend an appropriate strategy, a complete list of threats is required – simply knowing that IP theft is ONE threat is not sufficient. If the circulation of counterfeit applications poses an incremental threat, you need to capture that too.

Third point to consider: Which of the incident types above are relevant to your specific needs? How important are they? How can you objectively answer these kinds of questions?


Risk management is a mature discipline with well-defined frameworks for capturing and describing risk categories; DO NOT REINVENT THE WHEEL. How significant (material) a given risk may be is defined entirely by the relative impact on well-understood risk categories. The ones listed above are commonly associated with application reverse engineering and tampering - but these are not universal nor is the list exhaustive.

Fourth point to consider: How much risk is too much? How much risk is acceptable (what is your tolerance for risk)? …and what options are available to manage (control) these various categories of risk to keep them within your organization’s “appetite for risk?”

Tolerance (or appetite) for risk is NOT a technical topic – nor are the underlying risks. For example, an Android app developed by 4 developers as a side project may only be used by a small percentage of your clients for relatively inconsequential tasks – the developers may even be external consultants – so the app itself has no real IP, generates no revenue, and is hardly visible to your customer base (let alone to your investors). On the other hand, if a counterfeit version of that app leads to client data loss, reputation damage in public markets, and regulatory penalties, the trivial nature of that Android app really won’t have mattered.

In other words, even when the technical scope of an application is narrow, the risk – and therefore the stakeholders – can often be far-reaching.

Risk management decisions must be made by risk management professionals – not developers (you wouldn't want risk managers doing code reviews, would you?).

Fifth point to consider: what controls are available specifically to help manage/control the risks that stem from managed code development?

Obfuscation is a portfolio of transformations that can be applied in any number of permutations – each with its own protective role and its own side effects.
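As one small illustration (not tied to any particular product), here is what the single most common transform – identifier renaming – does to the hypothetical class from the earlier example; the “after” view is a hand-written approximation of typical decompiler output.

    // Before: the class as the developer wrote it.
    class LicenseChecker {
        boolean isLicensed(String key) {
            return "ABC-123".equals(key);
        }
    }

    // After a renaming transform (an approximation of decompiled output):
    // the identifiers no longer carry intent, so an attacker must reconstruct
    // meaning from behavior alone. Note that renaming by itself leaves string
    // constants and control flow untouched – which is why it is typically
    // combined with string encryption, control-flow obfuscation, and other
    // transforms, each with its own side effects.
    class a {
        boolean a(String s) {
            return "ABC-123".equals(s);
        }
    }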

Tamper detection and defense as well as regular feature and exception monitoring also have their own flavors and configurations.
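To give a sense of what the simplest flavor of tamper detection looks like, here is a deliberately naive sketch: an application hashes its own JAR at startup and compares the digest to a value recorded at build time. The file name and expected digest are placeholders; commercial tamper defense interleaves many such checks throughout the code and pairs them with a defensive response, but the detect-then-react idea is the same.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    // Naive self-integrity check (illustrative only).
    public class TamperCheck {

        // Placeholder values – in practice these would be injected by the build.
        private static final String JAR_PATH = "app.jar";
        private static final String EXPECTED_SHA256 = "<digest recorded at build time>";

        public static boolean applicationLooksIntact() {
            try (InputStream in = Files.newInputStream(Paths.get(JAR_PATH))) {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    digest.update(buffer, 0, read);
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : digest.digest()) {
                    hex.append(String.format("%02x", b));
                }
                // Any mismatch – or any failure to verify – is treated as tampering.
                return hex.toString().equalsIgnoreCase(EXPECTED_SHA256);
            } catch (Exception e) {
                return false;
            }
        }
    }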

Machine attacks, human attacks, attacks whose goal is to generate compilable code versus those designed to modify specific behaviors while leaving others intact – all of these call for different combinations of obfuscation, tamper defense, and analytics.

The goal is to apply the minimum levels of protection and monitoring required to bring identified risk levels down to an acceptable (tolerable) level. Any protection beyond that level is overkill. Anything less is wasted effort. …and this is why mapping all activity to a complete list of risks is an essential first step.

Sixth point to consider: the cure (control) cannot be worse than the disease (the underlying risk). In other words, the obfuscation and tamper defense solutions cannot be more disruptive than the risks these technologies are designed to manage.

Focusing on the incremental risks that obfuscation, tamper defense, and analytics can themselves introduce, the following questions are often important to consider (this is a representative subset – not a complete list):
· Complexity of configuration
· Flexibility to support build scenarios across distributed development teams, build farms, etc.
· Debugging, patch scenarios, extending protection schemes across distinct components
· Marketplace, installation, and other distribution patterns
· Support for different OS and runtime frameworks
· Digital signing, runtime IL standards compliance, and watermarking workflows
· Mobile packaging (or other device specific requirements)
· For analytics there are additional issues around privacy, connectivity, bandwidth, performance, etc.
· For commercial products, vendor viability (will they be there for you in 3 years) and support levels (dedicated trained team? Response times?)

So why bother?
Only if you have well-defined risks that are unacceptably high (operational, compliance, …)
AND the control (technology + process + policy) reduces the risk to acceptable levels
WITHOUT unacceptable incremental risk or expense.

Tuesday, April 22, 2014

Cross Platform Application Analytics: Adding meat to pabulum

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.


We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe that thing (the cliché is Eskimos and snow) – and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. ...and "analytics," "platform," and even "application" all fall into this latter category.
 
What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I'm going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we've just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform. 

PreEmptive Analytics

At PreEmptive, we’d like to think that we've fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those features that bring value to our users, like feature tracking, and
  2. Required, i.e. those features that, if they do not behave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?


The list of “desired features” can literally be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature,” and the project is dead in the water – Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set


Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android). 
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.    

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. Data is cached automatically while your application is offline and streamed back up once connectivity is re-established. 
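For readers who like to see the shape of such a mechanism, a minimal sketch of the offline-caching pattern follows; the class and method names are hypothetical and are not the PreEmptive Analytics API.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.List;

    // Illustrative offline-caching pattern (hypothetical names): events are
    // appended to local storage while the network is down and replayed, in
    // order, once connectivity is re-established.
    public class OfflineTelemetryCache {

        private final Path cacheFile = Paths.get("telemetry-cache.log");

        /** Append one serialized event to the local cache. */
        public void store(String eventJson) throws IOException {
            Files.write(cacheFile,
                    (eventJson + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        /** Replay cached events once the application is back online. */
        public void flush(TelemetrySender sender) throws IOException {
            if (!Files.exists(cacheFile)) {
                return;
            }
            List<String> pending = Files.readAllLines(cacheFile, StandardCharsets.UTF_8);
            for (String eventJson : pending) {
                sender.send(eventJson);   // upload in original order
            }
            Files.delete(cacheFile);      // clear the backlog after a successful flush
        }

        /** Stand-in for whatever actually transmits telemetry upstream. */
        public interface TelemetrySender {
            void send(String eventJson) throws IOException;
        }
    }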

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints. 

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation:

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring. 
  • When data is transmitted, telemetry is encrypted over the wire. 
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications. 
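A toy version of such an opt-in gate might look like the sketch below; the names are hypothetical (this is not the PreEmptive Analytics API), but it shows how a single switch – settable by the end user or by a remote policy call – can keep any data from being collected or transmitted.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustrative opt-in gate (hypothetical names): no telemetry is collected
    // or transmitted unless the user – or a policy fetched from a web service –
    // has explicitly enabled analytics.
    public class AnalyticsOptIn {

        private static final AtomicBoolean optedIn = new AtomicBoolean(false);

        /** Called from a settings screen or from a remote policy check. */
        public static void setOptIn(boolean userConsented) {
            optedIn.set(userConsented);
        }

        /** Every instrumentation point checks the gate before doing any work. */
        public static void trackFeature(String featureName) {
            if (!optedIn.get()) {
                return; // opted out: collect nothing, transmit nothing
            }
            // ... build and enqueue the telemetry event here ...
        }
    }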

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control. 
  • They can be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed. 

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low priority thread – never competing for system resources. 
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application. 
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted. 
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible. 
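Those bullets describe a familiar engineering pattern: a bounded, asynchronous queue drained by a low-priority daemon thread, with a “safety valve” that sheds load instead of competing with the host application. The sketch below shows that pattern in miniature; the names are hypothetical and it omits the OS- and device-specific tuning mentioned above.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal sketch (hypothetical names): a bounded queue absorbs telemetry, a
    // low-priority daemon thread drains it, and when the queue is full new
    // events are simply dropped so the host application never blocks.
    public class TelemetryPump {

        private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

        public TelemetryPump() {
            Thread worker = new Thread(this::drain, "telemetry-pump");
            worker.setDaemon(true);                  // never keeps the app alive
            worker.setPriority(Thread.MIN_PRIORITY); // yield to application work
            worker.start();
        }

        /** Called from instrumentation points; never blocks the caller. */
        public void offer(String eventJson) {
            queue.offer(eventJson); // returns false (drops the event) when full
        }

        private void drain() {
            try {
                while (true) {
                    transmit(queue.take()); // wait for the next event
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down quietly
            }
        }

        private void transmit(String eventJson) {
            // ... batch, compress, and send upstream here ...
        }
    }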

What about the PreEmptive Analytics “desired” cross platform feature set? (The features that make analytics worth doing.) As I’ve already said, this list is literally endless – if I were to list only the categories (let alone the features in each category), it would turn an already long post into a very, very long post. So the desired feature discussion will have to come later… 

What’s the bottom line for “Cross Platform Application Analytics?”


Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit www.preemptive.com/pa and request an eval...