Tuesday, September 1, 2015

When it comes to application risk management, you can't do it alone.

I’m often asked to estimate how many developers are required to obfuscate and harden an organization's applications against reverse engineering and tampering – and when people say “required,” what they usually mean is the bare minimum number of developers that need to be licensed to use our software.

Of course it's important to get the number of licensed users just right (if the count is too high, you're wasting money – but if it's too low, you're either not going to be efficient or effective – or, worse still, you're forced to violate the license agreement to do your job).

Yet, as important as this question seems, it's not the first question that needs answering.

Staffing to effectively manage application risk is not the same as counting the number of concurrent users required to run our (or any) software at a given point in time.

Consider this:

How many people are required to run our application hardening products on a given build of an application? Actually, none at all: both Dotfuscator for .NET and DashO for Java can be fully integrated into your automated build and (continuous) deployment processes.
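
An unattended hardening step still needs to fail loudly when something goes wrong. Here is a minimal sketch of such a build step; the command template is a hypothetical placeholder, not Dotfuscator's or DashO's actual CLI:

```python
import subprocess

def harden(input_path, output_path, tool_cmd):
    """Run a hardening tool as an unattended build step.

    tool_cmd is a hypothetical command template; substitute your
    obfuscator's real command line. {inp}/{out} placeholders are
    filled in with the input and output artifact paths.
    """
    cmd = [arg.format(inp=input_path, out=output_path) for arg in tool_cmd]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Fail the whole build rather than shipping an unprotected binary.
        raise RuntimeError(f"hardening failed (exit code {result.returncode})")
    return output_path
```

The point of raising (rather than logging) is that a CI server treats the exception as a failed build, so an unhardened artifact can never reach the next pipeline stage silently.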

However, how many people does it take to effectively protect your application assets against reverse engineering and tampering? The answer is no fewer than two. Here’s why…

  • Application risk management is made up of one (or more) controls (processes, not programs). These controls must first be defined, then implemented, then applied consistently, and, lastly, monitored to ensure effective use.
  • Application hardening (obfuscation and tamper defense injection) is just such a control – a control that is embedded into a larger DevOps framework – and a control that is often the final step in a deployment process (followed only by digital signing).


Now, in order to be truly effective, application hardening cannot create more risk than it avoids – the cure cannot be worse than the disease.

What risks can come from a poorly managed application hardening control (process)?

If an application hardening task fails and goes undetected,

  • the application may be distributed unprotected into production and the risk of reverse engineering and tamper go entirely unmanaged, or 
  • the application may be shipped in a damaged state causing runtime failures in production.


If an application hardening task failure is detected, but the root cause cannot be quickly fixed, then the application can't be shipped; deadlines are missed and the software can't be used.
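
One cheap way to detect the first failure mode (shipping unprotected) is a post-build smoke test: scan the shipped binary for internal names that obfuscation should have renamed away. A sketch, with the sentinel names purely illustrative:

```python
def leaked_names(binary_path, sentinel_names):
    """Return any sentinel identifiers still readable in a shipped binary.

    If internal names like these survive the build, the hardening step
    was probably skipped or silently failed, and the build should be
    rejected before it reaches production.
    """
    with open(binary_path, "rb") as f:
        data = f.read()
    return [name for name in sentinel_names if name.encode("utf-8") in data]
```

A nonempty result from this check is exactly the kind of condition the monitoring role described below exists to catch and resolve quickly.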

So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?

You’ll need (at least) one person to define and implement the application hardening control.

…and you’ll need one person to manage the hardening control (monitor each time the application is hardened, detect any build issues, and resolve any issues should they arise in a timely fashion).

Could one individual design, implement, and manage an application hardening control? Yes – one person can certainly do all three.

However, if the software being protected is released with any frequency or urgency, one individual cannot guarantee that they will be available to manage that control on any given day at any given time – they simply must have a backup – a "co-pilot."

No organization should implement an application hardening control that’s dependent on one individual – there must be at least two individuals trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for unexpected shipping delays, shipping damaged code, or releasing an unprotected application asset into “the wild” is typically so severe that, even though the likelihood of such an event occurring on any given day may seem remote, the risk cannot be rationalized away.

This is nothing new in risk management – every commercial plane flies with a co-pilot for this very reason – and aircraft manufacturers do not build planes without a co-pilot’s seat. It would be cheaper to build and fly planes that only accommodate one pilot – and it wouldn’t be an issue for most flights – but to ignore the risk that having a single pilot brings would be more than irresponsible – it would be unethical.

Are there other reasons for additional people and processes to be included? Of course – but these are tied to development methodologies, architecture, testing and audit requirements of the development organization, etc. These are not universal practices.

If reverse engineering and/or application tampering pose Intellectual Property, privacy, compliance, piracy, or other material risks, they need to be managed accordingly – as a resilient and well-defined process. In short, when it comes to application risk management, you can't do it alone.

Tuesday, June 23, 2015

6 signs that you may be overdue for a mobile application risk review

Every organization must ultimately make their own assessment as to the level of risk they are willing to tolerate – and mobile application risk is no exception to this rule.

Yet, given the rapidly changing mobile landscape (inside and outside of every enterprise), organizations need to plan on regular assessments of their mobile risk management policies – especially as their mobile applications grow in importance and complexity.

Here are 6 indicators that you may be overdue for a mobile application risk assessment.
  1. Earlier PC/on-premises equivalents ARE hardened and/or monitored. Perhaps these risks need to be managed on mobile devices too – or, conversely, the risks no longer need to be managed at all.
  2. Enterprise mobile apps are distributed through public app marketplaces like Google Play or iTunes. Using public marketplaces exposes apps to potentially hostile users and can be used as a platform to distribute counterfeit versions of those very same apps.
  3. Mobile apps are run within a BYOD infrastructure alongside apps and services outside of corporate control. Access to a device via third-party software can lead to a variety of malicious scenarios that include other apps (yours) installed on the same device.
  4. Mobile apps embed (or directly access) proprietary business logic. Reverse engineering is a straightforward exploit. Protect against IP theft while clearly signaling an expectation of ownership and control – which is often important during a penalty phase of a criminal and/or civil trial.
  5. Mobile apps access (or have access to) personally identifiable information (or other data governed by regulatory or compliance mandates). Understanding how services are called and data is managed within an app can readily expose potential vulnerabilities and unlock otherwise secure access to high-value services.
  6. Mobile apps play a material role in generating or managing revenue or other financial assets. High-value assets or processes are a natural target for bad actors. Piracy, theft, and sabotage begin by targeting “weak links” in a revenue chain. An app is often the first target.
Want to know more about how PreEmptive Solutions can help reduce IP theft, data loss, privacy violations, software piracy, and other risks uniquely tied to the rise of enterprise mobile computing? 

Visit www.preemptive.com - or contact me here - I'd welcome it.

In the meantime, here’s an infographic identifying leading risk categories stemming from increased reliance on mobile applications. The vulnerabilities (potential gaps) call out specific tactics often employed by bad actors; the controls identify corresponding practices to mitigate these risks.

The bottom half of the infographic maps the capabilities of PreEmptive Solutions Mobile Application Risk Portfolio across platforms and runtimes and up to the risk categories themselves.

For more information on PreEmptive Solutions Enterprise Mobile Application Risk product portfolio, check out: PreEmptive Solutions’ mobile application risk management portfolio: four releases in four weeks.

Friday, June 19, 2015

ISV App Analytics: 3 patterns to improve quality, sales, and your roadmap

Application analytics are playing an increasingly important role in DevOps and Application Lifecycle Management more broadly – but ISV-specific use cases for application analytics have not gotten as much attention. ISV use cases – and by extension, the analytics patterns employed to support them – are unique. Three patterns described here are Beta, Trial, and Production builds. Clients and/or prospects using these “product versions” come with different expectations and hold different kinds of value to the ISV – and, as such – each instance of what is essentially the same application should be instrumented differently.

The case for injection

Typically, application instrumentation is implemented via APIs inside the application itself. While this approach offers the greatest control, any change requires a new branch or version of the app itself. With injection – the process of embedding instrumentation post-compile – you can introduce wholly different instrumentation patterns without having to rebuild or branch an application's code base.
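
Because injection happens after compilation, the instrumentation pattern becomes a build-time parameter rather than a code change. A sketch of the idea – the config file names and the injector command here are hypothetical, purely to illustrate one binary yielding three instrumented flavors:

```python
# Hypothetical per-version instrumentation configs for a post-compile injector.
INJECTION_CONFIGS = {
    "beta": "analytics-beta.xml",        # feature discovery + every exception
    "trial": "analytics-trial.xml",      # per-user activity, tied to CRM later
    "production": "analytics-prod.xml",  # opt-in usage only, no PII
}

def injector_args(product_version, assembly):
    """Build the injector command line for one flavor of the same binary."""
    config = INJECTION_CONFIGS[product_version]
    return ["inject-analytics", "--config", config, assembly]
```

The same compiled assembly goes in each time; only the selected config – and therefore the injected telemetry – differs per release channel.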

The following illustration highlights the differences in instrumentation patterns across product versions – patterns that we at PreEmptive use inside our own products.


Beta and/or Preview

  • Measure new key feature discovery and usage 
  • Track every exception that occurs throughout the beta cycle 
  • Measure impact and satisfaction of new use cases (value versus usage) 
  • *PreEmptive also injects “Shelf Life” – custom deactivation behaviors triggered by the end of the beta cycle 

Trial

  • License key allowing for tracking individual user activity in the context of the organization they represent (the prospective client) - this is CONNECTED to CRM records after the telemetry is delivered
  • Performance and quality metrics that are likely to influence outcome of a successful evaluation through better timed and more effective support calls 
  • Feature usage that suggests user-specific requirements – again, increasing the likelihood of a successful evaluation 
  • * PreEmptive injects “Shelf Life” logic to automatically end evaluations (or extend them) based upon the sales cycle 

Production

  • Enforce organization’s opt-in policy to ensure privacy and compliance. NO personally identifying information (PII) is collected in the case of PreEmptive’s production instrumentation. 
  • Feature usage, default setting, and runtime stack information to influence development roadmap and improve proactive support. 
  • Exception and performance metrics to improve service levels. 
  • * PreEmptive injects Shelf Life functionality to enforce annual subscription usage. 
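
The “Shelf Life” behavior called out in all three lists boils down to date-gated deactivation injected into the build. The logic itself is simple; this is an illustrative sketch of the behavior, not PreEmptive's implementation:

```python
from datetime import date

def shelf_life_ok(today, expiry, grace_days=0):
    """Date-gated activation check: True while the build is still usable.

    A beta or trial build injected with logic like this deactivates
    itself after its expiry date, plus any grace period granted (e.g.
    to extend an evaluation mid-sales-cycle).
    """
    return (today - expiry).days <= grace_days
```

What injection adds over hand-writing this check is that the expiry date and the deactivation behavior can be set per build, post-compile, without touching source.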

The stakeholders and their requirements are often not well understood at the start of a development project (and often change over time). Specifically, sales and line-of-business management may not know their requirements until the product is closer to release – or after the release, when there's greater insight into the sales process. A development team could not have baked the right analytics API calls into the code even if it had wanted to – and this is one very strong case for using analytics injection over traditional APIs.

PreEmptive Solutions ISV application analytics examples

Here are recent screen grabs of Dotfuscator CE usage (preview release) inside Visual Studio 2015.
Here is a similar collection of analytics Key Performance Indicators (KPIs) – this time focusing on current user evaluations.

…and lastly, here are a set of representative KPIs tracking production usage of DashO for Java.


If you’re building software for sale – and you’d like to streamline your preview releases, shorten your sales cycles and increase your win rates – and better align your product roadmap with what your existing clients are actually doing – then application analytics should be a part of your business – and – most likely – injection as a means of instrumentation is for you as well.

Wednesday, April 15, 2015

Five tenets for innovation and sustained competitive advantage through application development

I'm privileged to spend most of my working days in front of smart people doing interesting work across a wide spectrum of industries – and in the spirit of "ideas don't have to be original – they just have to be good(c)" (the copyright is my attempt at humor regarding other people's good ideas versus my silly aphorism). Back to my central point: mobile, cloud, the rise of big data, etc. are all contributing to a sense that business (and the business of IT) is entering an entirely new phase fueled by technology, globalization, and more – and with this scale of change comes confusion. Yet in spite of all this background noise, I'm witnessing many of our smartest customers and partners converge on the following five tenets – tenets that I know are serving some of the smartest people in the coolest organizations extremely well – cheers.

1. Organizations must innovate or be rendered obsolete.
   Challenge: Applications now serve as a hub of innovation and a primary means of differentiation – across every industry and facet of our modern economy.
   Response: Innovative organizations use applications to uniquely engage with their markets and to streamline their operations.
2. Genuine innovation is a continuous process – to be scaled and sustained.
   Challenge: Development/IT must internalize evolving business models and emerging technologies while sustaining ongoing IT operations and managing increasingly complex regulatory and compliance obligations.
   Response: Leading IT organizations imagine and deliver high-value applications through agile, feedback-driven development practices and accelerated development cycles that place a premium on superior software quality and exceptional user experiences.
3. Modern applications bring modern risks.
   Challenge: In order to sustain competitive advantage through application innovation, organizations must effectively secure and harden their application asset portfolios against the risks of revenue loss, Intellectual Property theft, denial of service attacks, privacy breaches, and regulatory and compliance violations.
   Response: Successful organizations ensure that security, privacy, and monitoring requirements are captured and managed throughout the application lifecycle, from design through deployment and deprecation – as reflected in appropriate investments and upgrades in processes and technologies.
4. Every organization is a hybrid organization – every IT project starts in the middle.
   Challenge: Organizations must balance the requirement to innovate with the requirement to operate competitively with existing IT assets.
   Response: Mature organizations do not hard-wire development, security, analytics, or DevOps practices to one technology generation or another. The result is materially lower levels of technical debt and the capacity to confidently embrace new and innovative technologies and the business opportunities they represent.
5. Enterprise IT requirements cannot be satisfied with consumer technologies – shared mobile platforms and BYOD policies do not alter this tenet.
   Challenge: Enterprise security, compliance, and integration requirements cannot (and will not) be satisfied by mobile/web development and analytics platforms designed for consumer-focused, standalone app development (and the business models they support).
   Response: Mature IT organizations drive mobile app innovation without compromising core enterprise ALM, analytics, or governance standards by extending proven practices and enterprise-focused platforms and technologies.

Tuesday, April 7, 2015

Darwin and Application Analytics

Survival of the fittest

Technological evolution is more than a figure of speech.

Survival – that is, adoption (technology proliferation and usage) – favors the species (technology) that adapts most effectively to environmental changes and most successfully competes for the limited resources required for day-to-day sustenance. In other words, the most agile technology wins in this winner-take-all Darwinian world.

You might think you know where I’m headed – that I’m going to position application analytics and PreEmptive Analytics in particular as being best able to ensure the agility and resilience applications need to survive – and while that’s true – that’s not the theme of today’s post.

A rose by any other name… and applications are (like) people too!

Today’s theme is on properly classifying application analytics (and PreEmptive Analytics in particular) among all of the other related (and in some cases, competing) technologies – are they fish or fowl? Animal, vegetable, or mineral? Before you can decide if application analytics is valuable – you have to first identify what it is and how it fits into your existing ecosystem – food chain - biosphere.

In biology, all life forms are organized into a hierarchy (taxonomy) of seven levels (ranks) where each level is a super set of the levels below. Here, alongside people and roses, is a proposed “taxonomic hierarchy” for application analytics.

What’s the point here?


What does this tell us about the species “PreEmptive Analyticus”? The hierarchy (precedence of the levels) and their respective traits are what ultimately gives each species their identity.  ...and this holds true for application analytics (and PreEmptive Analytics in particular) too.

Commercial Class software is supported by a viable vendor (PreEmptive Solutions in this case) committed to ensuring the technology’s lasting Survival (with resources and a roadmap to address evolving requirements).

Homegrown solutions are like mules – great for short term workloads, but they’re infertile with no new generations to come or capacity to evolve.

Analytics is the next most significant rank (Order) – PreEmptive Analytics shares a common core of functionality (behavior) with every other commercial analytics solution out there today (and into the future).

HOWEVER, while common functionality may be shared, it is not interchangeable.

Hominids are characterized as Primates with “relatively flat faces” and “three dimensional vision” – both humans and chimpanzees obviously qualify, but no one would confuse the face of a human for that of a chimpanzee. Each species uniquely adapts these common traits to compete and to thrive in its own way.

The Family (analytics focused more specifically on software data) and the Genus (specifically software data emitted from/by applications) each translate into increasingly unique and distinct capabilities – each of which, in turn, drive adoption.

In other words, in order to qualify as a Species in its own right, PreEmptive Analytics must have functionality driving its own proliferation and usage (adoption) distinct from other species e.g. profilers, performance monitors, website monitoring solutions, etc. while also establishing market share (successfully competing). 


How do you know if you've found a genuine new species?


According to biologists and zoologists alike, the basic guidelines are pretty simple: you need a description of the species, a name, and some specimens.

In this spirit, I offer the following description of PreEmptive Analytics – for a sampling of “specimens” (case studies and references) - contact me and I’m more than happy to oblige…

The definition enumerates distinguishing traits and the "taxonomic ranking" that each occupies - so this is not your typical functional outline or marketecture diagram.

CAUTION – keep in mind that common capabilities can be shared across species, but they are not interchangeable – each trait is described in terms of its general function, how it's been specialized for PreEmptive Analytics, and how/why it's adaptable to our changing world (and therefore more likely to succeed!). I’m not going to say who’s the monkey in my analytics analogy here, but I do want to caution against bringing a chimp to do a (wo)man’s job.

PreEmptive Analytics

Core Analytics functionality

Specialized: The ingestion, data management, analytics computations, and the visualization capabilities include “out of the box” support for application analytics specific scenarios including information on usage, users, feature usage patterns, exceptions, and runtime environment demographics.

Adaptable: In addition to these canned analytics features, extensibility points (adaptability) ensure that whatever unique analytics metrics are most relevant to each application stakeholder (product owner, architect, development manager, etc.) can also be supported. 

Software Data (Family traits)


Incident Detection: PreEmptive Analytics (for TFS) analyzes patterns of application exceptions to identify production incidents and to automatically schedule work items (tasks).

Data transport: The PreEmptive Analytics Data Hub routes and distributes incoming telemetry to one or more analytics endpoints for analysis and publication.

Specialized: “Out of the box” support for common exception patterns, automatic offline-caching and common hybrid network scenarios are all built-in.

Adaptable: User-defined exception patterns and support for on-premises deployments, isolated networks, and high volume deployments are all supported. 

Application Data (Genus traits)

Application instrumentation (collecting session, feature, exception, and custom data): PreEmptive Analytics APIs plus Dotfuscator and DashO (for injection of instrumentation without coding) support the full spectrum of PC, web, mobile, back-end, and cloud runtimes, languages, and application types.

Application quality (ensuring that data collection and transmission does not compromise application quality, performance, scale…): PreEmptive Analytics runtime libraries (regardless of the form of instrumentation used) are built to “always be on” and to run without impacting the service level of the applications being monitored.

Runtime data emission and governance (opt-in policy enforcement, offline-caching, encryption on the wire…): The combination of the runtime libraries and the development patterns supported with the instrumentation tools ensure that security, privacy and compliance obligations are met.

Specialized: the instrumentation patterns support every scale of organization from the entrepreneurial to the highly regulated and secure.

Adaptable: Application-specific data collection, opt-in policy enforcement, and data emission are efficiently and transparently configurable, supporting every class of application deployment from consumer to financial, to manufacturing, and beyond… 

PreEmptive Analytics (Species traits)


Every organization must continuously pursue differentiation in order to remain relevant (to Survive). In a time when almost all business that organizations do is digitized and runs on software, custom applications are essential in providing this differentiation.

Specialized: PreEmptive Analytics has integrated and adapted all of these traits (from instrumentation to incident detection) to focus on connecting application usage and adoption to the business imperatives that fund/justify their development. As such, PreEmptive Analytics is built for the non-technical business manager, application owners, and product managers as well as development managers and architects.

Adaptable: Deployment, privacy, performance, and specialized data requirements are supported across industries, geographies, and architectures providing a unified analytics view on every application for the complete spectrum of application stakeholder.

So what are you waiting for? Put down your brontosaurus burger and move your development out of the stone age.

Monday, March 23, 2015

Application Analytics: measure twice, code once

Microsoft recently announced the availability of Visual Studio 2015 CTP 6 – included with all of the awesome capabilities and updates was the debut of Dotfuscator Community Edition (CE) 2015. …and, in addition to updates to user functionality (protection and analytics instrumentation capabilities), this is the first version of Dotfuscator CE to include its own analytics (we’re using PreEmptive Analytics to anonymously measure basic adoption, usage, and user preferences). Here are some preliminary results… (and these could all be yours too, of course, using the very same capabilities from PreEmptive Analytics!)

Users by day comparing new and returning users shows an extremely low share of returning users – this indicates that users are validating that the functionality is present, but are not yet using the technology as part of a build process. This makes sense given that this is the first month of a preview release – users are validating the IDE, not building real products on it.


Feature usage and user preferences are all readily available – including timing of key activities like what % of users are opting in (of course an opt-in policy exists and is enforced), what runtimes they care about (including things like Silverlight, ClickOnce, and Windows Phone…), the split between those who care about protection and/or analytics, and timing of critical activities that can impact DevOps.

Broad geolocation validates international interest and highlights unexpected synergies (or issues) that may be tied to localized issues (language, training, regulation, accessibility, etc.)

This is an example of the most general, aggregated, and generic usage collection – of course the same analytics plumbing can be used to capture every flavor of exception, user behavior, etc. – but ALWAYS determined by your own design goals, and the telemetry is ALWAYS under your control and governance – from "cradle to grave."

BOTTOM LINE: the faster you can iterate, the better your chances for a successful, agile application launch – building a feedback-driven, continuous ALM/DevOps organization cries out for effective, secure, and ubiquitous application analytics. How is your organization solving for this requirement?

Wednesday, November 5, 2014

Application protection – why bother?

(…and, no, this is not a rhetorical question)

Why should a developer (or parent organization) bother to protect their applications? Given what PreEmptive Solutions does, you might think I’m being snarky and rhetorical – but, I assure you, I am not. The only way to answer such a question is to first know what it is you need protection from.

If you’re tempted to answer with something like “to protect against reverse engineering or tampering,” that is not a meaningful answer – your answer needs to consider what bad things happen if/when those things happen. Are you looking to prevent piracy? Intellectual property theft? AGAIN – not good enough – the real answer is going to have to be tied to lost revenue, operational disruption, resulting financial or other damage, etc. Unless you can answer this question, it is impossible to appropriately prioritize your response to these risks.

If you think I’m being pedantic or too academic, then (and forgive me for saying this) you are not the person who should be making these kinds of decisions. If, on the other hand, you’re not sure how to answer these kinds of questions – but you understand (even if only in an intuitive way) the distinction between managing risks (damage) versus preventing events that can increase risk – then I hope the following distillation of how to approach managing the unique risks that stem from developing in .NET and/or Java (managed code) will be of value.

First point to consider: managed code is easy to reverse engineer and modify by design – and there are plenty of legitimate scenarios where this is a good thing.

Your senior management needs to understand that reverse engineering and executable manipulation is well-understood and widely practiced. Therefore, if this common practice poses any material risks to your organization, they are compelled to take steps to mitigate those risks – of course, if this basic characteristic of managed code does not pose a material risk, no additional steps are needed (nor should they be recommended).

Second point to consider: reverse engineering tools don’t commit crimes – criminals do; but criminals have found many ways to commit crimes with reverse engineering (and other categories of) tools.

In order to be able to recommend an appropriate strategy, a complete list of threats is required – simply knowing that IP theft is ONE threat is not sufficient – if the circulation of counterfeit applications pose an incremental threat – you need to capture this too.

Third point to consider: Which of the incident types above are relevant to your specific needs? How important are they? How can you objectively answer these kinds of questions?


Risk management is a mature discipline with well-defined frameworks for capturing and describing risk categories; DO NOT REINVENT THE WHEEL. How significant (material) a given risk may be is defined entirely by the relative impact on well-understood risk categories. The ones listed above are commonly associated with application reverse engineering and tampering - but these are not universal nor is the list exhaustive.

Fourth point to consider: How much risk is too much? How much risk is acceptable (what is your tolerance for risk)? …and what options are available to manage (control) these various categories of risk to keep them within your organization’s “appetite for risk?”

Tolerance (or appetite) for risk is NOT a technical topic – nor are the underlying risks. For example, an Android app developed by 4 developers as a side project may only be used by a small percentage of your clients to do relatively inconsequential tasks – the developers may even be external consultants – so the app itself has no real IP, generates no revenue, and is hardly visible to your customer base (let alone to your investors). On the other hand, if a counterfeit version of that app results in client loss of data, reputation damage in public markets, and regulatory penalties – the trivial nature of that Android app really won’t have mattered.

In other words, even if the technical scope of an application may be narrow, the risk – and therefore the stakeholders – can often be far reaching.

Risk management decisions must be made by risk management professionals – not developers (you wouldn't want risk managers doing code reviews would you?).

Fifth point to consider: what controls are available specifically to help manage/control the risks that stem from managed code development?

Obfuscation is a portfolio of transformations that can be applied in any number of permutations – each with its own protective role and its own side effects.

Tamper detection and defense as well as regular feature and exception monitoring also have their own flavors and configurations.

Machine attacks, human attacks, attacks whose goal is to generate compilable code versus those designed to modify specific behaviors while leaving others intact – all call for different combinations of obfuscation, tamper defense, and analytics.

The goal is to apply the minimum levels of protection and monitoring required to bring identified risk levels down to an acceptable (tolerable) level. Any protection beyond that level is overkill. Anything less is wasted effort. …and this is why mapping all activity to a complete list of risks is an essential first step.
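
That "minimum sufficient protection" idea can be made concrete as a mapping from assessed risk to a control set. Everything here is illustrative – the tiers, the transform names, and the flat "risk reduced per transform" model are all hypothetical simplifications:

```python
# Hypothetical control tiers: each step up adds protection and side effects.
CONTROL_TIERS = {
    "low": ["rename"],
    "medium": ["rename", "control_flow"],
    "high": ["rename", "control_flow", "string_encryption", "tamper_detection"],
}

def select_controls(assessed_risk, tolerance):
    """Pick the smallest tier that brings assessed risk within tolerance.

    Risk and tolerance share an arbitrary 0-10 scale; each transform is
    assumed to reduce residual risk by a fixed 2 points.
    """
    for tier in ("low", "medium", "high"):
        residual = assessed_risk - 2 * len(CONTROL_TIERS[tier])
        if residual <= tolerance:
            return tier, CONTROL_TIERS[tier]
    # Even the strongest tier leaves risk above tolerance; apply it anyway.
    return "high", CONTROL_TIERS["high"]
```

The real-world version of this table is a risk-management artifact, not code – but forcing yourself to write it down is what turns "harden everything" into a defensible, minimal control.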

Sixth point to consider: the cure (control) cannot be worse than the disease (the underlying risk). In other words, the obfuscation and tamper defense solutions cannot be more disruptive than the risks these technologies are designed to manage.


Focusing on the incremental risks that obfuscation, tamper defense, and analytics can themselves introduce, the following questions are often important to consider (this is a representative subset – not a complete list):
· Complexity of configuration
· Flexibility to support build scenarios across distributed development teams, build farms, etc.
· Debugging, patch scenarios, extending protection schemes across distinct components
· Marketplace, installation, and other distribution patterns
· Support for different OS and runtime frameworks
· Digital signing, runtime IL standards compliance, and watermarking workflows
· Mobile packaging (or other device specific requirements)
· For analytics there are additional issues around privacy, connectivity, bandwidth, performance, etc.
· For commercial products, vendor viability (will they be there for you in 3 years) and support levels (dedicated trained team? Response times?)

So why bother?
Only if you have well-defined risks that are unacceptably high (operational, compliance, …)
AND the control (technology + process + policy) reduces the risk to acceptable levels
WITHOUT unacceptable incremental risk or expense.

Tuesday, April 22, 2014

Cross Platform Application Analytics: Adding meat to pabulum

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.


We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe it (the cliché is Eskimos and snow) – and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. …and "analytics," "platform," and even "application" all fall into this latter category.
 
What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I'm going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we've just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform. 

PreEmptive Analytics

At PreEmptive, we’d like to think that we've fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those that bring value to our users, like feature tracking, and 
  2. Required, i.e. those features that, if they misbehave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?


The list of “desired features” can be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature” and the project is dead in the water – Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set


Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android). 
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.    

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. After automatically storing data when your application is offline, it will automatically stream the telemetry up once connectivity is reestablished. 
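Conceptually (class and method names here are illustrative, not the actual PreEmptive API), an offline-first telemetry channel behaves something like this:

```python
from collections import deque

class TelemetryChannel:
    """Sketch of offline-first telemetry: cache while disconnected,
    then stream the cache up once connectivity is reestablished."""

    def __init__(self, transport, max_cached=1000):
        self._transport = transport              # callable that sends one event
        self._cache = deque(maxlen=max_cached)   # bounded: oldest dropped if full
        self.online = False

    def track(self, event):
        if self.online:
            try:
                self._transport(event)
                return
            except ConnectionError:
                self.online = False              # fall through and cache it
        self._cache.append(event)                # offline: store locally

    def reconnected(self):
        """Connectivity restored: flush cached telemetry in arrival order."""
        self.online = True
        while self._cache and self.online:
            self.track(self._cache.popleft())
```

The bounded cache is the mobile-specific accommodation: storage is finite on a device, so the oldest telemetry is sacrificed rather than the user's disk space.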

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints. 

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring. 
  • When data is transmitted, telemetry is encrypted over the wire. 
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications. 
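The opt-in behavior described above can be sketched as a simple gate in front of all collection (again, names are illustrative, not the actual API):

```python
class OptInSwitch:
    """Telemetry gate: nothing is collected unless the user (or a policy
    service) has explicitly opted in. Illustrative names only."""

    def __init__(self, opted_in=False):
        self._opted_in = opted_in

    def set(self, opted_in: bool):
        # Could equally be driven by a web-service call, so the policy can
        # change after the app ships without re-instrumenting or redeploying.
        self._opted_in = opted_in

    def track(self, send, event):
        if self._opted_in:
            send(event)
        # else: drop silently -- no collection, no transmission
```

The key design point is that the switch sits in front of collection, not just transmission: when opted out, the data is never captured in the first place.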

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control. 
  • Be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed. 

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low priority thread – never competing for system resources. 
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application. 
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted. 
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible. 
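The “safety valve” idea – never block or starve the host application, prefer losing telemetry over degrading the user experience – can be sketched with a bounded, non-blocking queue (illustrative only):

```python
import queue

class SafetyValveQueue:
    """Bounded telemetry queue: when the app produces data faster than it
    can be shipped (or resources are scarce), new packets are discarded
    rather than blocking the application thread."""

    def __init__(self, capacity=100):
        self._q = queue.Queue(maxsize=capacity)
        self.dropped = 0

    def offer(self, packet) -> bool:
        try:
            self._q.put_nowait(packet)   # never blocks the caller
            return True
        except queue.Full:
            self.dropped += 1            # safety valve: discard, don't stall
            return False

q = SafetyValveQueue(capacity=2)
q.offer("a")
q.offer("b")
q.offer("c")   # queue full: packet discarded, app thread never waits
```

A background sender would drain the queue; the point is that the producing (application) side can never be held up by the analytics plumbing.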

What about the PreEmptive Analytics “desired” cross platform feature set? (The features that make analytics worth doing.) As I’ve already said, this list is endless – if I were to list only the categories (let alone the features in each category), it would make an already long post into a very, very long post. So, the desired feature discussion will have to come later… 

What’s the bottom line for “Cross Platform Application Analytics?”


Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit www.preemptive.com/pa and request an eval... 


Thursday, November 7, 2013

What can Jay-Z teach us about application analytics?

If you want to move your audience, then a whole lot actually…


The gold standard for analytics is “actionable insight”: how much smarter, faster, or more efficient do we become when the right people get the right information in the right format at the right time?

General purpose analytics solutions are typically built to ingest anything and everything. “Adapters” translate data sources into a common (proprietary) analytics framework – and then the slicing and dicing begins! While obviously flexible, this approach only works if users have a safe and reliable means to collect (and deliver) raw data into their systems; with application analytics, this is rarely the case. 

Recording applications “in the wild” is not an easy or simple task. In addition to the functional requirements to capture the right kinds of runtime telemetry, application instrumentation must meet a host of performance, privacy, quality, and security requirements as well – requirements that vary wildly by industry, use case, and target audience. …and the demand for high-fidelity application analytics has never been greater; you can thank the adoption of feedback-driven development practices, the operational complexity of mobile and cloud computing, and the ever-evolving concerns around privacy and security for that.

So what’s a development team to do? Well, it turns out that there’s nothing new about having to record complex real-world events and then package them up to inspire and move audiences – media moguls and hit makers have been doing it all along!

Developers, if you know you should be including analytics inside your application development process, I recommend that you take a page from the recording industry – it turns out they know a little something about the complexities of capturing user behavior across heterogeneous devices and in diverse settings (the only big difference is that they call their users “musicians”).

I've taken the liberty of condensing a post from a site dedicated to teaching the art and business of audio production and mapping it to the patterns and practices of effective application analytics implementation. You can see the original post at the recording process if you want to check my work.

The infographic below maps each step in "the recording process" to its app analytics analog. I've underlined key points in the original post and added my own take-away.




People will tell you that new technology changes everything – for me, this is just one more concrete example proving just the opposite.

Monday, September 16, 2013

Mobile development takes root; application analytics go mainstream

We’ve just finished up another survey tapping ~8,000 developers; mostly (although definitely not exclusively) of the .NET variety – and I think there’s little room for doubt; the rise of mobile and modern apps is having a profound impact on the way developers work and the tools they use.

What a difference a year makes

We did a similar survey in September 2012 (Who cares about application analytics? Lots of people for lots of reasons) and, even then, the interest in analytics was obvious – but interest had not yet translated into action. 

In September of 2012, we reported that 77% of developers and their management had identified “insight into production application usage” as influential, important, or essential to their work, and 71% identified “near real-time notification of unhandled, caught, and/or thrown exceptions” in the same way.

…BUT, at the same time, only 30% indicated that they were doing any kind of analytics in practice (exception reporting, feature tracking, etc.).
“More people believe that the world is flat than doubt the positive role of application analytics on development.”
Today, that 77% and 71% of developers who “got the value of analytics” is now a solid 100% and 99.5% respectively. (For those that don’t do surveys, you have to appreciate that a 100% opinion is virtually impossible to find – you’d have a hard time getting a 100% consensus on the shape of the planet (round or flat) or even the role that aliens play in picking Super Bowl winners (are they pro-AFC or pro-NFC?).)

Even more impressive is the rise of actual use of analytics. The 30% of development teams that claimed to use some sort of analytics has, in just one year, ballooned to 62%.

The rise of mobile development

Mobile devices have unique capabilities (accelerometer, augmented reality, gyroscope, camera/scanner, gesture recognition, GPS and navigation…) that drive unique development requirements which, in turn, spawn new development patterns and practices – and one of the most notable (in my opinion anyway) is the expectation that some form of application analytics always be included.

This is worth saying again; in traditional PC apps, adding analytics is the exception, not the rule – in mobile apps, the situation is reversed; embedding analytics is the norm.  

This is the other major shift in our year-over-year survey results. In 2012, only 25% of the development teams reported that they were developing mobile apps (iOS, Android, …) – in 2013, that number has more than doubled to 56%. Is it a coincidence that the rise of analytics use is proportionate to the rise in mobile development?

Analytics go mainstream

For analytics to “go mainstream,” mobile analytics development patterns need to be applied (and adapted) beyond narrow consumer-centric scenarios (as lucrative as those scenarios may be) to include line-of-business and “enterprise” apps (with all of the attendant infrastructure, IT governance, and data integration requirements that this implies).  …and we’re seeing evidence of this too.

94% of respondents are building mobile apps targeting consumers, BUT 40% are also deploying apps “used by employees” to “support a larger business,” e.g. enterprise apps!

65% of enterprise mobile app dev teams (essentially the same percentage as their consumer-centric counterparts) also report using (some form) of analytics.

Analytics: one size fits all?

Of course not – the specialization of application analytics technologies is another inevitable outcome of all of this change – and developers are on the front-lines trying to figure all of this out.

The following chart lists the analytics technologies our respondents have reported using – Google’s (and to a lesser degree, Flurry’s) prominence should come as no surprise. …but what’s the deal with the homegrown category?

Developers "doing it themselves" would strongly suggest that the reigning champions of consumer-centered mobile analytics are failing to meet a growing set of analytics requirements.


Is it a coincidence that the homegrown and PreEmptive analytics adoption rates map so closely to the enterprise mobile app market share listed above? (40%)

These tools, they are a-changing

Analytics is not the only development tool category undergoing change and reinvention. When asked to name specialized mobile development tools, responses included both “the familiar” and “the brand new.” (Note, this was not multiple choice – this was an open text box where anything – or nothing – could be entered.)

The familiar: Visual Studio was cited as a “specialized toolset” by 24.6% of those listing at least one specialized mobile app development tool – of the 49 unique tools that were cited, this was the #1 response – and should give the Visual Studio product team some satisfaction as they are clearly establishing Visual Studio as something more than just a .NET-centric dev environment.

The brand new: Xamarin, the cross platform mobile app development platform, was the most common new – and/or truly mobile-specific – toolset (they released a major refresh of their solution in 2013). Xamarin was cited by 9.5% of those listing at least one specialized mobile app development tool. 

(Are you using Xamarin? Contact me if you’d like to learn more about our soon to be released analytics integration with Xamarin – or visit the PreEmptive website if you’re reading this during or after Q4/2013)

The complete list of tools mentioned at least once includes:

While Visual Studio was cited most often, relative newcomer, Xamarin, is already making its mark.

Game over? Are you kidding!? We haven’t even figured out the rules yet…

Have development organizations figured out how they’re going to tackle current and future mobile development requirements? (That, my friends, is what we call a rhetorical question)

The rise and assimilation of mobile devices is far, far from over and, sadly, I would suggest that picking new tools and expanding technical skillsets is the least of a development organization’s worries – grappling with entirely new sets of operational, legal, social, security, and privacy obligations (that are themselves changing and often inconsistent) poses (in my view) the most serious risk (a.k.a. opportunity) for today’s development shop. 

…and those that lack a sense of urgency around these issues, that take the posture of waiting until these issues come to them, are in for a world of hurt.

For example, 

Personally Identifiable Information (PII)
  • 15% of respondents that collect personally identifiable information (PII) do not offer their users a way to opt-out 
  • 18% that collect PII do not offer a link to their privacy policy (there was only a 6% overlap between these two groups) 

To know that you’re collecting PII and to not provide these mechanisms is a serious omission (both from a development and an operations perspective) – and this is the easy stuff! This question also presumes that developers are using the most up-to-date and appropriate PII definition – a stretch to be sure.

Regulatory and Compliance

For those who indicated that their apps have “regulatory or compliance requirements” (29.9% of respondents), the obligations are, by their very nature, more complex, ambiguous, and fluid.
  • 36.6% of respondents whose apps are subject to compliance and/or regulatory oversight do not offer their users a way to opt-out 
  • 16.7% of respondents whose apps are subject to compliance and/or regulatory oversight do not offer a link to their privacy policy.

…and what about collecting application usage information?
  • 41.7% of respondents whose apps are subject to compliance and/or regulatory oversight use Google Analytics or Flurry – analytics providers whose business model is predicated on harvesting and monetizing usage telemetry! 

Have these development organizations reconciled their regulatory obligations with Google’s and Flurry’s usage terms or privacy policies? 

In confusion, there’s opportunity

…and I think everyone can agree – mobile application development is full of opportunity.

Tuesday, September 10, 2013

(Zinfandel + BBQ = $$$) - I told you so

Back in February of 2011, I posted Riddle me this! Where can French, Italians, and Germans all agree? focusing on how a collection of early Windows Phone developers were leveraging analytics; the 10 apps included games, media apps, and a foodie app, VinoMatch, that paired food and wine. In this last example, our analytics tracked user behaviors (which foods users chose) and which wines they selected during the pairing.
Analysis of food selection by users' nationality showing Italians' special interest in BBQ

I was surprised to learn a) Italian interest in pairing wine with BBQ and b) the implied potential to market Zinfandel to Italians as an American wine for BBQ (because Zinfandel was bred from a cheap table-variety Italian grape, Italians have typically been a hard sell).

So… imagine my surprise now in 2013, as I see a series of targeted marketing campaigns with exactly this message (from multiple wineries). I wonder how many hundreds of thousands of dollars in market research these guys spent when all they had to do was instrument an app?!
  Cin Cin!

Monday, September 9, 2013

Your phone can be a very scary place

Mobile apps are changing our social, cultural, and economic landscapes – and, with the many opportunities and perks that these changes promise, come an equally impressive collection of risks and potential exploits.

This post is probably way overdue – it’s an update (supplement, really) to an article I wrote for The ISSA Journal on Assessing and Managing Security Risks Unique to Java and .NET back in ’09. The article laid out reverse engineering and tampering risks stemming from the use of managed code (Java and .NET). The technical issues were really secondary – what needed to be emphasized was the importance of having a consistent and rational framework to assess the materiality (relative danger) of those risks (piracy, IP theft, data engineering…).

In other words, the simple fact that it’s easy to reverse engineer and tamper with a piece of managed code does not automatically lead to a conclusion that a development team should make any moves to prevent that from happening. The degree of danger (risk) should be the only motivation (justification) to invest in preventative or detective measures; and, by implication, risk mitigation investments should be in proportion to those risks (low risk, low investment).

Here’s a graphic I used in ’09 to show the progression from managed apps (.NET and Java) to the risks that stem naturally from their use.
Risks stemming from the use of Java and .NET circa 2009

Managed code risks in the mobile world

Of course, managed code is also playing a central role in the rise of mobile computing as well as the ubiquitous “app marketplace,” e.g. Android and, to a lesser degree, Windows Phone and WindowsRT – and, as one might predict, these apps are introducing their own unique cross-section of potential risks and exploits. 

Here is an updated “hierarchy of risks” for today’s mobile world:
Risks stemming from the use of Java and .NET in today’s mobile world

The graphic above highlights risks that have either evolved or emerged within the mobile ecosystem – and these are probably best illustrated with real world incidents and trends (also highlighted below):

Earlier this year, a mobile development company documented how to turn one of the most popular paid Android apps (SwiftKey Keyboard) into a keylogger (something that captures everything you do and sends it somewhere else).  

This little example illustrates all of the risks listed above:
  • IP theft (this is a paid app that can now be side loaded for free)
  • Content theft (branding, documentation, etc. are stolen)
  • Counterfeiting (it is not a REAL SwiftKey instance – it’s a fake – more than a cracked instance)
  • Service theft (if the SwiftKey app makes any web service calls that the true developers must pay for – then these users are driving up cloud expenses – and if any of these users write-in for support, then human resources are being burned here too)
  • Data loss and privacy violations (obviously there is no “opt-in” to the keylogging and the passwords, etc. that are sent are clearly private data)
  • Piracy (users should be paying the licensing fee normally charged)
  • Malware (the keylogging is the malware in this case)
In this scenario, the “victim” would have needed to go looking for “free versions” of the app away from the sanctioned marketplace – but that’s not always the case.


Symantec recently reported finding counterfeit apps inside the Amazon Appstore (and Amazon has one of the most rigorous curating and analysis check-in processes). I, myself, have had my content stripped and look alike apps published across marketplaces too - see my earlier posts Hoisted by my own petard: or why my app is number two (for now) and Ryan is Lying – well, actually stealing, cheating and lying - again). 

Now these anecdotes are all too real and, sadly, they are also by no means unique. Trend Micro found that 1 in every 10 Android apps is malicious and that 22% of apps inappropriately leaked user data – that is crazy!

For a good overview of Android threats, check out this free paper by Trend Micro: Android Under Siege: Popularity Comes at a Price.

To obfuscate (or not)?

As I’ve already written – you shouldn’t do anything simply to make reverse engineering and tampering more difficult – you should only take action if the associated risks are significant enough to you and those steps would reduce the risks to an acceptable level (your “appetite for risk”).

…but, seriously, who cares what I think? What do the owners of these platforms have to say?

Android “highly recommends” obfuscating all code and emphasizes this in a number of specific areas, such as: “At a minimum, we recommend that you run an obfuscation tool” when developing billing logic. …and they go so far as to include an open source obfuscator, ProGuard – where, again, Android “highly recommends” that all Android apps be obfuscated.
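For reference, a ProGuard configuration typically keeps the entry points the OS must find by name and obfuscates everything else. A minimal sketch (the package and class names are hypothetical; the exact keep rules depend entirely on the app):

```
# Keep the entry points Android locates by name; obfuscate everything else.
-keep public class com.example.app.MainActivity

# Preserve metadata some libraries need at runtime.
-keepattributes *Annotation*,Signature

# Collapse obfuscated classes into the default package.
-repackageclasses ''
```

Rules like these live in a proguard rules file referenced by the build, so obfuscation runs as an automated build step rather than a manual one.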

Microsoft also recommends that all modern apps be obfuscated (see Windows Phone policy) and they also offer a “community edition” obfuscator (our own Dotfuscator CE) as a part of Visual Studio. 

Tamper detection, exception monitoring, and usage profiling

Obfuscation “prevents” reverse engineering and tampering; but it does not actively detect when attackers are successful (and, with enough skill and time – all attackers can eventually succeed). Nor would obfuscation defend against attacks or include a notification mechanism – that’s what tamper defense, exception monitoring, and usage profiling do. If you care enough to prevent an attack, chances are you care enough to detect when one is underway or has succeeded. 

Application Hardening Options (representative – not exhaustive)

If you decide that you do agree with Android’s and Microsoft’s recommendation to obfuscate – then you have to decide which technology is most appropriate to meet your needs – again, a completely subjective process to be sure, but hopefully, the following table can serve as a comparative reference.


Thursday, June 13, 2013

Mobile Analytics: like playing horseshoes or bocce ball? (When close is “good enough”)

A recent post on Flurry’s “industry insight” blog caught my eye. The post, The iOS and Android Two-Horse Race: A Deeper Look into Market Share, called out the fact that iOS app users spend more time inside applications than their Android counterparts and then posited three potential underlying causes (condensed here – visit their post for the full narrative):
  • One was that the two dominant operating systems have tended to attract different types of users (we’ll get back to this shortly – this is close).
  • A second possible reason was that the fragmented nature of the Android ecosystem creates greater obstacles to app development and therefore limits availability of app content (suggesting app quality is the driving force).
  • The third possible explanation offered by Flurry was that iOS device owners use apps so developers create apps for iOS users and that in turn generates positive experiences, word-of-mouth, and further increases in app use (combining the two reasons above I suppose).

What struck me in this post was that, while there’s no disputing Flurry’s observation about “time spent in apps” across platforms, the lack of precision within the “2.8 billion app sessions” they track every day made genuine root cause analysis virtually impossible – and led to, in my view, an erroneous conclusion (or, more precisely, a false set of options where the real mechanics were all but invisible). 

Back in January, I published the blog post Marketplaces Matter and I’ve got the analytics to prove it where I compared two versions of one of my apps, Yoga-pedia, published through the Google Play and Amazon marketplaces. What’s noteworthy here is that the apps are genuinely identical – functionality, UX, everything – and yet, the total time spent inside the app distributed through the Amazon marketplace was 40% higher than from Google Play. Pivoting the ratio, total time spent inside the app sourced from Google Play was about 72% of the time spent inside the (identical) app sourced from Amazon. 

Now, if I’m interpreting Flurry’s graph in the above blog for January 2013 properly (when my earlier stats were generated), it shows a nearly identical ratio (the total time in “Android apps” was ~75-80% of total time in iOS). So what does that suggest?

  1. iOS users and Android users clearly use different marketplaces – but marketplace source is not something tracked.
  2. iOS apps themselves are of course always different from Android apps (I have an iOS version of Yoga-pedia that is close to my Android flavors – but even these are different). This is a major variable that Flurry analytics cannot separate out – they are looking at the roll-up of all iOS apps and comparing them to all Android apps. 
  3. Treating all Android apps as a single data set (which includes multiple marketplaces) – further obscures what may be one of the key drivers of user behavior – the marketplace community.

So – going back to the first hypothesis, that Android attracts a different class of user than does iOS, I think that is as close as they could come given the kind of data available – the real answer is most likely that the Apple marketplace attracts a different kind of user than does Google Play (and the mix of Amazon Android app users is probably not significant enough to move the big needle).

…And so that begs my original question – is this kind of imprecise (but still accurate) intelligence “good enough” (like horseshoes, bocce ball, and nuclear war)? If this was as far as true application analytics could take me – then maybe… 

BUT, once I had identified the potential role that marketplaces can play, I was able to drill down even deeper to identify other marketplace deltas that were (at least to me) extremely valuable, including:
  • Amazon click through rate (CTR) was 164% higher than the Google Play CTR 
  • Google Play Ad Delivery Failure rate (ADFR) was 199% higher than the Amazon ADFR  
  • Amazon user upgrade rate was 54% higher than the Google Play upgrade rate (from free to paid app version).

So, in my case, owning my own data and having an instrumentation and analytics platform able to capture data points specific to my needs (precision) turns out to be very important indeed.

So why would anyone use technology like Flurry’s? LOTS OF REASONS relating to ad revenue and all of the other monetization services they offer app developers (that’s why they’re in business) – and I guess that’s the big point. Services and technologies like Flurry’s are built for app monetization – and to the extent that some analytics are an important ingredient in their recipe, you can bet that they’ll nail it – but to do more would be over-engineering at best and, more likely, pose a material risk to their entire business model.

For advertising across huge inventories of mobile apps, analytics should be a bit like playing horseshoes – knowing that I can expect iOS to generally perform better than Android is useful. 

On the other hand, as a development organization, if I really want to fine tune my app and optimize for adoption, specific behaviors, and operational/development ROI – I need an application analytics solution built with that use case in mind – not only are alternative analytics solutions missing key capabilities, there are solid business reasons that say those alternative technologies should actively avoid adding those very capabilities for all time.