Friday, December 12, 2014

Welcome to The Show(1) HockeyApp

In what can be described as the latest Snipe(2) in the Barn Burner(3) game we call application analytics, Microsoft announced its acquisition of HockeyApp. Most of the early commentary I’ve read seems to focus on the fact that Microsoft has invested in a native iOS/Android API, but to me that’s not the most interesting nugget… what’s most interesting to me is that HockeyApp has been built to be a hardcore Stay-at-home defenseman (4) (that’s the last hockey pun, I swear, see definitions below). 
(1) The Show (noun): the NHL, used in the context of “making it to The Show”.
(2) Snipe (noun): a powerful or well-placed shot that results in a pretty goal.
(3) Barn Burner (noun): used to describe a game that is high scoring, fast paced, and exciting to watch.
(4) Stay-at-home defenseman: A defenseman who plays very defensively. He doesn't skate with the puck toward the offensive zone very often but will look to pass first. Usually the last player to leave his defensive zone.

Maybe I’m too close to this space, but hasn’t Microsoft been consistent and clear in its communications that it was always the plan to have Application Insights support native iOS and Android apps? So they bought some technology and talent to accelerate the process – that’s neither novel nor controversial; it seems like a smart move.

What DOES strike me as interesting is HockeyApp’s focus on analytics for testing versus production – and for enterprise use versus consumer-facing.

Microsoft is in an all-out sprint to deliver a comprehensive and fully integrated dev-devops-ops ecosystem where the distinctions between enterprise, b2b, and consumer categories dissolve – and, with HockeyApp, they appear to be killing two birds with one stone: native iOS and Android APIs AND an analytics framework optimized for test and other (relatively) small and well-defined user communities (such as some enterprise scenarios) – two areas where Microsoft has traditionally been quite successful.

Support for side loading, user-by-user index buckets, and a privately managed faux marketplace all work well in these scenarios, but (I would suggest) the very same implementations will struggle under the strain of high-volume 24x7 operations – but that's OK; that's not the intent.

It’s clear that, as application analytics matures as a category, we should expect to see increasing specialization and segmentation – software built to track shopping cart behavior and user loyalty (or generate system logs) is not going to be able to cover these increasingly well-defined use cases. 

Here's my latest chart comparing the various "categories" of application analytics solutions... (all errors and omissions are obviously mine and mine alone)

As always, check out the latest on PreEmptive Analytics.

Thursday, November 20, 2014

Application Analytics Innovation: Wolters Kluwer, CCH Gets It

I've been working on application analytics use cases and scenarios for going on nine years now – and I spend a good deal of my time supporting (and learning from) dev teams of all shapes and sizes – and, having said that, I’m pleased to say this week was a first for me. This week I had the good fortune to sit in on the final hours of a two-day Code Games inside Wolters Kluwer, CCH.

Coding competitions/events like this are nothing new, but running a good one is never easy – required ingredients include a positive, nurturing culture, some serious organizational and editorial skills, and (of course) sharp developers. On this day, Wolters Kluwer, CCH had all three on display in spades.


Now, I’m not the first to take notice (those of you that know me know that one of my favorite aphorisms is that ideas only have to be good – they don’t need to be original). Forbes covered Wolters Kluwer’s code games earlier this week in their article Top Women CEOs On How Bold Innovation Drives Business. Karen Abramson, CEO, Wolters Kluwer Tax and Accounting highlights their “Code Games” as one of the three pillars in “a constant eco-system of innovation across the organization.”

SERIOUS ORGANIZATIONAL AND EDITORIAL SKILLS? YES! (… and here’s where it gets interesting)

I wish that I could share some of the awesome presentations I heard on that night (there were truly some awesome ones), but an executed NDA was the prerequisite for my attendance. What I CAN relay is that their Code Games was the first one I've seen where “most innovative use of application analytics” was one of the award categories. At the end of the night, one of the two Code Games organizers, Elizabeth Weissman, Director of Innovation, iLab Solutions at Wolters Kluwer, CCH said that she thought the application analytics category was a perfect complement to the other wholly business-focused ones because it sent the dev teams the important message that “application analytics need to be a part of an application’s design from the very beginning – not an afterthought.”

It is worth noting that the other mainstream award categories (which were, in fact, more prestigious because they were focused squarely on core business impact) were judged on a) Innovation, b) Technical Achievement, and c) Potential Value Generation. As such, no team would have included application analytics at all if they did not believe upfront that it would contribute in some material way to one or more of these three criteria.

…but, for me, it’s bigger than that. As the teams that included app analytics presented to the Code Games judges, those judges (and the 150+ developers in the audience) also got the message that app analytics is not just for website forensics and user clicks. In this case, the judges’ panel included Wolters Kluwer, CCH executives Teresa Mackintosh, President & CEO; Mark Lawler, VP Software Development; Brian Diffin, Executive VP Global Technology; and some of Wolters Kluwer, CCH’s own VIP clients – and now they all get it too!
From right to left, Elizabeth, Teresa, Bernie, and me (photo-bombing this Wolters Kluwer, CCH "A-team")

How’d they do it? Working with Bernie Hirsch, Director, Software Development at Wolters Kluwer, CCH (the other half of the Code Games organizer dynamic duo), we set up a privately hosted PreEmptive Analytics endpoint in an Azure VM that matched their existing production analytics environment and allowed dev teams to securely and easily add analytics to their projects – whether or not the apps ran on-premises, used client data, connected to internal systems, etc.


As I've already said, I can’t describe specifically what these teams built, but here are a few factoids:

  • Every team that decided to include app analytics succeeded. 
  • The teams instrumented apps running .NET, JavaScript, and mobile surfaces, and the apps themselves were both customer-facing and internal line-of-business apps. 
  • The teams collected session and usage data, exceptions, timing, and custom-app-specific telemetry too. 
  • While the applications ran the gamut from on-premises LoB and client-facing, all of the app telemetry was transmitted to Azure-hosted (private cloud) endpoints (and one app then pulled the data out and back into the very app that was being monitored! – but now I have to stop before I say too much). 
  • Not all teams incorporated analytics into their projects, but the most decorated team was one of those that did – NOT to track exceptions or page views – but as the backbone to one of their most powerful data-driven features for the business.
Developer presentations ran into the evening in front of a packed house and 150+ employees watching remotely.
So there we have it: Wolters Kluwer, CCH brought together the culture, the organizational savvy, and the technical talent to pull off what was truly an exceptional event.  ...and I'm grateful that I had the chance to come along for the ride. Cheers!

Wednesday, November 5, 2014

Application protection – why bother?

(…and, no, this is not a rhetorical question)

Why should a developer (or parent organization) bother to protect their applications? Given what PreEmptive Solutions does, you might think I’m being snarky and rhetorical – but, I assure you, I am not. The only way to answer such a question is to first know what it is you need protection from.

If you’re tempted to answer with something like “to protect against reverse engineering or tampering,” that is not a meaningful answer – your answer needs to consider what bad things happen if/when those things happen. Are you looking to prevent piracy? Intellectual property theft? AGAIN – not good enough – the real answer is going to have to be tied to lost revenue, operational disruption resulting in financial or other damage, etc. Unless you can answer this question, it is impossible to appropriately prioritize your response to these risks.

If you think I’m being pedantic or too academic, then (and forgive me for saying this) you are not the person who should be making these kinds of decisions. If, on the other hand, you’re not sure how to answer these kinds of questions – but you understand (even if only in an intuitive way) the distinction between managing risks (damage) versus preventing events that can increase risk – then I hope the following distillation of how to approach managing the unique risks that stem from developing in .NET and/or Java (managed code) will be of value.

First point to consider: managed code is easy to reverse engineer and modify by design – and there are plenty of legitimate scenarios where this is a good thing.

Your senior management needs to understand that reverse engineering and executable manipulation are well understood and widely practiced. Therefore, if this common practice poses any material risks to your organization, they are compelled to take steps to mitigate those risks – of course, if this basic characteristic of managed code does not pose a material risk, no additional steps are needed (nor should they be recommended).

Second point to consider: reverse engineering tools don’t commit crimes – criminals do; but criminals have found many ways to commit crimes with reverse engineering (and other categories of) tools.

In order to be able to recommend an appropriate strategy, a complete list of threats is required – simply knowing that IP theft is ONE threat is not sufficient; if the circulation of counterfeit applications poses an incremental threat, you need to capture that too.

Third point to consider: Which of the incident types above are relevant to your specific needs? How important are they? How can you objectively answer these kinds of questions?

Risk management is a mature discipline with well-defined frameworks for capturing and describing risk categories; DO NOT REINVENT THE WHEEL. How significant (material) a given risk may be is defined entirely by the relative impact on well-understood risk categories. The ones listed above are commonly associated with application reverse engineering and tampering - but these are not universal nor is the list exhaustive.

Fourth point to consider: How much risk is too much? How much risk is acceptable (what is your tolerance for risk)? …and what options are available to manage (control) these various categories of risk to keep them within your organization’s “appetite for risk?”

Tolerance (or appetite) for risk is NOT a technical topic – nor are the underlying risks. For example, an Android app developed by 4 developers as a side project may only be used by a small percentage of your clients to do relatively inconsequential tasks – the developers may even be external consultants – so the app itself has no real IP, generates no revenue, and is hardly visible to your customer base (let alone to your investors). On the other hand, if a counterfeit version of that app results in client data loss, reputation damage in public markets, and regulatory penalties, the trivial nature of that Android app really won’t have mattered.

In other words, even if the technical scope of an application may be narrow, the risk – and therefore the stakeholders – can often be far reaching.

Risk management decisions must be made by risk management professionals – not developers (you wouldn't want risk managers doing code reviews would you?).

Fifth point to consider: what controls are available specifically to help manage/control the risks that stem from managed code development?

Obfuscation is a portfolio of transformations that can be applied in any number of permutations – each with its own protective role and its own side effects.

Tamper detection and defense as well as regular feature and exception monitoring also have their own flavors and configurations.

Machine attacks, human attacks, attacks whose goal is to generate compilable code versus those designed to modify specific behaviors while leaving others intact – all call for different combinations of obfuscation, tamper defense, and analytics.

The goal is to apply the minimum levels of protection and monitoring required to bring identified risk levels down to an acceptable (tolerable) level. Any protection beyond that level is overkill. Anything less is wasted effort. …and this is why mapping all activity to a complete list of risks is an essential first step.
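That minimum-sufficient-control idea can be sketched numerically. This is a hypothetical illustration – the scoring scale, function names, and numbers are invented for this post, not taken from any specific risk framework:

```javascript
// Hypothetical risk-scoring sketch: score = likelihood (1-5) x impact (1-5);
// a control reduces likelihood and/or impact, leaving a residual risk.

function residualRisk(risk, control) {
  const likelihood = Math.max(1, risk.likelihood - control.likelihoodReduction);
  const impact = Math.max(1, risk.impact - control.impactReduction);
  return likelihood * impact;
}

function worthApplying(risk, control, tolerance) {
  const before = risk.likelihood * risk.impact;
  const after = residualRisk(risk, control);
  // A control is justified only if the risk starts above tolerance
  // and the control brings it down to (or below) tolerance.
  return before > tolerance && after <= tolerance;
}

// Illustrative numbers only.
const ipTheft = { likelihood: 4, impact: 5 };
const obfuscation = { likelihoodReduction: 3, impactReduction: 0 };

console.log(worthApplying(ipTheft, obfuscation, 9)); // → true (20 → 5)
```

The point of the sketch isn't the arithmetic – it's that "how much obfuscation?" becomes answerable only after each risk has a score and the organization has a stated tolerance.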

Sixth point to consider: the cure (control) cannot be worse than the disease (the underlying risk). In other words, the obfuscation and tamper defense solutions cannot be more disruptive than the risks these technologies are designed to manage.

Focusing on the incremental risks that introducing obfuscation, tamper defense, and analytics can introduce, the following questions are often important to consider (this is a representative subset – not a complete list):
· Complexity of configuration
· Flexibility to support build scenarios across distributed development teams, build farms, etc.
· Debugging, patch scenarios, extending protection schemes across distinct components
· Marketplace, installation, and other distribution patterns
· Support for different OS and runtime frameworks
· Digital signing, runtime IL standards compliance, and watermarking workflows
· Mobile packaging (or other device specific requirements)
· For analytics there are additional issues around privacy, connectivity, bandwidth, performance, etc.
· For commercial products, vendor viability (will they be there for you in 3 years) and support levels (dedicated trained team? Response times?)

So why bother?
Only if you have well-defined risks that are unacceptably high (operational, compliance, …)
AND the control (technology + process + policy) reduces the risk to acceptable levels
WITHOUT unacceptable incremental risk or expense.

Wednesday, October 8, 2014

Welcome Xamarin Insights (seeing the forest through the trees)

First, let me state for the record that I am a huge fan of Xamarin - when I say this, I mean to include both their great technology and their people (I've only met a few, but they've never disappointed). So with that out of the way, I listened with great interest as they announced Xamarin Insights at their user group this morning. As someone with a personal stake in the broad category of application analytics, you can imagine that when a company like Xamarin enters my space, they're going to get my undivided attention.

My first reaction was that the name "Xamarin Insights" sounded a lot like Microsoft's "Application Insights" and as I watched the presentation and then reviewed the web content, the similarities grew even stronger.

Of course, if you're a developer on either of the (*) Insights teams you're going to be mildly offended by this last statement as you no doubt see STARK differences - and, at some important level, you're probably right - but I'm not on either dev team, I'm part of the PreEmptive Analytics team and so this is the area where I see the "STARK differences." ...and so that has prompted me to populate the following table comparing all three, Xamarin Insights, Application Insights, and PreEmptive Analytics.

I've tried to focus on material differences that are most likely to make one approach more effective than the other two - and to make this crystal clear - there are scenarios where each option is better suited than the other two - so understanding YOUR requirements is the first and MOST IMPORTANT step in selecting your optimal analytics solution.

Xamarin Insights vs. Application Insights vs. PreEmptive Analytics

Targeted appeal
  • Xamarin Insights / Application Insights: Enterprises and ISVs targeting modern platforms
  • PreEmptive Analytics: Enterprises and ISVs with established app portfolios driving large, regulated, and secure operations extending into modern/mobile

Release status
  • Xamarin Insights: Free with pricing TBD
  • Application Insights: Free with pricing TBD
  • PreEmptive Analytics: Licensed by product component

Applications supported
  • Xamarin Insights: API for C#/F# supporting native Xamarin targets (end-user apps only)
  • Application Insights: API for C/C#/F#, JavaScript supporting Microsoft targets (MODERN client-side AND server-side apps/components)
  • PreEmptive Analytics: All apps supported by (*)Insights PLUS C, C++, Java, traditional .NET, middle-tier, on-premises, etc.

Endpoint/analytics engines and portal
  • Xamarin Insights: Multi-tenant hosted by Xamarin
  • Application Insights: Multi-tenant hosted by Microsoft
  • PreEmptive Analytics: On-premises or hosted – hosting can be by 3rd party or PreEmptive

Telemetry collected
  • Xamarin Insights – Events: atomic mobile & page; Exceptions: unhandled and caught; Custom: strings; System and performance: mobile only
  • Application Insights – Events: atomic mobile & page; Exceptions: unhandled; Custom: strings; System and performance: Modern only
  • PreEmptive Analytics – Events: all (*)Insights PLUS arbitrary workflow and in-code spans; Exceptions: unhandled, caught, thrown; Custom: strings, serialized data structures from multiple sources; System and performance: all runtimes and surfaces

Supported organizations
  • Xamarin Insights: Xamarin devs ONLY
  • Application Insights: Microsoft-based devs ONLY
  • PreEmptive Analytics: All devs supported by (*)Insights PLUS all other enterprise, ISV, and embedded app devs

Data dimensions
  • Xamarin Insights: Only data originated inside an app can be analyzed
  • Application Insights: Data inside an app AND data accessible from within an Azure account can be analyzed
  • PreEmptive Analytics: Any data source available within an enterprise or via external services can be mashed up to enrich telemetry

Additional PreEmptive Analytics capabilities
  • Opt-in/out policy enforcement
  • Offline caching
  • Extensible indexing and UI on a role-by-role basis (app owner, dev mgr, etc.)
  • Injection of instrumentation for managed code
  • User and organization metrics, including integration with Enterprise credentials
  • Automatic creation of TFS work items based upon business rules and patterns

Embedded inside Visual Studio
  • Application Insights: Starting with VS2013/14
  • PreEmptive Analytics: Since 2010

One thing I know for sure: no one will be building applications without analytics in the next few years. Figuring this out for YOUR dev requirements will be a critical task soon enough – it's not a question of IF, only WHEN. So, if applications are an important part of your life, this is something that you cannot postpone for much longer (it may already be too late!) Enjoy!

Tuesday, April 22, 2014

Cross Platform Application Analytics: Adding meat to pabulum

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.

We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe it (the cliché is Eskimos and snow) – and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. ...and "analytics," "platform," and even "application" all fall into this latter category. 
What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I'm going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we've just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform. 

PreEmptive Analytics

At PreEmptive, we’d like to think that we've fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those that bring value to our users, like feature tracking, and 
  2. Required, i.e. those features that, if they do not behave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?

The list of “desired features” can literally be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature,” and the project is dead in the water – Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set

Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android). 
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.    

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. After automatically storing data when your application is offline, it will automatically stream the telemetry up once connectivity is reestablished. 

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints. 
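The caching behavior described above can be sketched roughly as follows. This is an illustrative sketch only – class and method names are invented, and it is not PreEmptive's actual implementation (which, among other things, persists the cache and applies mobile-specific storage limits):

```javascript
// Illustrative offline-caching sketch: telemetry is queued while the app is
// offline and automatically streamed up once connectivity is reestablished.

class TelemetryChannel {
  constructor(send) {
    this.send = send;   // transport function, e.g. an HTTPS POST
    this.cache = [];    // offline cache (a real agent would persist this)
    this.online = false;
  }
  track(event) {
    if (this.online) this.send(event);
    else this.cache.push(event);       // cache while disconnected
  }
  setOnline(online) {
    this.online = online;
    if (online) {                      // flush cached telemetry in order
      while (this.cache.length) this.send(this.cache.shift());
    }
  }
}

const sent = [];
const channel = new TelemetryChannel(e => sent.push(e));
channel.track("session.start");        // offline: cached, not lost
channel.setOnline(true);               // reconnect: cache drains first
channel.track("feature.used");         // online: sent immediately
console.log(sent.length);              // → 2, in original order
```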

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring. 
  • When data is transmitted, telemetry is encrypted over the wire. 
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications. 

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control. 
  • They can be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed. 

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low priority thread – never competing for system resources. 
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application. 
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted. 
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible. 
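The "safety valve" idea above can be illustrated with a toy bounded queue. Again, this is a hypothetical sketch (names and thresholds invented), not PreEmptive's actual code:

```javascript
// Illustrative "safety valve": a bounded queue that sheds new packets when
// full and shuts the channel down if pressure persists, so the host app is
// never starved of resources by its own instrumentation.

class BoundedTelemetryQueue {
  constructor(capacity, maxDrops) {
    this.capacity = capacity;  // max packets held at once
    this.maxDrops = maxDrops;  // drops tolerated before full shutdown
    this.items = [];
    this.drops = 0;
    this.shutdown = false;
  }
  offer(packet) {
    if (this.shutdown) return false;     // channel has turned itself off
    if (this.items.length >= this.capacity) {
      this.drops++;                      // drop rather than block the app
      if (this.drops >= this.maxDrops) this.shutdown = true;
      return false;
    }
    this.items.push(packet);
    return true;
  }
}

const q = new BoundedTelemetryQueue(2, 3);
[1, 2, 3, 4, 5].forEach(p => q.offer(p));
console.log(q.items.length, q.shutdown); // → 2 true
```

The design choice worth noticing: when telemetry and the app compete for resources, telemetry always loses – dropped packets are an acceptable cost; a degraded user experience is not.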

What about the PreEmptive Analytics “desired” cross platform feature set? (The features that make analytics worth doing) As I’ve already said, this list is literally an endless one – if I were to list only the categories (let alone the features in each category), it would make an already long post into a very, very long post. So, the desired feature discussion will have to come later… 

What’s the bottom line for “Cross Platform Application Analytics?”

Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit us and request an eval... 

Friday, March 14, 2014

Application Analytics: Security and Privacy for All

In my previous post, I tried to illustrate the distinction between required capabilities and desired capabilities – and how, with application analytics, this distinction is particularly tricky since true requirements are more likely to come from the users of apps versus the developers of apps (the latter being the app analytics customer and the former – app end users – are often completely out of reach from the analytics solution provider).

I also posited that the most common areas where end user requirements drive important app analytics requirements fall into performance, quality, security, and privacy domains.

In this post, I’m going to drill down into security and privacy a bit. Let’s break the application analytics supply chain into four parts: telemetry creation, ingestion, processing, and publication.
  • Telemetry creation (where the app itself or an external agent actually creates the raw telemetry)
  • Ingestion (the steps required to bundle, transport, and deliver the raw data for processing)
  • Processing (parsing, indexing, computing, aggregating, storage, etc. required to transform raw telemetry into publish-ready data) 
  • Publication (the selection, transformation, formatting, and delivery of targeted data to a specific user or external system)

Figure 1: Application Analytics Supply Chain: the dicey part is that application telemetry is typically collected in the context of the “App User’s” world subject to their expectations for privacy, security, etc. and then must be delivered across a great divide into the “App Analytics” user’s world. Application analytics solutions must enforce whatever "app user" policies are required during app usage, then navigate the ingestion process that typically bridges the two worlds, and finally help maintain whatever data governance obligations app users (or their legislatures) require. 

PreEmptive Analytics

To make this “real” and for illustration purposes – here’s a summary of the features offered within PreEmptive Analytics that target these security and privacy challenges. 

Telemetry Creation

Application instrumentation

Activation: No “accidental” or “inadvertent” application monitoring. Application instrumentation is typically accomplished through post-compile injection. The default setting is “off.” In other words, injection must be manually activated avoiding “accidental” application instrumentation. 

Configuration: No data, other than what is explicitly requested by development, is ever transmitted. Once “activated,” each individual data component must then be explicitly identified for data capture. This is true for either the injection pattern or when using the PreEmptive Analytics API directly inside an app’s code. 


Definition: PreEmptive Analytics “opt-in” requires a Boolean “True” value to be set before any data monitoring functionality is initiated (i.e., before any transmission). The default value of this setting is “False” and must be explicitly reset by the application at the start of every application session. There are, in fact, two opt-in settings. The first covers general usage and the second covers exception monitoring.
  • Application usage: opt-in covers session, feature and system data previously identified by development prior to deployment. 
  • Exception monitoring: opt-in covers unhandled, caught and thrown exception data previously identified by development prior to deployment. 
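The default-off, dual-consent behavior just described can be sketched like this. The class and method names here are invented for illustration – this is not the actual PreEmptive Analytics API:

```javascript
// Hypothetical opt-in gate: every tracking call is a no-op until the app
// explicitly flips the relevant consent flag, and both flags default to
// false at the start of every session.

class AnalyticsSession {
  constructor() {
    this.usageOptIn = false;      // consent #1: session/feature/system data
    this.exceptionOptIn = false;  // consent #2: exception data
    this.transmitted = [];
  }
  feature(name) {
    if (this.usageOptIn) this.transmitted.push({ kind: "feature", name });
  }
  exception(message) {
    if (this.exceptionOptIn) this.transmitted.push({ kind: "exception", message });
  }
}

const session = new AnalyticsSession();
session.feature("report.print");         // dropped: no consent yet
session.usageOptIn = true;               // e.g. user accepts the privacy policy
session.feature("report.print");         // now transmitted
session.exception("E_FAIL");             // still dropped: separate consent
console.log(session.transmitted.length); // → 1
```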

Privacy Policy

PreEmptive Analytics permits development to encode a link to the company’s own privacy policy that can be communicated to an end-user prior (or during) a request for an informed consent to opt-in. 


Data transmission

SSL Encryption: by default, all data transmitted from an application to an endpoint is first encrypted before transmission. This can only be overridden by development prior to the release of the application. 

Content Management

Runtime data collected for management and analysis is owned by the development organization. PreEmptive Solutions has no access and no rights to reuse runtime data – either in part or in aggregate.
  • On-premises: Endpoints that are “on-premises” or “client-managed” are completely under the development organization’s control. 
  • Managed service: Data managed by endpoints owned by PreEmptive Solutions are managed solely for our clients’ benefit. There is no other access or use authorized or permitted. 


In addition to PreEmptive’s own component-level authentication, existing application, identity, and role-based security frameworks are respected and enforced, e.g. you cannot provision PreEmptive Analytics for a TFS project without (at least) Admin privileges on that TFS project.

Application security (bonus)

In addition to this thorough, “end-to-end” approach to information security and privacy, PreEmptive Solutions also provides technology and associated controls to minimize the risk of application reverse engineering or tampering – reverse engineering that may disclose exploitable application vulnerabilities, and tampering (post-compile modification) that alters application behavior to introduce exploitable vulnerabilities where none had previously existed. These include: 

Preventative controls

Obfuscation: deters reverse engineering and recompilation. 

Detective controls

Tamper detection and defense: provides real-time defense and alert notification when application tampering (modification post-compile) has been detected.
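
To illustrate how such a detective control can work in principle (a deliberately simplified sketch, not the checks the product actually injects), an application can compare a runtime digest of its own binary against a value recorded at build time:

```python
# Hypothetical sketch: detect post-compile modification by comparing a
# runtime SHA-256 digest of the application image against the digest
# recorded at build time.
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of the given bytes (e.g. the application binary)."""
    return hashlib.sha256(data).hexdigest()

def check_integrity(data: bytes, expected_digest: str, on_tamper) -> bool:
    """Return True if unmodified; otherwise invoke the alert hook and return False."""
    actual = digest(data)
    if actual != expected_digest:
        on_tamper(actual)   # real-time defense / alert notification hook
        return False
    return True

BUILD_TIME_DIGEST = digest(b"original binary image")   # recorded at build
check_integrity(b"original binary image", BUILD_TIME_DIGEST, print)   # unmodified
check_integrity(b"patched binary image", BUILD_TIME_DIGEST, print)    # tampered: alerts
```

In practice the check itself must also be protected (obfuscated and duplicated), since a naive single check is the first thing an attacker removes.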

Taken as a whole, PreEmptive Analytics is designed to provide a complete and comprehensive application analytics security and privacy solution, built to encode and enforce the wide variety of (ever-evolving) application and information security and privacy policies, mandates, and controls.

As application analytics evolves beyond tracking marketing funnels on the internet, the entire application analytics pipeline will (indeed, must) be governed by the same security and privacy policies as the applications it monitors and the business and operational content your organization manages.

Thursday, March 13, 2014

Application Analytics: fulfilling your every desire

We need to take care in how we use the term “requirement” in development. Development requirements are really carefully crafted descriptions (distillations) of the “behaviors we care about.” They get assigned priorities and scheduled and (typically) only a small number of requirements are actually implemented in any given app release.

Development requirements are usually NOT required

Here’s my beef: development requirements are usually NOT required – they are, in fact, desired. …development’s goal is to fulfill desires – but desires are emotional, and so we try to use requirements in their place (requirements are, for the most part, concrete and objectively measured). …of course, we can nail “requirements” and still build the wrong thing and fail – so development cannot escape its true lot in life: to build the most desirable apps possible. App analytics suppliers are no different – we are subject to the same laws as every other development niche (it’s just that our users are all app stakeholders of some sort, with their own users who are not our users).

A silent killer of application analytics implementations

…and herein lies a silent killer of application analytics implementations (and what sets this specialty apart from other analytics categories): app stakeholders care more about app user satisfaction than the app users themselves do.

Think about it: if app users don’t like an app, they just stop using it (or go with a competitor). …but when users drop your app, you’re out of a job. So, before an app analytics provider can even think about fulfilling their users’ desires, they must first ensure (and prove) that they “do no harm” to this murky, once-removed extended user community; and this is a genuine “requirement” in the truest sense of the word.

Why is this a “silent killer” that exclusively stalks application analytics implementations? Because application instrumentation (the generator of raw telemetry, and step one of any app analytics implementation) must run either inside a client’s application or inside the same runtime “looking in” – application instrumentation can never be apart FROM runtime applications; it is unavoidably a part OF each runtime application. …and application user desires are not the desires of application analytics users.

…so, as the great Stevie Wonder wrote in Songs in the Key of Life (“As”): “…make sure when you say you’re in it but not of it, you’re not helping to make this earth a place sometimes called Hell.”

It is a true requirement that app analytics instrumentation cannot, in any way, impact an app user’s satisfaction. If (and only if) this requirement is satisfied will an app analytics solution be given the opportunity to fulfill its users’ desires (desires like “show me feature usage” or “send me exceptions”).

How can instrumentation fail? In lots of ways, sadly – but the most common failures revolve around performance, stability, security, and privacy at runtime (recall that the expectations that must be met are those of the app user, NOT the app stakeholder).
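
One common mitigation for the stability risk can be sketched as follows (a hypothetical illustration; production agents also bound the time and memory a telemetry call may consume): wrap every instrumentation call so that a failure inside analytics can never surface to the app user.

```python
# Hypothetical sketch of "do no harm" instrumentation: any exception raised
# inside a telemetry call is swallowed so it cannot crash the host app.
import functools

def fail_safe(track_fn):
    """Decorator: a failed telemetry call returns None instead of raising."""
    @functools.wraps(track_fn)
    def wrapper(*args, **kwargs):
        try:
            return track_fn(*args, **kwargs)
        except Exception:
            return None   # never let an analytics failure reach the app user
    return wrapper

@fail_safe
def track(event_name: str):
    raise RuntimeError("analytics backend unreachable")   # simulated failure

track("feature_used")   # failure swallowed: the application keeps running
```

The same principle extends to performance: telemetry should be queued and sent off the UI thread, so that a slow or unreachable endpoint degrades analytics, never the app.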

Stay tuned – I’ll be posting my suggestions on how to distinguish what you may require from what you may desire.