Wednesday, October 8, 2014

Welcome Xamarin Insights (seeing the forest through the trees)

First, let me state for the record that I am a huge fan of Xamarin - when I say this, I mean to include both their great technology and their people (I've only met a few, but they've never disappointed). So with that out of the way, I listened with great interest as they announced Xamarin Insights at their user group this morning. As someone with a personal stake in the broad category of application analytics, you can imagine that when a company like Xamarin enters my space, they're going to get my undivided attention.

My first reaction was that the name "Xamarin Insights" sounded a lot like Microsoft's "Application Insights" and as I watched the presentation and then reviewed the web content, the similarities grew even stronger.

Of course, if you're a developer on either of the (*)Insights teams, you're going to be mildly offended by this last statement as you no doubt see STARK differences - and, at some important level, you're probably right. But I'm not on either dev team; I'm part of the PreEmptive Analytics team, and so this is the area where I see the "STARK differences." That has prompted me to populate the following table comparing all three: Xamarin Insights, Application Insights, and PreEmptive Analytics.

I've tried to focus on material differences that are most likely to make one approach more effective than the other two - and to make this crystal clear - there are scenarios where each option is better suited than the other two - so understanding YOUR requirements is the first and MOST IMPORTANT step in selecting your optimal analytics solution.

Targeted appeal
  • Xamarin Insights and Application Insights: Enterprises and ISVs targeting modern platforms
  • PreEmptive Analytics: Enterprises and ISVs with established app portfolios driving large, regulated, and secure operations extending into modern/mobile

Release status
  • Xamarin Insights and Application Insights: Free with pricing TBD
  • PreEmptive Analytics: Licensed by product component

Applications supported
  • Xamarin Insights: API for C#/F# supporting native Xamarin targets (end-user apps only)
  • Application Insights: API for C/C#/F# and JavaScript supporting Microsoft targets (MODERN client-side AND server-side apps/components)
  • PreEmptive Analytics: All apps supported by (*)Insights PLUS C, C++, Java, traditional .NET, middle-tier, on-premises, etc.

Endpoint/analytics engines and portal
  • Xamarin Insights: Multi-tenant hosted by Xamarin
  • Application Insights: Multi-tenant hosted by Microsoft
  • PreEmptive Analytics: On-premises or hosted - hosting can be by a 3rd party or by PreEmptive

Telemetry captured
  • Xamarin Insights: Events: atomic mobile & page. Exceptions: unhandled and caught. Custom: strings. System and performance: mobile only
  • Application Insights: Events: atomic mobile & page. Exceptions: unhandled. Custom: strings. System and performance: modern only
  • PreEmptive Analytics: Events: all (*)Insights PLUS arbitrary workflow and in-code spans. Exceptions: unhandled, caught, and thrown. Custom: strings plus serialized data structures from multiple sources. System and performance: all runtimes and surfaces

Supported organizations
  • Xamarin Insights: Xamarin devs ONLY
  • Application Insights: Microsoft-based devs ONLY
  • PreEmptive Analytics: All devs supported by (*)Insights PLUS all other enterprise, ISV, and embedded app devs

Data dimensions
  • Xamarin Insights: Only data originated inside an app can be analyzed
  • Application Insights: Data inside an app AND data accessible from within an Azure account can be analyzed
  • PreEmptive Analytics: Any data source available within an enterprise or via external services can be mashed up to enrich telemetry

Opt-in/out policy enforcement
  • PreEmptive Analytics: Yes

Offline caching
  • PreEmptive Analytics: Yes

Extensible indexing and UI on a role-by-role basis (app owner, dev mgr, etc.)
  • PreEmptive Analytics: Yes

Injection of instrumentation for managed code
  • PreEmptive Analytics: Yes

User and organization metrics
  • PreEmptive Analytics: Yes, including integration with Enterprise credentials

Automatic creation of TFS work items based upon business rules and patterns
  • PreEmptive Analytics: Yes

Embedded inside Visual Studio
  • Application Insights: Starting with VS2013/14
  • PreEmptive Analytics: Since 2010

One thing I know for sure - no one will be building applications without analytics in the next few years. Figuring this out for YOUR dev requirements will be critical soon enough - it's not a question of IF, only WHEN. So, if applications are an important part of your life, this is something you cannot postpone much longer (it may already be too late!) Enjoy!

Tuesday, April 22, 2014

Cross Platform Application Analytics: Adding meat to pabulum

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.

We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe it (the cliche is Eskimos and snow) - and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. ...and "analytics," "platform," and even "application" all fall into this latter category.
What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I'm going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we've just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform. 

PreEmptive Analytics

At PreEmptive, we’d like to think that we've fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those features that bring value to our users, like feature tracking, and
  2. Required, i.e. those features that, if they misbehave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?

The list of “desired features” can literally be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature” and the project is dead in the water. Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set

Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android). 
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.    

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. After automatically storing data when your application is offline, it will automatically stream the telemetry up once connectivity is reestablished. 

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints. 

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring. 
  • When data is transmitted, telemetry is encrypted over the wire. 
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications. 

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control. 
  • Be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed. 

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low priority thread – never competing for system resources. 
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application. 
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted. 
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible. 

What about the PreEmptive Analytics “desired” cross platform feature set? (The features that make analytics worth doing.) As I’ve already said, this list is literally endless – if I were to list only the categories (let alone the features in each category), it would make an already long post into a very, very long post. So, the desired feature discussion will have to come later… 

What’s the bottom line for “Cross Platform Application Analytics?”

Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit and request an eval... 

Friday, March 14, 2014

Application Analytics: Security and Privacy for All

In my previous post, I tried to illustrate the distinction between required capabilities and desired capabilities – and how, with application analytics, this distinction is particularly tricky since true requirements are more likely to come from the users of apps versus the developers of apps (the latter being the app analytics customer and the former – app end users – are often completely out of reach from the analytics solution provider).

I also posited that the most common areas where end user requirements drive important app analytics requirements fall into performance, quality, security, and privacy domains.

In this post, I’m going to drill down into security and privacy a bit. Let’s break the application analytics supply chain into four parts: telemetry creation, ingestion, processing, and publication.
  • Telemetry creation (where the app itself or an external agent actually creates the raw telemetry)
  • Ingestion (the steps required to bundle, transport, and deliver the raw data for processing)
  • Processing (parsing, indexing, computing, aggregating, storage, etc. required to transform raw telemetry into publish-ready data) 
  • Publication (the selection, transformation, formatting, and delivery of targeted data to a specific user or external system)

Figure 1: Application Analytics Supply Chain: the dicey part is that application telemetry is typically collected in the context of the “App User’s” world subject to their expectations for privacy, security, etc. and then must be delivered across a great divide into the “App Analytics” user’s world. Application analytics solutions must enforce whatever "app user" policies are required during app usage, then navigate the ingestion process that typically bridges the two worlds, and finally help maintain whatever data governance obligations app users (or their legislatures) require. 

PreEmptive Analytics

To make this “real” and for illustration purposes – here’s a summary of the features offered within PreEmptive Analytics that target these security and privacy challenges. 

Telemetry Creation

Application instrumentation

Activation: No “accidental” or “inadvertent” application monitoring. Application instrumentation is typically accomplished through post-compile injection. The default setting is “off.” In other words, injection must be manually activated avoiding “accidental” application instrumentation. 

Configuration: No data, other than what is explicitly requested by development, is ever transmitted. Once “activated,” each individual data component must then be explicitly identified for data capture. This is true for either the injection pattern or when using the PreEmptive Analytics API directly inside an app’s code. 


Definition: PreEmptive Analytics “opt-in” requires a Boolean “True” value to be set before any data monitoring functionality is initiated (which is prior to transmission). The default value of this setting is “False” and must be explicitly reset by the application at the start of every application session. There are, in fact, two opt-in settings. The first covers general usage and the second covers exception monitoring.
  • Application usage: opt-in covers session, feature and system data previously identified by development prior to deployment. 
  • Exception monitoring: opt-in covers unhandled, caught and thrown exception data previously identified by development prior to deployment. 
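The opt-in mechanics described above can be sketched as a session-scoped gate. This is an illustrative stand-in, not PreEmptive's actual types or defaults; the only grounded details are that both flags default to false, must be set at the start of every session, and that usage and exception monitoring are gated separately:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the two-flag opt-in gate: both flags default to
// false and must be explicitly set to true each session before any
// telemetry is recorded or transmitted.
class AnalyticsOptIn
{
    public bool UsageOptIn { get; set; } = false;      // session, feature, system data
    public bool ExceptionOptIn { get; set; } = false;  // unhandled, caught, thrown exceptions

    private readonly List<string> _queue = new List<string>();

    public void TrackFeature(string name)
    {
        if (!UsageOptIn) return;            // dropped silently when not opted in
        _queue.Add("feature:" + name);
    }

    public void ReportException(Exception ex)
    {
        if (!ExceptionOptIn) return;        // exception opt-in is independent of usage
        _queue.Add("exception:" + ex.GetType().Name);
    }

    public int PendingMessages => _queue.Count;
}

class Program
{
    static void Main()
    {
        var analytics = new AnalyticsOptIn();
        analytics.TrackFeature("Checkout");            // ignored: default is opt-out
        analytics.UsageOptIn = true;                   // e.g. after informed consent
        analytics.TrackFeature("Checkout");            // now recorded
        analytics.ReportException(new InvalidOperationException()); // still ignored
        Console.WriteLine(analytics.PendingMessages);  // prints 1
    }
}
```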

Privacy Policy

PreEmptive Analytics permits development to encode a link to the company’s own privacy policy that can be communicated to an end-user prior (or during) a request for an informed consent to opt-in. 


Data transmission

SSL Encryption: by default, all data transmitted from an application to an endpoint is first encrypted before transmission. This can only be overridden by development prior to the release of the application. 

Content Management

Runtime data collected for management and analysis is owned by the development organization. PreEmptive Solutions has no access and no rights to reuse runtime data – either in part or in aggregate.
  • On-premises: Endpoints that are “on-premises” or “client-managed” are completely under the development organization’s control. 
  • Managed service: Data managed by endpoints owned by PreEmptive Solutions is managed solely for our clients’ benefit. There is no other access or use authorized or permitted. 


In addition to PreEmptive's own component-level authentication, existing application, identity, and role-based frameworks are respected and enforced; e.g., you cannot provision PreEmptive Analytics for a TFS project without (at least) Admin privileges for that TFS project.

Application security (bonus)

In addition to this thorough, "end-to-end" approach to information security and privacy, PreEmptive Solutions also provides technology and associated controls to minimize the risk of application reverse engineering or tampering - risks that can lead to the disclosure of exploitable application vulnerabilities, or to post-compile modification of applications to alter their behavior (introducing exploitable vulnerabilities where none had previously existed). These include: 

Preventative controls

Obfuscation: prevents reverse engineering and recompilation. 

Detective controls

Tamper detection and defense: provides real-time defense and alert notification when application tampering (modification post-compile) has been detected.

Taken as a whole, PreEmptive Analytics is designed to provide a complete and comprehensive application analytics security and privacy solution – built to encode and enforce the wide variety (and ever-evolving) application and information security and privacy policies, mandates and controls.

As application analytics evolves beyond tracking marketing funnels on the internet, the entire application analytics pipeline will (must) be governed by the same security and privacy policies as the applications they are monitoring and the business and operational content that your organization is managing.

Thursday, March 13, 2014

Application Analytics: fulfilling your every desire

We need to take care in how we use the term “requirement” in development. Development requirements are really carefully crafted descriptions (distillations) of the “behaviors we care about.” They get assigned priorities and scheduled and (typically) only a small number of requirements are actually implemented in any given app release.

Development requirements are usually NOT required

Here’s my beef: development requirements are usually NOT required – they are, in fact, desired. Development’s goal is to fulfill desires – but desires are emotional, and so we try to use requirements in their place (requirements are, for the most part, concrete and objectively measured). Of course, we can nail “requirements” and still build the wrong thing and fail – so development cannot escape its true lot in life: to build the most desirable apps possible. App analytics suppliers are no different – we are subject to the same laws as every other development niche (it’s just that our users are all app stakeholders of some sort, with their own users that are not our users).

A silent killer of application analytics implementations

…and herein lies a silent killer of application analytics implementations (and what sets this specialty apart from other analytics categories): app stakeholders care more about app user satisfaction than the app users themselves do.

Think about it, if an app user doesn’t like an app, they just stop using it (or go with a competitor). …but when users drop your app, you’re out of a job. So – before an app analytics provider can even think about fulfilling their users’ desires – they must first ensure (and prove) that they “do no harm” to this murky, extended user community once removed; and this is a genuine “requirement” in the truest sense of the word.

Why is this a “silent killer” that exclusively stalks application analytics implementations? Because application instrumentation (the generator of raw telemetry – step one of any app analytics implementation) must either run inside a client’s application or inside the same runtime “looking in” – application instrumentation can never be apart FROM the runtime application; it is unavoidably a part OF each runtime application. …and application user desires are not the desires of application analytics users.

…so, as the great Stevie Wonder has written in Songs in the Key of Life (As), “…make sure when you say you're in it but not of it You're not helping to make this earth a place sometimes called Hell.”

It is a true requirement that app analytics instrumentation cannot, in any way, impact an app user’s satisfaction. If (and only if) this requirement is satisfied, will an app analytics solution be given the opportunity to fulfill its users’ desires (desires like “show me feature usage” or “send me exceptions”).

How can instrumentation fail? Lots of ways sadly – but the most common revolve around performance, stability, security, and privacy at runtime (recall that the expectations that must be met are those of the app user – NOT the app stakeholder).

Stay tuned – I’ll be posting my suggestions on how to distinguish what you may require from what you may desire.

Friday, December 6, 2013

PreEmptive Analytics Supports Xamarin Developers

A first in “the last frontier” of application analytics instrumentation

Xamarin lets a developer write in C# and then generate native iOS, Android, Windows Phone, and Win8 applications. With PreEmptive Analytics API for Xamarin, the PreEmptive Analytics API (C#) can be consumed by Xamarin to produce fully instrumented native Android, iOS, Windows Phone, and WinRT apps.

PreEmptive’s application instrumentation (the portion of our analytics solution that collects usage and exceptions and transmits the resulting application telemetry for analysis) already covers virtually every contemporary runtime (.NET, Win8, Windows Phone, JavaScript, Java, Android, iOS, and C++), BUT, for each runtime supported, our instrumentation must be introduced through post-compile injection directly into the assembly/executable (very cool in its own right) and/or via a PreEmptive API.

However, PreEmptive Analytics Instrumentation for Xamarin establishes an important precedent – it is the first application analytics instrumentation API built to work within an application generator rather than the target runtime itself. Like the rest of the Xamarin experience, application instrumentation can be a "code once" and "deploy to a heterogeneous set of optimized native apps" many times experience... 

Application Instrumentation: a cornerstone of application analytics


In addition to data analysis, Application Analytics solutions must provide specialized instrumentation and telemetry transmission functionality. General purpose analytics solutions are typically built to “Ingest everything” providing “adaptors” that translate external data sources into a proprietary analytics framework. While flexible, this approach is predicated on the assumption that a safe and reliable means to collect and transport raw data is available; with application analytics, this is rarely the case.

In addition to the functional requirements to capture the right kinds of runtime telemetry, an application instrumentation solution must meet a host of performance, privacy, quality, and security requirements as well – requirements that vary wildly by industry, use case, and target audience.

Incomplete instrumentation solutions force development to instrument a single app multiple times or omit valuable telemetry from their analytics solution.

PreEmptive Analytics instrumentation is optimized to efficiently, securely, and reliably capture application telemetry without compromising user experience, privacy or compliance obligations.

PreEmptive Analytics Instrumentation for Xamarin

For more information, visit or email – NOTE – while registration is required, the API itself is free to download and use.

Is there a catch? Not really - but if you really want to avoid licensing fees entirely, you will want to install the Community Edition of PreEmptive Analytics for TFS (included with all SKUs of Visual Studio & TFS other than Express). You will need this to serve as the endpoint that receives your application telemetry. For a general overview of this SKU and Application Analytics in general, check out my article inside MSDN's Visual Studio 2013 ALM site: Application Analytics: What Every Developer Should Know

If you're interested in scaled-up capabilities, you may want to consider PreEmptive's commercial offerings. In EVERY case, these endpoints can be installed on-premises and are always development-managed (PreEmptive can't touch your data).

Here are a few more technical details around the new API;

Adding Analytics

REMEMBER – code once in C# and have all of this functionality manifest inside your native iOS and Android apps! 

Tracking Feature Use

The most common usage of analytics is to track which features are popular among users and how they interact with them. You can indicate that a feature was used by using the FeatureTick method. You can track the duration of a feature's use by using FeatureStart and FeatureStop. 
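Roughly, the calling pattern looks like this. The FeatureTick/FeatureStart/FeatureStop names come from the API as described above, but the Session stand-in class and the method signatures are assumptions for illustration, not the real PreEmptive types:

```csharp
using System;
using System.Collections.Generic;

// Stand-in analytics session illustrating the feature-tracking pattern;
// the class and signatures here are assumptions, only the method names
// mirror the API surface described above.
class Session
{
    public List<string> Sent { get; } = new List<string>();

    public void FeatureTick(string name) => Sent.Add($"tick:{name}");    // single occurrence
    public void FeatureStart(string name) => Sent.Add($"start:{name}");  // begin a timed span
    public void FeatureStop(string name) => Sent.Add($"stop:{name}");    // end the span; duration is derived
}

class Program
{
    static void Main()
    {
        var session = new Session();

        session.FeatureTick("Menu.About");     // "this feature was used"

        session.FeatureStart("PdfExport");     // "how long did this feature take?"
        // ... do the actual export work here ...
        session.FeatureStop("PdfExport");

        Console.WriteLine(session.Sent.Count); // prints 3
    }
}
```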

Sending Custom Data

You can send custom data to the configured endpoint with any type of message. To send the data, you construct an object that holds key-value pairs. One common use case is to report the arguments a method was called with and what the method will return. 
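A minimal sketch of that key-value pattern; the Telemetry helper and the key names below are illustrative assumptions, not the actual PreEmptive API:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: custom data travels with a message as key-value
// pairs; here, the arguments a method was called with plus its result.
static class Telemetry
{
    public static Dictionary<string, string> BuildExtendedKeys(int width, int height, bool ok)
    {
        return new Dictionary<string, string>
        {
            ["arg.width"]  = width.ToString(),
            ["arg.height"] = height.ToString(),
            ["result.ok"]  = ok.ToString()
        };
    }
}

class Program
{
    static void Main()
    {
        // In a real app these pairs would be attached to an outgoing message.
        var keys = Telemetry.BuildExtendedKeys(1024, 768, true);
        Console.WriteLine(keys.Count); // prints 3
    }
}
```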

Reporting Exceptions

The API provides a simple way to report exceptional conditions in your application. Exception reports can be used to track exceptions raised by your application or by third-party software. A report can also carry user-added information to aid support staff. And of course you can always add Extended Key information to track application state. 
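The caught-exception pattern looks roughly like this; the Session stand-in and the ReportException signature (user comment plus extended keys) are assumptions about the API shape, not the real types:

```csharp
using System;
using System.Collections.Generic;

// Stand-in showing the caught-exception reporting pattern described above.
class Session
{
    public List<string> Reports { get; } = new List<string>();

    public void ReportException(Exception ex, string userComment,
                                Dictionary<string, string> extendedKeys)
    {
        Reports.Add($"{ex.GetType().Name}|{userComment}|keys={extendedKeys.Count}");
    }
}

class Program
{
    static void Main()
    {
        var session = new Session();
        try
        {
            throw new InvalidOperationException("document is locked");
        }
        catch (InvalidOperationException ex)
        {
            // attach user-supplied context plus application state for support staff
            session.ReportException(ex, "happened while saving",
                new Dictionary<string, string> { ["doc.pages"] = "42" });
        }
        Console.WriteLine(session.Reports.Count); // prints 1
    }
}
```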

Off-line Storage

Your application is not required to always have network connectivity. By default, the API will store messages locally when the configured endpoint cannot be reached. The messages will automatically be sent and removed from offline storage once the endpoint can be reached.
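The store-then-drain behavior can be sketched generically. This is not PreEmptive's implementation (which persists to device storage); an in-memory queue just shows the control flow:

```csharp
using System;
using System.Collections.Generic;

// Illustrative offline-caching sketch: while the endpoint is unreachable,
// messages accumulate locally; once connectivity returns they are drained
// in order and removed from the cache.
class OfflineCache
{
    private readonly Queue<string> _stored = new Queue<string>();
    public List<string> Delivered { get; } = new List<string>();
    public bool EndpointReachable { get; set; }

    public void Send(string message)
    {
        if (EndpointReachable)
        {
            Flush();                    // deliver any backlog first, preserving order
            Delivered.Add(message);
        }
        else
        {
            _stored.Enqueue(message);   // cache locally while offline
        }
    }

    public void Flush()
    {
        while (EndpointReachable && _stored.Count > 0)
            Delivered.Add(_stored.Dequeue());
    }
}

class Program
{
    static void Main()
    {
        var cache = new OfflineCache { EndpointReachable = false };
        cache.Send("session.start");
        cache.Send("feature:Export");   // both cached while offline

        cache.EndpointReachable = true; // connectivity restored
        cache.Flush();
        Console.WriteLine(cache.Delivered.Count); // prints 2
    }
}
```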

Message Queuing and Transmission

Messages are not immediately sent to the configured endpoint. The API queues messages and sends them either when a certain amount of time has elapsed, or when a number of messages have accumulated. On platforms where transmission may have a performance impact, such as on mobile devices, the transmission of messages can be directly controlled by your program.
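A sketch of that time-or-count flush policy; the thresholds, names, and explicit-flush hook below are illustrative assumptions, not the API's actual defaults:

```csharp
using System;
using System.Collections.Generic;

// Sketch of queue-and-transmit: messages are held until either a time
// interval has elapsed or a message count is reached; Flush is also
// callable directly, e.g. under program control on mobile devices.
class MessageQueue
{
    private readonly List<string> _pending = new List<string>();
    private DateTime _lastFlush;
    private readonly TimeSpan _maxAge;
    private readonly int _maxCount;
    public int FlushCount { get; private set; }

    public MessageQueue(TimeSpan maxAge, int maxCount)
    {
        _maxAge = maxAge;
        _maxCount = maxCount;
        _lastFlush = DateTime.UtcNow;
    }

    public void Enqueue(string message, DateTime now)
    {
        _pending.Add(message);
        if (_pending.Count >= _maxCount || now - _lastFlush >= _maxAge)
            Flush(now);
    }

    public void Flush(DateTime now)
    {
        if (_pending.Count == 0) return;
        FlushCount++;                    // a real queue would transmit here
        _pending.Clear();
        _lastFlush = now;
    }
}

class Program
{
    static void Main()
    {
        var q = new MessageQueue(TimeSpan.FromSeconds(30), maxCount: 3);
        var t = DateTime.UtcNow;
        q.Enqueue("a", t);
        q.Enqueue("b", t);
        q.Enqueue("c", t);               // hits the count threshold, so it flushes
        Console.WriteLine(q.FlushCount); // prints 1
    }
}
```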


Thursday, November 7, 2013

What can Jay-Z teach us about application analytics?

If you want to move your audience, then a whole lot actually…

The gold standard for analytics is “actionable insight;” how much smarter, faster, or efficient do we become when the right people get the right information in the right format at the right time?

General purpose analytics solutions are typically built to ingest anything and everything. “Adapters” translate data sources into a common (proprietary) analytics framework – and then the slicing and dicing begins! While obviously flexible, this approach only works if users have a safe and reliable means to collect (and deliver) raw data into their systems; with application analytics, this is rarely the case. 

Recording applications “in the wild” is not an easy or simple task. In addition to the functional requirements to capture the right kinds of runtime telemetry, application instrumentation must meet a host of performance, privacy, quality, and security requirements as well – requirements that vary wildly by industry, use case, and target audience. …and, the demand for high fidelity application analytics has never been greater; you can thank the adoption of feedback driven-development practices coupled with the operational complexity of mobile and cloud computing plus the ever-evolving concerns around privacy and security for that.

So what’s a development team to do? Well, it turns out that there’s nothing new about having to record complex real-world events and then package them up to inspire and move audiences – media moguls and hit makers have been doing this all along!

Developers, if you know you should be including analytics inside your application development process, I recommend that you take a page from the recording industry – it turns out they know a little something about the complexities of capturing user behavior across heterogeneous devices and in diverse settings (the only big difference is that they call their users “musicians”).

I've taken the liberty of condensing a post from a site dedicated to teaching the art and business of audio production and mapping it to the patterns and practices of effective application analytics implementation. You can see the original post at the recording process if you want to check my work.

The infographic below maps each step in "the recording process" to its app analytics analog. I've underlined key points in the original post and added my own take-away.

People will tell you that new technology changes everything – for me, this is just one more concrete example proving just the opposite.

Monday, September 16, 2013

Mobile development takes root; application analytics go mainstream

We’ve just finished up another survey tapping ~8,000 developers; mostly (although definitely not exclusively) of the .NET variety – and I think there’s little room for doubt; the rise of mobile and modern apps is having a profound impact on the way developers work and the tools they use.

What a difference a year makes

We did a similar survey in September 2012 (Who cares about application analytics? Lots of people for lots of reasons) and, even then, the interest in analytics was obvious – but interest had not yet translated into action. 

In Sept of 2012, we reported that 77% of development and their management had identified “insight into production application usage” as influential, important or essential to their work, and 71% identified “near real-time notification of unhandled, caught, and/or thrown exceptions” in the same way.

…BUT, at the same time, only 30% indicated that they were doing any kind of analytics in practice (exception reporting, feature tracking, etc.).
“More people believe that the world is flat than doubt the positive role of application analytics on development.”
Today, the 77% and 71% of developers who “got the value of analytics” have become a solid 100% and 99.5% respectively. (For those that don’t do surveys, you have to appreciate that a 100% opinion is virtually impossible to find – you’d have a hard time getting 100% consensus on the shape of the planet (round or flat) or even the role that aliens play in picking Super Bowl winners – are they pro-AFC or pro-NFC?)

Even more impressive is the rise of actual use of analytics. The 30% of development teams that claimed to use some sort of analytics has, in just one year, ballooned to 62%.

The rise of mobile development

Mobile devices have unique capabilities (accelerometer, augmented reality, gyroscope, camera/scanner, gesture recognition, GPS and navigation…) that drive unique development requirements which, in turn, spawn new development patterns and practices – and one of the most notable (in my opinion anyway) is the expectation that some form of application analytics always be included.

This is worth saying again; in traditional PC apps, adding analytics is the exception, not the rule – in mobile apps, the situation is reversed; embedding analytics is the norm.  

This is the other major shift in our year-over-year survey results. In 2012, only 25% of the development teams reported that they were developing mobile apps (iOS, Android, …) – in 2013, that number has more than doubled to 56%. Is it a coincidence that the rise of analytics use is proportionate to the rise in mobile development?

Analytics go mainstream

For analytics to “go mainstream,” mobile analytics development patterns need to be applied (and adapted) beyond narrow consumer-centric scenarios (as lucrative as those scenarios may be) to include line-of-business and “enterprise” apps (with all of the attendant infrastructure, IT governance, and data integration requirements that this implies).  …and we’re seeing evidence of this too.

94% of respondents are building mobile apps targeting consumers, BUT 40% are also deploying apps “used by employees” to “support a larger business,” e.g. enterprise apps!

65% of enterprise mobile app dev teams (essentially the same percentage as their consumer-centric counterparts) also report using (some form of) analytics.

Analytics: one size fits all?

Of course not – the specialization of application analytics technologies is another inevitable outcome of all of this change – and developers are on the front-lines trying to figure all of this out.

The following chart lists the analytics technologies our respondents have reported using – Google’s (and to a lesser degree, Flurry’s) prominence should come as no surprise. …but what’s the deal with the homegrown category?

Developers "doing it themselves" would strongly suggest that the reigning champions of consumer-centered mobile analytics are failing to meet a growing set of analytics requirements.

Is it a coincidence that the homegrown and PreEmptive analytics adoption rates map so closely to the enterprise mobile app market share listed above (40%)?

These tools, they are a-changing

Analytics is not the only development tool category undergoing change and reinvention. When asked to enter specialized mobile development tools, responses included both “the familiar” and “the brand new.” (Note, this was not a multiple choice – this was an open text box where anything – or nothing – could be entered)

The familiar: Visual Studio was cited as a “specialized toolset” by 24.6% of those listing at least one specialized mobile app development tool. Of the 49 unique tools cited, this was the #1 response – which should give the Visual Studio product team some satisfaction, as they are clearly establishing Visual Studio as something more than just a .NET-centric dev environment.

The brand new: Xamarin, the cross platform mobile app development platform, was the most common new – and/or truly mobile-specific – toolset (they released a major refresh of their solution in 2013). Xamarin was cited by 9.5% of those listing at least one specialized mobile app development tool. 

(Are you using Xamarin? Contact me if you’d like to learn more about our soon to be released analytics integration with Xamarin – or visit the PreEmptive website if you’re reading this during or after Q4/2013)

The complete list of tools mentioned at least once includes:

While Visual Studio was cited most often, relative newcomer, Xamarin, is already making its mark.

Game over? Are you kidding!? We haven’t even figured out the rules yet…

Have development organizations figured out how they’re going to tackle current and future mobile development requirements? (That, my friends, is what we call a rhetorical question)

The rise and assimilation of mobile devices is far, far from over and, sadly, I would suggest that picking new tools and expanding technical skillsets is the least of a development organization’s worries – grappling with entirely new sets of operational, legal, social, security, and privacy obligations (that are themselves changing and often inconsistent) poses (in my view) the most serious risk (a.k.a. opportunity) for today’s development shop.

…and those that lack a sense of urgency around these issues, that take the posture of waiting until these issues come to them, are in for a world of hurt.

For example, 

Personally Identifiable Information (PII)
  • 15% of respondents that collect personally identifiable information (PII) do not offer their users a way to opt-out 
  • 18% that collect PII do not offer a link to their privacy policy (there was only a 6% overlap between these two groups) 
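Taking the two bullets above at face value – and reading the stated 6% overlap as a share of the same respondent base, which is my assumption – simple inclusion-exclusion gives the share of PII-collecting respondents missing at least one of the two safeguards:

```python
# Back-of-the-envelope check on the PII numbers above (inclusion-exclusion).
no_opt_out = 0.15      # collect PII but offer no opt-out
no_policy_link = 0.18  # collect PII but offer no privacy policy link
overlap = 0.06         # respondents falling in both groups (assumed same base)

missing_at_least_one = no_opt_out + no_policy_link - overlap
print(f"{missing_at_least_one:.0%}")  # → 27%
```

In other words, roughly a quarter of PII-collecting respondents are missing at least one basic privacy mechanism.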

To know that you’re collecting PII and to not provide these mechanisms is a serious omission (both from a development and an operations perspective) – and this is the easy stuff! This question also presumes that developers are using the most up-to-date and appropriate PII definition – a stretch to be sure.

Regulatory and Compliance

For the 29.9% of respondents who indicated that their apps have “regulatory or compliance requirements,” the obligations are, by their very nature, more complex, ambiguous, and fluid.
  • 36.6% of respondents whose apps are subject to compliance and/or regulatory oversight do not offer their users a way to opt-out 
  • 16.7% of respondents whose apps are subject to compliance and/or regulatory oversight do not offer a link to their privacy policy.

…and what about collecting application usage information?
  • 41.7% of respondents whose apps are subject to compliance and/or regulatory oversight use Google Analytics or Flurry – analytics providers whose business model is predicated on harvesting and monetizing usage telemetry! 

Have these development organizations reconciled their regulatory obligations with Google’s and Flurry’s usage terms or privacy policies? 

In confusion, there’s opportunity

…and I think everyone can agree – mobile application development is full of opportunity.

Tuesday, September 10, 2013

(Zinfandel + BBQ = $$$) - I told you so

Back in February of 2011, I posted Riddle me this! Where can French, Italians, and Germans all agree? focusing on how a collection of early Windows Phone developers were leveraging analytics; the 10 apps included games, media apps, and a foodie app, VinoMatch, that paired food and wine. In this last example, our analytics tracked user behaviors (which foods users chose) and which wines they selected during the pairing.
Analysis of food selection by users' nationality showing Italians' special interest in BBQ

I was surprised to learn of a) Italians’ interest in pairing wine with BBQ and b) the implied potential to market Zinfandel to Italians as an American wine for BBQ (because Zinfandel was bred from a cheap table-variety Italian grape, Italians have typically been a hard sell).

So… imagine my surprise now in 2013, as I see a series of targeted marketing campaigns with exactly this message (from multiple wineries). I wonder how many hundreds of thousands of dollars in market research these guys spent when all they had to do was instrument an app?!
  Cin Cin!

Monday, September 9, 2013

Your phone can be a very scary place

Mobile apps are changing our social, cultural, and economic landscapes – and, with the many opportunities and perks that these changes promise, comes an equally impressive collection of risks and potential exploits.

This post is probably way overdue – it’s an update (supplement, really) to an article I wrote for The ISSA Journal on Assessing and Managing Security Risks Unique to Java and .NET back in ’09. The article laid out the reverse engineering and tampering risks stemming from the use of managed code (Java and .NET). The technical issues were really secondary – what needed to be emphasized was the importance of having a consistent and rational framework to assess the materiality (relative danger) of those risks (piracy, IP theft, data engineering…).

In other words, the simple fact that it’s easy to reverse engineer and tamper with a piece of managed code does not automatically lead to a conclusion that a development team should make any moves to prevent that from happening. The degree of danger (risk) should be the only motivation (justification) to invest in preventative or detective measures; and, by implication, risk mitigation investments should be in proportion to those risks (low risk, low investment).

Here’s a graphic I used in ’09 to show the progression from managed apps (.NET and Java) to the risks that stem naturally from their use.
Risks stemming from the use of Java and .NET circa 2009

Managed code risks in the mobile world

Of course, managed code is also playing a central role in the rise of mobile computing as well as the ubiquitous “app marketplace,” e.g. Android and, to a lesser degree, Windows Phone and Windows RT – and, as one might predict, these apps are introducing their own unique cross-section of potential risks and exploits.

Here is an updated “hierarchy of risks” for today’s mobile world:
Risks stemming from the use of Java and .NET in today’s mobile world

The graphic above highlights risks that have either evolved or emerged within the mobile ecosystem – and these are probably best illustrated with real world incidents and trends (also highlighted below):

Earlier this year, a mobile development company documented how to turn one of the most popular paid Android apps (SwiftKey Keyboard) into a keylogger (something that captures everything you do and sends it somewhere else).  

This little example illustrates all of the risks listed above:
  • IP theft (this is a paid app that can now be side loaded for free)
  • Content theft (branding, documentation, etc. are stolen)
  • Counterfeiting (it is not a REAL SwiftKey instance – it’s a fake – more than a cracked instance)
  • Service theft (if the SwiftKey app makes any web service calls that the true developers must pay for – then these users are driving up cloud expenses – and if any of these users write-in for support, then human resources are being burned here too)
  • Data loss and privacy violations (obviously there is no “opt-in” to the keylogging and the passwords, etc. that are sent are clearly private data)
  • Piracy (users should be paying the licensing fee normally charged)
  • Malware (the keylogging is the malware in this case)
In this scenario, the “victim” would have needed to go looking for “free versions” of the app away from the sanctioned marketplace – but that’s not always the case.

Symantec recently reported finding counterfeit apps inside the Amazon Appstore (and Amazon has one of the most rigorous curating and analysis check-in processes). I, myself, have had my content stripped and look-alike apps published across marketplaces too – see my earlier posts Hoisted by my own petard: or why my app is number two (for now) and Ryan is Lying – well, actually stealing, cheating and lying – again.

Now these anecdotes are all too real, and sadly, they are also by no means unique. Trend Micro found that 1 in every 10 Android apps is malicious and that 22% of apps inappropriately leak user data – that is crazy!

For a good overview of Android threats, check out this free paper by Trend Micro, Android Under Siege: Popularity Comes at a Price.

To obfuscate (or not)?

As I’ve already written – you shouldn’t do anything simply to make reverse engineering and tampering more difficult – you should only take action if the associated risks are significant enough to you and the steps in question would reduce those risks to an acceptable level (your “appetite for risk”).

…but, seriously, who cares what I think? What do the owners of these platforms have to say?

Android “highly recommends” obfuscating all code and emphasizes this in a number of specific areas such as: “At a minimum, we recommend that you run an obfuscation tool” when developing billing logic. …and they go so far as to include an open source obfuscator, ProGuard – where, again, Android “highly recommends” that all Android apps be obfuscated.
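For context, enabling ProGuard in an ADT/Ant-era Android project (the tooling current as of this writing) is largely a configuration exercise. The sketch below assumes that project layout; the billing class and method names are hypothetical, purely to illustrate keeping reflectively-referenced entry points out of the obfuscation pass:

```
# project.properties — uncommenting this line turns on ProGuard for
# release builds in an ADT/Ant-style Android project:
proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt

# proguard-project.txt — app-specific rules; com.example.billing.* below
# is a hypothetical package, shown only to illustrate -keep syntax:
-keep public class com.example.billing.PurchaseObserver
-keepclassmembers class * {
    public void onPurchaseStateChange(...);
}
```

The stock proguard-android.txt supplied with the SDK handles the platform-wide rules (Activities, Views inflated from XML, etc.); the project file only needs rules for code your app touches via reflection.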

Microsoft also recommends that all modern apps be obfuscated (see Windows Phone policy) and they also offer a “community edition” obfuscator (our own Dotfuscator CE) as a part of Visual Studio. 

Tamper detection, exception monitoring, and usage profiling

Obfuscation “prevents” reverse engineering and tampering; but it does not actively detect when attackers are successful (and, with enough skill and time – all attackers can eventually succeed). Nor would obfuscation defend against attacks or include a notification mechanism – that’s what tamper defense, exception monitoring, and usage profiling do. If you care enough to prevent an attack, chances are you care enough to detect when one is underway or has succeeded. 

Application Hardening Options (representative – not exhaustive)

If you decide that you do agree with Android’s and Microsoft’s recommendation to obfuscate – then you have to decide which technology is most appropriate to meet your needs – again, a completely subjective process to be sure, but hopefully, the following table can serve as a comparative reference.

Thursday, June 13, 2013

Mobile Analytics: like playing horseshoes or bocce ball? (When close is “good enough”)

A recent post on Flurry’s “industry insight” blog caught my eye. The post, The iOS and Android Two-Horse Race: A Deeper Look into Market Share, called out the fact that iOS app users spend more time inside applications than their Android counterparts and then posited three potential underlying causes (condensed here – visit their post for the full narrative):
  • One was that the two dominant operating systems have tended to attract different types of users (we’ll get back to this shortly – this is close).
  • A second possible reason was that the fragmented nature of the Android ecosystem creates greater obstacles to app development and therefore limits availability of app content (suggesting app quality is the driving force).
  • The third possible explanation offered by Flurry was that iOS device owners use apps so developers create apps for iOS users and that in turn generates positive experiences, word-of-mouth, and further increases in app use (combining the two reasons above I suppose).

What struck me in this post was that, while there’s no disputing Flurry’s observation about “time spent in apps” across platforms, the lack of precision within the “2.8 billion app sessions” they track every day made genuine root cause analysis virtually impossible – and led to, in my view, an erroneous conclusion (or, more precisely, a false set of options where the real mechanics were all but invisible). 

Back in January, I published the blog post Marketplaces Matter and I’ve got the analytics to prove it where I compared two versions of one of my apps, Yoga-pedia, published through the Google Play and Amazon marketplaces. What’s noteworthy here is that the apps are genuinely identical – functionality, UX, everything – and yet, the total time spent inside the app distributed through the Amazon marketplace was 40% higher than from Google Play. Or, pivoting the ratio, total time spent inside the app sourced from Google Play was 72% of the time spent inside the (identical) app sourced from Amazon.
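That pivot is just the reciprocal of the first ratio. A quick sketch in Python, using the rounded 40% figure from the post:

```python
# Pivoting "Amazon time-in-app is 40% higher than Google Play" to express
# Google Play as a fraction of Amazon: the reciprocal of the ratio.
amazon_over_google = 1.40
google_over_amazon = 1 / amazon_over_google
print(f"{google_over_amazon:.1%}")  # → 71.4% (the post rounds this to 72%)
```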

Now, if I’m interpreting Flurry’s graph in the above blog for January 2013 properly (when my earlier stats were generated), it shows a nearly identical ratio (the total time in “Android apps” was ~75-80% of total time in iOS). So what does that suggest?

  1. iOS users and Android users clearly use different marketplaces – but marketplace source is not something Flurry tracks.
  2. iOS apps themselves are of course always different from Android apps (I have an iOS version of Yoga-pedia that is close to my Android flavors – but even these are different). This is a major variable that Flurry analytics cannot separate out – they are looking at the roll-up of all iOS apps and comparing them to all Android apps. 
  3. Treating all Android apps as a single data set (which includes multiple marketplaces) – further obscures what may be one of the key drivers of user behavior – the marketplace community.

So – going back to the first hypothesis, that Android attracts a different class of user than does iOS, I think that is as close as they could come given the kind of data available – the real answer is most likely that the Apple marketplace attracts a different kind of user than does Google Play (and the mix of Amazon Android app users is probably not significant enough to move the big needle).

…And so that brings me back to my original question – is this kind of imprecise (but still accurate) intelligence “good enough” (like horseshoes, bocce ball, and nuclear war)? If this were as far as true application analytics could take me – then maybe…

BUT, once I had identified the potential role that marketplaces can have – I was able to drill down even deeper and identify other marketplace deltas that were (at least to me) extremely valuable, including:
  • Amazon click through rate (CTR) was 164% higher than the Google Play CTR 
  • Google Play Ad Delivery Failure rate (ADFR) was 199% higher than the Amazon ADFR  
  • Amazon user upgrade rate was 54% higher than the Google Play upgrade rate (from free to paid app version).

So, in my case, owning my own data and having an instrumentation and analytics platform able to capture data points specific to my needs (precision) turns out to be very important indeed.

So why would anyone use technology like Flurry’s? LOTS OF REASONS relating to ad revenue and all of the other monetization services they offer app developers (that’s why they’re in business) – and I guess that’s the big point. Services and technologies like Flurry’s are built for app monetization – and to the extent that some analytics are an important ingredient in their recipe, you can bet that they’ll nail it – but to do more would be over-engineering at best and, more likely, pose a material risk to their entire business model.

For advertising across huge inventories of mobile apps, analytics should be a bit like playing horseshoes – knowing that I can expect iOS to generally perform better than Android is useful. 

On the other hand, as a development organization, if I really want to fine-tune my app and optimize for adoption, specific behaviors, and operational/development ROI – I need an application analytics solution built with that use case in mind. Not only are alternative analytics solutions missing key capabilities, there are solid business reasons why those alternative technologies should actively avoid ever adding those very capabilities.

Saturday, February 2, 2013

The link between privacy and analytics gets stronger still: FTC moves to establish policy and best practices in today’s mobile “Wild West”

As federal and state regulatory agencies become increasingly assertive in defining and enforcing app user rights, application analytics (like PreEmptive Analytics) that embed opt-in policy enforcement and limit data access and ownership are becoming increasingly strategic (and essential) to development organizations.

Today, in a strong move to protect American privacy, the Federal Trade Commission published the report Mobile Privacy Disclosures: Building Trust Through Transparency (PDF). For those that don’t want to read the entire report, check out the coverage in the NY Times: F.T.C. Suggests Privacy Guidelines for Mobile Apps for a nice overview (not sure how long that link will be live though).

The take away from my perspective is this – while app marketplaces like Apple and Google and advertising services like Flurry continue to fall under increasing scrutiny, the app developer is no longer flying under the radar or going to be given a pass for not understanding the rapidly emerging policies, recommended practices and general principles.

From the referenced NY Times article above…

“We‘ve been looking at privacy issues for decades,” said Jon Leibowitz, the F.T.C. chairman. “But this is necessary because so much commerce is moving to mobile, and many of the rules and practices in the mobile space are sort of like the Wild West.”


The F.T.C. also has its sights on thousands of small businesses that create apps that smartphone users can download for a specific service. The introduction of the iPhone created a sort of gold rush among start-ups to create apps featuring games, music, maps and consumer services like shopping and social networking.

“This says if you’re outside the recommended behavior, you’re at a higher risk of enforcement action,” said Mary Ellen Callahan, a partner at Jenner & Block and former chief privacy officer for the Department of Homeland Security.

Even before this report, “the F.T.C. has not been meek,” said Lisa J. Sotto, managing partner of Hunton & Williams in New York. “They have brought a number of enforcement actions,” she said. “Those in the mobile ecosystem know they’re in the regulators’ sights.”

…but do app developers really know?

In an earlier post of mine, COPPAesthetics: form follows function yet again, I lay out in more detail both the privacy concepts that the FTC is developing and the technical and functional capabilities (and business models) that distinguish application analytics from the other analytics categories out there. These features include opt-in policy enforcement (for both regular usage and exception handling), encryption on the wire, greater control of data collection and more…

COPPA is a much more formal set of requirements to protect children, with severe sentencing guidelines and a growing set of precedents where app developers are being fined with increasing regularity – BUT there is little doubt that the FTC is not limiting itself to children’s rights. In its latest report, the FTC recommends that:

“App developers should provide just-in-time disclosures and obtain affirmative express consent when collecting sensitive information outside the platform’s API, such as financial, health, or children’s data or sharing sensitive data with third parties.” (Page 29 of the report)

If you’re building mobile apps (or services that support mobile apps) and have been “getting by” using marketplace and marketing analytics services for user and app usage feedback – be very careful. Expect these services to become more and more restrictive (even dropping apps that appear to be too risky). They will (rightly so) limit their data collection to fall within (and probably well within) regulatory constraints, leaving developers to operate their apps “in the dark” (or assume the risk of non-compliance).

Again from the NY Times article: “Morgan Reed, executive director of the Association for Competitive Technology, a trade group representing app developers, said that the organization generally supported the commission’s report but that it had some concerns about what he called “unintended consequences.” If app stores are worried about their own liability over whether they have adequately checked the privacy protections of a mobile app they sell, they might err on the side of caution and not screen for privacy at all, he said.”

App developers are welcome to collect runtime data necessary to operate (and improve) their applications (see my COPPA post for more clarity here) – collecting data usually only becomes an issue when that data is shared or used for other purposes or by other parties – and that is at the heart of application analytics and what distinguishes it from its peers. 

Application analytics is all about improving application quality, ensuring operational excellence and delivering a superlative user experience – there is no ulterior motive or agenda.

Thursday, January 31, 2013

Marketplaces Matter and I’ve got the analytics to prove it


As I've covered many times in earlier posts, I've used PreEmptive Analytics to instrument a family of mobile yoga apps from TheMobileYogi. These apps are deployed across iOS, Android and Windows, and are packaged in a variety of ways. Two apps – Yoga-pedia (free) and A Pose for That (premium) – are direct-to-consumer using a “freemium” model that includes embedded ads inside Yoga-pedia. There is also a white-labeled app platform that can quickly generate a “re-skinned” app personalized for yoga studios, retailers and other “wellness-centered” businesses. With all of these combined, I’m happy to report that we've passed the 110K download mark and are still growing by the thousands each week.

The Issue at Hand

One adoption/monetization “variable” that is rarely measured in a clean way is the impact/influence that an app’s marketplace can have on the success of the app itself. This is in large part a practical issue – it’s not easy to compare, for example, Apple’s App Store with Google Play because the apps themselves are often quite distinct from one another – and so isolating the marketplace influence from the apps themselves can be tricky. However, with Android, we publish identical apps through two very different marketplaces: Amazon’s Appstore for Android and Google’s Google Play marketplace. By focusing on apps that are identical in every way BUT the API calls to the respective marketplaces, we can start to drill into the direct and indirect consequences of marketplace selection.

Android makes up roughly 51% of TheMobileYogi downloads.

Android downloads combine both Amazon and Google Play adoption.

Android Downloads of Yoga-pedia

As of January 29, 2013, the total downloads of Yoga-pedia were:
  • 21,109 from Amazon (36% of the total) 
  • 36,981 from Google Play (64% of the total) 
Said another way, Google Play downloads were 75% greater than Amazon’s.
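The percentages above follow directly from the raw counts. A quick sanity check in Python:

```python
# Sanity check on the Yoga-pedia download split reported above.
amazon = 21_109
google_play = 36_981
total = amazon + google_play

print(total)                              # → 58090
print(f"{amazon / total:.0%}")            # → 36%
print(f"{google_play / total:.0%}")       # → 64%
# "75% greater": Google Play downloads relative to Amazon's
print(f"{google_play / amazon - 1:.0%}")  # → 75%
```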

…But downloads only tell a very small part of the story. What are users doing AFTER they download the app? How often do they use the app, for how long, and what exactly are they doing when they are inside?

Yoga-pedia Sessions

Using PreEmptive Analytics Runtime Intelligence, we see that there are in fact striking differences between the Google Play user population and the Amazon user population.

One glaring difference is the total number of users in each community.

The total number of unique users from Google Play is 208% higher than that from Amazon.

If we were to stop here, I think our conclusion would be obvious – Google Play delivers more downloads and more unique users than Amazon – and that has to make it the clear winner, right? (Note: there has been no difference in marketing, advertising, etc. between the two marketplaces – specifically, we have done none.)

…but if we were to stop here, we would be making a very big mistake!

How much time is spent inside the app? 

Another glaring difference that our analytics reveal is the average session length of our users – Amazon users tend to stay inside the app roughly two and a half times longer!

So – if we multiply the total number of sessions by the average session length, we can calculate how many hours were spent inside Yoga-pedia.
  • Amazon: (41,937 sessions) X (13.88 minutes per session) = 9,701 hours 
  • Google Play: (75,346 sessions) X (5.5 minutes per session) = 6,907 hours 
Total time spent inside the app distributed through the Amazon marketplace is 40% higher than from Google Play. 
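The multiplication above can be reproduced directly from the session counts and average session lengths:

```python
# Reproducing the time-in-app totals from the session counts and
# average session lengths reported above.
amazon_hours = 41_937 * 13.88 / 60   # sessions × minutes/session ÷ 60
google_hours = 75_346 * 5.5 / 60

print(round(amazon_hours))                       # → 9701
print(round(google_hours))                       # → 6907
print(f"{amazon_hours / google_hours - 1:.0%}")  # → 40%
```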

If I am trying to maximize ad impressions, establish a brand or hold my user’s attention toward some other objective, Amazon now looks significantly more attractive to me than Google Play.

User behavior

Since Amazon users spend so much more time inside Yoga-pedia – how is their behavior different and how does that translate into measurable value? 

Returning users

Returning users (in red) form the majority of the Amazon session activity – Google Play users are less likely to use the app multiple times; they are “tire kickers” for the most part. Returning user counts are roughly equivalent across the two marketplaces, even though there are many more Google Play users overall.

Returning users are loyal, and a lasting “relationship” can be established – whether you’re selling something, hoping to influence their behavior, or tapping their expertise – returning users are always “premium.”

Ad Click Through Rate (CTR)

Moving to a more concrete metric – we can compare total impressions, Ad Click through Rates (CTR) as well as Ad Server Errors – for this analysis, we’re just looking at 30 days. Note: in both cases, the apps use AdMob.

[Table: ad metrics by marketplace (Google Play vs. Amazon) – Ad Impressions, Ad Delivery Failures, Ad Failure Rate, Click-Through Count]

Amazon CTR is 164% higher than the Google Play CTR 

Google Play Ad Delivery Failure rate (ADFR) is 199% higher than the Amazon ADFR 

Now, it’s not really possible to isolate WHY these differences exist – but we can make some educated guesses. For CTR percentages – are Amazon users simply more conditioned or likely to buy stuff as compared to the typical Google Play user?

For ADFR percentages, we’re using the same ad service API, so the ad service itself is not to blame. Are the devices being used by Google Play users (as a total population) of lower quality or are they connecting through networks that are not as reliable?

Regardless, that kind of conversion delta is nothing to ignore.


As I've already mentioned, in addition to pushing ads, Yoga-pedia is one half of a freemium model where we hope to get these users to upgrade to our commercial version, A Pose for That.

With PreEmptive Analytics, I’ve instrumented the app to track the feature that takes a user back to their respective marketplace (positioned on the app upgrade page). The ratio of unique users (not sessions) to upgrade clicks tells another important story; how likely is an Amazon user versus a Google Play user to upgrade to our paid app?

[Table: upgrade conversions by marketplace (Google Play vs. Amazon) – Upgrade Marketplace clicks, Unique Users, Conversion Rate]

Amazon user conversion rate is 54% higher than the Google Play conversion rate. 

User behavior within my app

Yoga-pedia offers its users two locations where a user can click to upgrade; in a “tell me more” about the premium app page and at the end of an “Intro” to the current Yoga-pedia app.

By looking at the split of where users are more likely to “convert,” we can learn something important about the app’s design in general AND the differences between user patterns across marketplaces in particular. As a proportion, Amazon users are more likely to convert from the Intro page than their Google Play counterparts. The Intro page is “deeper” in the app (harder to find) and so this difference in usage pattern may imply a more thorough reading of embedded pages by Amazon users (and this would be supported by the much longer session times).


App stability

Exceptions not only interrupt a user’s experience (with all of the bad things that flow from that), they are also a material expense (support, development, etc.). Given that we are talking about two virtually identical apps – would we expect one version to be more unstable (and therefore more expensive) than the other?

[Table: errors per session by marketplace (Google Play vs. Amazon)]

Whether or not we expected it, the Google Play version of Yoga-pedia has an error rate per session that is 15% higher than its Amazon equivalent. 

Again – the analytics at this level can’t tell us why – but we can still make an educated guess regarding the differences in phone type and network stability of the two populations. 


Of course, if you want to drill down into the specific exceptions (and examine stack traces, device types, carriers, etc.) – all of that is available through analytics as well.

Here are exception details for the error rates described above. Anyone want to help me debug these?

Do marketplaces matter? Of course they do. 

Of course, different apps will yield different results – but I don’t think that there can be any question that each marketplace comes with its own unique bundle of user experience, service level, and general appeal – and that, taken together, these attract their own distinct constituencies (communities) with their own behaviors, likes, dislikes and demographics.

App developers who choose to ignore the market, commerce, and security characteristics that come with each marketplace do so at their peril – the differences are real, they should influence your design and marketing requirements, and they will undoubtedly impact your bottom line and your chances of delivering a truly successful app.