Tuesday, September 1, 2015

When it comes to application risk management, you can't do it alone.

I’m often asked to estimate how many developers are required to obfuscate and harden an application against reverse engineering and tampering – and when they say “required,” what they usually mean is the bare minimum number of developers that need to be licensed to use our software.

Of course it's important to get the number of licensed users just right (if the count is too high, you're wasting money - but, if it's too low, you're either not going to be efficient or effective - or worse still - you're forced to violate the license agreement to do your job).

Yet, as important as this question seems, it's not the first question that needs answering.

Staffing to effectively manage application risk is not the same as counting the number of concurrent users required to run our (or any) software at a given point in time.

Consider this:

How many people are required to run our application hardening products on a given build of an application? Actually, none at all – both Dotfuscator for .NET and DashO for Java can be fully integrated into your automated build and (continuous) deployment processes.
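To make that concrete, here is a minimal sketch in Python of what an unattended post-compile hardening step might look like. The tool name, config file, and wrapper function are illustrative assumptions, not product specifics:

```python
import subprocess

# Illustrative command only -- substitute your actual hardening tool
# and saved configuration file.
HARDEN_CMD = ["dotfuscator", "harden_config.xml"]

def harden_step(cmd=HARDEN_CMD, runner=subprocess.run):
    """Run the hardening tool as an automated post-compile build step.

    A non-zero exit code fails the build loudly instead of letting an
    unprotected binary slip into the deployment pipeline.
    """
    result = runner(cmd)
    if result.returncode != 0:
        raise RuntimeError("hardening step failed -- aborting the build")
    return True
```

In most CI systems the same effect is achieved even more simply: the tool's non-zero exit code fails the job outright.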

However, how many people does it take to effectively protect your application assets against reverse engineering and tampering? The answer is no fewer than two. Here’s why…

  • Application risk management is made up of one (or more) controls (processes not programs). These controls must first be defined, then implemented, then applied consistently, and, lastly, monitored to ensure effective use.
  • Application hardening (obfuscation and tamper defense injection) is just such a control – a control that is embedded into a larger DevOps framework – and a control that is often the final step in a deployment process (followed only by digital signing).

Now, in order to be truly effective, application hardening cannot create more risk than it avoids – the cure cannot be worse than the disease.

What risks can come from a poorly managed application hardening control (process)?

If an application hardening task fails and goes undetected,

  • the application may be distributed unprotected into production and the risks of reverse engineering and tampering go entirely unmanaged, or 
  • the application may be shipped in a damaged state causing runtime failures in production.

If an application hardening task failure is detected, but the root cause cannot be quickly fixed, then the application can't be shipped; deadlines are missed and the software can't be used.
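One inexpensive detective control for the first failure mode is a post-build sanity check: if recognizable internal names still appear in the shipped binary, the hardening step probably never ran. A hedged sketch – the sentinel names are hypothetical, and a byte-scan is only a rough heuristic, not proof of protection:

```python
def looks_obfuscated(binary_bytes, sentinel_names):
    """Heuristic check that obfuscation actually ran.

    binary_bytes: raw contents of the built artifact.
    sentinel_names: internal identifiers that should never survive
    renaming (e.g., a private class name from your own code base).
    Returns (ok, leaked_names).
    """
    leaked = [name for name in sentinel_names
              if name.encode("utf-8") in binary_bytes]
    return (len(leaked) == 0, leaked)
```

A check like this runs in seconds at the end of the build and turns the "undetected failure" scenario into a detected one.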

So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?

You’ll need (at least) one person to define and implement the application hardening control.

…and you’ll need one person to manage the hardening control (monitor each time the application is hardened, detect any build issues, and resolve any issues should they arise in a timely fashion).

Could one individual design, implement, and manage an application hardening control? Yes – one person can certainly do all three.

However, if the software being protected is released with any frequency or urgency, one individual cannot guarantee that they will be available to manage that control on every given day at every given time – they simply must have a backup – a "co-pilot."

No organization should implement an application hardening control that’s dependent on one individual – there must be at least two individuals trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for unexpected shipping delays and/or shipping damaged code or releasing an unprotected application asset into “the wild” is typically so severe that even though the likelihood of such an event occurring on any given day may seem remote - it cannot be rationalized.

This is nothing new in risk management – every commercial plane flies with a co-pilot for this very reason – and aircraft manufacturers do not build planes without a co-pilot’s seat. It would be cheaper to build and fly planes that only accommodate one pilot – and it wouldn’t be an issue for most flights – but to ignore the risk that having a single pilot brings would be more than irresponsible – it would be unethical.

Are there other reasons for additional people and processes to be included? Of course – but these are tied to development methodologies, architecture, testing and audit requirements of the development organization, etc. These are not universal practices.

If reverse engineering and/or application tampering pose Intellectual Property, privacy, compliance, piracy, or other material risks, those risks need to be managed accordingly – as a resilient and well-defined process. Put simply, when it comes to application risk management, you can't do it alone.

Thursday, August 6, 2015

Visual Studio 2015 Dotfuscator CE Adoption: The first 500 users

For the first time since its launch in 2003, Dotfuscator Community Edition (CE), the no-cost obfuscator included with Visual Studio, can send anonymous usage data for analysis (only after opt-in, of course) – and after only one week and 500 users, the resulting application analytics are already proving their worth.

Before I share some of the insights, let me also point out that the instrumentation now inside Dotfuscator CE is the very same as what Dotfuscator CE has been able to inject into .NET apps for Visual Studio users since VS2010. In other words, you can use Dotfuscator CE to achieve these same kinds of results.

Opt-in: Of course, we collect no information without a user’s informed consent. Happily, 93% of all of our users opted in (thank you, Dotfuscator CE users!). Note: Dotfuscator can inject opt-in policy enforcement too.

Visual Studio SKUs: One important data point we were keen to know was the distribution of our users across the new Visual Studio 2015 SKUs (Enterprise, Professional, and Community Edition). One important historical note: Dotfuscator CE was never included in Visual Studio Express – only the paid SKUs. For the first time, Dotfuscator CE is now available inside the new (free) Visual Studio CE SKU – and over half of our initial users are coming from this new community. Does this suggest that our total Dotfuscator CE community will double as Visual Studio users migrate to VS 2015? We sure hope so! …and how does this tie back to user registrations? You can bet that we will be tracking this closely.
.NET Runtimes: We can infer quite a bit about the kinds of development our users are focused on from the .NET runtimes their apps target (not the .NET runtime installed with VS2015, which is 4.5, of course). While everyone is excited by the promise of modern development and universal apps, many development organizations’ current projects are still grounded in earlier .NET generations. Clearly, our application protection and analytics will need to support these runtimes for some time to come.
Equipment Manufacturers: In much the same vein, when looking at the underlying devices of these early VS2015 users, we see that they are mostly running on-prem (note, we are looking exclusively at VS2015 Dotfuscator CE users). While everyone is excited about Windows 10 and embracing Azure, early use is still mostly on-prem – and it is for this exact scenario that our application protection and analytics support on-prem dev environments (and app deployments).

Follow links for:
A comparison of Dotfuscator CE and our other Dotfuscator versions
General information on PreEmptive Analytics

Stay tuned for more Visual Studio 2015 insights via Dotfuscator CE.

Thursday, July 30, 2015

Can your Application Analytics APIs do this?

Writing data to disk is easy – developing a database is not.

Posting data to a URL is easy – developing an application analytics ingestion pipeline is not.

If you’ve written even a single line of code (in any language), I probably don’t have to explain why writing data to disk is easy but developing a database is not (for those that have never written any code – it’s the extra database “machinery” required to handle scale, concurrency, resilience, security, etc. that demands a horde of PhDs and rock-star developers).

…and so it is with application analytics…

Posting data to a URL is easy – developing an application analytics ingestion pipeline is not.

Unlike the well-understood database scenario described above (which, ironically, includes analytics repositories - really little more than a specialized database use case), I still find development organizations that don’t have the same respect for application analytics instrumentation and ingestion stacks.

…just yesterday I was speaking with a senior developer from a large, extremely successful ISV (who shall remain anonymous); he confessed that their homegrown application analytics solution (built off of an analytics ISV acquisition of theirs) was generating WAY too much data, wreaking havoc on their infrastructure, alarming their clients, and yielding very few actionable insights.

Coincidentally, while I was hobnobbing away at this conference, our development team released updates to our Linux and Win32 (C++, etc.) application analytics APIs – reminding me once again how deep you need to go if you want commercial-grade application analytics.

As with database development – unless it is actually your core business – you should not get into the business of developing this kind of “machinery.” Here’s a sampling of new features included in these latest API releases – with references to the existing “machinery” already in place:

1. New app analytics API capability: Cached message envelopes are automatically deleted if they are deemed “too old” – the aging threshold is configurable by development at build or runtime. 

Does your API
  • Provide automatic offline caching? 
If yes…
  • How is the size of that cache managed and how will it behave should the cache hit capacity?
  • Can your application “prune” a growing cache based upon the aging of its data to avoid the cache growing too large because of prolonged isolation?
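The age-based pruning described above is easy to picture as a small routine. A sketch, assuming each cached envelope carries its creation timestamp (the data shapes here are illustrative, not the API's actual types):

```python
import time

def prune_cache(envelopes, max_age_seconds, now=None):
    """Drop cached telemetry envelopes that have aged past the
    configurable threshold, so a long-offline device's cache cannot
    grow without bound or replay stale data.

    envelopes: list of (created_at_epoch_seconds, payload) tuples.
    """
    now = time.time() if now is None else now
    return [(t, payload) for (t, payload) in envelopes
            if now - t <= max_age_seconds]
```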
2. New app analytics API capability: Application analytics message envelopes are split if/when they exceed a configurable size.

Does your API
  • Bundle telemetry packets into larger envelopes and queue them for asynchronous transmission away from your production app?
If yes...
  • Can the frequency of transmission be optimized to minimize bandwidth requirements (important for mobile connections)?
  • Can the timing of transmission be optimized to minimize network and CPU contention (important for real-time systems like simulators, games, transaction processing, etc.)?
  • Can developers further control the size of envelopes – and, specifically, the size of custom data payloads? 
  • …can your API accommodate custom payloads at all?
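Envelope splitting by size can be sketched as a simple packing loop – again an illustrative model, not the product's actual implementation:

```python
def pack_envelopes(messages, max_bytes):
    """Bundle telemetry messages into envelopes, starting a new
    envelope whenever adding the next message would push the current
    one past the configured size cap.

    messages: list of byte strings; returns a list of envelopes,
    each a list of messages.
    """
    envelopes, current, size = [], [], 0
    for msg in messages:
        if current and size + len(msg) > max_bytes:
            envelopes.append(current)
            current, size = [], 0
        current.append(msg)
        size += len(msg)
    if current:
        envelopes.append(current)
    return envelopes
```

Note that a single message larger than the cap still travels alone in its own envelope rather than being dropped.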
3. New app analytics API capability: Development can set user-defined HTTP headers to better support more intelligent routing and distribution of incoming telemetry PRIOR to unpacking the larger envelopes.

Does your application analytics ingestion pipeline 
  • Support dynamic routing of incoming production telemetry without having to re-instrument or redeploy your application?
If yes...
  • Under what conditions can you redirect incoming telemetry to new analytics endpoints?
  • Can incoming telemetry be distributed to multiple analytics endpoints in parallel?
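Header-based routing might look like the following on the ingestion side. The header name and route table are invented for illustration; the point is that the routing decision uses only the HTTP headers, so no envelope has to be unpacked to route it:

```python
def route_envelope(headers, route_table, default_endpoint):
    """Pick the analytics endpoint(s) for an incoming envelope using a
    user-defined HTTP header, before the payload is ever opened.

    Returning a list allows the same envelope to be fanned out to
    several endpoints in parallel.
    """
    key = headers.get("X-Telemetry-Route")  # hypothetical header name
    return route_table.get(key, [default_endpoint])
```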
These “deep” capabilities are required to safely and effectively scale application analytics implementations inside applications that may run on-premises and/or in 3rd party environments and/or be isolated from networks and/or subject to regulatory or compliance obligations, etc.

And I have not even scratched the surface on this topic – does your API
  • Enforce opt-in policies? - dynamically?
  • Consistently capture data across devices, OSes, and application tiers?
  • Capture runtime stack, application session, exception, feature, workflow, and custom data dimensions?
  • Integrate into your SDLC and DevOps tooling and processes?
The bottom line – no matter how awesome your developers are – and even if they can literally build anything – nobody can do everything; respect the stack.

Want more information on PreEmptive Analytics APIs (for Linux, Win32, iOS, Android, Windows Phone, WinRT, Java, and/or JavaScript)? 

Visit http://www.preemptive.com/support/api-support 

Tuesday, June 23, 2015

PreEmptive Solutions’ mobile application risk management portfolio: four releases in four weeks

Preventing IP theft, data loss, privacy violations, software piracy, and a growing list of other risks uniquely tied to the rise of enterprise mobile computing.

Enterprise mobile application risk

Mobile computing’s impact on society, our economy and the workplace is – and will continue to be – profound; there’s nothing controversial about that.

Yet, mobile technology – like the Internet and the PC “revolutions” that came before – cannot change everything.

Governance, risk, and compliance obligations will continue to frame – for better or for worse – every organization’s operational and strategic roadmaps.

Successful organizations innovate within these frameworks – effectively embracing new technology while preserving their standards for risk management, operational transparency, scale and resilience.

It is for these organizations – our traditional enterprise client-base – that PreEmptive Solutions offers, for the first time, its Mobile Application Risk Product Portfolio. The Mobile Portfolio includes preventative and detective controls – across both mobile applications and the backend services they rely upon – in a modular and extensible format.

Enterprise mobile application risk management

The infographic below identifies leading risk categories stemming from increased reliance on mobile applications. The vulnerabilities (potential gaps) call out specific tactics often employed by bad actors; the Controls identify the corresponding practices to mitigate these risks.

The bottom half of the infographic maps the capabilities of PreEmptive Solutions Mobile Application Risk Portfolio across platforms and runtimes and up to the risk categories themselves.

What’s new?

In the past four weeks, PreEmptive Solutions has filled out its mobile application portfolio with the following:
  1. PreEmptive Protection for iOS (PPiOS) – the first enterprise obfuscation solution for iOS applications, fully integrated into Xcode, with application performance optimizations and our usual live technical support and continuously improving releases. The PPiOS User Guide can be seen here.
  2. Dotfuscator Professional Update – enhancements include deeper support for Windows 8.1, Windows Phone, and Xamarin obfuscation. Stay tuned for Windows 10 updates as they become available. You can see the latest change log here.
  3. DashO Update - enhancements include support for the latest Android releases and frameworks. The DashO change log is here.
  4. PreEmptive Analytics for Linux – extending feature and exception analytics from your IoT devices to your back office servers. Contact us for immediate access to our Linux SDK. 

The result is a comprehensive mobile application risk management and monitoring platform for the enterprise – extending across (and beyond) the mobile applications themselves.

Want to learn more about these products? Visit www.preemptive.com.

Not sure if your organization needs to better manage mobile application risk? Check out 6 signs that you may be overdue for a mobile application risk review.

6 signs that you may be overdue for a mobile application risk review

Every organization must ultimately make their own assessment as to the level of risk they are willing to tolerate – and mobile application risk is no exception to this rule.

Yet, given the rapidly changing mobile landscape (inside and outside of every enterprise), organizations need to plan on regular assessments of their mobile risk management policies – especially as their mobile applications grow in importance and complexity.

Here are 6 indicators that you may be overdue for a mobile application risk assessment.
  1. Earlier PC/on-premises equivalents ARE hardened and/or monitored. Perhaps these risks need to be managed on mobile devices too – or, conversely, the risks no longer need to be managed at all.
  2. Enterprise mobile apps are distributed through public app marketplaces like Google Play or iTunes. Using public marketplaces exposes apps to potentially hostile users and can be used as a platform to distribute counterfeit versions of those very same apps.
  3. Mobile apps are run within a BYOD infrastructure alongside apps and services outside of corporate control. Access to a device via third-party software can lead to a variety of malicious scenarios that include other apps (yours) installed on the same device.
  4. Mobile apps embed (or directly access) proprietary business logic. Reverse engineering is a straightforward exploit. Protect against IP theft while clearly signaling an expectation of ownership and control – which is often important during the penalty phase of a criminal and/or civil trial.
  5. Mobile apps access (or have access to) personally identifiable information (or other data governed by regulatory or compliance mandates). Understanding how services are called and data is managed within an app can readily expose potential vulnerabilities and unlock otherwise secure access to high-value services.
  6. Mobile apps play a material role in generating or managing revenue or other financial assets. High-value assets and processes are a natural target for bad actors. Piracy, theft, and sabotage begin by targeting “weak links” in a revenue chain. An app is often the first target.
Want to know more about how PreEmptive Solutions can help reduce IP theft, data loss, privacy violations, software piracy, and other risks uniquely tied to the rise of enterprise mobile computing? 

Visit www.preemptive.com - or contact me here - I'd welcome the conversation.

In the meantime, here’s an infographic identifying leading risk categories stemming from increased reliance on mobile applications. The vulnerabilities (potential gaps) call out specific tactics often employed by bad actors; the Controls identify corresponding practices to mitigate these risks.

The bottom half of the infographic maps the capabilities of PreEmptive Solutions Mobile Application Risk Portfolio across platforms and runtimes and up to the risk categories themselves.

For more information on PreEmptive Solutions Enterprise Mobile Application Risk product portfolio, check out: PreEmptive Solutions’ mobile application risk management portfolio: four releases in four weeks.

Friday, June 19, 2015

ISV App Analytics: 3 patterns to improve quality, sales, and your roadmap

Application analytics are playing an increasingly important role in DevOps and Application Lifecycle Management more broadly – but ISV-specific use cases for application analytics have not gotten as much attention. ISV use cases – and by extension, the analytics patterns employed to support them – are unique. Three patterns described here are Beta, Trial, and Production builds. Clients and/or prospects using these “product versions” come with different expectations and hold different kinds of value to the ISV – and, as such – each instance of what is essentially the same application should be instrumented differently.

The case for injection

Typically, application instrumentation is implemented via APIs inside the application itself. While this approach offers the greatest control, any change requires a new branch or version of the app itself. With injection – the process of embedding instrumentation post-compile – you can introduce wholly different instrumentation patterns without having to rebuild or branch an application's code base.

The following illustration highlights the differences in instrumentation patterns across product versions – patterns that we, at PreEmptive, use inside our own products.

Beta and/or Preview

  • Measure new key feature discovery and usage 
  • Track every exception that occurs throughout the beta cycle 
  • Measure impact and satisfaction of new use cases (value versus usage) 
  • *PreEmptive also injects “Shelf Life” – custom deactivation behaviors triggered by the end of the beta cycle 


Trial and/or Evaluation

  • License key allowing individual user activity to be tracked in the context of the organization they represent (the prospective client) – this is connected to CRM records after the telemetry is delivered
  • Performance and quality metrics that are likely to influence the outcome of an evaluation through better-timed and more effective support calls 
  • Feature usage that suggests user-specific requirements – again, increasing the likelihood of a successful evaluation 
  • *PreEmptive injects “Shelf Life” logic to automatically end evaluations (or extend them) based upon the sales cycle 


Production

  • Enforce the organization’s opt-in policy to ensure privacy and compliance. NO personally identifiable information (PII) is collected in the case of PreEmptive’s production instrumentation. 
  • Feature usage, default settings, and runtime stack information to influence the development roadmap and improve proactive support. 
  • Exception and performance metrics to improve service levels. 
  • *PreEmptive injects Shelf Life functionality to enforce annual subscription usage. 

The stakeholders and their requirements are often not well understood at the start of a development project (and they often change over time). Sales and line-of-business management, in particular, may not know their requirements until the product is closer to release – or until after the release, when there's greater insight into the sales process. At that point, a development team could not retrofit those requirements with an analytics API even if it wanted to – and this is one very strong case for using analytics injection over traditional APIs.

PreEmptive Solutions ISV application analytics examples

Here are recent screen grabs of Dotfuscator CE usage (preview release) inside Visual Studio 2015.
Here is a similar collection of analytics Key Performance Indicators (KPIs) – this time focusing on current user evaluations.

…and lastly, here are a set of representative KPIs tracking production usage of DashO for Java.

If you’re building software for sale – and you’d like to streamline your preview releases, shorten your sales cycles and increase your win rates – and better align your product roadmap with what your existing clients are actually doing – then application analytics should be a part of your business – and – most likely – injection as a means of instrumentation is for you as well.

Wednesday, April 15, 2015

Five tenets for innovation and sustained competitive advantage through application development

I'm privileged to spend most of my working days in front of smart people doing interesting work across a wide spectrum of industries. In the spirit of "ideas don't have to be original - they just have to be good(c)" (the copyright is my attempt at humor RE other people's good ideas versus my silly aphorism), back to my central point: mobile, cloud, the rise of big data, etc. are all contributing to a sense that business (and the business of IT) is entering an entirely new phase fueled by technology, globalization, and more - and with this scale of change comes confusion. In spite of all this background noise, I'm witnessing many of our smartest customers and partners converge on the following five tenets - tenets that I know are serving some of the smartest people in the coolest organizations extremely well - cheers.

1.       Organizations must innovate or be rendered obsolete.
       Challenge: Applications now serve as a hub of innovation and a primary means of differentiation – across every industry and facet of our modern economy.
       Response: Innovative organizations use applications to uniquely engage with their markets and to streamline their operations.
2.       Genuine innovation is a continuous process – to be scaled and sustained.
       Challenge: Development/IT must internalize evolving business models and emerging technologies while sustaining ongoing IT operations and managing increasingly complex regulatory and compliance obligations.
       Response: Leading IT organizations imagine and deliver high-value applications through agile feedback-driven development practices and accelerated development cycles that place a premium on superior software quality and exceptional user experiences.
3.       Modern applications bring modern risks.
       Challenge: In order to sustain competitive advantage through application innovation, organizations must effectively secure and harden their application asset portfolios against the risks of revenue loss, Intellectual Property theft, denial of service attacks, privacy breaches, and regulatory and compliance violations.
       Response: Successful organizations ensure that security, privacy, and monitoring requirements are captured and managed throughout the application lifecycle from design through deployment and deprecation – as reflected in appropriate investments and upgrades in processes and technologies.
4.       Every organization is a hybrid organization – every IT project starts in the middle.
       Challenge: Organizations must balance the requirement to innovate with the requirement to operate competitively with existing IT assets.
       Response: Mature organizations do not hard-wire development, security, analytics, or DevOps practices to one technology generation or another. The result is materially lower levels of technical debt and the capacity to confidently embrace new and innovative technologies and the business opportunities they represent.
5.       Enterprise IT requirements cannot be satisfied with consumer technologies – shared mobile platforms and BYOD policies do not alter this tenet.
       Challenge: Enterprise security, compliance, and integration requirements cannot (and will not) be satisfied by mobile/web development and analytics platforms designed for consumer-focused, standalone app development (and the business models they support).

       Response: Mature IT organizations drive mobile app innovation without compromising core enterprise ALM, analytics, or governance standards by extending proven practices and enterprise-focused platforms and technologies. 

Tuesday, April 7, 2015

Darwin and Application Analytics

Survival of the fittest

Technological evolution is more than a figure of speech.

Survival - that is, adoption (technology proliferation and usage) - favors the species (technology) that adapts most effectively to environmental changes and most successfully competes for the limited resources required for day-to-day sustenance. In other words, the most agile technology wins in this winner-take-all Darwinian world.

You might think you know where I’m headed – that I’m going to position application analytics and PreEmptive Analytics in particular as being best able to ensure the agility and resilience applications need to survive – and while that’s true – that’s not the theme of today’s post.

A rose by any other name… and applications are (like) people too!

Today’s theme is on properly classifying application analytics (and PreEmptive Analytics in particular) among all of the other related (and in some cases, competing) technologies – are they fish or fowl? Animal, vegetable, or mineral? Before you can decide if application analytics is valuable – you have to first identify what it is and how it fits into your existing ecosystem – food chain - biosphere.

In biology, all life forms are organized into a hierarchy (taxonomy) of seven levels (ranks) where each level is a super set of the levels below. Here, alongside people and roses, is a proposed “taxonomic hierarchy” for application analytics.

What’s the point here?

What does this tell us about the species “PreEmptive Analyticus”? The hierarchy (precedence of the levels) and their respective traits are what ultimately gives each species their identity.  ...and this holds true for application analytics (and PreEmptive Analytics in particular) too.

Commercial Class software is supported by a viable vendor (PreEmptive Solutions in this case) committed to ensuring the technology’s lasting Survival (with resources and a roadmap to address evolving requirements).

Homegrown solutions are like mules – great for short term workloads, but they’re infertile with no new generations to come or capacity to evolve.

Analytics is the next most significant rank (Order) – PreEmptive Analytics shares a common core of functionality (behavior) with every other commercial analytics solution out there today (and into the future).

HOWEVER, while common functionality may be shared, it is not interchangeable.

Hominids are characterized as Primates with “relatively flat faces” and “three dimensional vision” – both humans and chimpanzees obviously qualify, but no one would confuse the face of a human for that of a chimpanzee. Each species uniquely adapts these common traits to compete and to thrive in its own way.

The Family (analytics focused more specifically on software data) and the Genus (specifically software data emitted from/by applications) each translate into increasingly unique and distinct capabilities – each of which, in turn, drive adoption.

In other words, in order to qualify as a Species in its own right, PreEmptive Analytics must have functionality driving its own proliferation and usage (adoption) distinct from other species e.g. profilers, performance monitors, website monitoring solutions, etc. while also establishing market share (successfully competing). 

How do you know if you've found a genuine new species?

According to biologists and zoologists alike, the basic guidelines are pretty simple, you need a description of the species, a name, and some specimens.

In this spirit, I offer the following description of PreEmptive Analytics – for a sampling of “specimens” (case studies and references) - contact me and I’m more than happy to oblige…

The definition enumerates distinguishing traits and the "taxonomic ranking" that each occupies - so this is not your typical functional outline or marketecture diagram.

CAUTION – keep in mind that common capabilities can be shared across species, but they are not interchangeable. Each trait is described in terms of its general function, how it's been specialized for PreEmptive Analytics, and how/why it's adaptable to our changing world (and therefore more likely to succeed!). I’m not going to say who’s the monkey in my analytics analogy here, but I do want to caution against bringing a chimp to do a (wo)man’s job.

PreEmptive Analytics

Core Analytics functionality

Specialized: The ingestion, data management, analytics computations, and the visualization capabilities include “out of the box” support for application analytics specific scenarios including information on usage, users, feature usage patterns, exceptions, and runtime environment demographics.

Adaptable: In addition to these canned analytics features, extensibility points (adaptability) ensure that whatever unique analytics metrics are most relevant to each application stakeholder (product owner, architect, development manager, etc.) can also be supported. 

Software Data (Family traits)

Incident Detection: PreEmptive Analytics (for TFS) analyzes patterns of application exceptions to identify production incidents and to automatically schedule work items (tasks).

Data transport: The PreEmptive Analytics Data Hub routes and distributes incoming telemetry to one or more analytics endpoints for analysis and publication.

Specialized: “Out of the box” support for common exception patterns, automatic offline-caching and common hybrid network scenarios are all built-in.

Adaptable: User-defined exception patterns and support for on-premises deployments, isolated networks, and high volume deployments are all supported. 
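The incident-detection idea itself is easy to sketch. Assuming a deliberately simplified model (the names and the thresholding rule below are hypothetical, not the product's actual logic), exception reports are grouped by signature and a work item fires the moment a pattern crosses a threshold:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: group incoming exception reports by a
// "signature" (type + top stack frame) and flag an incident once a
// signature crosses a configured threshold.
public class IncidentDetector {
    private final int threshold;
    private final Map<String, Integer> counts = new HashMap<>();

    public IncidentDetector(int threshold) { this.threshold = threshold; }

    // Returns true at the moment a signature crosses the threshold –
    // i.e., when a work item (task) would be scheduled, exactly once.
    public boolean report(String exceptionType, String topFrame) {
        String signature = exceptionType + "@" + topFrame;
        return counts.merge(signature, 1, Integer::sum) == threshold;
    }

    public static void main(String[] args) {
        IncidentDetector detector = new IncidentDetector(3);
        boolean incident = false;
        for (int i = 0; i < 3; i++) {
            incident = detector.report("NullReferenceException", "Cart.Checkout");
        }
        System.out.println(incident ? "work item scheduled" : "no incident yet");
    }
}
```

A user-defined pattern, in this simplified model, is just a different signature function or threshold.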

Application Data (Genus traits)

Application instrumentation (collecting session, feature, exception, and custom data): PreEmptive Analytics APIs plus Dotfuscator and DashO (for injection of instrumentation without coding) support the full spectrum of PC, web, mobile, back-end, and cloud runtimes, languages, and application types.

Application quality (ensuring that data collection and transmission does not compromise application quality, performance, scale…): PreEmptive Analytics runtime libraries (regardless of the form of instrumentation used) are built to “always be on” and to run without impacting the service level of the applications being monitored.

Runtime data emission and governance (opt-in policy enforcement, offline-caching, encryption on the wire…): The combination of the runtime libraries and the development patterns supported with the instrumentation tools ensure that security, privacy and compliance obligations are met.

Specialized: the instrumentation patterns support every scale of organization from the entrepreneurial to the highly regulated and secure.

Adaptable: Application-specific data collection, opt-in policy enforcement, and data emission is efficiently and transparently configurable supporting every class of application deployment from consumer to financial, to manufacturing, and beyond… 
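The governance traits above boil down to two gates between the app and the wire: nothing is recorded without opt-in, and nothing is lost while offline. Here is a minimal, hypothetical sketch of that behavior – it is not PreEmptive's actual runtime library:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the governance behaviors described above:
// telemetry is dropped unless the user opted in, and cached locally
// while the endpoint is unreachable. Names are illustrative only.
public class TelemetryClient {
    private final boolean optedIn;
    private boolean online = false;
    private final Deque<String> cache = new ArrayDeque<>();
    private final List<String> delivered = new ArrayList<>();

    public TelemetryClient(boolean optedIn) { this.optedIn = optedIn; }

    public void track(String event) {
        if (!optedIn) return;          // opt-in policy enforced at the source
        if (online) delivered.add(event);
        else cache.add(event);         // offline cache, flushed on reconnect
    }

    public void setOnline(boolean nowOnline) {
        this.online = nowOnline;
        while (nowOnline && !cache.isEmpty()) {
            delivered.add(cache.poll());
        }
    }

    public List<String> delivered() { return delivered; }

    public static void main(String[] args) {
        TelemetryClient c = new TelemetryClient(true);
        c.track("Session.Start");      // queued: we start offline
        c.setOnline(true);             // reconnecting flushes the cache
        c.track("Feature.ExpenseSubmit");
        System.out.println(c.delivered()); // [Session.Start, Feature.ExpenseSubmit]
    }
}
```

Encryption on the wire would sit behind the delivery step; it is omitted here to keep the two gates visible.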

PreEmptive Analytics (Species traits)

Every organization must continuously pursue differentiation in order to remain relevant (to Survive). In a time when almost all business that organizations do is digitized and runs on software, custom applications are essential in providing this differentiation.

Specialized: PreEmptive Analytics has integrated and adapted all of these traits (from instrumentation to incident detection) to focus on connecting application usage and adoption to the business imperatives that fund/justify their development. As such, PreEmptive Analytics is built for the non-technical business manager, application owners, and product managers as well as development managers and architects.

Adaptable: Deployment, privacy, performance, and specialized data requirements are supported across industries, geographies, and architectures, providing a unified analytics view on every application for the complete spectrum of application stakeholders.

So what are you waiting for? Put down your brontosaurus burger and move your development out of the stone age.

Monday, March 23, 2015

Application Analytics: measure twice, code once

Microsoft recently announced the availability of Visual Studio 2015 CTP 6 – included among all of the awesome capabilities and updates was the debut of Dotfuscator Community Edition (CE) 2015. In addition to updates to user functionality (protection and analytics instrumentation capabilities), this is the first version of Dotfuscator CE to include its own analytics (we're using PreEmptive Analytics to anonymously measure basic adoption, usage, and user preferences). Here are some preliminary results… (and these could all be yours too, of course, using the very same capabilities from PreEmptive Analytics!)

Users by day, comparing new and returning users, shows an extremely low share of returning users – this indicates that users are validating that the functionality is present, but not actually using the technology as part of a build process. That makes sense given that this is the first month of a preview release – users are validating the IDE, not building real products on it.
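For reference, the new-vs-returning split is just bookkeeping over which anonymous user IDs have been seen on earlier days. A purely illustrative sketch:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the new-vs-returning split: a user is "new" on the first
// day their (anonymous) ID is seen, "returning" on any later day.
public class UserSplit {
    private final Set<String> everSeen = new HashSet<>();

    // Returns {newUsers, returningUsers} for one day's set of IDs.
    public int[] splitDay(Set<String> todaysUsers) {
        int newUsers = 0, returning = 0;
        for (String id : todaysUsers) {
            if (everSeen.add(id)) newUsers++;  // add() is true on first sight
            else returning++;
        }
        return new int[] { newUsers, returning };
    }

    public static void main(String[] args) {
        UserSplit split = new UserSplit();
        split.splitDay(Set.of("u1", "u2", "u3"));          // day 1: all new
        int[] day2 = split.splitDay(Set.of("u1", "u4"));   // u1 returns, u4 is new
        System.out.println(day2[0] + " new, " + day2[1] + " returning");
    }
}
```

A preview-heavy audience shows up as a large first bucket and a near-empty second one, which is exactly the pattern described above.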

Feature usage and user preferences are all readily available, including what percentage of users opt in (of course, an opt-in policy exists and is enforced), which runtimes they care about (including things like Silverlight, ClickOnce, and Windows Phone…), the split between those who care about protection and/or analytics, and the timing of critical activities that can impact DevOps.

Broad geolocation validates international interest and highlights unexpected synergies (or issues) that may be tied to local factors (language, training, regulation, accessibility, etc.)

This is an example of the most general, aggregated, and generic usage collection – of course, the same analytics plumbing can be used to capture every flavor of exception, user behavior, etc. – but ALWAYS determined by your own design goals, and the telemetry is ALWAYS under your control and governance – from "cradle to grave."

BOTTOM LINE: the faster you can iterate – the better your chances for a successful, agile, application launch – building a feedback driven, continuous ALM/DevOps organization cries out for effective, secure, and ubiquitous application analytics – how is your organization solving for this requirement?

Monday, January 12, 2015

Re-imagined Applications Demand Re-imagined Application Analytics

Traditional applications are being replaced with the many-to-many pairing of Apps to Services where core functionality is supplied via cloud-based software and delivered via a multitude of apps running across devices and runtimes. Beyond the obvious runtime combinatory complexity, the apps and services are typically developed by different organizations with independent release cycles under disparate business models. As a consequence, an application’s scope – the sum total of its software and content – has shifted from concrete to ethereal where ingredients can change or evolve from session to session.

Outdated analytics patterns can only offer limited insight

Analytics solutions built to focus on a single stack (or analyze stacks side-by-side), e.g. mobile apps or web sites or internal servers - or focus on a single stakeholder or persona, e.g. IT ops or web commerce – are poorly positioned to capture the dynamic, interoperable nature of modern application deployments or the increasingly diverse community of application stakeholders. 

Shared runtime data is the tie that binds components into an organic application

Apps track the services they call and services track the apps they serve through tokens and other shared parameters. Not every argument exchanged plays this role – consequently, a working knowledge of the components and their interfaces is required to effectively piece together individual sessions, users, and activities. 

PreEmptive Analytics: built for modern deployments and diverse stakeholder requirements

PreEmptive Analytics has been built from the ground up to offer an instrument-once and distribute-many approach supporting a portfolio of analytics endpoints as dynamic as the application components it monitors.

The following working sample illustrates how PreEmptive Analytics can instrument client and cloud components to provide unprecedented insight into app design, user behavior, and IT operations. The latest version of this app, the instrumentation, and the extensions to the PreEmptive Analytics Workbench can be found at GitHub PreEmptive Analytics Use Case Example

The sample app and the sample service

The sample app lets users submit anticipated expenses for pre-approval. The user identifies the expense category and estimated expense and submits the record to a managed service for centralized approval or rejection. The approval policies reside in the hosted service as do the historical records.

Every user session has an organization and a unique user ID associated with it – this both drives policy and provides the "hook" to connect the client's activity with the supporting software services.

The sample app is written in C# and instrumented with the PreEmptive Analytics API; Xamarin is then used to generate both Android and iOS instances.

The sample service is written in C#, also instrumented with PreEmptive Analytics and runs in an Azure Windows VM.

PLEASE NOTE – the analytics functionality demonstrated here is in no way dependent upon or specific to C#, .NET, Xamarin, Azure, Android, or iOS – this is one specific example to illustrate the general principles and capabilities of PreEmptive Analytics that can just as readily be applied to any WPF, Java, or C++ component – running on-premises and/or distributed across cloud services and devices.

Sample App Functionality

The app allows a user to time arbitrary workflows, throw exceptions, express preferences, and – last but not least – submit an anticipated expense for pre-approval.

The sample managed service

A user selects the expense category and estimated expense and submits the information for approval.

Based on the amount and other factors, the remote software service either approves or rejects the request. The client app informs the user in one of two ways.
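As a stand-in for that decision logic (the sample's actual policies live in the hosted service and are richer than this), a per-category spending limit might look like the following sketch – all names here are hypothetical:

```java
import java.util.Map;

// Hypothetical approval rule for a sample service of this kind:
// requests at or under a per-category limit are approved, everything
// else (including unknown categories) is rejected.
public class ApprovalPolicy {
    private final Map<String, Double> limits;

    public ApprovalPolicy(Map<String, Double> limits) { this.limits = limits; }

    public boolean approve(String category, double amount) {
        return amount <= limits.getOrDefault(category, 0.0);
    }

    public static void main(String[] args) {
        ApprovalPolicy policy = new ApprovalPolicy(
                Map.of("Travel", 500.0, "Meals", 75.0));
        System.out.println(policy.approve("Travel", 300.0)); // true
        System.out.println(policy.approve("Meals", 90.0));   // false
    }
}
```

Keeping the rule server-side, as the sample does, means every client (Android or iOS) sees identical decisions without shipping a new app build when the limits change.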

As mentioned above, users can track arbitrary work flows that span (or work within) page and/or method boundaries by starting and stopping the following timer.
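Conceptually, such a named start/stop timer looks something like the sketch below – the class and method names are hypothetical, not PreEmptive's actual API:

```java
// Hypothetical sketch of a named workflow timer: a span that can be
// started and stopped across page or method boundaries, with the
// elapsed time emitted as a telemetry event tagged by workflow name.
public class WorkflowTimer {
    private final String name;
    private long startedAt = -1;

    public WorkflowTimer(String name) { this.name = name; }

    public void start() { startedAt = System.nanoTime(); }

    // Returns elapsed milliseconds for the named workflow.
    public long stop() {
        if (startedAt < 0) throw new IllegalStateException("timer not started");
        long elapsedMs = (System.nanoTime() - startedAt) / 1_000_000;
        startedAt = -1;
        // A real runtime library would emit this as telemetry rather
        // than print it.
        System.out.println(name + ": " + elapsedMs + " ms");
        return elapsedMs;
    }

    public static void main(String[] args) {
        WorkflowTimer timer = new WorkflowTimer("ExpenseApproval");
        timer.start();
        // ... user navigates pages, fills out the expense form ...
        timer.stop();
    }
}
```

Because the span is named rather than tied to a method, it can cover a whole user journey – which is exactly what distinguishes workflow timing from simple method timing.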


Also, as mentioned above – each user can observe the department and ID they are working under in each session.

PreEmptive Analytics Results

The following dashboards illustrate the cross-section of analytics supporting the full spectrum of application stakeholders from dev to DevOps to business owners.


The overview page requires no special configuration and is immediately populated (with latency measured in seconds) as runtime telemetry comes in from production. All versions of all components are available for inspection across client devices and cloud services.

Even the vanilla overview page offers insights across component and stakeholder domains as illustrated by these four "feature" stats - all relate to the "expense approval" activity - but each represents a different perspective - providing insights into all of the moving pieces that come together to create the integrated user experience.

Timing may be everything but all time is relative

User behavior, user experience, application service level, managed service service levels

Even without any special configuration, PreEmptive Analytics automatically breaks out usage and timing of:
  1. The Azure-based approval service (item 1 above) – IT operations cares about this perspective, 
  2. The client-side call up to the Azure-based approval service (item 2 above) – dev and DevOps care about this perspective, 
  3. The time spent on the "mobile page" for expense approval (item 3 above) – UX design/dev care about this perspective, and 
  4. The time inside the larger workflow that leads a user to the mobile page (item 4 above) – the app owner cares about this perspective.

The close up of the feature tracking panel shows that 688ms of the client request is outside of the time actually consumed by the Azure service itself (690ms – 2ms). It also shows that once a user lands on the expense page, they spend almost 40 seconds filling it out and lastly, that the true workflow that takes the user into and out of this page is just over 50 seconds on average.
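The arithmetic behind that breakdown is worth making explicit: subtract the server's own processing time from the client-observed round trip, and the remainder is transport, serialization, and queuing overhead:

```java
// The arithmetic behind the timing breakdown above: if the client
// observes a 690 ms round trip and the service itself reports 2 ms of
// processing, everything else (network, serialization, queuing)
// accounts for the difference.
public class TimingSplit {
    public static long overheadMs(long clientObservedMs, long serverProcessingMs) {
        return clientObservedMs - serverProcessingMs;
    }

    public static void main(String[] args) {
        System.out.println(overheadMs(690, 2) + " ms outside the service"); // 688 ms
    }
}
```

This only works when both tiers are instrumented – which is the point of the multi-tier example: neither side alone can tell you where the 688 ms went.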

Application service levels

Deeper analysis is readily available as well – here the max, min, and average times needed to fulfill a client request are shown over time – alongside a "threshold" indicating a service-level goal for the client-side service.

Business activity

PreEmptive Analytics combines the multi-tiered instrumentation outlined above with application-specific data capture and analysis – enabling powerful business activity insights. The following chart shows the volume, ratios, and trending of expense request approvals versus rejections over time. This particular instrumentation is generated from the cloud-based service – ensuring an enterprise-wide view across applications, platforms, and users.
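Server-side, the rollup behind such a chart is a simple per-day outcome counter. A sketch with hypothetical names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the server-side rollup behind an
// approvals-vs-rejections chart: count outcomes per day and derive
// an approval ratio for each day.
public class ApprovalTrend {
    // day -> {approved count, rejected count}
    private final Map<String, int[]> byDay = new LinkedHashMap<>();

    public void record(String day, boolean approved) {
        int[] counts = byDay.computeIfAbsent(day, k -> new int[2]);
        counts[approved ? 0 : 1]++;
    }

    public double approvalRatio(String day) {
        int[] counts = byDay.getOrDefault(day, new int[2]);
        int total = counts[0] + counts[1];
        return total == 0 ? 0.0 : (double) counts[0] / total;
    }

    public static void main(String[] args) {
        ApprovalTrend trend = new ApprovalTrend();
        trend.record("2015-01-12", true);
        trend.record("2015-01-12", true);
        trend.record("2015-01-12", true);
        trend.record("2015-01-12", false);
        System.out.println(trend.approvalRatio("2015-01-12")); // 0.75
    }
}
```

Emitting this from the cloud service rather than each client is what guarantees the enterprise-wide view the paragraph above describes.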

Server record of expense requests


PreEmptive Analytics goes far beyond counting occurrences of application-specific data – any data point can be used to segment runtime telemetry as well – providing powerful, contextual insights as illustrated below.

Usage and experience

Recall that each client session is assigned a department (or role) and a user ID. The following panel breaks out usage, users and exceptions by organization (a server-side lookup of the user ID) AND by the role.

NOTE that these dimensions can also be used as filters, allowing stakeholders to focus on the most important organizations and the most important roles inside those organizations. Below is a view into approvals and rejections by organization and role.

Business activity

Selecting any combination of organizations and roles sets the focus to the most important constituents to my operation – for the first time, I can segment, monitor, and optimize for the organizations and people that matter most. 

Bias by organization

A user can simply select an organization (which is indexed through a CRM look-up of the license key as the data streams in from production at runtime) and view the usage, stability, and quality of only the users from that organization (one or more can be selected).

After selecting "Up And Away Inc." you can view both system activity and a business activity summary.

Bias by role

Similarly, selecting just the "VIP" role shows VIP activity across organizations.  

Keep in mind that the data in these tables is a "joined" view combining client-side information (role and activity) and cloud-based computing (request approval statistics).

The same business optimization can be applied to production incidents to support DevOps and support. The following panel shows the activity by user ID drilling down into specific exceptions.

Bias for DevOps

What’s next?

If your business is (or will soon be) dependent on applications whose logic is distributed across devices and runtimes, and you believe that application development should be AT LEAST as customer-centric and attuned to your business' priorities as any other part of your organization – then upgrading application analytics needs to be a priority (not much different than building application security into the dev process rather than bolting it on after).

Contact me directly or visit PreEmptive’s site to explore how we’re helping organizations develop their application analytics practice to improve quality, satisfaction, and development ROI.

Friday, December 12, 2014

Welcome to The Show(1) HockeyApp

In what can be described as the latest Snipe(2) in the Barn Burner(3) game we call application analytics, Microsoft announced its acquisition of HockeyApp. Most of the early commentary I’ve read seems to focus on the fact that Microsoft has invested in a native iOS/Android API, but to me that’s not the most interesting nugget… what’s most interesting to me is that HockeyApp has been built to be a hardcore Stay-at-home defenseman (4) (that’s the last hockey pun, I swear, see definitions below). 
(1) The Show (noun): the NHL, used in the context of “making it to The Show”.
(2) Snipe (noun): a powerful or well-placed shot that results in a pretty goal.
(3) Barn Burner (noun): used to describe a game that is high scoring, fast paced, and exciting to watch.
(4) Stay-at-home defenseman: A defenseman who plays very defensively. He doesn't skate with the puck toward the offensive zone very often but will look to pass first. Usually the last player to leave his defensive zone.

Maybe I'm too close to this space, but hasn't Microsoft been consistent and clear in its communications that it was always the plan to have Application Insights support native iOS and Android apps? So they bought some technology and talent to accelerate the process… that's neither novel nor controversial – seems like a smart move.

What DOES strike me as interesting is HockeyApp’s focus on analytics for testing versus production – and for enterprise use versus consumer-facing.

Microsoft is in an all-out sprint to deliver a comprehensive and fully integrated dev-devops-ops ecosystem where the distinctions between enterprise, b2b, and consumer categories dissolve – and, with HockeyApp, they appear to be killing two birds with one stone; native iOS and Android APIs AND an analytics framework optimized for test and other (relatively) small and well-defined user communities (such as some enterprise scenarios) – two areas where Microsoft has traditionally been quite successful.

Support for side loading, user-by-user index buckets, and a privately managed faux marketplace all work well in these scenarios, but (I would suggest) the very same implementations will struggle under the strain of high-volume 24x7 operations - but that's OK, that's not the intent.

It’s clear that, as application analytics matures as a category, we should expect to see increasing specialization and segmentation – software built to track shopping cart behavior and user loyalty (or generate system logs) is not going to be able to cover these increasingly well-defined use cases. 

Here's my latest chart comparing the various "categories" of application analytics solutions... (all errors and omissions are obviously mine and mine alone)

As always, checkout the latest on PreEmptive Analytics at www.preemptive.com/pa 

Thursday, November 20, 2014

Application Analytics Innovation: Wolters Kluwer, CCH Gets It

I've been working on application analytics use cases and scenarios going on nine years now – and I spend a good deal of my time supporting (and learning from) dev teams of all shapes and sizes – and having said that, I’m pleased to say, this week was a first for me. This week I had the good fortune to sit in on the final hours of a two day Code Games inside Wolters Kluwer, CCH.

Coding competitions/events like this are nothing new, but running a good one is never easy – required ingredients include a positive, nurturing culture, some serious organizational and editorial skills, and (of course) sharp developers. On this day, Wolters Kluwer, CCH had all three on display in spades.


Now, I’m not the first to take notice (those of you that know me know that one of my favorite aphorisms is that ideas only have to be good – they don’t need to be original). Forbes covered Wolters Kluwer’s code games earlier this week in their article Top Women CEOs On How Bold Innovation Drives Business. Karen Abramson, CEO, Wolters Kluwer Tax and Accounting highlights their “Code Games” as one of the three pillars in “a constant eco-system of innovation across the organization.”

SERIOUS ORGANIZATIONAL AND EDITORIAL SKILLS? YES! (… and here’s where it gets interesting)

I wish that I could share some of the awesome presentations I heard on that night (there were truly some awesome ones), but an executed NDA was the prerequisite for my attendance. What I CAN relay is that their Code Games was the first one I've seen where “most innovative use of application analytics” was one of the award categories. At the end of the night, one of the two Code Games organizers, Elizabeth Weissman, Director of Innovation, iLab Solutions at Wolters Kluwer, CCH said that she thought the application analytics category was a perfect complement to the other wholly business-focused ones because it sent the dev teams the important message that “application analytics need to be a part of an application’s design from the very beginning – not an afterthought.”

It is worth noting that the other mainstream award categories (which were, in fact, more prestigious because they were focused squarely on core business impact) were judged on a) Innovation, b) Technical Achievement, and c) Potential Value Generation. As such, no team would have included application analytics at all if they did not believe upfront that it would contribute in some material way to one or more of these three criteria.

…but, for me, it’s bigger than that – as those teams that included app analytics presented to the Code Games judges, those judges (and the 150+ dev. audience members) also got the message that app analytics is not just for website forensics and user clicks; and in this case, the judges panel included Wolters Kluwer, CCH executives, Teresa Mackintosh, President & CEO, Mark Lawler, VP Software Development, Brian Diffin, Executive VP Global Technology, and some of Wolters Kluwer, CCH’s own VIP clients – and now they all get it too!
From right to left, Elizabeth, Teresa, Bernie, and me (photo-bombing this Wolters Kluwer, CCH "A-team")

How'd they do it? Working with Bernie Hirsch, Director, Software Development at Wolters Kluwer, CCH (the other half of the Code Games organizer dynamic duo), we set up a privately hosted PreEmptive Analytics endpoint in an Azure VM that matched their existing production analytics environment and allowed dev teams to securely and easily add analytics to their projects – whether or not the apps ran on-premises, used client data, connected to internal systems, etc.


As I've already said, I can’t describe specifically what these teams built, but here are a few factoids:

  • Every team that decided to include app analytics succeeded. 
  • The teams instrumented apps running .NET, Java Script, and mobile surfaces and the apps themselves were both customer facing and internal, line of business apps. 
  • The teams collected session and usage data, exceptions, timing, and custom-app-specific telemetry too. 
  • While the applications ran the gamut from on-premises LoB and client-facing, all of the app telemetry was transmitted to Azure-hosted (private cloud) endpoints (and one app then pulled the data out and back into the very app that was being monitored! – but now I have to stop before I say too much). 
  • Not all teams incorporated analytics into their projects, but the most decorated team was one of those that did – NOT to track exceptions or page views – but as the backbone to one of their most powerful data-driven features for the business.
Developer presentations ran into the evening in front of a packed house and 150+ employees watching remotely.
So there we have it, Wolters Kluwer, CCH brought together the culture, the organizational savvy, and the technical talent to pull-off what was truly an exceptional event.  ...and I'm grateful that I had the chance to come along for the ride. Cheers!

Wednesday, November 5, 2014

Application protection – why bother?

(…and, no, this is not a rhetorical question)

Why should a developer (or parent organization) bother to protect their applications? Given what PreEmptive Solutions does, you might think I’m being snarky and rhetorical – but, I assure you, I am not. The only way to answer such a question is to first know what it is you need protection from.

If you're tempted to answer with something like "to protect against reverse engineering or tampering," that is not a meaningful answer – your answer needs to consider what bad things happen if/when those things occur. Are you looking to prevent piracy? Intellectual property theft? AGAIN – not good enough – the real answer is going to have to be tied to lost revenue, operational disruption resulting in financial or other damage, etc. Unless you can answer this question, it is impossible to appropriately prioritize your response to these risks.

If you think I’m being pedantic or too academic, then (and forgive me for saying this) you are not the person who should be making these kinds of decisions. If, on the other hand, you’re not sure how to answer these kinds of questions – but you understand (even if only in an intuitive way) the distinction between managing risks (damage) versus preventing events that can increase risk – then I hope the following distillation of how to approach managing the unique risks that stem from developing in .NET and/or Java (managed code) will be of value.

First point to consider: managed code is easy to reverse engineer and modify by design – and there are plenty of legitimate scenarios where this is a good thing.

Your senior management needs to understand that reverse engineering and executable manipulation is well-understood and widely practiced. Therefore, if this common practice poses any material risks to your organization, they are compelled to take steps to mitigate those risks – of course, if this basic characteristic of managed code does not pose a material risk, no additional steps are needed (nor should they be recommended).

Second point to consider: reverse engineering tools don’t commit crimes – criminals do; but criminals have found many ways to commit crimes with reverse engineering (and other categories of) tools.

In order to be able to recommend an appropriate strategy, a complete list of threats is required – simply knowing that IP theft is ONE threat is not sufficient – if the circulation of counterfeit applications poses an incremental threat, you need to capture this too.

Third point to consider: Which of the incident types above are relevant to your specific needs? How important are they? How can you objectively answer these kinds of questions?

Risk management is a mature discipline with well-defined frameworks for capturing and describing risk categories; DO NOT REINVENT THE WHEEL. How significant (material) a given risk may be is defined entirely by the relative impact on well-understood risk categories. The ones listed above are commonly associated with application reverse engineering and tampering - but these are not universal nor is the list exhaustive.

Fourth point to consider: How much risk is too much? How much risk is acceptable (what is your tolerance for risk)? …and what options are available to manage (control) these various categories of risk to keep them within your organization’s “appetite for risk?”

Tolerance (or appetite) for risk is NOT a technical topic – nor are the underlying risks. For example, an Android app developed by 4 developers as a side project may only be used by a small percentage of your clients to do relatively inconsequential tasks – the developers may even be external consultants – so the app itself has no real IP, generates no revenue, and is hardly visible to your customer base (let alone to your investors). On the other hand, if a counterfeit version of that app results in client loss of data, reputation damage in public markets, and regulatory penalties – the trivial nature of that Android app really won't have mattered.

In other words, even if the technical scope of an application may be narrow, the risk – and therefore the stakeholders – can often be far reaching.

Risk management decisions must be made by risk management professionals – not developers (you wouldn't want risk managers doing code reviews would you?).

Fifth point to consider: what controls are available specifically to help manage/control the risks that stem from managed code development?

Obfuscation is a portfolio of transformations that can be applied in any number of permutations – each with its own protective role and its own side effects.

Tamper detection and defense as well as regular feature and exception monitoring also have their own flavors and configurations.
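To make one of those flavors concrete: a common tamper-detection pattern records a digest of the protected artifact at build time and re-checks it at runtime. The sketch below is a conceptual illustration only – the defenses injected by real products are considerably more involved (and better hidden):

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Conceptual sketch of one tamper-detection flavor: a digest of the
// shipped artifact is recorded at build time; at runtime the app
// re-hashes itself and reacts if the digest no longer matches.
public class TamperCheck {
    public static byte[] digest(byte[] artifact) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(artifact);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static boolean isTampered(byte[] artifact, byte[] expectedDigest) {
        return !Arrays.equals(digest(artifact), expectedDigest);
    }

    public static void main(String[] args) {
        byte[] shipped = "original IL bytes".getBytes();
        byte[] expected = digest(shipped);                       // recorded at build
        System.out.println(isTampered(shipped, expected));        // false
        System.out.println(isTampered("patched".getBytes(), expected)); // true
    }
}
```

What the app does on detection (exit, degrade, phone home via analytics) is the "defense" half of the control – and, per the points above, that response should be sized to the risk, not to the technology.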

Machine attacks, human attacks, attacks whose goal is to generate compilable code versus those designed to modify specific behaviors while leaving others intact – all call for different combinations of obfuscation, tamper defense, and analytics.

The goal is to apply the minimum levels of protection and monitoring required to bring identified risk levels down to an acceptable (tolerable) level. Any protection beyond that level is overkill. Anything less is wasted effort. …and this is why mapping all activity to a complete list of risks is an essential first step.

Sixth point to consider: the cure (control) cannot be worse than the disease (the underlying risk). In other words, the obfuscation and tamper defense solutions cannot be more disruptive than the risks these technologies are designed to manage.

Focusing on the incremental risks that introducing obfuscation, tamper defense, and analytics can introduce, the following questions are often important to consider (this is a representative subset – not a complete list):
· Complexity of configuration
· Flexibility to support build scenarios across distributed development teams, build farms, etc.
· Debugging, patch scenarios, extending protection schemes across distinct components
· Marketplace, installation, and other distribution patterns
· Support for different OS and runtime frameworks
· Digital signing, runtime IL standards compliance, and watermarking workflows
· Mobile packaging (or other device specific requirements)
· For analytics there are additional issues around privacy, connectivity, bandwidth, performance, etc.
· For commercial products, vendor viability (will they be there for you in 3 years) and support levels (dedicated trained team? Response times?)

So why bother?
Only if you have well-defined risks that are unacceptably high (operational, compliance, …)
AND the control (technology + process + policy) reduces the risk to acceptable levels
WITHOUT unacceptable incremental risk or expense.

Wednesday, October 8, 2014

Welcome Xamarin Insights (seeing the forest through the trees)

First, let me state for the record that I am a huge fan of Xamarin - when I say this, I mean to include both their great technology and their people (I've only met a few, but they've never disappointed). So with that out of the way, I listened with great interest as they announced Xamarin Insights at their user group this morning. As someone with a personal stake in the broad category of application analytics, you can imagine that when a company like Xamarin enters my space, they're going to get my undivided attention.

My first reaction was that the name "Xamarin Insights" sounded a lot like Microsoft's "Application Insights" and as I watched the presentation and then reviewed the web content, the similarities grew even stronger.

Of course, if you're a developer on either of the (*) Insights teams you're going to be mildly offended by this last statement as you no doubt see STARK differences - and, at some important level, you're probably right - but I'm not on either dev team, I'm part of the PreEmptive Analytics team and so this is the area where I see the "STARK differences." ...and so that has prompted me to populate the following table comparing all three, Xamarin Insights, Application Insights, and PreEmptive Analytics.

I've tried to focus on material differences that are most likely to make one approach more effective than the other two - and to make this crystal clear - there are scenarios where each option is better suited than the other two - so understanding YOUR requirements is the first and MOST IMPORTANT step in selecting your optimal analytics solution.

Xamarin Insights
  • Targeted appeal: Enterprises and ISVs targeting modern platforms
  • Release status: Free with pricing TBD
  • Applications supported: API for C#/F# supporting native Xamarin targets (end-user apps only)
  • Endpoint/analytics engine and portal: Multi-tenant hosted by Xamarin
  • Events: atomic mobile & page; Exceptions: unhandled and caught; Custom: strings; System and performance: mobile only
  • Supported organizations: Xamarin devs ONLY
  • Data dimensions: Only data originated inside an app can be analyzed

Application Insights
  • Release status: Free with pricing TBD
  • Applications supported: API for C/C#/F# and JavaScript supporting Microsoft targets (MODERN client-side AND server-side apps/components)
  • Endpoint/analytics engine and portal: Multi-tenant hosted by Microsoft
  • Events: atomic mobile & page; Exceptions: unhandled; Custom: strings; System and performance: modern only
  • Supported organizations: Microsoft-based devs ONLY
  • Data dimensions: Data inside an app AND data accessible from within an Azure account can be analyzed
  • Embedded inside Visual Studio: starting with VS2013/14

PreEmptive Analytics
  • Targeted appeal: Enterprises and ISVs with established app portfolios driving large, regulated, and secure operations extending into modern/mobile
  • Release status: Licensed by product component
  • Applications supported: All apps supported by (*)Insights PLUS C, C++, Java, traditional .NET, middle-tier, on-premises, etc.
  • Endpoint/analytics engine and portal: On-premises or hosted – hosting can be by a 3rd party or by PreEmptive
  • Events: all (*)Insights events PLUS arbitrary workflow and in-code spans; Exceptions: unhandled, caught, and thrown; Custom: strings and serialized data structures from multiple sources; System and performance: all runtimes and surfaces
  • Supported organizations: All devs supported by (*)Insights PLUS all other enterprise, ISV, and embedded app devs
  • Data dimensions: Any data source available within an enterprise or via external services can be mashed up to enrich telemetry
  • Embedded inside Visual Studio: since 2010

The chart also compares opt-in/out policy enforcement, offline caching, extensible indexing and UI on a role-by-role basis (app owner, dev mgr, etc.), injection of instrumentation for managed code, user and organization metrics (including integration with enterprise credentials), and automatic creation of TFS work items based upon business rules and patterns.

One thing I know for sure – no one will be building applications without analytics in the next few years. Figuring this out for YOUR dev requirements will be a critical requirement soon enough – it's not a question of IF, only WHEN – so, if applications are an important part of your life, this is something that you cannot postpone for much longer (it may already be too late!). Enjoy!