Friday, June 24, 2011

Improving Ad performance: Correlating ad activity with feature usage and user behavior

In this third installment on application analytics patterns and practices, I'm going to focus on how Runtime Intelligence can be used to shed light on ad activity within the context of one or more applications. The use cases covered here are nowhere near exhaustive, but I'm going to show how to answer the following questions (and hopefully give some indication of why you might care about the answers):

  • What are ad impression volumes across multiple apps?
  • What are the click-through rates (the ratio of users clicking on ads to the volume of impressions) across various pivots?
  • What influence does culture (country of origin) have on click-through rates, e.g. are Germans more or less likely to click on ads versus Italians?
  • Which carriers/ISPs are giving me the most business, i.e. where are my users most likely to be found?
  • Where are users spending most of their time inside an app? Does that usage pattern correlate with a user’s likelihood of clicking on an ad?
  • Do returning users interact with ads differently than first-time users or power users?
Many of these metrics are valuable in scenarios other than ad effectiveness, of course (knowing where users spend their time and understanding how power users behave are two obvious examples), but for this installment I am going to focus exclusively on how ad interaction can be viewed across these metrics.

Implementation
I'm using the same trusty one-line method WhatPoseWhen that I described in the first installment – this time, I call the method on the New Ad event (to count impressions) and the Ad Engaged event (to count clicks on ads). I could just as easily collect data on any other ad-related event, and grab any data that is available to the program at that point in its execution. Here is the code for that method in its entirety:


// Intentionally empty – Dotfuscator injects the data collection call at build time.
private void WhatPoseWhen(string page, string selection)
{ return; }

The first parameter tells me which page the method is being called from, and the second tells me why I might care (e.g. a new ad was displayed).

I pass the page name and the ad event into WhatPoseWhen, and Runtime Intelligence grabs these parameters and sends them up to the repository (no programming for this). I can then correlate the ad activity with the session, feature usage, and runtime stack data that I am already getting as part of Runtime Intelligence.
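
For illustration, here is a minimal sketch of how those calls might be wired up on one page. The ad control field (adControl) is assumed to be declared in the page's XAML, and the event names (AdRefreshed, IsEngagedChanged) are assumptions about the ad SDK – substitute whatever events your ad control actually exposes for new-ad and engagement notifications:

using Microsoft.Phone.Controls;

public partial class FindAPoseDetail : PhoneApplicationPage
{
    public FindAPoseDetail()
    {
        InitializeComponent();

        // Count an impression each time a fresh ad is served on this page.
        adControl.AdRefreshed += (s, e) => WhatPoseWhen("FindAPoseDetail", "New Ad");

        // Count an engagement when the user interacts with the ad.
        adControl.IsEngagedChanged += (s, e) => WhatPoseWhen("FindAPoseDetail", "Ad Engaged");
    }

    // Intentionally empty – Dotfuscator injects the data collection call at build time.
    private void WhatPoseWhen(string page, string selection) { return; }
}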

For these metrics, I export my CSV data into a regular Excel spreadsheet and then generate the pivot tables shown below.

App background
I always like to use data from true production apps rather than fabricate data sets; I am using two apps that I wrote and launched on the marketplace that are both ad-driven, Yoga-pedia and A Free WPC Yogi – the former is a free version of a yoga app that (hopefully) helps to drive sales of an upgrade to A Pose for That. A Free WPC Yogi plays a similar role for The WPC Yogi, a tailored version of A Pose for That targeting WPC 2011 attendees.


The following post uses their ad activity over the same one-week period.

Impression counts
The following pie chart shows the “new ad” event count by application. As you can see, Yoga-pedia has roughly 4X the number of ad impressions, and because these apps are very similar (but not identical) in their behavior, this also roughly reflects their relative volume of usage.


Click-through rates
However, when I divide the total number of “ad engaged” events by the total number of “new ads,” I see that A Free WPC Yogi has a 28% higher click-through rate (1.78% versus 1.37%). In point of fact, the demographics of the two apps' users are quite different (random consumers versus MSFT partners attending WPC 2011).
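
If you prefer to compute this outside of Excel, here is a hedged sketch of the same calculation over the exported rows. The CsvRow class and its property names are hypothetical stand-ins for however you map the CSV columns (ApplicationGroupId and ANID are real columns discussed later in this post; the rest are placeholders), and the same shape is reused in the later sketches:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of a parsed CSV row – property names are placeholders.
class CsvRow
{
    public string Application        { get; set; }
    public string Page               { get; set; }
    public string EventName          { get; set; }   // e.g. "New Ad" or "Ad Engaged"
    public string ApplicationGroupId { get; set; }   // one value per app session
    public string Anid               { get; set; }   // hashed ANID or generated GUID
}

static class ClickThroughReport
{
    public static void Print(IEnumerable<CsvRow> rows)
    {
        foreach (var app in rows.GroupBy(r => r.Application))
        {
            double impressions = app.Count(r => r.EventName == "New Ad");
            double clicks      = app.Count(r => r.EventName == "Ad Engaged");
            double ctr         = impressions == 0 ? 0 : clicks / impressions;
            Console.WriteLine("{0}: {1:P2} click-through", app.Key, ctr);
        }
    }
}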


Advantage: This intelligence helps to segment users by differences in their behavior and to do a better job of targeting those differences across apps.

Impressions by country (or culture)
Runtime Intelligence can grab the IP address of the sending tower – this is not personally identifiable and cannot be used to locate an individual with any precision – but it is more than adequate to identify country, state, and city. In the following graph, I simply count new ad events by country and show the top 10 countries by impression volume.


Advantage: If your app has a cultural bias that would benefit from localization, understanding where your users are can help prioritize those localization efforts.



Click-through rates by country (or culture)
The following bar chart shows the click-through rates for the top 10 countries listed above. What is interesting here is that there appears to be a significant difference in click-through rates by country (culture).


Advantage: Understanding when/if users from specific cultures are significantly more likely to respond to (click on) ads can further help to prioritize localization or marketing investments.

Impressions by ISP (top 25)
To produce the next graph, I used an application to tell me who owned the IP addresses that my mobile clients are using (I used IP2Location – but there are many of them out there).

This is a nice way to see which carriers my users favor. Here I show only the top 25.

Advantage: Understanding carrier popularity will help focus business development/marketing efforts and better manage the risk that your users will be negatively impacted by carrier upgrade schedules (or delays). Will your next app be dependent upon Mango?



Sessions per app page
In the raw CSV files that can be exported from the Runtime Intelligence portal, there is a column, ApplicationGroupId. The value in this column is shared by all signals (messages) sent from within a single app session and is unique to that session. In other words, I can use this field to organize all user activity into its respective user sessions. This is helpful for plotting specific user patterns.

The following graph simply counts the unique occurrences of ApplicationGroupId values by page name value (recall that the page name is the first parameter of the WhatPoseWhen method). This avoids counting multiple views of a single page within a single session and tells me how popular specific pages are across my user base. For this post, and for illustration, I'm showing data for only five specific pages.
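
As a sketch, counting a page at most once per session boils down to a distinct count of ApplicationGroupId values per page name (reusing the hypothetical CsvRow shape from the earlier sketch):

using System.Collections.Generic;
using System.Linq;

static class PagePopularity
{
    // Count each page at most once per session: distinct ApplicationGroupId per page name.
    public static Dictionary<string, int> SessionsPerPage(IEnumerable<CsvRow> rows)
    {
        return rows
            .GroupBy(r => r.Page)
            .ToDictionary(
                g => g.Key,
                g => g.Select(r => r.ApplicationGroupId).Distinct().Count());
    }
}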


FindAPoseDetail and BrowseSelectPose are central to the user experience (browsing for yoga poses and then drilling into a specific pose for detailed imagery and instruction). TellMeMore is the page where I describe what comes with the paid version of the app (nice to see that 10% of my users deliberately choose to investigate the upgrade possibility). AppGuide and TopicList are essentially app documentation, and I can see that these pages are not hit very often – and that's not a bad thing: users should not need the documentation after their first use.

So – this graph is telling me that

a) My users are spending their time using the app rather than figuring out how to use the app
b) I am at least getting my users' attention regarding a possible upgrade – if my conversion rate does not follow, perhaps my upgrade content is not compelling enough.

Advantage: Broad user profiling can be used to validate developer assumptions about the user experience and about how effectively each page serves its specific purpose.

Ads shown per page compared to volume of times viewed
Next, I calculate the average number of ads shown per page by dividing the total count of New Ad messages by page (this combines the two parameters: page name and the event New Ad) by the total count of times the page is visited:

TOTAL ADS SHOWN PER PAGE / TOTAL TIMES PAGE VISITED

I use the same ad duration interval across all of my pages – so this is effectively another way of measuring how much time my users spend on each page (this can be done with Runtime Intelligence alone, but in this case I don't have to).
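
A hedged sketch of that division, again over the hypothetical CsvRow rows, would look something like this:

using System.Collections.Generic;
using System.Linq;

static class AdFrequency
{
    // Average ads shown per page visit: New Ad count for a page divided by the number
    // of distinct sessions in which that page was viewed.
    public static Dictionary<string, double> AdsPerPageVisit(IEnumerable<CsvRow> rows)
    {
        return rows
            .GroupBy(r => r.Page)
            .ToDictionary(
                g => g.Key,
                g => (double)g.Count(r => r.EventName == "New Ad")
                     / g.Select(r => r.ApplicationGroupId).Distinct().Count());
    }
}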

The graph below shows the average number of ads shown per page and maps them to where they rank in terms of how often the page is visited.


Happily, the two core pages of my app also get the most ads (and are also where my users are stopping to spend time). I can also see that users spend more time on detailed pose descriptions than they do browsing – even though they browse more often than they drill down (which makes perfect sense).

Sadly, my upsell page is getting the least love – I definitely have to work on making this page more engaging.


Advantage: Ad frequency by page provides insight into where users spend their time. Calculating click-through rates by page identifies where users stop to look around and may be most open to suggestion.

Returning users and sessions per user
Another column in the CSV extract is the ANID – this is either the result of hashing the true ANID from a user's phone (it is not the actual ANID value), or, if the user opts out of that, a GUID generated by our software and written to isolated storage. In either case, this value acts as a unique user identifier.

The ANID can be used to identify new and returning users. Dividing the number of distinct sessions (ApplicationGroupId values) by the number of distinct ANIDs gives the average number of sessions per user. The following bar chart takes the 10 ANIDs with the highest session counts and compares their sessions-per-user value to that of the rest of the user base (roughly 500 other users).
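
Here is a hedged sketch of that comparison (the hypothetical CsvRow shape again; the top-10 cut-off mirrors the chart described here):

using System;
using System.Collections.Generic;
using System.Linq;

static class UserSegments
{
    public static void CompareTop10ToRest(IEnumerable<CsvRow> rows)
    {
        // Distinct sessions per user, heaviest users first.
        var sessionsByUser = rows
            .GroupBy(r => r.Anid)
            .Select(g => new { Anid = g.Key,
                               Sessions = g.Select(r => r.ApplicationGroupId).Distinct().Count() })
            .OrderByDescending(u => u.Sessions)
            .ToList();

        var top10 = sessionsByUser.Take(10).ToList();
        var rest  = sessionsByUser.Skip(10).ToList();

        Console.WriteLine("Top 10 users: {0:F1} sessions per user", top10.Average(u => u.Sessions));
        if (rest.Count > 0)
            Console.WriteLine("Everyone else: {0:F1} sessions per user", rest.Average(u => u.Sessions));
    }
}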


What I see is that there is a core group of users that are heavily using my apps (YAY!). Now that I know who they are, I can zero in on their specific behaviors, how they relate to my ads, what features they use most heavily, etc.


Advantage: Segmenting users into new, returning, and power categories dramatically improves a developer’s ability to target, prioritize, and validate development, marketing, and support activities.

Conclusion

I hope to have shown how, using Runtime Intelligence, developers can materially improve their ability to build more effective applications and refine their advertising strategy while coordinating it with complementary upsell strategies.


Advantage: Development!

Tuesday, June 14, 2011

Increasing App sales with Analytics: Free apps versus trials

In my previous entry I introduced my app, A Pose for That, and explained how I had instrumented it with Runtime Intelligence to better track user experience and behavior. As a case in point, I illustrated how, by strategically placing upgrade opportunities in various locations inside my trial version, I was able to increase my conversion rate – perhaps by as much as 50%!

HOWEVER, when I was at MIX11, a very experienced developer (let’s call him David because that’s actually his name) told me that he had already established the optimal app blend to maximize revenue – it was to have a free app (not a trial version) that also offered ways to upgrade to the premium app. He pointed out that trial apps do not show up as free in the marketplace and are therefore almost always overlooked by most casual marketplace browsers. A free app gets the eyeballs that a trial misses.

Those who know me know that one of my core principles is that my ideas never have to be original, they only have to be good – so is David's idea really a good one?

To test it out, I created Yoga-pedia, a free app that included the browsing capabilities of A Pose for That with good imagery and instruction, but did not include the pairing of poses to real-world situations (a feature I believe is valuable) or flows (the stringing together of multiple poses). On the welcome page (and in one or two other places) I give users a chance to learn more about our software and upgrade. Here are the welcome page and the “tell me more about why I should upgrade” page.

I instrumented the various points where users can upgrade in both the trial and the free app so that I can compare BOTH the usage levels of the two apps AND the upgrade requests that stem from that usage. So… let's go to the video tape – or better yet, Runtime Intelligence. (Note – these specific graphs are built by extracting the data from the Runtime Intelligence repository into a spreadsheet and then generating a simple pivot table.)

Looking at application starts (not downloads in the marketplace sense of the word), the graph showing App Runs seems to support David's logic; my free version, Yoga-pedia, takes off like a rocket and within 24 hours eclipses trial activity in dramatic fashion. But it also seems to be cannibalizing trial activity – should I care? (NOTE – I am combining usage of multiple applications – not always easy to do with canned dashboards.)

I probably should care IF users are more likely to upgrade from A Pose for That trials versus from Yoga-pedia. In other words, are my sales going up because of the free app even though it is depressing my trial volume? Let’s go to Runtime Intelligence one more time…



What the graph above shows is that upgrades from my trials also decreased dramatically with the launch of Yoga-pedia, BUT the volume of upgrade requests from within Yoga-pedia more than made up for that shortfall. (NOTE – I am combining feature usage across multiple applications – not always easy to do with canned dashboards)

In the one week where both the free and the trial versions lived side-by-side, the free version generated 86% of the upgrades.



More important is the bottom line: I saw an 85% increase in the total number of upgrades when I had the combination of both a free and trial version of my app available.

Coming up next (I promise this time) will be a discussion of the last leg of David’s magic formula for success – making your free version ad-driven. What will Runtime Intelligence be able to tell us about that?

Wednesday, June 8, 2011

Implementing Customer Feedback Forms AND fine tuning try/buy strategies with Runtime Intelligence

My Adventures in WP7 App Development: a beginner’s tale

I deployed my first WP7 app to the marketplace on May 13th. Prior to that, I had not written a line of code in nearly 20 years and so I think I can safely call myself a beginner. The fact that I might actually have something to share with the broader (and almost universally more experienced) development community shows how effective development tools have become and how wide-open the smartphone market is at this point in history.

My app (A Pose for That and the free alternative Yoga-pedia) pairs user situations and ailments with yoga poses – the app essentially uses the smartphone as an intelligent, just-in-time publishing platform.

USER SURVEY DATA TRANSPORT, MANAGEMENT, AND REPORTING

One of the capabilities I wanted to include in the app was a simple user survey form – I wanted to know how often users practiced yoga on their own and whether they hoped that this app would increase or improve their yoga practice – but I didn’t want to ask users to write an email (too much effort for them) and I did not want to incur the extra programming, setup, and expense of implementing my own content management store (too much effort for me).

Here’s what I did, for free, and with (virtually) no programming whatsoever…

I used Expression Blend to build my form (no programming), Dotfuscator for Windows Phone to inject the data collection transport logic (no programming), and the Runtime Intelligence Service for Windows Phone to store, manage, and publish user responses (again, no programming). I had to write one method (one that I actually reuse in a variety of ways, including try/buy strategy tuning and ad monitoring, which I will blog more about later).

That method (in its entirety) is:

// Intentionally empty – Dotfuscator injects the data collection call at build time.
private void WhatPoseWhen(string page, string selection)
{ return; }

…but I am getting ahead of myself. Here is a screen shot of the survey form:

It asks my two basic questions, each with a 3-way radio button to indicate true, false, or no comment. When the user leaves this page for any reason other than a tombstone event, I construct a single string that captures the user's responses. For example, if the user answered in the affirmative on both counts, the app assembles the string “I practice yoga 2X per week or more And I hope that this app will increase and/or improve my practice.” and puts it in a local variable UserFeedBack. If the user answers both in the negative, I just assign the string “And.” Then, I call my custom method (above) like so:

WhatPoseWhen("feedback", UserFeedBack);
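
For context, here is a minimal sketch of what that page's code-behind might look like. The control names (PracticeYes, ImproveYes) are hypothetical stand-ins for the radio buttons; only the assembled strings and the WhatPoseWhen call come from the app as described above:

using System.Windows.Navigation;
using Microsoft.Phone.Controls;

public partial class Feedback : PhoneApplicationPage
{
    public Feedback()
    {
        InitializeComponent();
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);

        // Assemble one string from the two radio button groups (control names are hypothetical).
        string userFeedBack =
            (PracticeYes.IsChecked == true ? "I practice yoga 2X per week or more " : "")
            + "And"
            + (ImproveYes.IsChecked == true ? " I hope that this app will increase and/or improve my practice." : "");

        // Dotfuscator grabs both parameters and sends them to the Runtime Intelligence portal.
        WhatPoseWhen("feedback", userFeedBack);
    }

    // Intentionally empty – Dotfuscator injects the data collection call at build time.
    private void WhatPoseWhen(string page, string selection) { return; }
}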

That’s it for my coding – I just build the app.

Now, I go to Dotfuscator for Windows Phone. It takes about 3 minutes to register for the service at www.preemptive.com/windowsphone7 (fill out the form at the bottom of the page) and another 5-10 minutes to point Dotfuscator to the XAP file in question, exclude third party assemblies (in my case, Telerik’s WP7 controls used elsewhere in my app), and tag the entry and exit points of my app inside the Dotfuscator UI.

The last step required to complete my user feedback form and service is to add one attribute for Dotfuscator, a feature attribute as follows:

I right-clicked on the method WhatPoseWhen (in the left pane) and selected Add Feature Attribute – all I needed to do in the form on the right-hand side of the screen was type a name for the feature (WhatPoseWhen) and insert an * in the ExtendedKeyMethodArguments property. This tells Dotfuscator to grab any/all parameter values passed into this method whenever it is called and send them up to the Runtime Intelligence portal. In this case, I am identifying the context (feedback) and passing the string that I constructed from the user's responses inside the variable UserFeedBack.

This takes 2 minutes tops to configure; then I press the “build” button and out pops my new and enhanced XAP file. I submitted my app to the marketplace with no special handling required and then waited for the numbers to roll in. This takes days between marketplace processing, user adoption, and Runtime Intelligence number crunching. Days is still much faster than the weekly marketplace statistics (and obviously much more flexible), but slower than the ad-server stats – it's right in the middle.

The results can be seen (in part) in this screen capture – I can log into my Runtime Intelligence account and select custom data to see the following (note – alternatively, I can extract CSV files for further analysis).

The highlighted row “page” and “feedback” shows that 24 users went to the feedback page during the selected interval (I scratched out the actual interval because of the sales numbers that are reflected here too – it's none of your beeswax). In the last row shown here (also highlighted) you can see that, of the 24 page views, 16 of these users indicated that they did NOT practice 2X per week and did NOT expect this app to change that. (The more positive responses can be seen lower down on the page but are not shown here.)

The bottom line is that I was able to implement a user survey mechanism, including secure transmission, storage, and basic analysis, with essentially no programming and no requirement to set up a hosted content management system – ALL IN LESS THAN AN HOUR.

FINE TUNING TRY/BUY STRATEGIES

I also call my trusty little method WhatPoseWhen throughout the app during trial periods. The screen capture above also shows basic try/buy behaviors. A Pose for That implements a try/buy mechanism. If a user is in trial mode, they are presented with an opportunity to upgrade right on the main page of the app. That is the “UpgradeNow” option. Additionally, whenever a user selects functionality that is NOT included in the trial (say, showing a large image of a pose with detailed instructions), they are presented with a screen letting them know that they have bumped up against the limits of the trial version and asking whether they would like to upgrade right then and there – more of an impulse upgrade.
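
Here is a minimal sketch of how such an impulse-upgrade gate might look in code, using the standard WP7 LicenseInformation.IsTrial() check and MarketplaceDetailTask. The page/class name, the feature labels passed to WhatPoseWhen, and the MessageBox wording are all hypothetical (the real app navigates to a dedicated upgrade page rather than showing a message box):

using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Marketplace;   // LicenseInformation.IsTrial()
using Microsoft.Phone.Tasks;         // MarketplaceDetailTask

public partial class PoseDetailPage : PhoneApplicationPage
{
    private static readonly LicenseInformation License = new LicenseInformation();

    // Called when the user taps a feature that is not included in the trial.
    private void ShowLargePoseImage()
    {
        if (License.IsTrial())
        {
            // Record that the impulse-upgrade offer was shown (labels are hypothetical).
            WhatPoseWhen("PoseDetail", "ImpulseUpgradeOffered");

            var answer = MessageBox.Show(
                "Large pose images are part of the full version. Upgrade now?",
                "Trial limit", MessageBoxButton.OKCancel);

            if (answer == MessageBoxResult.OK)
            {
                WhatPoseWhen("PoseDetail", "ImpulseUpgradeAccepted");
                new MarketplaceDetailTask().Show();   // jumps to this app's marketplace page
            }
            else
            {
                WhatPoseWhen("PoseDetail", "ImpulseUpgradeDeclined");
            }
            return;
        }

        // Full version: show the detailed pose image as usual.
    }

    // Intentionally empty – Dotfuscator injects the data collection call at build time.
    private void WhatPoseWhen(string page, string selection) { return; }
}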

What the screen capture above is telling me is the following:

1) Users were presented with the “impulse upgrade” option inside a trial 130 times during the selected time interval.

2) When presented with this choice, users chose NOT to upgrade and returned to the previous page 111 times (or 85% of the time they said thanks but no thanks). However, it also shows that 19 times (15% of the time) they DID choose to upgrade on the spot.

3) During the same interval, 38 users selected the “Upgrade Now” button on the main page.

I have not chosen to do true A/B testing in this case, but one thing that I am almost certain about is that some of the 19 users who upgraded “inside” the app would NOT have gone back to the main page at a later time and upgraded via the standard menu choice.

My two-pronged upgrade pattern may have increased my conversion count during this interval from 38 to 57 (38 + 19) – an increase of 50%!

Using the CSV extracts, I can dive deeper to see which features are most likely to result in an upgrade and also get a sense of how much is too much, e.g. users who abandon the app and never come back. (Note, I am not EVER transmitting the ANID or other PII.)

COMING UP NEXT: USING RUNTIME INTELLIGENCE TO TRACK AND OPTIMIZE AD PLACEMENT STRATEGIES