Wednesday, June 8, 2011

Implementing Customer Feedback Forms and Fine-Tuning Try/Buy Strategies with Runtime Intelligence

My Adventures in WP7 App Development: a beginner’s tale

I deployed my first WP7 app to the marketplace on May 13th. Prior to that, I had not written a line of code in nearly 20 years and so I think I can safely call myself a beginner. The fact that I might actually have something to share with the broader (and almost universally more experienced) development community shows how effective development tools have become and how wide-open the smartphone market is at this point in history.

My app (A Pose for That and the free alternative Yoga-pedia) pairs user situations and ailments with yoga poses – the app essentially uses the smartphone as an intelligent, just-in-time publishing platform.


One of the capabilities I wanted to include in the app was a simple user survey form – I wanted to know how often users practiced yoga on their own and whether they hoped that this app would increase or improve their yoga practice – but I didn’t want to ask users to write an email (too much effort for them) and I did not want to incur the extra programming, setup, and expense of implementing my own content management store (too much effort for me).

Here’s what I did, for free, and with (virtually) no programming whatsoever…

I used Expression Blend to build my form (no programming), Dotfuscator for Windows Phone to inject the data collection transport logic (no programming), and the Runtime Intelligence Service for Windows Phone to store, manage, and publish user responses (again, no programming). I had to write one method (one that I actually reuse in a variety of ways including try/buy strategy tuning and ad monitoring that I will blog more on later).

That method (in its entirety) is:

private void WhatPoseWhen(string page, string selection)
{ return; } // intentionally empty – Dotfuscator injects the transport logic at build time

…but I am getting ahead of myself. Here is a screen shot of the survey form:

It asks my two basic questions, each with a three-way radio-button choice: true, false, or no comment. When the user leaves this page for any reason other than a tombstoning event, I construct a single string that captures the user’s responses. For example, if the user answered in the affirmative on both counts, the app assembles the string “I practice yoga 2X per week or more And I hope that this app will increase and/or improve my practice.” and puts it in a local variable UserFeedBack. If the user answers in the negative on both counts, I just assign the string “And.” Then, I call my custom method (above) like so:

WhatPoseWhen("feedback", UserFeedBack);

That’s it for my coding – I just build the app.
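For readers who want to see how the pieces fit, here is a minimal sketch of the survey page’s code-behind. The control names (PracticeYes, HopeYes) are hypothetical stand-ins – the post does not give the real identifiers – and the tombstoning check is elided:

```csharp
// Hypothetical code-behind for the survey page; control names are
// illustrative, not the app's actual identifiers.
protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
{
    base.OnNavigatedFrom(e);
    // (The real app also skips this block when leaving due to tombstoning.)

    string practice = (PracticeYes.IsChecked == true)
        ? "I practice yoga 2X per week or more "
        : "";
    string hope = (HopeYes.IsChecked == true)
        ? " I hope that this app will increase and/or improve my practice."
        : ".";
    string UserFeedBack = practice + "And" + hope;

    // The empty marker method that Dotfuscator instruments at build time.
    WhatPoseWhen("feedback", UserFeedBack);
}
```

Note that when both answers are negative, this concatenation yields exactly the “And.” string described above.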

Now, I go to Dotfuscator for Windows Phone. It takes about 3 minutes to register for the service (fill out the form at the bottom of the page) and another 5-10 minutes to point Dotfuscator to the XAP file in question, exclude third-party assemblies (in my case, Telerik’s WP7 controls used elsewhere in my app), and tag the entry and exit points of my app inside the Dotfuscator UI.

The last step required to complete my user feedback form and service is to add one attribute for Dotfuscator, a feature attribute as follows:

I right-clicked on the method WhatPoseWhen (in the left pane) and selected Add Feature Attribute – all I needed to do was type a name for the feature (WhatPoseWhen) into the form on the right-hand side of the screen and insert an * in the ExtendedKeyMethodArguments property. This tells Dotfuscator to grab any and all parameter values passed into this method whenever it is called and send them up to the Runtime Intelligence portal. In this case, I am identifying the context (feedback) and passing the string that I constructed from the user’s responses inside the variable UserFeedBack.

This takes 2 minutes tops to configure and then I press the “build” button and out pops my new and enhanced XAP file. I submitted my app to the marketplace with no special handling required and then waited for the numbers to roll in. This takes days between marketplace processing, user adoption, and Runtime Intelligence number crunching. Days is still much faster than the weekly marketplace statistics (and obviously much more flexible), but slower than the ad-server stats – it’s right in the middle.

The results can be seen (in part) in this screen capture – I can log into my Runtime Intelligence account and select custom data to see the following (note: alternatively, I can extract CSV files for further analysis).

The highlighted row (“page” / “feedback”) shows that 24 users went to the feedback page during the selected interval (I scratched out the actual interval because of the sales numbers that are reflected here too - it’s none of your beeswax). In the last row shown here (also highlighted) you can see that, of the 24 page views, 16 users indicated that they did NOT practice 2X per week and did NOT expect this app to change that. (The more positive responses appear lower down on the page but are not shown here.)

The bottom line is that I was able to implement a user survey mechanism including secure transmission, storage, and basic analysis with essentially no programming and no requirement to set up a hosted content management system - ALL IN LESS THAN AN HOUR.


I also call my trusty little method WhatPoseWhen throughout the app during trial periods. The screen capture above also shows basic try/buy behaviors. A Pose for That implements a Try/Buy mechanism. If a user is in Trial mode, they are presented with an opportunity to upgrade right on the main page of the app. That is the “UpgradeNow” option. Additionally, whenever a user selects functionality that is NOT included in the trial (say, showing a large image of a pose with detailed instructions), they are presented with a screen letting them know that they have bumped up against the limits of the trial version and asking whether they would like to upgrade right then and there – more of an impulse upgrade.
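Instrumenting those two upgrade paths reuses the same marker method. A sketch, assuming hypothetical handler names and context strings – only the “UpgradeNow” label comes from the data; everything else here is illustrative:

```csharp
// Hypothetical handlers; the page/selection strings are stand-ins.
private void OnUpgradeNowClick(object sender, System.Windows.RoutedEventArgs e)
{
    WhatPoseWhen("mainpage", "UpgradeNow");          // upgrade chosen on the main page
    new Microsoft.Phone.Tasks.MarketplaceDetailTask().Show(); // open the marketplace buy page
}

private void OnTrialLimitChoice(bool choseUpgrade)
{
    // Record whether the impulse-upgrade prompt converted or was declined.
    WhatPoseWhen("triallimit", choseUpgrade ? "UpgradeNow" : "ReturnToPrevious");
}
```

Because Dotfuscator captures the parameter values, each distinct (page, selection) pair shows up as its own row in the portal.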

What the screen capture above is telling me is the following:

1) Users were presented with the “impulse upgrade” option inside a trial 130 times during the selected time interval.

2) When presented with this choice, users chose NOT to upgrade and returned to the previous page 111 times (85% of the time they said thanks but no thanks). However, it also shows that 19 times (15% of the time) they DID choose to upgrade on the spot.

3) During the same interval, 38 users selected the “Upgrade Now” button on the main page.

I have not chosen to do true A/B testing in this case, but one thing that I am almost certain about is that some of the 19 users who upgraded “inside” the app would NOT have gone back to the main page at a later time and upgraded via the standard menu choice.

My two-pronged upgrade pattern may have increased my conversion count during this interval from 38 to 57 – an increase of 50%!
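The arithmetic behind that claim, spelled out with the numbers from the screen capture above:

```csharp
// Conversion uplift from running both upgrade paths at once.
int mainPageUpgrades = 38;  // "Upgrade Now" selected on the main page
int impulseUpgrades  = 19;  // upgrades accepted at the trial-limit prompt

int total = mainPageUpgrades + impulseUpgrades;              // 57
double uplift = (double)impulseUpgrades / mainPageUpgrades;  // 0.50, i.e. +50%
```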

Using the CSV extracts, I can dive deeper to see which features are more likely to result in an upgrade and also get a sense of how much is too much, e.g. users who abandon the app and never come back. (Note: I am not EVER transmitting ANID or other personally identifiable information.)
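A deeper dive on an extract might look like the following sketch. The file name and column layout (feature name in column 0, captured argument in column 1) are assumptions, not the actual Runtime Intelligence schema – check the header row of your own extract before relying on the indexes:

```csharp
using System;
using System.IO;
using System.Linq;

class ExtractStats
{
    static void Main()
    {
        // Hypothetical extract: count how often each captured argument
        // value was reported for the WhatPoseWhen feature.
        var counts = File.ReadAllLines("runtime-intelligence-extract.csv")
            .Skip(1)                                   // skip the header row
            .Select(line => line.Split(','))
            .Where(cols => cols[0] == "WhatPoseWhen")
            .GroupBy(cols => cols[1])                  // group by captured argument
            .OrderByDescending(g => g.Count());

        foreach (var g in counts)
            Console.WriteLine("{0}: {1}", g.Key, g.Count());
    }
}
```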


