Tuesday, November 29, 2011

The Microsoft sponsored service for WP7 ends – the PreEmptive sponsored service debuts

At 24:00 EST on December 9, 2011, the Microsoft-sponsored protection and analytics service for Windows Phone 7 will be shut off.

A different service, fully and solely subsidized by PreEmptive Solutions, will take its place. This service is materially different – please read the following notice carefully for information on how to continue working with PreEmptive Solutions technology for Windows Phone.

Background

While Microsoft’s sponsorship expired on September 30, 2011, PreEmptive continued the service for an additional 60 days at our own expense while we explored a variety of options to continue our support for the Windows Phone development community. Microsoft’s sponsored service has ended, but our commitment and support for this community continues unabated.

PreEmptive’s challenge was to find an affordable means of supporting the burgeoning WP7 development community without compromising the quality and capabilities unique to our protection and application analytics technologies. We believe the following provides both a valuable set of services at little or no cost for small WP7 development efforts and a smooth “on-ramp” for larger development projects and for organizations with more demanding service-level, governance, or scalability requirements.

Summary

Obfuscation and Instrumentation continues at no cost through 12/31/2012.

Dotfuscator for Windows Phone, the post-compile tool that obfuscates and injects application instrumentation, will continue to be offered to Windows Phone 7 developers at no cost through December 31, 2012.

Mobile analytics endpoint (wp7.runtimeintelligence.com) will be shut off on 12/9/2011

The current analytics endpoint will be discontinued. However, developers have a number of options that they can consider:

· Subscribe to the PreEmptive Solutions commercial endpoint. This is a fee-based option that includes all of the features currently offered PLUS an advanced mobile portal, a RESTful API, and a higher service level (plus support beyond WP7). For more information, email sales@preemptive.com.

· License PreEmptive Analytics for TFS. This is also a fee-based option: an on-premises solution focused on exceptions rather than feature tracking. For more information, see Using Analytics for Windows Phone and Azure Exception Tracking - User Community Virtual Series and email sales@preemptive.com.

· Develop and host a homegrown endpoint. This is a no-fee option, but development will be required. The Runtime Intelligence Endpoint Starter Kit project on CodePlex may be of some help.

· Plan to migrate to the PreEmptive Analytics for TFS community edition to be included with Dev-11. This is a no-fee option but is NOT yet generally available from Microsoft. For more information, see the video A Lap Around PreEmptive Analytics for TFS with Justin Marks.

· Publish their app as a CodePlex project and utilize the CodePlex analytics endpoint (this is different from the option above). This is a no-fee option. For more information, see this tutorial (note that the tutorial assumes the developer is limited to the Community Edition of Dotfuscator – the WP7 edition has full functionality).

The following feature summary table highlights the three principal options available to the Windows Phone 7 development community with a comparison to the discontinued Microsoft-sponsored service.


FAQ

Q: Will I have to republish my WP7 app on or before December 9, 2011?

No

Q: Will users notice any difference in app behavior after December 9, 2011?

No

Q: Will I have to re-register my installation of Dotfuscator for Windows Phone 7?

No

Q: If I have only been using Dotfuscator for obfuscation, will I lose any functionality or will I have to do anything differently?

No

Q: Will developers have access to earlier runtime data generated by Runtime Intelligence for Windows Phone after December 9, 2011?

They will not.

Q: Where can developers ask additional questions regarding migration, upgrades or discontinuing use of PreEmptive Solutions technologies?

Post to the PreEmptive forum at http://www.preemptive.com/forum/index.php?f=26

PLEASE NOTE – this is NOT a moderated forum.

Q: Why would a development organization upgrade to a professional SKU of PreEmptive Analytics?

· Multi-platform (WP7, Android, JavaScript, all .NET and Java, native API…)

· Private endpoint (for large-scale enterprises with demanding scalability, governance, or other unique requirements).

· Analytics for TFS option (out-of-the-box integration with Microsoft Team Foundation Server).

· True application analytics such as custom data fields, development ownership of data, true SLA and support for developers, etc.


Conclusion

Of course we would have preferred that Microsoft had opted to extend their sponsorship for Runtime Intelligence for Windows Phone, but that was not the decision that they ultimately made. However, over the past 12 months, we had a front row seat watching a flood of innovative apps launch. We know that a good percentage of the WP7 development community relied upon PreEmptive Solutions for both analytics and protection (in a recent analysis of the marketplace, it was shown that 17% of all apps used either protection, analytics or both).

This experience combined with our confidence in the future of the Windows Phone platform has prompted us to extend free access to Dotfuscator for Windows Phone. We look forward to our continued support and participation in the growth and success of this exciting technology and marketplace.

Monday, October 10, 2011

60 days – déjà vu all over again?

Hi all – today I sent out a notice to registered Runtime Intelligence for Windows Phone users letting them know that a change would be coming to our service on (or perhaps after) December 9th. I have already received messages from some rather annoyed developers who feel like we are playing some kind of Machiavellian pricing game; the basic flaw in this view is that it presumes we sit in the Prince’s chair – which we do not.

First, let me start by stating categorically that we remain excited and committed to the Windows Phone platform.

Our approach to analytics has always been squarely focused on development organizations rather than on marketing; this is not a “spin,” rather, this approach informs and influences our product’s architecture and business model.

I’ll drill into this distinction in a moment, but first, I want to point out that it is this distinction – coupled with our belief in WP7 as a platform and in the value of our approach to application analytics in general – that led us to invest in building out the WP7-specific service. While I cannot speak to Microsoft’s motivations (in any context, let alone this one), it is public knowledge that they funded this service so that it could be offered at no charge to the WP7 dev community (and now that is no longer the case).

Why was this even necessary? Why couldn’t PreEmptive just give away our analytics and protection service like Google does?

“Marketing-centric” analytic services make their money off of your data – but first they have to cover their costs, and the “back end,” or analytics portal, is where the majority of that cost sits. Today, Google and Flurry (and other similar services) each rely upon web and other mobile platforms to generate the data volume that ultimately funds their backend service. At this early stage, WP7 app data can (perhaps) fund client APIs but is nowhere near the volume levels required to pay for an entire service.

But there’s more – focusing exclusively on resalable content (and, by extension, divesting from any requirements that do not contribute to it) has profound feature and architecture implications.

Given a marketing-centric focus, you can readily understand why these solutions all share the following traits:

· Custom data fields and data types are limited in scope and volume (unique/heterogeneous data points cannot easily be rolled up, sold, or used to better target users)

· No built-in enforcement for opt-out policies – this is left entirely to the developer (if you do not provide user data to these services, you are of no value)

· No investment in analytics of software away from the presentation layer (this data has no relevance to advertising, profiling etc.)

· Software-specific data is only marginally supported – unhandled exceptions can be captured, but caught and thrown exceptions are out of scope (same as above – this data does not contribute to the monetization model).

· They own your data (privacy rules notwithstanding – they have full rights to monetize your data in any way they see fit). This is the entire reason they are in business – if they don’t own the data, how can they make money?

· They can provide their analytics software for free – since they have built a monetization strategy based on your data and do not invest in any development that does not directly contribute to that strategy, they can afford to give you the picks and shovels for free in exchange for the gold you produce for them (that’s big of them, don’t you think?)

Conversely, because we focus on application stakeholders, we invest in features that do not contribute to a roll-up across companies and apps to monetize user preferences – in fact, some of these features can materially impede that objective – yet they are equally important for developers who want to build better, faster, and more effective software…

For example:

· Supporting custom data (app-specific fields and states) and data types (other than strings)

· Built-in exception tracking (unhandled, caught and thrown) to not only send you stack traces but provide insight into user experience and runtime environments (hostile or otherwise)

· Built-in opt-out policy enforcement (local, regional, cultural, legal and regulatory rules are so complex – serious development organizations need to centrally manage and reliably enforce compliance – could you sell a gun without a safety?!)

· Analytics away from the presentation layer (want to know how your Azure or other distributed services are performing and behaving?)

· AND LAST BUT NOT LEAST, you own your data. (Given the heterogeneity and specificity of each application’s data, its value is exponentially higher to development stakeholders and proportionately lower in aggregate.)

So what’s the takeaway?

With Microsoft, we invested in this mobile platform because we believed (and now we know) that it has significant value.

Given our focus on development rather than advertising, we have to innovate on both the technical and business ends.

We had hoped that our relationship with the MSFT Windows Phone division would last a bit longer, but it is what it is.

We have a number of very interesting scenarios in mind that we think will prove to be both exciting technically and innovative from a business perspective – but we are not ready to discuss that yet (as soon as we can – we will).

If you don’t care about these added value features and don’t mind the strings that come with web/mobile analytics services like Google – then you should certainly use them. They are not flawed – they are just fundamentally different.

If you value the services we have been offering or have ideas on how we can improve them even further – PLEASE LET US KNOW…

Registered users will be getting a survey in the next 24-48 hours – please let us know what you think and what’s working (and not) for you.

As I've already said, we remain excited and committed to the Windows Phone platform.


(Now I have to break to prepare my WP7 training materials; I’ve got a crew from my son’s high school writing WP7 apps for independent study credit.)

Monday, September 26, 2011

Runtime Intelligence and Dotfuscator for WP7 developers speak (Mikey likes it!)

In my last post, I drew a correlation between apps that used Runtime Intelligence and their relative (positive) success as measured by user ratings and engagement. While it was fairly clear that developers who chose to use Runtime Intelligence built more successful apps than their counterparts, it really said nothing about a) Runtime Intelligence analytics’ contribution to their success or b) developer satisfaction with Runtime Intelligence overall.

Well, there’s really only one way to answer these questions …and that’s to ASK THEM.

I sent out an electronic survey starting on Monday of last week (September 19) and have received over 200 developer responses. Here is what they said…

Who is doing what?
  • 32% indicated that they were only using Dotfuscator to protect their application
  • 24.5% said they were only using Runtime Intelligence
  • 43.4% indicated that they were using both.


Do these “smarter than the average bear” developers see the value?

In a word, YES.

Looking only at those developers who indicated that they already had their applications in the marketplace (representing over 100 development organizations):
  • 60% indicated that analytics set the WP7 platform apart from all other platforms or added significant value to the platform.
  • 68% indicated that protection set the WP7 platform apart from all other platforms or added significant value to the platform.




Analytics’ perceived value increases 4.5X with developer experience

When looking at those developers who indicated that analytics and/or protection “set the WP7 development platform apart from all others,” analytics’ perceived value increased 4.5X (from 2% to 9%) as developers moved from having no app, to being less than four weeks from a ship date, to actually having an app in the marketplace (and getting analytics back). Interestingly, obfuscation (protection) peaked in value just prior to shipping.

So what’s the takeaway?

In my last post, we established that users of Runtime Intelligence were more successful than other WP7 developers. In this post, we see that these developers credit their success, to some material degree, to Runtime Intelligence, Dotfuscator protection, or both.

In their own words
(selected - but unedited - responses to the open-ended question "what are you most excited about?")

I like being able to get crash reports without much additional work.

It gives the developer ability to know about usage patterns in an application. Obviously code obfuscation is a necessity, especially for paid apps.

It offers a unique way to see how users interact with the application, and with the latest release it also has error reporting. Awesome!

I'm an excitable person.

Fabulous data provided by RIS to analyze the performance, usage and app demographics.

So I can know what is happening in my app and protect my code.

Used correctly, the analytics really let me see how and by whom my application is being used. I get more insight into this information than I could if I set up a usability lab or just did extensive user testing. There is no better way to observe than to do so in production.

The concept of attaching runtime analytics after the compilation process is very useful for us (standard software development, single application in various customer-specific configurations), since we are able to attach this on a per-customer basis and don't have to manage it in code.

UI for parameterization

It gives detailed statistics about the usage of all parts of the software and helps to recognize the hot features of the software are and which parts are less used. This adds great value into the effort of making software better.

Really gives me insight into what my customers are doing with my applications. They help me to understand where I can enhance functionality and add value.

Quality of product

Analytics give me an idea on what I should work on next to improve my application

Runtime analytics is cool because there is no code to write.

I can collect the exact information i need.

It allows me to phase out or strengthen certain parts of my apps. I currently have seven apps and the instrumentation is crucial.

Because i can have a deep analysis of when and especially how my application is being used. If you add the fact that all these data are aggregated and presented in such a nice way by the portal, you end up with a great product

I produce libraries (DLL's) that are handed to third parties, hence the need for obfuscation.

Kickass obfuscator.

It helps me keep track of any bugs. And it allows feature tracking. And it gives me the cool world map that shows where some of the users are.

WP7 apps with Runtime Intelligence have higher ratings and engagement rates

In my last post, I lamented that only 2.5% of the apps in the marketplace were actively using Runtime Intelligence (RI). This begged an obvious question: were these developers leaders or laggards? Were RI-enabled apps more successful and effective and, by extension, worth emulating?

The short answer appears to be yes – to be clear, I am NOT saying that merely turning Runtime Intelligence on increases an app’s success. What I think the following data does show is that the developers who chose to include Runtime Intelligence as a component of their development process are indeed more successful than those that did not.

I took a second look at the 26,469 apps that we downloaded from the marketplace and compared those apps that were instrumented with Runtime Intelligence with those that were not.

Specifically, apps with Runtime Intelligence (analytics):

Are ranked 25% higher: RI-enabled apps averaged roughly 4 stars (8.02 out of 10), while non-RI apps came in at 3 ½ stars (7.6 out of 10). NB: apps with no ratings whatsoever were excluded from this calculation.

Have a 50% higher rate of engagement: 73% of the apps using RI had at least one rating, while only 48% of non-RI apps had any rating whatsoever.

As I've already said, simply turning on RI will not get you a 1/2 star bump in your ratings - but clearly, the kinds of developers who are achieving higher user satisfaction are the kinds of developers who are choosing to use RI.

Coming next: Users of RI and Dotfuscator for WP7 make themselves heard!

Thursday, September 22, 2011

WP7 Marketplace share (or how we became a victim of our own success)

This post tells the story of how good-faith estimates of our WP7 marketplace penetration were under-reported by a factor of five. This is not a “gotcha blog” – there are only good actors with the best of intentions in this story; but that’s why I think it’s a story worth telling.

You see, we don’t just obfuscate – we hide the fact that your app is obfuscated. We don’t just offer application instrumentation and monitoring, we inject that logic to simplify and streamline packaging and improve performance. …and therein lies the rub.

Something was in the air

While I was at Build soaking up the heady atmosphere of Windows 8 and all that’s coming with it, a few MVPs and the Lowdermilk brothers took a moment to ask me if I’d read this awesome blog post where a developer had downloaded and analyzed all of the XAPs in the WP7 marketplace. Apparently the market insight was killer and covered everything from the most popular libraries to snagging cloned apps. No more guessing – the facts were all laid out. And they each mentioned to me how surprised and even a little worried they were by how few apps were being obfuscated. I didn’t think too much of it at the time, as I am among the first to point out that not every app needs to be obfuscated – but I did make a mental note to be sure to check out the blog.

When I got back home, I finally had a chance to track this “all seeing blog” down – it was Justin Angel’s blog and the post was Windows Phone 7 Marketplace Statistics. And it really is a fascinating post with both initiative and insight; and then I got to the obfuscation section.

According to Justin’s analysis, only 3% of the apps in the market were obfuscated! And as I scanned down, there were comments to the effect of “gee, since no one else is doing it, perhaps I shouldn’t bother either.”

Even more surprising to me was the fact that our analytics (Runtime Intelligence) was not even listed in a very long list of third party tools – when I knew for a fact that we had nearly a thousand apps sending data.

This can’t be right! (and I was right)

Given the nearly 6,000 downloads of Runtime Intelligence and Dotfuscator for WP7 and the activity that I had been seeing over the past year, these numbers just didn’t seem right. I wrote to Justin who was quick to share his detection logic (in fact he posted the source on his blog) and just as quick to invite any comments or refinements that I might have to offer.

To put a fine point on this, Justin was in no way defensive and was as interested in getting to the right answer as I was.

Hoist by our own petard

Without going into a lot of detail, Justin’s approach was to bust open the XAP and examine the various files and manifests to separate Silverlight from XNA and to identify the presence of third party tools. This approach proved to be effective because frameworks, tools and components leave behind files and other telltale fingerprints as a matter of course. There is one limitation though; this approach cannot detect when an application is modified or extended through IL manipulation or injection. And that’s exactly what we do.

From Build to Bill’ed (new numbers and how)

As I like to say, my ideas only have to be good, they don’t have to be original (trademark and patent laws notwithstanding), and so I did what I often do when confronted with a conundrum – I asked Bill Leach, our CTO, for help. He quickly (dare I say magically?) authored our own “marketplace crawler” that populated our own XAP repository. Rather than look at XAP contents at a component level, he wrote some code that examined the binaries themselves.

The first pass looked for the custom attribute DotfuscatorAttribute inside the binaries. This is a good (but not absolute) way to determine whether a file has been processed by Dotfuscator (for either obfuscation or injection of analytics). It’s not infallible because developers can remove that attribute if they choose (to further cloak the fact that they have used Dotfuscator); a sketch of such a check appears below. Here is what we found:

We downloaded 26,159 XAP files and found 14.5% of them to have been processed by Dotfuscator.

This is basically 5X as many apps as Justin’s analysis had found (and that does not include the developers who configured Dotfuscator to remove the attribute we were searching for – so the number is certainly a bit higher).
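For the curious, here is a minimal sketch of that kind of attribute check (this is not Bill’s actual crawler code; it assumes the XAP has already been unzipped, and it uses the open-source Mono.Cecil library since phone assemblies can’t be loaded directly into a desktop CLR):

using System.Linq;
using Mono.Cecil;

// Sketch, not the real crawler: test one extracted assembly for the
// DotfuscatorAttribute marker, scanning assembly- and type-level attributes.
static bool ProcessedByDotfuscator(string assemblyPath)
{
    ModuleDefinition module = ModuleDefinition.ReadModule(assemblyPath);
    return module.Assembly.CustomAttributes
        .Concat(module.Types.SelectMany(t => t.CustomAttributes))
        .Any(a => a.AttributeType.Name == "DotfuscatorAttribute");
}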

In fact, we were surprised that Justin had found any at all – where did his 3% come from? Upon inspection, we think it’s an unexpected side-effect of how XAPs are assembled – there are some instances where Dotfuscator’s configuration file gets pulled into the XAP – this is unnecessary and should never happen. We will document this behavior and make sure that users know how to prevent this from happening. In short, his 3% showed the prevalence of a bug – not the use of Dotfuscator.

To determine if an application was instrumented (rather than obfuscated), we applied some heuristics that are less obvious but can be shared if someone is interested (we looked for the existence of some high-level classes).

2.6% of marketplace apps are instrumented.

From my perspective, this is a low number – but to put it in perspective (or let’s be honest – I’m looking for the silver lining) we have a larger share than Google Analytics and Admob but a slightly lower number than Flurry.

Attack of the clones

Just one more point to be made in this post. If one were to consider each family of cloned apps as essentially a single, re-skinned app – these numbers have the potential to change materially. We may take a look at that, but I think we have already gotten most of what we can from the static analysis of the marketplace.

So – is that the whole story? (Of course not)

Don’t give me (just) static

As interesting as the static analysis of the WP7 marketplace is (and it is), static analysis only gives us a backwards facing snapshot of what’s already been deployed. We get no insight into:

  • Best practices that we would want to replicate (which are different from common practices),
  • Developer motivations behind their development choices, and
  • Future trends, especially when driven by new technology and market opportunities.
In the context of Dotfuscator and Runtime Intelligence, I would want to know:
  • Are the developers who built the 14.5% of “Dotfuscator processed” apps leaders or laggards?
  • Do they have special requirements that set them apart?
  • In short, do they have anything to teach the rest of us?
Want to know more about those 14.5% of apps and what they have to teach us?

Coming soon, WP7 Developer Survey Results. We’ve been running surveys since the WP7 launch last year (you can check out survey 1 and survey 2). As part of this ongoing effort, we have just closed out our third survey in the series and I will be posting results in the next few days – stay tuned!

Saturday, August 20, 2011

SCHNELL! Or why patience is a virtue except when testing on Windows Phone

Mystery solved! As I had promised in my last blog entry, I added exception reporting to my two apps, A Pose for That and Yoga-pedia, to determine exactly what was going on with the exceptions that the Microsoft Marketplace was reporting but I had never seen. I needed to know:

  1. The cause of the exceptions (the stack traces were too cryptic for me to figure out)
  2. How to fix the problem(s)
  3. The reason I had never seen a crash even though there is little doubt that they are indeed happening out there in the wild. If I can’t be confident that my testing is complete, I can never be confident that my app will behave when it matters most – in production.

Remember these three objectives because you will be tested later.

Now, I know that my entries are sometimes kind of long – so here are the conclusions…. And if you want to know how I back them up – then (hopefully) you will enjoy the rest of the post.

(PLUS, there’s a teaser at the very end).

Conclusions:

  1. Always account for “loss of context” in WP7 apps – a try-catch is probably the best approach, but I will defer to “real developers” for the specific strategy. At least with Silverlight, impatient users can always force your app into an InvalidOperationException.
  2. Culture matters in both user preferences and user expectations (and therefore user satisfaction). If at all possible, represent all relevant cultures in your test populations. How do you know what the relevant populations are? Analytics of course…
  3. Software quality, user experience and user profile are all intimately connected. Systems that only monitor user behavior (marketing) or only profile software stability (debugging) or only profile runtime configurations (marketplaces) are inherently weaker than an approach that accounts for the influence that each has on the others.
  4. Without Runtime Intelligence (or another comparable application analytics solution), no development team can be confident in either the quality of their app or their users’ experience.

And here’s the how and why I have come to these conclusions…


HOW – first I had to add my own exception reporting.

Exception Reporting: Adding exception reporting with Runtime Intelligence is very simple. All I had to do was add one exception reporting attribute as follows (from within Dotfuscator for Windows Phone).




Note that in the properties of this attribute I am asking that the method ExceptionExtendedData be run. A Runtime Intelligence system probe attribute works fine during normal operations, but if I want custom data after an unhandled exception, this is a more reliable technique. Here is the method that I put in the App class:
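(The original code capture was an image and has been lost; the reconstruction below shows the shape of such a method, not the exact original. The signature Runtime Intelligence expects and the keys collected here are assumptions.)

// Reconstruction, not the original code: gather custom key/value data to
// accompany the exception report. Key names and signature are illustrative.
// Requires: using System.Collections.Generic; using System.Globalization;
// using Microsoft.Phone.Info;
private Dictionary<string, string> ExceptionExtendedData()
{
    return new Dictionary<string, string>
    {
        { "Culture", CultureInfo.CurrentCulture.Name },
        { "Manufacturer", (string)DeviceExtendedProperties.GetValue("DeviceManufacturer") }
    };
}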

As a side note, if I wanted to track thrown or handled exceptions, I could place the exception attribute down at the method level to get much more targeted data. Anyhow, after this simple step, I deployed the re-instrumented app to the marketplace and (sadly) watched the exceptions roll in…


Runtime Intelligence Exception reporting
Logging into my Runtime Intelligence portal account, selecting the date range I am interested in, and then selecting “Exceptions” presents me with the following:


I can see the total exceptions over time and the type of exceptions (I am only getting one type – and that seems like it might be good news), and I have a list of all of the specific exceptions on the right. Clicking on any one of these shows me the detail as follows:


The graphic above shows screen captures from three different stack traces.

Good news item 1 is that (unlike the marketplace stack traces), I can see the diagnostic message. This may not mean much to the serious developers who enjoy offsets and cryptic traces – but I need these to go back to MSDN and other resources to see what is really going on and what I can do about them.

It turns out that there were seven different exceptions coming from my app – BUT ALL OF THEM HAD TO DO WITH TIMING – not some error in my general logic (in other words, I’m not dividing by zero or trying to display a non-existent image, etc.). For some reason my app is getting vertigo in my customers’ hands and losing track of which page was current, resulting in any number of InvalidOperationExceptions.

Good news item 2 is that there is a pretty standard way to manage this behavior: the try-catch statement. I’m in no position to explain how this works, but visit the link above for a great explanation.
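For illustration only, here is a minimal sketch of that pattern around a navigation call (the handler and page names are made up; this is a sketch of the idea, not production guidance):

// Sketch: a rapid second tap can request navigation while one is already
// in flight, which raises InvalidOperationException and crashes the app;
// catching it lets the app ignore the duplicate request instead.
private void ShowPose_Click(object sender, RoutedEventArgs e)
{
    try
    {
        NavigationService.Navigate(new Uri("/FindAPoseDetail.xaml", UriKind.Relative));
    }
    catch (InvalidOperationException)
    {
        // A navigation was already in progress - swallow the duplicate request.
    }
}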

So with basic Runtime Intelligence exception reporting I have addressed my first two requirements: to diagnose my app’s problem and identify a fix. BUT – I have not addressed the deeper and perhaps more troubling issue of why I have never seen this problem myself – what’s this all about? If I can’t improve my quality control, I can never feel comfortable that my app will perform in the wild as it does for me.

Good news item 3 is that I have Runtime Intelligence to give me EVEN MORE context on my app and my users. The fundamental flaw in almost every exception handling solution I have ever seen is that they (by necessity) can only look at the app when exceptions occur – they are too heavyweight and/or too invasive to run all the time everywhere – not so with Runtime Intelligence.

If you ONLY have exception data, you are robbed of one of the most effective diagnostic heuristics available – comparing populations in order to identify material differences between them and thus zero in on a likely root cause. This was the fastest and cheapest way to figure out why I had never seen a crash.

What I did next was to compare the set of users who experienced exceptions with the general population of users and myself – was there something specific about their phones? Their software? Their behavior?

It turns out that the answers to these three questions are no, no and YES!

Process of elimination: First, I compared the system data of the users who experienced exceptions with that of the general population (as collected by ExceptionExtendedData, defined above)… I won’t bore you with all of the metrics I was able to eliminate, but I will show one: manufacturer.

The two pie charts show the relative percentages of manufacturers in the general population of my users and in the population that had exceptions – one can eyeball these and pretty quickly see that there is virtually no difference. The bar chart puts a fine point on this by showing the relative difference in share: Dell had only 1% of the total share and was not statistically significant, and looking at the other three manufacturers, we can see that there is no more than a 20% variance between the two populations. This kind of range was consistent across all of the metrics I had been collecting except one.

Schnell!

In my last blog I had noted that there appeared to be a disproportionate percentage of German-speaking users in the exception population, and it turns out that this was not a random blip – it showed up again in this latest exception data as follows:


The top bar chart shows the relative percentage of users by culture that experienced exceptions alongside the relative percentage of that culture in the general population. The second bar chart shows the relative difference in share by culture and it is truly surprising (at least to me).

Germans crashed my app 13X more often than the norm; Austrians and the Dutch crashed the apps at 4X what their relative share would suggest, with the Malaysians right behind.

Given the relative distance between these populations and the different carriers and jurisdictions that these populations live under, it seems pretty clear that what these users have in common is their behavior. These users are simply more impatient than the rest of my users. They hit the “show pose” or “take me to the marketplace” or whatever more quickly and more often and so they are that much more likely to cause my app to lose its place.

Not only am I more patient (being an American and at one with the universe ;), but because I know my app and the areas where it may take a beat (or two) to respond, I naturally did not repeat my commands impatiently at those critical times – and therefore, I did not crash my app! Mysteries solved!

Conclusions: (AGAIN)

  1. Always account for “loss of context” in WP7 apps – a try-catch is probably the best approach, but I will defer to “real developers” for the specific strategy. At least with Silverlight, impatient users can always force your app into an InvalidOperationException.
  2. Culture matters in both user preferences and user expectations (and therefore user satisfaction). If at all possible, represent all relevant cultures in your test populations. How do you know what the relevant populations are? Analytics of course…
  3. Software quality, user experience and user profile are all intimately connected. Systems that only monitor user behavior (marketing) or only profile software stability (debugging) or only profile runtime configurations (marketplaces) are inherently weaker than an approach that accounts for the influence that each has on the others.
  4. Without Runtime Intelligence (or another comparable application analytics solution), no development team can be confident in either the quality of their app or their users’ experience.

TEASER – WOULDN’T IT BE AWESOME IF WE COULD DO ALL OF THIS PROFILING AND EXCEPTION ANALYSIS WITH HTML5/JAVASCRIPT TOO? STAY TUNED (IN)!

Tuesday, July 19, 2011

The new WP7 App Hub reporting is great – and it’s even better with analytics!

Warning – this is a cliff hanger post. If you don’t like mysteries, come back in two weeks…

Like anyone else who has an app inside the WP7 App Marketplace, I noticed that the App Hub was down most of yesterday with the promise of a functional upgrade in the works – and today I was very pleasantly surprised to see the result: a streamlined experience with expanded capabilities.

One of the first things that caught my attention was the exception reporting by app and by date; very useful indeed. Of course, MSFT is quick to point out that (and I quote) “Crash count alone isn’t a direct measure of app quality. Popular apps may have higher crash counts due to higher usage.”

Well, that seems self-evident, but without usage metrics how can I evaluate the severity of my exception report counts? … (and now, unless this is the first post of mine that you have ever read, you must know what’s coming).

To the cloud! (Sorry, I couldn’t resist.) Using Runtime Intelligence for Windows Phone, I’m able to measure total sessions – by extracting these counts by day and mashing them up with exception counts from the marketplace, I can now supply the missing ingredient to make the exception count on the App Hub meaningful. (NOTE – I had to manually transcribe exception counts from the App Hub, as there is no tabular option and the detailed download drops the daily count as it de-dupes the exceptions.)

The App Hub is careful to point out that only apps running NODO (or Mango) can report exceptions, so I first had to remove the Runtime Intelligence session data coming from earlier versions of WP7 (an interesting statistic on its own).

Here is what I see… (and a warning here – the numbers aren’t pretty)

I took two apps of mine; Yoga-pedia and A Pose for That and looked at their respective usage on NODO+ phones via Runtime Intelligence and exception reports from the App Hub and then calculated the ratio of sessions to exceptions.

The time period I used for this test was the two weeks from June 12 to June 25. During that time, this is what I observed:
  • 66% of A Pose for That sessions were run on NODO.
  • 58% of Yoga-pedia sessions were run on NODO.
Here is the ratio of exceptions reported by MSFT and sessions from Runtime Intelligence…

Ratio of session counts and exception counts by day

Now there are three likely scenarios here.
  1. Over this two-week period, both apps were crashing 1 in every 10 times they were run (HORRIBLE). I don’t think this is the case because I have run these apps myself on multiple phones hundreds of times and they have NEVER crashed.
  2. The App Hub is over-reporting exceptions (or somehow incorrectly associating exceptions with these apps). This is a beta feature on the App Hub – it’s certainly possible.
  3. Runtime Intelligence is way under-reporting the total number of sessions in a given day. Certainly possible, but given the unit testing I have done, I don’t see this as being a major contributing factor to these ratios – but certainly a possibility.

Now, I had already put a “feature tick” on the default unhandled exception handler to count how many times it was invoked during this same period. The counts I have are well below the App Hub numbers (which might suggest number 2 above is the culprit – BUT NOT SO FAST). It is more than likely that certain exceptions (perhaps a majority) would interrupt the normal feature-tracking transmission mechanism, so I would expect that count from Runtime Intelligence to be artificially LOW.

As is often the case when managing an application “in the wild,” an unanticipated question has arisen and I find that I don’t have enough data. That’s why it’s ALWAYS so important to
  • plan in advance what data is worth collecting to minimize the likelihood that you will end up in this situation and
  • be sure that your analytic solution supports rapid and easy iterations and refinements to compensate for when your planning falls short.

So how am I going to determine if
  1. my apps offer a LOUSY customer experience everywhere except for my personal phones or
  2. one or both exception reporting counts and session tracking counts are flawed?
Easy - I’m going to post an update of my apps to the marketplace this weekend with Runtime Intelligence Exception reporting turned on. What?

Runtime Intelligence for Windows Phone includes its own exception tracking capabilities – it does require that the developer activate it (that’s why I don’t have that data now), but it offers a lot more data and it can be invoked for unhandled, handled, and thrown exceptions. Further, it can be configured to collect additional information (custom to the app), AND it can be extended to offer users a dialog to provide additional feedback if they like.

I will post my results over the next few weeks – meanwhile, if anyone has any suggestions or ideas – please let me know… I honestly have no idea how this little mystery will play itself out.

Before I sign off – here is one more tantalizing clue (although it may also be a red herring). When I look at the limited unhandled exception data currently being returned by Runtime Intelligence (I can see tower location, device manufacturer, OS, etc.), I see that well over 50% of the phones that had an exception were localized to a language OTHER THAN en-US – and that is way out of proportion to the actual usage trends that I have been tracking (and posted in earlier entries). Further, the localizations that had the greatest “disproportionate” number of unhandled exceptions were de-DE and de-AT. Coincidence? Conspiracy? We don’t need to guess – we will soon have the facts!

PS here are two links that may be of interest:

Enjoy!

Saturday, June 25, 2011

A Webinar on Monetizing Mobile Apps with Analytics

For those who want a little more detail on the specific coding steps, as well as an update on the latest application of analytics to mobile app development, we’ve scheduled a webinar. Here’s the info… (The first timeslot of 100 filled up in a few hours, so this is a later date. We’ll keep scheduling these as long as interest is there.) Cheers.

Title: Monetize Mobile Apps with Analytics (August 4th at 11:30 EDT)
Registration: https://www3.gotomeeting.com/register/676857398

Description: In this 60 minute webinar, we will take a live WP7 app and use real-world analytics to illustrate:
  1. The impact of try/buy scenarios on paid apps
  2. The relationship between free and paid versions of an app
  3. Strategies for ad-driven app design that consider page location; first-time, occasional, and power-user patterns; cultural trends; and other demographics including carrier and model profiling.

Preparation: NONE required. However, attendees are likely to get more from the presentation if they have already:
• Installed and are familiar with the Microsoft Windows Phone 7 development tools
• Installed and have some familiarity with PreEmptive Solutions Runtime Intelligence
• Installed and navigated around the free SKU of the sample application that will be referenced in the presentation. The free app is Yoga-pedia.

Friday, June 24, 2011

Improving Ad performance: Correlating ad activity with feature usage and user behavior

In this third installment on application analytics patterns and practices I’m going to focus on how Runtime Intelligence can be used to shed light on Ad activity within the context of one or more applications. While the use cases covered here are nowhere near exhaustive, I’m going to show how to answer the following questions (and hopefully give some indication as to why you may care about the answers):

  • What are ad impression volumes across multiple apps?
  • What are the click-through rates (the ratio of users clicking on ads to the volume of impressions) across various pivots?
  • What influence does culture (country of origin) have on click-through rates, e.g. are Germans more or less likely to click on ads versus Italians?
  • What carriers/ISP providers are giving me the most business, e.g. where are my users most likely to be found?
  • Where are users spending most of their time inside an app? Does that usage pattern correlate with a user’s likelihood of clicking on an ad?
  • Do returning users interact with ads differently than first time users or power users?
Many of these metrics are valuable in scenarios other than ad effectiveness of course (knowing where users spend their time and understanding how power users behave are two obvious examples), but for this installment, I am going to focus exclusively on how Ad interaction can be viewed across these metrics.

Implementation
I'm using the same trusty one line method WhatPoseWhen that I described in the first installment – this time, I call the method on the New Ad event (to count impressions) and the Ad Engaged event (to count clicks on ads). I could just as easily collect data on any other ad-related event and grab any data that is available to the program at that point in its execution as well. Here is the code for that method in its entirety:


// Intentionally a no-op: Dotfuscator injects the instrumentation that
// captures the parameter values whenever this method is called.
private void WhatPoseWhen (string page, string selection)
{ return; }

The first parameter tells me from which page the method is being called, and the second parameter tells me why I might care (e.g., was a new ad displayed, etc.).

I pass the page name and the ad event into WhatPoseWhen, and Runtime Intelligence grabs these parameters and sends them up to the repository (no programming for this). I can then correlate the ad activity with the sessions, feature usage, and runtime stack data that I am getting as part of Runtime Intelligence.
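To make that concrete, the wiring amounts to a couple of event handlers (a sketch – the event names here follow this post’s terminology and the page name is illustrative; check your ad SDK for the actual event names):

// Sketch: report ad activity through the instrumented method. "NewAd" and
// "AdEngaged" mirror this post's wording; actual event names vary by ad SDK.
adControl.NewAd += (sender, args) => WhatPoseWhen("FindAPoseDetail", "New Ad");
adControl.AdEngaged += (sender, args) => WhatPoseWhen("FindAPoseDetail", "Ad Engaged");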

For these metrics, I export my CSV data into a regular Excel spreadsheet and then generate the pivot tables shown below.

App background
I always like to use data from true production apps rather than fabricate data sets; I am using two apps that I wrote and launched on the marketplace that are both ad-driven, Yoga-pedia and A Free WPC Yogi – the former is a free version of a yoga app that (hopefully) helps to drive sales of an upgrade to A Pose for That. A Free WPC Yogi plays a similar role for The WPC Yogi, a tailored version of A Pose for That targeting WPC 2011 attendees.


The following post uses their Ad activity over the same one week period.

Impression counts
The following pie chart shows the “new ad” event count by application. As you can see, Yoga-pedia has roughly 4X the number of ad impressions, and given the fact that these apps are very similar (but not identical) in their behavior, this also roughly correlates to the volume of usage.


Click-through rates
However, when I divide the total number of “ad engaged” events by the total number of “new ads,” I see that A Free WPC Yogi has a 28% higher click-through rate (1.78% versus 1.37%). In point of fact, the demographics of the app users are quite different (randomized consumers versus MSFT partners attending WPC 2011).


Advantage: This intelligence helps to segment users by differences in their behavior and to do a better job of targeting those differences across apps.




Impressions by country (or culture)
Runtime Intelligence can grab the IP address of the sending tower – this is not personally identifiable and cannot be used to locate an individual with any precision – but it is more than adequate to identify country, state, and city. In the following graph, I simply count new ad events by country and show the top 10 countries by impression volume.


Advantage: If your app has a cultural bias that would benefit from localization, understanding where your users are can help prioritize those localization efforts.



Click-through rates by country (or culture)
The following bar chart calculates the click-through rates for the top 10 countries listed above. What is interesting here is that there appears to be a significant difference in click through rates by country (culture).


Advantage: Understanding when/if users from specific cultures are significantly more likely to respond to (click on) ads can further help to prioritize localization or marketing investments.




Impressions by ISP provider (top 25)
To produce the next graph, I used an application to tell me who owned the IP addresses that my mobile clients are using (I used IP2Location – but there are many of them out there).

This is a nice way to see who my users favor in terms of their carrier. Here I only show the top 25.

Advantage: Understanding carrier popularity will help focus business development/marketing efforts and better manage potential risks associated with how your users may be negatively impacted by upgrade schedules (delays). Will your next app be dependent upon Mango?



Sessions per app page
In the raw CSV files that can be exported from the Runtime Intelligence portal, there is a column, ApplicationGroupId. The value in this column is unique for all signals (messages) sent from within a single app session. In other words, I can use this field to organize all user activity into the corresponding user sessions. This is helpful for plotting specific user patterns.

The following graph simply counts the unique occurrences of ApplicationGroupId values by page name value (recall that this is the first parameter of the WhatPoseWhen method); a sketch of the underlying pivot follows below. This avoids counting multiple views of a single page within a single session and tells me how popular specific pages are across my user base. For this posting and for illustration, I’m only showing data for five specific pages.
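If you’d rather script this pivot than build it in Excel, it boils down to a distinct count per page (a sketch – the row type below is a minimal stand-in for a parsed CSV record, and your column names may differ):

using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for a parsed CSV record; only the columns used in
// these sketches are modeled.
class Row { public string ApplicationGroupId; public string PageName; public string Anid; }

// Count each session (ApplicationGroupId) at most once per page.
static Dictionary<string, int> SessionsPerPage(IEnumerable<Row> rows)
{
    return rows.GroupBy(r => r.PageName)
               .ToDictionary(g => g.Key,
                             g => g.Select(r => r.ApplicationGroupId).Distinct().Count());
}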


FindAPoseDetail and BrowseSelectPose are central to the user experience (browsing for yoga poses and then drilling into a specific pose for detailed imagery and instruction). TellMeMore is the page where I describe what comes with the paid version of the app (nice to see that 10% of my users deliberately choose to investigate the upgrade possibility), and AppGuide and TopicList are essentially app documentation; I can see that these pages are not hit very often – and that’s not a bad thing – users should not need the documentation after their first use.

So – this graph is telling me that

a) My users are spending their time using the app rather than figuring out how to use the app, and
b) I am at least getting my users’ attention regarding a possible upgrade – perhaps my content is not compelling enough if my conversion rate does not correlate.

Advantage: broad user proofing can be used to validate developer assumptions about user experience and effectiveness of pages for their specific purpose.
Ads shown per page compared to volume of times viewed
Next I calculate the average number of ads shown per page by dividing the total count of New Ad messages for a page (this combines the two parameters: page name and the event New Ad) by the total count of times the page is shown: TOTAL ADS SHOWN PER PAGE / TOTAL TIMES PAGE VISITED.

I use the same ad duration interval across all of my pages – so this is actually another means of calculating how much time my users are spending on each page (this can be done with Runtime Intelligence alone, but in this case, I don’t have to do that).

The graph below shows the average number of ads shown per page and maps them to where they rank in terms of how often the page is visited.


Happily, the two core pages of my app also get the most ads (and are also where my users are stopping to spend time). I can also see that users spend more time on detailed pose descriptions than they do browsing – even though they browse more often than they drill down (which makes perfect sense).

Sadly, my upsell page is getting the least love – I definitely have to work on making this page more engaging.


Advantage: Ad frequency by page provides insight into where users spend their time. Calculating click-through rates by page identifies where users stop to look around and may be most open to suggestion.

Returning users and sessions per user
Another column in the CSV extract is the ANID – this is either the result of hashing the true ANID from a user’s phone (it is not the actual ANID value) or, if the user opts out of that, a GUID generated by our software and written to isolated storage. In either case, this value acts as a unique user identifier.

The ANID can be used to identify new and returning users. Dividing session count (ApplicationGroupId) by ANID count gives the average number of sessions per user; a sketch of that math follows below. The following bar chart takes the 10 ANIDs with the highest session counts and compares the resulting sessions-per-user value to that of the rest of the user base (roughly 500 other users).
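The per-user math is the same kind of distinct count (again a sketch, reusing the Row type from the earlier sketch):

// Sketch: average sessions per user = distinct sessions / distinct users.
static double AverageSessionsPerUser(IEnumerable<Row> rows)
{
    int sessions = rows.Select(r => r.ApplicationGroupId).Distinct().Count();
    int users = rows.Select(r => r.Anid).Distinct().Count();
    return (double)sessions / users;
}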


What I see is that there is a core group of users who are heavily using my apps (YAY!). Now that I know who they are, I can zero in on their specific behaviors, how they relate to my ads, what features they use most heavily, etc.


Advantage: Segmenting users into new, returning, and power categories dramatically improves a developer’s ability to target, prioritize, and validate development, marketing, and support activities.

Conclusion

I hope to have shown how, using Runtime Intelligence, developers can materially improve their ability to build more effective applications and refine their advertising strategy while coordinating that strategy with complementary upsell strategies as well.


Advantage: Development!

Tuesday, June 14, 2011

Increasing App sales with Analytics: Free apps versus trials

In my previous entry I introduced my app, A Pose for That, and explained how I had instrumented it with Runtime Intelligence to better track user experience and behavior. As a case in point, I illustrated how, by strategically placing upgrade opportunities in various locations inside my trial version, I was able to increase my conversion rates – perhaps by as much as 50%!

HOWEVER, when I was at MIX11, a very experienced developer (let’s call him David, because that’s actually his name) told me that he had already established the optimal app blend to maximize revenue – have a free app (not a trial version) that also offers ways to upgrade to the premium app. He pointed out that trial apps do not show up as free in the marketplace and are therefore almost always overlooked by most casual marketplace browsers. A free app gets the eyeballs that a trial misses.

For those who know me, one of my core principles is that my ideas never have to be original, they only have to be good – so is David’s idea really a good one?

To test it out, I created Yoga-pedia, a free app that included the browsing capabilities of A Pose for That with good imagery and instruction, but did not include the pairing of poses to real-world situations (a feature I believe is valuable) or flows (the stringing together of multiple poses). On the welcome page (and in one or two other places) I give users a chance to learn more about our software and upgrade. Here are the welcome page and the “tell me more about why I should upgrade” page:


I instrumented the various points where users can upgrade in both the trial and the free app so that I can compare BOTH the usage levels of the two apps AND the upgrade requests that stem from that usage. So… let’s go to the videotape – or better yet, Runtime Intelligence. (Note – these specific graphs are built by extracting the data from the Runtime Intelligence repository into a spreadsheet and then generating a simple pivot table.)

By looking at application starts (not downloads in the marketplace sense of the word), the graph showing App Runs seems to support David’s logic; my free version, Yoga-pedia, takes off like a rocket and within 24 hours eclipses trial activity in dramatic fashion. …but it also seems to be cannibalizing trial activity – should I care? (NOTE – I am combining usage of multiple applications – not always easy to do with canned dashboards.)




I probably should care IF users are more likely to upgrade from A Pose for That trials versus from Yoga-pedia. In other words, are my sales going up because of the free app even though it is depressing my trial volume? Let’s go to Runtime Intelligence one more time…



What the graph above shows is that upgrades from my trials also decreased dramatically with the launch of Yoga-pedia, BUT the volume of upgrade requests from within Yoga-pedia more than made up for that shortfall. (NOTE – I am combining feature usage across multiple applications – not always easy to do with canned dashboards)

In the one week where both the free and the trial versions lived side-by-side, the free version generated 86% of the upgrades.



More important is the bottom line: I saw an 85% increase in the total number of upgrades when I had the combination of both a free and trial version of my app available.

Coming up next (I promise this time) will be a discussion of the last leg of David’s magic formula for success – making your free version ad-driven. What will Runtime Intelligence be able to tell us about that?

Wednesday, June 8, 2011

Implementing Customer Feedback Forms AND fine tuning try/buy strategies with Runtime Intelligence

My Adventures in WP7 App Development: a beginner’s tale

I deployed my first WP7 app to the marketplace on May 13th. Prior to that, I had not written a line of code in nearly 20 years and so I think I can safely call myself a beginner. The fact that I might actually have something to share with the broader (and almost universally more experienced) development community shows how effective development tools have become and how wide-open the smartphone market is at this point in history.

My app (A Pose for That and the free alternative Yoga-pedia) pairs user situations and ailments with yoga poses – the app essentially uses the smartphone as an intelligent, just-in-time publishing platform.

USER SURVEY DATA TRANSPORT, MANAGEMENT, AND REPORTING

One of the capabilities I wanted to include in the app was a simple user survey form – I wanted to know how often users practiced yoga on their own and whether they hoped that this app would increase or improve their yoga practice – but I didn’t want to ask users to write an email (too much effort for them) and I did not want to incur the extra programming, setup, and expense of implementing my own content management store (too much effort for me).

Here’s what I did, for free, and with (virtually) no programming whatsoever…

I used Expression Blend to build my form (no programming), Dotfuscator for Windows Phone to inject the data collection transport logic (no programming), and the Runtime Intelligence Service for Windows Phone to store, manage, and publish user responses (again, no programming). I had to write one method (one that I actually reuse in a variety of ways including try/buy strategy tuning and ad monitoring that I will blog more on later).

That method (in its entirety) is:

// An empty “marker” method – it does nothing at runtime as written.
// Dotfuscator injects the analytics transport logic into the built XAP,
// capturing the page and selection arguments at every call site.
private void WhatPoseWhen(string page, string selection)
{
    return;
}

…but I am getting ahead of myself. Here is a screen shot of the survey form:

[Screenshot: the two-question user survey form]

It asks my two basic questions, each with a three-way radio button group to indicate true, false, or no comment. When the user leaves this page for any reason other than a tombstone event, I construct a single string that captures the user’s response. For example, if the user answered in the affirmative on both counts, the app assembles the string “I practice yoga 2X per week or more And I hope that this app will increase and/or improve my practice.” and puts it in a local variable UserFeedBack. If the user answers in the negative on both counts, I just assign the string “And.” Then I call my custom method (above) like so:

WhatPoseWhen("feedback", UserFeedBack);

That’s it for my coding – I just build the app.
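
For readers who want to see those pieces in one place, here is a minimal sketch of what the survey page’s code-behind might look like. This is my illustration, not code lifted from the app: the control names (practiceYesButton, improveYesButton) and the navigation override are assumptions; only WhatPoseWhen and the strings come from the description above.

// Minimal sketch of the survey page, assuming two RadioButton groups
// with hypothetical names practiceYesButton and improveYesButton.
public partial class FeedbackPage : Microsoft.Phone.Controls.PhoneApplicationPage
{
    // Empty marker method – Dotfuscator injects the transport logic
    // into the built XAP (see the feature attribute step below).
    private void WhatPoseWhen(string page, string selection)
    {
        return;
    }

    protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);
        // (The real app also skips tombstoning events; that check is
        // omitted here for brevity.)

        // Assemble one string from the two answers; "And." is the
        // all-negative default described above.
        string UserFeedBack = "And.";
        if (practiceYesButton.IsChecked == true && improveYesButton.IsChecked == true)
        {
            UserFeedBack = "I practice yoga 2X per week or more And " +
                           "I hope that this app will increase and/or improve my practice.";
        }

        WhatPoseWhen("feedback", UserFeedBack);
    }
}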

Now, I go to Dotfuscator for Windows Phone. It takes about 3 minutes to register for the service at www.preemptive.com/windowsphone7 (fill out the form at the bottom of the page) and another 5-10 minutes to point Dotfuscator to the XAP file in question, exclude third party assemblies (in my case, Telerik’s WP7 controls used elsewhere in my app), and tag the entry and exit points of my app inside the Dotfuscator UI.

The last step required to complete my user feedback form and service is to add one attribute for Dotfuscator, a feature attribute as follows:

I right-clicked on the method WhatPoseWhen (in the left pane) and selected Add Feature Attribute – all I needed to do in the form on the right-hand side of the screen was type a name for the feature (WhatPoseWhen) and insert an * in the ExtendedKeyMethodArguments property. This tells Dotfuscator to grab any and all parameter values passed into this method whenever it is called and send them up to the Runtime Intelligence portal. In this case, I am identifying the context (feedback) and passing the string that I constructed from the user’s responses inside the variable UserFeedBack.

This takes 2 minutes tops to configure; then I press the “build” button and out pops my new and enhanced XAP file. I submitted my app to the marketplace with no special handling required and then waited for the numbers to roll in. This takes days between marketplace processing, user adoption, and Runtime Intelligence number crunching. Days is still much faster than the weekly marketplace statistics (and obviously much more flexible) but slower than the ad-server stats – it’s right in the middle.

The results can be seen (in part) in this screen capture – I can log into my Runtime Intelligence account and select custom data to see the following (note – alternatively, I can extract CSV files for further analysis):

[Screenshot: Runtime Intelligence custom data view]

The highlighted row with “page” and “feedback” shows that 24 users went to the feedback page during the selected interval (I scratched out the actual interval because of the sales numbers that are reflected here too – it’s none of your beeswax). In the last row shown here (also highlighted) you can see that, of the 24 page views, 16 of these users indicated that they did NOT practice 2X per week and they did NOT expect this app to change that. (The more positive responses can be seen lower down on the page but are not shown here.)

The bottom line is that I was able to implement a user survey mechanism – including secure transmission, storage, and basic analysis – with essentially no programming and no requirement to set up a hosted content management system, ALL IN LESS THAN AN HOUR.

FINE TUNING TRY/BUY STRATEGIES

I also call my trusty little method WhatPoseWhen throughout the app during trial periods. The screen capture above also shows basic try/buy behaviors. A Pose for That implements a try/buy mechanism. If a user is in trial mode, they are presented with an opportunity to upgrade right on the main page of the app – that is the “UpgradeNow” option. Additionally, whenever a user selects functionality that is NOT included in the trial (say, showing a large image of a pose with detailed instructions), they are presented with a screen letting them know that they have bumped up against the limits of the trial version and asking whether they would like to upgrade right then and there – more of an impulse upgrade.
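
As a rough sketch of how such a gate can be wired up (whether you use a full page or, as here, a simple MessageBox): LicenseInformation.IsTrial() and MarketplaceDetailTask are the stock WP7 APIs; the method names and feature strings are my own illustration, not the app’s actual code.

// Hypothetical "impulse upgrade" gate on a trial-only feature.
// Requires: using Microsoft.Phone.Marketplace;
//           using Microsoft.Phone.Tasks;
//           using System.Windows;
private static readonly LicenseInformation License = new LicenseInformation();

private void ShowLargePoseImage()
{
    if (License.IsTrial())
    {
        WhatPoseWhen("trialLimit", "LargePoseImage");    // record the bump
        MessageBoxResult answer = MessageBox.Show(
            "Detailed pose pages are part of the full version. Upgrade now?",
            "Trial limit reached", MessageBoxButton.OKCancel);

        if (answer == MessageBoxResult.OK)
        {
            WhatPoseWhen("upgrade", "impulse");          // record the conversion
            new MarketplaceDetailTask().Show();          // open this app's listing
        }
        else
        {
            WhatPoseWhen("upgrade", "declined");
        }
        return;
    }

    // Full version: show the detailed pose page as usual...
}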

What the screen capture above is telling me is the following:

1) Users were presented with the “impulse upgrade” option inside a trial 130 times during the selected time interval.

2) When presented with this choice, users chose NOT to upgrade and returned to the previous page 111 times (85% of the time they said thanks but no thanks). However, it also shows that 19 times (15% of the time) they DID choose to upgrade on the spot.

3) During the same interval, 38 users selected the “Upgrade Now” button on the main page.

I have not chosen to do true A/B testing in this case, but one thing that I am almost certain about is that some of the 19 users who upgraded “inside” the app would NOT have gone back to the main page at a later time and upgraded via the standard menu choice.

My two-pronged upgrade pattern may have increased my conversion counts during this interval from 38 to 57 (the 19 impulse upgrades on top of the 38 menu upgrades) – an increase of 50%!

Using the CSV extracts, I can dive deeper to see which features are more likely to result in an upgrade and also get a sense of how much is too much, e.g. users who abandon the app and never come back. (Note: I am not EVER transmitting the ANID or other personally identifiable information.)
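
For anyone curious what that deeper dive can look like, here is a rough sketch of tallying feature messages from a CSV extract. The file name and column positions are assumptions – check the header row of a real extract before relying on them.

// Rough sketch: count occurrences of each (feature, value) pair in a
// Runtime Intelligence CSV extract. Columns 0 and 1 are assumed to be
// the feature name and the extended-key value, respectively.
using System;
using System.IO;
using System.Linq;

class FeatureTally
{
    static void Main()
    {
        var groups = File.ReadLines("features.csv")
            .Skip(1)                               // skip the header row
            .Select(line => line.Split(','))
            .GroupBy(f => f[0] + " / " + f[1])
            .OrderByDescending(g => g.Count());

        foreach (var g in groups)
            Console.WriteLine("{0}: {1}", g.Key, g.Count());
    }
}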

COMING UP NEXT: USING RUNTIME INTELLIGENCE TO TRACK AND OPTIMIZE AD PLACEMENT STRATEGIES

Friday, May 13, 2011

It takes an ecosystem

This is my simple tale of how two people with an idea and no programming skills conceived, developed and launched a smartphone app in just a few weeks’ time. …And how it would never have happened without development tools AND PEOPLE committed to helping others express themselves through code. Here goes…

My wife is a spectacular yoga teacher – and, in support of that, I have from time to time worked with her to produce DVDs – typically planned to ship around her birthday, May 8 (you can see a sample of our handiwork on Amazon at Introduction to Qi-Yoga).

We had a lot of fun working on these and, while sales were modest, they were global and the reviews were universally positive – given the fact that I knew nothing at all about video/sound editing/production before our first project – we were happy with our results.

None of this would have been possible without the help of Apple who, with their Final Cut tools, broke the back of the high-end (expensive, complex) video editing workstation vendors (like Avid). Apple made it possible for non-professionals (like me) to produce professional video content.

…but DVDs are so 2008 – smartphones are where it’s at now – right?

My wife agreed - the world needed a yoga App for that!

And given Apple’s dominance of the smartphone market, this should be a snap – right?

Sadly, that proved to be very very wrong.

It turns out that Apple does not love the development world in the same way that they loved the design world. Ironically, Apple is actually the “Avid” of smartphone development. In point of fact, it’s been Microsoft that’s been having a long term love affair with development all these years…

…Enter Windows Phone 7.

A little over a month ago, I approached my wife with the idea of developing a yoga app that does more than push content – we wanted one that actually paired poses with everyday situations and delivered just the right amount of information at just the right time. She loved the idea and we went to work – she developed the knowledgebase and I was tasked with writing the code. The fly in the ointment was that I had not written a line of code in over 20 years (and trust me, comparing ’80s programming tools to today’s is like comparing a cassette tape player to an MP3 player) – I was now an absolute beginner.

The only advantage I had was that I was aware of the software, training, and social resources that Microsoft and their partners were pushing. Here is how we did it… (To be clear, I have made NO attempt to build a comprehensive list of WP7 resources – there are lots of those out there. The following are the specific steps and resources that I used.)

1) Download the development tools from Microsoft and join the App Hub. Go to http://create.msdn.com/en-US/

2) Download analytics from PreEmptive Solutions. Go to http://www.preemptive.com/windowsphone7.html

3) I signed up for a promotion from Telerik to get their controls for free. Their components were awesome and the support was fantastic (meaning patient with me). Go to http://www.telerik.com/products/windows-phone/getting-started/user-groups.aspx

Then I had to get started… Remember, I had no C# or .NET development skills – so I began with:

4) Windows Phone 7 Development for Absolute Beginners. http://channel9.msdn.com/Series/Windows-Phone-7-Development-for-Absolute-Beginners

While I did not go back to these videos once I got rolling (video is just too slow to navigate around), these were essential to getting me started.

5) The following specific links include lots of sample code and all levels of instruction (from step-by-step to technical reference manual level) – the ability to be spoon-fed and then drill down when you need a specific piece of detail is what made it possible for me to avoid having to learn what professional developers have to learn.

* Building a Windows Phone 7 Application from Start to Finish http://msdn.microsoft.com/en-us/library/gg680270(v=PandP.11).aspx?ppud=4

* MSDN Blogs > Silverlight SDK http://blogs.msdn.com/b/silverlight_sdk/

* The APP HUB blogs, community and resources http://create.msdn.com/en-US/

* Binary WasteLand materials on CSHARP and WP7 http://binarywasteland.com/category/programming/languages/csharp/

Where am I now?

I built and tested my app and submitted it to the Marketplace on May 10. On May 12th, my app officially hit the Microsoft Marketplace (it hasn’t even been 24 hours yet).

If you have a Windows Phone - check out A Pose for That now.

Now comes the next question: when it came to our DVDs, Apple did nothing to help us promote our video – they only made it possible to produce it. I will be writing more to cover what works (and what doesn’t) when it comes to marketing a WP7 app.

The ecosystem is more than a platform and more than “a village” – it’s both

Before I end my tale, I must shout out to the people who helped us along the way (I am not giving full names since I have not asked permission). Everyone here helped me on their own time just because they are passionate about the platform. They were David and Bill from PreEmptive, Valio from Telerik, David from Wirestone, Pierre from VinoMatch, and Gergely from Cocktail flow. Their shared tribal knowledge shortened this project and improved the result - of course, I take full responsibility for all mistakes and hacks.

Microsoft has long understood the value of a development ecosystem, but I don’t think even they would have predicted how transformational a development-centered approach could become with the emergence of the smartphone. I know that there are lots of factors going into the success or failure of Windows Phone 7, but the value of Microsoft’s focus on helping non-professional developers produce professional-grade code cannot be overstated.

My wife and I already have our product roadmap down – there’s a lot more to come - look out!
