Friday, October 9, 2015

EU's highest court throws out privacy framework for US companies: small businesses suffer

Three ways that small tech businesses are just like every other small business – except when we’re not

Here’s the issue: small tech companies have all of the awesome characteristics of small businesses in the broadest sense (they’re job creators, innovators, revenue makers…), but they often find themselves having to navigate complex regulatory and compliance issues that have historically been reserved for (large) multi-national corporations – all while building their businesses on technology that’s evolving far faster than the regulations that govern it. (As a footnote here, let me throw in a commercial plug for ACT – a trade association focused on exactly these issues.)
Tuesday's nullification of the Safe Harbor framework (a system that streamlined the transfer of EU user data to US businesses) – widely seen as a consequence of the NSA spying scandals – is a perfect example. In this case, we see how small tech businesses can get caught in the middle of a p^%*ing match between the EU and the US federal government. And I don’t care what side of the aisle you’re on – everyone loves small business growth and innovation, right? Here’s a great bipartisan issue that our lawmakers should be able to address – don’t you think?
One: Like every small business, we can’t afford to have a permanent team of lawyers on our payroll …but small tech businesses can go international overnight, forcing them to navigate multiple international jurisdictions.
The Safe Harbor system eliminated a raft of complexity and potentially thousands of hours of legal work required to manage EU user data – making it feasible for small tech businesses to do business inside the EU.
Small businesses simply cannot be expected to navigate a maze of international privacy obligations – each with their own rules – and penalties. Without the Safe Harbor system (or something to replace it), previously open markets will soon be out of reach.
Two: Like every small business, we often rely on 3rd party service providers for professional services (legal, payroll, HR, etc.) …but small tech businesses also rely upon 3rd party providers for services rendered inside their apps (versus inside their offices) while those apps are being used by their clients – for example, payment processing and application analytics.
This distributed notion of computing introduces multiple layers of business entities at the very sensitive point where the application is being used in production – exponentially expanding the legal and compliance problems (each service provider must also have their own agreements within each country/jurisdiction).
This is now more than just unmanageably large and expensive – it’s potentially unsolvable. Small businesses deal with lots of unknowns (security vulnerabilities, for example), but this new wrinkle will almost certainly have a chilling effect – either on how we serve EU markets and/or on how we rely on 3rd party service providers (a core development pattern that, if abandoned, would make US dev firms less competitive).
Three: Like every small business, small tech companies cannot change direction with the swipe of a pen – yet that is exactly how quickly laws and regulations can come and go.
While the Safe Harbor framework was instantaneously nullified with one ruling, applications that were compliant moments before are now potentially in jeopardy – and they’re still running and still sending data, whether the app owner likes it or not.
Bottom line: this is a regulatory and governance issue, and we need governments to work it out.
Everyone loves small businesses right? We need…
  • To know what’s expected of us
  • Agreement on what compliance looks like
  • Visibility into enforcement and penalty parameters
Then, we can do what we know how to do – make smart technical and business development investments.

Other material

Here are three more links:
Two days ago, when the Safe Harbor ruling first came down, I posted an explanation of how PreEmptive Analytics can re-direct application usage data to support the kind of seismic shifts in architecture that might follow here.
That same evening, I was put in touch with Elizabeth Dwoskin, a WSJ reporter who was writing a piece on the impact that this sudden move would have on small businesses – my conversations with her are actually what prompted this post (WSJ has already posted her well-written article, Small Firms Worry as Big-Data Pact Dies).
You might ask, if her article is so well-written (which it is), why would I have anything to add? She was looking for a “man-on-the-street” (dev-in-the-trenches) perspective on this one particular news item, BUT, the Safe Harbor ambush is just one example of the larger issues I hope I was able to outline here.

How will today's Safe Harbor ruling impact users of multi-tenant application analytics services?

Earlier today, the Safe Harbor system was overturned (see Europe-U.S. data transfer deal used by thousands of firms is ruled invalid).
The legal, operational, and risk implications are huge for companies that have, up until today, relied on this legal system (either directly or through third parties that relied on Safe Harbor) to meet EU's privacy obligations. 
What are the implications for application analytics solutions (homegrown or commercially offered)? It's not clear at this moment in time, but one thing is for sure - it is a lot harder to turn off an application, re-architect a multi-tier system, or force an upgrade than it is to simply sign a revised privacy agreement. 
Multi-national companies that continue to transfer and process personal data from European citizens without implementing an alternative contractual solution or receiving the authorization from a data protection authority are at risk for legal action, monetary fines, or a prohibition on data transfers from the EU to US. 
If this transfer of data is embedded inside an application/system's architecture, then a wholesale development/re-architecture plan may be required. Of course, re-architecting systems to keep data local within a country or region may simply be impossible (for reasons of efficiency, cost, …) UNLESS the system is, itself, already built to provide that kind of flexibility.
Happily, PreEmptive Analytics is. 
  • PreEmptive Analytics endpoints (in addition to on-prem, of course) can live inside any Microsoft Azure VM. Clients with very specific requirements as to where their actual VMs are hosted would always be able to meet those requirements with us. …and what about when a client gets even more specific (country borders, for example) or when they want to support multiple jurisdictions with one app? (This leads to the second point…)
  • PreEmptive Analytics instrumentation supports runtime/dynamic selection of target endpoints. While this would take a little bit of custom code on the developer’s part, our instrumentation allows an application – at runtime – to determine where it should send its telemetry (perhaps a service called at startup that consults a lookup table: if the app is running in Germany, send it to …; if it’s in China, send it to …; if it’s in the US…). This would allow an app developer with an international user base to support conflicting privacy and governance obligations with one application.
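The runtime lookup-table idea above can be sketched in a few lines of Java. Everything here is illustrative: the endpoint URLs, the class name, and the region-detection approach are hypothetical stand-ins, not the actual PreEmptive Analytics API.

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch of startup-time endpoint selection.
// The URLs below are placeholders, not real analytics endpoints.
public class EndpointResolver {

    // Lookup table: country code -> jurisdiction-appropriate endpoint.
    private static final Map<String, String> ENDPOINTS = Map.of(
            "DE", "https://analytics-de.example.com/collect",
            "CN", "https://analytics-cn.example.com/collect",
            "US", "https://analytics-us.example.com/collect");

    private static final String DEFAULT_ENDPOINT =
            "https://analytics-us.example.com/collect";

    // Called once at startup to decide where telemetry should go.
    public static String resolve(String countryCode) {
        return ENDPOINTS.getOrDefault(countryCode, DEFAULT_ENDPOINT);
    }

    public static void main(String[] args) {
        // In practice the country might come from a geolocation service;
        // the JVM's default locale is used here as a simple stand-in.
        String country = Locale.getDefault().getCountry();
        System.out.println("Sending telemetry to: " + resolve(country));
    }
}
```

The key design point is that the routing decision lives in one small, replaceable service rather than in the instrumentation itself, so a single binary can honor different jurisdictions.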
It may turn out that keeping German application analytics data in Germany is as important to US companies now as it is to German companies. One thing's for sure – the cadence and road map for application analytics cannot be tied to the cadence and road map of any one application; the two have to live side-by-side, but independently.

Instrumentation Injection versus API

Instrumentation Injection – the process of embedding (inserting) instructions into a binary post-compile with no programming whatsoever – offers a powerful means of improving application monitoring, application security, and application lifecycle management. There are a number of scenarios where code injection makes a lot of sense - for this post, I'm really focusing only on the injection of application analytics instrumentation.
Injection is not a panacea, and it is not always the best approach – though it often IS. Developers often don't warm to injection for a number of legitimate reasons (control and precision being the two most noteworthy), but injection offers a number of distinct advantages too:
  • No coding frees developers up for more critical activities and reduces the cost of development. 
  • Static analysis comes along with injection (before instrumentation can be injected, the target binary must be "analyzed" – it's like finding a vein before the needle goes in), allowing additional coding requirements to be automated (eliminated) as well. For example: exception monitoring is simplified by auto-generated try/catch logic across Java and .NET (and inside existing handler frameworks); the logic to package custom data points for transmission; and the linking of analytics libraries into existing binaries. All of these would otherwise require additional developer time and effort, and all are eliminated with injection.
  • Support for multiple instrumentation patterns across release phases (e.g. beta, production, trial) without having to branch code is really only possible with injection, because the decision as to how and where to instrument can be made independently of the code itself.
  • The configuration file that determines what gets injected is a standalone artifact that can be preserved as an audit trail for governance and compliance obligations.
  • Since injection can be applied independently of each development team, standards and conventions around instrumentation can be implemented across applications and development teams – including those published through enterprise marketplaces.
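To make the exception-monitoring point concrete, here is roughly what injected try/catch instrumentation is equivalent to. The developer writes the first method; post-compile injection makes the binary behave like the second. This is an illustrative sketch only – the `Telemetry.reportException` hook is a hypothetical stand-in for the injected packaging/transmission logic, not a real PreEmptive API.

```java
public class OrderProcessor {

    // What the developer writes and compiles: no instrumentation at all.
    public static int unitPrice(int total, int quantity) {
        return total / quantity; // throws ArithmeticException when quantity == 0
    }

    // What the binary behaves like after exception monitoring is injected:
    // the original logic wrapped in auto-generated try/catch.
    public static int unitPriceInjected(int total, int quantity) {
        try {
            return total / quantity;
        } catch (RuntimeException e) {
            Telemetry.reportException("unitPrice", e); // hypothetical injected hook
            throw e; // the original behavior (the exception) is preserved
        }
    }
}

class Telemetry {
    // Stand-in for the injected analytics transmission logic.
    static void reportException(String method, Throwable t) {
        System.err.println("telemetry: exception in " + method + ": " + t);
    }
}
```

Note that the injected wrapper reports the exception and then rethrows it, so the application's observable behavior is unchanged – only the telemetry is added.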
Interested in seeing how injection fits with your instrumentation requirements? The two most widely deployed and trusted application injection platforms are DashO for Java and Dotfuscator for .NET, supporting:
Application Analytics: the injection of feature, session, exception, and custom data instrumentation.
Tamper Defense: the injection of tamper detection, real-time defense, and notification services.
Shelf Life: the injection of end-of-life (expiry) logic to gracefully and safely end-of-life deployed applications.
Check'em out.