Thursday, May 17, 2018

Root detection: Xamarin devs stop hackers before they begin

How important is root detection?

  • Rooted devices can be extremely dangerous: When running on a rooted device, an otherwise harmless app can unmount file systems, kill processes, or run any arbitrary command.
  • Rooted devices are plentiful: In the annual Android Security 2017 Year in Review, Google reported that its SafetyNet service identifies over 14 million rooted devices DAILY.
  • Sensitive applications must include controls to mitigate these risks: Recent PCI Security Council guidelines and NIST controls are just two notable examples where rooted device detection and response obligations are explicitly assigned to development organizations. More generally, rooted access is synonymous with unauthorized privilege escalation and is, therefore, incorporated by reference in virtually every privacy obligation developers face, e.g., GDPR and HIPAA.

What’s new for Xamarin.Android developers?

With Dotfuscator Professional 4.35.0 and Dotfuscator Community Edition (CE) 5.35.0, developers can, for the first time, inject rooted device detection and response controls into Xamarin.Android apps (injection means the logic is inserted post-compile – no coding required).

Want to dig deep?

Read this month’s MSDN Magazine article, Detect and Respond to Rooted Android Devices from Xamarin Apps, which steps you through a detailed explanation of the feature, with links to sample code.
The article takes a sample Xamarin app, TodoAzureAuth, authored by Xamarin’s David Britch, and adds rooted device detection and response in a way that maps to the PCI Mobile Payment Acceptance Security Guidelines published in September 2017:
  • Detect that an app is running on a rooted device (offline or on a network)
  • Abort the initial session and permanently quarantine the app in future sessions
  • Report the incident to a central compliance service
  • Obfuscate the app to prevent analysis and tampering of the above controls
  • Automatically log the above implementation to demonstrate compliance for each build
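Dotfuscator’s injected checks are, of course, proprietary; still, a minimal hand-rolled sketch in plain Java illustrates the detect-and-respond pattern the list above describes. The su-binary paths are commonly cited heuristics and the method names are hypothetical – this is not Dotfuscator’s actual logic:

```java
import java.io.File;

// Hypothetical sketch of a detect-and-respond root check.
public class RootResponse {
    // Locations where the su binary is commonly found on rooted devices.
    private static final String[] SU_PATHS = {
        "/system/bin/su", "/system/xbin/su", "/sbin/su", "/su/bin/su"
    };

    // Detection: works offline because it only inspects the local file system.
    public static boolean isLikelyRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    // Response: report the incident, then abort the current session.
    // (Persisting a quarantine flag for future launches is omitted here.)
    public static void enforcePolicy() {
        if (isLikelyRooted()) {
            reportIncident();   // e.g. notify a central compliance service
            System.exit(1);     // abort the session
        }
    }

    private static void reportIncident() {
        // Placeholder: a real app would send a telemetry event here.
    }
}
```

A real implementation layers several independent heuristics (su paths, build tags, installed packages) so that defeating any single check is not enough.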

Rooted Response

The sample app highlighted in the article extends TodoAzureAuth with the behaviors illustrated in Figure 1.
Figure 1: Flow illustrating TodoAzureAuth rooted device response behavior after it has been injected with the Dotfuscator Control. Note that root detection serves as an effective proxy for Android emulator detection as well.

Obfuscated binaries

Dotfuscator also obfuscates the TodoAzureAuth app to prevent hackers from:
  • Identifying where and how the rooted device detection and response controls are implemented, and
  • Reverse-engineering embedded intellectual property (IP).
Figure 2: Sample output from obfuscated version of TodoAzureAuth.

Reporting via Microsoft App Center Integration

The custom code injected by Dotfuscator connects each rooted device detection event with the app owner’s App Center account.
Figure 3: App Center integration

Automatically generated audit records

The following Build Output can be stored and used to demonstrate that specific controls were injected on any given release.
Figure 4: Auto-logging of Build Reports

Post-compile injection configured through Dotfuscator UI

All of these controls, plus obfuscation, are configured through the Dotfuscator UI. Once configured, Dotfuscator can be invoked automatically as part of a continuous build process, ensuring that every version of every app is effectively secured.
Figure 5: Dotfuscator configuration options.

Closing thoughts (for the week of May 7th 2018 at least)

With the latest release of Dotfuscator, Xamarin.Android developers can rely upon the same application hardening and runtime detection and response controls that classic .NET developers have long relied upon for anti-tamper and anti-debugger defense – and that Android developers rely upon in our DashO for Android solution.

An app hardening use case: Filling the PCI prescription for preventing privilege escalation in mobile apps

Preventing Privilege Escalation in mobile payment apps (PCI Mobile Payment Acceptance Security Guidelines Section 4.3)

Regulators, standards bodies and IT auditors have become increasingly likely to recommend an absolute prohibition of rooted Android devices in production environments. As the 2017 PCI Mobile Payment Acceptance Security Guidelines state, “Bypassing permissions can allow untrusted security decisions to be made, thus increasing the number of possible attack vectors.”
It is only natural that the apps themselves rise up to act as a ubiquitous governance, risk, and compliance management layer – preventing, detecting, responding, and reporting on threats - including those posed by unauthorized rooted devices.
The PCI Mobile Payment Security Guidelines recommend that the following four controls be in place:
Section 4.3 Prevent Escalation of Privileges
“Controls should exist to prevent the escalation of privileges on the device (e.g., root or group privileges). … (1) the device should be monitored for activities that defeat operating system security controls (e.g., jailbreaking or rooting) and, when detected, (2) the device should be quarantined by a solution that removes it from the network, removes the payment-acceptance application from the device, or (3) disables the payment application.
(4) Offline jailbreak and root detection are key since some attackers may attempt to put the device in an offline state to further circumvent detection.” 
DashO for Android can fulfill these PCI requirements. DashO can be configured to:
  • Enforce a no-rooted-device policy wherever DashO-hardened apps run, and
  • Ensure that DashO-hardened apps trigger real-time responses including notifications, auto-exit, and even a permanent disabling of the app (a quarantine or bricking). For a more thorough treatment of anti-root controls, see DashO Root Detection & Defense is one Check that will not bounce!
In short, DashO can – with little or no programming required – inject sophisticated root detection logic as well as the logic your app needs to defend itself against these evolving attacks.
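The quarantine behavior in particular boils down to persisting the detection decision so that future launches can refuse to run. A minimal plain-Java sketch, assuming a caller-supplied data directory and a hypothetical marker-file name (not DashO’s actual mechanism, which hides and integrity-protects this state):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of a persistent "quarantine" (bricking) flag.
public class Quarantine {
    private final Path flagFile;

    public Quarantine(Path appDataDir) {
        // Hypothetical marker-file name; a hardened app would obscure this.
        this.flagFile = appDataDir.resolve(".quarantined");
    }

    // Called once root is detected: record the decision permanently.
    public void activate() throws IOException {
        Files.write(flagFile, "rooted-device-detected".getBytes());
    }

    // Checked at every launch, before any sensitive work begins.
    public boolean isActive() {
        return Files.exists(flagFile);
    }
}
```

Because the flag survives restarts, the app stays disabled even if the attacker later un-roots the device or takes it offline.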
This post-compile approach to injecting runtime controls (detect, respond, and report) is also available to meet similar anti-debugger and/or anti-tamper requirements.

Encryption’s unfortunate, unavoidable, and unfix-able gap - and how to fill it

When perimeters are breached, identities stolen and malware launched, encryption stands as information’s last line of defense. Without effective encryption policies, you will first be victimized and then held liable (punished) by every information stakeholder (customers, partners, investors, regulators, the courts, etc.).
Just this week, Wired led with the headline Tinder’s Lack of Encryption Lets Strangers Spy on Your Swipes, writing in part:
“In 2018, You'd be forgiven for assuming that any sensitive app encrypts its connection from your phone to the cloud, … But if you assumed that basic privacy protection for the world's most popular dating app, you'd be mistaken.”
Whether or not Tinder faces any legal or regulatory jeopardy, the press coverage in Fortune Magazine, Wired, and even my local morning news shows cannot be doing its market share or brand any good.
For a longer treatment of another encryption catastrophe that resulted in $300M in fines and other expenses, see the sidebar “Punishing the Victim: Anthem data breach” inside The Six Degrees of Application Risk.
The hard truth is that when data is stored in the clear (unencrypted), that data cannot be secured – and every information stakeholder knows this to be true (including bad actors). That is why, even though it is too early to predict what civil, criminal or market penalties Tinder will face, the reporter’s incredulity is so pronounced.
Best practices dictate that sensitive data be encrypted whenever and wherever possible:
  • When data is at rest (in files and databases – and especially with portable media) and
  • When data is in motion (transmitted between applications, services, and networks – and especially over public networks - as with the Tinder example above).
There is, however, one unfortunate, unavoidable and unfix-able hole in the encryption story. When data is in use (being processed by an application rather than sitting on a disk or flying across a wire), that data must be processed in the clear.
In fact, as encryption policies become increasingly effective, hackers are inexorably drawn to the next best thing: application hacking as the attack vector of choice.
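A small plain-Java example makes the gap concrete: even when standard javax.crypto APIs protect the bytes at rest, the plaintext must be materialized in application memory the moment the app needs to work on it. (The round-trip below is a deliberately simple illustration, not a recommended cipher configuration.)

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Demonstrates the data-in-use gap: AES protects the bytes at rest,
// but processing requires the plaintext back in application memory.
public class DataInUse {
    public static String roundTrip(String secret) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey key = gen.generateKey();

        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] atRest = enc.doFinal(secret.getBytes("UTF-8")); // safe on disk

        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] inUse = dec.doFinal(atRest); // plaintext: visible to a debugger
        return new String(inUse, "UTF-8");
    }
}
```

No amount of stronger encryption changes the last two lines: anyone who can read this process’s memory at that moment can read the secret.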


Fortunately, even though data-in-use must be processed in the clear, that data is typically found only in app memory; reading data in memory requires specialized utilities, and access to those tools can be limited. Debuggers are the hacker’s favorite because, in addition to exposing unauthorized data, they can also modify a running application to circumvent identity verification, authorization, and other critical controls.
…and every stakeholder – especially hackers, regulators and the courts – knows this to be true too.

Mitigating the encryption gap

Since data in use can’t be encrypted, the next best strategy is to restrict unauthorized use of debuggers, rooted mobile devices, emulators and other tools that hackers rely upon to access and modify application-resident data. Preventative, detective, and responsive controls combine to secure your applications and – by extension – the data that flows through them.
PREVENTION: Where possible, use OS and compile-time configuration settings to disable debugging and prevent remote code execution. Mobile devices, web app servers, build settings, etc. include these options to prevent precisely these kinds of exploits.
While effective as a first line of defense, these settings can be overridden, bypassed, and/or modified (assuming they are set properly in the first place). To effectively secure sensitive data-in-use, additional controls are required to detect and respond when hackers attach debuggers or tamper with an app.
DETECT & RESPOND: At a minimum, the following three progressively material scenarios must be addressed:
1)    Configuration values prohibiting debugging and preventing remote execution are NOT properly set (making it especially easy to execute this kind of exploit)
2)    A debugger is attached to a running app processing sensitive data (indicating that an attack is in progress or, at a minimum, unauthorized probing is underway), and lastly
3)    An app has been modified/tampered post build (suggesting that there has been a successful hack and the resulting compromised version is executing).
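Scenarios 2 and 3 can each be approximated in a few lines of standard Java: checking whether the process was launched with a JDWP debug agent, and comparing an artifact’s SHA-256 digest against a value recorded at build time. This is a simplified illustration of the two checks, not the logic Dotfuscator or DashO injects:

```java
import java.lang.management.ManagementFactory;
import java.security.MessageDigest;

public class RuntimeChecks {
    // Scenario 2: a debugger agent (JDWP) attached to this JVM process.
    public static boolean isDebuggerAttached() {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            if (arg.contains("jdwp")) {
                return true;
            }
        }
        return false;
    }

    // Scenario 3: detect post-build modification by comparing a SHA-256
    // digest of the deployed artifact against the build-time value.
    public static boolean isTampered(byte[] artifact, String expectedSha256)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(artifact);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return !hex.toString().equals(expectedSha256);
    }
}
```

In practice the expected digest itself must be protected (or derived) in a way that a tamperer cannot simply update it alongside the modified binary.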

Securing apps and the data that flows through them

Writing code to enforce these policies is time consuming, requires new development skills, must be coordinated across development teams, and potentially introduces additional runtime risks.

There is a better way - post-compile injection.

As offered by PreEmptive Solutions’ Dotfuscator for .NET and DashO for Android and Java, post-compile injection of anti-root, anti-tamper, and a variety of related runtime controls offers a compelling alternative to coding.
Dotfuscator and DashO Injection advantages include:
  • As a post-compile step, anti-debug and anti-tamper functionality can be:
      ○ Included in a DevOps tool chain, simplifying your development team’s tasks, and
      ○ Injected into existing executables and libraries by rebuilding rather than coding.
  • Little or no additional development effort is required,
  • Root detection and other evolving algorithms are continuously updated to keep pace with new platforms and new attack strategies,
  • While most of the injected functionality can be considered “turnkey”, there are also – by design – straightforward extensibility points to support the inclusion of proprietary defenses and reporting capabilities, and
  • The configuration file that dictates how and where controls are injected does double duty by serving as an audit trail as you harden your code.

Application development compliance and the law

Security standards bodies, regulators, legislators, and the courts recognize the necessity to secure data in use. In addition to increasing your risk of a data-related breach, failure to implement appropriate, well-understood and accepted security controls will almost certainly result in increased liability, fines, and stakeholder dissatisfaction.
For illustration, consider the following excerpts from regulatory, legal and standards bodies that demonstrate the general acceptance of these principles.

Financial Compliance

PCI Mobile Payment Acceptance Security Guidelines for Developers • September 2017
4.3 Prevent escalation of privileges. (Emphasis added)
Controls should exist to prevent the escalation of privileges on the device (e.g., root or group privileges). Bypassing permissions can allow untrusted security decisions to be made, thus increasing the number of possible attack vectors. Therefore, the device should be monitored for activities that defeat operating system security controls—e.g., jailbreaking or rooting—and, when detected, the device should be quarantined by a solution that removes it from the network, removes the payment-acceptance application from the device, or disables the payment application. Offline jailbreak and root detection and auto quarantine are key since some attackers may attempt to put the device in an offline state to further circumvent detection.
Hardening of the application is a method that may help prevent escalation of privileges in a mobile device.
Controls should include, but are not limited to, providing the capability for the device to produce an alarm or warning if there is an attempt to root or jailbreak the device.

Privacy Legislation

General Data Protection Regulation (GDPR)
Recital 83 Security of processing (Emphasis added)
In order to maintain security and to prevent processing in infringement of this Regulation, the controller or processor should evaluate the risks inherent in the processing and implement measures to mitigate those risks, such as encryption. Those measures should ensure an appropriate level of security, including confidentiality, taking into account the state of the art… In assessing data security risk, consideration should be given to the risks that are presented by personal data processing, such as … unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed…”

Application Security Standards 

Open Web Application Security Project (OWASP)
OWASP Mobile Application Security Verification Standard v1.0
V8: RESILIENCE REQUIREMENTS: Control objective: Impede Dynamic Analysis and Tampering
8.1 The app detects, and responds to, the presence of a rooted or jailbroken device either by alerting the user or terminating the app.
8.7 The app implements multiple mechanisms in each defense category.
8.8 The detection mechanisms trigger responses of different types, including delayed and stealthy responses.

You don’t need to be the fastest, but you cannot afford to be the slowest when running from the bear

If you care about data-at-rest and data-in-motion, then you need to care about data-in-use – it’s the same data and it brings with it the same responsibilities, risks and liabilities.
If your organization develops software, you need to ensure that you have implemented appropriate data-in-use controls throughout your application and DevOps lifecycles.
You will also want to update your supplier risk management checklist to ensure that suppliers are taking equivalent steps to secure your “data in use” inside their applications and services.
Do not be the slowest running from the bear - vulnerabilities stemming from "data-in-use" exploits are a real and present threat.
For .NET developers that want to get into implementation detail, here’s a terrific article from November’s MSDN Magazine that includes links to sample code. 

Monday, October 2, 2017

GDPR and Application Development: My question to the EDCC - asked and answered

Development and the Law - Development may often be overlooked - but it is never forgotten nor is it exempt.
Working for an ISV with European clients – many of which are large corporations that develop their own applications that process EU PII – I’ve been watching this space closely.
To what extent do Controller/Processor obligations (and, by extension, penalties) extend “upstream” into the development organization and its practices?
I’ve pored over the GDPR and pulled out what I think are the relevant bits – a tiny sampling would include
  • The entire notion of “processor”,
  • The use of a “state of the art” standard rather than the normalized “reasonable effort”
(both from SEC 32, Security of processing)…
And in Recital 78: Appropriate technical and organizational measures, “developing and designing” of applications is given an equal weight alongside “selecting and using” of applications…
There is plenty in the GDPR to support the idea that development organizations will be expected to meet the same (or equivalent) standards as their operational or IT counterparts (see GDPR liability: software development and the new law)
…but I wondered what would happen if I asked the European Direct Contact Centre? So I submitted the following (in part)
…and in a few weeks I received the following:
Put more succinctly, the EDCC responded YES.
Yes, Development and DevOps organizations are subject to GDPR obligations (and penalties). These include both incorporating “the state of the art” in data protection (as a development and DevOps practice) as well as a means of “demonstrating” (proving) that such dev and DevOps practices are (have been) consistently and effectively applied.
What is the difference between a “state of the art” standard versus a normalized “reasonable” standard? What are examples of known attack vectors and exploits that fall under this umbrella? How do you know if your development practices can meet this standard? Great questions really... and definitely answerable.
Development may be frequently overlooked in the race to be GDPR ready – but it is most definitely NOT exempt.
For a deeper discussion on these issues, consider registering for App Dev and the Law on October 5, 10 AM EST
For info on PreEmptive's support for GDPR compliance, visit

GDPR, DTSA, ETC: App Dev and the law originally posted on LinkedIn on September 20, 2017

We’ve scheduled the next installment of our app risk webinar series: App Dev and the law: GDPR, DTSA, ETC

New laws mean new organizational obligations (and penalties).

This installment draws a straight line between your dev and DevOps practices and the new privacy, computing, and security obligations you’re facing (whether you know it or not).
We’ll drill into two specific pieces of legislation (GDPR and DTSA) and one industry’s recent cyber risk recommendations (The key principles of vehicle cyber security for connected and automated vehicles).

Why invest your valuable time?

After the webinar, you’ll leave with
  • Practical guidance for GDPR and DTSA for your dev efforts, as well as
  • A framework that can be applied to most any existing (and future) regulations.

3 reasons why this content is timely

1.      Legislatures and regulators are finally responding to the existential threat posed by the increasing sophistication and pace of attacks and attack strategies.
2.      Their "response" includes laws like DTSA and GDPR that share important traits likely to impact development practices and planning.
  • Increased penalties: Increased penalties translate into significantly increased RISK of non-compliance (that's distinct from increased likelihood of non-compliance). Penalties increase the resulting damage of non-compliance.
  • Expanded obligations: Expanded obligations mean more ways to fail (to be non-compliant) – this does increase the likelihood of non-compliance, and
  • New standards of compliance with those obligations: New standards of compliance, e.g. maintaining “state of the art” versus “reasonable” competencies, dramatically increases the level of effort and expertise required to be compliant. Punitive fines, market valuation loss, and civil penalties all multiply when organizations can’t demonstrate that they have made the proper investments in their compliance programs.

Register for one of these two convenient time slots:

GDPR liability: software development and the new law Originally posted August 16, 2017

The GDPR is comprehensive; its impact is far reaching, and the penalties for infringement are severe (up to €20 million or 4% of global annual revenue, whichever is higher).
In short, no impacted business can afford to ignore The GDPR. As the May 2018 deadline looms, organizations find themselves scrambling to be “GDPR ready” – but what exactly does that mean?
I’ve simplified the GDPR legalese (while preserving the links to the original regulation) to help answer this question from a development perspective. If I can convey just one point with this post, it’s that the GDPR is much more than an IT or operational responsibility.
If you’re following the GDPR and your organization develops software (directly or through partners – for internal use or external use), this post is for you.

GDPR Roles

The GDPR is organized around the notion of Controllers and Processors and the responsibilities and liabilities they share.


  • The Controller determines the “why” and the “how” of processing personal data.
  • The Processor (or processors, as the case may be) processes personal data for the Controller.


The GDPR states that a person who has suffered any kind of damage (material or non-material) from a GDPR infringement has the right to compensation.
More to the point, processing systems that do not meet GDPR requirements (and therefore infringe) trigger GDPR liability for every user whose data is processed.
The cost of a single GDPR incident is too high for anyone to ignore. An infringing processing system has the potential to generate thousands – if not millions – of these incidents.
With this potential exposure, do processing system developers have any special obligations?

Processing system obligations

The GDPR mandates that processing systems include “appropriate” technical safeguards. For the GDPR, “appropriate” would consider factors like the state-of-the-art of hacking techniques and their corresponding countermeasures at any given time (implying an ongoing commitment to track and keep pace with developments in this area), the cost of safeguard implementations (time, money, other risks), as well as the relative likelihood and severity of any given class of data breach occurring.
In this sense, the GDPR is consistent with well-understood risk management practices that call for proportionate risk mitigation investments. For a discussion of these basic risk concepts in the context of application development, see The Six Degrees of Application Risk.
The GDPR amplifies these basic concepts and, by implication, expands the working definition of “infringement.”

Processing system infringement

The GDPR places a special importance on “ensuring ongoing confidentiality, integrity, availability and resilience of processing systems and services.”
In other words, the GDPR deliberately carves out obligations for the processing system implementer – not just for the owners and caretakers of the data that flows through those systems.
The GDPR goes on to state that special care must be taken in both assessing and proactively mitigating processing risks stemming from
  • Unlawful destruction, loss, or alteration of personal data, and from
  • Unauthorized disclosure of, or access to personal data transmitted, stored or otherwise processed.

GDPR Processing System Assessment

Extrapolating directly from the GDPR text, we can see that Controllers and Processors are responsible for implementing processing systems that
  • Are secure, resilient, and reliable (trusted),
  • Include controls to protect against unlawful and/or unauthorized access or disclosure of personal data, AND
  • Include “state of the art” (up-to-date) countermeasures against current attack techniques.
The “appropriate technical and organisational measures” standard used throughout the GDPR needs to be extended to ensure that bespoke (custom) software includes the required GDPR safeguards.

GDPR Software Development Assessment

A Controller or Processor that develops components of a processing system must ensure that the code they write does not violate the GDPR obligations listed above.
The development organization must be able to demonstrate that it has not – and will not – release software with commonly known, well-understood or otherwise avoidable software gaps or vulnerabilities.
Now that we have a notion of what GDPR compliance means for development organizations – how do development organizations get “GDPR ready” efficiently, effectively, and reliably?
I thought you would never ask!