Monday, October 2, 2017

GDPR and Application Development: My question to the EDCC - asked and answered

Development and the Law - Development may often be overlooked - but it is never forgotten nor is it exempt.
Working for an ISV with European clients – many of which are large corporations that develop their own applications that process EU PII – I’ve been watching this space closely.
To what extent do Controller/Processor obligations (and, by extension, penalties) extend “upstream” into the development organization and its practices?
I’ve pored over the GDPR and pulled out what I think are the relevant bits – a tiny sampling would include:
  • The entire notion of “processor”,
  • The use of a “state of the art” standard rather than the normalized “reasonable effort”
(both from Article 32, Security of processing)…
And in Recital 78: Appropriate technical and organizational measures, “developing and designing” of applications is given an equal weight alongside “selecting and using” of applications…
There is plenty in the GDPR to support the idea that development organizations will be expected to meet the same (or equivalent) standards as their operational or IT counterparts (see GDPR liability: software development and the new law)
…but I wondered what would happen if I asked the Europe Direct Contact Centre (EDCC)? So I submitted the following question (in part):
…and in a few weeks I received the following:
Put more succinctly, the EDCC responded YES.
Yes, Development and DevOps organizations are subject to GDPR obligations (and penalties). These include both incorporating “the state of the art” in data protection (as a development and DevOps practice) as well as a means of “demonstrating” (proving) that such dev and DevOps practices are (have been) consistently and effectively applied.
What is the difference between a “state of the art” standard versus a normalized “reasonable” standard? What are examples of known attack vectors and exploits that fall under this umbrella? How do you know if your development practices can meet this standard? Great questions really... and definitely answerable.
Development may be frequently overlooked in the race to be GDPR ready – but it is most definitely NOT exempt.
For a deeper discussion on these issues, consider registering for App Dev and the Law on October 5, 10 AM EST
For info on PreEmptive's support for GDPR compliance, visit https://www.preemptive.com/solutions/gdpr

GDPR, DTSA, ETC: App Dev and the law originally posted on LinkedIn on September 20, 2017


We’ve scheduled the next installment of our app risk webinar series: App Dev and the law: GDPR, DTSA, ETC

New laws mean new organizational obligations (and penalties).

This installment draws a straight line between your dev and DevOps practices and the new privacy, computing, and security obligations you’re facing (whether you know it or not).
We’ll drill into two specific pieces of legislation (GDPR and DTSA) and one industry’s recent cyber risk recommendations (The key principles of vehicle cyber security for connected and automated vehicles).

Why invest your valuable time?

After the webinar, you’ll leave with
  • Practical guidance for GDPR and DTSA for your dev efforts as well as
  • A framework that can be applied to most any existing (and future) regulations.

3 reasons why this content is timely

1.      Legislatures and regulators are finally responding to the existential threat posed by the increasing sophistication and pace of attacks and attack strategies.
2.      Their "response" includes laws like DTSA and GDPR that share important traits likely to impact development practices and planning.
  • Increased penalties: Increased penalties translate into significantly increased RISK from non-compliance (distinct from an increased likelihood of non-compliance); penalties increase the resulting damage when non-compliance occurs.
  • Expanded obligations: Expanded obligations mean more ways to fail (to be non-compliant) – this does increase the likelihood of non-compliance, and
  • New standards of compliance with those obligations: New standards of compliance, e.g. maintaining “state of the art” versus “reasonable” competencies, dramatically increase the level of effort and expertise required to be compliant. Punitive fines, market valuation loss, and civil penalties all multiply when organizations can’t demonstrate that they have made the proper investments in their compliance programs.

Register for one of these two convenient time slots:

GDPR liability: software development and the new law Originally posted August 16, 2017




The GDPR is comprehensive; its impact is far reaching, and the penalties for infringement are severe (up to €20 million or 4% of global annual revenue, whichever is higher).
In short, no impacted business can afford to ignore The GDPR. As the May 2018 deadline looms, organizations find themselves scrambling to be “GDPR ready” – but what exactly does that mean?
I’ve simplified the GDPR legalese (while preserving the links to the original regulation) to help answer this question from a development perspective. If I can convey just one point with this post, it’s that the GDPR is much more than an IT or operational responsibility.
If you’re following the GDPR and your organization develops software (directly or through partners – for internal use or external use), this post is for you.

GDPR Roles

The GDPR is organized around the notion of Controllers and Processors and the responsibilities and liabilities they share.

Responsibilities

  • Controller determines the “why” and the “how” of processing personal data.
  • Processor (or processors as the case may be) processes personal data for the Controller

Liabilities

The GDPR states that a person who has suffered any kind of damage (material or non-material) from a GDPR infringement has the right to compensation.
More to the point, processing systems that do not meet GDPR requirements (and therefore infringe) trigger GDPR liability for every user whose data is processed.
The cost of a single GDPR incident is too high for anyone to ignore. An infringing processing system has the potential to generate thousands – if not millions – of these incidents.
With this potential exposure, do processing system developers have any special obligations?

Processing system obligations

The GDPR mandates that processing systems include “appropriate” technical safeguards. For the GDPR, “appropriate” would consider factors like the state-of-the-art of hacking techniques and their corresponding countermeasures at any given time (implying an ongoing commitment to track and keep pace with developments in this area), the cost of safeguard implementations (time, money, other risks), as well as the relative likelihood and severity of any given class of data breach occurring.
In this sense, the GDPR is consistent with well-understood risk management practices that call for proportionate risk mitigation investments. For a discussion of these basic risk concepts in the context of application development, see The Six Degrees of Application Risk.
The GDPR amplifies these basic concepts and, by implication, expands the working definition of “infringement.”

Processing system infringement

The GDPR places a special importance on “ensuring ongoing confidentiality, integrity, availability and resilience of processing systems and services.”
In other words, the GDPR deliberately carves out obligations for the processing system implementer – not just for the owners and caretakers of the data that flows through those systems.
The GDPR goes on to state that special care must be taken in both assessing and proactively mitigating processing risks stemming from
  • Unlawful destruction, loss, or alteration of personal data, and from
  • Unauthorized disclosure of, or access to personal data transmitted, stored or otherwise processed.

GDPR Processing System Assessment

Extrapolating directly from the GDPR text, we can see that Controllers and Processors are responsible for implementing processing systems that
  • Are secure, resilient, and reliable (trusted),
  • Include controls to protect against unlawful and/or unauthorized access or disclosure of personal data, AND
  • Include “state of the art” (up-to-date) countermeasures against current attack techniques.
The “appropriate technical and organisational measures” standard used throughout the GDPR needs to be extended to ensure that bespoke (custom) software includes the required GDPR safeguards.

GDPR Software Development Assessment

A Controller or Processor that develops components of a processing system must ensure that the code they write does not violate the GDPR obligations listed above.
The development organization must be able to demonstrate that it has not – and will not – release software with commonly known, well-understood or otherwise avoidable software gaps or vulnerabilities.
Now that we have a notion of what GDPR compliance means for development organizations – how do development organizations get “GDPR ready” efficiently, effectively, and reliably?
I thought you would never ask!

App dev & the GDPR: three tenets for effective compliance originally posted August 13, 2017

https://www.preemptive.com/blog/article/953-app-dev-the-gdpr-three-tenets-for-effective-compliance/106-risk-management



According to the official EU GDPR website, http://www.eugdpr.org, “The EU General Data Protection Regulation (GDPR) is the most important change in data privacy regulation in 20 years.”
This may well be true. The GDPR includes unprecedented penalties connected to data breaches, it reaches across international borders, and it targets both data owners and 3rd party service providers that process/manage that data.
While data governance inside IT and DevOps orgs has (justifiably) been the primary focus of GDPR compliance efforts, application development organizations should recognize that they have been put on notice as well.
If your software might, perhaps even at some point in the future, process EU personal data (whether or not your company is the organization running that software) – you and/or your clients will also likely be subject to GDPR obligations and potential penalties.
If you fall into this very wide net, the following three app dev GDPR tenets probably warrant your immediate consideration:

1. Development organizations can be held accountable for data breaches where attackers capitalized on avoidable software gaps or vulnerabilities.

A personal data breach, as defined by the GDPR, includes data damage, loss, or unauthorized access resulting from application tampering, monitoring, or vulnerability exploit.
The GDPR personal data breach definition includes “the unlawful alteration, loss, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed” (formatting added here for emphasis).
Many data breaches begin with an application vulnerability exploit (elevation of privileges for example) or application tampering (bypassing identity or other security checks using a debugger in a production setting to manipulate app data or runtime logic for example). In both of these examples, an attacker is able to subvert the controls and restrictions that an application would normally impose.
Recommendation: These risks and their corresponding mitigating controls need to be included in GDPR assessments and, as appropriate, remediation processes. This would apply to both software developed in-house and to supplier risk assessments when software is licensed or used as a service.

2. 100% vulnerability-free applications 100% of the time is an unattainable standard.

Exploiting application vulnerabilities to gain unauthorized control over private data is a widely recognized, common attack technique.
In an ideal world, development would release vulnerability-free applications that were also immune to native and managed debugger hacks, profilers and reverse-engineering tools. We do not live in an ideal world.
Secure coding practices informed by subsequent static analysis and security testing are often effective in striving for this ideal, but even in the best-case scenarios they can never guarantee a vulnerability-free application. Further, secure coding practices do not address risks stemming directly from unauthorized debugging, tampering, or reverse engineering hacks (since these do not rely upon vulnerability exploits for success).
Recommendation: Controls to prevent vulnerability discovery and exploitation in production settings are necessary complements to those that minimize the likelihood that vulnerabilities are introduced in the first place.
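To make that recommendation concrete, here is a minimal, illustrative sketch (plain Java, not any particular vendor’s implementation) of one such production control: refusing to run when the JVM has been started with a debug agent. The class and method names are hypothetical.

```java
import java.lang.management.ManagementFactory;
import java.util.List;

/**
 * Illustrative (not production-grade) check for a JDWP debug agent
 * attached to the running JVM. A real control would combine several
 * detection techniques with a considered response policy.
 */
public final class DebugAgentGuard {

    /** Returns true if the JVM was started with a JDWP/debug agent. */
    public static boolean debugAgentPresent() {
        List<String> args = ManagementFactory.getRuntimeMXBean().getInputArguments();
        for (String arg : args) {
            if (arg.contains("-agentlib:jdwp") || arg.contains("-Xrunjdwp")) {
                return true;
            }
        }
        return false;
    }

    /** Example response: refuse to start when a debugger agent is detected. */
    public static void enforceAtStartup() {
        if (debugAgentPresent()) {
            // Log, alert, and stop before any personal data is processed.
            System.err.println("Debug agent detected in a production build; exiting.");
            System.exit(1);
        }
    }

    private DebugAgentGuard() { }
}
```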

3. Application hardening is a recognized control to minimize risks stemming from unauthorized use of debuggers to compromise production applications (and, by extension, the data that flows through them).

In June of 2017, 400 development organizations were asked if they had controls in place to mitigate these kinds of production attacks on their applications.
  1. 51% reported having preventative controls
  2. 35% reported having detective and defensive controls, and
  3. 23% reported having reporting controls. *
* It is also worth noting that the percentages across all categories were higher for development organizations serving manufacturing, financial, and healthcare industries. In short, independent of GDPR requirements, these kinds of controls are widely deployed.
Recommendation: Application hardening can play a vital role in an effective GDPR compliance program and should be evaluated for inclusion within existing application and cybersecurity control frameworks. Further, as the survey responses show, application hardening is generally known to be effective against these kinds of risks and, as such, may be considered by regulators and the courts to be "reasonable" precautions that should - by implication - be in place.

What next?

PreEmptive Solutions will be publishing risk assessment and project implementation templates to help enterprises and System Integrators evaluate and, when appropriate, implement application hardening GDPR controls.
If you would like to preview these templates to provide feedback (or learn more about our particular application hardening software), please email solutions@preemptive.com.

Another Application Vulnerability for Which There is No Fix Originally posted August 11, 2017

https://www.preemptive.com/blog/article/931-another-application-vulnerability-for-which-there-is-no-fix/90-dotfuscator


Garbage in, garbage out is shorthand for “incorrect or poor quality data will always produce faulty results.”
The “garbage data” vulnerability is especially gnarly in that there is actually no fix – no cure.
The only viable development strategy is one of avoidance.
In short, well-written applications take every opportunity to verify and validate data (and, obviously, to avoid generating garbage data that would ultimately pollute subsequent “downstream” data processing).
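A minimal sketch of that “verify and validate” discipline at a trust boundary – the field and the format rule here are hypothetical examples, not a prescribed design:

```java
import java.util.regex.Pattern;

/**
 * Minimal illustration of validating data at a trust boundary so that
 * "garbage" values never reach downstream processing. The field name
 * and format rule are hypothetical.
 */
public final class AccountInputValidator {

    // Example rule: account IDs are 8-16 alphanumeric characters.
    private static final Pattern ACCOUNT_ID = Pattern.compile("^[A-Za-z0-9]{8,16}$");

    /** Throws if the value does not match the expected format. */
    public static String requireValidAccountId(String candidate) {
        if (candidate == null || !ACCOUNT_ID.matcher(candidate).matches()) {
            throw new IllegalArgumentException("Rejected malformed account id");
        }
        return candidate;
    }

    private AccountInputValidator() { }
}
```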
Compromised in, compromised out is a modern shorthand for another class of application vulnerability for which there is no fix.
Leaked or otherwise compromised data will always produce compromised results.
Consider the following compromised in, compromised out scenarios:
  • A “bad actor” uses leaked social security and credit data to apply for a loan. Even a hypothetical “perfect” financial system (bug-free, securely coded and effectively managed) will process the improper loan, transfer the misappropriated funds, and corrupt otherwise accurate credit histories.
  • A user gains unauthorized privileges via compromised identity data (through theft or an application exploit). She is now unstoppable as she moves through your organization’s systems, able to further compromise and leak business and personal data at will.
Compromised data is not “garbage data” in the development sense of the word in that
  • Data verification routines cannot (in the general case) verify data provenance (where data values have been stored, viewed, shared, etc. in the past or by other systems and users),
  • Data governance is ultimately defined by 3rd parties (regulators, legislators, law enforcement, etc), and
  • As such, subsequent “compromised results” will not only include bad outcomes like unauthorized bank loans, they can also include criminal penalties, fines, civil damages, reputational damage, market devaluation, higher cost of capital, etc.
Yet, compromised data is very much like garbage data in that developers have no viable defense other than avoidance.
As with “garbage data”, well-written applications must take every opportunity to
  • Prevent, detect, respond, and report on attempts to compromise data (both successful and failed attempts – a minimal sketch follows this list), and
  • Avoid being the instrument of compromise – being the vector of an attack that compromises data – polluting downstream application processing.
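Here is a minimal sketch of the “detect, respond, and report” point above – the logger name, method, and fields are illustrative assumptions only:

```java
import java.time.Instant;
import java.util.logging.Logger;

/**
 * Illustrative "detect and report" helper: when a data-access check fails,
 * the event is recorded (and could be forwarded to an alerting pipeline)
 * rather than silently swallowed. All names here are hypothetical.
 */
public final class CompromiseAttemptReporter {

    private static final Logger LOG = Logger.getLogger("security-events");

    /** Record a failed attempt to reach personal data. */
    public static void reportDeniedAccess(String principal, String resource) {
        // In a real system this would feed a SIEM/alerting channel, not just a log.
        LOG.warning(String.format("%s DENIED access: principal=%s resource=%s",
                Instant.now(), principal, resource));
    }

    private CompromiseAttemptReporter() { }
}
```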

A Breached Application Breaches Completely



Every application feeding your operation – no matter how small – whether developed by your organization or not – running inside your business or “upstream” inside your suppliers’ and partners’ networks has the potential to pollute (compromise) the systems it feeds.
Sound extreme? Consider “Anatomy of the Target data breach: Missed opportunities and lessons learned” where it appears that one of the most damaging data breaches in recent history began with an attack on an air conditioner supplier. Hackers surfed that compromised data stream all the way into Target’s most valuable customer data.
Consider recent regulations like the EU’s GDPR or the recent recommendations from the UK on cybersecurity inside smart cars. Both clearly identify the shared responsibilities of application development organizations across corporate and even international borders to mitigate material privacy, financial, and safety risk – all the way down to small gaps in seemingly minor application functions.
It has never been more important for development organizations to include reasonable, scalable, and reliable controls to avoid, detect, and remediate application exploits – everywhere, not just in obvious, flagship systems.

DashO Root Detection & Defense is one Check that will not bounce! Originally posted on July 28, 2017

I’m delighted to report that PreEmptive Solutions released DashO 8.2 for Java and Android earlier this week. Like most of our releases, it has a lot packed into it including:
  • Android-O support,
  • Kotlin support,
  • Improvements to our Android Wizard, and
  • Build performance improvements.
BUT the feature I’m most excited about is the addition of our latest Check Type, Android Root Detection.
I’m excited because our approach to real-time security continues to be unique and root detection is exactly the kind of scenario to highlight our approach.
In order to make my case, I need to take a step back and define what we mean at PreEmptive when we refer to a “Check” or a “Check Type.”
A Check in PreEmptive parlance refers to a real-time incident detection and response framework that includes the logic to check for an incident occurrence and a rich variety of response and alert options.
Check Types refer to the kinds of incidents we can look for (detect) and respond to.
Today, on Java, Android, and .NET, we have Checks (Check Type support) for application tampering, the presence of a managed or native debugger at runtime, absolute and relative timeframe expiry, and (now) rooted Android devices.
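For context, root detection typically combines a number of heuristics. A minimal, illustrative sketch of two common ones (not DashO’s actual implementation – the class name and the specific checks shown are assumptions) might look like this:

```java
import java.io.File;

/**
 * Illustrative root-detection heuristics for Android. Real controls
 * combine many more signals and keep them up to date as rooting
 * techniques evolve.
 */
public final class RootHeuristics {

    private static final String[] SU_PATHS = {
            "/system/bin/su", "/system/xbin/su", "/sbin/su", "/su/bin/su"
    };

    /** Heuristic 1: a "su" binary is present on the device. */
    public static boolean suBinaryPresent() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    /** Heuristic 2: the OS build is signed with test keys (common on custom ROMs). */
    public static boolean testKeysBuild() {
        String tags = android.os.Build.TAGS;
        return tags != null && tags.contains("test-keys");
    }

    /** Combine heuristics; a real implementation would weigh many more. */
    public static boolean likelyRooted() {
        return suBinaryPresent() || testKeysBuild();
    }

    private RootHeuristics() { }
}
```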


The most significant advantage that our approach offers is its functional scope, including:
  • Incident detection: There are many ways to detect a rooted Android device – we use lots of them combined with heuristics to ensure accuracy. This “ambiguity” extends to debugger detection and our other checks as well. Your developers don’t need to keep up with these moving targets – that’s what we do.
  • Incident response
    • Turnkey real-time incident response options with no coding or training include throwing exceptions, suspending an application, and exiting an application,
    • Turnkey real-time alerts that can include a broad set of metadata with offline-caching and encryption over the wire built-in; also implemented with no coding whatsoever
    • Extended incident response and real-time alert behaviors that incorporate your application-specific or custom code to address context-specific operational or risk-related requirements that only you understand.
Regardless of the platform or the Check Type, the Check framework is consistent and that offers a whole lot of advantages over setting a flag inside a program with an API and requiring developers to code their own response.
PreEmptive Checks are INJECTED post compile – and can be done at the same time as (or independently of) obfuscation. Our approach to injection allows for the injection of our code AND yours – offering the best of both worlds.
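The “your code” half of that combination can be as simple as an application-supplied method that an injected check is configured to call when an incident is detected. The sketch below is purely hypothetical – the method name and signature are assumptions for illustration, not a PreEmptive API:

```java
/**
 * Hypothetical application-supplied incident handler. An injected check
 * could be configured to call a method like this when it detects
 * tampering, a debugger, or a rooted device; the name and signature are
 * assumptions for illustration only.
 */
public final class IncidentHandlers {

    /** Application-specific response: log, disable sensitive features, or exit. */
    public static void onIncidentDetected(String checkName) {
        System.err.println("Runtime check triggered: " + checkName);
        // Context-specific policy goes here: degrade functionality, notify a
        // backend, or shut the application down, depending on risk appetite.
    }

    private IncidentHandlers() { }
}
```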
 

Here’s a more complete comparison of the advantages of using injection specifically to implement application detection and response controls.

Control Implementation Characteristics

  • Complexity
    • Post-compile injection – Low: specialized behaviors such as incident detection or offline caching of data are delivered as “turnkey” (no coding).
    • Programming – High: each application presents its own unique set of implementation requirements that must be designed and tested as “first class” features.
  • Effort and training
    • Post-compile injection – Low: injection patterns and configurations can be reused and shared across builds, releases, and applications.
    • Programming – High: the expertise and effort required increase proportionately to the number of applications and development teams managed.
  • Flexibility
    • Post-compile injection – Low: injection targets are often limited to method entry and exit points, and highly customized interaction with other application functionality may be constrained as well.
    • Programming – High: controls implemented as code within an application have no inherent limitations.
  • Scalability
    • Post-compile injection – High: injection tasks can be included in build and deployment workflows through a centralized process, ensuring consistent and effective use.
    • Programming – Low: compliance must compete with development’s backlog of fixes and features – application by application.
  • Transparency & auditability
    • Post-compile injection – High: as part of the build and deployment workflow, successful use is logged and archived; the log can be used to demonstrate functional compliance and provide proof of compliance.
    • Programming – Low: proving to auditors or end users that controls are present and do no more (or less) than documented would require code review rather than documentation review.
Why is our Check framework ideal for managing risks stemming from running an application on a rooted device? Root detection requires an evolving set of heuristics to ensure accurate results. We invest in keeping our algorithms up-to-date.
Once detected, appropriate responses will be highly variable based upon:
  • the application (banking or a medical device)
  • the application owner’s appetite for risk
  • the regulations governing the application, the application owner, and the user
Our unique combination of turnkey and extensible functionality ensures that you can hit the right mix of defensive, reporting, and privacy features and evolve them as needed.
In order to manage risk effectively, you have to manage it consistently. Our integration into production DevOps build and deploy pipelines ensures that your controls will be applied consistently and you will have the audit logs to verify your compliance.

Learn more about DashO for Java and Android (and evaluate the software) here.

The Six Degrees of Application Risk Originally posted on June 26, 2017

https://www.preemptive.com/blog/article/927-the-six-degrees-of-application-risk/90-dotfuscator

Cyber-attacks, evolving privacy and intellectual property legislation, and ever-increasing regulatory obligations are now simply “the new normal” – and the implications for development organizations are unavoidable; application risk management principles must be incorporated into every phase of the development lifecycle.
Organizations want to work smart – not be naïve or paranoid. Application risk management is about getting this balance right. How much security is enough? Are you even protecting the right things?
The six degrees of application risk offer a basic framework to engage application stakeholders in a productive dialogue – whether they are risk or security professionals, developers, management, or even end users.
With these concepts, organizations will be in a strong position to take advantage of the following risk management hacks (an unfortunate turn of phrase perhaps) that reduce the cost, effort, complexity, and time required to get your development on the right track.

Six Degrees of Application Risk

The following commonly used (and related) terms provide a minimal framework to communicate application risk concepts and priorities.
  1. Gaps are (mostly) well-understood behaviors and characteristics of an application, its runtime environment, and/or the people that interact with the application. As an example, .NET and Java applications (managed applications) are especially easy to reverse-engineer. This isn’t an oversight or an accident that will be corrected in the “next release.” Managed code, by design, includes significantly more information at runtime than its C++ or other native language counterparts – making it easier to reverse-engineer.
  2. Vulnerabilities are the subset of Gaps that, if exploited, can result in some sort of damage or harm. If, for example, an application was published as an open source project – one would not expect that reverse engineering an instance of that application would do any harm. After all, as an open source project, the source code would be published alongside the executable. In this case, the Gap (reverse engineering) would NOT qualify as a Vulnerability.
  3. Materiality is the subjective (but not arbitrary) assessment of how likely a vulnerability will be exploited combined with the severity of that exploitation. The likelihood of a climate-changing impact of a meteor hitting earth in the next 3 years is significantly lower than the likelihood of an electrical fire in your home. This distinction outweighs the fact that a meteor impact will obviously do far more harm than a single home fire. This is why we, as individuals, invest time and money preventing, detecting, and impeding electrical fires while taking no preemptive steps to mitigate the risks of a meteor collision.
  4. Priority ranking of vulnerabilities helps to ensure that our limited resources are most effectively allocated. Vulnerabilities are not all created equal and, therefore, do not justify the same degree of risk mitigation investment. Life insurance is important – but medical insurance typically is seen as “more material” justifying greater investments.
  5. Appetite for risk is another subjective (but not arbitrary) measure. Appetite is synonymous with tolerance. Organizations cannot eliminate risk – but each organization must identify those vulnerabilities whose combined likelihood and impact are simply unacceptable. Some sort of action is required to reduce (not eliminate) those risks to bring them to within tolerable levels. Health insurance does not reduce the likelihood of a health-related incident – it reduces some of the harm that stems from an incident when it occurs. While many individuals have both life and health insurance, there are many who feel that they can tolerate living without life insurance but cannot tolerate losing health insurance.
  6. Material risks are those vulnerabilities whose risk profiles are intolerably high. Material risks are, by definition, any vulnerability that merits some level of investment to bring either its likelihood and/or its impact down to within tolerable levels. Ideally, once all risk management controls are in place, there are no “intolerable risks” looming. (A minimal risk-scoring sketch follows this list.)
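As noted above, materiality and appetite reduce to simple arithmetic once (subjective, but not arbitrary) likelihood and impact scores are assigned. Here is a minimal sketch, with entirely hypothetical scales and threshold:

```java
/**
 * Minimal risk-scoring sketch: likelihood x impact compared against an
 * organization's tolerance. The scales and threshold are hypothetical.
 */
public final class RiskScore {

    /** Likelihood and impact on a 1-5 scale; returns 1-25. */
    public static int score(int likelihood, int impact) {
        return likelihood * impact;
    }

    /** A vulnerability is a "material risk" when its score exceeds the appetite threshold. */
    public static boolean isMaterial(int likelihood, int impact, int appetiteThreshold) {
        return score(likelihood, impact) > appetiteThreshold;
    }

    public static void main(String[] args) {
        // Example: a likely exploit (4) with severe impact (5) vs. a threshold of 12.
        System.out.println(isMaterial(4, 5, 12)); // true -> justifies mitigation investment
    }

    private RiskScore() { }
}
```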

Applying the Six Degrees of Application Risk

Extending these concepts into the development process, at a high level, translates into the following activities:
  • Inventory relevant “gaps” across your development and production environments
  • Identify the vulnerabilities within the collection of gaps
  • Assess and prioritize according to your organization’s notions of materiality
  • Agree on a consistent definition of your organization’s tolerance for these vulnerabilities (appetite)
  • Identify the vulnerabilities that present a material risk
  • Select and implement controls to mitigate these risks
  • Measure, assess, and correct on an ongoing (periodic) basis
Simple, right?

Effective Application Risk Management Hacks

Incorporating any new process or technology into a mature development process is, in and of itself, a risky and potentially expensive proposition.
The threat of increasing development complexity or cost, or compromising application quality or user experience is often motivation enough to maintain the status quo.

Avoid unnecessary waste and risk – follow-the-leaders

There is an old saying in risk management that “you don’t have to be the fastest running from the bear – you just don’t want to be the slowest.” Hackers mostly attack targets of opportunity and regulators and the courts typically look for “reasonable” and “appropriate” controls. It is often much more efficient to benchmark and adapt the practices of your peers rather than develop your own risk management and security practices from the ground-up. There are many sources from which to choose.
  • Benchmark your practices against your organization’s
    • peers (similar organizations)
    • customers (their risks are often, by extension, your risks)
    • suppliers (they are experts in their specialty and/or may pose a risk if they do not live up to your appetite for risk)
  • Embrace well-understood and common practices
    • Adopt an accepted standard or open risk management framework.
    • Monitor regulatory and legislative developments
    • Track relevant breaches and exploits and the aftermath

Like magicians, hackers do not reveal their tricks – but we will Originally posted on May 8, 2017

https://www.preemptive.com/blog/article/920-like-magicians-hackers-do-not-reveal-their-tricks-but-we-will/90-dotfuscator

According to NIST’s National Vulnerability Database, six vulnerability categories have grown from 68% to over 84% of the total number of reported vulnerabilities in just the past four years.
What these categories have in common are the tools hackers rely upon to probe, discover, and exploit these increasingly mainstream vulnerabilities. Specifically, hackers begin with application debuggers and reverse engineering tools to pick apart and modify applications. These “programmatic hacks” have led to many of today’s most devastating application and data exploits.

Stop a Hacker in Their Tracks

Anti-debugger controls can, when combined with code obfuscation (reverse engineering prevention), tamper defense, and other runtime checks, materially reduce application and data risk by impeding (if not outright preventing) the research typically required to identify and exploit application vulnerabilities.

Anti-debugger controls: a near-universal application risk management requirement

In each of the programmatic CVE categories listed above, a hacker likely began their attack by using some flavor of debugger to explore and manipulate a running instance of an application to bypass security, execute unauthorized code, elevate privileges, etc.
Effective anti-debugger controls mitigate these risks while minimizing potential development, quality, compliance, and/or performance side effects.
  • Debugger detection: Debuggers come in a variety of flavors and packaging. An effective control will detect both managed and native debuggers (a minimal detection-and-response sketch follows this list).
  • Debugger defense: Once an unauthorized debugger has been detected, a variety of pre-packaged real-time measures as well as application and runtime-specific tactics must be readily available for the developer to choose from. These can include throwing random exceptions, exiting the program, “bricking” the application permanently, generating custom log entries, etc.
  • Debugger notifications: In addition to real-time defense and mitigation, it is valuable to emit an alert or notification that can initiate an operational response including isolating the device or even the local network running the compromised application.
  • Implementation: Real-time counter measures and runtime reporting represent a new category of application behavior that must be specified, documented, and tested. Minimizing the amount and complexity of this incremental effort will often be the determining factor as to how consistently and effectively these controls are applied.
  • Quality and support: The mission-critical nature of these controls mandates the highest levels of quality, transparency, and support to ensure that the controls do not create more risk than they mitigate.
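As promised above, here is a minimal, illustrative sketch of debugger detection and response on Android (not Dotfuscator’s or DashO’s implementation; the class name and the response policy are assumptions):

```java
import android.os.Debug;

/**
 * Illustrative Android debugger check and response. A production control
 * would combine managed and native detection and choose responses per the
 * application's risk profile.
 */
public final class DebuggerDefense {

    /** Returns true if a (managed) debugger is attached or waiting to attach. */
    public static boolean debuggerAttached() {
        return Debug.isDebuggerConnected() || Debug.waitingForDebugger();
    }

    /** Example response policy: deny further execution in release builds. */
    public static void enforce() {
        if (debuggerAttached()) {
            // Emit an alert here (analytics/SIEM), then stop the application.
            System.exit(0);
        }
    }

    private DebuggerDefense() { }
}
```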

Dotfuscator for .NET and DashO for Java and Android

PreEmptive Solutions Dotfuscator for .NET and DashO for Java and Android have been developed and continuously improved over the past 15 years to meet these requirements – on desktop, mobile, server, and cloud.
  • Dotfuscator
    • Platforms (selected): .NET, UWP, Xamarin, etc.
    • Real-time defense: Yes
    • Alerts & reporting: Yes
    • Injection (no coding required): Yes
    • Continuous deployment: Yes – Visual Studio, VSTS
  • DashO
    • Platforms (selected): Java, Android
    • Real-time defense: Yes
    • Alerts & reporting: Yes
    • Injection (no coding required): Yes
    • Continuous deployment: Yes
For organizations developing applications worth protecting, visit “Harden your .NET Applications with Dotfuscator's Anti-Debug Protections” and “PreEmptive Solutions’ Application ‘Bricking’ Gives App Security a Nuclear Option”.
