Monday, October 2, 2017

The Six Degrees of Application Risk Originally posted on June 26, 2017

Cyber-attacks, evolving privacy and intellectual property legislation, and ever-increasing regulatory obligations are now simply “the new normal” – and the implications for development organizations are unavoidable; application risk management principles must be incorporated into every phase of the development lifecycle.
Organizations want to work smart – neither naïve nor paranoid. Application risk management is about getting this balance right. How much security is enough? Are you even protecting the right things?
The six degrees of application risk offer a basic framework to engage application stakeholders in a productive dialogue – whether they are risk or security professionals, developers, management, or even end users.
With these concepts, organizations will be in a strong position to take advantage of the following risk management hacks (an unfortunate turn of phrase, perhaps) that reduce the cost, effort, complexity, and time required to get your development on the right track.

Six Degrees of Application Risk

The following commonly used (and related) terms provide a minimal framework to communicate application risk concepts and priorities.
  1. Gaps are (mostly) well-understood behaviors and characteristics of an application, its runtime environment, and/or the people that interact with the application. As an example, .NET and Java applications (managed applications) are especially easy to reverse-engineer. This isn’t an oversight or an accident that will be corrected in the “next release.” Managed code, by design, includes significantly more information at runtime than its C++ or other native language counterparts – making it easier to reverse-engineer.
  2. Vulnerabilities are the subset of Gaps that, if exploited, can result in some sort of damage or harm. If, for example, an application was published as an open source project – one would not expect that reverse engineering an instance of that application would do any harm. After all, as an open source project, the source code would be published alongside the executable. In this case, the Gap (reverse engineering) would NOT qualify as a Vulnerability.
  3. Materiality is the subjective (but not arbitrary) assessment of how likely a vulnerability will be exploited combined with the severity of that exploitation. The likelihood of a climate-changing impact of a meteor hitting earth in the next 3 years is significantly lower than the likelihood of an electrical fire in your home. This distinction outweighs the fact that a meteor impact will obviously do far more harm than a single home fire. This is why we, as individuals, invest time and money preventing, detecting, and impeding electrical fires while taking no preemptive steps to mitigate the risks of a meteor collision.
  4. Priority ranking of vulnerabilities helps to ensure that our limited resources are most effectively allocated. Vulnerabilities are not all created equal and, therefore, do not justify the same degree of risk mitigation investment. Life insurance is important – but medical insurance typically is seen as “more material” justifying greater investments.
  5. Appetite for risk is another subjective (but not arbitrary) measure. Appetite is synonymous with tolerance. Organizations cannot eliminate risk – but each organization must identify those vulnerabilities whose combined likelihood and impact are simply unacceptable. Some sort of action is required to reduce (not eliminate) those risks to bring them to within tolerable levels. Health insurance does not reduce the likelihood of a health-related incident – it reduces some of the harm that stems from an incident when it occurs. While many individuals have both life and health insurance, there are many who feel that they can tolerate living without life insurance but cannot tolerate losing health insurance.
  6. Material risks are those vulnerabilities whose risk profile are intolerably high. Material risks are, by definition, any vulnerability that merits some level of investment to bring either its likelihood and/or its impact down to within tolerable levels. Ideally, once all risk management controls are in place, there are no “intolerable risks” looming.
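The relationships among these six terms can be made concrete in code. The sketch below is purely illustrative – the vulnerability names, likelihoods, impacts, and the appetite threshold are invented for this example, and real materiality assessments are judgment calls, not arithmetic – but it shows how materiality (likelihood combined with impact) and appetite (a tolerance threshold) together separate material risks from the rest:

```python
# Hypothetical materiality model: materiality = likelihood x impact,
# compared against the organization's risk appetite (tolerance threshold).
# All names and numbers below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    likelihood: float  # 0.0 (never) .. 1.0 (certain)
    impact: float      # 0.0 (negligible) .. 1.0 (catastrophic)

    @property
    def materiality(self) -> float:
        # Subjective, but not arbitrary: both dimensions matter.
        return self.likelihood * self.impact

def material_risks(vulns, appetite):
    """Return the vulnerabilities whose materiality exceeds tolerance."""
    return [v for v in vulns if v.materiality > appetite]

vulns = [
    # A meteor strike does far more harm, but is far less likely...
    Vulnerability("meteor strike", likelihood=0.0001, impact=1.0),
    # ...so the humble electrical fire is the material risk.
    Vulnerability("electrical fire", likelihood=0.02, impact=0.6),
]
print([v.name for v in material_risks(vulns, appetite=0.001)])
# -> ['electrical fire']
```

This mirrors the meteor-versus-house-fire comparison above: the fire's higher likelihood outweighs the meteor's higher impact once both dimensions are combined.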

Applying the Six Degrees of Application Risk

Extending these concepts into the development process translates, at a high level, into the following activities:
  • Inventory relevant “gaps” across your development and production environments
  • Identify the vulnerabilities within the collection of gaps
  • Assess and prioritize according to your organization’s notions of materiality
  • Agree on a consistent definition of your organization’s tolerance for these vulnerabilities (appetite)
  • Identify the vulnerabilities that present a material risk
  • Select and implement controls to mitigate these risks
  • Measure, assess, and correct on an ongoing (periodic) basis
Simple, right?
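The seven activities above can be sketched as a single review loop. This is a hypothetical illustration – the gap names, harm judgments, scores, and controls are all invented, and the scoring function stands in for whatever materiality assessment your organization actually uses:

```python
# Hypothetical end-to-end sketch of the review activities: inventory gaps,
# identify vulnerabilities, assess materiality, apply appetite, and select
# controls. Re-running this periodically covers "measure, assess, correct."

def run_review(gaps, can_cause_harm, score, appetite, controls):
    # Steps 1-2: a vulnerability is a gap that, if exploited, can do harm.
    vulns = [g for g in gaps if can_cause_harm(g)]
    # Steps 3-4: assess materiality and rank, so limited resources go
    # to the worst vulnerabilities first.
    ranked = sorted(vulns, key=score, reverse=True)
    # Step 5: anything scoring above the organization's tolerance is material.
    material = [v for v in ranked if score(v) > appetite]
    # Step 6: map each material risk to a mitigating control.
    return {v: controls.get(v, "TODO: select a control") for v in material}

# Invented example inventory and assessments:
gaps = ["reverse engineering", "verbose error pages", "open source mirror"]
harm = {"reverse engineering": True, "verbose error pages": True,
        "open source mirror": False}.get   # open source: gap, not vuln
scores = {"reverse engineering": 0.6, "verbose error pages": 0.2}
plan = run_review(gaps, harm, scores.get, appetite=0.3,
                  controls={"reverse engineering": "code obfuscation"})
print(plan)  # -> {'reverse engineering': 'code obfuscation'}
```

Note how the open source mirror drops out at step 2 (a gap but not a vulnerability, exactly as in the open source example above), and the low-materiality vulnerability drops out at the appetite threshold.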

Effective Application Risk Management Hacks

Incorporating any new process or technology into a mature development process is, in and of itself, a risky and potentially expensive proposition.
The threat of increasing development complexity or cost, or compromising application quality or user experience is often motivation enough to maintain the status quo.

Avoid unnecessary waste and risk – follow-the-leaders

There is an old saying in risk management that “you don’t have to be the fastest running from the bear – you just don’t want to be the slowest.” Hackers mostly attack targets of opportunity and regulators and the courts typically look for “reasonable” and “appropriate” controls. It is often much more efficient to benchmark and adapt the practices of your peers rather than develop your own risk management and security practices from the ground-up. There are many sources from which to choose.
  • Benchmark your practices against your organization’s
    • peers (similar organizations)
    • customers (their risks are often, by extension, your risks)
    • suppliers (they are experts in their specialty and/or may pose a risk if they do not live up to your appetite for risk)
  • Embrace well-understood and common practices
    • Adopt an accepted standard or open risk management framework
    • Monitor regulatory and legislative developments
    • Track relevant breaches and exploits and the aftermath

2nd Sneak Peek: 84% of dev teams fail to secure in-app IP from debugger hacks - and that's not the half of it! Originally posted on October 7, 2016

In the first "peek" into our soon to be published application risk management survey results, we shared that 58% of the respondents reported making ongoing development investments specifically to manage “application risk.” See Managing Application Vulnerabilities (an early peek into improved controls for your code and data)
Digging into the survey numbers, respondents divided their “application risk” into six subcategories and in the following proportions:
  Risk subcategory                                                                      | % of respondents reporting app risk
  Intellectual property (IP) theft from code analysis (via reverse engineering)         | 38%
  Data loss and (non-application) trade secret theft                                    | 37%
  IP theft through app abuse (elevated privilege, unauthorized data access, etc.)       | 36%
  Operational disruption (malware, DDoS, etc.)                                          | 32%
  Regulatory and other compliance violations (privacy, financial, quality, audit, etc.) | 26%
  Financial theft                                                                       | 18%
It’s important to keep in mind that the risks enumerated above are NOT synonymous with technical vulnerabilities; there are multiple paths that a bad actor can take (for example) to “misappropriate” IP and trade secrets – multiple technical vulnerabilities to exploit – and multiple non-technical vulnerabilities too of course (social engineering, armed robbery, etc.).
The table above shows that, while financial theft is surely among the most significant risks most any business faces, only 18% of the development teams in our survey work on applications where attacks against their applications in particular might reasonably lead to financial theft.

Production debugger use for hacking and tampering left unchecked

The survey showed that, while development teams were invested in mitigating these six application risk categories, a majority of development teams did not have effective controls to prevent one specific technical vulnerability: the unauthorized use of a debugger against applications running in production.
In fact, in every risk category, the majority of development teams:
  • Recognized that this kind of debugger attack IS a material threat, AND
  • Acknowledged that they DO NOT have adequate controls in place to mitigate this threat.
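The survey concerns .NET and Java applications, but the basic detective pattern behind anti-debugger controls is cross-language: inspect runtime state for evidence of an attached debugger, then react. The Python sketch below illustrates that pattern only – commercial app-hardening products implement far more robust checks, and the function names and responses here are invented:

```python
# Minimal illustration of a detective anti-debugger control. Python
# debuggers (pdb, IDE debuggers) work by installing a trace function,
# so its presence is a simple signal. Real production controls are far
# more thorough; this only demonstrates the detect-then-react pattern.
import sys

def debugger_attached() -> bool:
    # A non-None trace function suggests a debugger (or profiler) hook.
    return sys.gettrace() is not None

def guarded_operation():
    if debugger_attached():
        # Possible responses: exit, alert a monitoring service,
        # degrade functionality, or return decoy data.
        return "decoy result"
    return "secret computation result"

print(guarded_operation())
```

The same idea underlies production debugger defenses on managed runtimes: the check is cheap, and the response (exit, alert, degrade) is a policy decision tied to your risk appetite.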
For illustration, let’s dig deeper into one of the six risk categories to see how this pattern plays out.

Digging deeper: IP theft from code

The chart below shows that 84% of respondents who identified IP theft from their code as a material risk also identified production debugger hacking as a significant and unprotected technical vulnerability.

Risks are like potato chips; you can never have just one

Unmanaged technical vulnerabilities are never a good thing, but the situation gets exponentially worse when a single vulnerability increases risk across multiple risk categories rather than just one. …and, according to our survey respondents, failing to prevent production debugger hacking falls squarely into this category.
To further raise the stakes for development teams, our survey clearly showed a strong correlation across risk categories. In other words, once an application has the potential to pose one kind of risk, it is extremely likely that it will pose a risk across multiple categories – thus increasing the potential damage of unchecked technical vulnerabilities like production debugger hacking.

Digging (even) deeper: IP theft from code risk as a leading indicator for additional application risks

According to our respondents, apps containing IP that needs protecting inside their code are much more likely to pose additional risks as well.

What's the takeaway from the illustration above?
If you're protecting IP inside your app - you're over 11 times more likely than other development groups to ALSO have IP at risk from app attacks even though that IP lives outside of your app. ...and you're roughly 2X more likely to face risks across the remaining four application risk categories...
Also, if you've got unmanaged technical vulnerabilities, then, to the extent that these vulnerabilities factor into multiple risk categories, the danger each one poses is likely to be many times greater than you suspect.
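Figures like "11 times more likely" are lift statistics: the probability of one risk given another, divided by its probability in its absence. The respondent counts below are invented purely to show the arithmetic – they are not the survey's actual numbers:

```python
# How an "N times more likely" figure is computed. Lift is the probability
# of risk B among teams reporting risk A, divided by the probability of B
# among teams NOT reporting A. All counts below are invented for
# illustration only; they are not the survey's real data.

def lift(n_both, n_a, n_b_not_a, n_not_a):
    p_b_given_a = n_both / n_a            # P(B | A)
    p_b_given_not_a = n_b_not_a / n_not_a  # P(B | not A)
    return p_b_given_a / p_b_given_not_a

# Hypothetical survey of 100 teams: 40 report in-app IP risk (A); of those,
# 22 also report out-of-app IP risk (B). Of the 60 teams without A, only
# 3 report B.
print(round(lift(n_both=22, n_a=40, n_b_not_a=3, n_not_a=60), 1))
# -> 11.0
```

A lift of 11 means that, in this invented data, teams with in-app IP risk report out-of-app IP risk at eleven times the rate of teams without it – the shape of the correlation the survey describes.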
If you’re interested in getting the final numbers (and an even deeper dive into both the risks and controls to effectively mitigate these risks), I expect to be publishing results in the next 1-2 weeks HERE (there's already a link to a related white paper on this page for download too so check that out now if you like).

Trade Secrets and Software: don’t give one up for the other Originally posted on August 5, 2016

The true value of trade secrets – as with any class of intellectual property – is directly proportional to the owner’s ability to enforce their rights through criminal and civil actions.
For the first time, under the recently enacted Defend Trade Secrets Act, a company can pursue claims for trade secret theft in a US federal court and seek remedies such as a seizure order to recover stolen secrets, compensation for damages, and potentially punitive awards as well (putting trade secret protection on par with other forms of intellectual property, i.e., patent, copyright, and trademark).
However, to take full advantage of these remedies, companies must identify trade secrets in advance and implement reasonable secrecy measures to protect them.
Applying these general rules to application development and operations requires a specialized legal strategy further buttressed by “technical foresight,” e.g. an enhanced DevOps process.
The following video offers application stakeholders a general framework on how to manage application risk and value: Application Risk Management in a nutshell - 8 minutes.

Defend Trade Secrets Act codifies “open season” on app reverse engineering Originally posted May 13, 2016

Code obfuscation and the doctrine of “contributory negligence”

On May 11, 2016, President Obama signed the Defend Trade Secrets Act of 2016.
Enjoying unprecedented bipartisan support (Senate 87-0 and the House 410-2), this bill expands trade secret protection across the US and substantially increases penalties for criminal misconduct – and what could go wrong with that?
After all, according to the Commission on the Theft of American Intellectual Property, the theft of trade secrets costs the economy more than $300 billion a year. …and, thanks in large part to technology, trade secrets have never been easier to move, to copy, and to steal. In fact, in their five-year strategic plan, the FBI labeled trade secrets as "one of the country's most vulnerable economic assets” precisely because they are so transportable.
…and nothing in today’s world is more mobile than application software.
If you were to assume that this bill has been custom-tailored to protect the trade secrets embedded in application software, you would be in good company.
In her most recent blog post praising the Defend Trade Secrets Act, Michelle K. Lee, Under Secretary of Commerce for Intellectual Property and the current USPTO Director writes, "No matter the industry, whether telecommunications or biotechnology, traditional or advanced manufacturing or software, trade secrets are an essential driver of innovation and need to be afforded proper protections.” … “Trade secret owners now also have the same access to federal courts long enjoyed by the holders of other types of IP.”
...but do we really? Do software developers really now "enjoy the same access to federal courts?" Sort of – maybe – OK – maybe not.
I’ll be writing a lot about this topic in the coming weeks and months, but, for now, let’s just drop to the bottom line. Without special care, application owners have been stripped of every protection granted under the Defend Trade Secrets Act (DTSA).
Let me explain. The DTSA applies exclusively to VALUABLE information that is both SECRET and has been STOLEN (the legal term is “acquired through Improper Means”).
Developer ALERT: The DTSA explicitly EXCLUDES reverse engineering as an improper means. The DTSA states that Improper Means DOES NOT include “reverse engineering, independent derivation, or any other lawful means of acquisition.”
Is this an oversight? Did the legal staff of the Senate Judiciary Committee (who authored this bill) accidentally use this overloaded development term?
The answer is an unequivocal no – the exclusion of reverse engineered software is intentional and by design.
I recently found myself in a briefing on Capitol Hill with senior legal counsel inside the Senate Judiciary Committee (the agenda was encryption that day – not trade secrets) – but I asked this question directly – “Did the committee intentionally include language that would exempt any intellectual property that could be accessed via reverse engineering of applications?” He did not hesitate – in fact, to be honest, he was emphatic. “Yes” he said, “if I can see your IP with a reverse engineering tool – it’s mine.”
OUCH – is this the end of days? Is every algorithm and process embedded in your software officially free for the taking?
Thankfully – no – it’s not nearly that dire.
First – whether or not your IP is covered under this law – obfuscating .NET, Android, Java, or iOS apps makes reverse engineering much harder. Code obfuscation will prevent – or at least sharply reduce the number of times that – your IP is lifted through reverse engineering.
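Rename obfuscation – the most basic technique – works because meaningful identifiers are among the most valuable things reverse engineering recovers. The toy sketch below illustrates the idea on Python source; it is not how products like Dotfuscator, ProGuard, or DashO actually work (they operate on compiled bytecode and apply many more transforms), and the sample function is invented:

```python
# Toy rename-based obfuscation: replace meaningful identifiers with
# meaningless ones. Real obfuscators transform compiled code, preserve
# public/external names, and apply control-flow and string protections
# on top; this only demonstrates the renaming idea.
import ast

class Renamer(ast.NodeTransformer):
    def __init__(self):
        self.names = {}  # original name -> meaningless alias

    def _alias(self, name):
        return self.names.setdefault(name, f"a{len(self.names)}")

    def visit_FunctionDef(self, node):
        node.name = self._alias(node.name)
        self.generic_visit(node)  # rename inside the body too
        return node

    def visit_arg(self, node):
        node.arg = self._alias(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._alias(node.id)
        return node

source = """
def royalty_rate(gross_revenue):
    secret_multiplier = 0.0375
    return gross_revenue * secret_multiplier
"""
tree = Renamer().visit(ast.parse(source))
print(ast.unparse(tree))
# ->
# def a0(a1):
#     a2 = 0.0375
#     return a1 * a2
```

The behavior is unchanged, but the business meaning (a royalty calculation with a confidential rate) is no longer spelled out in the names a reverse engineer recovers – which is exactly the "reasonable secrecy measure" argument developed below.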
The real question is whether application obfuscation can be used to extend the protections of the DTSA to include application software in a court of law.

“Reasonable Efforts” and “The Doctrine of Contributory Negligence”

How do you ensure employees don’t publicize your textual and image-based trade secrets (and exempt these from protection as well)?
You make sure employees know that they are secret through clear markings, communication, and education – and you secure relevant documents with physical and electronic locks. These are called “affirmative steps” that demonstrate concrete efforts to preserve confidentiality.
Failure to take these kinds of reasonable steps leads to the Doctrine of Contributory Negligence.
This “doctrine” captures conduct that falls below the standard to which one should conform for one’s own protection. When you fall below this standard, courts will often treat your information as public – and, to the extent you rise above that standard – courts are typically more willing to accept both the secret nature and the value of the IP in question.
Unfortunately, applications are not documents - and so standard “electronic and physical locks” do not apply.
However, code obfuscation does apply here. Obfuscation is a well-understood, widely practiced, and recognized practice to prevent reverse engineering. Code obfuscation does not guarantee absolute secrecy – but it is unquestionably recognized as a “reasonable step” to preserve secrecy – it’s a lock on a front door that sends an unmistakable message to anyone who approaches – if I’m obfuscated – keep out.
Will development organizations who fail to include basic code obfuscation fall prey to the ominous sounding “Doctrine of Contributory Negligence?”
Can application obfuscation send a clear enough message to the courts to bring back trade secret theft protection under the newly minted Defend Trade Secrets Act?
These and other pressing Intellectual Property questions will be answered in upcoming episodes of “As the IP World Turns” (or, more realistically, my next blog post)
In the meantime, don’t forget to take reasonable precautions to protect any potential software trade secrets from reverse engineering.

Reconciling GooglePlay's security recommendations with Xamarin deployment Originally posted February 25, 2016

An app control that both Microsoft and Google can get behind? What about Xamarin?
First - Congratulations Xamarin (and Microsoft) - as someone who has used Xamarin personally and worked with the people professionally, I see this as a win-win-win (for Xamarin, Microsoft, and, last but not least, developers!).
To the topic at hand... One might argue that the phrase "GooglePlay security recommendations" is a contradiction in terms or even oxymoronic - but I take a different view. If (EVEN) Google recommends a security practice to protect your apps - then it must REALLY be a basic requirement - one that should not be ignored.
I'm talking about basic obfuscation to prevent reverse engineering and tampering.
Here's an excerpt from Android's developer documentation
"To ensure the security of your application, particularly for a paid application that uses licensing and/or custom constraints and protections, it's very important to obfuscate your application code." ...and they go on to write "The use of ProGuard or a similar program to obfuscate your code is strongly recommended for all applications that use Google Play Licensing." (I did NOT add the emphasis)
For those unfamiliar with ProGuard - it's a free/open source obfuscator - quite a good one really for the money ;) - but seriously - it's kind of an analog to Dotfuscator Community Edition included with Visual Studio (also for free). The point being that both Google and Microsoft have long recognized that basic controls to prevent reverse engineering need to be ubiquitously available to every developer (no one is suggesting all apps be obfuscated).
...but what about Xamarin apps targeting Android or iOS? ...not so much. ProGuard cannot obfuscate Xamarin apps - nor can any of the other native Java/Android obfuscators (including PreEmptive's own DashO). ...But (good news) Dotfuscator Professional can. ...But (bad news) it's not free. Still, if you're serious about this topic, you'd probably want something other than the "free version" on either platform. Here's a link to a PreEmptive blog post on how to protect your Xamarin apps with Dotfuscator (both iOS and Android): Xamarin Applications and Dotfuscator.
Question: Given the Microsoft Xamarin acquisition, should we (PreEmptive/Microsoft) consider extending Dotfuscator CE (the free one) to provide comparable protection to Android and iOS apps generated by Xamarin as we do for .NET apps today (and since 2003)?
Let me know your thoughts - I really do want to hear from Xamarin developers (and the app owners that employ them :).

Question: True or False, Seat belts are to Driver Safety as Obfuscation is to Application Risk Management
The correct answer is FALSE!
The equivalence fails because a seat belt is a device while obfuscation is a control. To see why you (or your application stakeholders) might be in danger, first read through the key descriptors of these two approaches.

Table 1: contrasting application risk management with driver safety risk management.
To pursue application development opportunities as aggressively as possible (but not so aggressively as to create unnecessary risk), organizations must also manage application threats and risks through a mix of proactive, detective, and responsive controls – controls that are, in an ideal scenario, supported by strong analytics and based on strategic objectives, risk appetite, and capacity.
If your organization has not settled on objectives, organizational risk tolerance, and what levels of investment you’re prepared to make to achieve these objectives, you can’t possibly have an effective risk management program.

Effective application risk management

Consistency and efficiency require sustained investments in the following:

Implement an effective feature set aligned with control categories (proactive, detective, and responsive).

Effective risk management supports all three control “dimensions.”

Table 2: Mapping of application hardening features to three categories of control.
This is not an exhaustive list of techniques and technologies to secure applications, and feature “bake-offs” are always suspect. However, if you don’t assess your risk (which has nothing to do with how easy it is to exploit an application vulnerability), you won’t know whether a normal 3-point seat belt is sufficient (for a mainstream car) or whether you need a child seat or the 5-point harness required by NASCAR.


Because application hardening is typically “the last step” before digital signing and application distribution, any quality issues that arise have the potential for catastrophic impact on deployment and on production application service levels.


Three factors drive release cycles for PreEmptive Solutions application protection and risk management products; the latter two are unique to the larger security and risk management category.
  1. New product features and accrued bug fixes: this is typically the sole driving force for new software product releases.
  2. Updates to OS, runtime, and specialized runtime frameworks: delayed support for new formats and semantics would either delay developer support for those platforms or force poor risk management practices onto the platforms most likely to need protection most of all.
  3. Emergence of new threats and malicious patterns and practices: as with anti-virus software, bad actors are constantly searching for ways to circumvent security controls. Without consistent tracking of this activity and timely updates to react to these developments, application security technology can quickly be rendered obsolete.

Low friction

In order to be effective and consistently applied, the configuration and implementation of proactive, detective, and corrective controls cannot require excessive time or expertise. Specific areas where PreEmptive Solutions invests to reduce development and operational friction include:
  • Automated detection and protection of common programming frameworks, e.g. WPF, Universal Applications, Spring, etc.
  • Custom rule definition language to maximize protection across complex programming patterns at scale.
  • Specialized utilities to simplify debugging of hardened applications.
  • Automated deployment: support for build farms, dynamically constructed virtual machines, command line integration, MSBuild, Ant, etc. come standard with PreEmptive Solutions’ professional SKUs.
  • Cross-assembly hardening to extend protection strategies across distributed components and for components built in different locations and at different times.
  • Support for patch and incremental hardening to minimize and simplify updates to hardened application components.

Responsive support

Should critical issues arise, live support can prove to be the difference between applications shipping on time or suffering last-minute and unplanned delays.

Vendor viability

Applications can live in production for years – and with extended application lifecycles comes the requirement to secure these applications across evolving threat patterns, runtime environments, and compliance obligations.