Friday, October 9, 2015

EU's highest court throws out privacy framework for US companies: small businesses suffer


Here’s the issue: small tech companies have all of the awesome characteristics of small businesses in the broadest sense (they’re job creators, innovators, revenue makers…), but they often find themselves having to navigate complex regulatory and compliance issues that have historically been reserved for (large) multi-national corporations – all while building their businesses on technology that’s evolving way faster than the regulations that govern it. (As a footnote here, let me throw in a commercial plug for ACT – a trade association focused on exactly these issues.)
Tuesday's nullification of the Safe Harbor framework (a system that streamlined the transfer of EU user data to US businesses) – a decision that everyone pretty much agrees was a consequence of the NSA spying scandals – is a perfect example. In this case, we see how small tech businesses can get caught in the middle of a p^%*ing match between the EU and the US federal govt. …and I don’t care what side of the aisle you’re on – everyone loves small business growth and innovation, right? Here’s a great bipartisan issue that our lawmakers should be able to address – don’t you think?
Three ways that small tech businesses are just like every other small business – except when we’re not
One: Like every small business, we can’t afford to have a permanent team of lawyers on our payroll …but small tech businesses can go international overnight, forcing them to navigate multiple international jurisdictions.
The Safe Harbor system eliminated a raft of complexity and potentially thousands of hours of legal work required to manage EU user data – making it feasible for small tech businesses to do business inside the EU.
Small businesses simply cannot be expected to navigate a maze of international privacy obligations – each with its own rules and penalties. Without the Safe Harbor system (or something to replace it), previously open markets will soon be out of reach.
Two: Like every small business, we often rely on third-party service providers for professional services (legal, payroll, HR, etc.) …but small tech businesses also rely upon third-party providers for services rendered inside their apps (rather than inside their offices) while those apps are being used by their clients – payment processing and application analytics, for example.
This distributed notion of computing introduces multiple layers of business entities at the very sensitive point where the application is being used in production – exponentially expanding the legal and compliance problems (each service provider must also have their own agreements within each country/jurisdiction).
This is now more than just unmanageably large and expensive – it’s potentially unsolvable. Small businesses deal with lots of unknowns (security vulnerabilities, for example), but this new wrinkle will almost certainly have a chilling effect – either on how we serve EU markets AND/OR how we rely on third-party service providers (a core development pattern that, if abandoned, would make US dev firms less competitive).
Three: Like every small business, small tech companies cannot change direction with the swipe of a pen – yet that is exactly how quickly laws and regulations can come and go.
The Safe Harbor framework was nullified instantaneously with a single ruling; applications that were compliant moments before are now potentially in jeopardy – and they’re still running and still sending data, whether the app owner likes it or not.
Bottom line: this is a regulatory and governance issue, and we need governments to work it out.
Everyone loves small businesses, right? We need…
  • To know what’s expected of us
  • Agreement on what compliance looks like
  • Visibility into enforcement and penalty parameters
Then, we can do what we know how to do – make smart technical and business development investments.

Other material

Here are three more links:
Two days ago, when the Safe Harbor ruling first came down, I posted an explanation of how PreEmptive Analytics can re-direct application usage data (link 1) to support the kind of seismic shifts in architecture that might follow, here (link 2).
That same evening, I was put in touch with Elizabeth Dwoskin, a WSJ reporter who was writing a piece on the impact that this sudden move would have on small businesses – my conversations with her are actually what prompted this post (WSJ has already posted her well-written article, Small Firms Worry, as Big-Data Pact Dies – link 3).
You might ask, if her article is so well-written (which it is), why would I have anything to add? She was looking for a “man-on-the-street” (dev-in-the-trenches) perspective on this one particular news item, BUT, the Safe Harbor ambush is just one example of the larger issues I hope I was able to outline here.

How will today's Safe Harbor ruling impact users of multi-tenant application analytics services?

Earlier today, the Safe Harbor system was overturned (see "Europe-U.S. data transfer deal used by thousands of firms is ruled invalid").
The legal, operational, and risk implications are huge for companies that have, up until today, relied on this legal framework (either directly or through third parties that relied on Safe Harbor) to meet the EU's privacy obligations.
What are the implications for application analytics solutions (homegrown or commercially offered)? It's not clear at this moment, but one thing is for sure – it is a lot harder to turn off an application, re-architect a multi-tier system, or force an upgrade than it is to simply sign a revised privacy agreement.
Multi-national companies that continue to transfer and process personal data from European citizens without implementing an alternative contractual solution or receiving authorization from a data protection authority are at risk of legal action, monetary fines, or a prohibition on data transfers from the EU to the US.
If this transfer of data is embedded inside an application/system's architecture, then a wholesale development/re-architecture effort may be required. Of course, re-architecting systems to keep data local within a country or region may simply be impractical (for reasons of efficiency, cost, ...) UNLESS the system is, itself, already built to provide that kind of flexibility.
Happily, PreEmptive Analytics is. 
  • PreEmptive Analytics endpoints (in addition to on-prem of course) can live inside any Microsoft Azure VM. Clients with very specific requirements as to where their actual VMs are being hosted would always be able to meet those requirements with us. …and what about when a client gets even more specific (country borders for example) or when they want to support multiple jurisdictions with one app? (this leads to the second point…)
  • PreEmptive Analytics instrumentation supports runtime/dynamic selection of target endpoints. While this would take a little bit of custom code on the developer’s part, our instrumentation would allow an application – at runtime – to determine where it should send its telemetry (perhaps a service called at startup that has a lookup table – if the app is running in Germany, send it to …; if it’s in China, send it to …; if it’s in the US, …). This would allow an app developer with an international user base to support conflicting privacy and governance obligations with one application (a rough sketch of this pattern follows below).
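To make that concrete, here is a minimal sketch of the startup lookup described above – plain Java, with placeholder endpoint URLs, region codes, and class names that are illustrative assumptions rather than actual PreEmptive Analytics endpoints or APIs:

import java.util.Locale;
import java.util.Map;

// Hypothetical startup helper: pick a telemetry endpoint by jurisdiction.
// The URLs and region codes below are placeholders, not real PreEmptive endpoints.
public final class TelemetryEndpointResolver {

    private static final Map<String, String> ENDPOINTS_BY_COUNTRY = Map.of(
            "DE", "https://telemetry-eu.example.com/ingest",  // keep German data in the EU
            "CN", "https://telemetry-cn.example.com/ingest",
            "US", "https://telemetry-us.example.com/ingest");

    private static final String DEFAULT_ENDPOINT = "https://telemetry-us.example.com/ingest";

    // Called once at application startup, before analytics is configured.
    public static String resolveEndpoint() {
        String country = Locale.getDefault().getCountry(); // or a geo-IP / config-service lookup
        return ENDPOINTS_BY_COUNTRY.getOrDefault(country, DEFAULT_ENDPOINT);
    }

    public static void main(String[] args) {
        // The resolved URL would then be handed to whatever endpoint-configuration
        // hook the instrumentation exposes.
        System.out.println("Sending telemetry to: " + resolveEndpoint());
    }
}

A geo-IP or configuration-service lookup could stand in for the default-locale check when an app's governing jurisdiction isn't tied to the device's locale.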
It may turn out that keeping German application analytics data in Germany is as important to US companies now as it is to German companies. One thing's for sure – the cadence and road map for application analytics cannot be tied to the cadence and road map of any one application – the two have to live side-by-side, but independently.

Tuesday, September 1, 2015

When it comes to application risk management, you can't do it alone.

I’m often asked to estimate how many developers are required to obfuscate and harden their applications against reverse engineering and tampering – and when they say “required,” what they usually mean is the bare minimum number of developers that need to be licensed to use our software.

Of course it's important to get the number of licensed users just right (if the count is too high, you're wasting money - but, if it's too low, you're either not going to be efficient or effective - or worse still - you're forced to violate the license agreement to do your job).

Yet, as important as this question seems, it's not the first question that needs answering.

Staffing to effectively manage application risk is not the same as counting the number of concurrent users required to run our (or any) software at a given point in time.

Consider this:

How many people are required to run our application hardening products on a given build of an application? Actually, none at all; both Dotfuscator for .NET and DashO for Java can be fully integrated into your automated build and (continuous) deployment processes.

However, how many people does it take to effectively protect your application assets against reverse engineering and tampering? The answer is no fewer than two. Here’s why…

  • Application risk management is made up of one (or more) controls (processes not programs). These controls must first be defined, then implemented, then applied consistently, and, lastly, monitored to ensure effective use.
  • Application hardening (obfuscation and tamper defense injection) is just such a control – a control that is embedded into a larger DevOps framework – and a control that is often the final step in a deployment process (followed only by digital signing).


Now, in order to be truly effective, application hardening cannot create more risk than it avoids – the cure cannot be worse than the disease.

What risks can come from a poorly managed application hardening control (process)?

If an application hardening task fails and goes undetected,

  • the application may be distributed unprotected into production and the risks of reverse engineering and tampering go entirely unmanaged, or 
  • the application may be shipped in a damaged state causing runtime failures in production.


If an application hardening task failure is detected, but the root cause cannot be quickly fixed, then the application can't be shipped; deadlines are missed and the software can't be used.
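Detection itself can be automated. As one illustration – a minimal sketch assuming a Java build that produces a JAR, with placeholder class names and a hypothetical "HardeningGate" step, not a PreEmptive tool – a post-build check can fail the pipeline whenever class names that should have been renamed by obfuscation still appear in the shipped artifact:

import java.io.IOException;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Hypothetical post-build gate: fail the deployment step if class names that
// should have been renamed by obfuscation are still present in the JAR.
// Usage: java HardeningGate path/to/app.jar
public final class HardeningGate {

    // Placeholder names; a real check would list your own sensitive types.
    private static final List<String> MUST_BE_RENAMED = List.of(
            "com/example/billing/LicenseValidator.class",
            "com/example/crypto/KeyDerivation.class");

    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile(args[0])) {
            boolean leaked = jar.stream()
                    .map(JarEntry::getName)
                    .anyMatch(MUST_BE_RENAMED::contains);
            if (leaked) {
                System.err.println("Unobfuscated class names found - failing the build.");
                System.exit(1); // a non-zero exit stops the pipeline step
            }
        }
        System.out.println("Hardening check passed.");
    }
}

A gate like this addresses the "undetected" failure mode above; it still takes people to triage and resolve whatever the gate catches – which brings us back to the staffing question.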

So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?

You’ll need (at least) one person to define and implement the application hardening control.

…and you’ll need one person to manage the hardening control (monitor each time the application is hardened, detect any build issues, and resolve any issues should they arise in a timely fashion).

Could one individual design, implement and manage an application hardening control? Yes, one person can do all three tasks for sure.

However, if the software being protected is released with any frequency or with any urgency, one individual cannot guarantee that he/she will be available to manage that control on every given day at every given time – they simply must have a backup – a "co-pilot."

No organization should implement an application hardening control that’s dependent on one individual – there must be at least two individuals trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for unexpected shipping delays, shipping damaged code, or releasing an unprotected application asset into “the wild” is typically so severe that, even though the likelihood of such an event occurring on any given day may seem remote, the risk cannot be rationalized away.

This is nothing new in risk management – every commercial plane flies with a co-pilot for this very reason – and aircraft manufacturers do not build planes without a co-pilot’s seat. It would be cheaper to build and fly planes that only accommodate one pilot – and it wouldn’t be an issue for most flights – but to ignore the risk that having a single pilot brings would be more than irresponsible – it would be unethical.

Are there other reasons for additional people and processes to be included? Of course – but these are tied to development methodologies, architecture, testing and audit requirements of the development organization, etc. These are not universal practices.

If reverse engineering and/or application tampering pose Intellectual Property, privacy, compliance, piracy, or other material risks, they need to be managed accordingly - as a resilient and well-defined process. Or, in a word, when it comes to application risk management, you can't do it alone.

Tuesday, June 23, 2015

6 signs that you may be overdue for a mobile application risk review

Every organization must ultimately make its own assessment as to the level of risk it is willing to tolerate – and mobile application risk is no exception to this rule.

Yet, given the rapidly changing mobile landscape (inside and outside of every enterprise), organizations need to plan on regular assessments of their mobile risk management policies – especially as their mobile applications grow in importance and complexity.

Here are 6 indicators that you may be overdue for a mobile application risk assessment.
  1. Earlier PC/on-premises equivalents ARE hardened and/or monitored. Perhaps these risks need to be managed on mobile devices too – or, conversely, the risks no longer need to be managed at all.
  2. Enterprise mobile apps are distributed through public app marketplaces like Google Play or iTunes. Public marketplaces expose apps to potentially hostile users and can be used as a platform to distribute counterfeit versions of those very same apps.
  3. Mobile apps are run within a BYOD infrastructure alongside apps and services outside of corporate control. Access to a device via third-party software can lead to a variety of malicious scenarios that include other apps (yours) installed on the same device.
  4. Mobile apps embed (or directly access) proprietary business logic. Reverse engineering is a straightforward exploit. Protect against IP theft while clearly signaling an expectation of ownership and control – which is often important during the penalty phase of a criminal and/or civil trial.
  5. Mobile apps access (or have access to) personally identifiable information (or other data governed by regulatory or compliance mandates). Understanding how services are called and data is managed within an app can readily expose potential vulnerabilities and unlock otherwise secure access to high-value services.
  6. Mobile apps play a material role in generating or managing revenue or other financial assets. High-value assets or processes are a natural target for bad actors. Piracy, theft, and sabotage begin by targeting “weak links” in a revenue chain. An app is often the first target.
Want to know more about how PreEmptive Solutions can help reduce IP theft, data loss, privacy violations, software piracy, and other risks uniquely tied to the rise of enterprise mobile computing? 

Visit www.preemptive.com - or contact me here - I'd welcome the contact.

In the meantime, here’s an infographic identifying leading risk categories stemming from increased reliance on mobile applications. The vulnerabilities (potential gaps) call out specific tactics often employed by bad actors; the Controls identify corresponding practices to mitigate these risks.

The bottom half of the infographic maps the capabilities of PreEmptive Solutions Mobile Application Risk Portfolio across platforms and runtimes and up to the risk categories themselves.

For more information on PreEmptive Solutions Enterprise Mobile Application Risk product portfolio, check out: PreEmptive Solutions’ mobile application risk management portfolio: four releases in four weeks.

Friday, June 19, 2015

ISV App Analytics: 3 patterns to improve quality, sales, and your roadmap

Application analytics are playing an increasingly important role in DevOps and Application Lifecycle Management more broadly – but ISV-specific use cases for application analytics have not gotten as much attention. ISV use cases – and by extension, the analytics patterns employed to support them – are unique. Three patterns described here are Beta, Trial, and Production builds. Clients and/or prospects using these “product versions” come with different expectations and hold different kinds of value to the ISV – and, as such – each instance of what is essentially the same application should be instrumented differently.

The case for injection

Typically, application instrumentation is implemented via APIs inside the application itself. While this approach offers the greatest control, any change requires a new branch or version of the app itself. With injection – the process of embedding instrumentation post-compile – the advantage is that you are able to introduce wholly different instrumentation patterns without having to rebuild or branch an application's code base.
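For contrast, here is roughly what the API approach looks like – a sketch built around a made-up Analytics facade rather than any real SDK – where every tracked feature and exception is a source-code change that has to be rebuilt, re-tested, and re-shipped:

// Sketch of API-style instrumentation using a hypothetical Analytics facade.
// Every call below is a source change that must be built, tested, and shipped;
// post-compile injection attaches equivalent hooks without touching the code base.
public class CheckoutService {

    private final Analytics analytics; // hypothetical facade, not a real SDK type

    public CheckoutService(Analytics analytics) {
        this.analytics = analytics;
    }

    public void checkout(Cart cart) {
        analytics.featureStart("checkout");
        try {
            // ... real business logic would go here ...
            analytics.featureComplete("checkout");
        } catch (RuntimeException e) {
            analytics.reportException(e); // exception telemetry is also hand-coded
            throw e;
        }
    }

    // Minimal stand-ins so the sketch is self-contained.
    interface Analytics {
        void featureStart(String name);
        void featureComplete(String name);
        void reportException(Throwable t);
    }

    static class Cart { }
}

Because injection happens after compilation, the Beta, Trial, and Production patterns described next can differ from one another without branching this code.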

The following illustration highlights the differences in instrumentation patterns across product versions – patterns that we, at PreEmptive, use inside our own products.


Beta and/or Preview

  • Measure new key feature discovery and usage 
  • Track every exception that occurs throughout the beta cycle 
  • Measure impact and satisfaction of new use cases (value versus usage) 
  • *PreEmptive also injects “Shelf Life” – custom deactivation behaviors triggered by the end of the beta cycle 
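As a rough illustration of the “Shelf Life” idea in that last bullet – hand-rolled here for clarity, with a placeholder end date; the injected implementation and its configuration are not shown – a beta deactivation check might look like this:

import java.time.LocalDate;

// Rough illustration of a "shelf life" check: deactivate the beta after its end date.
// The date and the message are placeholders chosen for this sketch.
public final class ShelfLife {

    private static final LocalDate BETA_END = LocalDate.of(2015, 11, 30); // placeholder

    public static void enforce() {
        if (LocalDate.now().isAfter(BETA_END)) {
            System.err.println("This beta build has expired - please download the release version.");
            System.exit(0);
        }
    }

    public static void main(String[] args) {
        enforce(); // call early in startup, before the main UI loads
        System.out.println("Beta still active.");
    }
}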

Trial

  • License key allowing individual user activity to be tracked in the context of the organization it represents (the prospective client) - this is CONNECTED to CRM records after the telemetry is delivered
  • Performance and quality metrics that are likely to influence the outcome of an evaluation through better-timed and more effective support calls 
  • Feature usage that suggests user-specific requirements – again, increasing the likelihood of a successful evaluation 
  • * PreEmptive injects “Shelf Life” logic to automatically end evaluations (or extend them) based upon the sales cycle 

Production

  • Enforce the organization’s opt-in policy to ensure privacy and compliance. NO personally identifiable information (PII) is collected in the case of PreEmptive’s production instrumentation. 
  • Feature usage, default setting, and runtime stack information to influence development roadmap and improve proactive support. 
  • Exception and performance metrics to improve service levels. 
  • * PreEmptive injects Shelf Life functionality to enforce annual subscription usage. 

The stakeholders and their requirements are often not well understood at the start of a development project (and often change over time). Specifically, sales and line-of-business management may not know their requirements until the product is closer to release – or after the release, when there's greater insight into the sales process. A development team could not bake those requirements in with an analytics API even if they wanted to …and this is one very strong case for using analytics injection over traditional APIs.

PreEmptive Solutions ISV application analytics examples

Here are recent screen grabs of Dotfuscator CE usage (preview release) inside Visual Studio 2015.
Here is a similar collection of analytics Key Performance Indicators (KPIs) – this time focusing on current user evaluations.



…and lastly, here are a set of representative KPIs tracking production usage of DashO for Java.


If you’re building software for sale – and you’d like to streamline your preview releases, shorten your sales cycles and increase your win rates – and better align your product roadmap with what your existing clients are actually doing – then application analytics should be a part of your business – and – most likely – injection as a means of instrumentation is for you as well.

Wednesday, April 15, 2015

Five tenets for innovation and sustained competitive advantage through application development

I'm privileged to spend most of my working days in front of smart people doing interesting work across a wide spectrum of industries. In the spirit of "ideas don't have to be original - they just have to be good(c)" (the copyright is my attempt at humor RE other people's good ideas versus my silly aphorism) - anyhow, back to my central point - mobile, cloud, the rise of big data, etc. are all contributing to a sense that business (and the business of IT) is entering an entirely new phase fueled by technology, globalization, and more... and with this scale of change comes confusion. In spite of all this background noise, I'm witnessing many of our smartest customers and partners converge on the following five tenets - tenets that I know are serving some of the smartest people in the coolest organizations extremely well - cheers.

1.       Organizations must innovate or be rendered obsolete.
       Challenge: Applications now serve as a hub of innovation and a primary means of differentiation – across every industry and facet of our modern economy.
       Response: Innovative organizations use applications to uniquely engage with their markets and to streamline their operations.
2.       Genuine innovation is a continuous process – to be scaled and sustained.
       Challenge: Development/IT must internalize evolving business models and emerging technologies while sustaining ongoing IT operations and managing increasingly complex regulatory and compliance obligations.
       Response: Leading IT organizations imagine and deliver high-value applications through agile feedback-driven development practices and accelerated development cycles that place a premium on superior software quality and exceptional user experiences.
3.       Modern applications bring modern risks.
       Challenge: In order to sustain competitive advantage through application innovation, organizations must effectively secure and harden their application asset portfolios against the risks of revenue loss, Intellectual Property theft, denial of service attacks, privacy breaches, and regulatory and compliance violations.
       Response: Successful organizations ensure that security, privacy, and monitoring requirements are captured and managed throughout the application lifecycle from design through deployment and deprecation – as reflected in appropriate investments and upgrades in processes and technologies.
4.       Every organization is a hybrid organization – every IT project starts in the middle.
       Challenge: Organizations must balance the requirement to innovate with the requirement to operate competitively with existing IT assets.
       Response: Mature organizations do not hard-wire development, security, analytics, or DevOps practices to one technology generation or another. The result is materially lower levels of technical debt and the capacity to confidently embrace new and innovative technologies and the business opportunities they represent.
5.       Enterprise IT requirements cannot be satisfied with consumer technologies – shared mobile platforms and BYOD policies do not alter this tenet.
       Challenge: Enterprise security, compliance, and integration requirements cannot (and will not) be satisfied by mobile/web development and analytics platforms designed for consumer-focused, standalone app development (and the business models they support).

       Response: Mature IT organizations drive mobile app innovation without compromising core enterprise ALM, analytics, or governance standards by extending proven practices and enterprise-focused platforms and technologies. 

Tuesday, April 7, 2015

Darwin and Application Analytics

Survival of the fittest

Technological evolution is more than a figure of speech.

Survival – i.e., adoption (technology proliferation and usage) – favors the species (technology) that adapts most effectively to environmental changes and most successfully competes for the limited resources required for day-to-day sustenance. In other words, the most agile technology wins in this winner-take-all Darwinian world.

You might think you know where I’m headed – that I’m going to position application analytics and PreEmptive Analytics in particular as being best able to ensure the agility and resilience applications need to survive – and while that’s true – that’s not the theme of today’s post.

A rose by any other name… and applications are (like) people too!

Today’s theme is on properly classifying application analytics (and PreEmptive Analytics in particular) among all of the other related (and in some cases, competing) technologies – are they fish or fowl? Animal, vegetable, or mineral? Before you can decide if application analytics is valuable – you have to first identify what it is and how it fits into your existing ecosystem – food chain - biosphere.

In biology, all life forms are organized into a hierarchy (taxonomy) of seven levels (ranks) where each level is a superset of the levels below. Here, alongside people and roses, is a proposed “taxonomic hierarchy” for application analytics.

What’s the point here?


What does this tell us about the species “PreEmptive Analyticus”? The hierarchy (the precedence of the levels) and the respective traits are what ultimately give each species its identity.  ...and this holds true for application analytics (and PreEmptive Analytics in particular) too.

Commercial Class software is supported by a viable vendor (PreEmptive Solutions in this case) committed to ensuring the technology’s lasting Survival (with resources and a roadmap to address evolving requirements).

Homegrown solutions are like mules – great for short term workloads, but they’re infertile with no new generations to come or capacity to evolve.

Analytics is the next most significant rank (Order) – PreEmptive Analytics shares a common core of functionality (behavior) with every other commercial analytics solution out there today (and into the future).

HOWEVER, while common functionality may be shared, it is not interchangeable.

Hominids are characterized as Primates with “relatively flat faces” and “three dimensional vision” – both humans and chimpanzees obviously qualify, but no one would confuse the face of a human for that of a chimpanzee. Each species uniquely adapts these common traits to compete and to thrive in its own way.

The Family (analytics focused more specifically on software data) and the Genus (specifically software data emitted from/by applications) each translate into increasingly unique and distinct capabilities – each of which, in turn, drive adoption.

In other words, in order to qualify as a Species in its own right, PreEmptive Analytics must have functionality driving its own proliferation and usage (adoption) distinct from other species e.g. profilers, performance monitors, website monitoring solutions, etc. while also establishing market share (successfully competing). 


How do you know if you've found a genuine new species?


According to biologists and zoologists alike, the basic guidelines are pretty simple: you need a description of the species, a name, and some specimens.

In this spirit, I offer the following description of PreEmptive Analytics – for a sampling of “specimens” (case studies and references) - contact me and I’m more than happy to oblige…

The definition enumerates distinguishing traits and the "taxonomic ranking" that each occupies - so this is not your typical functional outline or marketecture diagram.
CAUTION – keep in mind that common capabilities can be shared across species, but they are not interchangeable - each trait is described in terms of its general function, how it's been specialized for PreEmptive Analytics, and how/why it's adaptable to our changing world (and therefore more likely to succeed!) - I’m not going to say who’s the monkey in my analytics analogy here, but I do want to caution against bringing a chimp to do a (wo)man’s job.

PreEmptive Analytics

Core Analytics functionality

Specialized: The ingestion, data management, analytics computations, and the visualization capabilities include “out of the box” support for application analytics specific scenarios including information on usage, users, feature usage patterns, exceptions, and runtime environment demographics.

Adaptable: In addition to these canned analytics features, extensibility points (adaptability) ensure that whatever unique analytics metrics are most relevant to each application stakeholder (product owner, architect, development manager, etc.) can also be supported. 

Software Data (Family traits)


Incident Detection: PreEmptive Analytics (for TFS) analyzes patterns of application exceptions to identify production incidents and to automatically schedule work items (tasks).
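Greatly simplified – and not the actual TFS integration – the underlying idea can be sketched as grouping incoming exception reports by signature and promoting a group to an incident once it crosses a threshold:

import java.util.HashMap;
import java.util.Map;

// Simplified sketch of exception-pattern incident detection: count reports per
// exception signature and flag an incident once a threshold is crossed.
public final class IncidentDetector {

    private static final int THRESHOLD = 10; // arbitrary example threshold
    private final Map<String, Integer> countsBySignature = new HashMap<>();

    public void onExceptionReport(String exceptionType, String topStackFrame) {
        String signature = exceptionType + "@" + topStackFrame;
        int count = countsBySignature.merge(signature, 1, Integer::sum);
        if (count == THRESHOLD) {
            openWorkItem(signature, count);
        }
    }

    private void openWorkItem(String signature, int count) {
        // The real product schedules a work item (task); here we just log it.
        System.out.println("Incident detected (" + count + " reports): " + signature);
    }
}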

Data transport: The PreEmptive Analytics Data Hub routes and distributes incoming telemetry to one or more analytics endpoints for analysis and publication.

Specialized: “Out of the box” support for common exception patterns, automatic offline-caching and common hybrid network scenarios are all built-in.

Adaptable: User-defined exception patterns and support for on-premises deployments, isolated networks, and high volume deployments are all supported. 

Application Data (Genus traits)

Application instrumentation (collecting session, feature, exception, and custom data): PreEmptive Analytics APIs plus Dotfuscator and DashO (for injection of instrumentation without coding) support the full spectrum of PC, web, mobile, back-end, and cloud runtimes, languages, and application types.

Application quality (ensuring that data collection and transmission does not compromise application quality, performance, scale…): PreEmptive Analytics runtime libraries (regardless of the form of instrumentation used) are built to “always be on” and to run without impacting the service level of the applications being monitored.

Runtime data emission and governance (opt-in policy enforcement, offline-caching, encryption on the wire…): The combination of the runtime libraries and the development patterns supported by the instrumentation tools ensures that security, privacy, and compliance obligations are met.
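As a rough sketch of what opt-in enforcement at the emission layer means – an illustration only, with hypothetical OptInStore and TelemetrySender stand-ins, not the runtime libraries’ actual implementation – telemetry is simply dropped unless the user has explicitly opted in:

// Sketch of opt-in gating at the point of emission: no consent, no data leaves the app.
// OptInStore and TelemetrySender are hypothetical stand-ins for the app's own plumbing.
public final class GatedTelemetry {

    public interface OptInStore { boolean hasUserOptedIn(); }
    public interface TelemetrySender { void send(String eventName, String payload); }

    private final OptInStore optIn;
    private final TelemetrySender sender;

    public GatedTelemetry(OptInStore optIn, TelemetrySender sender) {
        this.optIn = optIn;
        this.sender = sender;
    }

    public void track(String eventName, String payload) {
        if (!optIn.hasUserOptedIn()) {
            return; // policy enforcement: silently drop telemetry without consent
        }
        sender.send(eventName, payload);
    }
}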

Specialized: the instrumentation patterns support every scale of organization from the entrepreneurial to the highly regulated and secure.

Adaptable: Application-specific data collection, opt-in policy enforcement, and data emission are efficiently and transparently configurable, supporting every class of application deployment from consumer to financial, to manufacturing, and beyond… 

PreEmptive Analytics (Species traits)


Every organization must continuously pursue differentiation in order to remain relevant (to Survive). In a time when almost all business that organizations do is digitized and runs on software, custom applications are essential in providing this differentiation.

Specialized: PreEmptive Analytics has integrated and adapted all of these traits (from instrumentation to incident detection) to focus on connecting application usage and adoption to the business imperatives that fund/justify their development. As such, PreEmptive Analytics is built for the non-technical business manager, application owners, and product managers as well as development managers and architects.

Adaptable: Deployment, privacy, performance, and specialized data requirements are supported across industries, geographies, and architectures – providing a unified analytics view of every application for the complete spectrum of application stakeholders.

So what are you waiting for? Put down your brontosaurus burger and move your development out of the stone age.

Monday, March 23, 2015

Application Analytics: measure twice, code once

Microsoft recently announced the availability of Visual Studio 2015 CTP 6 – included with all of the awesome capabilities and updates was the debut of Dotfuscator Community Edition (CE) 2015. …and, in addition to updates to user functionality (protection and analytics instrumentation capabilities), this is the first version of Dotfuscator CE to include its own analytics (we’re using PreEmptive Analytics to anonymously measure basic adoption, usage, and user preferences). Here are some preliminary results… (and these could all be yours too, of course, using the very same capabilities from PreEmptive Analytics!)

A comparison of new and returning users by day shows very few returning users – this indicates that users are verifying that the functionality is present but are not yet using the technology as part of a build process. That makes sense given that this is the first month of a preview release – users are validating the IDE, not building real products on that IDE.


Feature usage and user preferences are all readily available, including what % of users are opting in (of course an opt-in policy exists and is enforced), what runtimes they care about (including things like Silverlight, ClickOnce, and Windows Phone…), the split between those who care about protection and/or analytics, and the timing of critical activities that can impact DevOps.




Broad geolocation validates international interest and highlights unexpected synergies (or issues) that may be tied to local factors (language, training, regulation, accessibility, etc.).

This is an example of the most general, aggregated, and generic usage collection - of course the same analytics plumbing can be used to capture all flavors of exceptions, user behavior, etc. - but ALWAYS determined by your own design goals, and the telemetry is ALWAYS under your control and governance - from "cradle to grave."

BOTTOM LINE: the faster you can iterate, the better your chances for a successful, agile application launch – building a feedback-driven, continuous ALM/DevOps organization cries out for effective, secure, and ubiquitous application analytics – how is your organization solving for this requirement?