Data privacy goes beyond protecting against data breaches. Some companies routinely compromise their customers' data as part of normal business operations without ever telling the customer they're doing so. These companies hold the data legitimately, with permissions granted by the user, but then go on to sell it to third parties or use it to derive additional information, far beyond what the end user ever imagined.
Think about your favorite mobile app. You probably didn’t read the fine print about what that mobile app may be doing to access your contact info or track your location before hitting the “I agree” button. Yet, that’s what a lot of apps are doing. Even if you did read the fine print, you probably agreed to the terms of service anyway because you really wanted the app and weren’t able to pick and choose your privacy settings. That’s the problem. It’s all or nothing when it comes to giving organizations permission to use your information in return for using their product or service.
Steal vs. sell
Organizations have put systems in place to protect corporate and customer data from potential theft. Security executives and their teams have deployed strong security solutions and processes to guard their enterprise networks against outside compromise, all with the goal of protecting the data.
But what if your company's business model is based on reselling every piece of customer data it takes in? Do you consider the security posture and policies of the organization that purchases the data you collected, or how that entity might exploit it? Where is the ethical line between data compromised through theft and data compromised through third-party resale?
For example, for many years credit card companies and other financial services firms have sold transaction data to third parties. This data is often acquired by other financial institutions – for example, a hedge fund looking at credit card transaction data to estimate the sales growth of Walmart stores prior to Walmart's quarterly earnings release, then trading the stock ahead of the official announcement. This may be a legitimate use of the purchased data, but would credit card customers really be happy to know their personal buying information was being used to boost someone else's business?
And what about the not-so-legitimate ways some organizations use data? In return for a free mobile app, you might be allowing a third party – perhaps even an unknown overseas entity, such as a game publisher – to track your location or exploit your contacts for malicious purposes, or simply to sell that information for financial gain. For those who wonder how free apps make money, in many cases that's how: they exploit information about your personal behaviors, preferences and connections.
Where do you draw the line?
If there were a data breach and that information were stolen by a cybercrime ring or compromised by a nation-state, it would make headline news. So how do we weigh the ethics of an organization compromising that same data for its own corporate gain?
CISOs are in the position of protecting their company's data, but what if their company's business model is to sell or exploit that data? Where do you draw the line between business operations and ethical behavior?
That’s the unanswered question. CISOs are in a tough place here. Their sworn duty is to protect the data of their organization, but if that organization turns around and decides it is going to sell customer data for a dollar per file, it’s tough to fight leadership decisions.
It's unclear whether legislation is the best course of action here. Regulations like GDPR have certainly brought new attention to data privacy, and consumers are more aware of the importance of protecting their personal information. But those privacy regulations don't yet address the ethical issues.
I think there needs to be an industry standard that lets users opt in to what data companies can and cannot use, and how it can and cannot be used, especially for paid applications. Lacking such a standard, legislation will unfortunately be required, which, with history as our guide, will likely be suboptimal.
To get the ball rolling, using consumer mobile apps as an example, an easy start would be for the industry to decide that if the software is free, personal information must be traded in return. For paid versions, personal data should be off limits. In the middle, users would be willing to disclose some behaviors for certain levels of functionality. These choices should be easily and clearly described at the time of initial download.
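The tiered model described above – free apps trade data, paid apps keep it off limits, and a middle tier where users pick and choose – can be sketched as a simple consent record. This is a hypothetical illustration, not an existing standard or API; the tier names, data categories and `may_collect` check are all assumptions made for the example:

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    """Hypothetical app pricing tiers from the proposal above."""
    FREE = "free"      # personal data is traded in return for the app
    MIDDLE = "middle"  # user opts in to specific data for specific features
    PAID = "paid"      # personal data is off limits


# Which data categories each tier permits at all (illustrative, not a standard)
ALLOWED_BY_TIER = {
    Tier.FREE: {"location", "contacts", "usage"},
    Tier.MIDDLE: {"location", "usage"},  # user selects from this subset
    Tier.PAID: set(),                    # nothing may be collected
}


@dataclass
class ConsentManifest:
    """A per-app consent record, shown clearly at initial download."""
    app_name: str
    tier: Tier
    granted: set = field(default_factory=set)  # categories the user opted into

    def may_collect(self, data_type: str) -> bool:
        allowed = ALLOWED_BY_TIER[self.tier]
        if self.tier is Tier.FREE:
            # Free tier: collection of the permitted categories is the price
            return data_type in allowed
        # Middle tier: both the tier and the user's explicit opt-in must allow it
        return data_type in allowed and data_type in self.granted


# Middle tier: the user granted usage analytics but not location tracking
manifest = ConsentManifest("example_game", Tier.MIDDLE, granted={"usage"})
print(manifest.may_collect("usage"))     # True
print(manifest.may_collect("location"))  # False

# Paid tier: nothing is collectable, regardless of grants
paid = ConsentManifest("example_game_pro", Tier.PAID, granted={"location"})
print(paid.may_collect("location"))      # False
```

The point of the sketch is that the decision is mechanical once the choices are recorded: the hard part, as the article argues, is getting the industry to agree on the tiers and to present them clearly at download time.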
While we all agree that a clear solution is badly needed, where should it come from? Should governments be able to legislate business models where personal information is used or exploited? Or should it be up to the industry and the private markets to decide? At the moment, a coalition of VCs, including mine (Glasswing Ventures), is starting to tackle these questions around ethical data use.
At the same time, Google and other companies have started initiatives around the ethical use of AI and other technologies that touch their customer data. I am hopeful that these various initiatives will quickly converge and start to tip the industry in the right direction and get us more quickly down the path to a viable solution.
This article is published as part of the IDG Contributor Network.