You might’ve heard that in early April, it was announced that more than 500 million Facebook records had been found on two exposed AWS servers earlier this year. The user data was discovered by UpGuard, a cybersecurity firm dedicated to identifying data leaks (ZDNet). The data, purchased from Facebook by two third-party app development companies, was stored on Amazon servers in such a way that the public could download it.
This scenario is nothing new for Facebook, but here’s what’s interesting: after the exposure was announced, stories began circulating in the media that publicly identified AWS as responsible for the exposed data. No one can say for sure what misinformation spurred the slew of commentary, but one thing is certain: the two third-party companies that purchased the data from Facebook were responsible for securing it, and Facebook is ultimately responsible for how its data is maintained; AWS likely held little or no responsibility for the data’s storage. The narrative was odd to begin with, and after its inception, AWS made no public statement to refute the claims. It’s possible this was because the stories were simply broadly circulated misinformation, and there was no real need to respond. Whatever the reason, the story blew over, quickly superseded by other information security news events that arose shortly after.
But there’s a lesson to be learned: when a company places its data in a public cloud environment, it’s incumbent upon the data owner or data steward to ensure that the operating system is hardened; that the database, storage devices, and any transmission mechanisms are encrypted; and that access controls are in place. The truth is, both Facebook and its business customers failed to secure their environments. Though the media reports laid blame on AWS, each of the app development companies placed the data on the AWS servers, and each is responsible for securing it.
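As a concrete illustration of the kind of access-control check a data owner can run, the sketch below evaluates an S3-style Block Public Access configuration. The bucket configuration here is a hypothetical sample; in a real audit it would come from boto3’s `get_public_access_block` call against the actual bucket.

```python
# Sketch: verify that an S3-style public-access-block configuration
# actually locks a bucket down. In a real audit the dict below would
# come from boto3:
#   s3.get_public_access_block(Bucket=...)["PublicAccessBlockConfiguration"]
# The sample values are hypothetical.

REQUIRED_SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_locked_down(config: dict) -> bool:
    """A bucket is private only if every block-public setting is True."""
    return all(config.get(setting, False) for setting in REQUIRED_SETTINGS)

# Hypothetical config resembling the kind of misconfiguration in the story:
# public ACLs are blocked, but a public bucket policy is still allowed.
leaky = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}

print(is_locked_down(leaky))  # False: two settings still permit public access
```

A check like this is the customer’s job, not the provider’s: AWS exposes the setting, but only the data owner can decide, and verify, that it is on.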
“What a lot of people don’t understand about cloud service providers, especially at AWS, is that they’ll give you the platform or operating system to place your data and applications on, but the company that’s contracted for that service is responsible for the implementation of most of the security controls. At minimum, a cloud customer should conduct due diligence to ensure that stated security controls are in fact in place. Cloud customers must understand that, as Data Owners, they are ultimately 100% responsible for the data and that can never be transferred to a cloud provider.”
-Mark A. Houpt, Chief Information Security Officer, DataBank
Curiously, AWS was accused of failing to do its job even though it was likely not ultimately responsible for the data placed onto its servers. We’ve established that responsibility fell to the Cloud Service Provider’s customers, but where does the Data Owner come in?
For starters, Data Owners do legitimately share data with third-party customers. It’s a standard practice for Data Owners, and legally allowed in the United States, though in Europe and other jurisdictions it is not always permitted. Data sharing is also a large contributor to a Data Owner’s revenue stream. Even so, it would behoove a Data Owner to acknowledge its ownership and to ensure that third-party customers such as app developers and marketing agents maintain secure environments wherever the data is processed. In many cases, large enterprises with extensive resources run third-party vendor security programs that include testing and vendor risk analysis to help customers secure the data. Small and medium-sized businesses have fewer resources but should still run a vendor security program. Data Owners have a legal and ethical obligation to ensure that shared data remains secured through its entire lifecycle, regardless of transfer or location, after it’s provided to customers.
“It is best practice in the Information Security industry to be doing a trust but verify type of situation, such that they either insist on third-party reports, hire third parties to conduct security testing and analysis, or use an internal security team to conduct testing on customers to make sure that data is being secured.”
-Mark Houpt, Chief Information Security Officer, DataBank
It could be argued that much of the problem is that customers using public cloud resources have a notorious tendency to assume that the risk and ownership of data transfer to the Cloud Provider, and that education is needed to dispel these notions. False assumptions on this topic continue to pervade enterprises across all industries, and there aren’t enough information security experts stepping forward to correct the narrative.
Unfortunately, it’s common for a public cloud subscriber to assume that simply because they’ve placed their data in a provider’s environment, the provider takes responsibility for anything that occurs there. This is simply not the case.
Upon taking a closer look at the fine print, it is clear that the data owner is always the entity from which the data originated. Data processors, such as hosting providers, do not assume ownership, and the risk and liability for the data are never transferred to them.
Let’s talk about how we handle the proverbial line in the sand when it comes to data here at DataBank.
One of the first things we do with our customers is clearly communicate that DataBank never assumes ownership of customer data. The ownership always remains with the customer. It’s our principle to be transparent about our methodology for securing our environment and systems, as well as about what customers are responsible for within their own environments. We provide an implementation summary, as well as a detailed customer responsibility matrix, to ensure that responsibilities, as well as the manner in which data is secured, are clear.
Public cloud providers are useful for particular use cases, but as customers, it’s important to recognize that most large-scale CSPs only do so much in the way of security. By comparison, DataBank employs NIST SP 800-53 Rev. 4 controls to ensure strong levels of protection. Within a Managed Services PaaS environment where a customer has selected all of the security tools and features DataBank offers, there are 325 different security controls DataBank helps maintain. Eighty-five percent (85%) of the IaaS and PaaS controls are managed by DataBank on behalf of the customer, who is responsible for the remaining 15%. We also clearly identify and provide consultative guidance on how to apply controls to your application(s) and data.
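Taking the figures above at face value, the split works out roughly as follows (illustrative arithmetic only; the exact count depends on the services a customer selects):

```python
# Illustrative arithmetic for the control split described above.
total_controls = 325    # controls in the fully managed PaaS environment
provider_share = 0.85   # portion managed by the provider

provider_controls = round(total_controls * provider_share)
customer_controls = total_controls - provider_controls

print(provider_controls, customer_controls)  # 276 vs. 49 controls
```

Even in the most fully managed model, a meaningful set of controls stays squarely with the customer.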
This makes DataBank’s CloudPlus an arguably more secure environment for enterprises. Here’s what seems to trip people up: although security-focused cloud providers such as DataBank will manage a certain number of controls, the customer remains responsible for conducting a security analysis to verify that the CSP is in fact delivering on its commitment. To assist with this, DataBank provides reports so that customers can trust and verify that the 85% of controls DataBank manages are in fact in place, accurate, and functional.
It’s critical to note that this level of security is achieved under the PaaS managed security model. Under the IaaS managed security model, for example, DataBank would manage significantly fewer security controls, because the customer has chosen to take on responsibility for a greater portion of the environment. Colocation infrastructure comes with its own delegation of responsibility as well. The environment you choose determines your level of responsibility.
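The general pattern behind that shift can be sketched with generic infrastructure layers. The layer names and assignments below are a standard industry illustration of how responsibility typically moves across colocation, IaaS, and PaaS models; they are not DataBank’s actual responsibility matrix, which is contract-specific.

```python
# Sketch of how security responsibility typically shifts across service
# models. Layer assignments are a generic industry illustration, not any
# provider's actual responsibility matrix.
RESPONSIBILITY = {
    "colocation": {"provider": ["facility", "power", "cooling"],
                   "customer": ["hardware", "hypervisor", "os",
                                "application", "data"]},
    "iaas":       {"provider": ["facility", "hardware", "hypervisor"],
                   "customer": ["os", "application", "data"]},
    "paas":       {"provider": ["facility", "hardware", "hypervisor", "os"],
                   "customer": ["application", "data"]},
}

def customer_owns(model: str, layer: str) -> bool:
    """True if the customer is responsible for this layer in this model."""
    return layer in RESPONSIBILITY[model]["customer"]

# The point of the whole article in one line: in every model,
# the data layer stays with the customer.
print(all(customer_owns(m, "data") for m in RESPONSIBILITY))  # True
```

Whatever the model, the bottom row never moves: ownership of, and responsibility for, the data itself stays with the customer.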
Discover the DataBank Difference today:
Hybrid infrastructure solutions with boundless edge reach and a human touch.