From network-in-depth to defense-in-depth in the era of serverless architectures

A traditional way of implementing defense in depth is to rely heavily on the network. Traditional security architects are somewhat obsessed with the network and treat it as the primary protection layer, whatever the asset they want to protect and whatever the architecture, to the point that they have turned the defense-in-depth principle into a network-in-depth one.

The second pillar is usually authentication, the third is encryption, and finally monitoring comes in to supplement these inevitable layers.

As many of you might have noticed, this kind of approach is perfectly suitable for traditional IT, where we have full control over the datacenter and where the scale is usually limited. But the network is no longer the primary layer for the massively distributed systems and event-driven architectures that are in the realm of serverless. If you think about IoT or geo-distributed systems, how could you restrict them to a given network perimeter? The principle of defense in depth is to rely on multiple protection mechanisms; the network should only be one of them, when available.

I recently analyzed the Azure service catalog (Network & Management services excluded) and gathered some statistics: 18% of the services can be created inside a VNET, 19% can integrate with a VNET (meaning that they can interact with resources inside a VNET while being outside of it themselves) and 65% can be protected by their own firewall. I should add that whenever it is possible to lock things down at the network level, doing so will usually drive your costs up and make you lose true elasticity.

To give a concrete example, consider Azure Data Factory with the Microsoft-hosted runtime: you can simply consume the service as is, but you cannot control the runtime’s IP, nor do you even know its IP range. If that runtime needs to access a data store (and that’s the purpose of an ETL), you can’t use the data store’s firewall to restrict access to that ETL only, since it doesn’t come with a static IP or IP range. To mitigate that, you can self-host the Data Factory runtime, either on-premises or on virtual machine(s). Doing so will not only increase your costs but also kill your elasticity (or seriously reduce it, since VM scale sets are not as elastic as a native serverless offering), while the Microsoft-hosted runtime adapts 100% to the demand, cost-wise too.
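To see why a dynamic outbound IP defeats a data store firewall, here is a minimal sketch of the allow-list logic every IP firewall ultimately implements. The CIDR range is purely illustrative (a TEST-NET documentation range): the point is that you have nothing valid to put in such a list for a runtime whose outbound IP changes at every scale-out.

```python
import ipaddress

# Hypothetical firewall allow-list. Rules like this only help when the
# caller comes from a known, static IP range -- which the Microsoft-hosted
# Data Factory runtime does not give you.
ALLOWED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # illustrative only

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any allowed range."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_RANGES)

# A caller from inside the documented range passes the firewall...
print(is_allowed("203.0.113.42"))   # True
# ...but a runtime with an unpredictable outbound IP is simply blocked.
print(is_allowed("198.51.100.7"))   # False
```

Since the runtime's next IP is unknowable, your only choices are to open the firewall far too wide or to self-host the runtime, which is exactly the cost/elasticity trade-off described above.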

Another nice example of pure serverless awesomeness: CDN. A CDN service cannot be locked down network-wise, and that’s not the point, since it is supposed to serve any user worldwide. Some may argue that you usually do not serve sensitive content through a CDN, and I agree, but you can put a CDN in front of a restricted storage account, so you never know what users could put inside. Moreover, in the CDN world, there is the famous hotlinking issue, where unexpected consumers start consuming your CDN-exposed resources, with an impact on both the bandwidth and the costs (because you will pay the bill), so sometimes the CDN itself can become a problem. How does Verizon deal with that? By letting you define an encrypted token containing rules the consumer must comply with, not by locking down access to a specific network. This example shows that we have to think outside the box.

So, network-driven security certainly leads to increased costs and defeats the promise of PaaS and FaaS, where the typical benefits (time-to-market and costs) are based on economies of scale and on multi-tenancy. Azure is not the only platform where network-based security is hardly doable: even AWS, which is renowned for its network capabilities, does not behave differently with services such as CloudSearch (Azure Search), SQS (Service Bus), SWF (Logic Apps), Kinesis Firehose and Kinesis Streams (Event Hubs), just to name a few. Even AWS Lambda functions (Azure Functions) were originally designed to run in a non-predefined network perimeter, and although it is possible to run them inside a VPC (like hosting the Azure Functions runtime on an ILB ASE), it is not recommended from a performance and scalability perspective.

To add to this, the newborn Azure Sentinel (preview) is itself part of the serverless offering. Isn’t it ironic for a SIEM to be outside of a controlled network perimeter? And I was almost forgetting the panacea: conditional access. Nice try! While Azure Active Directory conditional access is indeed a very good way to control network boundaries, it is far from covering all scenarios. For instance, any Azure resource that is not subject to Azure Active Directory authentication will not benefit from conditional access. More importantly, at the time of writing, any client ID/client secret pair having access through RBAC, or any AAD application leveraging the client credentials flow, is not subject to conditional access. So, as a malicious insider, I could just grab this pair of credentials and play from whatever network perimeter I want. Should we stop using the public cloud because of that? Hell no! The truth is that security people need to reinvent themselves and start considering the network as only one of the elements.
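To illustrate why the client credentials flow sidesteps conditional access, here is a sketch of the token request it boils down to. The request is only built, never sent, and the tenant/client identifiers are placeholders; notice that nothing in it depends on where the caller is located on the network.

```python
from urllib.parse import urlencode

TENANT_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # hypothetical app registration
CLIENT_SECRET = "<the-pair-an-insider-would-grab>"  # placeholder, obviously

def build_client_credentials_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the AAD v2.0 token request for the client credentials flow.

    Anyone holding the client ID/secret pair can issue this from any
    network perimeter -- there is no interactive user to evaluate
    conditional access policies against.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://management.azure.com/.default",
    })
    return url, body

url, body = build_client_credentials_request(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
```

This is why protecting the credential itself (rotation, vaulting, managed identities) matters far more than fencing off a network segment.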

What other protection layers do we have? Identity (whether AAD or keys), encryption at rest and in transit, and even better: client-side encryption. Rotate access keys very frequently, and use Managed Identities as much as possible so that passwords and secrets can’t even be leaked…unless Azure itself is cracked!
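The reason managed identities remove the leakage risk is visible in how a token is obtained: the resource asks the local instance metadata endpoint, which is only reachable from inside the Azure-hosted resource itself. A minimal sketch of that request (built, not sent, since it only works from within Azure):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_imds_token_request(resource: str = "https://vault.azure.net") -> Request:
    """Build the Azure instance-metadata request a managed identity uses.

    No password or secret appears anywhere: the link-local 169.254.169.254
    endpoint answers only from inside the resource, so there is nothing
    for an insider to exfiltrate and replay elsewhere.
    """
    url = ("http://169.254.169.254/metadata/identity/oauth2/token?"
           + urlencode({"api-version": "2018-02-01", "resource": resource}))
    return Request(url, headers={"Metadata": "true"})

req = build_imds_token_request()
```

Contrast this with the client ID/secret pair above: here the "credential" is the compute resource itself, which is exactly why there is nothing left to rotate or to steal.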

What else? Well, what about the application itself? This is particularly true in serverless, since the whole point is that serverless services (Azure Functions, Databricks, etc.) are by design restricted in what they can do towards the underlying hosts (since, of course, there is always a server behind, even in serverless architectures). What about true DevSecOps, where you enforce security controls and application code robustness in an automated way through your CI/CD pipelines? By the way, pentests (although not fully adequate for agile methodologies) are still possible too. There are numerous ways to compensate for the “loss” of network control.
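As a toy illustration of such an automated pipeline control, here is a sketch of a pre-merge secret scan: a CI step runs it over the changed files and fails the build if anything matches. The patterns are deliberately simplistic placeholders; a real pipeline would use a dedicated credential-scanning tool.

```python
import re

# Illustrative patterns only -- real scanners ship far more thorough rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)client_secret\s*=\s*['\"][^'\"]+['\"]"),   # hardcoded AAD secret
    re.compile(r"(?i)accountkey=[A-Za-z0-9+/=]{20,}"),          # storage connection string
]

def scan_source(text: str) -> list[str]:
    """Return suspicious lines; a CI gate fails the build if the list is non-empty."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(f"line {lineno}: {line.strip()}")
    return findings
```

The gate runs on every commit, which is precisely the kind of always-on, network-independent control that DevSecOps substitutes for perimeter firewalls.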

But don’t get me wrong: I’m not telling you to discard the network entirely. I’m even in favor of using this protection layer whenever possible, but it should not drive a PaaS and FaaS security strategy, since that would simply defeat the whole purpose. In a Digital Transformation program (where the cloud usually plays an important role), there is the word Transformation, which implies changing habits, reinventing oneself and finding new methods of achieving similar results. Thinking about alternatives is absolutely necessary!

So, for anyone having access to C-level people, conveying such a message is important to avoid a waste of time, money and energy. The culture (especially in security) is by far the most difficult aspect to handle. Last but not least: we all work for a business; IT for IT’s sake, or security for security’s sake, does not make sense. Our job is to highlight the risks, define the residual risk and let the business take informed decisions about whether they want to accept it or not.

About Stephane Eyskens

Office 365, Azure PaaS and SharePoint platform expert
This entry was posted in Azure.
