Integrating On-Premises Jenkins with VSTS to deploy to an ILB ASE


I recently had to work on integrating an on-premises Jenkins with VSTS in order to use VSTS’s out-of-the-box capabilities to deploy resources to Azure. Although there is quite good documentation on this topic, you must be able to read between the lines. So, with this blog post, I’m not going to repeat what is described in the article, but I’m going to try to fill the gap when it comes to integrating with an on-premises Jenkins rather than a Cloud-based Jenkins, as assumed by the Microsoft documentation. Since an image tells more than a lengthy speech, here is one that sums it all up:


In broad lines, the combination of the blue & red lines allows for a full Continuous Deployment story, meaning that whenever something happens at Jenkins level, a VSTS release definition is triggered automatically to deploy resources to Azure. In this schema, the target is an ILB ASE. That matters because an ILB ASE is not publicly accessible.

Here, I’m using a self-hosted VSTS agent that sits in the same VNET as the ILB ASE and that has connectivity towards the on-premises Jenkins server through ExpressRoute, though this could be a S2S VPN as well. The VSTS agent uses HTTPS outbound to connect to VSTS with PAT (Personal Access Token) authentication. The agent connects to Jenkins through HTTP or HTTPS + FBA, depending on your Jenkins setup. The default installation is merely HTTP + FBA, but Jenkins can deal with different kinds of authentication mechanisms, as well as with HTTPS instead of HTTP.
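As a side note, the PAT authentication mentioned above is nothing more than HTTP Basic authentication with an empty username and the token as the password. A minimal Python sketch of the header the agent (or any tool calling the VSTS REST APIs) sends; the token value is of course a placeholder:

```python
import base64

def vsts_auth_header(pat: str) -> dict:
    """Build the Basic auth header VSTS expects for PAT authentication.

    The username is left empty; the PAT is used as the password.
    """
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Hypothetical token value, for illustration only:
headers = vsts_auth_header("my-personal-access-token")
```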

Now, if we zoom in on the red lines only, we see that the on-premises Jenkins server URL must be published through a reverse proxy as an external-facing URL. This is required for the VSTS Service Hooks that are used to enable CD and trigger releases automatically.

Bottom line: if you can afford to trigger VSTS releases manually (or on a schedule), you can forget the red lines.

There is still an extra trick you need to know if you go for the blue lines only, meaning not publishing an external URL for the Jenkins server. When creating your Jenkins service endpoint in VSTS, do not try to test it, as it won’t work:


It won’t work simply because the URL is an on-prem one and VSTS has no way to connect to it. However, you can safely register the endpoint. The trick comes later on: when setting up the release definition, make sure to use the Jenkins tasks to download the artifacts, and do not try to use the release artifacts feature, as that one only works with external-facing endpoints:



This will make sure artifacts are downloaded by the task executed by your self-hosted agent that has connectivity to your on-premises Jenkins server.

Happy deployments!




Posted in Azure, DevOps, vsts

My recipe to build secure applications hosted in Azure


Here are some tips that might help you build and host secure applications in Azure.

Application Architecture: Clients and APIs

Make sure to make a clear segregation between clients and APIs. I’m not a great fan of MVC where the C part is often used as an API layer by developers. I advocate for a clear separation between the client part (a mobile app, a SPA, etc.) and the API layer. The clients and APIs should be hosted in different App Services.

Do not trust client devices

Whether your client is a browser, a mobile app or any other native client application, as a rule of thumb, never store any sensitive information that is beyond the scope of the current device/user. So, for instance, never use a Storage Account key right from a client device. Do not assume that a mobile app’s code is not accessible. For instance, if you write a Xamarin mobile app, it will be packaged and delivered as an IPA file which can easily be retrieved via iTunes. Extracting code from the archive is a piece of cake, and using reflection tools to reverse engineer the code is no more complicated.

Moreover, it is damn easy to proxy a mobile device with Fiddler (or other tools) so as to capture and analyze HTTP/HTTPS traffic. Same goes for browsers of course, although it is even easier to explore the browser storage & network activity.

API isolation

Ensure API isolation (network isolation) and/or secure your APIs through AAD or any other authentication/authorization mechanism. In Azure, there are multiple ways to ensure network isolation/protection, such as using App Service Environments or IP filtering at App Service level.

API Gateway

Use an APIM layer between your consumers and your APIs. Of course, in the Azure story, Azure API Management is a first-class citizen. Define proper APIM policies to ensure only eligible requests are sent to the backend APIs. Leverage the built-in JWT token validation and other techniques, such as client certificates, to secure the communication between API consumers and APIs. On top of the security bits, APIM will help you define throttling policies, which are an extra protection against DoS attacks.
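To illustrate what JWT validation buys you, here is a minimal stdlib sketch of the expiry check that APIM’s validate-jwt policy performs. The real policy also verifies the signature, issuer and audience, which this toy code deliberately skips:

```python
import base64
import json
import time

def _b64url_decode(segment: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def is_token_expired(token: str, now=None) -> bool:
    """Check the 'exp' claim of a JWT.

    Only mirrors the expiry part of APIM's validate-jwt policy;
    signature, issuer and audience checks are out of scope here.
    """
    payload = json.loads(_b64url_decode(token.split(".")[1]))
    now = time.time() if now is None else now
    return now >= payload.get("exp", 0)

def make_token(claims: dict) -> str:
    """Build an unsigned sample token, for demonstration only."""
    head = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
    return f"{head}.{body}."
```

A token with an `exp` in the past is rejected, one with an `exp` in the future is accepted; APIM does this (and much more) declaratively at the gateway, before the request ever reaches your backend.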


Use Azure’s network plumbing (NSG, ASG, subnets, VNETs, peering, etc.) to control network flows between Azure resources as well as between Azure and on-premises. Azure now makes it easy to set up & control secure connections.


Host SPAs behind an Azure Application Gateway with WAF enabled to protect against most of the OWASP vulnerabilities and more particularly the attacks targeting browsers.


Whether you need to use certificates, encryption keys or secrets, do not reinvent the wheel: just use Azure Key Vault together with its SDK. Azure Key Vault is backed by FIPS 140-2 Level 2 validated HSMs, in other words, hardware many organizations can’t even afford.

Keys, Secrets & Certificates rotation

Make sure you rotate sensitive information on a regular basis. Key Vault may help achieve this, as explained in this article.

Organizations are used to dealing with PKI but should not focus on PKI only. I’m more particularly referring to Azure Active Directory Applications, which are a new (well, a few years old already) kid in town and for which I often see never-expiring application secrets. In comparison, I very rarely see never-expiring certificates. Why is that? Simply because AAD Apps are not yet integrated into the existing enterprise processes. Whatever the reason, this is pretty bad, especially if some of your components leverage the Client Credentials Flow (app-only). Therefore, I’d recommend using short-lived AAD App secrets. The Graph API helps identify secrets that are about to expire, so it is not hard to have scheduled Azure Automation runbooks rotate these secrets automatically and seamlessly for your business applications.
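The filtering step such a runbook performs is straightforward. A sketch, assuming credential records shaped like the passwordCredentials objects the Graph API returns (the `endDateTime` field name is taken from the Graph schema; the sample data is invented):

```python
from datetime import datetime, timedelta, timezone

def expiring_soon(credentials, within_days=30, now=None):
    """Return the credentials whose end date falls within the horizon.

    Input dicts mimic the shape of Graph API passwordCredentials.
    """
    now = now or datetime.now(timezone.utc)
    horizon = now + timedelta(days=within_days)
    return [
        cred for cred in credentials
        if now <= datetime.fromisoformat(cred["endDateTime"]) <= horizon
    ]

# Sample data a runbook might have fetched from the Graph API:
creds = [
    {"keyId": "secret-1", "endDateTime": "2018-06-15T00:00:00+00:00"},
    {"keyId": "secret-2", "endDateTime": "2019-06-15T00:00:00+00:00"},
]
```

The runbook would then generate a fresh secret for each hit, push it to the application (and to Key Vault), and remove the old credential once consumers have rolled over.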

Keep credentials out of code

Explore Azure MSI (still in preview) if your APIs need to access other resources.

CI/CD pipeline, quality gates & security scanners

Nowadays, security also happens here. With agile development and very frequent releases, it is no longer possible to rely only on penetration tests, since they are very expensive and hardly affordable every two weeks. So, unless you have blue & red teams in your organization, you’d better invest in tools that analyze both the quality & security of your code base. To me, the code should be agnostic to its final hosting location and should be robust by design. Layer 7 vulnerabilities are often due to unvalidated untrusted data, and this can be avoided right from the code itself, independently of a WAF.

Monitor, monitor, monitor!

Admittedly, monitoring what’s going on in Azure isn’t easy because there is a plethora of tools, but when it comes to security, I’d definitely recommend using Azure Security Center and Traffic Analytics (preview) together with NSG Flow Logs.

Happy Security!



Posted in Azure, Azure Active Directory, Azure Key Vault, Security

Azure Security Cheat Sheet


Cybersecurity is a concern for everyone today as more and more workloads are connected in one way or another, meaning exposed in one way or another. When it comes to the Cloud, things turn wilder as PaaS, CaaS, FaaS and, to some extent, even IaaS represent a paradigm shift. Organizations tend to rely on good old recipes that are perfectly suitable for traditional on-premises systems but not especially a good fit for the Cloud, even when talking IaaS, since the underlying network plumbing does not have much in common with on-premises networks. Moreover, insiders represent a severe security risk, but some organizations still think their premises are way more secure than the Cloud, just because they are within the walls.

That said, I don’t want to start a debate but rather to sum up what is available in Azure to build secure architectures. It is probably not exhaustive but these are the most common tools with a short recap of their purpose.

  • Virtual Networks aka VNETs. These are the easiest way to isolate components from the internet, or at least to control inbound traffic to any resource belonging to the VNET. VNETs can be peered with each other and have connectivity to on-premises via S2S VPN or ExpressRoute. More and more PaaS components integrate with VNETs. Service endpoints also allow you to restrict some services (Azure SQL, Storage) to VNET-only resources. Also, make sure to provision VMs with private IPs only and manage them from your premises; of course, for that you’ll need either a S2S VPN or ExpressRoute. P2S is also possible but discouraged.
  • NSG (Network Security Groups) & ASG (Application Security Groups): both help dealing with inbound and outbound traffic within a subnet and/or VMs. ASGs improve the way you apply these rules by minimizing the required number of NSGs. Best practice: apply NSGs at subnet level and avoid applying them at VM level. Make sure your logical split between subnets is clean; that will ease the configuration of NSGs.
  • Basic DDoS Protection Plan: applies by default to every VNET. Basic DDoS is aimed at protecting every public IP address. So, key takeaway here: put everything you can inside a VNET to benefit from this built-in protection.
  • Standard DDoS Protection Plan: must be enabled at VNET level. On top of the basic protection, one gets additional protection that is more specific to the environment. Standard DDoS makes use of ML behind the scenes to improve the level of protection.
  • There are plenty of network artifacts (public IPs, private IPs, load balancers, Traffic Manager, route tables, etc.) which should be taken care of but are not safeguards per se.
  • Application Gateway: a Layer 7 WAF with built-in protection against the OWASP Top 10 most common attacks.
  • NVA (Network Virtual Appliances): these are not out of the box. Traditional vendors publish NVAs to the Azure Marketplace. I’m not sure using them is in line with the paradigm shift I mentioned earlier; however, using at least one (for instance to control inbound/outbound traffic to your premises) will probably make your customer sleep better at night.
  • Azure API Management: allows any organization to build and expose APIs in a consistent manner. Thanks to policies, one can define rules that inspect every incoming request and discard non-compliant ones. APIM comes with plenty of ways to secure APIs (JWT policies, client certificates, subscription keys, etc.) which can be combined, and it can (premium tier) be integrated with VNETs in External and Internal modes. The Products / APIs / Operations structure makes it easy to define the bare minimum security requirements via policies at product level and let all APIs inherit them.
  • Azure Key Vault: based on a HSM, Key Vault is definitely where any sensitive information (secrets, encryption keys, certificates) should be stored. Key Vault is definitely a first-class citizen in the Azure security story.
  • ASE (App Service Environment): a way to leverage typical App Services with only private IPs (note that there is also a public ASE). On top of the security aspects, ASEs come with enhanced performance.
  • Encryption comes with everything in Azure. Always Encrypted is a feature you should turn on for every database. Azure Key Vault can also be used to encrypt content from APIs, where encryption can take place in Azure (private keys never leave the vault) or in code via key-wrapping techniques. In short, hybrid encryption (RSA + symmetric) is made easy.
  • RBAC (Role-Based Access Control) is also a first-class citizen since it allows you to control access to resources in a very granular way. Therefore, a good logical organization (number of subscriptions, resource groups, etc.) is important to optimize the use of RBAC.
  • Azure PIM (Privileged Identity Management): this guy helps granting roles for a limited amount of time, in order to avoid having too many people with too many permissions.
  • Azure Active Directory Applications are also a very good way to protect custom-built APIs and to grant access to SaaS APIs such as the Graph API (but plenty of others too).
  • EMS (Enterprise Mobility Suite) allows for very advanced conditional access rules, multi-factor auth, RMS, etc.
  • Azure MSI is a great way to keep credentials out of code and, basically, out of any configuration file. The App Service benefits from a system identity that can be used to request tokens for any resource. Of course, the system SPN should be granted permissions over those resources prior to using MSI from the App Service. Talking of code, that is clearly where security starts. Having a proper automation system running code security checks (quality gates) early in the development lifecycle is key. With agile development, pen tests are not the most cost-effective solution, given the very frequent releases; therefore, relying on source code security scanners is key. Basic things, such as checking the security of an API post-deployment, matter as well (you can have a look at the API Security Checker here).
  • Azure Security Center: this guy will highlight every resource that is not well secured and which could potentially lead to security trouble. You can see it as your security dashboard. It also enables features such as JIT (just-in-time access), which basically opens management ports only when needed and closes them automatically to reduce the attack surface of VMs.

On top of the above, Azure ships with different monitoring tools that help support a proper governance as well as detect potential security issues. These tools range from Log Analytics, Azure Monitor and Azure Advisor to Azure Policies. So, as you can see, there are many different tools to build secure solutions in Azure. There is no one size fits all, but often combining a few of these guys should do the job!

Happy Security!

Posted in Azure

Deploy Azure App Services to multiple regions within the same subscription – VSTS trick


Most of the time, when deploying an App Service such as a webapp to a single region, you simply use the Azure App Service Deploy task, which is currently in version 3.0, with a preview of the next version on the way.

However, using the very same task to deploy an App Service to multiple regions, in case you have an HA setup, is a little more challenging. Looking at the below screenshot:


you can easily specify the name of the App Service. The problem is that, when working with multiple regions, the name will most probably be the same in the other region; therefore, the task cannot distinguish which service is targeted. So, ideally, we should be able to select the resource group to make this distinction.

It turns out that one can select the resource group when ticking the Deploy to slot option:


but what if you don’t use slots? Then, the easy fix is to put the value “production” in the Slot field.

Credits to Thomas Browet (@thomas_brw), one of my colleagues, for the tip.

Happy deployments!

Posted in Azure, DevOps, vsts

VSTS task/extension upgrade explained


There is quite good documentation on Microsoft’s web sites on how to build custom tasks and custom extensions, but things become a little more complicated when it comes to upgrading existing tasks and/or extensions. Since I ended up reverse engineering extensions built by big third parties (by downloading them), I thought it was well worth a blog post to spare you the same pain.

That said, you have to distinguish between tasks & extensions. For private use, one can perfectly well work with tasks only. Extensions are a way to distribute one or more tasks through the marketplace, shared either privately or publicly. I will only consider the latter.

Fixing a bug

If you want to fix a bug in an existing task, you can simply bump the patch or minor version number of the task and bump the extension version in the manifest:

"version": {
    "Major": "1",
    "Minor": "0",
    "Patch": "1"
}

Updating the extension in the Management Portal (I’ll come to this later) will automatically make the build agents of accounts consuming the extension pick up the latest version of the task.

Publishing a major version of a task

If the change is bigger than a mere bug fix, and if you want to have task versions side by side to avoid breaking anything, or to propose multiple versions of the same task such as:


you’ll need to work a little more. In that case, you can duplicate your existing task and end up with a folder structure like this:

my task
—-v1 artifacts
—-v2 artifacts

The task identifier must remain the same, the name may or may not change, and the major versions must be different. In the extension manifest, the contribution should refer to the root folder of the task and to its identifier, like this:

"contributions": [
    {
      "id": "ca1755b2-751f-45e3-9bad-89a5c08d457d",
      "type": "ms.vss-distributed-task.task",
      "targets": [ "ms.vss-distributed-task.tasks" ],
      "properties": {
        "name": "rootfoldername"
      }
    }
]
Deleting a task or a version

I wouldn’t recommend trying this, but it seems to have no effect on accounts already having your extension installed, presumably to avoid any disruption of service. However, beware that removing an extension from the marketplace is one step too far, as existing accounts’ build/release definitions will break if they use tasks from your extension.

In case of a mere task/version deletion, new accounts will only see the remaining tasks and/or versions, while existing accounts would only get a fresh copy by completely uninstalling and reinstalling the extension, which is unlikely to happen. It would be handy to have an explicit opt-in upgrade option at tenant level as well, to keep better control over what the supplier is doing. Today, the only two options are “Disable” and “Uninstall”.

Updating the extension itself

The only way to push changes to existing and new accounts is to publish a new version of the extension itself. This can easily be done by updating the extension manifest manually and calling tfx extension create, or simply by calling tfx extension create --rev-version, which creates the extension package and changes the manifest to increment the version number. Once the package is produced, you can simply use the update menu option:


Happy VSTS.


Posted in vsts

DevOps – Azure API Management and VSTS, better together


Visual Studio Team Services aka VSTS is a great tool when it comes to Application Lifecycle Management, Continuous Integration and Continuous Deployment. It is a must-have tool in any DevOps organization working with Microsoft technologies (but not only). With that in mind, it is a surprise to no one that most Azure PaaS services are natively integrated with VSTS, using existing extensions, ARM templates or ARM APIs.

However, strangely enough, I couldn’t find a real integration with Azure API Management other than this extension, which is a nice effort but does not reflect the real value of Azure API Management. Some getting-started ARM templates are available, but that’s rather light for now. Moreover, while ARM templates are great, they are sometimes limited or not that easy to manipulate.

So, in an attempt to contribute, I released a free VSTS extension on the marketplace, called API Management Suite, that covers a rather broad set of Azure API Management features. The extension helps with:

  • Creation/Update of Gateway APIs with and without versioning pointing to traditional backend API services
  • Creation/Update of Gateway APIs with and without versioning on top of Azure Functions
  • Creation/Update of Gateway Products
  • Built-in support of Gateway Policies for both products & APIs

Everything is open sourced on GitHub in this repo.

Happy deployments!

Posted in Azure, DevOps

May Azure AD V1.0 endpoint be used for GDPR compliancy?


By now, everybody should have heard about GDPR. While not being a lawyer, I think I can summarize it this way: any identifiable personal information, as well as sensitive personal information, is subject to the GDPR regulation. This first and foremost implies informing users about what is done with their personal data.

The major asset to comply with GDPR is consent. By letting users consent to what is done with their personal information, you should be on the safe path. However, GDPR comes with strong requirements, such as: every distinctive usage should come with its own consent, which can be revoked at any time by the end user. This means that you cannot simply bundle everything in one basket and ask the user to consent to the whole thing, even if doing so is already better than nothing.

Posted in Azure, Azure Active Directory