Episode 1 was a global introduction to AI & Cognitive Services. Episode 2 is much more hands-on and is aimed at building fundamental assets for the next episodes. You will learn how to get started with a minimal chatbot (which we’ll reuse throughout the entire course) and the typical steps involved in creating & consuming a Cognitive Service. This is an intensive 15-minute session with 13 minutes of pure step-by-step demos. You can watch the video here
In this episode, I will draw the AI landscape of the Microsoft ecosystem. I want you to become a little more familiar with fundamental topics such as Machine Learning, Deep Learning and Natural Language Processing, which can sound confusing to many developers. Once the high-level concepts are covered, I’ll introduce the Azure Cognitive Services and try to quickly answer the “what’s in it for me” question with real-world examples mapped to the various services. If you’re a hardcore developer, you might be disappointed by this episode as I will not show code yet, but by the end of it, you should understand when to use what and how to manage customer expectations. For the “how-to” bits, I invite you to join me in Episode 2.
As you might have seen, the Linguistic Analysis API of the Azure Cognitive Services is available as part of the Language category. It lets you perform POS-tagging, which is basically a way to identify each word in a piece of text and its grammatical role.
I find POS-tagging particularly useful whenever you want to capture the essence of a phrase. I’ve used it a few times to simplify user search queries and to build dynamic queries programmatically. Whatever use you make of POS-tagging, though, Microsoft’s current implementation has a small shortcoming: it never returns the tokens & tags grouped together. To give you a concrete example, here is a screenshot of all possible results (at the time of writing):
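Since the service returns tokens and tags separately, regrouping them yourself is trivial once you have both lists. Here is a minimal Python sketch; the input lists below are illustrative and do not mirror the exact response shape of the Linguistic Analysis API:

```python
def regroup(tokens, tags):
    """Pair each token with its POS tag into a single list of tuples."""
    if len(tokens) != len(tags):
        raise ValueError("token and tag counts differ")
    return list(zip(tokens, tags))

# Illustrative input: tokens and their Penn Treebank tags as two flat lists.
pairs = regroup(["the", "quick", "fox"], ["DT", "JJ", "NN"])
# → [('the', 'DT'), ('quick', 'JJ'), ('fox', 'NN')]
```

From there, the (token, tag) pairs can feed a simplified search query builder directly.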
It is always dangerous to compare software/services from different vendors, as benchmarking is rarely exhaustive and can be subject to interpretation and misunderstanding. On top of that, hardcore fans of a vendor might lose their common sense and objectivity, as the debate can quickly turn emotional.
However, I recently attended a demo of Watson by a seasoned IBM consultant, which led me to try out and explore Watson a little further. I have been working with Azure Cognitive Services for more than a year, especially using LUIS and the Bot Framework to build chatbots. On top of my Azure experience, I have some background in AI & NLP in general, as I’ve been involved in multiple initiatives over the past 3 years (for instance, a package I wrote on top of DBPedia Spotlight), using neither IBM nor Microsoft services. Continue reading
Today, Bots & more particularly Chatbots are on everyone’s lips! Why this buzz? The answer is very easy: AI has become mainstream thanks to vendors such as Microsoft, IBM and others. Chatbots make use of computational linguistics behind the scenes, which is not a new concept: Alan Turing was already working on it in the nineteen-fifties! So, what has changed in the meantime, and why do we suddenly reach a new paradigm? Resources & data are the answer: today, the amount of available information & hardware capabilities have increased dramatically. Continue reading
I recently realized, thanks to a colleague (@MMeuree), that the ID_TOKEN that’s supposed to contain the group membership, as shown below:
does not list more than 4 groups (here I grabbed the token using another flow). So, if the user belongs to more than 4 groups, you’re going to see hasgroups: true in the token instead of the actual groups. This behavior is by design, no matter what you specified in the App manifest for the groupMembershipClaims attribute. So, the alternative is simply to query the Graph API.
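You can detect the overflow case before falling back to the Graph API. The sketch below decodes the JWT payload without signature validation (illustration only; validate tokens properly in production) and checks for the hasgroups marker:

```python
import base64
import json

def token_claims(id_token):
    """Decode the payload (middle segment) of a JWT ID token.
    No signature validation — for inspection/illustration only."""
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def groups_overflowed(claims):
    """True when AAD emitted 'hasgroups': true instead of the 'groups'
    claim, meaning you must query the Graph API for the memberships."""
    return claims.get("hasgroups") is True
```

If groups_overflowed returns True, call the Graph API’s memberOf endpoint with the user’s access token to retrieve the full group list.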
In this blog post, I’m going to explain what I consider a creative way of exposing on-premises APIs. Let’s envision the following scenario:
You have an on-premises API that is secured using Windows Authentication and for which you need to know the identity of the caller. This API is already consumed by various on-premises consumers, and you now want to make it available to online consumers as well, while benefiting from the throttling and caching capabilities of Azure API Management.
A traditional way of doing this would be to host your on-premises API in a DMZ and plug Azure API Management into that DMZ endpoint. Another way is to use VNETs and VPN techniques to control and establish connectivity. That said, while you’d be controlling connectivity, you’d still need to handle identity: as per our scenario, it is a prerequisite that your backend API knows the identity of the user consuming it (via an app).