Episode 1 was about analyzing what’s available in the Microsoft ecosystem in terms of artificial intelligence. In Episode 2, we set up the foundations to get us started with a minimal bot & we saw the typical steps involved in creating & consuming a Cognitive Service.
In this demo-intensive episode, we’ll first see what the challenges are when dealing with natural language and building chatbots. We’ll then capitalize on what we’ve done already by adding a LUIS layer to our bot. I will not only explain how LUIS works but also give you concrete hands-on demos drawn from real-world experience.
You can watch this episode and the others on Channel9.
Episode 1 was a global introduction to AI & Cognitive Services. Episode 2 is much more hands-on and is aimed at building fundamental assets for the next episodes. You will learn how to get started with a minimal chatbot (that we’ll reuse throughout the entire course) and the typical steps involved in creating & consuming a Cognitive Service. This is a 15-minute intensive session with 13 minutes of pure step-by-step demos. You can watch this video here.
In this episode, I will draw the AI landscape of the Microsoft ecosystem. I want you to be a little more familiar with fundamental topics such as Machine Learning, Deep Learning and Natural Language Processing, which can sound a little confusing to many developers. Once the high-level concepts are covered, I’ll introduce the Azure Cognitive Services and try to quickly answer the “what’s in it for me” question with real-world examples mapped to the various services. If you’re a hardcore developer, you might be disappointed by this episode as I will not show code yet, but by the end of it, you should understand when to use what and how to manage customer expectations. For the “how-to” bits, I invite you to join me in Episode 2.
As you might have seen, the Linguistic Analysis API of the Azure Cognitive Services is available as part of the language category. It allows you to perform POS-tagging, which is basically a way to identify each word and its role within a piece of text.
I find POS-tagging particularly useful whenever you want to capture the essence of a phrase. I’ve used it a few times to simplify user search queries and build dynamic queries programmatically. Whatever use you make of POS-tagging, though, Microsoft’s current implementation has a little shortcoming: it never answers with tokens & tags regrouped together. To give you a concrete example, here is a screenshot of all possible results (at the time of writing):
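Since the service returns tokens and tags separately, you end up regrouping them yourself. Here is a minimal Python sketch of that pairing step; the parallel-list shape (one tag per token, in order) is an assumption for illustration, not the exact API payload:

```python
# Hypothetical helper: regroup parallel token & tag lists
# into (token, tag) pairs, which the service does not do itself.
def zip_tokens_tags(tokens, tags):
    """Pair each token with its POS tag."""
    if len(tokens) != len(tags):
        raise ValueError("tokens and tags must have the same length")
    return list(zip(tokens, tags))

# Made-up example analysis of "the quick brown fox"
tokens = ["the", "quick", "brown", "fox"]
tags = ["DT", "JJ", "JJ", "NN"]
print(zip_tokens_tags(tokens, tags))
# [('the', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'), ('fox', 'NN')]
```

Once regrouped this way, feeding the pairs into a dynamic query builder becomes trivial.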
[Update 09/2017: learn how to bring huge capabilities to your bot by watching my free Channel9 course]
It is always dangerous to compare software/services from different vendors, as benchmarking is rarely exhaustive and can sometimes be subject to interpretation and misunderstanding. On top of that, hardcore fans of a vendor might lose their common sense and objectivity, as things can quickly turn emotional.
However, I recently had the opportunity to see a demo of Watson from a seasoned IBM consultant, which led me to try out and explore Watson a little further. I’ve been working with Azure Cognitive Services for more than a year, especially using LUIS and the Bot Framework to build chatbots. On top of my Azure experience, I have some background in AI & NLP in general, as I’ve been involved in multiple initiatives (such as a package I wrote on DBPedia Spotlight) for the past 3 years, using neither IBM nor Microsoft services. Continue reading
Today, Bots & more particularly Chatbots are on everyone’s lips! Why this buzz? The answer is very easy: AI has become mainstream thanks to vendors such as Microsoft, IBM and others. Chatbots make use of computational linguistics behind the scenes, which is not a new concept, since Alan Turing was already working on it in the nineteen-fifties! So, what has changed in the meantime, and why have we suddenly reached a new paradigm? Resources & data are the answer: today, the amount of available information & hardware capabilities have increased dramatically. Continue reading
I recently realized, thanks to a colleague @MMeuree, that the ID_TOKEN that’s supposed to contain the group membership as shown below:
does not list more than 4 groups (here I grabbed the token using another flow). So, if the user belongs to more than 4 groups, you’re going to see hasgroups: true in the token instead of the actual groups. This behavior is by design, no matter what you specify in the App manifest with regard to the groupMembershipClaims attribute. So, the alternative is simply to query the Graph API.
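In code, that fallback boils down to: use the groups claim when it is present, and only call the Graph API when the token merely carries the hasgroups marker. A minimal Python sketch, assuming the token has already been decoded into a claims dict; the Graph call itself is abstracted behind a callable you would implement (e.g. against the memberOf endpoint):

```python
# Sketch: resolve group membership from decoded ID token claims,
# falling back to a Graph API lookup for the "hasgroups" overage case.
def get_group_ids(claims, fetch_from_graph):
    """Return the user's group IDs.

    claims           -- decoded ID token claims (a dict)
    fetch_from_graph -- zero-argument callable querying the Graph API
    """
    if "groups" in claims:
        return claims["groups"]       # token listed the groups directly
    if claims.get("hasgroups"):
        return fetch_from_graph()     # too many groups: ask the Graph API
    return []                         # no group claims at all

# Example: a token for a user in more than 4 groups only carries the marker
claims = {"hasgroups": True, "upn": "user@contoso.com"}
groups = get_group_ids(claims, lambda: ["gid-1", "gid-2", "gid-3", "gid-4", "gid-5"])
```

Keeping the Graph call behind a callable also makes the decision logic easy to unit-test without any network access.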