Xamarin Forms with Microsoft Cognitive Services



Short introduction

Microsoft Cognitive Services – what’s that? Imagine taking a picture with your phone and, a few seconds later, getting its description and a full analysis.

Yes, that is possible. Microsoft Cognitive Services are intelligent APIs that let you add just a few lines of code to your app to detect which emotions are visible in a picture or to check spelling.

In this article I would like to show how to use them in Xamarin Forms mobile applications.

The whole source code is available on my GitHub account.

What do I need to start?

1) Visual Studio 2015 (Community edition is free; higher editions also work) with Xamarin, or Xamarin Studio


Let’s start

1) Open the Microsoft Cognitive Services website

2) Click “My account” in the top right corner and sign in with your Microsoft account:


3) Select “Computer Vision – Preview” from the list, then accept the terms and click “Subscribe”:


4) Once you subscribe, you should see the information below about your available subscriptions:


We will use Key 1 (from the “Computer Vision – Preview” tab) in this tutorial, so copy it to a safe place.


Let’s create a Xamarin Forms app to test Cognitive Services!

1) Open Xamarin Studio (or Visual Studio with Xamarin) and create a new Xamarin Forms project:


Select “Shared Project” in the “Shared Code” field.

Type the name of the app – mine is XamarinCognitiveServices.




2) Take a picture and analyse it:

In this tutorial we will create a simple application in which the user can select a picture from the gallery and analyse it with Microsoft Cognitive Services.

To do that we have to add the “Media Plugin for Xamarin” package, available on NuGet.

Adding a NuGet package is the same process for the iOS and Android projects. Below I will show how to do it for Android, but remember to add the package in the same way to the iOS project.

1. Right-click “Packages” inside the Android project:


2. Type “xam.plugin” in the search box and select “Media Plugin for Xamarin and Windows”:


Once you add the package, it should be available for use. Remember to do the same for the iOS project!
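If you use Visual Studio, the same package can also be installed from the NuGet Package Manager Console; the package ID on NuGet is Xam.Plugin.Media:

```
Install-Package Xam.Plugin.Media
```

Run it once for each platform project (Android and iOS) so both get the dependency.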


To enable image analysis we need to add the “Microsoft.ProjectOxford.Vision” library to both projects: Android and iOS.

To do that, find it in the NuGet Package Manager:
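Again, from the Package Manager Console this is a one-liner (the package ID matches the library name):

```
Install-Package Microsoft.ProjectOxford.Vision
```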


Now it’s time to add some UI so the user can select a picture from the gallery.

Once the user selects an image, the app should display the description retrieved from Cognitive Services.

Remember that on the Android platform you have to grant the android.permission.INTERNET permission.
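You can tick this permission in the Android project options, or declare it directly in Properties/AndroidManifest.xml – a minimal sketch of the relevant entry:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- Required so the app can call the Cognitive Services REST endpoint -->
  <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```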

1. Open the Shared Project tab and select the XAML page with the same name as your project:


2. Change the XAML code so it looks like the snippet below. We add a button to choose a picture from the gallery, an image control to show it, and a label for the results:

<?xml version="1.0" encoding="utf-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:local="clr-namespace:XamarinCognitiveServices" x:Class="XamarinCognitiveServices.XamarinCognitiveServicesPage">
  <StackLayout Orientation="Vertical" HorizontalOptions="Center" VerticalOptions="Center">
    <Button Text="Choose picture" VerticalOptions="Center" HorizontalOptions="Center" Clicked="Handle_Clicked"/>
    <Image x:Name="SelectedImage" Aspect="AspectFit" WidthRequest="200" HeightRequest="200" />
    <Label x:Name="InfoLabel" HorizontalOptions="Center"/>
  </StackLayout>
</ContentPage>

3. Now let’s add the picture selection handler. Once the user selects an image, we show it and display the description retrieved from Cognitive Services below it:

using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using Plugin.Media;
using Xamarin.Forms;

namespace XamarinCognitiveServices
{
    public partial class XamarinCognitiveServicesPage : ContentPage
    {
        public XamarinCognitiveServicesPage()
        {
            InitializeComponent();
        }

        async void SelectPicture()
        {
            if (!CrossMedia.Current.IsPickPhotoSupported)
                return;

            var image = await CrossMedia.Current.PickPhotoAsync();
            if (image == null)
                return;

            // Show the selected picture.
            var stream = image.GetStream();
            SelectedImage.Source = ImageSource.FromStream(() => stream);

            // Analyse a fresh copy of the stream and list the returned tags.
            var result = await GetImageDescription(image.GetStream());
            foreach (string tag in result.Description.Tags)
                InfoLabel.Text = InfoLabel.Text + "\n" + tag;
        }

        public async Task<AnalysisResult> GetImageDescription(Stream imageStream)
        {
            VisionServiceClient visionClient = new VisionServiceClient("<<YOUR API KEY HERE>>");
            VisualFeature[] features = { VisualFeature.Tags, VisualFeature.Categories, VisualFeature.Description };
            return await visionClient.AnalyzeImageAsync(imageStream, features.ToList(), null);
        }

        void Handle_Clicked(object sender, EventArgs e)
        {
            SelectPicture();
        }
    }
}

4. Launch the iOS project, click the button and grant access to photos:
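Note: if your app targets iOS 10 or newer, iOS also requires a usage description for the photo library in Info.plist, otherwise photo access is denied. A minimal entry (the description text is just an example):

```xml
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs access to your photos to analyse them.</string>
```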




Pick an image. After a few seconds you should see the analysis result:


As you can see, there are many tags describing the image, such as: car, transport or driving.
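Under the hood, the Computer Vision API returns JSON, which the library deserializes into the AnalysisResult object we read the tags from. A simplified, purely illustrative fragment of that response (the caption text and confidence value here are made up):

```json
{
  "description": {
    "tags": [ "car", "transport", "driving" ],
    "captions": [
      { "text": "a car driving down a street", "confidence": 0.87 }
    ]
  }
}
```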

Summary

In this post I wanted to show how to use Microsoft Cognitive Services with Xamarin. Please remember that this is only one example.

There are more APIs to try, such as:

Computer Vision API

Emotion API

Speaker Recognition API

Text Analytics API

Go, register and try some of them – they are worth it!