Properly measuring HTTP request time with Node.js

When your backend code is calling external APIs, you may want to measure the time of particular requests to identify bottlenecks.

The most straightforward, but incorrect, way to measure how long a request takes is to use the JavaScript Date object:

const request = require('request');

const startTime = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed:', new Date().getTime() - startTime);
});

However, this won’t give you the actual time the request takes. The above request call is asynchronous, so you start measuring at the time the request was queued, not when it was actually sent.

In order to determine how much time elapsed since sending the request, you can use the time parameter:

const request = require('request');

request.get({ url: 'https://google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

You can also compare the results returned by both methods:

const request = require('request');

const startTime = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed since queuing the request:', new Date().getTime() - startTime);
});

request.get({ url: 'https://google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

When I ran it, I got the following results (times in milliseconds):

The actual time elapsed: 72
Time elapsed since queuing the request: 156

Notice that the first request’s callback resolves after the second one’s(!)

The difference is almost 2x. Depending on your server-side code, the difference might be even larger, and give you incorrect hints while you are profiling your application.


Boogie Board – notepad of the future

Are you using paper notepads to write down ad-hoc notes?

These multi-page paper notebooks seem super useful. You can just turn the page, save your old sketch, and have a clean page for a new one! WRONG! This is the worst feature! You never look at these notes again, and they just pile up.

Recently, I got a Boogie Board – an LCD writing tablet! It cost $20 and it changed my life.

BoogieBoard

You can sketch whatever you want and erase it with one button click. It’s like a pocket whiteboard. If something is important, I just dump it into my OneNote before erasing (which rarely happens). You don’t have to look for a pen anymore: there is one that can be attached to the board, and you can even write on it with your fingernails.

I also got a bigger one for in-office use. My desk before and after:

BoogieBoard - before BoogieBoard - after

Get one (or a big one) for yourself! It will change your life!


Add custom metadata to Azure blob storage files and search them with Azure Search

Did you know that you can add custom metadata to your blob containers, and even to individual blob files?

You can do it in the Azure Portal, using the SDK, or via the REST API.

The most common scenario is adding metadata during file upload. The code below uploads a sample invoice from disk and adds year, month, and day metadata properties.

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

const string StorageAccountName = "";
const string AccountKey = "";
const string ContainerName = "";

string ConnectionString = $"DefaultEndpointsProtocol=https;AccountName={StorageAccountName};AccountKey={AccountKey};EndpointSuffix=core.windows.net";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(ContainerName);

const string FileName = "Invoice_2017_01_01";
using (var fileStream = System.IO.File.OpenRead($@"D:\dev\BlobMetadataSample\invoices\{FileName}.pdf"))
{
    // Derive the metadata values from the file name: Invoice_<year>_<month>_<day>
    var fileNameParts = FileName.Split('_');
    var year = fileNameParts[1];
    var month = fileNameParts[2];
    var day = fileNameParts[3];

    // Metadata has to be set before the upload call
    var blob = container.GetBlockBlobReference(FileName);
    blob.Metadata.Add("year", year);
    blob.Metadata.Add("month", month);
    blob.Metadata.Add("day", day);
    blob.UploadFromStream(fileStream);

    // Read the metadata back to verify it was stored with the blob
    var yearFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "year").Value;
    var monthFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "month").Value;
    var dayFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "day").Value;

    Console.WriteLine($"{blob.Name} ({yearFromBlob}-{monthFromBlob}-{dayFromBlob})");
}

If you just want to add metadata to an existing blob, instead of calling blob.UploadFromStream(fileStream) you can call blob.SetMetadata().
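
For example, a minimal sketch of updating metadata in place, reusing the container reference from the snippet above (the blob name is just an example):

// Update metadata on an already-uploaded blob without re-uploading its content.
// FetchAttributes() loads the current metadata so we don't wipe other keys.
var existingBlob = container.GetBlockBlobReference("Invoice_2017_01_01");
existingBlob.FetchAttributes();
existingBlob.Metadata["year"] = "2017";
existingBlob.SetMetadata(); // persists the metadata change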

When you create a new index for blob storage in Azure Search, it will automatically detect these fields. If you already have an Azure Search index created, you can add new fields (the field name has to be the same as the metadata key), and all changes will be synchronized with the next re-indexing.
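
As a rough sketch, adding such fields with the Azure Search .NET SDK (Microsoft.Azure.Search) could look like the snippet below; the service name, admin key, and index name are placeholders:

using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Placeholders: use your own search service name, admin key, and index name
var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-key>"));

// Add fields whose names match the blob metadata keys exactly
var index = serviceClient.Indexes.Get("invoices");
index.Fields.Add(new Field("year", DataType.String) { IsFilterable = true });
index.Fields.Add(new Field("month", DataType.String) { IsFilterable = true });
index.Fields.Add(new Field("day", DataType.String) { IsFilterable = true });

// Update the index definition; values flow in with the next re-indexing
serviceClient.Indexes.CreateOrUpdate(index);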


I am joining Cloud AI team to work on Azure Search

Azure Search

It has been over 3 years since I joined the Azure Portal team. During that time I learned a lot about every aspect of web and mobile development. I delivered over 20 technical talks at conferences around the world and at local meetups. It was amazing to take the new Portal from preview to v1. In the meantime, during the //oneweek hackathon, together with a few other folks, I built a prototype of the Azure Mobile App. After getting feedback from Scott Guthrie, who said that “it would be super useful”, I started working on the app after hours.

I didn’t know much about mobile development at the time, but I wanted to learn. I didn’t know much about the complexities of Active Directory authentication and the Azure Resource Manager APIs. I just knew that it would be super cool to have an app that would allow me to check the status of my Azure resources while waiting for my lunch. Receiving a push notification, or being able to scale a VM from my phone, would also be tremendously valuable.

When I started working on the app full time, my dream came true. I could truly connect my passion with my work. I enjoyed the long hours and the late nights we all put in to make it happen. The day Scott Hanselman presented the Azure App at the //build conference was one of the best days of my life.

Now that the Azure App is released and backed by a great team, I can move on to the next challenge.

Machine learning is becoming part of every aspect of our lives. Over the last few years, ML crossed the threshold necessary to be extremely useful. I always wanted to be part of it. I took the great Coursera class by Andrew Ng, I started an overnight project, StockEstimator, and I got involved in SeeingAI to learn what real-world machine learning looks like.

Now, I’m taking it to the next level. I am joining the Azure Search team to lead their user experience. I will be responsible for bringing the product to customers. While using my existing web development knowledge, I will have an amazing opportunity to learn more about Big Data, AI, and ML.

Azure Search is a managed cloud search service that offers scalable full-text search over multiple languages, geo-spatial search, filtering and faceted navigation, type-ahead queries, hit highlighting, and custom analyzers. You can find more details in this talk by Pablo Castro (Azure Search manager and creator of the Open Data Protocol).

The cool thing about working for Microsoft is that you may end up working with one of the people who created the HTTP protocol. Henrik Frystyk Nielsen, a former student of Tim Berners-Lee who shared an office with Håkon Wium Lie (the creator of CSS), joined my new team this month. What’s even cooler, he is sitting next to me 🙂

In my new office with Henrik:

Henrik Frystyk Nielsen and Jacob Jedryszek

If you want to learn more about all the cool stuff we are doing in the Cloud AI group, there is an awesome .NET Rocks podcast with Joseph Sirosh. Check it out!

There is also an awesome talk by Joseph from the last Connect(); conference, which includes the JFK files demo presented by Corom Thompson from my team (creator of How-Old.NET). In that demo, Corom showcases how you can use Azure Search and Cognitive Services to explore the JFK files. Super cool! You can see the demo in the video below, and the code on GitHub.

There has never been a better time to work at the intersection of Cloud and Artificial Intelligence!


Adding biometrics authentication to Xamarin.iOS (Touch ID / Face ID) and Xamarin.Android (Fingerprint)

One of the top requests from Azure App users was to add Touch ID support for additional security. In this post I will share the details of implementing biometric authentication for iOS and Android with Xamarin.

There are three aspects of biometric auth:
1. Enabling the user to turn biometric authentication on and off. Users shouldn’t be forced to use this additional security feature.
2. Detecting when the user should be asked for biometric authentication, e.g., when the app is coming back from the background or when the app is starting.
3. The authentication process itself. This includes detecting hardware capabilities (is Touch or Face ID available?) and local setup (has the user configured local authentication in system settings?).

Biometric authentication can usually be turned on and off in settings (like in Outlook or OneDrive). We did the same in the Azure App:

Require Touch ID Settings
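
Persisting that toggle can be as simple as a flag backed by NSUserDefaults. Here is a minimal sketch; the AppSettings.IsLocalAuthEnabled name comes from the snippets later in this post, but this particular implementation is an assumption, not necessarily what the Azure App does:

using Foundation;

public static class AppSettings
{
    private const string LocalAuthKey = "IsLocalAuthEnabled";

    // Backed by NSUserDefaults so the setting survives app restarts
    public static bool IsLocalAuthEnabled
    {
        get { return NSUserDefaults.StandardUserDefaults.BoolForKey(LocalAuthKey); }
        set { NSUserDefaults.StandardUserDefaults.SetBool(value, LocalAuthKey); }
    }
}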

iOS

Detecting when the user is switching back to our app on iOS is pretty simple. Every time the user switches back from the background, the WillEnterForeground method in AppDelegate is called. We just need to override it with our custom implementation:

public override void WillEnterForeground(UIApplication application)
{
    // biometrics authentication logic here
}

You should also authenticate the user when the app is being launched. In that case, authentication should be performed in your initial view controller.
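
For example, a minimal sketch (InitialViewController is a hypothetical name; it uses the LocalAuthHelper defined below):

using UIKit;

public class InitialViewController : UIViewController
{
    public override void ViewDidAppear(bool animated)
    {
        base.ViewDidAppear(animated);

        if (!AppSettings.IsLocalAuthEnabled || !LocalAuthHelper.IsLocalAuthAvailable)
        {
            return;
        }

        LocalAuthHelper.Authenticate(
            null, // nothing to do on success, just let the user in
            () => InvokeOnMainThread(() =>
                ShowViewController(new LocalAuthViewController(), null)));
    }
}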

On iOS we have two kinds of biometric authentication:
1. Touch ID
2. Face ID (available from the iPhone X)

We can also fall back to the passcode if Touch/Face ID is not configured or the user’s device does not support it.

The iOS Local Authentication API is pretty straightforward and well documented. I created a simple helper to handle feature detection and authentication:

using System;
using Foundation;
using LocalAuthentication;
using UIKit;

public static class LocalAuthHelper
{
    private enum LocalAuthType
    {
        None,
        Passcode,
        TouchId,
        FaceId
    }

    public static string GetLocalAuthLabelText()
    {
        var localAuthType = GetLocalAuthType();

        switch (localAuthType)
        {
            case LocalAuthType.Passcode:
                return Strings.RequirePasscode;
            case LocalAuthType.TouchId:
                return Strings.RequireTouchID;
            case LocalAuthType.FaceId:
                return Strings.RequireFaceID;
            default:
                return string.Empty;
        }
    }

    public static string GetLocalAuthIcon()
    {
        var localAuthType = GetLocalAuthType();

        switch (localAuthType)
        {
            case LocalAuthType.Passcode:
                return SvgLibrary.LockIcon;
            case LocalAuthType.TouchId:
                return SvgLibrary.TouchIdIcon;
            case LocalAuthType.FaceId:
                return SvgLibrary.FaceIdIcon;
            default:
                return string.Empty;
        }
    }

    public static string GetLocalAuthUnlockText()
    {
        var localAuthType = GetLocalAuthType();

        switch (localAuthType)
        {
            case LocalAuthType.Passcode:
                return Strings.UnlockWithPasscode;
            case LocalAuthType.TouchId:
                return Strings.UnlockWithTouchID;
            case LocalAuthType.FaceId:
                return Strings.UnlockWithFaceID;
            default:
                return string.Empty;
        }
    }

    public static bool IsLocalAuthAvailable => GetLocalAuthType() != LocalAuthType.None;

    public static void Authenticate(Action onSuccess, Action onFailure)
    {
        var context = new LAContext();
        NSError authError;

        // Prefer the biometrics-only policy, fall back to the passcode-backed one
        if (context.CanEvaluatePolicy(LAPolicy.DeviceOwnerAuthenticationWithBiometrics, out authError)
            || context.CanEvaluatePolicy(LAPolicy.DeviceOwnerAuthentication, out authError))
        {
            var replyHandler = new LAContextReplyHandler((success, error) =>
            {
                if (success)
                {
                    onSuccess?.Invoke();
                }
                else
                {
                    onFailure?.Invoke();
                }
            });

            context.EvaluatePolicy(LAPolicy.DeviceOwnerAuthentication, Strings.PleaseAuthenticateToProceed, replyHandler);
        }
    }

    private static LocalAuthType GetLocalAuthType()
    {
        var localAuthContext = new LAContext();
        NSError authError;

        if (localAuthContext.CanEvaluatePolicy(LAPolicy.DeviceOwnerAuthentication, out authError))
        {
            if (localAuthContext.CanEvaluatePolicy(LAPolicy.DeviceOwnerAuthenticationWithBiometrics, out authError))
            {
                if (GetOsMajorVersion() >= 11 && localAuthContext.BiometryType == LABiometryType.TypeFaceId)
                {
                    return LocalAuthType.FaceId;
                }

                return LocalAuthType.TouchId;
            }

            return LocalAuthType.Passcode;
        }

        return LocalAuthType.None;
    }

    private static int GetOsMajorVersion()
    {
        return int.Parse(UIDevice.CurrentDevice.SystemVersion.Split('.')[0]);
    }
}

There are helper methods that determine the proper label (GetLocalAuthLabelText), icon (GetLocalAuthIcon), and authentication text (GetLocalAuthUnlockText) depending on the available authentication type. There is also a one-liner, IsLocalAuthAvailable, that checks whether local authentication (Face/Touch ID or passcode) is available, and an Authenticate method that performs the authentication, taking success and failure callbacks as parameters. It can be used in the WillEnterForeground method as follows:

public override void WillEnterForeground(UIApplication application)
{
    if (!AppSettings.IsLocalAuthEnabled)
    {
        return;
    }

    LocalAuthHelper.Authenticate(null, // do not do anything on success
    () =>
    {
        // show View Controller that requires authentication
        InvokeOnMainThread(() =>
        {
            var localAuthViewController = new LocalAuthViewController();
            Window.RootViewController.ShowViewController(localAuthViewController, null);
        });
    });
}

We do not have to do anything on success: the popup shown by iOS will disappear and the user will be able to use the app. On failed authentication, though, we should display some kind of shield (e.g., a view controller) that prevents the user from using the app until authorization succeeds. This is how it looks in the Azure App:

Azure App - Unlock with Touch ID
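
A minimal sketch of such a shield is below; the real LocalAuthViewController in the Azure App is more polished, but the idea is the same: block the UI and let the user retry:

using UIKit;

public class LocalAuthViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        View.BackgroundColor = UIColor.White;

        // A single full-screen button labeled e.g. "Unlock with Touch ID"
        var unlockButton = UIButton.FromType(UIButtonType.System);
        unlockButton.SetTitle(LocalAuthHelper.GetLocalAuthUnlockText(), UIControlState.Normal);
        unlockButton.Frame = View.Bounds;
        unlockButton.TouchUpInside += (sender, e) =>
            LocalAuthHelper.Authenticate(
                () => InvokeOnMainThread(() => DismissViewController(true, null)),
                null); // stay on the shield and let the user retry
        View.AddSubview(unlockButton);
    }
}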

Android

Detecting when the app is coming back from the background on Android is tricky. There is no single method that is invoked only when the app returns from the background. The OnResume method is called when the app comes back from the background, but it is also called when you switch from one activity to another. The solution is to keep a timestamp of the last successful authentication and update it to DateTime.Now every time an activity calls OnPause. OnPause fires when the app is going to the background, but also when the app is changing between activities, so we cannot simply set a Background=true flag there. However, when the difference between subsequent OnPause and OnResume calls is larger than some period of time (e.g., more than a few seconds), we can assume that the app went to the background. The code below should be implemented in a BaseActivity class that all activities inherit from:

using System;
using Android.App;
using Android.Support.V4.Hardware.Fingerprint;

public abstract class BaseActivity : Activity
{
    public const int FingerprintAuthTimeoutSeconds = 5;
    public static DateTime LastSuccessfulFingerprintAuth = DateTime.MinValue;

    protected override void OnResume()
    {
        base.OnResume();

        // If the last pause was more than a few seconds ago, the app was
        // in the background long enough to require re-authentication
        if (IsFingerprintAvailable() && LastSuccessfulFingerprintAuth < DateTime.Now.AddSeconds(-FingerprintAuthTimeoutSeconds))
        {
            StartActivity(typeof(FingerprintAuthActivity));
        }
    }

    protected override void OnPause()
    {
        base.OnPause();

        // Refreshed on every pause: quick activity switches keep the timestamp
        // recent, so only a real trip to the background exceeds the timeout
        if (IsFingerprintAvailable())
        {
            LastSuccessfulFingerprintAuth = DateTime.Now;
        }
    }

    // One possible implementation: hardware must be present and fingerprints enrolled
    protected bool IsFingerprintAvailable()
    {
        var fingerprintManager = FingerprintManagerCompat.From(this);
        return fingerprintManager.IsHardwareDetected && fingerprintManager.HasEnrolledFingerprints;
    }
}

The basics of fingerprint authentication are very well described in the Xamarin docs.

An even better reference is the FingerprintGuide sample app from Xamarin.

The main disadvantage of adding fingerprint authentication on Android (compared to Face/Touch ID on iOS) is the requirement to build your own UI and logic for the authentication popup. This includes adding the icon and handling all authentication results. iOS handles an incorrect scan and displays the popup again, with a passcode fallback after too many unsuccessful tries. On Android, you have to implement this entire logic yourself, as sketched below.
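
To give a flavor of that, here is a minimal sketch of the FingerprintAuthActivity referenced earlier, using the Android.Support.V4 fingerprint compatibility APIs. The layout and error handling are intentionally omitted; treat it as a starting point under those assumptions, not the Azure App's actual implementation:

using System;
using Android.App;
using Android.Support.V4.Hardware.Fingerprint;
using Android.Support.V4.OS;

[Activity]
public class FingerprintAuthActivity : Activity
{
    private CancellationSignal cancellationSignal;

    protected override void OnResume()
    {
        base.OnResume();

        // Start listening for a fingerprint; Android shows no UI for this,
        // so this activity has to render its own icon and status messages
        cancellationSignal = new CancellationSignal();
        FingerprintManagerCompat.From(this).Authenticate(
            null,                    // CryptoObject - omitted in this sketch
            0,                       // flags
            cancellationSignal,
            new AuthCallback(this),
            null);                   // handler
    }

    protected override void OnPause()
    {
        base.OnPause();
        cancellationSignal.Cancel(); // stop listening when we leave the screen
    }

    private class AuthCallback : FingerprintManagerCompat.AuthenticationCallback
    {
        private readonly Activity activity;

        public AuthCallback(Activity activity)
        {
            this.activity = activity;
        }

        public override void OnAuthenticationSucceeded(FingerprintManagerCompat.AuthenticationResult result)
        {
            BaseActivity.LastSuccessfulFingerprintAuth = DateTime.Now;
            activity.Finish(); // let the user back into the app
        }

        public override void OnAuthenticationFailed()
        {
            // Scan did not match: update your UI and let the user retry
        }
    }
}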

Summary

Adding biometric authentication is useful for apps that hold sensitive data, like banking apps, file managers (Dropbox, OneDrive), or an app that has access to your Azure resources 🙂

Implementing local authentication on iOS is pretty straightforward, and the iOS APIs provide the authentication UI for free. On Android, however, the APIs only handle the backend, and the UI has to be implemented by you.

Local authentication should always be optional. Some users may not need or want it. Thus, it should be configurable in the app settings.

Try out biometric auth in the Azure App!

Download on the App Store
Get it on Google Play