
QCon conferences – real experts exchanging experience

QCon Beijing - speakers

Earlier this year I attended the QCon Beijing and QCon Sao Paulo conferences. I really like QCon because there is no marketing, just real experts exchanging experience. After my first QCon (Shanghai in 2016), I was very excited to come back this year!

QCon Beijing

Cognitive Search

I delivered a talk about Building Web Apps with Cloud and AI. I showed how to build intelligent web apps with Azure Search and Cognitive Services (Azure Machine Learning APIs). We call this approach Cognitive Search. In my demo I showcased how you can determine which cryptocurrencies to buy using sentiment analysis on tweets. I streamed tweets to an Event Hub, which triggers an Azure Function that calls the Text Analytics API to calculate the sentiment of each tweet. I store the tweets and their sentiments in a SQL Database. I also created an Azure Search index to be able to search through tweets effectively. The index is synchronized with the SQL database through integrated change tracking and an Azure Search indexer that runs on a schedule.

Cognitive Search architecture
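To make the pipeline concrete, below is a minimal sketch of the Event Hub-triggered Azure Function. This is illustrative, not the exact demo code: the Text Analytics v2 sentiment endpoint shape is real, but the key, region, and logging are placeholders, and the SQL write is only hinted at in comments.

// Sketch: Event Hub-triggered Azure Function scoring tweet sentiment.
// TEXT_ANALYTICS_KEY and the 'westus' region are placeholders.
const request = require('request');

module.exports = function (context, eventHubMessages) {
    const documents = eventHubMessages.map((text, i) => ({
        id: String(i), language: 'en', text: text
    }));

    request.post({
        url: 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment',
        headers: { 'Ocp-Apim-Subscription-Key': process.env.TEXT_ANALYTICS_KEY },
        json: { documents: documents }
    }, (err, response, body) => {
        if (err) { context.done(err); return; }
        // body.documents[i].score is 0..1 (0 = negative, 1 = positive).
        // In the demo, each tweet and its score are then written to SQL Database,
        // which the Azure Search indexer picks up via integrated change tracking.
        body.documents.forEach(doc => context.log(`tweet ${doc.id}: ${doc.score}`));
        context.done();
    });
};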

I built the UI using AzSearch.js (a UI generation tool for Azure Search indexes), ASP.NET Core, and TypeScript (BTW: there is a lot of cool stuff in TypeScript these days!).

Crypto Search UI

In addition to the search interface, I also created an aggregation chart comparing sentiment across different cryptocurrencies:

crypto charts

Source code is on GitHub: crypto-search.

Video from my talk:

My talk was very well received. Attendees assessed talks with green (great talk), yellow (OK talk), and red (bad talk) cards. I got 117 green, 9 yellow, and 0 red.

Conference

I had a great opportunity to meet a lot of engineers and architects from leading tech companies around the world.

Mads Torgersen (the architect of the C# language) shared future plans for C#. It was surprising to me how few people attended his talk. Sisie Xia and Chris Coleman from LinkedIn delivered a great talk about facing the challenge of growth and how they tackle it using their in-house tool Redliner. Julius Volz shared insights about the Prometheus monitoring system.

The majority of talks were in Chinese. I went to Peng Xing's talk about how they develop Progressive Web Apps at Baidu. I was able to gather about 60% of the content, but got the essence by talking to him directly. I spent most of my time talking to people in the hallways. It was very eye-opening to meet engineers from the Chinese cloud giants. The Chinese cloud market is dominated by Alibaba Cloud (AliCloud), Baidu, and Tencent. Azure and AWS have a small market share, and Google does not exist in China at all. Alibaba (the largest online retailer in China) even has ambitions to overtake AWS in the near future. It is worth noting that Alibaba is the Chinese equivalent of Amazon, while people consider Baidu to be the Chinese Google and Tencent (which owns WeChat) to be like Facebook. I had an opportunity to chat with Lu from Alibaba Cloud and Wang Yao (Head of IaaS at Baidu). After talking to them and other engineers, I would describe both companies' stack in 3 words: MacBook, Java, and Go. This might be an ignorant generalization, but almost every engineer from the big 3 (AliCloud, Baidu, Tencent) that I talked to was either writing code in Java or Go (using a Mac, of course). I also learned about Alibaba's search-as-a-service offering: OpenSearch. Something to keep an eye on when they expand to the US and European markets.

The most popular track was, of course, Blockchain. The room was overflowing for the entire day:

QCon Beijing - blockchain track

China Tech

Every time I visit China I am impressed by their progress. Highways superior to US interstates, fast trains between all major cities, and now mobile payment adoption everywhere. Today in China, most people use WeChat or AliPay. Sometimes a cashier can get mad at you if you want to pay with cash or a credit card, because you cause inconvenience. Scanning a QR code is 10x faster! You can even tip a waiter with your mobile phone!

Tipping in China

Last year in Seattle we had three bike-sharing companies, and everybody here thinks that we are at the edge of innovation. By the end of 2017, Beijing had 60 bike-sharing companies, many of which started in 2016. During my visit I learned that China has a government dominated by engineers. Maybe this explains their progress?

If you are going to China, it is useful to have these 3 apps:
1. WeChat – the Chinese Facebook, for communicating with Chinese contacts and exchanging contact details
2. DiDi – the Chinese Uber
3. AliPay – for mobile payments (currently WeChat payments require a Chinese ID and an account at a Chinese bank)

It was also great to meet the China division of the Cognitive Services team!

Cognitive Services - China team

The trip to China inspired me to read a book comparing Chinese and Western culture in the world of innovation and progress:

From the Great Wall to Wall Street

There are a lot of things these two worlds can learn from each other.

QCon Sao Paulo

Cognitive Search

Two days before my talk in Brazil, we officially announced Cognitive Search built into Azure Search. You do not have to create a Cognitive Services resource anymore. You do not have to write code that orchestrates processing your data and calling the API. We do it for you. All you have to do is check a checkbox. More details here.

Cognitive Search

I extended my demo of crypto analysis based on tweets by adding sentiment analysis of news articles. A while ago we put news into an Azure Search index. We filtered the news related to cryptocurrencies and put them in Azure Blob storage. We also improved our JFK Files demo, and now you can deploy it yourself by following the instructions in this GitHub repo.

Usually during my talk I ask if somebody has ever deployed Elasticsearch and how long it took. In China one guy said it took ~2 weeks. In Brazil there was one guy who said: 6 months(!) 🙂 That's why you don't want this to be your problem. Azure Search takes care of deployment, availability, and upgrades for you.

My talk was pretty well received in Brazil as well: I got 148 green, 15 yellow, and 0 red cards.

Conference

QCon Sao Paulo had a very diverse mix of experts from all around the world. Starting with Aaron Stannard (co-founder of Akka.NET), Nicholas Matsakis (from the Rust core team), and Rodrigo Kumpera (architect and top contributor of the Mono project), through Michelle Casbon (now an engineer in Google Cloud Platform focused on machine learning and big data tools), Ben Lesh (RxJS lead at Google), and Martin Spier (performance engineer at Netflix), to Soups Ranjan (Director of Data Science at Coinbase*), Amanda Casari (Data Scientist at SAP Concur), and Piper Niehaus (Elm language enthusiast).

The most interesting thing I learned at QCon Sao Paulo was how different companies struggle with monitoring, telemetry, and system malfunction detection. They all have very sophisticated automation, but it is still not enough given the complexity of today's systems. As the systems we build grow more and more complex in architecture, we need to build even better monitoring software to maintain them.

In Azure Search we use the OData standard. However, GraphQL has recently been gaining popularity. Dan McGhan shared this article comparing GraphQL and OData: REST API Industry Debate: OData vs GraphQL vs ORDS. An interesting read!

Brazil

May is almost winter in Brazil, but it’s also the best time to visit. Not too hot, not too cold. Perfect weather for enjoying your time!

I like that every bar and restaurant in Brazil has a TV on the soccer channel 🙂 As a long-standing fan of the Brazil national football team (since Ronaldo Luiz Nazario de Lima times), I enjoyed it a lot!

Sao Paulo, the largest city in Brazil (21 million people) and the financial center of the country, is also the largest tech hub in South America. Most tech companies have a presence there (Microsoft, Google, Amazon).

If you ever go to Brazil, remember to visit Sugarloaf to watch the sunset, and the view after sunset 🙂

View from Sugarloaf after sunset

Summary

Speaking at QCons and connecting with engineers from different backgrounds is a very valuable experience. Being able to learn about other cultures is a plus as well. Sharing the work that you do every day with others can also give you a different perspective and help you notice things that you would never think about.

If you want to learn more about Azure Search, check out our getting started docs. To create intelligent search pipelines, check out my Cognitive Search blog post. For more details we have a quickstart and a more comprehensive API overview.

Questions? Find me on Twitter!

*Soups did not tell me what the next coin coming to Coinbase is 🙁


Cognitive Search – Azure Search with AI

Cognitive Search

Today, at the Microsoft //build conference, we announced Cognitive Search. You may wonder what Cognitive Search is. To put it as simply as possible: it's Azure Search powered by Cognitive Services (Azure Machine Learning APIs). Remember when you wanted to run some intelligence over your data with Cognitive Services? You had to handle creating, e.g., a Text Analytics API resource, then write code that would take your data from the database, issue requests to the API (remembering to use the proper key!), serialize and deserialize the data, and put the results back in your database.
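For contrast, here is a sketch of what that plumbing typically looked like. All names below are hypothetical, and the Text Analytics v2 key phrases endpoint is used as an example:

// The "before" picture: fetch rows, call the API, write results back.
const request = require('request');

function enrichRows(rows, callback) {
    const documents = rows.map(row => ({
        id: String(row.id), language: 'en', text: row.description
    }));

    request.post({
        url: 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases',
        headers: { 'Ocp-Apim-Subscription-Key': process.env.TEXT_ANALYTICS_KEY },
        json: { documents: documents } // request handles (de)serialization
    }, (err, response, body) => {
        if (err) { callback(err); return; }
        // ...followed by another round trip to persist body.documents[i].keyPhrases
        // back to the database, plus your own batching and retry logic.
        callback(null, body.documents);
    });
}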

Now, with Cognitive Search, you can achieve that by checking one checkbox. You just need to pick the field on which you want to run analytics, and choose which cognitive services or skills (one cognitive service usually contains multiple skills) to run. For now, we support 6 skills:

  1. Key phrases
  2. People
  3. Places
  4. Organizations
  5. Language
  6. OCR (Optical Character Recognition)

We output results directly to your search index.
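Under the hood, the portal builds a skillset for you. For the curious, here is a rough sketch of what a skillset payload looks like in the REST API, showing just the key phrases and language skills (the exact schema may differ while the feature is in preview):

{
  "name": "demo-skillset",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
      "inputs": [ { "name": "text", "source": "/document/description" } ],
      "outputs": [ { "name": "keyPhrases", "targetName": "keyphrases" } ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
      "inputs": [ { "name": "text", "source": "/document/description" } ],
      "outputs": [ { "name": "languageCode", "targetName": "language" } ]
    }
  ]
}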

Creating Intelligent Search Index

To take advantage of Cognitive Search, you need to create an Azure Search service in South Central US or West Europe. More regions coming soon!

To create a search index powered by Cognitive Services, you need to use the 'Import data' flow. Go to your Azure Search service and click the 'Import data' command:

Cognitive Search - step 1

Then pick your data source (SQL Server, Cosmos DB, Blob storage, etc.). I will choose the sample data source that contains real estate data:

Cognitive Search - import data

Now you need to pick the field on which you want to run analytics. I will choose description. You also need to choose which cognitive services (skills) you want to run, and provide output field names (fields to which we will output the cognitive services analysis results):

Cognitive Search - skillset definition

In the next step you need to configure your index. Usually you want to make fields retrievable, searchable, and filterable. You may also consider making them facetable if you want to aggregate results. This is my sample configuration:

Cognitive search - define index
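For reference, this is roughly what a single field looks like in the index definition JSON with those attributes set (an illustrative sketch, not my exact index):

{
  "name": "keyphrases",
  "type": "Collection(Edm.String)",
  "searchable": true,
  "filterable": true,
  "retrievable": true,
  "facetable": true
}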

In the last step you just need to configure the indexer – a tool that synchronizes your data source with your search index. In my case I will choose to run the synchronization only once, as my sample data source will never change.

Cognitive Search - create indexer
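The indexer is also just a small JSON object in the REST API. Below is a sketch with illustrative names; in my run-once case I would simply omit the schedule, while the hourly schedule shown is what you would use for a changing data source:

{
  "name": "realestate-indexer",
  "dataSourceName": "realestate-datasource",
  "targetIndexName": "realestate-index",
  "schedule": { "interval": "PT1H" }
}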

After the indexer finishes, you can browse your data and the Cognitive Services results in the search explorer.

Cognitive Search - browse

You can also generate a more usable search UI for your data with AzSearch.js.

Generating UI to search data with AzSearch.js

If you don’t like browsing your data with search explorer in Azure Portal that returns raw JSON, you can use AzSearch.js to quickly generate UI over your data.

The easiest way to get started is to use the AzSearch.js generator. Before you start, enable CORS on your index:

Cognitive search - CORS

Once you have your query key and index definition JSON, paste them into the generator together with your search service name and click 'Generate'. An HTML page with a simple search interface will be created.

Cognitive Search - AzSearch.js
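The generated page is a thin wrapper around AzSearch.js. The wiring looks roughly like this, assuming the Automagic API described in the AzSearch.js README (addSearchBox is my recollection of the helper name; the placeholders are yours to fill in):

// Rough shape of the code the generator emits.
var automagic = new AzSearch.Automagic({
    index: "realestate-us-sample",   // your index name
    queryKey: "<your-query-key>",    // query key, not admin key
    service: "<your-search-service>" // search service name
});
automagic.addSearchBox("searchBox");              // binds to an element with id="searchBox"
automagic.addResults("results", { count: true }); // renders raw results into id="results"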

This site is super easy to customize. Providing an HTML template for results turns the raw JSON into nicely formatted search results:

Cognitive search - AzSearch.js pretty

All I did was create an HTML template:

    const resultTemplate =
        `<div class="col-xs-12 col-sm-5 col-md-3 result_img">
            <img class="img-responsive result_img" src={{thumbnail}} alt="image not found" />
        </div>
        <div class="col-xs-12 col-sm-7 col-md-9">
            <h4>{{displayText}}</h4>
            <div class="resultDescription">
                {{{summary}}}
            </div>
            <div>
                sqft: <b>{{sqft}}</b>
            </div>
            <div>
                beds: <b>{{beds}}</b>
            </div>
            <div>
                baths: <b>{{baths}}</b>
            </div>
            <div>
                key phrases: <b>{{keyPhrases}}</b>
            </div>
        </div>`;

And added it to the already present addResults function call:

automagic.addResults("results", { count: true }, resultTemplate);

I also created a resultsProcessor to do some custom transformations: join a few fields into one, truncate the description to 200 characters, and convert key phrases from an array into a comma-separated string:

var resultsProcessor = function (results) {
    return results.map(function (result) {
        result.displayText = result.number + " " + result.street + " " + result.city + ", " + result.region + " " + result.countryCode;
        var summary = result.description;
        result.summary = summary.length < 200 ? summary : summary.substring(0, 200) + "...";
        result.keyPhrases = result.keyphrases.join(", ");
        return result;
    });
};
automagic.store.setResultsProcessor(resultsProcessor);

You can do similar customization with suggestions. You can also add highlights to your results, and much more. Everything is described in the AzSearch.js README. We also have a starter app written in TypeScript and React, based on the sample real estate data, which takes advantage of AzSearch.js's more advanced features. If you have any questions or suggestions regarding AzSearch.js, let me know on Twitter!

Summary

Cognitive Search takes analyzing data with Azure Search to the next level. It takes away the burden of writing your own infrastructure for running AI-based analysis. For more advanced analysis, including OCR on your images, check out our docs. I am super excited to see it in action, and for the next improvements that we are working on. Let us know what you think!

*This blog post was written on a Boeing 787 during my flight from Toronto to São Paulo, on my way to the QCon conference.


Get a Computer Science Crash Course with the Imposter's Handbook

THE IMPOSTER'S HANDBOOK

I just finished reading Rob Conery's book Imposter's Handbook. It's a very good high-level overview of Computer Science concepts that you may not encounter in your everyday job. It is also good guidance for "what I should know".

If you do not have a CS degree, I recommend you check out this book. You can skip chapters about concepts that you are familiar with. If something is new to you, this book will provide a nice introduction to the topic, which you can later dive into on your own.

If you do have a CS degree, I still recommend at least checking out what's there. I'm sure you will learn something, or at least refresh your knowledge.

Check out the Hacker News discussion!

Do you have a CS degree, or are you a self-taught programmer?


Properly measuring HTTP request time with node.js

When your backend code calls external APIs, you may want to measure individual request times to identify bottlenecks.

The most straightforward, but incorrect, way to measure how long a request takes is to use the JavaScript Date object:

var request = require('request');

let start_time = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed:', new Date().getTime() - start_time);
});

However, this won’t give you the actual time that request takes. Above request call is async, and you start measuring time at the time when request was queued, not actually sent.

To measure the time elapsed from when the request was actually sent, you can use the time parameter:

var request = require('request');

request.get({ url: 'http://www.google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

You can also compare results returned by both methods:

var request = require('request');

let start_time = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed since queuing the request:', new Date().getTime() - start_time);
});

request.get({ url: 'http://www.google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

When I ran it, I got the following results:

The actual time elapsed: 72
Time elapsed since queuing the request: 156

Notice that the first callback resolves after the second one(!)

The difference is almost 2x. Depending on your server-side code, the difference might be even larger and give you misleading hints while profiling your application.
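As a side note: when time: true is set, newer versions of request (2.81+, if I remember correctly) also attach a detailed timing breakdown, which is handy for separating DNS and TCP overhead from server processing time:

var request = require('request');

request.get({ url: 'http://www.google.com', time: true }, function (err, response) {
    // timingPhases splits elapsedTime into consecutive phases (all in ms):
    console.log('wait:', response.timingPhases.wait);           // socket assignment
    console.log('dns:', response.timingPhases.dns);             // DNS lookup
    console.log('tcp:', response.timingPhases.tcp);             // TCP connect
    console.log('firstByte:', response.timingPhases.firstByte); // server processing
    console.log('download:', response.timingPhases.download);   // response download
    console.log('total:', response.timingPhases.total);         // equals elapsedTime
});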


Add custom metadata to Azure blob storage files and search them with Azure Search

Did you know that you can add custom metadata to your blob containers, and even to individual blob files?

You can do it in the Azure Portal, using the SDK, or via the REST API.

The most common scenario is adding metadata during file upload. The code below uploads a sample invoice from disk and adds year, month, and day metadata properties.

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

const string StorageAccountName = "";
const string AccountKey = "";
const string ContainerName = "";

string ConnectionString = $"DefaultEndpointsProtocol=https;AccountName={StorageAccountName};AccountKey={AccountKey};EndpointSuffix=core.windows.net";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(ContainerName);

const string FileName = "Invoice_2017_01_01";
using (var fileStream = System.IO.File.OpenRead($@"D:\dev\BlobMetadataSample\invoices\{FileName}.pdf"))
{
    var fileNameParts = FileName.Split('_');
    var year = fileNameParts[1];
    var month = fileNameParts[2];
    var day = fileNameParts[3];

    var blob = container.GetBlockBlobReference(FileName);
    blob.Metadata.Add("year", year);
    blob.Metadata.Add("month", month);
    blob.Metadata.Add("day", day);
    blob.UploadFromStream(fileStream);

    var yearFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "year").Value;
    var monthFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "month").Value;
    var dayFromBlob = blob.Metadata.FirstOrDefault(x => x.Key == "day").Value;

    Console.WriteLine($"{blob.Name} ({yearFromBlob}-{monthFromBlob}-{dayFromBlob})");
}

If you just want to add metadata to an existing blob, call blob.SetMetadata() instead of blob.UploadFromStream(fileStream).

When you create a new index for your blobs in Azure Search, we will automatically detect these fields. If you already have an Azure Search index, you can add new fields (the field name has to match the metadata key), and all changes will be synchronized on the next re-indexing.
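For illustration, the new index fields matching the metadata keys from the code above could look like this (a sketch; the attribute choices are just one reasonable configuration):

{
  "fields": [
    { "name": "year",  "type": "Edm.String", "filterable": true, "facetable": true },
    { "name": "month", "type": "Edm.String", "filterable": true, "facetable": true },
    { "name": "day",   "type": "Edm.String", "filterable": true, "facetable": true }
  ]
}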