Wroc# – developer conference worth attending

Wroc# - audience

Earlier this year I had the pleasure of speaking at the Wroc# conference in Poland. It was a very well organized event that was almost free for attendees. The only cost was a required donation to charity: PLN 150 (~$50).

There was only one track, with an awesome speaker lineup! I finally had an opportunity to meet Roy Osherove in person. I learned about unit testing from his book The Art of Unit Testing. I’m currently reading another of his books, Elastic Leadership – a lot of useful tips, and not only for team leaders! Among the other speakers, Scott Helme (security researcher) uncovered some things about web security I had never heard about, Zan Kavtaskin gave a great overview of building apps on the Azure Cloud, Glenn Henriksen showed how serverless computing works in the real world, and Sander Hoogendoorn together with Kim van Wilgen shared their perspective on over-engineered development processes.

The conference venue was great. I would say it was the best setup I’ve ever seen! There was one big room divided into three parts: the stage, chairs for the audience, and the Mix&Mingle zone (AKA M&M). You could talk in the Mix&Mingle zone and still be able to follow presentations. The speakers’ room was on the upper floor, but it was more like a balcony from which you could listen to talks and overlook the entire venue.

I delivered a talk about building mobile apps with Xamarin. I shared what we learned while building the Azure Mobile App, which started as a hackathon project and later turned into an official Microsoft product. The app was announced on stage at the //build conference last year. Along the way we learned how to properly architect a Xamarin project for multiple platforms, where not to take shortcuts, and the dos and don’ts of CI/CD and testing.

One attendee posted a nice summary of my talk:

Wroc# - building mobile apps with Xamarin

At the end of the conference there was a speakers’ panel where we answered and discussed questions from the audience. We had a good discussion about different aspects of software development, from estimating project costs to writing unit tests. Almost every speaker had a different background, which made it even more interesting!

Wroc# - speakers panel

If you haven’t been to Poland before: Wroclaw is an amazing city, and many of my foreign friends say it’s their favorite city in Poland. Wroclaw is often referred to as WrocLove 😉


Last, but not least: thank you to everyone who made this conference happen!

Wroc# - organizers

Cognitive Search – Azure Search with AI

Cognitive Search

Today at the Microsoft //build conference we announced Cognitive Search. You may wonder what Cognitive Search is. To put it as simply as possible: it’s Azure Search powered by Cognitive Services (Azure Machine Learning APIs). Remember when you wanted to run some intelligence over your data with Cognitive Services? You had to handle creating, e.g., a Text Analytics API resource, then write code that would take your data from the database, issue a request to the API (remembering to use the proper key!), serialize and deserialize the data, and put the result back in your database.
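To make that plumbing concrete, here is a rough sketch of what building just the key-phrases request used to look like. This is illustrative only: the endpoint region, API version, and field names are assumptions, and the key is a placeholder.

```javascript
// Hypothetical sketch of the manual plumbing described above:
// build a Text Analytics "key phrases" request for a batch of database rows.
function buildKeyPhrasesRequest(rows, apiKey) {
    return {
        url: 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases',
        headers: {
            'Ocp-Apim-Subscription-Key': apiKey, // the "proper key" you had to manage
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            documents: rows.map(function (row) {
                return { id: String(row.id), language: 'en', text: row.description };
            })
        })
    };
}

// You would then issue this request, deserialize the response, and write
// the returned key phrases back to the matching rows in your database.
var keyPhrasesReq = buildKeyPhrasesRequest(
    [{ id: 1, description: 'Spacious house with a large garden' }],
    '<your-api-key>'
);
console.log(keyPhrasesReq.url);
```

And that is before error handling, batching limits, and retries.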

Now, with Cognitive Search, you can achieve that by checking one checkbox. You just need to pick a field on which you want to run analytics, and which cognitive services or skills (one cognitive service usually contains multiple skills) to run. As of now we support six skills:

  1. Key phrases
  2. People
  3. Places
  4. Organizations
  5. Language
  6. OCR (Optical Character Recognition)

We output results directly to your search index.
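Behind the scenes, these selections translate into a skillset definition attached to your indexer. A rough sketch of what such a definition looks like (the skill names and source/target fields here are illustrative; check the docs for the exact schema):

```json
{
  "description": "Illustrative skillset over the description field",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
      "inputs": [ { "name": "text", "source": "/document/description" } ],
      "outputs": [ { "name": "keyPhrases", "targetName": "keyphrases" } ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
      "categories": [ "Person", "Location", "Organization" ],
      "inputs": [ { "name": "text", "source": "/document/description" } ],
      "outputs": [ { "name": "organizations", "targetName": "organizations" } ]
    }
  ]
}
```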

Creating an Intelligent Search Index

To take advantage of Cognitive Search you need to create an Azure Search service in South Central US or in West Europe. More regions are coming soon!

To create a search index powered by cognitive services you need to use the ‘Import data’ flow. Go to your Azure Search service and click on the ‘Import data’ command:

Cognitive Search - step 1

Then pick your data source (MSSQL, Cosmos DB, blob storage, etc.). I will choose a sample data source that contains real estate data:

Cognitive Search - import data

Now you need to pick the field on which you want to run analytics. I will choose description. You also need to choose which cognitive services (skills) you want to run, and provide output field names (fields to which we will output the cognitive services analysis results):

Cognitive Search - skillset definition

In the next step you need to configure your index. Usually you want to make fields retrievable, searchable, and filterable. You may also consider making them facetable if you want to aggregate results. This is my sample configuration:

Cognitive search - define index
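For reference, those portal checkboxes map onto attributes in the index definition JSON. A sketch of two fields from a configuration like mine (attribute names follow the Azure Search index schema; the values are just my choices):

```json
{
  "fields": [
    { "name": "description", "type": "Edm.String",
      "retrievable": true, "searchable": true, "filterable": true, "facetable": false },
    { "name": "keyphrases", "type": "Collection(Edm.String)",
      "retrievable": true, "searchable": true, "filterable": true, "facetable": true }
  ]
}
```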

In the last step you just need to configure the indexer – a tool that synchronizes your data source with your search index. In my case I will choose to run synchronization only once, as my sample data source will never change.

Cognitive Search - create indexer

After the indexer finishes, you can browse your data and the cognitive services results in the search explorer.

Cognitive Search - browse
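The search explorer issues the same queries you can send yourself over REST. As a sketch, a query URL can be assembled like this (the service name, index name, and api-version are placeholder values; the request also needs your query key in an `api-key` header):

```javascript
// Illustrative helper that builds an Azure Search query URL.
// Service name, index name, and api-version below are assumptions.
function buildSearchUrl(service, index, query) {
    return 'https://' + service + '.search.windows.net' +
        '/indexes/' + index + '/docs' +
        '?search=' + encodeURIComponent(query) +
        '&api-version=2017-11-11';
}

var searchUrl = buildSearchUrl('my-search-service', 'realestate-sample', 'large garden');
console.log(searchUrl);
```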

You can also generate more usable search UI for your data with AzSearch.js.

Generating a UI to search data with AzSearch.js

If you don’t like browsing your data with the search explorer in the Azure Portal, which returns raw JSON, you can use AzSearch.js to quickly generate a UI over your data.

The easiest way to get started is to use the AzSearch.js generator. Before you start, enable CORS on your index:

Cognitive search - CORS

Once you have your query key and index definition JSON, paste them into the generator together with your search service name and click ‘Generate’. An HTML page with a simple search interface will be created.

Cognitive Search - AzSearch.js

This site is super easy to customize. Providing an HTML template for results changes the raw JSON into nicely formatted search results:

Cognitive search - AzSearch.js pretty

All I did was create an HTML template:

    const resultTemplate =
        `<div class="col-xs-12 col-sm-5 col-md-3 result_img">
            <img class="img-responsive result_img" src={{thumbnail}} alt="image not found" />
        </div>
        <div class="col-xs-12 col-sm-7 col-md-9">
            <div class="resultDescription">
                sqft: <b>{{sqft}}</b>
                beds: <b>{{beds}}</b>
                baths: <b>{{baths}}</b>
                key phrases: <b>{{keyPhrases}}</b>
            </div>
        </div>`;

And added it to the already present addResults function call:

automagic.addResults("results", { count: true }, resultTemplate);

I also created a resultsProcessor to do some custom transformations, e.g., join a few fields into one, truncate the description to 200 characters, and convert key phrases from an array into a comma-separated string:

var resultsProcessor = function(results) {
    return results.map(function(result) {
        result.displayText = result.number + " " + result.street + " " + result.city + ", " + result.region + " " + result.countryCode;
        var summary = result.description;
        result.summary = summary.length < 200 ? summary : summary.substring(0, 200) + "...";
        result.keyPhrases = result.keyphrases.join(", ");
        return result;
    });
};

You can do similar customization with suggestions. You can also add highlights to your results, and much more. Everything is described in the AzSearch.js README. We also have a starter app written in TypeScript and React, based on the sample real estate data, which takes advantage of more advanced features of AzSearch.js. If you have any questions or suggestions regarding AzSearch.js, let me know on Twitter!


Cognitive Search takes analyzing data with Azure Search to the next level. It takes away the burden of writing your own infrastructure for running AI-based analysis. For more advanced analysis, including OCR on your images, check out our docs. I am super excited to see it in action, and about the next improvements we are working on. Let us know what you think!

*This blog post was written on a Boeing 787 during my flight from Toronto to São Paulo, on my way to the QCon conference.

Do you trust password managers?

Password managers are very popular these days. There are some that store your passwords locally (e.g., KeePass), but the vast majority store your passwords online. The two most popular ones are 1Password and LastPass.

All online password managers claim they are secure. But do you know that for sure? Additionally, you have no idea what code changes the developers working on them are making every day. You have no idea whether that 128-bit identifier generated locally is actually unique, or whether it can be sniffed by spyware on your machine.

Is storing all your passwords in one place, behind one password, better than reusing passwords on multiple sites? When a website is hacked, it usually leaks hashed passwords (most of the time with a salt). If somebody gets access to all your passwords, though, you are in BIG TROUBLE. I don’t feel I need to explain what can happen. It hasn’t happened to the most popular managers yet, but…

I believe password managers are useful, but you should not store your most sensitive passwords there, including your e-mail and bank account passwords. Remember these, or store them on an external flash drive. You can also use KeePass to encrypt them. Additionally, you should not store non-generated passwords: if these get compromised, an attacker gets some idea of what your other passwords might be.

What are your thoughts? Are you using password managers?

Get Computer Science Crash Course with Imposter’s Handbook


I just finished reading Rob Connery‘s book Imposter’s Handbook. It’s a very good high-level overview of Computer Science concepts that you may not encounter in your everyday job. It is also good guidance for “what I should know”.

If you do not have a CS degree, I recommend you check out this book. You can skip chapters about concepts you are already familiar with. If something is new to you, this book will provide a nice introduction to the topic, which you can later dive into on your own.

If you do have a CS degree, I still recommend at least checking out what’s in there. I’m sure you will learn something, or at least refresh your knowledge.

Check out the Hacker News discussion!

Do you have a CS degree, or are you a self-taught programmer?

Properly measuring HTTP request time with node.js

When your backend code is calling external APIs, you may want to measure a particular request’s time to identify bottlenecks.

The most straightforward, but incorrect, way to measure how long a request takes is to use the JavaScript Date object:

var request = require('request');

let start_time = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed:', new Date().getTime() - start_time);
});

However, this won’t give you the actual time the request takes. The above request call is asynchronous, and you start measuring at the time the request was queued, not when it was actually sent.
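You can see this queuing effect in isolation, without any network at all. In the sketch below, a zero-delay timer stands in for the queued request; the event loop is then blocked for ~100 ms, and the Date-based measurement reports all of that blocked time even though the "operation" itself took nothing:

```javascript
var start = Date.now();
var measured;

// Stand-in for the queued async request: it is scheduled now,
// but cannot run until the event loop is free.
setTimeout(function () {
    measured = Date.now() - start;
    console.log('Measured:', measured, 'ms'); // at least ~100 ms
}, 0);

// Block the event loop for ~100 ms, simulating other work
// between queuing the request and actually sending it.
var busyUntil = Date.now() + 100;
while (Date.now() < busyUntil) { /* spin */ }
```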

To determine how much time elapsed since sending the request, you can use the time parameter:

var request = require('request');

request.get({ url: 'http://www.google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

You can also compare results returned by both methods:

var request = require('request');

let start_time = new Date().getTime();

request.get('https://google.com', function (err, response) {
    console.log('Time elapsed since queuing the request:', new Date().getTime() - start_time);
});

request.get({ url: 'http://www.google.com', time: true }, function (err, response) {
    console.log('The actual time elapsed:', response.elapsedTime);
});

When I ran it, I got the following results:

The actual time elapsed: 72
Time elapsed since queuing the request: 156

Notice that the first callback resolves after the second one(!)

The difference is almost 2x. Depending on your server-side code, this difference might be even larger, giving you incorrect hints while profiling your application.