A taste of the Netherlands: Tech Days NL in Amsterdam

TechDaysNL 2016

Earlier this month I had the pleasure of attending the Tech Days NL conference in Amsterdam. It was a very well organized event.

I had the opportunity to meet many passionate developers. I met Bart de Smet, a C# ninja who works for Microsoft on the Bing team and is also a C# compiler contributor. If you are passionate about C#, like I am, I really recommend checking out his talks. I also met Gil Cleeren (author of a few awesome Pluralsight courses about Xamarin, and organizer of the Techorama conference), Roy Cornelissen (a Xamarin MVP who builds mobile apps while working for Xpirit), and Mike Martin (an Azure MVP). Check out their sessions as well! I’m looking forward to meeting them again at the MVP Summit in Redmond this November!

For the third time this year I spoke about the Azure Portal architecture. I had to shorten my VSLive session from 1:15h to 45 minutes, so if the VSLive session was too long and too much of a commitment for you, check out my Tech Days session 😉 I gave a high-level overview of our architecture, the technologies we are using, our deployment approach, and the lessons learned over the last 2 years: performance tips & tricks, how to avoid regressions, and how to handle them when they happen.

UI Acceptance Testing Accessibility

Accessibility keywords

In the previous post, Unit Testing Accessibility, I showed how to run an accessibility check on an HTML node with aXe. This approach can be used to test individual components of your website.

You can take your accessibility testing to the next level by adding accessibility checks for entire pages.

Do you remember Martin Fowler’s testing pyramid?

testing pyramid

While high-level UI tests can detect the same issues that unit tests can, UI tests are usually slower to run. A single UI test can take 30 seconds to 1 minute, while a unit test runs in less than 100 milliseconds. Ideally you should have the most common scenario covered by a UI test, and all possible customizations (at the component level) covered by unit tests. You can also add UI tests for complex combinations of your components, or when you are fixing a bug in a situation that a unit test cannot cover. As you know, while fixing a bug, you should add a unit test covering the buggy scenario.

In general, unit tests and UI tests should be complementary. A UI test should indicate an issue, while a unit test should help you find the source of the problem.

In accessibility testing, high-level checks are very useful for detecting accessibility issues caused by some “small change”. Once you detect an accessibility violation, you should:

  1. narrow it down to a particular unit of your website
  2. add a unit test to cover the broken scenario
  3. make sure it fails
  4. fix the issue (by writing code)
  5. make sure that the unit test passes
  6. make sure that the end-to-end UI tests pass

Check out Marcy Sutton’s article: Accessibility Testing with aXe and WebdriverJS. She created a sample GitHub repo that demonstrates how to set everything up.

Unit Testing Accessibility

In Web Accessibility Hacker Way I mentioned that “only 20% of accessibility requirements can be verified by tools”. Nevertheless, it is worth covering that 20%, especially when it is not very hard. You know that having automated tests that guard against regressions always pays off in the long run.

As of today, the best automated verification tool for accessibility is aXe.


There is an aXe Chrome plugin and an aXe Firefox plugin that enable you to run an accessibility audit manually:

aXe - results

Running an automated tool manually is useful, but it is better to run it as a unit test and incorporate it into your Continuous Integration pipeline, so it runs automatically after every commit.

Running accessibility audit with aXe

You can install aXe with npm:

npm i axe-core

aXe has a function, a11yCheck, that performs an accessibility audit on a specified HTML element. You may run it against widgets or partial views in your web app. The function takes 2 parameters:

  1. the HTMLElement to be audited
  2. a callback function that is invoked with a results parameter

axe.a11yCheck($("#myElement")[0], function (results) {
    results.violations.length && console.error(results.violations);
});

It is useful to print errors to the console, as results.violations is an array of nested objects with many properties, most of which are helpful for diagnosing the issue.
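For illustration, here is roughly the shape of one entry in results.violations (fields abbreviated; the concrete values below are made up for this example, and the helpUrl is truncated):

```javascript
// Roughly the shape of one entry in results.violations (abbreviated).
var violation = {
  id: "label",                          // id of the rule that failed
  impact: "critical",                   // severity reported by aXe
  help: "Form elements must have labels",
  helpUrl: "https://dequeuniversity.com/rules/axe/...",
  nodes: [                              // every element violating the rule
    {
      html: '<input type="text" />',    // the offending markup
      target: ["#fixture > input"]      // CSS selector to locate it
    }
  ]
};

// The target selector lets you find the broken element on the page.
console.log(violation.nodes[0].target[0]); // "#fixture > input"
```

The nodes array is usually the first thing to look at, since it points directly at the offending elements.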

aXe - console errors

*a11y is an abbreviation for accessibility (similar to i18n for internationalization); 11 is the number of letters between ‘a’ and ‘y’.
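As an aside, such numeronyms can be computed mechanically: first letter, count of letters in between, last letter. A tiny sketch (the function name is mine, not part of any library):

```javascript
// Form a numeronym: first letter + number of letters in between + last letter.
function numeronym(word) {
  return word[0] + (word.length - 2) + word[word.length - 1];
}

console.log(numeronym("accessibility"));        // "a11y"
console.log(numeronym("internationalization")); // "i18n"
```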

Running aXe with Jasmine 2.x

In order to run aXe with Jasmine, you need to take into account that a11yCheck is asynchronous. Thus, you need to accept and invoke the done function:

describe("a11y check", function() {
  it("has no accessibility violations (check console for errors)", function(done) {
    axe.a11yCheck($("#myElement")[0], function (results) {
      if (results.violations.length > 0) {
        console.error(results.violations);
      }
      expect(results.violations.length).toBe(0);
      done();
    });
  });
});
Running aXe with QUnit 2.x

It is similar in QUnit. You also need to invoke a done function, but first you need to get it by calling assert.async():

QUnit.test("a11y check", function(assert) {
    var done = assert.async();

    axe.a11yCheck($("#myElement")[0], function (results) {
        assert.strictEqual(results.violations.length, 0, "There should be no A11y violations (check console for errors)");
        if (results.violations.length > 0) {
            console.error(results.violations);
        }
        done();
    });
});


I created a sample with a button and an input tag:

<div id="fixture">
  <button>My button</button>
  <input type="text" />
</div>

This sample is not accessible because the input tag does not have a label, so a11yCheck should report violations. The sample code, with Jasmine and QUnit tests, is available on GitHub: axe-unittests.


While automated accessibility unit tests are great, you probably still want to use the aXe plugin for Chrome to investigate reported violations. It’s more convenient and has a neat user interface, while with unit tests you need to dig into console errors.

When you start adding accessibility checks for different parts of your system, you may encounter many violations at first. It is better to add the tests anyway, ignoring the known violations, and then fix the issues incrementally. This approach prevents regressions; delaying the tests until all violations are fixed may let new violations slip in while you fix the others.
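Ignoring known violations does not require any library support; a minimal sketch, assuming knownViolations holds the aXe rule ids you have decided to tolerate for now (the ids and sample results object below are hypothetical):

```javascript
// Rule ids we already know about and will fix incrementally (hypothetical).
var knownViolations = ["label", "color-contrast"];

// Keep only violations whose rule id is not on the known list,
// so the test fails only when a NEW kind of violation appears.
function newViolations(results) {
  return results.violations.filter(function (violation) {
    return knownViolations.indexOf(violation.id) === -1;
  });
}

// Example: a results object shaped like aXe's output.
var results = {
  violations: [
    { id: "label", nodes: [] },       // known, tolerated for now
    { id: "button-name", nodes: [] }  // new, should fail the test
  ]
};

console.log(newViolations(results).length); // 1
```

In your test you would then assert that newViolations(results).length is 0, and shrink knownViolations as you fix the backlog.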

Azure Portal Tips & Tricks – 21. Azure Portal Settings

Azure Portal Tips & Tricks is a series of short videos where I show various features of the Azure Portal and how you can take advantage of them to be more productive.

In this video I show the Azure Portal settings.

You can follow the series by subscribing to my channel or going directly to Azure Portal Tips & Tricks playlist:

If you have any suggestions or questions about the Azure Portal, or there is something in particular that you would like to see in this series, tweet me at @JakubJedryszek or leave a comment.

Working Effectively with Legacy Code

I haven’t published any book reviews for a while. It does not mean I am not reading books anymore. I just didn’t feel that the books I read recently required my recommendation, or I didn’t have any thoughts that I needed to share right away.

I have added a few books to my favorite books list though. Check them out!

Working Effectively with Legacy Code deserves a blog post for a few reasons:

  1. Every software developer should read it
  2. It’s not really about legacy code
  3. Published in 2004 (12 years ago!), it is still very up to date

Working Effectively with Legacy Code, 1st Edition, by Michael Feathers

The book has three parts:

  1. The importance of unit tests when changing software
  2. Recipes for real-world problems that we face when changing software (e.g., “I need to change a monster method” or “What methods should I test when introducing a change”)
  3. A catalog of dependency-breaking techniques

The first part should be familiar to most programmers these days. If it is not, then you should read Agile Principles, Patterns, and Practices (by Robert Martin), TDD by Example (by Kent Beck), and The Art of Unit Testing (by Roy Osherove). You can thank me later.

The second part is the essence of the book. It shows, by example, how to add a new feature, make a change to existing code, or fix a bug. Most books about software development present examples with very simple, clean code that we never see in the real world. This book takes some messy piece of code and shows how to make it testable, how to get rid of too many side effects, and how to clean it up by separating dependencies and responsibilities. Many times we want to test one functionality, and then we realize that we need to instantiate tens of objects that the method depends on. Sounds familiar? This part shows how to handle that.

The last part (Dependency-Breaking Techniques) is very similar to Martin Fowler’s Refactoring: Improving the Design of Existing Code. It’s a set of techniques, with step-by-step descriptions of how to apply them to existing code.

As I mentioned earlier, this book is not really about legacy code. I think it is more about evolving existing code. It is natural, when adding features, to keep adding lines of code to a method. The hard part is knowing when you should extract a new method, introduce a class, or refactor dependencies. It’s OK to have global variables; the problem is keeping track of them, or localizing them. How do you know that adding a new functionality will not break something? Do you have 100% test coverage for every possible use case? That’s not possible, because of the complexity of the software we are creating today. “What methods should I test” shows a neat technique for backtracking the effects and side effects of a change you are introducing.

There is much more, and you should check out this book. You don’t have to read it from cover to cover. I strongly recommend, at the least, scanning through part 2 (changing software), and I am sure you will learn something new that you can apply to your project today!