In-memory caching in Xamarin apps


Recently we added in-memory caching to Azure App. You can try it out now on iOS and Android!

It turns out Mono doesn’t have the System.Runtime.Caching namespace that makes caching easy to implement in .NET apps. We had to find another way.

Caching libraries for Xamarin

We looked at a few caching libraries (e.g., MemoryCache and Akavache), but surprisingly none of them manages cache size and memory. They simply add items to a Dictionary, and if you add too many you get an OutOfMemoryException.

This may not be an issue for many applications, but in Azure App we need to take into account users who have multiple subscriptions with thousands of resources.

BTW: Akavache is a great library. Besides an in-memory cache it also supports a persistent cache, has clean APIs, and comes with a lot of great documentation.

Implementing in-memory cache

After browsing the internet and asking people on the Xamarin chat, we didn’t find anything that would work for us, so we decided to implement an in-memory cache ourselves.

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public interface IInMemoryCache<T>
{
    Task<T> GetOrAdd(string key, DateTimeOffset expiration, Func<string, Task<T>> addFactory);
}

public class InMemoryCache<T> : IInMemoryCache<T>
{
    private const int LimitedCacheThreshold = 1000;

    private class Reference
    {
        private int _hitCount = 0;

        public DateTimeOffset Timestamp { get; private set; }

        public T Data { get; private set; }

        public void AddRef()
        {
            Interlocked.Increment(ref _hitCount);
        }

        public int ResetRef()
        {
            var count = _hitCount;
            _hitCount = 0;
            return count;
        }

        public static Reference Create(T obj)
        {
            return new Reference()
            {
                Timestamp = DateTimeOffset.Now,
                Data = obj,
            };
        }

        private Reference()
        {
        }
    }

    private readonly ConcurrentDictionary<string, WeakReference<Reference>> _weakCache;
    private readonly ConcurrentDictionary<string, Reference> _limitedCache;
    private readonly ConcurrentDictionary<string, Task<T>> _pendingTasks;

    private InMemoryCache()
    {
        _weakCache = new ConcurrentDictionary<string, WeakReference<Reference>>(StringComparer.Ordinal);
        _limitedCache = new ConcurrentDictionary<string, Reference>(StringComparer.Ordinal);
        _pendingTasks = new ConcurrentDictionary<string, Task<T>>(StringComparer.Ordinal);
    }

    public static IInMemoryCache<T> Create()
    {
        return new InMemoryCache<T>();
    }

    public async Task<T> GetOrAdd(string key, DateTimeOffset expiration, Func<string, Task<T>> addFactory)
    {
        WeakReference<Reference> cachedReference;

        if (_weakCache.TryGetValue(key, out cachedReference))
        {
            Reference cachedValue;
            if (cachedReference.TryGetTarget(out cachedValue) && cachedValue != null)
            {
                // An entry is fresh if it was cached after the expiration cutoff.
                if (cachedValue.Timestamp > expiration)
                {
                    cachedValue.AddRef();
                    return cachedValue.Data;
                }
            }
        }

        try
        {
            var actualValue = await _pendingTasks.GetOrAdd(key, addFactory);

            if (_limitedCache.Count > LimitedCacheThreshold)
            {
                // Evict the least used half: fewest hits since the last sweep first, then oldest.
                var keysToRemove = _limitedCache
                    .Select(item => Tuple.Create(item.Value.ResetRef(), item.Value.Timestamp, item.Key))
                    .OrderBy(item => item.Item1)
                    .ThenBy(item => item.Item2)
                    .Select(item => item.Item3)
                    .Take(LimitedCacheThreshold / 2)
                    .ToList();

                foreach (var k in keysToRemove)
                {
                    Reference unused;
                    _limitedCache.TryRemove(k, out unused);
                }
            }

            var reference = Reference.Create(actualValue);
            _weakCache[key] = new WeakReference<Reference>(reference);
            _limitedCache[key] = reference;

            return actualValue;
        }
        finally
        {
            Task<T> unused;
            _pendingTasks.TryRemove(key, out unused);
        }
    }
}
We use two layers of caching. The first uses WeakReference, which leaves memory management to the Garbage Collector. Because the GC is not very predictable and may sometimes release a reference unnecessarily, we have a second layer of caching. We call it _limitedCache, and it keeps objects in memory until it reaches a capacity of 1000 objects; then we remove the 500 least used objects (half) from the dictionary. Because the same objects are kept in both dictionaries, a WeakReference will never be released as long as its object is in _limitedCache. Thus, we always check only whether an object is present in _weakCache.

There is also a third dictionary that keeps track of the pending tasks responsible for fetching data. It prevents us from sending the same request more than once when an object is not in the cache yet.
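To make the eviction order concrete, here is a small self-contained sketch (with made-up data, not the app’s code) of the same LINQ ordering used above: the entries with the fewest hits since the last sweep are evicted first, and older timestamps break ties.

```csharp
using System;
using System.Linq;

// Illustrative entries; Hits plays the role of the counter that ResetRef() returns.
var entries = new[]
{
    new { Key = "a", Hits = 5, Timestamp = DateTimeOffset.Now.AddMinutes(-1) },
    new { Key = "b", Hits = 0, Timestamp = DateTimeOffset.Now.AddMinutes(-10) },
    new { Key = "c", Hits = 0, Timestamp = DateTimeOffset.Now.AddMinutes(-2) },
    new { Key = "d", Hits = 9, Timestamp = DateTimeOffset.Now.AddMinutes(-5) },
};

var keysToRemove = entries
    .Select(e => Tuple.Create(e.Hits, e.Timestamp, e.Key))
    .OrderBy(t => t.Item1)   // fewest hits first
    .ThenBy(t => t.Item2)    // then oldest timestamp
    .Select(t => t.Item3)
    .Take(2)                 // evict half of a 4-entry cache
    .ToList();

Console.WriteLine(string.Join(",", keysToRemove)); // b,c
```

Both "b" and "c" have zero hits, and "b" is older, so it goes first; the frequently hit "a" and "d" survive the sweep.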


What is great about building apps with Xamarin is the ability to share code across platforms. While implementing the cache we didn’t touch any platform-specific code. All the work was done in a Portable Class Library.

Adding the cache to Azure App not only decreased users’ network data usage, but also improved performance significantly!

If you need an in-memory cache for your app, go ahead and use the code snippet above! If you are looking for a persistent cache, consider using Akavache.

Are you caching? How? Why? Why not?

Trying iOS 11 with Xamarin

The triathlon season is over. I completed all three races planned for this year:

  1. Ironman 70.3 Coeur d’Alene
  2. SeaFair Sprint Triathlon (new PR!)
  3. Lake Meridian Olympic Triathlon (new PR!)

I also finished RAMROD (the epic Ride Around Mt Rainier in One Day) and Course d’Equipe. The last bike ride of this season is the Gran Fondo Whistler in two weeks.

In the meantime…

Winter is coming!

Apple is cooking up iOS 11 for us, and I decided to give it a shot! It actually works nicely.

  1. Install the latest Xcode beta from here
  2. Install the latest Xamarin.iOS (all links are here; hint: the version is 10.99, not 11 yet)
  3. Set VS for Mac to use Xcode-beta (Preferences -> Projects -> SDK Locations -> Apple -> Location)

If you did everything correctly, you should be able to see the new iOS 11 simulator:

iOS 11 simulator

I encountered one issue: when deploying to a device I got the following errors:

Error: unable to find utility “lipo”, not a developer tool or in PATH
Error: Failed to create the a fat library

The solution was to run the following command:

sudo xcode-select --switch /Applications/

Related Xamarin Forums thread.


So far everything works pretty well. Occasionally VS for Mac doesn’t detect the simulators when I run it, but after a restart they are back!

Have you tried iOS 11 yet?

Azure Resource Manager Batch API

The latest Azure Mobile App update shows statuses on the resources list:

Azure App - Statuses on resources list

You probably want to ask why we didn’t have them before. Great question! Currently, Azure Resource Manager (the public API we use to get your Azure resources) requires a separate call to get each resource’s status. It means that if you have 100-200 resources, you would have to make 100-200 extra calls. Some people have almost 2000 resources in one subscription! Taking performance and data usage into consideration, this is not ideal.

Both the iOS and Android platforms allow us to address this problem to some extent by querying statuses only for the resources that are currently visible. However, this still costs an extra 5-10 calls. It gets worse when you start scrolling, and very bad when you scroll through a list of 2000 resources.

Batch API

Some time ago ARM added a Batch API: you can send a POST request with up to 20 URIs in the body, and the response contains up to 20 packaged responses that you have to extract. Using the Batch API, you can decrease the number of requests by up to 20x. This matters especially when a user has a lot of resources and keeps scrolling through the list.
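For reference, the request body is essentially just a list of URIs to fetch. The sketch below is illustrative and from memory (the exact endpoint, api-version values and field names may differ, so check the ARM documentation before relying on it):

```json
{
  "requests": [
    { "httpMethod": "GET", "url": "/subscriptions/.../providers/Microsoft.Web/sites/site1?api-version=..." },
    { "httpMethod": "GET", "url": "/subscriptions/.../providers/Microsoft.Web/sites/site2?api-version=..." }
  ]
}
```

The response then carries a matching array of packaged responses (a status code plus content for each entry) that the client unpacks.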

When implementing batch requests, you need to figure out the optimal interval for sending them. We started with 200 ms, but then changed it to 50 ms. Additionally, every time a new request comes in, we delay sending the batch request by another 50 ms. This could cause an indefinite delay, so we always submit the batch once the queue has 20 or more pending requests. Still, 20 * 50 ms = 1000 ms = 1 s = a long time! We tweaked it again and changed the interval to 20 ms. With the current implementation, we wait anywhere between 20 ms and 400 ms before sending a batch request.
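That timing policy can be sketched as follows. This is a hypothetical, minimal version (the class and method names are illustrative, not the app’s actual code), and it uses System.Threading.Timer for brevity even though the Xamarin PCL profile lacked it:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class BatchDispatcher
{
    private const int BatchSize = 20;   // the ARM batch limit
    private const int DelayMs = 20;     // the debounce interval

    private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
    private readonly Action<IReadOnlyList<string>> _sendBatch;
    private Timer _timer;

    public BatchDispatcher(Action<IReadOnlyList<string>> sendBatch)
    {
        _sendBatch = sendBatch;
    }

    public void Enqueue(string resourceId)
    {
        _pending.Enqueue(resourceId);

        if (_pending.Count >= BatchSize)
        {
            Flush();   // a full batch goes out immediately, capping the wait
        }
        else
        {
            // Restart the debounce timer: every new request delays the flush by another 20 ms.
            _timer?.Dispose();
            _timer = new Timer(_ => Flush(), null, DelayMs, Timeout.Infinite);
        }
    }

    private void Flush()
    {
        var batch = new List<string>();
        string id;
        while (batch.Count < BatchSize && _pending.TryDequeue(out id))
        {
            batch.Add(id);
        }

        if (batch.Count > 0)
        {
            _sendBatch(batch);   // in the real app this becomes the POST to the Batch API
        }
    }
}
```

Each Enqueue restarts the 20 ms debounce timer, so a burst of fewer than 20 requests goes out 20 ms after the last one, while the 20th pending request forces an immediate flush, which caps the worst-case wait at roughly 20 * 20 ms = 400 ms.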

Implementing Batch API

You are probably going to say: “it all sounds great, but how do I implement it?” For your convenience I created a small console application that demonstrates the ARM Batch API in action, and I put it on GitHub.

Xamarin.iOS and Xamarin.Android do not have System.Threading.Timer, so we created our own implementation, OneShotTimer (thanks, William Moy!).

The entire magic happens in ArmService. It has one public method, GetResource, which, instead of sending a GET request directly, adds the request to a ConcurrentQueue. The OneShotTimer and BatchRequestDispatcher methods are responsible for sending the actual HTTP request.

To run the console app, you need to provide an ARM token and (optionally) the resource IDs you want to request. In the demo app I provided fake resource IDs, which are good enough to issue requests, but you will not get any resources back.

To get an ARM token, go to the Azure Portal, open the F12 tools and inspect an ARM request. From the request headers, copy the Authorization header (the string starting with Bearer rAnDoMcHaRacTErS...):

Azure Portal - ARM token

You can also get resource IDs from the F12 tools. The best way is to go to the All Resources blade and find a batch request:

Azure Portal - resources ids

Once you paste the resource IDs and ARM token into Program.cs, you can run the app, and you should see the following output:

Batch requests with 5s randomness

Requests are sent at random times, anywhere from 0 to 5 s after the program starts. This is done using Task.Delay:

var random = new Random();   // one shared instance, so quick successive calls don't reuse the same seed
var tasks = _resourceIds.Select(async resourceId =>
{
    await Task.Delay(random.Next(5000));   // simulate calling GetResource from different parts of the UI
    var response = await _armService.GetResource(resourceId);
    return response;
});

When you change the randomness from 5 s to 0.5 s, you can observe that there are fewer batch requests (i.e., more requests sent in a single batch):

Batch requests with 0.5s randomness


Using the Batch API to get resource statuses visibly improves performance in the mobile app. It is especially noticeable when using network data.

Azure Resource Manager plans to add an API that will allow a single request to fetch multiple resources together with their statuses. This should improve performance even more in the future.

If you are facing a similar problem in your app, consider implementing a Batch API on your server!

Taking WordPress blog to HTTPS with CloudFlare in less than 10 minutes!


Making your website secure has never been easier! I was able to take my WordPress blog to HTTPS in less than 10 minutes!


This part is super easy and straightforward. Just sign up for CloudFlare and follow the instructions. You can also check out this Troy Hunt demo to see it in action.

Once you finish, your website will be running on HTTPS!

An additional benefit is taking advantage of the CloudFlare cache. For free! As you can see in the screenshot below, in the last month 54/66 GB was served from CloudFlare, and only 11/66 GB came from my server!

CloudFlare - cached bandwidth


If you have a WordPress blog (like I do), the above setup will take your website to HTTPS, but all URLs (hyperlinks, images, stylesheets, etc.) will still be HTTP. This results in mixed content errors.

I love WordPress, because every problem you may have has already been solved by somebody else 🙂 In this case the problem is solved by the CloudFlare Flexible SSL plugin.

Multiple domains

If you have multiple domains pointing to your blog, things are a little bit more complicated: see WordPress Multisite SSL with domain mapping using Cloudflare.


If you want to learn more about HTTPS, check out What Every Developer Must Know About HTTPS. It is also worth remembering that HTTPS might be faster than HTTP!

Is your website secure? Why not?

Quick intro to web development with TypeScript, webpack and Aurelia

TypeScript at SeattleJS

Earlier this month I spoke at the SeattleJS meetup. I love this meetup! The people attending it are awesome! Thank you, Jeremy Foster, for inviting me to speak! If you live in the Seattle area, you should definitely check it out!

I gave a fast-paced, 30-minute overview of TypeScript. I showed a sample app that takes advantage of webpack for continuous compilation, bundling and minification. I also did a quick demo of the Aurelia Framework <3

After the presentation I got a lot of questions about migrating from JavaScript to TypeScript, and about the specifics of building large web apps.

The TypeScript team and Anders Hejlsberg shared with me a few interesting reads about migrating to TypeScript.

I also found out that Visual Studio Code was initially written in JavaScript, and before the v1 release they switched to TypeScript.

Do you have questions or thoughts about migrating from JavaScript to TypeScript? Join the discussion on Twitter.

To learn more about the specifics of building large web apps, check out the talk I gave at Ignite Australia earlier this year:

At the same conference, I gave a longer version of the talk I did at the meetup. So if you want to dive in deeper, the video is here.

It is also worth mentioning that TypeScript is getting more and more traction every day. If you are a web developer, you should seriously consider using it over plain JavaScript or transpiled JavaScript vNext.

TypeScript = JavaScript vNext + types

Every valid JavaScript program is a valid TypeScript program. Thus, by choosing TypeScript you keep the flexibility of writing plain JavaScript, with the opportunity to add type checks to the critical components of your project.

TypeScript - GoogleTrends

Once more, thanks to SeattleJS for organizing an awesome meetup and inviting me to speak! Great meetup, great people, keep up the good work!