Ian Griffiths has published a six-part series on when to use, and when not to use, .NET 4.5’s async features with WPF. The series begins with a post titled Too Much, Too Fast with WPF and Async.
With async, one may be tempted to liberally sprinkle it throughout the application and call it a day. Unfortunately this doesn’t work well when the batch size, that is to say the amount of work performed between each await, is smaller than the cost of creating a Task object and the associated context switching.
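The pitfall can be sketched as follows, assuming a word-list file and a WPF ListBox named `lstWords` (both illustrative, not taken from Ian’s posts). Awaiting once per line means a potential Task allocation and a dispatcher round-trip for every item, which for short lines costs more than the read itself:

```csharp
// Illustrative sketch of "liberally sprinkled" async: one await per line.
// Requires System.IO and System.Windows in a WPF code-behind file.
private async void LoadButton_Click(object sender, RoutedEventArgs e)
{
    using (var reader = new StreamReader("words.txt"))
    {
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            // Each await resumes via the WPF dispatcher, so the per-item
            // overhead dwarfs the cost of reading a short line.
            lstWords.Items.Add(line);
        }
    }
}
```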
Large batches can reduce the time to completion, but can interfere with UI responsiveness. Ian writes,
Although this is much faster than the 8.5 second case, we’ve lost something: that slower example produced useful results in the UI much faster. In fact, a user might prefer the slower version in practice, because if useful data appears immediately, you might not even notice that it takes three times longer to finish populating the list—it was probably going to take a lot longer than 8.5 seconds to scroll down through the whole list anyway. So by one important measure, the naive asynchronous method is better: it provides useful information to the user sooner.
Ian Griffiths also looks at using the thread pool and WPF 4.5’s new Collection Synchronization feature. This technique is also needed if you use ConfigureAwait(false) to avoid forcing the processing to occur on the UI thread.
That call to ConfigureAwait declares that we don’t care about which context the method continues on. The upshot is that when a read that cannot complete immediately does eventually finish, the deferred execution of the rest of the method will happen on a thread pool thread. This means our await no longer incurs any WPF dispatcher overhead. But of course, it also means that all our list updates will happen on a worker thread, so we’ll need to use the same tricks as before to avoid problems: either we’ll need to wait until we’re done before making the list visible to data binding, or we’ll have to enable cross-thread change notification handling.
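The second option Ian mentions, cross-thread change notification, can be sketched as follows using WPF 4.5’s `BindingOperations.EnableCollectionSynchronization`. The field and method names here are illustrative, not Ian’s:

```csharp
// Sketch: ConfigureAwait(false) keeps continuations on thread pool threads,
// while EnableCollectionSynchronization lets data binding read the
// collection safely from the UI thread. Requires System.Collections.ObjectModel,
// System.IO, System.Threading.Tasks, System.Windows.Data.
private readonly ObservableCollection<string> _words =
    new ObservableCollection<string>();
private readonly object _wordsLock = new object();

public MainWindow()
{
    InitializeComponent();
    // WPF will take _wordsLock whenever data binding touches _words.
    BindingOperations.EnableCollectionSynchronization(_words, _wordsLock);
    DataContext = _words;
}

private async Task LoadAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        string line;
        // ConfigureAwait(false): no dispatcher overhead on each await;
        // the rest of the loop runs on a worker thread.
        while ((line = await reader.ReadLineAsync().ConfigureAwait(false)) != null)
        {
            lock (_wordsLock) // must match the lock WPF uses for binding
            {
                _words.Add(line);
            }
        }
    }
}
```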
Another technique that Ian demonstrates is chunking data using Reactive Extensions. This uses the Buffer function to limit batch sizes to 100 ms or 5,000 items, whichever comes first, and the ObserveOnDispatcher function to marshal each chunk back onto the UI thread. The pattern is more verbose than the other techniques, but it “starts showing […] data almost immediately, and finishes loading and displaying all the data in 2.3 seconds”, which is an improvement over the original synchronous implementation.
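The chunking pattern can be sketched as follows, assuming an `IEnumerable<string>`-returning `ReadLines` helper and a ListBox named `lstWords` (both illustrative). `Buffer` and `ObserveOnDispatcher` come from Rx’s System.Reactive.Linq and System.Reactive.Windows.Threading namespaces:

```csharp
// Sketch of Rx chunking: batch by time or count, then hop to the UI thread
// once per chunk instead of once per item.
// Requires System, System.Reactive.Concurrency, System.Reactive.Linq.
IObservable<string> lines =
    ReadLines("words.txt").ToObservable(Scheduler.Default);

lines
    .Buffer(TimeSpan.FromMilliseconds(100), 5000) // emit a chunk every 100 ms
                                                  // or every 5,000 items
    .ObserveOnDispatcher()                        // deliver chunks on the UI thread
    .Subscribe(chunk =>
    {
        foreach (string line in chunk)
        {
            lstWords.Items.Add(line);
        }
    });
```

Because the dispatcher is invoked once per chunk rather than once per line, the UI stays responsive while still showing data almost immediately.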