C# Async Antipatterns
The async and await keywords have done a great job of simplifying writing asynchronous code in C#, but unfortunately they can't magically protect you from getting things wrong. In this article, I want to highlight a bunch of the most common async coding mistakes or antipatterns that I've come across in code reviews.

1. Forgotten await
Whenever you call a method that returns a Task or Task<T> you should not ignore its return value. In most cases, that means awaiting it, although there are occasions where you might keep hold of the Task to be awaited later.
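A minimal sketch of this mistake (the method name and messages here are just illustrative):

static void ForgottenAwait()
{
    Console.WriteLine("Before");
    Task.Delay(1000); // a Task is created, but nothing awaits it
    Console.WriteLine("After"); // runs immediately
}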
In this example, we call Task.Delay but because we don't await it, the "After" message will get immediately written, because Task.Delay(1000) simply returns a task that will complete in one second, but nothing is waiting for that task to finish.
If you make this mistake in a method that returns Task and is marked with the async keyword, then the compiler will give you a helpful error:
Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.
But in synchronous methods or task-returning methods not marked as async it appears the C# compiler is quite happy to let you do this, so remain vigilant that you don't let this mistake go unnoticed.
2. Ignoring tasks
Sometimes developers will deliberately ignore the result of an asynchronous method because they explicitly don't want to wait for it to complete. Maybe it's a long-running operation that they want to happen in the background while they get on with other work. So sometimes I see code like this:
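Something along these lines (DoSomethingAsync stands in for whatever long-running operation is being kicked off):

public void StartBackgroundWork()
{
    // deliberately not awaited: fire and forget
    DoSomethingAsync();
    // ... carry on with other work
}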
The danger with this approach is that nothing is going to catch any exceptions thrown by DoSomethingAsync. At best, that means you won't know it failed to complete. At worst, it can terminate your process.
So use this approach with caution, and make sure the method has good exception handling. When I see code like this in cloud applications, I often refactor it to post a message to a queue, whose message handler performs the background operation.
3. Using async void methods
Every now and then you'll find yourself in a synchronous method (i.e. one that doesn't return a Task or Task<T> ) but you want to call an async method. However, without marking the method as async you can't use the await keyword. There are two ways developers work round this and both are risky.
The first is when you're in a void method, the C# compiler will allow you to add the async keyword. This allows us to use the await keyword:
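A sketch of that first workaround (MyMethod and DoSomethingAsync are illustrative names):

public async void MyMethod()
{
    await DoSomethingAsync();
}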
The trouble is that the caller of MyMethod has no way to await the outcome of this method. They have no access to the Task that DoSomethingAsync returned. So you're essentially ignoring a task again.
Now there are some valid use cases for async void methods. The best example would be in a Windows Forms or WPF application where you're in an event handler. Event handlers have a method signature that returns void so you can't make them return a Task .
So it's not necessarily a problem to see code like this:
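For example (the handler and method names are illustrative):

private async void RefreshButton_Click(object sender, EventArgs e)
{
    await LoadDataAsync();
}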
But in most cases, I recommend against using async void . If you can make your method return a Task , you should do so.
4. Blocking on tasks with .Result or .Wait
Another common way that developers work around the difficulty of calling asynchronous methods from synchronous methods is by using the .Result property or .Wait method on the Task . The .Result property waits for a Task to complete, and then returns its result, which at first seems really useful. We can use it like this:
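A sketch of the blocking approach (the customer lookup is just an illustration):

public Customer GetCustomer(Guid id)
{
    // blocks the calling thread until the task completes
    return GetCustomerByIdAsync(id).Result;
}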
But there are some problems here. The first is that using blocking calls like Result ties up a thread that could be doing other useful work. More seriously, mixing async code with calls to .Result (or .Wait() ) opens the door to some really nasty deadlock problems .
Usually, whenever you need to call an asynchronous method, you should just make the method you are in asynchronous. Yes, it's a bit of work and sometimes results in a lot of cascading changes, which can be annoying in a large legacy codebase, but that is still usually preferable to the risk of introducing deadlocks.
There might be some instances in which you can't make the method asynchronous. For example, if you want to make an async call in a class constructor - that's not possible. But often with a bit of thought, you can redesign the class to not need this.
For example, instead of this:
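A sketch of a constructor that blocks on an async call (the class and service names are illustrative):

public class CustomerStatistics
{
    private readonly List<Order> orders;

    public CustomerStatistics(IOrderService orderService, Guid customerId)
    {
        // blocking on an async call inside a constructor
        orders = orderService.GetOrdersAsync(customerId).Result;
    }
}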
You could do something like this, using an asynchronous factory method to build the class instead:
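A sketch of the same class reworked with an async factory method:

public class CustomerStatistics
{
    private readonly List<Order> orders;

    private CustomerStatistics(List<Order> orders)
    {
        this.orders = orders;
    }

    public static async Task<CustomerStatistics> CreateAsync(IOrderService orderService, Guid customerId)
    {
        var orders = await orderService.GetOrdersAsync(customerId);
        return new CustomerStatistics(orders);
    }
}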
Other situations where your hands are tied are when you are implementing a third-party interface that is synchronous and cannot be changed. I've run into this with IDisposable and with ASP.NET MVC's ActionFilterAttribute. In these situations you either have to get very creative, or just accept that you need to introduce a blocking call, and be prepared to write lots of ConfigureAwait(false) calls elsewhere to protect against deadlocks (more on that shortly).
The good news is that with modern C# development, it is becoming increasingly rare that you need to block on a task. Since C# 7.1 you can declare async Main methods for console apps, and ASP.NET Core is much more async-friendly than the previous ASP.NET MVC was, so you should rarely find yourself in this situation.
5. Mixing ForEach with async methods
The List<T> class has a "handy" method called ForEach that performs an Action<T> on every element in the list. If you've seen any of my LINQ talks you'll know my misgivings about this method, as it encourages a variety of bad practices (read this for some of the reasons to avoid ForEach). But one common threading-related misuse I see is using ForEach to call an asynchronous method.
For example, let's say we want to email all customers like this:
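Something like this (SendEmailAsync is an illustrative name):

customers.ForEach(c => SendEmailAsync(c));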
What's the problem here? Well, what we've done is exactly the same as if we'd written the following foreach loop:
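That is, roughly:

foreach (var c in customers)
{
    SendEmailAsync(c); // the returned Task is ignored
}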
We've generated one Task per customer but haven't waited for any of them to complete.
Sometimes I'll see developers try to fix this by adding in async and await keywords to the lambda:
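That attempt looks something like this:

customers.ForEach(async c => await SendEmailAsync(c));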
But this makes no difference. The ForEach method accepts an Action<T> , which returns void . So you've essentially created an async void method, which of course was one of our previous antipatterns as the caller has no way of awaiting it.
So what's the fix to this? Well, personally I usually prefer to just replace this with the more explicit foreach loop:
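Assuming we're already inside an async method, that becomes:

foreach (var c in customers)
{
    await SendEmailAsync(c);
}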
Some people prefer to make this into an extension method, called something like ForEachAsync , which would allow you to write code that looks like this:
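A sketch of one possible extension method and its usage (this is illustrative, not a standard library method):

public static class AsyncForEachExtensions
{
    public static async Task ForEachAsync<T>(this IEnumerable<T> source, Func<T, Task> body)
    {
        foreach (var item in source)
        {
            await body(item);
        }
    }
}

// usage, inside an async method:
await customers.ForEachAsync(c => SendEmailAsync(c));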
But don't mix List<T>.ForEach (or indeed Parallel.ForEach, which has exactly the same problem) with asynchronous methods.
6. Excessive parallelization
Occasionally, a developer will identify a series of tasks that are performed sequentially as being a performance bottleneck. For example, here's some code that processes some orders sequentially:
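For example (ProcessOrderAsync is an illustrative method name):

foreach (var order in orders)
{
    await ProcessOrderAsync(order);
}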
Sometimes I'll see a developer attempt to speed this up with something like this:
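Something like this:

var tasks = orders.Select(o => ProcessOrderAsync(o)).ToList();
await Task.WhenAll(tasks);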
What we're doing here is calling the ProcessOrderAsync method for every order, and storing each resulting Task in a list. Then we wait for all the tasks to complete. Now, this does "work", but what if there were 10,000 orders? We've flooded the thread pool with thousands of tasks, potentially preventing other useful work from completing. If ProcessOrderAsync makes downstream calls to another service like a database or a microservice, we'll potentially overload that with too high a volume of calls.
What's the right approach here? Well at the very least we could consider constraining the number of concurrent threads that can be calling ProcessOrderAsync at a time. I've written about a few different ways to achieve that here .
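One simple option is a SemaphoreSlim acting as a throttle (a sketch; the concurrency limit of 10 is arbitrary and should be tuned to your downstream services):

var throttler = new SemaphoreSlim(10); // at most 10 concurrent operations
var tasks = orders.Select(async order =>
{
    await throttler.WaitAsync();
    try
    {
        await ProcessOrderAsync(order);
    }
    finally
    {
        throttler.Release();
    }
});
await Task.WhenAll(tasks);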
If I see code like this in a distributed cloud application, it's often a sign that we should introduce some messaging so that the workload can be split into batches and handled by more than one server.
7. Non-thread-safe side-effects
If you've ever looked into functional programming (which I recommend you do even if you have no plans to switch language), you'll have come across the idea of "pure" functions. The idea of a "pure" function is that it has no side effects. It takes data in, and it returns data, but it doesn't mutate anything. Pure functions bring many benefits including inherent thread safety.
Often I see asynchronous methods like this, where we've been passed a list or dictionary and in the method we modify it:
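A sketch of the kind of method I mean (the user-lookup details are illustrative):

public async Task AddUserAsync(Guid id, List<User> users)
{
    var user = await userRepository.GetUserAsync(id);
    user.LastSeen = DateTime.UtcNow;
    users.Add(user); // mutating a shared list: not thread-safe
}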
The trouble is that this code is risky: it is not safe for the users list to be modified from multiple threads at the same time. Here's the same method updated so that it no longer has side effects on the users list:
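One way to do that, continuing the sketch:

public async Task<User> GetUserAsync(Guid id)
{
    var user = await userRepository.GetUserAsync(id);
    user.LastSeen = DateTime.UtcNow;
    return user; // the caller decides how (and on what thread) to store it
}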
Now we've moved the responsibility of adding the user into the list onto the caller of this method, who has a much better chance of ensuring that the list is accessed from one thread only.
8. Missing ConfigureAwait(false)
ConfigureAwait is not a particularly easy concept for new developers to understand, but it is an important one, and if you find yourself working on a codebase that uses .Result and .Wait it can be critical to use correctly.
I won't go into great detail, but essentially the meaning of ConfigureAwait(true) is that I would like my code to continue on the same "synchronization context" after the await has completed. For example, in a WPF application, the "synchronization context" is the UI thread, and I can only make updates to UI components from that thread. So I almost always want ConfigureAwait(true) in my UI code.
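For example, in a WPF event handler (an illustrative sketch):

private async void LoadButton_Click(object sender, RoutedEventArgs e)
{
    var report = await LoadReportAsync().ConfigureAwait(true);
    // we're back on the UI thread, so it's safe to touch UI elements
    ReportTextBox.Text = report;
}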
Now ConfigureAwait(true) is actually the default, so we could safely leave it out of the above example and everything would still work.
But why might we want to use ConfigureAwait(false)? Well, for performance reasons. Not everything needs to run on the "synchronization context", and so it's better if we don't make one thread do all the work. So ConfigureAwait(false) should be used whenever we don't care what thread the continuation runs on, which is actually a lot of the time, especially in low-level code that is dealing with files and network calls.
However, when we start combining code that has synchronization contexts, ConfigureAwait(false) and calls to .Result , there is a real danger of deadlocks. And so the recommended way to avoid this is to remember to call ConfigureAwait(false) everywhere that you don't explicitly need to stay on the synchronization context.
For example, if you make a general purpose NuGet library, then it is highly recommended to put ConfigureAwait(false) on every single await call, since you can't be sure of the context in which it will be used.
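For example, in a library method (a sketch):

public async Task<string> DownloadJsonAsync(HttpClient client, string url)
{
    var response = await client.GetAsync(url).ConfigureAwait(false);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
}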
There is some good news on the horizon. In ASP.NET Core there is no longer a synchronization context, which means that you no longer need to put calls to ConfigureAwait(false) in. It does, however, remain recommended when creating NuGet packages.
But if you are working on projects that run the risk of a deadlock, you need to be very vigilant about adding the ConfigureAwait(false) calls in everywhere.
9. Ignoring the async version
Whenever a method in the .NET framework takes some time, or performs some disk or network IO, there is almost always an asynchronous version of the method you can use instead. Unfortunately, the synchronous versions remain for backwards compatibility reasons. But there is no longer any good reason to use them.
So for example, prefer Task.Delay to Thread.Sleep , prefer dbContext.SaveChangesAsync to dbContext.SaveChanges and prefer fileStream.ReadAsync to fileStream.Read . These changes free up the thread-pool threads to do other more useful work, allowing your program to process a higher volume of requests.
10. try catch without await
There's a handy optimization that you might know about. Let's suppose we have a very simple async method that only makes a single async call as the last line of the method:
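For example (mailer and SendAsync are illustrative names; SendAsync is the call discussed below):

async Task SendReminderAsync(string email)
{
    await mailer.SendAsync(email, "Don't forget!");
}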
In this situation, there is no need to use the async and await keywords. We could have simply done the following and returned the task directly:
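In sketch form:

Task SendReminderAsync(string email)
{
    return mailer.SendAsync(email, "Don't forget!");
}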
Under the hood, this produces slightly more efficient code, as code using the await keyword compiles into a state machine behind the scenes.
But let's suppose we update the function to look like this:
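Perhaps wrapping it in a try catch, something like this (the logging is illustrative):

Task SendReminderAsync(string email)
{
    try
    {
        return mailer.SendAsync(email, "Don't forget!");
    }
    catch (Exception ex)
    {
        logger.LogError(ex, "Failed to send reminder");
        throw;
    }
}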
It looks safe, but actually the catch clause will not have the effect you might expect. It will not catch all exceptions that might be thrown while the Task returned from SendAsync runs. That's because we've only caught exceptions that were thrown while we created that Task. If we want to catch exceptions thrown at any point during that task, we need the await keyword again:
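Continuing the sketch:

async Task SendReminderAsync(string email)
{
    try
    {
        await mailer.SendAsync(email, "Don't forget!");
    }
    catch (Exception ex)
    {
        logger.LogError(ex, "Failed to send reminder");
        throw;
    }
}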
Now our catch block will be able to catch exceptions thrown at any point in the SendAsync task execution.
There are lots of ways in which you can cause yourself problems with async code, and so it's worth investing time to deepen your understanding of threading. In this article I've just picked out a few of the problems I see most frequently, but I'm sure there are plenty more that could be added. Let me know in the comments what advice you'd add to this list.
If you'd like to learn more about threading in C#, a few resources I can recommend are Bill Wagner's recent NDC talk The promise of an async future awaits , Filip Ekberg's Pluralsight course Getting Started with Asynchronous Programming in .NET , and of course anything written by renowned async expert Stephen Cleary .
Great article, thanks!!
you want to call an async method
Can you please add the correct pattern for each example? You explain what not to do, and the reason, but not how to fix it. How could we learn? ;-)
I love #6
Yes, good suggestions for more accurate terminology
Were there any in particular that you had a problem with? I did provide recommendations in most of the points where I didn't think it was obvious.
We've flooded the thread pool with thousands of tasks, potentially preventing other useful work from completing.
Yes, it would not be so bad if the tasks added to the thread pool had no blocking calls in them. But there would still be a potentially large backlog of tasks to get through which could delay other parts of the system, and you can also accidentally launch a denial of service attack on a down-stream service (e.g. a database) with this approach, so constraining the number of concurrent operations is usually a better idea.
Confession: 4 and 5.
There are two ways developers work round this and both are risky.
Ah, sorry that wasn't clear. #4 is the second way - using .Result or .Wait in a void function
Great job @markheath1010:disqus ! I really enjoyed reading this.
Hey Mark, That's a really nice list. For #2 Ignoring tasks I would use the discard operator to avoid a potential warning. Although, your suggestion about publishing a message makes a lot of sense. In the #10, about the handy optimization, I used to do it until I came across this Async Guidance . May I ask your thoughts on this matter? Oh, the #6 is a great advice! Thanks
Yes, if you read that article there is a note explaining that removing the await in certain circumstances results in more efficient code, but it comes with this gotcha plus a couple of others that the article you linked to (which I only discovered myself a few days ago) mentions.
Sometimes I feel the whole async await is an anti pattern :(. It spreads virally, isn't that easy to understand, hence it's easy to get wrong, and then the need to add all those repetitive calls to ConfigAsync(), which ruins the whole abstraction. It's like the dancing bear: It's not that the bear is good at dancing, but that it dance at all.
On viral spreading - yes, it does if you retrofit it into an existing non-async application, but for new development, your top level controllers/message handlers/startup code should be async, so it becomes much easier to flow async down to lower levels. And the need for ConfigureAwait does not apply to ASP.NET Core, so things are at least getting a bit easier. I think the underlying issue is that threading is hard, and as nice as keywords like async and await are, there is no magical way to make all the complexity of writing good async code go away.
Akka.NET would be simpler
thank you, thank you, thank you...
Wonderful article. I'm working in a code base where I've encountered each issue you've described, sad but true. I'm not sure if anyone else has pointed this out, but in .Net Core ConfigureAwait() does nothing because out of the box .Net Core doesn't have a SynchronizationContext. https://blog.stephencleary....
I would add another bad practice: on async Task methods, return null. It only crashs on runtime. https://dotnetfiddle.net/Fu...
yes, that's another one to avoid. Some mocking frameworks will return null by default on all mocked methods that return a class on an interface which is annoying if you want to mock an async interface.
Great summary! Thanks!
yes, it's nice we don't have to worry about that any more in .NET Core (although now that 3.0 supports WinForms/WPF I'm not sure how true that is any more)
Awesome read!. thank you
Good Job Thanks
I keep seeing all the wrong ways to write async code. I'm looking for the best way to write parallel async code using HttpClient that doesn't get hung up with a Wait state. Any pointers to code that does it right?
I do #5 all the time, but I wrap it in a DoWithSemaphore call to (optionally) restrict the thread count. I have this vague recollection that the task manager is intelligent enough to pool the threads, but I haven't actually seen that borne out in practice, hence my semaphore.
Hey Mark great article. I came across it in a Google result when I was researching what might be an async antipattern I'm doing. The compiler is giving me warnings and I cannot understand why. I have this method: public async Task EnqueuePodcastItemAsync(PodcastItemQueueItem item) { ... await queue.CreateIfNotExistsAsync().ConfigureAwait(false); await queue.AddMessageAsync(...); } and this caller: public async Task DoParsingStepAsync(PodcastItemQueueItem item) { await _storageManager.EnqueuePodcastItemAsync(item).ConfigureAwait(false); } The compiler is warning me that the method DoParsingStepAsync does not need to use async/await. Why??
it's not an "antipattern", but in this case, you can reduce a little bit of additional overhead by returning the Task that EnqueuePodcastItemAsync returns directly. However, as soon as DoParsingStepAsync needs to do another async action, or does something after the await, then you do need the async await keywords.
So the reason for the compiler warning about me not needing to use async await here - is it because it sees that I am awaiting multiple adjacent method calls but not using the results - therefore it is suggesting just pass the Task and not await until you absolutely have to? (to avoid the overhead of the state machine)?
yes, in this specific case, the state machine is not necessary. However, I've tended to allow developers to just use await in these circumstances anyway as they can find it confusing to know when this rule applies
Ok I see. Also I noticed something interesting. If I edit the DoParsingStepAsync method and add any line of code after the "await _storageManager.EnqueuePodcastItemAsync(item).ConfigureAwait(false);", .e.g. even a Console.Writeline(), the warning goes away. Why is that?
Dittoing the sentiment..
Good article! But there is no reason to use ConfigureAwait in net.core anymore. Now ConfigureAwait(false) is the by default behaviour!
This is an excellent article @markheath1010:disqus In the context of initializing a Command's execution method, currently Discards get rid of ex RelayCommand (async => await SomeMethoAsync) gives a warning about the issue you noted with any async void method, regarding exceptions being unhandled. C# 7.0 feature Discards silences this warning, but my question is does it resolve the issue or just outsmart the intellisense?
Well done, very nice article
Great article. I've just encountered #5 today in my job.
3 years later but figured I'd ask anyway - what are your thoughts on just wrapping an async/await in Task.Run if you can't convert your code to be async/await all the way down. There's the possibility to add ConfigureAwait(false) but that would be a lot of code changes and wrapping an async/await Task in Task.Run and calling .Result on the Task.Run doesn't result in a deadlock. Example - this doesn't result in a deadlock. Task.Run(()=> theAsyncFunctionWithoutAwait()).Result
Unfortunately I believe there are still scenarios in which Task.Run ... Result can cause a deadlock. The good news these days is that if you're using the latest .NET (e.g. .NET 6) there are far fewer scenarios in which you can get deadlocks and also hardly ever do you even need to run async code in a non-async method.
Nice write-up, Mark...
Nice article. Thank you
Await, and UI, and deadlocks! Oh my!
Stephen Toub - MSFT
January 13th, 2011
It's been awesome seeing the level of interest developers have had for the Async CTP and how much usage it's getting. Of course, with any new technology there are bound to be some hiccups. One issue I've seen arise now multiple times is developers accidentally deadlocking their application by blocking their UI thread, so I thought it would be worthwhile to take a few moments to explore the common cause of this and how to avoid such predicaments.
At its core, the new async language functionality aims to restore the ability for developers to write the sequential, imperative code they're used to writing, but to have it be asynchronous in nature rather than synchronous. That means that when operations would otherwise tie up the current thread of execution, they're instead offloaded elsewhere, allowing the current thread to make forward progress and do other useful work while, in effect, asynchronously waiting for the spawned operation to complete. In both server and client applications, this can be crucial for application scalability, and in client applications in particular it's also really useful for responsiveness.
Most UI frameworks, such as Windows Forms and WPF, utilize a message loop to receive and process incoming messages. These messages include things like notifications of keys being typed on a keyboard, or buttons being clicked on a mouse, or controls in the user interface being manipulated, or the need to refresh an area of the window, or even the application sending itself a message dictating some code to be executed. In response to these messages, the UI performs some action, such as redrawing its surface, or changing the text being displayed, or adding items to one of its controls, or running the code that was posted to it. The "message loop" is typically literally a loop in code, where a thread continually waits for the next message to arrive, processes it, goes back to get the next message, processes it, and so on. As long as that thread is able to quickly process messages as soon as they arrive, the application remains responsive, and the application's users remain happy. If, however, processing a particular message takes too long, the thread running the message loop code will be unable to pick up the next message in a timely fashion, and responsiveness will decrease. This could take the form of pauses in responding to user input, and if the thread's delays get bad enough (e.g. an infinite delay), the application "hanging".
In a framework like Windows Forms or WPF, when a user clicks a button, that typically ends up sending a message to the message loop, which translates the message into a call to a handler of some kind, such as a method on the class representing the user interface, e.g.:
private void button1_Click(object sender, RoutedEventArgs e)
{
    string s = LoadString();
    textBox1.Text = s;
}
Here, when I click the button1 control, the message will inform WPF to invoke the button1_Click method, which will in turn run a method LoadString to get a string value, and store that string value into the textBox1 control’s Text property. As long as LoadString is quick to execute, all is well, but the longer LoadString takes, the more time the UI thread is delayed inside button1_Click, unable to return to the message loop to pick up and process the next message.
To address that, we can choose to load the string asynchronously, meaning that rather than blocking the thread calling button1_Click from returning to the message loop until the string loading has completed, we'll instead just have that thread launch the loading operation and then go back to the message loop. Only when the loading operation completes will we then send another message to the message loop to say "hey, that loading operation you previously started is done, and you can pick up where you left off and continue executing." Imagine we had a method:
public Task<string> LoadStringAsync();
This method will return very quickly to its caller, handing back a .NET Task<string> object that represents the future completion of the asynchronous operation and its future result. At some point in the future when the operation completes, the task object will be able to hand out the operation's result, which could be the string in the case of successful loading, or an exception in the case of failure. Either way, the task object provides several mechanisms to notify the holder of the object that the loading operation has completed. One way is to synchronously block waiting for the task to complete, and that can be accomplished by calling the task's Wait method, or by accessing its Result, which will implicitly wait until the operation has completed… in both of these cases, a call to these members will not complete until the operation has completed. An alternative way is to receive an asynchronous callback, where you register with the task a delegate that will be invoked when the task completes. That can be accomplished using one of the Task's ContinueWith methods. With ContinueWith, we can now rewrite our previous button1_Click method to not block the UI thread while we're asynchronously waiting for the loading operation to complete:

private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate { textBox1.Text = s.Result; }); // warning: buggy
}

This does in fact asynchronously launch the loading operation, and then asynchronously run the code to store the result into the UI when the operation completes. However, we now have a new problem. UI frameworks like Windows Forms, WPF, and Silverlight all place a restriction on which threads are able to access UI controls, namely that the control can only be accessed from the thread that created it. Here, however, we're running the callback to update the Text of textBox1 on some arbitrary thread, wherever the Task Parallel Library (TPL) implementation of ContinueWith happened to put it. To address this, we need some way to get back to the UI thread. Different UI frameworks provide different mechanisms for doing this, but in .NET they all take basically the same shape, a BeginInvoke method you can use to pass some code as a message to the UI thread to be processed:

private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate
    {
        Dispatcher.BeginInvoke(new Action(delegate { textBox1.Text = s.Result; }));
    });
}

The .NET Framework further abstracts over these mechanisms for getting back to the UI thread, and in general a mechanism for posting some code to a particular context, through the SynchronizationContext class. A framework can establish a current context, available through the SynchronizationContext.Current property, which provides a SynchronizationContext instance representing the current environment. This instance's Post method will marshal a delegate back to this environment to be invoked: in a WPF app, that means bringing you back to the dispatcher, or UI thread, you were previously on. So, we can rewrite the previous code as follows:

private void button1_Click(object sender, RoutedEventArgs e)
{
    var sc = SynchronizationContext.Current;
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate
    {
        sc.Post(delegate { textBox1.Text = s.Result; }, null);
    });
}
and in fact this pattern is so common, TPL in .NET 4 provides the TaskScheduler.FromCurrentSynchronizationContext() method, which allows you to do the same thing with code like:
private void button1_Click(object sender, RoutedEventArgs e)
{
    LoadStringAsync().ContinueWith(s => textBox1.Text = s.Result,
        TaskScheduler.FromCurrentSynchronizationContext());
}

As mentioned, this works by "posting" the delegate back to the UI thread to be executed. That posting is a message like any other, and it requires the UI thread to go through its message loop, pick up the message, and process it (which will result in invoking the posted delegate). In order for the delegate to be invoked then, the thread first needs to return to the message loop, which means it must leave the button1_Click method.
Now, there's still a fair amount of boilerplate code to write above, and it gets orders of magnitude worse when you start introducing more complicated flow control constructs, like conditionals and loops. To address this, the new async language feature allows you to write this same code as:

private async void button1_Click(object sender, RoutedEventArgs e)
{
    string s = await LoadStringAsync();
    textBox1.Text = s;
}

For all intents and purposes, this is the same as the previous code shown, and you can see how much cleaner it is… in fact, it's close to identical in the code required to our original synchronous implementation. But, of course, this one is asynchronous: after calling LoadStringAsync and getting back the Task<string> object, the remainder of the function is hooked up as a callback that will be posted to the current SynchronizationContext in order to continue execution on the right thread when the loading is complete. The compiler is layering on some really helpful syntactic sugar here.
Now things get interesting. Let's imagine LoadStringAsync is implemented as follows:

static async Task<string> LoadStringAsync()
{
    string firstName = await GetFirstNameAsync();
    string lastName = await GetLastNameAsync();
    return firstName + " " + lastName;
}

LoadStringAsync is implemented to first asynchronously retrieve a first name, then asynchronously retrieve a last name, and then return the concatenation of the two. Notice that it's using "await", which, as pointed out previously, is similar to the aforementioned TPL code that uses a continuation to post back to the synchronization context that was current when the await was issued. So, here's the crucial point: for LoadStringAsync to complete (i.e. for it to have loaded all of its data and returned its concatenated string, completing the task it returned with that concatenated result), the delegates it posted to the UI thread must have completed. If the UI thread is unable to get back to the message loop to process messages, it will be unable to pick up the posted delegates that resulted from the asynchronous operations in LoadStringAsync completing, which means the remainder of LoadStringAsync will not run, which means the Task<string> returned from LoadStringAsync will not complete. It won't complete until the relevant messages are processed by the message loop.
With that in mind, consider this (faulty) reimplementation of button1_Click:
private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    textBox1.Text = s.Result; // warning: buggy
}

There's an exceedingly good chance that this code will hang your application. The Task<string>.Result property is strongly typed as a String, and thus it can't return until it has the valid result string to hand back; in other words, it blocks until the result is available. We're inside of button1_Click then blocking for LoadStringAsync to complete, but LoadStringAsync's implementation depends on being able to post code asynchronously back to the UI to be executed, and the task returned from LoadStringAsync won't complete until it does. LoadStringAsync is waiting for button1_Click to complete, and button1_Click is waiting for LoadStringAsync to complete. Deadlock!
This problem can be exemplified easily without using any of this complicated machinery, e.g.:
private void button1_Click(object sender, RoutedEventArgs e)
{
    var mre = new ManualResetEvent(false);
    SynchronizationContext.Current.Post(_ => mre.Set(), null);
    mre.WaitOne(); // warning: buggy
}

Here, we're creating a ManualResetEvent, a synchronization primitive that allows us to synchronously wait (block) until the primitive is set. After creating it, we post back to the UI thread to set the event, and then we wait for it to be set. But we're waiting on the very thread that would go back to the message loop to pick up the posted message to do the set operation. Deadlock.
The moral of this (longer than intended) story is that you should not block the UI thread. Contrary to Nike's recommendations, just don't do it. The new async language functionality makes it easy to asynchronously wait for your work to complete. So, on your UI thread, instead of writing:
Task<string> s = LoadStringAsync();
textBox1.Text = s.Result; // BAD ON UI
you can write:
Task<string> s = LoadStringAsync();
textBox1.Text = await s; // GOOD ON UI
Or instead of writing:
Task t = DoWork();
t.Wait(); // BAD ON UI
you can write:
Task t = DoWork();
await t; // GOOD ON UI
This isn't to say you should never block. To the contrary, synchronously waiting for a task to complete can be a very effective mechanism, and can exhibit less overhead in many situations than the asynchronous counterpart. There are also some contexts where asynchronously waiting can be dangerous. For these reasons and others, Task and Task<TResult> support both approaches, so you can have your cake and eat it too. Just be cognizant of what you're doing and when, and don't block your UI thread.
(One final note: the Async CTP includes the TaskEx.ConfigureAwait method. You can use this method to suppress the default behavior of marshaling back to the original synchronization context. This could have been used, for example, in the LoadStringAsync method to prevent those awaits from needing to return to the UI thread. This would not only have prevented the deadlock, it would have also resulted in better performance, because we now no longer need to force execution back to the UI thread, when nothing in that method actually needed to run on the UI thread.)
Stephen Toub - MSFT Partner Software Engineer, .NET
I must admit that this article (and the series of articles you have written on TPL) is one of the most comprehensive blogs I have ever read regarding TPL.
I have one quick question regarding this statement that you have made:
There's an exceedingly good chance that this code will hang your application. The Task.Result property is strongly typed as a String, and thus it can't return until it has the valid result string to hand back; in other words, it blocks until the result is available. We're inside of button1_Click then blocking for LoadStringAsync to complete, but LoadStringAsync's implementation depends on being able to post code asynchronously back to the UI to be executed, and the task returned from LoadStringAsync won't complete until it does. LoadStringAsync is waiting for button1_Click to complete, and button1_Click is waiting for LoadStringAsync to complete. Deadlock!
So the main problem as to why Deadlock occurs is because the UI Thread cannot process the message posted to the message pump right? Or is it because LoadStringAsync() method cannot even post message to the message pump because the UI thread is blocked by the caller waiting for LoadStringAsync() to complete. If my understanding is correct, the DeadLock happens because the UI thread cannot process the message posted to the message pump which means that LoadStringAsync did post to the message pump but that message cannot be picked up by the UI thread(as its waiting) and thus LoadStringAsync cannot mark itself as complete?
> So the main problem as to why Deadlock occurs is because the UI Thread cannot process the message posted to the message pump right?
Correct. The UI thread is blocked waiting for the task to complete, and the task won’t complete until the UI thread pumps messages, which won’t happen because the UI thread is blocked.

Volume 28 Number 03
Async/Await - Best Practices in Asynchronous Programming
By Stephen Cleary | March 2013
These days there's a wealth of information about the new async and await support in the Microsoft .NET Framework 4.5. This article is intended as a "second step" in learning asynchronous programming; I assume that you've read at least one introductory article about it. This article presents nothing new, as the same advice can be found online in sources such as Stack Overflow, MSDN forums and the async/await FAQ. This article just highlights a few best practices that can get lost in the avalanche of available documentation.
The best practices in this article are more what you'd call "guidelines" than actual rules. There are exceptions to each of these guidelines. I'll explain the reasoning behind each guideline so that it's clear when it does and does not apply. The guidelines are summarized in Figure 1; I'll discuss each in the following sections.
Figure 1 Summary of Asynchronous Programming Guidelines
Avoid Async Void
There are three possible return types for async methods: Task, Task<T> and void, but the natural return types for async methods are just Task and Task<T>. When converting from synchronous to asynchronous code, any method returning a type T becomes an async method returning Task<T>, and any method returning void becomes an async method returning Task. The following code snippet illustrates a synchronous void-returning method and its asynchronous equivalent:
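In sketch form (the method bodies here are illustrative, not the original figure):

void MyMethod()
{
    // Do synchronous work.
    Thread.Sleep(1000);
}

async Task MyMethodAsync()
{
    // Do asynchronous work.
    await Task.Delay(1000);
}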
Void-returning async methods have a specific purpose: to make asynchronous event handlers possible. It is possible to have an event handler that returns some actual type, but that doesn't work well with the language; invoking an event handler that returns a type is very awkward, and the notion of an event handler actually returning something doesn't make much sense. Event handlers naturally return void, so async methods return void so that you can have an asynchronous event handler. However, some semantics of an async void method are subtly different than the semantics of an async Task or async Task<T> method.
Async void methods have different error-handling semantics. When an exception is thrown out of an async Task or async Task<T> method, that exception is captured and placed on the Task object. With async void methods, there is no Task object, so any exceptions thrown out of an async void method will be raised directly on the SynchronizationContext that was active when the async void method started. Figure 2 illustrates that exceptions thrown from async void methods can't be caught naturally.
Figure 2 Exceptions from an Async Void Method Can't Be Caught with Catch
These exceptions can be observed using AppDomain.UnhandledException or a similar catch-all event for GUI/ASP.NET applications, but using those events for regular exception handling is a recipe for unmaintainability.
Async void methods have different composing semantics. Async methods returning Task or Task<T> can be easily composed using await, Task.WhenAny, Task.WhenAll and so on. Async methods returning void don't provide an easy way to notify the calling code that they've completed. It's easy to start several async void methods, but it's not easy to determine when they've finished. Async void methods will notify their SynchronizationContext when they start and finish, but a custom SynchronizationContext is a complex solution for regular application code.
Async void methods are difficult to test. Because of the differences in error handling and composing, it's difficult to write unit tests that call async void methods. The MSTest asynchronous testing support only works for async methods returning Task or Task<T>. It's possible to install a SynchronizationContext that detects when all async void methods have completed and collects any exceptions, but it's much easier to just make the async void methods return Task instead.
It's clear that async void methods have several disadvantages compared to async Task methods, but they're quite useful in one particular case: asynchronous event handlers. The differences in semantics make sense for asynchronous event handlers. They raise their exceptions directly on the SynchronizationContext, which is similar to how synchronous event handlers behave. Synchronous event handlers are usually private, so they can't be composed or directly tested. An approach I like to take is to minimize the code in my asynchronous event handler, for example, have it await an async Task method that contains the actual logic. The following code illustrates this approach, using async void methods for event handlers without sacrificing testability:
Async void methods can wreak havoc if the caller isn't expecting them to be async. When the return type is Task, the caller knows it's dealing with a future operation; when the return type is void, the caller might assume the method is complete by the time it returns. This problem can crop up in many unexpected ways. It's usually wrong to provide an async implementation (or override) of a void-returning method on an interface (or base class). Some events also assume that their handlers are complete when they return. One subtle trap is passing an async lambda to a method taking an Action parameter; in this case, the async lambda returns void and inherits all the problems of async void methods. As a general rule, async lambdas should only be used if they're converted to a delegate type that returns Task (for example, Func<Task>).
To summarize this first guideline, you should prefer async Task to async void. Async Task methods enable easier error-handling, composability and testability. The exception to this guideline is asynchronous event handlers, which must return void. This exception includes methods that are logically event handlers even if they're not literally event handlers (for example, ICommand.Execute implementations).
Async All the Way
Asynchronous code reminds me of the story of a fellow who mentioned that the world was suspended in space and was immediately challenged by an elderly lady claiming that the world rested on the back of a giant turtle. When the man enquired what the turtle was standing on, the lady replied, "You're very clever, young man, but it's turtles all the way down!" As you convert synchronous code to asynchronous code, you'll find that it works best if asynchronous code calls and is called by other asynchronous code, all the way down (or "up," if you prefer). Others have also noticed the spreading behavior of asynchronous programming and have called it "contagious" or compared it to a zombie virus. Whether turtles or zombies, it's definitely true that asynchronous code tends to drive surrounding code to also be asynchronous. This behavior is inherent in all types of asynchronous programming, not just the new async/await keywords.
"Async all the way" means that you shouldn't mix synchronous and asynchronous code without carefully considering the consequences. In particular, it's usually a bad idea to block on async code by calling Task.Wait or Task.Result. This is an especially common problem for programmers who are "dipping their toes" into asynchronous programming, converting just a small part of their application and wrapping it in a synchronous API so the rest of the application is isolated from the changes. Unfortunately, they run into problems with deadlocks. After answering many async-related questions on the MSDN forums, Stack Overflow and e-mail, I can say this is by far the most-asked question by async newcomers once they learn the basics: "Why does my partially async code deadlock?"
Figure 3 shows a simple example where one method blocks on the result of an async method. This code will work just fine in a console application but will deadlock when called from a GUI or ASP.NET context. This behavior can be confusing, especially considering that stepping through the debugger implies that it's the await that never completes. The actual cause of the deadlock is further up the call stack when Task.Wait is called.
Figure 3 A Common Deadlock Problem When Blocking on Async Code
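The pattern in Figure 3 is roughly as follows (a sketch rather than the exact listing):

public static class DeadlockDemo
{
    private static async Task DelayAsync()
    {
        await Task.Delay(1000);
    }

    // This method causes a deadlock when called in a GUI or ASP.NET context.
    public static void Test()
    {
        // Start the delay.
        var delayTask = DelayAsync();
        // Synchronously block until the delay completes.
        delayTask.Wait();
    }
}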
The root cause of this deadlock is due to the way await handles contexts. By default, when an incomplete Task is awaited, the current "context" is captured and used to resume the method when the Task completes. This "context" is the current SynchronizationContext unless it's null, in which case it's the current TaskScheduler. GUI and ASP.NET applications have a SynchronizationContext that permits only one chunk of code to run at a time. When the await completes, it attempts to execute the remainder of the async method within the captured context. But that context already has a thread in it, which is (synchronously) waiting for the async method to complete. They're each waiting for the other, causing a deadlock.
Note that console applications don't cause this deadlock. They have a thread pool SynchronizationContext instead of a one-chunk-at-a-time SynchronizationContext, so when the await completes, it schedules the remainder of the async method on a thread pool thread. The method is able to complete, which completes its returned task, and there's no deadlock. This difference in behavior can be confusing when programmers write a test console program, observe the partially async code work as expected, and then move the same code into a GUI or ASP.NET application, where it deadlocks.
The best solution to this problem is to allow async code to grow naturally through the codebase. If you follow this solution, you'll see async code expand to its entry point, usually an event handler or controller action. Console applications can't follow this solution fully because the Main method can't be async. If the Main method were async, it could return before it completed, causing the program to end. Figure 4 demonstrates this exception to the guideline: The Main method for a console application is one of the few situations where code may block on an asynchronous method.
Figure 4 The Main Method May Call Task.Wait or Task.Result
Allowing async to grow through the codebase is the best solution, but this means there's a lot of initial work for an application to see real benefit from async code. There are a few techniques for incrementally converting a large codebase to async code, but they're outside the scope of this article. In some cases, using Task.Wait or Task.Result can help with a partial conversion, but you need to be aware of the deadlock problem as well as the error-handling problem. I'll explain the error-handling problem now and show how to avoid the deadlock problem later in this article.
Every Task will store a list of exceptions. When you await a Task, the first exception is re-thrown, so you can catch the specific exception type (such as InvalidOperationException). However, when you synchronously block on a Task using Task.Wait or Task.Result, all of the exceptions are wrapped in an AggregateException and thrown. Refer again to Figure 4. The try/catch in MainAsync will catch a specific exception type, but if you put the try/catch in Main, then it will always catch an AggregateException. Error handling is much easier to deal with when you don't have an AggregateException, so I put the "global" try/catch in MainAsync.
So far, I've shown two problems with blocking on async code: possible deadlocks and more-complicated error handling. There's also a problem with using blocking code within an async method. Consider this simple example:
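In sketch form, something like:

public static async Task NotFullyAsync()
{
    await Task.Yield();  // returns an incomplete task to the caller
    Thread.Sleep(5000);  // then synchronously blocks whatever thread resumes the method
}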
This method isn't fully asynchronous. It will immediately yield, returning an incomplete task, but when it resumes it will synchronously block whatever thread is running. If this method is called from a GUI context, it will block the GUI thread; if it's called from an ASP.NET request context, it will block the current ASP.NET request thread. Asynchronous code works best if it doesn't synchronously block. Figure 5 is a cheat sheet of async replacements for synchronous operations.
Figure 5 The "Async Way" of Doing Things
To summarize this second guideline, you should avoid mixing async and blocking code. Mixed async and blocking code can cause deadlocks, more-complex error handling and unexpected blocking of context threads. The exception to this guideline is the Main method for console applications, or, if you're an advanced user, managing a partially asynchronous codebase.
Configure Context
Earlier in this article, I briefly explained how the "context" is captured by default when an incomplete Task is awaited, and that this captured context is used to resume the async method. The example in Figure 3 shows how resuming on the context clashes with synchronous blocking to cause a deadlock. This context behavior can also cause another problem, one of performance. As asynchronous GUI applications grow larger, you might find many small parts of async methods all using the GUI thread as their context. This can cause sluggishness as responsiveness suffers from "thousands of paper cuts."
To mitigate this, await the result of ConfigureAwait whenever you can. The following code snippet illustrates the default context behavior and the use of ConfigureAwait:
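A sketch of what that looks like:

async Task MyMethodAsync()
{
    // Code here runs in the original context.
    await Task.Delay(1000);
    // Code here also runs in the original context.
    await Task.Delay(1000).ConfigureAwait(continueOnCapturedContext: false);
    // Code here runs without the original context (in this case, on the thread pool).
}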
By using ConfigureAwait, you enable a small amount of parallelism: Some asynchronous code can run in parallel with the GUI thread instead of constantly badgering it with bits of work to do.
Aside from performance, ConfigureAwait has another important aspect: It can avoid deadlocks. Consider Figure 3 again; if you add "ConfigureAwait(false)" to the line of code in DelayAsync, then the deadlock is avoided. This time, when the await completes, it attempts to execute the remainder of the async method within the thread pool context. The method is able to complete, which completes its returned task, and there's no deadlock. This technique is particularly useful if you need to gradually convert an application from synchronous to asynchronous.
If you can use ConfigureAwait at some point within a method, then I recommend you use it for every await in that method after that point. Recall that the context is captured only if an incomplete Task is awaited; if the Task is already complete, then the context isn't captured. Some tasks might complete faster than expected in different hardware and network situations, and you need to graciously handle a returned task that completes before it's awaited. Figure 6 shows a modified example.
Figure 6 Handling a Returned Task that Completes Before It's Awaited
You should not use ConfigureAwait when you have code after the await in the method that needs the context. For GUI apps, this includes any code that manipulates GUI elements, writes data-bound properties or depends on a GUI-specific type such as Dispatcher/CoreDispatcher. For ASP.NET apps, this includes any code that uses HttpContext.Current or builds an ASP.NET response, including return statements in controller actions. Figure 7 demonstrates one common pattern in GUI apps: having an async event handler disable its control at the beginning of the method, perform some awaits and then re-enable its control at the end of the handler; the event handler can't give up its context because it needs to re-enable its control.
Figure 7 Having an Async Event Handler Disable and Re-Enable Its Control
Each async method has its own context, so if one async method calls another async method, their contexts are independent. Figure 8 shows a minor modification of Figure 7 .
Figure 8 Each Async Method Has Its Own Context
Context-free code is more reusable. Try to create a barrier in your code between the context-sensitive code and context-free code, and minimize the context-sensitive code. In Figure 8, I recommend putting all the core logic of the event handler within a testable and context-free async Task method, leaving only the minimal code in the context-sensitive event handler. Even if you're writing an ASP.NET application, if you have a core library that's potentially shared with desktop applications, consider using ConfigureAwait in the library code.
To summarize this third guideline, you should use ConfigureAwait when possible. Context-free code has better performance for GUI applications and is a useful technique for avoiding deadlocks when working with a partially async codebase. The exceptions to this guideline are methods that require the context.
Know Your Tools
There's a lot to learn about async and await, and it's natural to get a little disoriented. Figure 9 is a quick reference of solutions to common problems.
Figure 9 Solutions to Common Async Problems
The first problem is task creation. Obviously, an async method can create a task, and that's the easiest option. If you need to run code on the thread pool, use Task.Run. If you want to create a task wrapper for an existing asynchronous operation or event, use TaskCompletionSource<T>. The next common problem is how to handle cancellation and progress reporting. The base class library (BCL) includes types specifically intended to solve these issues: CancellationTokenSource/CancellationToken and IProgress<T>/Progress<T>. Asynchronous code should use the Task-based Asynchronous Pattern, or TAP ( msdn.microsoft.com/library/hh873175 ), which explains task creation, cancellation and progress reporting in detail.
Another problem that comes up is how to handle streams of asynchronous data. Tasks are great, but they can only return one object and only complete once. For asynchronous streams, you can use either TPL Dataflow or Reactive Extensions (Rx). TPL Dataflow creates a "mesh" that has an actor-like feel to it. Rx is more powerful and efficient but has a more difficult learning curve. Both TPL Dataflow and Rx have async-ready methods and work well with asynchronous code.
Just because your code is asynchronous doesn't mean that it's safe. Shared resources still need to be protected, and this is complicated by the fact that you can't await from inside a lock. Here's an example of async code that can corrupt shared state if it executes twice, even if it always runs on the same thread:
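A sketch of the sort of code in question (GetNextValueAsync is illustrative):

private int value;

private async Task UpdateValueAsync()
{
    // Reads value, suspends at the await, then writes the result back later;
    // if two calls interleave at the await, one update can be lost.
    value = await GetNextValueAsync(value);
}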
The problem is that the method reads the value and suspends itself at the await, and when the method resumes it assumes the value hasn't changed. To solve this problem, the SemaphoreSlim class was augmented with the async-ready WaitAsync overloads. Figure 10 demonstrates SemaphoreSlim.WaitAsync.
Figure 10 SemaphoreSlim Permits Asynchronous Synchronization
Asynchronous code is often used to initialize a resource that's then cached and shared. There isn't a built-in type for this, but Stephen Toub developed an AsyncLazy<T> that acts like a merge of Task<T> and Lazy<T>. The original type is described on his blog ( bit.ly/dEN178 ), and an updated version is available in my AsyncEx library ( nitoasyncex.codeplex.com ).
Finally, some async-ready data structures are sometimes needed. TPL Dataflow provides a BufferBlock<T> that acts like an async-ready producer/consumer queue. Alternatively, AsyncEx provides AsyncCollection<T>, which is an async version of BlockingCollection<T>.
I hope the guidelines and pointers in this article have been helpful. Async is a truly awesome language feature, and now is a great time to start using it!
Stephen Cleary  is a husband, father and programmer living in northern Michigan. He has worked with multithreading and asynchronous programming for 16 years and has used async support in the Microsoft .NET Framework since the first CTP. His home page, including his blog, is at stephencleary.com .
Thanks to the following technical expert for reviewing this article: Stephen Toub Stephen Toub works on the Visual Studio team at Microsoft. He specializes in areas related to parallelism and asynchrony.
C# Deadlocks in Depth - Part 1

For me, multi-threading programming is one of the most fun things I do as a developer. It's fun because it's hard and challenging. And I also get a particular sense of satisfaction when solving deadlocks (you'll see what I mean).
This series will go through understanding deadlocks, show common deadlock types, how to solve them, how to debug them and best practices to avoid them. In Part 1, I'll show one of the easiest deadlock scenarios, how to debug it in Visual Studio and finally how to fix it. We'll cover some of the basics, but I'll move quickly to more advanced topics as well.

Defining a Deadlock
A deadlock in C# is a situation where two or more threads are frozen in their execution because they are waiting for each other to finish. For example, thread A is waiting on lock_1 that is held by thread B. Thread B can't finish and release lock_1 because it waits on lock_2, which is held by thread A. Too confusing? I'll show you an example in a moment, but first let's talk about Locks.
Brief explanation of Locks
A Lock is a way for us to synchronize between Threads. A lock is a shared object that can be Acquired by a Thread, and also Released . Once Acquired, other threads can be made to halt execution until the lock is Released. A lock is usually placed around a critical section, where you want to allow a single Thread at a time. For example:
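The post's original snippet isn't reproduced here; the sketch below shows the kind of lock-protected critical section being described, matching the Singleton scenario mentioned next:

```csharp
public class Singleton
{
    private static readonly object _lockObj = new object();
    private static Singleton _instance;

    public static Singleton Instance
    {
        get
        {
            lock (_lockObj)                 // only one thread at a time may enter this block
            {
                if (_instance == null)
                    _instance = new Singleton();
                return _instance;
            }
        }
    }
}
```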
Without a lock , 2 threads might enter the critical section, ending up with 2 instances of our Singleton. The example uses the lock statement . A lock statement uses Monitor.Enter and Monitor.Exit under the hood. Another way to achieve locking is to use a Mutex or a Semaphore . We might talk about those as well.
Deadlock example 1: The Nested-Lock
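The original listing isn't reproduced here; this sketch reconstructs the scenario that the explanation below walks through:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class NestedLockDeadlock
{
    private static readonly object lock1 = new object();
    private static readonly object lock2 = new object();

    static void Main()
    {
        Console.WriteLine("Starting...");

        var task1 = Task.Run(() =>
        {
            lock (lock1)
            {
                Thread.Sleep(1000);
                lock (lock2) { Console.WriteLine("Task 1 finished"); }
            }
        });

        var task2 = Task.Run(() =>
        {
            lock (lock2)
            {
                Thread.Sleep(1000);
                lock (lock1) { Console.WriteLine("Task 2 finished"); }
            }
        });

        Task.WaitAll(task1, task2);   // never returns: both tasks wait on each other's lock
        Console.WriteLine("Finished");
    }
}
```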
Explanation of the code:
- Two objects are created for lock purposes. In C#, any object can be used as a lock.
- Task.Run starts 2 Tasks, which are run by 2 Threads on the Thread-Pool .
- The first Thread acquires lock1 and sleeps for 1 second. The second acquires lock2 and also sleeps for a second. Afterward, thread 1 waits for lock2 to be released and thread 2 waits for lock1 to be released. So they both wait indefinitely and result in a Deadlock .
- Task.WaitAll(task1, task2) waits on the method's Thread until both Tasks are finished, which never happens. This makes it a 3-Thread deadlock. The Console print is: Starting...
Debugging a Deadlock
You can see the deadlock in the debugger easily, once you know what to look for. In the example above, running the code in Visual Studio results in a hang. Hit Debug | Break All (Ctrl + Alt + Break), then go to Debug | Windows | Threads. You'll see the following:

This is what a deadlock looks like in the debugger. As you can see, the Main Thread (on the left) is stuck on Task.WaitAll(). The other 2 Threads are stuck on the inner lock statement. In fact, to recognize deadlocks, you should look for Threads stuck on one of the following:
- lock statements
- WaitOne() methods when working with AutoResetEvent, Mutex, Semaphore, EventWaitHandle.
- WaitAll() and WaitAny() when working with Tasks.
- Join() when working with Threads.
- .Result and .GetAwaiter().GetResult() when working with Tasks.
- Dispatcher.Invoke() when working in WPF.
When you see the debugger's execution point stuck on any of the above, there's a big chance you have a deadlock. In the following parts of this series, we'll see examples of deadlocks with most, if not all, of those statements.
Solving the Nested-Lock Deadlock
Now that you've recognized the deadlock, it's time to solve it. There are several ways to go about it. The obvious one being: don't use a lock within a lock. That's not always possible though. For example, let's say each lock represents an Account. We want to use the lock on each operation on the account. When we do an operation with both accounts (like a Transfer), we want to lock both of them.
Solution #1 - Nest the locks in the same order
If we nest the locks in the same order, there's not going to be a deadlock. Let's change the code in our example a bit to mimic the Account Locking problem:
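A hedged reconstruction of the account scenario (the Account type and the 1-second sleep are illustrative): each transfer takes the 'from' lock first, so two opposite transfers can deadlock just like the nested-lock example.

```csharp
public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }
    public object LockObj { get; } = new object();
}

static void Transfer(Account from, Account to, decimal amount)
{
    lock (from.LockObj)
    {
        Thread.Sleep(1000);          // simulate work, widening the deadlock window
        lock (to.LockObj)
        {
            from.Balance -= amount;
            to.Balance += amount;
        }
    }
}
```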
Now, to solve it by nesting the locks in the same order, we need to change:
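One illustrative way to impose a consistent order (not necessarily the post's exact fix): always take the lock of the account with the smaller Id first, regardless of transfer direction.

```csharp
static void Transfer(Account from, Account to, decimal amount)
{
    var first  = from.Id < to.Id ? from : to;    // the outer lock is the same for any pair of accounts
    var second = from.Id < to.Id ? to : from;

    lock (first.LockObj)
    {
        lock (second.LockObj)
        {
            from.Balance -= amount;
            to.Balance += amount;
        }
    }
}
```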
Since the outer lock is going to be the same in all transfers, there's no deadlock. One of the Threads is going to wait in the outer lock until the first Thread finishes, then go on.
Solution #2 - Use Timeout
Another way to solve this is to use a Timeout when waiting for a lock to be released. If the lock isn't released within some time, the operation is canceled. It can be moved back to an operation queue or something similar and executed at a later time. Or just try again after a small delay.
Remember, we said that the lock statement is actually Monitor.Enter() and Monitor.Exit() under the hood. When using those methods, it's possible to pass a Timeout as a parameter. This means that if the lock fails to be acquired within the Timeout, false is returned.
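A sketch of the retry-with-timeout idea using Monitor.TryEnter (illustrative, not the post's exact listing), reusing the Account type from the sketch above:

```csharp
static bool TryTransfer(Account from, Account to, decimal amount, TimeSpan timeout)
{
    if (!Monitor.TryEnter(from.LockObj, timeout))
        return false;                               // couldn't get the first lock; the caller can retry later

    try
    {
        if (!Monitor.TryEnter(to.LockObj, timeout))
            return false;                           // couldn't get the second lock; the first is released below

        try
        {
            from.Balance -= amount;
            to.Balance += amount;
            return true;
        }
        finally
        {
            Monitor.Exit(to.LockObj);
        }
    }
    finally
    {
        Monitor.Exit(from.LockObj);
    }
}
```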
In our case, we try to acquire both locks. If acquiring fails, we simply release both and try again. Theoretically, it might be possible with this method to always fail to do an operation, when both Threads acquire the outer lock at exactly the same time and then fail to acquire the inner lock. But in practice it's pretty much impossible, because the thread-switching timing will be different on each attempt.
It's worth mentioning that modern applications with Transfer type of operations can avoid locks entirely by using patterns like Event Sourcing.
In this part, we talked a bit about locks, saw one type of deadlock, and covered how to debug it and 2 ways to solve it.
As a best practice, be very suspicious when using locks inside other locks. This might be missed since the entire method can be within a locked context. Another best practice is if you do need to use a lock, place as little code as possible inside.
This is going to be a 2-part or 3-part series, I'm not sure yet. In the next part(s), I'll show some of the more common deadlocks and more sophisticated ways to debug them.
Here's a little spoiler deadlock from the next part of the series:
Did you get that particular satisfaction that comes along with solving a deadlock? If you did, check out C# Deadlocks in Depth Part 2. And I'd love it if you subscribed to the blog to be notified of more in-depth C# articles. Happy coding.

Don't Block on Async Code
This is a problem that is brought up repeatedly on the forums and Stack Overflow. I think it's the most-asked question by async newcomers once they've learned the basics.
Consider the example below. A button click will initiate a REST call and display the results in a text box (this sample is for Windows Forms, but the same principles apply to any UI application).
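The article's exact listing isn't reproduced here; the sketch below is consistent with the walkthrough that follows (textBox1 and the URI are assumptions, and System.Text.Json's JsonDocument stands in for whatever JSON type the original used):

```csharp
public static async Task<JsonDocument> GetJsonAsync(Uri uri)
{
    using (var client = new HttpClient())
    {
        var jsonString = await client.GetStringAsync(uri);  // awaited without ConfigureAwait(false)
        return JsonDocument.Parse(jsonString);
    }
}

// Windows Forms button click handler.
public void Button1_Click(object sender, EventArgs e)
{
    var jsonTask = GetJsonAsync(new Uri("https://example.com/api/values"));
    textBox1.Text = jsonTask.Result.RootElement.GetRawText();  // synchronously blocks the UI thread
}
```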
The GetJsonAsync helper method takes care of making the actual REST call and parsing it as JSON. The button click handler waits for the helper method to complete and then displays its results.
This code will deadlock.
ASP.NET Example
This example is very similar; we have a library method that performs a REST call, only this time it's used in an ASP.NET context (Web API in this case, but the same principles apply to any ASP.NET application):
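Again a hedged reconstruction rather than the original listing: the same GetJsonAsync helper, blocked synchronously from a Web API controller action.

```csharp
public class MyController : ApiController
{
    public string Get()
    {
        var jsonTask = GetJsonAsync(new Uri("https://example.com/api/values"));
        return jsonTask.Result.RootElement.GetRawText();   // synchronously blocks the request context
    }
}
```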
This code will also deadlock. For the same reason.
What Causes the Deadlock
Here's the situation: remember from my intro post that after you await a Task, when the method continues it will continue in a context.
In the first case, this context is a UI context (which applies to any UI except Console applications). In the second case, this context is an ASP.NET request context.
One other important point: an ASP.NET request context is not tied to a specific thread (like the UI context is), but it does only allow one thread in at a time . This interesting aspect is not officially documented anywhere AFAIK, but it is mentioned in my MSDN article about SynchronizationContext .
So this is what happens, starting with the top-level method (Button1_Click for UI / MyController.Get for ASP.NET):
- The top-level method calls GetJsonAsync (within the UI/ASP.NET context).
- GetJsonAsync starts the REST request by calling HttpClient.GetStringAsync (still within the context).
- GetStringAsync returns an uncompleted Task, indicating the REST request is not complete.
- GetJsonAsync awaits the Task returned by GetStringAsync. The context is captured and will be used to continue running the GetJsonAsync method later. GetJsonAsync returns an uncompleted Task, indicating that the GetJsonAsync method is not complete.
- The top-level method synchronously blocks on the Task returned by GetJsonAsync. This blocks the context thread.
- Eventually, the REST request will complete. This completes the Task that was returned by GetStringAsync.
- The continuation for GetJsonAsync is now ready to run, and it waits for the context to be available so it can execute in the context.
- Deadlock. The top-level method is blocking the context thread, waiting for GetJsonAsync to complete, and GetJsonAsync is waiting for the context to be free so it can complete.
For the UI example, the "context" is the UI context; for the ASP.NET example, the "context" is the ASP.NET request context. This type of deadlock can be caused for either "context".
Preventing the Deadlock
There are two best practices (both covered in my intro post ) that avoid this situation:
- In your "library" async methods, use ConfigureAwait(false) wherever possible.
- Don't block on Tasks; use async all the way down.
Consider the first best practice. The new "library" method looks like this:
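A sketch of the change, applied to the GetJsonAsync sketch above: the only difference is ConfigureAwait(false) on the await.

```csharp
public static async Task<JsonDocument> GetJsonAsync(Uri uri)
{
    using (var client = new HttpClient())
    {
        var jsonString = await client.GetStringAsync(uri).ConfigureAwait(false);
        return JsonDocument.Parse(jsonString);
    }
}
```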
This changes the continuation behavior of GetJsonAsync so that it does not resume on the context. Instead, GetJsonAsync will resume on a thread pool thread. This enables GetJsonAsync to complete the Task it returned without having to re-enter the context. The top-level methods, meanwhile, do require the context, so they cannot use ConfigureAwait(false) .
Using ConfigureAwait(false) to avoid deadlocks is a dangerous practice. You would have to use ConfigureAwait(false) for every await in the transitive closure of all methods called by the blocking code, including all third- and second-party code. Using ConfigureAwait(false) to avoid deadlocks is at best just a hack.
As the title of this post points out, the better solution is "Don't block on async code".
Consider the second best practice. The new "top-level" methods look like this:
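Hedged sketches of the asynchronous versions of the two top-level methods (names and URI as assumed above):

```csharp
// UI example: async void is acceptable here because it is an event handler.
public async void Button1_Click(object sender, EventArgs e)
{
    var json = await GetJsonAsync(new Uri("https://example.com/api/values"));
    textBox1.Text = json.RootElement.GetRawText();
}

// ASP.NET Web API example.
public class MyController : ApiController
{
    public async Task<string> Get()
    {
        var json = await GetJsonAsync(new Uri("https://example.com/api/values"));
        return json.RootElement.GetRawText();
    }
}
```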
This changes the blocking behavior of the top-level methods so that the context is never actually blocked; all "waits" are "asynchronous waits".
Note: It is best to apply both best practices. Either one will prevent the deadlock, but both must be applied to achieve maximum performance and responsiveness.
- My introduction to async/await is a good starting point.
- Stephen Toub's blog post Await, and UI, and deadlocks! Oh, my! covers this exact type of deadlock (in January of 2011, no less!).
- If you prefer videos, Stephen Toub demoed this deadlock live (39:40 - 42:50, but the whole presentation is great!).
- The Async/Await FAQ goes into detail on exactly when contexts are captured and used for continuations.
This kind of deadlock is always the result of mixing synchronous with asynchronous code. Usually this is because people are just trying out async with one small piece of code and use synchronous code everywhere else. Unfortunately, partially-asynchronous code is much more complex and tricky than just making everything asynchronous.
If you do need to maintain a partially-asynchronous code base, then be sure to check out two more of Stephen Toub's blog posts: Asynchronous Wrappers for Synchronous Methods and Synchronous Wrappers for Asynchronous Methods, as well as my AsyncEx library.
Answered Questions
There are scores of answered questions out there that are all caused by the same deadlock problem. It has shown up on WinRT, WPF, Windows Forms, Windows Phone, MonoDroid, Monogame, and ASP.NET.
Update (2014-12-01): For more details, see my MSDN article on asynchronous best practices or Section 1.2 in my Concurrency Cookbook .

How to Run an Async Method Synchronously in .NET
Posted by Code Maze | Updated Apr 25, 2023

In this article, we will learn how to run an async method synchronously in .NET.
Let’s start.
Different Ways We Can Run Asynchronous Methods Synchronously
First, let’s scaffold a simple console application in Visual Studio. Alternatively, we can create the project using the CLI command:
dotnet new console
After that, let’s add a Person class with its properties:
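The article's class isn't shown here; an assumed minimal shape for Person (the property names are illustrative):

```csharp
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```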

Having set up the project, let’s see how we can run an async method synchronously.
To recap, an async method has the async keyword in its method signature and the await keyword in its body. The await keyword suspends execution of the rest of the method until the asynchronous operation is complete. However, this happens without blocking the thread.
We use async methods for operations that don’t execute instantly, like fetching data from a remote server.
By running an async method synchronously, we block the current thread until the operation executes to completion. We have a great article on asynchronous programming with async and await in ASP.NET Core if this is a new topic for you.
We will be looking at different approaches to running an async method synchronously, namely:
- Task.RunSynchronously()
- Task.Wait()
- Task.Result
- GetAwaiter().GetResult()
Since we are discussing synchronous programming, please note that each of the approaches discussed in this article blocks the current thread until the results are available . We should do our best to avoid blocking asynchronous code because this could cause deadlocks as explained in Don’t Block on Async Code by Stephen Cleary.
We’ve got a lot to cover, so let’s dive in.
Using Task.RunSynchronously() to Run a Method Synchronously
We use the Task.RunSynchronously() method to execute a task in a synchronous manner. The tasks execute on the same thread one after another as determined by the TaskScheduler . Please refer to the article on the differences between tasks and threads for a detailed exploration.
In addition to that, the Task.RunSynchronously() method can execute a task only once. If the task has already been started, calling Task.RunSynchronously() throws an InvalidOperationException.
To demonstrate how it works, let’s create a new PersonService class and add a GetPeople() method:
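A sketch consistent with the description below (the data is made up). Note that RunSynchronously() only works on a "cold" task created with the Task constructor, never on a task returned by an async method.

```csharp
public class PersonService
{
    public List<Person> GetPeople()
    {
        var task = new Task<List<Person>>(() =>
        {
            Thread.Sleep(1000);      // simulate a long-running operation
            return new List<Person>
            {
                new Person { Name = "John", Age = 30 }
            };
        });

        task.RunSynchronously();     // runs the task via the current TaskScheduler and waits for it
        return task.Result;
    }
}
```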
First, we create a task that pauses the thread execution for 1 second before returning a list of Person objects. We have this 1-second delay to simulate a long-running operation, for instance, fetching data over a network.
Calling the RunSynchronously() method, we wait for 1 second before getting the result back. This means if we had more operations that need to run on the same thread, they would be held up for 1 second. For example, a UI thread would freeze for 1 second before being responsive again.
Now that we’ve covered the Task.RunSynchronously() method, let’s look at another alternative.
Using Task.Wait()
The Task.Wait() method similarly blocks the thread until a task is completed, is canceled, or has timed out.
The method takes an optional timeout parameter which specifies the amount of time to wait for the task to complete before proceeding with program execution. If the wait times out, the Task.Wait() method returns false; otherwise, it returns true.
Also, if the task gets canceled during execution or it ends in a failed state, then it throws an exception wrapped in an AggregateException .
Let’s add an async method to the PersonService class:
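A sketch of the async method described below (the data is made up):

```csharp
public async Task<List<Person>> GetPeopleAsync()
{
    await Task.Delay(1000);          // simulate fetching data over a network
    return new List<Person>
    {
        new Person { Name = "Jane", Age = 25 }
    };
}
```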
This method delays the task for 1 second before returning a list of Person objects. This way, we’re simulating an asynchronous operation.
Next, let’s add a new method to see Task.Wait() in action:
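A sketch matching the walkthrough below (the method name is assumed):

```csharp
public List<Person> GetPeopleWithWait()
{
    var task = GetPeopleAsync();
    task.Wait();                     // blocks the current thread until the task completes
    return task.Result;              // the task is already complete, so this returns immediately
}
```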
We first create a task by calling the GetPeopleAsync() method. Then, we call the Task.Wait() method, which blocks the thread until the task has been executed successfully. In this case, the thread is blocked for 1 second until the list of Person objects is returned. Finally, we read the results of the operation using Task.Result .
Using Task.Result to Synchronously Run a Method
The Task.Result property returns the result of a completed Task<T> .Â
Let’s implement this by adding a new method to the PersonService class:
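A sketch of the Task.Result approach (the method name is assumed):

```csharp
public List<Person> GetPeopleWithResult()
{
    Task<List<Person>> task = GetPeopleAsync();
    return task.Result;              // blocks until the task completes, then returns its value
}
```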
We’re first creating a Task<List<Person>> task by calling the GetPeopleAsync() method. To get the results of the task, we call Task.Result which returns the list.
Task.Result blocks when the result is not yet available; once the task has completed, it returns the result immediately.
In case an exception is thrown, it is wrapped in an AggregateException .
Having seen how both Task.Wait() and Task.Result work, let’s look at an alternative approach.
Using GetAwaiter().GetResult()
Calling GetAwaiter().GetResult() is equivalent to using the Task.Wait() method and Task.Result. However, GetAwaiter().GetResult() is generally preferred because it propagates exceptions directly instead of wrapping them in an AggregateException.
Let’s add a method that calls the GetPeopleAsync() method synchronously using the GetAwaiter().GetResult() method:
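A sketch of the GetAwaiter().GetResult() approach (the method name is assumed):

```csharp
public List<Person> GetPeopleWithGetResult()
{
    // Blocks like Task.Result, but rethrows any exception directly instead of wrapping it.
    return GetPeopleAsync().GetAwaiter().GetResult();
}
```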
Similarly, we are synchronously calling GetPeopleAsync() which is an asynchronous method. After we run our code, we have to wait for one second for the task to fully execute before getting the response.
Exception Handling When Synchronously Calling an Async Method
The AggregateException wrapper that’s thrown when using Task.Wait() and Task.Result makes error handling difficult, that’s where GetAwaiter().GetResult() comes to the rescue. The GetResult() method checks for exceptions in the task, and if any, it throws them directly without having a wrapper around them.
This way, it’s easier for us to catch specific exceptions instead of catching a general AggregateException and checking the inner exceptions.
Having mentioned the differences in exception handling between Task.Wait(), Task.Result and GetAwaiter().GetResult(), let's see that in action.
To start off, let’s create a new async method that throws an exception:
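A sketch of the failing async method described below (the exception message is made up):

```csharp
public async Task ThrowExceptionAsync()
{
    await Task.Delay(1000);
    throw new InvalidOperationException("Something went wrong");
}
```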
This method delays task execution for 1 second before throwing an InvalidOperationException .
Let’s add a new method to demonstrate AggregateException :
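A sketch matching the walkthrough below: because Wait() is used, the failure surfaces as an AggregateException (the method name is assumed).

```csharp
public void HandleWithAggregateException()
{
    var task = ThrowExceptionAsync();

    try
    {
        task.Wait();                              // blocks for 1 second, then throws
    }
    catch (AggregateException ex)
    {
        foreach (var inner in ex.InnerExceptions)
            Console.WriteLine(inner.Message);     // prints the InvalidOperationException message
    }
}
```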
First, we create a task and assign it from the ThrowExceptionAsync() method. Then, in the try-catch block, we call the Wait() method, which blocks the thread for 1 second and then throws an InvalidOperationException.
Since we’re using the Wait() method, the InvalidOperationException is wrapped in an AggregateException . To get the actual error message, we loop over the InnerExceptions in the AggregateException . We would do the same if we were using Task.Result .
If we have multiple exception types to handle, we have to write additional code for each case, which quickly complicates the method.
Let’s see how we can handle exceptions when using GetAwaiter().GetResult() method:
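A sketch of the same scenario with GetAwaiter().GetResult(), where the original exception type can be caught directly (the method name is assumed):

```csharp
public void HandleWithDirectException()
{
    try
    {
        ThrowExceptionAsync().GetAwaiter().GetResult();   // rethrows the original exception
    }
    catch (InvalidOperationException ex)
    {
        Console.WriteLine(ex.Message);
    }
}
```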
Similar to the first approach, we also use the try-catch block. The only difference is that we catch the exception type directly instead of unwrapping an AggregateException. This is a simpler approach than the one we used with the Task.Wait() method.
In this article, we have learned how to run an async method synchronously in .NET. We have looked at the different ways to do this, and how each approach is similar or different from the other.
We have also covered exception handling, more specifically, using the AggregateException and Exception types. However, all these approaches block the thread execution and therefore could result in deadlocks. If necessary, use them with a lot of caution.
