One of the quickest ways to kill your API’s performance is to force the user to wait for things they don’t actually need to witness.
If a user clicks “Register,” they want a success message immediately. They don’t want to watch a spinning loader while your server waits for an SMTP connection, generates a PDF invoice, or retries a flaky third-party API.
In this post, we will explore the architectural patterns of asynchronous processing in ASP.NET Core. We’ll look at when to keep it simple with in-memory queues, when to scale out with RabbitMQ (using MassTransit), and how to tell the user: “It’s done.”
The Problem: The Blocking HTTP Request
Imagine an HTTP POST to /api/orders.
- Save Order to DB (50ms)
- Call Inventory API (500ms – 2s)
- Generate PDF Invoice (3s)
- Send Confirmation Email (1s)
If you do this synchronously, the user waits 5+ seconds. If the email server is down, the whole order fails. This is bad UX and bad architecture.
The solution is the Fire and Forget pattern (often returning an HTTP 202 Accepted). You accept the work, queue it, and process it in the background.
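At the HTTP level, the pattern can be sketched like this (minimal API style; `IBackgroundTaskQueue` and `ProcessOrderAsync` are placeholders for whichever queuing mechanism you choose from the strategies below):

```csharp
// Sketch only: IBackgroundTaskQueue and ProcessOrderAsync are placeholders
// for whatever queuing mechanism you adopt.
app.MapPost("/api/orders", async (CreateOrderDto dto, IBackgroundTaskQueue queue) =>
{
    var jobId = Guid.NewGuid();

    // Accept the work and hand it off to a background worker
    await queue.QueueBackgroundWorkItemAsync(ct => ProcessOrderAsync(jobId, dto, ct));

    // 202 Accepted + a URL the client can poll for status
    return Results.Accepted($"/api/jobs/{jobId}", new { jobId });
});
```

The key detail is the `Location`-style URL in the 202 response: the client gets a handle to check on later, instead of a connection held open for 5 seconds.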
Strategy 1: The “Keep It Simple” Approach (In-Memory)
You don’t always need a distributed message bus. If you have a single instance of your API (monolith) and the background task isn’t mission-critical (meaning, if the server crashes and the task is lost, it’s annoying but not a disaster), stick to .NET native features.
The Tool: System.Threading.Channels + BackgroundService
Forget fire-and-forget Task.Run (dangerous in ASP.NET: exceptions go unobserved, and in-flight work dies silently when the app shuts down). The standard way to handle this today is a Producer/Consumer pattern using System.Threading.Channels.
When to use this:
- Sending non-critical emails (Password reset, Welcome emails).
- Simple logging or analytics updates.
- Single-server architecture.
The Code:
```csharp
using System.Threading.Channels;

// 1. Define the interface / implementation for the queue
public interface IBackgroundTaskQueue
{
    ValueTask QueueBackgroundWorkItemAsync(Func<CancellationToken, ValueTask> workItem);
    ValueTask<Func<CancellationToken, ValueTask>> DequeueAsync(CancellationToken cancellationToken);
}

public class BackgroundTaskQueue : IBackgroundTaskQueue
{
    private readonly Channel<Func<CancellationToken, ValueTask>> _queue;

    public BackgroundTaskQueue()
    {
        // BoundedChannelFullMode.Wait will slow down producers if the consumer can't keep up
        var options = new BoundedChannelOptions(100) { FullMode = BoundedChannelFullMode.Wait };
        _queue = Channel.CreateBounded<Func<CancellationToken, ValueTask>>(options);
    }

    public async ValueTask QueueBackgroundWorkItemAsync(Func<CancellationToken, ValueTask> workItem)
    {
        await _queue.Writer.WriteAsync(workItem);
    }

    public async ValueTask<Func<CancellationToken, ValueTask>> DequeueAsync(CancellationToken cancellationToken)
    {
        return await _queue.Reader.ReadAsync(cancellationToken);
    }
}
```
You then register a BackgroundService (Hosted Service) that listens to this channel and executes the delegates.
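A minimal version of that hosted service might look like this (the name `QueuedHostedService` is illustrative):

```csharp
// Registered in Program.cs alongside the queue itself:
//   builder.Services.AddSingleton<IBackgroundTaskQueue, BackgroundTaskQueue>();
//   builder.Services.AddHostedService<QueuedHostedService>();
public class QueuedHostedService : BackgroundService
{
    private readonly IBackgroundTaskQueue _taskQueue;
    private readonly ILogger<QueuedHostedService> _logger;

    public QueuedHostedService(IBackgroundTaskQueue taskQueue, ILogger<QueuedHostedService> logger)
    {
        _taskQueue = taskQueue;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                var workItem = await _taskQueue.DequeueAsync(stoppingToken);
                await workItem(stoppingToken);
            }
            catch (OperationCanceledException)
            {
                // Graceful shutdown
            }
            catch (Exception ex)
            {
                // Swallow and log: one failed task must not kill the loop
                _logger.LogError(ex, "Error executing background work item.");
            }
        }
    }
}
```

Note the try/catch around each work item: without it, a single unhandled exception terminates the loop and every subsequent queued task is silently never processed.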
Pros: No extra infrastructure (Redis/RabbitMQ), extremely fast, easy to debug.
Cons: Data Loss. If the application restarts, queued items are gone forever. No retries out of the box.
Strategy 2: The “Enterprise” Approach (Distributed Queues)
When reliability is non-negotiable, or you are running Microservices (where one API creates the job and a different Worker Service processes it), you need a durable Message Broker.
The Tool: RabbitMQ + MassTransit
While you can use the RabbitMQ.Client library directly, don’t. The code gets messy dealing with connection recovery, serialization, and topology. Use a framework like MassTransit or Wolverine. They abstract the broker, giving you retries, poison queues, and maintainability for free.
When to use this:
- Durability: If the server crashes, the message stays in RabbitMQ until a server comes back up.
- Scaling: If PDF generation is heavy, you can spin up 10 Worker Services consuming from the same queue while keeping your API lightweight.
- Long-running processes: Tasks that take minutes or hours.
The Code (MassTransit):
Producer (API):
```csharp
// In your controller; _publishEndpoint is an injected MassTransit IPublishEndpoint
[HttpPost]
public async Task<IActionResult> CreateOrder(CreateOrderDto dto)
{
    // Save to DB...

    // Publish the event; MassTransit handles serialization and routing
    await _publishEndpoint.Publish(new OrderCreatedEvent { OrderId = order.Id });

    return Accepted();
}
```
Consumer (Worker Service):
```csharp
public class OrderCreatedConsumer : IConsumer<OrderCreatedEvent>
{
    private readonly IEmailService _emailService;

    public OrderCreatedConsumer(IEmailService emailService)
        => _emailService = emailService;

    public async Task Consume(ConsumeContext<OrderCreatedEvent> context)
    {
        var orderId = context.Message.OrderId;

        // If this throws, MassTransit will automatically retry based on the configured policy
        await _emailService.SendConfirmationAsync(orderId);
    }
}
```
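For completeness, here is roughly what the wiring looks like in Program.cs. The queue name, credentials, and retry intervals are illustrative; check the MassTransit documentation for your version:

```csharp
builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<OrderCreatedConsumer>();

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("localhost", "/", h =>
        {
            h.Username("guest");
            h.Password("guest");
        });

        cfg.ReceiveEndpoint("order-created", e =>
        {
            // Exponential backoff: 5 attempts before the message
            // lands in the _error (poison) queue
            e.UseMessageRetry(r => r.Exponential(5,
                TimeSpan.FromSeconds(1),
                TimeSpan.FromMinutes(1),
                TimeSpan.FromSeconds(5)));

            e.ConfigureConsumer<OrderCreatedConsumer>(context);
        });
    });
});
```

This is where the "retries and poison queues for free" claim cashes out: the retry policy lives in configuration, not scattered through your business logic.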
Pros: Extremely reliable, scalable, supports competing consumers, handles retries gracefully (Exponential Backoff).
Cons: Operational complexity (Managing a RabbitMQ instance/cluster), learning curve.
Decision Matrix: Which one do I choose?
| Scenario | Recommendation | Why? |
|---|---|---|
| “Forgot Password” Email | In-Memory | If the server crashes during send, the user just clicks the button again. Low risk. |
| Credit Card Charging | RabbitMQ | You cannot lose this message. It must be durable and transactional. |
| Generating Monthly Report | RabbitMQ | Typically a long-running process. Allows you to offload the CPU usage to a separate worker server so the API stays fast. |
| Calling unreliable External API | RabbitMQ | MassTransit handles the retry logic (“Redelivery”) far better than inside a loop in a HostedService. |
Closing the Loop: Notifying the Caller
Since we returned a 202 Accepted immediately, how does the user know when the PDF is actually ready?
1. SignalR (Real-time)
If the user is staring at your Blazor or React app waiting for the result, SignalR is the gold standard.
- Flow: Client connects to a Hub -> API fires job -> Worker processes -> Worker invokes Hub -> Client receives update.
- Context: Great for progress bars (“Processing: 45%…”) or popup toaster notifications.
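On the server side, the worker pushes through an injected IHubContext. A minimal sketch (`JobHub`, `JobNotifier`, and the "JobCompleted" event name are all illustrative):

```csharp
// The hub the client connects to; empty is fine for server-to-client pushes
public class JobHub : Hub { }

public class JobNotifier
{
    private readonly IHubContext<JobHub> _hubContext;

    public JobNotifier(IHubContext<JobHub> hubContext)
        => _hubContext = hubContext;

    // Push a completion event to the specific user who started the job
    public Task NotifyCompletedAsync(string userId, Guid jobId)
        => _hubContext.Clients.User(userId).SendAsync("JobCompleted", new { jobId });
}
```

One caveat: if your worker is a separate process from the API hosting the hub, it cannot invoke the hub directly; you need a backplane (e.g. the Redis backplane) or an internal endpoint on the API that the worker calls.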
2. Webhooks (B2B)
If your API is being consumed by another company’s software, you should use Webhooks.
- Flow: The caller provides a callback_url in the initial request. When your background worker finishes, it sends an HTTP POST to that URL with the result.
- Context: Standard for Payment Gateways (Stripe) or CI/CD pipelines.
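The worker-side delivery is just an HTTP POST; a minimal sketch (method name and payload shape are illustrative):

```csharp
// After the background job finishes, notify the caller's endpoint.
// callbackUrl is whatever the client supplied when creating the job.
public async Task SendWebhookAsync(HttpClient client, string callbackUrl, Guid jobId)
{
    var payload = new { jobId, status = "Completed" };
    var response = await client.PostAsJsonAsync(callbackUrl, payload);

    // Callers can be flaky too: treat non-2xx as a failure and let
    // your retry policy (e.g. MassTransit redelivery) handle it
    response.EnsureSuccessStatusCode();
}
```

In production you would also sign the payload (as Stripe does) so the receiver can verify it came from you.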
3. Polling (The Fallback)
- Flow: The client receives a JobId in the 202 response, then polls an endpoint /api/jobs/{id} every 5 seconds to check status (Pending -> Completed).
- Context: Simple to implement, works without Websockets, but inefficient on network traffic.
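The status endpoint itself is trivial; a sketch where `IJobStore` is a placeholder for wherever you persist job state (DB table, Redis, etc.):

```csharp
// Illustrative job-status endpoint; IJobStore is hypothetical
app.MapGet("/api/jobs/{id:guid}", async (Guid id, IJobStore store) =>
{
    var job = await store.FindAsync(id);
    return job is null
        ? Results.NotFound()
        : Results.Ok(new { job.Id, job.Status }); // "Pending" | "Completed"
});
```

This means your background worker must write its progress somewhere durable, which is a small extra cost the SignalR and webhook approaches also benefit from.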
4. Email / Push Notification
The true “Fire and Forget.” The user closes the browser. 10 minutes later, they get an email: “Your export is ready for download.”
Summary
Asynchronous processing is the hallmark of a mature .NET backend.
- Start by asking: “Can I afford to lose this task if the server restarts?”
- If Yes: Use System.Threading.Channels and a BackgroundService.
- If No: Spin up a Docker container with RabbitMQ and use MassTransit.
- Once the work is done, close the loop using SignalR for real-time users or Webhooks for API integrations.
By moving heavy lifting off the request thread, you ensure your ASP.NET Core API remains snappy, scalable, and resilient.