Explore why traditional MediatR pipeline validation might be hurting your .NET application's architecture. Discover how to move validation to domain objects with value objects, following the 'Parse, Don't Validate' principle.
When working with MediatR in .NET applications, pipeline behaviors serve as powerful middleware components that can intercept and process requests before they reach their handlers. One common use case for pipeline behaviors is validation - ensuring commands contain valid data before processing them. However, this approach might not be the best solution for maintaining clean and reliable code.
Let's explore why pipeline validation might be problematic and how we can improve it by moving validation to the command parameters themselves.
Understanding MediatR Pipelines and Their Purpose
MediatR pipelines act as middleware in your application, intercepting requests before they reach their handlers. Think of them as security checkpoints in an airport - each checkpoint can inspect, modify, or even reject passengers (requests) before they reach their destination (handlers).
Here's what a typical pipeline behavior looks like:
public class MyBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        // Do action before handler
        var response = await next();
        // Do action after handler
        return response;
    }
}
The traditional approach to validation using MediatR involves creating a validation pipeline behavior. Let's look at an example where we validate a command for creating a goat:
public interface IBehaviorValidator;

public interface IBehaviorValidator<in T> : IBehaviorValidator
{
    Validation Validate(T request);
}
public class ValidationBehavior<T, TOutput>(IEnumerable<IBehaviorValidator<T>> validators)
    : IPipelineBehavior<T, TOutput>
    where T : IRequest
{
    public async Task<TOutput> Handle(T request, RequestHandlerDelegate<TOutput> next, CancellationToken cancellationToken)
    {
        foreach (var validator in validators)
        {
            var result = validator.Validate(request);
            if (result.IsValid)
                continue;

            // Here is where the command is invalid: abort the pipeline
            // before the handler runs (a bare `throw;` is only valid
            // inside a catch block, so we throw an explicit exception).
            throw new ValidationException(result);
        }
        return await next();
    }
}
All that's left is to integrate the ValidationBehavior into the dependency injection container, and every command that has a registered validator will inevitably pass through it.
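As a sketch of that registration, assuming MediatR 12+ with Microsoft.Extensions.DependencyInjection (adapt the assembly and service lifetimes to your project):

```csharp
// Register MediatR handlers, then the open-generic validation behavior.
// ValidationBehavior and IBehaviorValidator are the types defined above.
services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(ValidationBehavior<,>));

// Each concrete validator is registered against its validator interface.
services.AddTransient<IBehaviorValidator<AddGoatCommand>, AddGoatValidation>();
```

Because the behavior is registered as an open generic, it wraps every request type; requests without a matching validator simply iterate over an empty collection.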
So what's the problem?
The Problems with Pipeline Validation
To see the problem, we'll create our first validation for the creation of a goat via the AddGoatCommand.
public record AddGoatCommand(string Name) : IRequest;
This check ensures that the goat's name has a minimum length of 3 characters.
public class AddGoatValidation : IBehaviorValidator<AddGoatCommand>
{
    public Validation Validate(AddGoatCommand command)
    {
        // Guard against null as well, since nothing stops a null name
        // from reaching this point.
        if (string.IsNullOrEmpty(command.Name) || command.Name.Length < 3)
            return new Validation(BadRequest.WithLegacyMessage("Goat name must be at least 3 characters long"));

        return new Validation();
    }
}
There are a number of problems here, which we will look at:
- First, it's possible to create commands that are invalid
- Second, the command is checked only once, in the pipeline
Problem 1: Invalid Commands in the Message Bus
The first significant issue with pipeline validation is that we're allowing invalid commands to enter our message bus. Consider this scenario:
// Somewhere in the application, using the AddGoatCommand defined earlier
await mediator.Send(new AddGoatCommand("A"));
Even though this command will eventually fail validation, we've already:
- Created an invalid command
- Serialized it
- Sent it through the message bus
- Started the pipeline processing
The system must allocate memory and process the command object, serialize it for the message bus, initialize the pipeline context, and set up validation - all before discovering the command was invalid from the start. This creates unnecessary load on the system.
In high-throughput systems handling thousands of commands per second, these wasted operations can significantly impact performance. By validating at creation instead, we can avoid all these unnecessary steps and fail fast when data is invalid.
Problem 2: Violating "Parse, Don't Validate"
The "Parse, Don't Validate" principle, introduced by Alexis King, advocates for constructing only valid data structures rather than creating potentially invalid ones and validating them later. This principle has several important implications:
- Data should be validated at construction time, not after
- If an object exists, it should be valid by definition
- Invalid states should be unrepresentable in the type system
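To make the contrast concrete, here is a minimal, hypothetical sketch of the two styles (NonEmptyName is an illustration for this section, not part of the article's code):

```csharp
// "Validate later" style: a raw string can hold anything,
// so every consumer has to re-check it.
string rawName = "A"; // invalid, but nothing stops us from passing it around

// "Parse, don't validate" style: construction either yields a
// proven-valid value or fails, so downstream code never re-checks.
public sealed record NonEmptyName
{
    public string Value { get; }
    private NonEmptyName(string value) => Value = value;

    // Returning null (instead of throwing) forces callers to handle
    // the failure case explicitly at the boundary.
    public static NonEmptyName? TryCreate(string? value) =>
        string.IsNullOrWhiteSpace(value) ? null : new NonEmptyName(value.Trim());
}
```

Once a NonEmptyName exists, its validity is a fact the type system carries for you; the raw string carries no such guarantee.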
In our current approach with pipeline validation, we're doing exactly what this principle advises against: we create a command with potentially invalid data, pass it around our system, and only then check if it's valid. This is both inefficient and theoretically unsound.
Consider our AddGoatCommand - nothing in its type signature indicates that the name must be at least 3 characters long. A developer working with this command has no way to know about this constraint without diving into the validation code. This is a classic example of implicit requirements that should be made explicit through the type system.
The Solution: Command Parameter Validation
The solution to our validation problems requires us to rethink our approach. Instead of validating data after it has been created, we need to guarantee its validity from the outset. A goat name is not a simple string of characters - it's a business concept with its own rules.
By using a value object, an immutable class that represents this business concept, we can encapsulate these rules directly in the type itself. When a developer sees a GoatName in the code, they immediately understand that it's a valid name that respects all the business rules.
This approach transforms validation into true domain modeling: a GoatName cannot exist in an invalid state - either its creation succeeds, or it fails immediately. Let's see how to implement this solution:
public class GoatName
{
    private readonly string _value;

    private GoatName(string value)
    {
        _value = value;
    }

    public static GoatName Create(string value)
    {
        if (string.IsNullOrEmpty(value) || value.Length < 3)
        {
            throw new InvalidGoatNameException("Goat name must be at least 3 characters long");
        }

        return new GoatName(value);
    }

    public override string ToString() => _value;
}
Now we can update our command to use this value object:
public record AddGoatCommand(GoatName Name) : IRequest;
Now it's no longer possible to create an AddGoatCommand with an invalid name - perfect!
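As a quick sketch of the fail-fast behaviour, reusing the GoatName, AddGoatCommand, and InvalidGoatNameException types from above:

```csharp
// Valid name: the value object and the command are constructed normally,
// and the command can be sent through MediatR.
var goatName = GoatName.Create("Billy");
var command = new AddGoatCommand(goatName);

// Invalid name: Create throws immediately - no command object is allocated,
// nothing is serialized, and the pipeline never runs.
try
{
    GoatName.Create("A");
}
catch (InvalidGoatNameException ex)
{
    Console.WriteLine(ex.Message);
}
```

The failure now happens at the call site that produced the bad data, not deep inside the message bus.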
Conclusion
Using a MediatR pipeline for validation is by no means the most optimal way of checking command data. By creating domain objects that contain the business rules, we gain the following benefits:
- Validation at Source: Invalid commands can't exist - they fail fast at creation.
- Self-Documenting Code: The GoatName class clearly shows what makes a valid goat name.
- Single Source of Truth: Validation rules are encapsulated in the domain model.
- Type Safety: The compiler helps ensure we're using valid goat names everywhere.
- Simplified Pipeline: Validation pipelines can focus on cross-cutting concerns rather than basic validation.
- Better Domain Modeling: Our code now reflects that a goat's name is a business concept with rules, not just a string.
By moving validation from the pipeline to the command parameters, we've created a more robust, maintainable, and theoretically sound solution. Remember: if your commands can exist in an invalid state, you might want to reconsider your design.
Have a goat day 🐐