MediatR and me

I was only recently introduced to MediatR through the magic of Reddit.

Jimmy Bogard also makes the excellent AutoMapper tool, so MediatR was worth looking into. He’s written about his use of it for a long while on his blog.

Flattening the layers

His posts on implementation patterns and dealing with duplication are the real gems though.

My particular problem is that in my three layers in my REST API, I find I’m constantly injecting new classes to try to avoid duplication of logic.  This MSDN post actually articulates the problem I’ve got with repositories and whatnot.

Given my layers:

  • Web (Controllers in ASP.NET Core terms)
  • Business stuff
  • Data Access

I want to keep some separation so I can concentrate on just the testable logic in my business layer. Web should just transform a request and call the relevant business object. The business logic needs data sometimes.

He says “I want MediatR to serve as the outermost window into the actual domain-specific behavior in my application” which is great. The end result is a Controller class that barely does anything except call IMediator.
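To give an idea of what that looks like, here's a rough sketch of the kind of controller I end up with (route, class, and request names are made up for illustration):

[Route("api/locations")]
public class LocationsController : Controller
{
    private readonly IMediator _mediator;

    public LocationsController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> Create([FromBody] CreateLocation request)
    {
        // Translate the HTTP request into a message; MediatR finds the handler
        var location = await _mediator.Send(request);
        return Ok(location);
    }
}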

However, I don’t want a handler that just duplicates my problem and hides it behind MediatR so that only my controller is prettier.  How do I organize things to be simpler while still having some layering, reusability, and little duplication?

Handlers calling Handlers

Really what happens is that my “business” layer handlers end up calling other business handlers or data handlers.
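As a rough illustration (the types and properties are hypothetical, and I'm on a MediatR version where handlers are async and take a CancellationToken), a "business" handler ends up looking something like this:

public class CreateLocationHandler : IRequestHandler<CreateLocation, Location>
{
    private readonly IMediator _mediator;

    public CreateLocationHandler(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<Location> Handle(CreateLocation request, CancellationToken cancellationToken)
    {
        // The "business" part mostly just builds another message
        // and hands it off to a data-access handler.
        var newLocation = new Locations.NewLocation
        {
            Name = request.Name,
            SiteId = request.SiteId,
            CreatedBy = request.UserName
        };

        return await _mediator.Send(newLocation, cancellationToken);
    }
}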

My unit tests look like a series of these Moq statements:

mediator.Setup(x => x.Send(It.IsAny<Locations.NewLocation>(), CancellationToken.None))
    .Callback<IRequest<Location>, CancellationToken>((r, ct) =>
    {
        // Assert on the contents of the message that gets sent through MediatR
        var request = (Locations.NewLocation) r;
        Assert.Equal(newSite.Name, request.Name);
        Assert.Equal(userName, request.CreatedBy);
        Assert.Equal(2, request.SiteId);
        Assert.Equal(4, request.LocationTypeId);
        Assert.Equal(345, request.ReportingUnitId);
    })
    .ReturnsAsync(location);

All I’m really testing is that a message gets passed through MediatR and that the message’s contents are right.

Is this bad?

It seems like it is.

MediatR basically just divides everything with loosely coupled message passing. It could all end up as a huge message soup with tons of layers of indirection.

Jimmy has a good example of how he uses MediatR, AutoMapper and other things with ASP.NET Core on github: https://github.com/jbogard/ContosoUniversityCore

However, the logic is just a basic CRUD app. Nothing needs to share anything.

Is there anything that can stop that?

Just discipline, I guess.

The good: a cache in the pipeline!

(I’m waving my hands with the implementation details of caching as that’s code for another post.)

I put a Redis cache on my reference data from the database.  I had a pattern around my data access but it was a lot of copy/paste.  Now I have a marker interface for my requests, and caching happens automatically because the MediatR pipeline picks it up.

The cache pipeline handler with a marker interface is declared like this:

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class

Now, any request that goes through MediatR and implements the IUseDistributedCache marker interface will go through this pipeline handler.
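Fleshing that declaration out, the behavior is roughly this shape. This is only a sketch: the Newtonsoft.Json serialization and the cache key are placeholders (a real key needs to be derived from the request's values), and my MediatR version's behavior signature takes a CancellationToken.

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class
{
    private readonly IDistributedCache _cache;

    public DistributedCachingHandler(IDistributedCache cache)
    {
        _cache = cache;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        // Placeholder cache key: derive something stable from the request's values in real code
        var key = typeof(TRequest).FullName;

        var cached = await _cache.GetStringAsync(key);
        if (cached != null)
        {
            return JsonConvert.DeserializeObject<TResponse>(cached);
        }

        // Cache miss: run the rest of the pipeline (ultimately the real handler) and store the result
        var response = await next();
        await _cache.SetStringAsync(key, JsonConvert.SerializeObject(response));
        return response;
    }
}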

Actually, the generics and type resolution are not done by MediatR but by your IoC container of choice.  I was sticking to the default container in ASP.NET Core.  However, its resolution isn’t as sophisticated as StructureMap, Autofac, etc., and it ends up throwing errors when trying to create pipeline types generically constrained by the marker interface.  So now I’ve just plugged in Autofac and still use IServiceCollection as I normally would.
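The wiring in Startup ends up roughly this shape (a sketch only: Populate and AutofacServiceProvider come from the Autofac.Extensions.DependencyInjection package, and the MediatR handler registrations are elided):

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // ...all the usual IServiceCollection registrations stay as they are...

    var builder = new ContainerBuilder();

    // Autofac respects the generic constraints, so this open generic only
    // closes over requests that implement IUseDistributedCache.
    builder.RegisterGeneric(typeof(DistributedCachingHandler<,>))
        .As(typeof(IPipelineBehavior<,>));

    builder.Populate(services);
    return new AutofacServiceProvider(builder.Build());
}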

The good part 2: FluentValidation and MediatR

(I’m waving my hands with the implementation details of this as that’s code for another post.)

FluentValidation is a good library for creating validation classes for your POCOs that can then be plugged in anywhere.  I want it plugged into two places: when ASP.NET Core is accepting a model from a REST call (using an ActionFilter) and also in my MediatR pipeline!

I made a MediatR pipeline handler that takes any request/response pair and sends it through FluentValidation. If any validators are registered, they are run.
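Again, the details are for another post, but the pipeline handler is roughly this shape (a sketch assuming validators are registered in the container; it throws FluentValidation's ValidationException when anything fails):

public class ValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        // Run every registered validator for this request type and collect failures
        var failures = _validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(error => error != null)
            .ToList();

        if (failures.Count != 0)
        {
            throw new ValidationException(failures);
        }

        return next();
    }
}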

For the Action Filter, FluentValidation is chained onto your AddMvc method and marks the ModelState as invalid when validation fails. You can handle this in many ways, but I made another Action Filter to automatically return a response when the model is invalid.
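That extra filter is tiny; a sketch of it (mine formats the errors a bit more) looks like this, registered globally through AddMvc's options so every action gets it:

public class ValidateModelStateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // The FluentValidation MVC integration has already populated ModelState,
        // so just short-circuit the action with a 400 if anything failed.
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}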

Keeping secrets safe with ASP.NET Core and Credstash

Originally posted on Medium: https://medium.com/@adamhathcock/keeping-secrets-safe-with-asp-net-core-and-credstash-b6e268176791

I primarily use Amazon Web Services and .NET Core. Most .NET users tend to look to Azure by default because of Microsoft support. However, I strongly prefer AWS. All information here deals with .NET Core and AWS.

Secrets

Keeping secrets secure seems to be a pretty hard problem. The tool with the biggest mindshare behind it seems to be HashiCorp Vault, but it’s an application with more infrastructure to set up just to get it working.

I really hate having to run third-party applications alongside my own cloud applications. I only do it when I’m forced to: basically, only when Amazon doesn’t have a matching service or the AWS service isn’t fit for purpose.

However, they do have one here: the Key Management Service (KMS). I’m not going to get into detail about it, but it’s not suitable by itself.

Fortunately, someone else already did some leg work to use KMS: enter Credstash

Credstash

You can read about Credstash on the github site but basically it’s a command line utility to add and retrieve secrets. It’s Python-based and perfectly good for administering secrets. However, I want to use it with my .NET Core applications.

Credstash uses KMS to protect keys and DynamoDB to store encrypted values.

Credstash vs Hashicorp Vault

Reference: Credstash

Vault is really neat and they do some cool things (dynamic secret generation, key-splitting to protect master keys, etc.), but there are still some reasons why you might pick credstash over vault:

  • Nothing to run. If you want to run vault, you need to run the secret storage backend (consul or some other datastore), you need to run the vault server itself, etc. With credstash, there’s nothing to run. All of the data and key storage is handled by AWS services.
  • Lower cost for a small number of secrets. If you just need to store a small handful of secrets, you can easily fit the credstash DDB table in the free tier, and pay ~$1 per month for KMS. So you get good secret management for about a buck a month.
  • Simple operations. Similar to “nothing to run”, you don’t need to worry about getting a quorum of admins together to unseal your master keys, don’t need to worry about monitoring, runbooks for when the secret service goes down, etc. It does expose you to risk of AWS outages, but if you’re running on AWS, you have that anyway.

That said, if you want to do master key splitting, are not running on AWS, care about things like dynamic secret generation, have a trust boundary that’s smaller than an instance, or want to use something other than AWS creds for AuthN/AuthZ, then vault may be a better choice for you.

Narochno.Credstash

I created an ASP.NET Core configuration-compatible reader for Credstash. It’s fairly simple and so far is working well.
Find it on NuGet and use it like so:

// Locally ("alpha") use profile-stored credentials; on an instance use the IAM instance profile
AWSCredentials creds = new StoredProfileAWSCredentials();
if (!env.EnvironmentName.MatchesNoCase("alpha"))
{
    creds = new InstanceProfileAWSCredentials();
}

builder.AddCredstash(new CredstashConfigurationOptions()
{
    // Must match the encryption context the secrets were stored with
    EncryptionContext = new Dictionary<string, string>()
    {
        {"environment", env.EnvironmentName}
    },
    Region = RegionEndpoint.EUWest1,
    Credentials = creds
});

There’s probably more there than you need but I need it.
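Once that’s wired up, the secrets just come through IConfiguration like any other value. For example (the key name here is made up):

// "db-password" is a hypothetical key previously stored with the credstash CLI
var dbPassword = Configuration["db-password"];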

For AWS creds, I use locally stored credentials in my profile for development. I call this environment alpha, so don’t sweat that. On an instance, I want to use IAM instance-profile permissions. Usage of this is covered on the Credstash page.

KMS has a concept of EncryptionContexts that are basically just key/value pairs that need to match in order for the decryption of secrets to be successful. I use the environment name as an extra value to segment secrets by.

Conclusion

I can finally have something secure without having values hardcoded in a repo somewhere. KMS has an audit trail and Credstash uses an immutable value system to version secrets so that old values are still there.

It’s cheap, easy to set up, and works with C# now. Everything I need.

DistributedCache extensions for Data Protection in ASP.NET Core

Repo: https://github.com/Visibilityltd/Visibility.AspNetCore.DataProtection.DistributedCache

This contains two simple classes:

  • DistributedCache DataProtection Provider
  • DistributedCache PropertiesDataFormat

DataProtection Provider

When running distributed, stateless ASP.NET Core web servers, you need your Data Protection keys saved to a location shared among your servers.

The default key storage providers from the ASP.NET Core team are listed here.

I was just going to use the built-in Redis option, but that implementation is hard-coded against Redis itself. I’m already using the DistributedCache Redis provider, so why not just hook into that? Now I don’t need to configure two different things.
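For context, the only Redis-specific wiring left is the IDistributedCache registration itself (a sketch; the connection string and instance name are placeholders), and the extension shown below just rides on top of it:

// One Redis registration serves both general caching and, via the extension
// below, Data Protection key storage.
services.AddDistributedRedisCache(options =>
{
    options.Configuration = "my-redis:6379"; // placeholder
    options.InstanceName = "myapp:";
});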

Usage

services.AddDataProtection()
    .PersistKeysToDistributedCache();

Boom: if you’re using IDistributedCache, your generated Data Protection keys are now persisted there.

DistributedCache PropertiesDataFormat

Another issue is that the state on the URL used for authentication can be large. Why not use the cache instead?

This is inspired by, and mostly copied from: https://github.com/IdentityServer/IdentityServer4/issues/407

Usage

Useful for any Authentication middleware. You need to hook it into the AuthenticationOptions for your protocol like so:

I’m using CAS Authentication

var dataProtectionProvider = app.ApplicationServices.GetRequiredService<IDataProtectionProvider>();
var distributedCache = app.ApplicationServices.GetRequiredService<IDistributedCache>();

var dataProtector = dataProtectionProvider.CreateProtector(
    typeof(CasAuthenticationMiddleware).FullName,
    typeof(string).FullName, schemeName,
    "v1");

//TODO: think of a better way to create
var dataFormat = new DistributedPropertiesDataFormat(distributedCache, dataProtector);

...

app.UseCasAuthentication(x =>
{
    x.StateDataFormat = dataFormat;
    ...
});

The OpenID Connect and OAuth middleware have StateDataFormat in their options. I’m sure others do too.
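For example, with the ASP.NET Core 1.x OpenID Connect middleware it plugs in the same way (sketch only; the other options are elided):

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    // ...authority, client id, and the rest of the usual settings...
    StateDataFormat = dataFormat
});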

.NET Core 1.1 building with Docker and Cake

I’m going to attempt to catalog how I’m using Docker to test and build containers that are for deployment into Amazon ECS.

Build Process

  1. Use Dockerfile.build
    • Uses Cake:
      1. dotnet restore
      2. dotnet build
      3. dotnet test
      4. dotnet publish
  2. Save running image to container
  3. Copy publish directory out of container
  4. Use Dockerfile
    • Copy publish directory into image
  5. Push built image to ECS

Driving the build: Cake

I love Cake and have contributed some minor things to it. It does support .NET Core. However, the nuget.exe used to drive some critical things like nuget push does not. push is actually the only command I need that isn’t on .NET Core. So I standardized on requiring Mono for just the build container.
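For what it’s worth, the push task I need is just the standard Cake alias, which shells out to nuget.exe and is why Mono has to be in the build container. A sketch (the artifact path, feed URL, and environment variable name are placeholders):

Task("Push")
    .IsDependentOn("Publish")
    .Does(() =>
{
    foreach(var package in GetFiles("./artifacts/*.nupkg"))
    {
        // NuGetPush drives nuget.exe under the hood, hence Mono on Linux
        NuGetPush(package, new NuGetPushSettings
        {
            Source = "https://www.myget.org/F/my-feed/api/v2/package", // placeholder
            ApiKey = EnvironmentVariable("NUGET_API_KEY")
        });
    }
});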

My base Cake file: build.cake

var target = Argument("target", "Default");
var tag = Argument("tag", "cake");

Task("Restore")
  .Does(() =>
{
    DotNetCoreRestore("src/\" \"test/\" \"integrate/");
});

Task("Build")
    .IsDependentOn("Restore")
  .Does(() =>
{
    DotNetCoreBuild("src/**/project.json\" \"test/**/project.json\" \"integrate/**/project.json");
});

Task("Test")
    .IsDependentOn("Build")
  .Does(() =>
{
    var files = GetFiles("test/**/project.json");
    foreach(var file in files)
    {
        DotNetCoreTest(file.ToString());
    }
});

Task("Publish")
    .IsDependentOn("Test")
  .Does(() =>
{
    var settings = new DotNetCorePublishSettings
    {
        Framework = "netcoreapp1.1",
        Configuration = "Release",
        OutputDirectory = "./publish/",
        VersionSuffix = tag
    };

    DotNetCorePublish("src/Server", settings);
});

Task("Default")
    .IsDependentOn("Test");

RunTarget(target);

I broke out all the steps as I often run Cake for each step during development. You’ll notice that each dotnet command behaves differently. It’s very annoying.

I have a project structure that usually goes like this:

  • src – Source files
  • test – Unit tests for those source files
  • integrate – Integration tests that should run separately from unit tests.
  • misc – Other code stuff

Other things to notice:

  • Default is Test. I don’t want to accidentally publish.
  • Publish has a hard-coded entry point (src/Server). I should probably make that an argument.
  • tag is the value I want to tag the published build with. I want something unique for each publish; it defaults to cake for local publishes.

The Build Container: Dockerfile.build

I actually started by following the little HOW-TO from the ASP.NET team here:

FROM cl0sey/dotnet-mono-docker:1.1-sdk

ARG TAG=docker
ENV TAG ${TAG}

WORKDIR /app
RUN mkdir /publish

COPY . .
RUN ./build.sh -t publish --scriptargs "--tag=${TAG}"

Notice the source image: cl0sey/dotnet-mono-docker:1.1-sdk

Someone was nice enough to already make a Docker image with Mono on top of the base microsoft/dotnet:1.1-sdk-projectjson image. The SDK image is needed to use all of the dotnet CLI commands, not just run the app.

Notice:

  • ARG and ENV declarations for specifying the tag variable. I think ARG declares it and ENV allows it to be used as a bash-like variable.
  • creating a publish directory.
  • How I pass the tag variable to the Cake script.

The Deployment Container: Dockerfile

FROM microsoft/dotnet:1.1.0-runtime

COPY ./publish /app
WORKDIR /app

EXPOSE 5000

ENV ASPNETCORE_ENVIRONMENT beta

ENTRYPOINT ["dotnet", "Server.dll"]

Notice:

  • I actually use the official runtime image.
  • COPY command to grab the local publish directory and put it in the app directory inside the container.
  • I keep the default 5000 port. Why not? It’s all hidden in AWS.
  • I just declared my environment to be beta instead of staging.
  • ENTRYPOINT has to be an array of strings. Server.dll is the executable assembly.

Hanging It All Together: CircleCI

I’m using CircleCI as my CI service because it’s free/cheap. Also, it runs Docker and can do Docker inside Docker. The docker commands will work just about anywhere though.

machine:
  services:
    - docker

dependencies:
  override:
    - docker info

test:
  override:
    - docker build -t build-image --build-arg TAG="${CIRCLE_BRANCH}-${CIRCLE_BUILD_NUM}" -f Dockerfile.build .
    - docker create --name build-cont build-image


deployment:
  beta:
    branch: master
    commands:
    - docker cp build-cont:/app/publish/. publish/
    - docker build -t server-api:latest .
    - docker tag server-api:latest $AWS_ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/server-api:$CIRCLE_BUILD_NUM
    - ./push.sh

Notice:

  • test phase
    • The test phase does docker build on Dockerfile.build. This file does everything, including publish. The image is tagged as build-image.
    • test phase also creates a container called build-cont for possible deployment.
    • My tag is made of the branch name plus the build number. These are CircleCI variables.
  • deployment phase
    • named beta. I could have more environments for deployment, I guess.
    • locked to the master branch. When I push feature branches, only the test phase runs to test things. Only when merged into master does it deploy.
    • docker cp copies the publish directory out of the build-cont container.
    • Dockerfile is used with docker build and tagged as server-api:latest
    • I also explicitly tag the image with my AWS ECS specific name. CircleCI hides my AWS account id in an environment variable for me.
    • push.sh actually does the push to AWS.

push.sh to AWS ECS

Finally, I want to push my Docker image to ECR.

#!/usr/bin/env bash

configure_aws_cli(){
    aws --version
    aws configure set default.region eu-west-1
    aws configure set default.output json
}

push_ecr_image(){
    eval $(aws ecr get-login --region eu-west-1)
    docker push $AWS_ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/server-api:$CIRCLE_BUILD_NUM
}

configure_aws_cli
push_ecr_image

The bash script is copied in part from something else more complicated. You can’t just do the push command from circle.yml because of the need to use eval to log in to AWS. My AWS push creds are also locked in CircleCI environment variables that the aws ecr get-login command expects.