.NET Core on Circle CI 2.0 using Docker and Cake

I’ve only just started with Circle 2.0, which just had its beta tag removed.

It’s completely Docker based, which I adore. I refuse to package code any other way these days.

My goal was to build on what I previously did on Circle CI, but using only an official Microsoft .NET Core SDK Docker image. Having to layer extra tools onto another image and manage that is extra work. I abhor extra work.

.circleci/config.yml

Circle 2.0 moves its YAML into a subdirectory, which seems to be in vogue these days, so we can have lots of files for specific services!

version: 2
jobs:
  build:
    working_directory: ~/api
    docker:
      - image: microsoft/dotnet:1.1.2-sdk-jessie
    environment:
      - DOTNET_CLI_TELEMETRY_OPTOUT: 1
      - CAKE_VERSION: 0.19.1
    steps:
      - checkout
      - restore_cache:
          keys:
            - cake-{{ .Environment.CAKE_VERSION }}
      - run: ./build.sh build.cake --target=restore
      - save_cache:
          key: cake-{{ .Environment.CAKE_VERSION }}
          paths:
            - ~/api/tools
      - run: ./build.sh build.cake --target=build
      - run: ./build.sh build.cake --target=test

The hard part with Circle CI 2.0 is that caching is fairly manual and changes aren’t auto-detected. You have to version your cache keys yourself, or use file hashes that act as cache keys. I haven’t mastered it yet.
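One approach I haven’t tried yet (so treat this as an untested sketch) is keying the cache off a file hash instead of a version variable; Circle 2.0 supports a checksum template in cache keys:

```yaml
# Sketch: key the tools cache off build.cake itself, so the cache
# invalidates automatically whenever the build script changes.
- restore_cache:
    keys:
      - cake-{{ checksum "build.cake" }}
- run: ./build.sh build.cake --target=restore
- save_cache:
    key: cake-{{ checksum "build.cake" }}
    paths:
      - ~/api/tools
```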

Ideally, I’d cache the Cake tools directory and my .nuget folder on this running image but I’m not there yet.

The big thing to note is that this is the official SDK image, which comes with all the necessary build tools.

Bootstrapping Cake

So it should be easy to do this now, as I already have a build.sh to execute Cake, right? Nope!

The stock bash script uses the unzip utility, which usually exists; it’s needed to extract the NuGet package that gets downloaded. curl doesn’t exist on this image either, by the way.
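The jessie image is Debian based, so the missing tools could be layered on with apt-get, but that means building and maintaining a derived image, which is exactly the extra work I’m trying to avoid. A sketch of what I didn’t want to do:

```dockerfile
# Sketch of the derived image I'm avoiding: the official SDK image
# plus the tools the stock Cake bootstrapper expects.
FROM microsoft/dotnet:1.1.2-sdk-jessie
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl unzip \
    && rm -rf /var/lib/apt/lists/*
```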

Fortunately, the dotnet CLI is here, and it can easily restore Cake. My new build.sh just needs a csproj to restore Cake with. Since the new csproj XML is tiny, it’s easy to echo into a file.

#!/usr/bin/env bash
##########################################################################
# This is the Cake bootstrapper script for Linux and OS X.
# This file was downloaded from https://github.com/cake-build/resources
# Feel free to change this file to fit your needs.
##########################################################################

# Define directories.
SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
TOOLS_DIR=$SCRIPT_DIR/tools
TOOLS_PROJ=$TOOLS_DIR/tools.csproj
CAKE_DLL=$TOOLS_DIR/Cake.CoreCLR.$CAKE_VERSION/cake.coreclr/$CAKE_VERSION/Cake.dll


# Make sure the tools folder exist.
if [ ! -d "$TOOLS_DIR" ]; then
  mkdir "$TOOLS_DIR"
fi

###########################################################################
# INSTALL CAKE
###########################################################################

if [ ! -f "$CAKE_DLL" ]; then
    echo "<Project Sdk=\"Microsoft.NET.Sdk\"><PropertyGroup><OutputType>Exe</OutputType><TargetFramework>netcoreapp1.1</TargetFramework></PropertyGroup></Project>" > "$TOOLS_PROJ"
    dotnet add "$TOOLS_PROJ" package cake.coreclr -v "$CAKE_VERSION" --package-directory "$TOOLS_DIR/Cake.CoreCLR.$CAKE_VERSION"
fi

# Make sure that Cake has been installed.
if [ ! -f "$CAKE_DLL" ]; then
    echo "Could not find Cake.dll at '$CAKE_DLL'."
    exit 1
fi

###########################################################################
# RUN BUILD SCRIPT
###########################################################################

# Start Cake
exec dotnet "$CAKE_DLL" "$@"

Note: I’ve moved the CAKE_VERSION variable out of the script so CircleCI can supply it, but it can easily be added back.

Generating URL slugs in .NET Core

Updated: 5/5/17

  • Better handling of diacritics in sample

I’ve just discovered what a Slug is:

Some systems define a slug as the part of a URL that identifies a page in human-readable keywords.

It is usually the end part of the URL, which can be interpreted as the name of the resource, similar to the basename in a filename or the title of a page. The name is based on the use of the word slug in the news media to indicate a short name given to an article for internal use.

I needed to know this as I’m participating in the RealWorld example projects, and I’m doing a back end for ASP.NET Core.

The API spec kept saying slug, and I had a moment of “ohhh, that’s what that is.” Anyway, I needed to be able to generate one. Stack Overflow to the rescue: https://stackoverflow.com/questions/2920744/url-slugify-algorithm-in-c

Also, decoding random characters from lots of languages isn’t straightforward, so I used one of the best-effort implementations from the linked SO page: https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net

Now, here’s my Slug generator:

//https://stackoverflow.com/questions/2920744/url-slugify-algorithm-in-c
//https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net
public static class Slug
{
    public static string GenerateSlug(this string phrase)
    {
        string str = phrase.RemoveDiacritics().ToLower();
        // invalid chars           
        str = Regex.Replace(str, @"[^a-z0-9\s-]", "");
        // convert multiple spaces into one space   
        str = Regex.Replace(str, @"\s+", " ").Trim();
        // cut and trim 
        str = str.Substring(0, str.Length <= 45 ? str.Length : 45).Trim();
        str = Regex.Replace(str, @"\s", "-"); // hyphens   
        return str;
    }

    public static string RemoveDiacritics(this string text)
    {
        var s = new string(text.Normalize(NormalizationForm.FormD)
            .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
            .ToArray());

        return s.Normalize(NormalizationForm.FormC);
    }
}
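A couple of illustrative calls (the inputs are made up; the expected results are traced by hand through the rules above):

```csharp
// Diacritics stripped, lowercased, punctuation dropped, spaces -> hyphens.
var a = "Crème Brûlée: A Love Story".GenerateSlug();
// "creme-brulee-a-love-story"

// Runs of whitespace (including tabs) collapse to single hyphens.
var b = "  Multiple   spaces\tand\ttabs  ".GenerateSlug();
// "multiple-spaces-and-tabs"
```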

Targeting “unrecognized” portable .NET framework targets with VS2017

ERROR!

1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.TargetFrameworkInference.targets(84,5): error : Cannot infer TargetFrameworkIdentifier and/or TargetFrameworkVersion from TargetFramework='portable-net40+sl5+win8+wpa81+wp8'. They must be specified explicitly.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1111,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v0.0" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

In the Microsoft.NET.TargetFrameworkInference.targets file it helpfully says this:

<!-- 
    Note that this file is only included when $(TargetFramework) is set and so we do not need to check that here.

    Common targets require that $(TargetFrameworkIdentifier) and $(TargetFrameworkVersion) are set by static evaluation
    before they are imported. In common cases (currently netstandard, netcoreapp, or net), we infer them from the short
    names given via TargetFramework to allow for terseness and lack of duplication in project files.

    For other cases, the user must supply them manually.

    For cases where inference is supported, the user need only specify the targets in TargetFrameworks, e.g:
      <PropertyGroup>
        <TargetFrameworks>net45;netstandard1.0</TargetFrameworks>
      </PropertyGroup>

    For cases where inference is not supported, identifier, version and profile can be specified explicitly as follows:
       <PropertyGroup>
         <TargetFrameworks>portable-net451+win81;xyz1.0</TargetFrameworks>
       <PropertyGroup>
       <PropertyGroup Condition="'$(TargetFramework)' == 'portable-net451+win81'">
         <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
         <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>
         <TargetFrameworkProfile>Profile44</TargetFrameworkProfile>
       </PropertyGroup>
       <PropertyGroup Condition="'$(TargetFramework)' == 'xyz1.0'">
         <TargetFrameworkIdentifier>Xyz</TargetFrameworkVersion>
       <PropertyGroup>

    Note in the xyz1.0 case, which is meant to demonstrate a framework we don't yet recognize, we can still
    infer the version of 1.0. The user can also override it as always we honor a TargetFrameworkIdentifier
    or TargetFrameworkVersion that is already set.
   -->

In a project, I was targeting: net45, netstandard1.3, and .NETPortable,Version=v4.0,Profile=Profile328.

The automatic migration only gets this far:
net45;netstandard1.3;portable40-net40+sl5+win8+wp8+wpa81

Some other properties get added for portable40-net40+sl5+win8+wp8+wpa81, but the end result is that, on build, MSBuild doesn’t know what portable40-net40+sl5+win8+wp8+wpa81 means.

To fix this, translate Profile328 to what the comments say from the targets file. I also used this from Microsoft as a guide for profile targets.

I added:

<PropertyGroup Condition="'$(TargetFramework)' == 'portable-net40+sl5+win8+wpa81+wp8'">
    <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <TargetFrameworkProfile>Profile328</TargetFrameworkProfile>
</PropertyGroup>

The name portable-net40+sl5+win8+wpa81+wp8 could be anything, really, as long as the condition matches the entry in TargetFrameworks; the XML above is what actually supplies the profile, version, and identifier to MSBuild.

Here’s the complete working csproj

Why couldn’t migrate do this for you? I don’t know.

Creating async AutoMapper mappings

I’ve finally come to embrace AutoMapper after a long love-hate relationship over the years. The basic use case was always useful, but there were always edge cases where it fell down for me. I usually assume this was my fault, misusing or misunderstanding it.

In my current usage, I’ve come across the need to use async alongside mapping data to a domain object. Mapping to a domain object isn’t just for DB calls (though those can be async as well). While you might want to do crazier things like an HTTP call for data to map, my use case for this is simple: a Redis cache. The cache contains things from the DB, and the API is rightfully async.

AutoMapper Type Converters

AutoMapper does provide a way to custom-build a type: TypeConverters. However, the API is sync, and it seems unlikely that async support will come to this or any similar API in AutoMapper. The work of maintaining sync and async code paths is non-trivial, and requiring async code paths for simple mappings does seem silly.

I wish there was a good way to do this but there doesn’t seem to be. Enter AsyncMapper!

Narochno.AsyncMapper

This is a library that sits on top of AutoMapper and basically aims to provide async versions of a TypeConverter.

AsyncMapper first looks for its own interfaces for the requested mapping. If found, it uses that; if not, it forwards to AutoMapper. Ideally, you’d still use AutoMapper inside AsyncMappings to do the grunt work of mapping.
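To make that concrete, here’s a hypothetical sketch of the shape such a mapping could take. The interface, names, and types here are illustrative only, not necessarily the library’s actual API:

```csharp
// Hypothetical: an async analogue of AutoMapper's ITypeConverter.
public interface IAsyncMapping<TSource, TDestination>
{
    Task<TDestination> Map(TSource source);
}

// A mapping that needs an async lookup (e.g., a Redis cache) mid-map.
public class ArticleMapping : IAsyncMapping<ArticleRecord, Article>
{
    private readonly IDistributedCache cache;
    private readonly IMapper mapper;

    public ArticleMapping(IDistributedCache cache, IMapper mapper)
    {
        this.cache = cache;
        this.mapper = mapper;
    }

    public async Task<Article> Map(ArticleRecord source)
    {
        // AutoMapper still does the grunt work on the sync fields...
        var article = mapper.Map<Article>(source);
        // ...while the async dependency is awaited properly.
        article.AuthorName = await cache.GetStringAsync(source.AuthorId);
        return article;
    }
}
```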

My toy example should illustrate what I’m after.

The intent is that this is just a small helper library for async operations; AutoMapper is still the primary way to map.

What next?

Look at Narochno.AsyncMapper and see how it looks and feels. There are a few things on the TODO list, but I wouldn’t grow more functionality into this. This library greatly assists my mapping organization where async is needed.

Keeping secrets safe with ASP.NET Core and Credstash

Originally posted on Medium: https://medium.com/@adamhathcock/keeping-secrets-safe-with-asp-net-core-and-credstash-b6e268176791

I primarily use Amazon Web Services and .NET Core. Most .NET users tend to look to Azure by default because of Microsoft support. However, I strongly prefer AWS. All information here deals with .NET Core and AWS.

Secrets

Keeping secrets secure seems to be a pretty hard problem. The tool with the biggest mindshare seems to be HashiCorp Vault, but it’s an application with more infrastructure to set up just to get it working.

I really hate having to run third-party applications inside my own cloud applications. I only do it when I’m forced to: basically, only when Amazon doesn’t have a matching service, or the AWS service isn’t fit for purpose.

However, Amazon does have one: the Key Management Service (KMS). I’m not going to get into detail about it, but it’s not suitable by itself.

Fortunately, someone else already did the legwork to use KMS: enter Credstash.

Credstash

You can read about Credstash on the GitHub site, but basically it’s a command-line utility to add and retrieve secrets. It’s Python based and perfectly good for doing the admin of secrets. However, I want to use it with my .NET Core applications.

Credstash uses KMS to protect keys and DynamoDB to store encrypted values.

Credstash vs Hashicorp Vault

Reference: Credstash

Vault is really neat and they do some cool things (dynamic secret generation, key-splitting to protect master keys, etc.), but there are still some reasons why you might pick credstash over vault:

  • Nothing to run. If you want to run vault, you need to run the secret storage backend (consul or some other datastore), you need to run the vault server itself, etc. With credstash, there’s nothing to run. all of the data and key storage is handled by AWS services
  • lower cost for a small number of secrets. If you just need to store a small handful of secrets, you can easilly fit the credstash DDB table in the free tier, and pay ~$1 per month for KMS. So you get good secret management for about a buck a month.
  • Simple operations. Similar to “nothing to run”, you dont need to worry about getting a quorum of admins together to unseal your master keys, dont need to worry about monitoring, runbooks for when the secret service goes down, etc. It does expose you to risk of AWS outages, but if you’re running on AWS, you have that anyway

That said, if you want to do master key splitting, are not running on AWS, care about things like dynamic secret generation, have a trust boundary that’s smaller than an instance, or want to use something other than AWS creds for AuthN/AuthZ, then vault may be a better choice for you.

Narochno.Credstash

I created an ASP.NET Core configuration compatible reader for Credstash. It’s fairly simple and so far is working well.
Find it on NuGet and use it like so:

AWSCredentials creds = new StoredProfileAWSCredentials();
if (!env.EnvironmentName.MatchesNoCase("alpha"))
{
    creds = new InstanceProfileAWSCredentials();
}
builder.AddCredstash(new CredstashConfigurationOptions()
{
    EncryptionContext = new Dictionary<string, string>()
    {
        {"environment", env.EnvironmentName}
    },
    Region = RegionEndpoint.EUWest1,
    Credentials = creds
});

There’s probably more there than you need but I need it.

For AWS creds, I use locally stored credentials in my profile for development; I call this environment alpha, so don’t sweat that. On an instance, I want to use IAM profile-based permissions. Usage of this is covered on the Credstash page.

KMS has a concept of encryption contexts: key/value pairs that must match in order for the decryption of secrets to succeed. I use the environment name as an extra value to segment secrets by.
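On the admin side, the same context has to be supplied when a secret is stored. With the Python CLI that looks roughly like this (a sketch; the key name and value are made up, and it needs AWS credentials and the Credstash DynamoDB table to already exist):

```shell
# Store a secret under the encryption context environment=beta...
credstash put myapp.db.password 'super-secret' environment=beta

# ...and read it back; decryption fails if the context doesn't match.
credstash get myapp.db.password environment=beta
```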

Conclusion

I can finally have something secure without having values hardcoded in a repo somewhere. KMS has an audit trail and Credstash uses an immutable value system to version secrets so that old values are still there.

It’s cheap, easy to set up, and works with C# now. Everything I need.

DistributedCache extensions for Data Protection in ASP.NET Core

Repo: https://github.com/Visibilityltd/Visibility.AspNetCore.DataProtection.DistributedCache

This contains two simple classes:

  • DistributedCache DataProtection Provider
  • DistributedCache PropertiesDataFormat

DataProtection Provider

When you run a distributed, stateless ASP.NET Core web server, your Data Protection keys need to be saved to a location shared among your servers.

The default providers that the ASP.NET Core team provides are here

I was just going to use the Redis provider, but that implementation is hard-coded to Redis. I’m already using the DistributedCache Redis provider, so why not just hook into that? Then I don’t need to configure two different things.

Usage

services.AddDataProtection()
.PersistKeysToDistributedCache();

Boom: if you’re using IDistributedCache, you now persist your generated Data Protection keys there.
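For completeness, the wiring with a Redis-backed IDistributedCache (assuming the Microsoft.Extensions.Caching.Redis package) looks roughly like this; the connection string and instance name are illustrative:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // One Redis configuration serves both the app's cache and,
    // via PersistKeysToDistributedCache, the Data Protection key ring.
    services.AddDistributedRedisCache(options =>
    {
        options.Configuration = "localhost:6379"; // illustrative
        options.InstanceName = "myapp:";          // illustrative
    });

    services.AddDataProtection()
        .PersistKeysToDistributedCache();
}
```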

DistributedCache PropertiesDataFormat

Another issue is that the state carried on the URL for authentication can be large. Why not use the cache instead?

This is inspired and mostly copied from: https://github.com/IdentityServer/IdentityServer4/issues/407

Usage

Useful for any Authentication middleware. You need to hook it into the AuthenticationOptions for your protocol like so:

I’m using CAS Authentication

var dataProtectionProvider = app.ApplicationServices.GetRequiredService<IDataProtectionProvider>();
var distributedCache = app.ApplicationServices.GetRequiredService<IDistributedCache>();

var dataProtector = dataProtectionProvider.CreateProtector(
    typeof(CasAuthenticationMiddleware).FullName,
    typeof(string).FullName, schemeName,
    "v1");

//TODO: think of a better way to create
var dataFormat = new DistributedPropertiesDataFormat(distributedCache, dataProtector);

...

app.UseCasAuthentication(x =>
{
    x.StateDataFormat = dataFormat;
    ...
});

OpenId and OAuth have StateDataFormat in their options. I’m sure others do too.
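For example, with the ASP.NET Core 1.x OpenID Connect middleware, hooking in the same dataFormat would look something like this sketch (the Authority and ClientId values are illustrative):

```csharp
app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    Authority = "https://identity.example.com/", // illustrative
    ClientId = "my-client",                      // illustrative
    // Same ISecureDataFormat-based swap as the CAS example above.
    StateDataFormat = dataFormat
});
```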

.NET Core 1.1 building with Docker and Cake

I’m going to attempt to catalog how I’m using Docker to test and build containers that are for deployment into Amazon ECS.

Build Process

  1. Use Dockerfile.build
    • Uses Cake:
      1. dotnet restore
      2. dotnet build
      3. dotnet test
      4. dotnet publish
  2. Save running image to container
  3. Copy publish directory out of container
  4. Use Dockerfile
    • Copy publish directory into image
  5. Push built image to ECS
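Stripped of CI specifics, steps 1 through 4 boil down to a handful of docker commands (a sketch; image and container names are illustrative):

```shell
# 1. Run restore/build/test/publish inside the build image.
docker build -t build-image -f Dockerfile.build .

# 2-3. Materialise a container from it and copy the publish output out.
docker create --name build-cont build-image
docker cp build-cont:/app/publish/. publish/

# 4. Bake the published output into the slim runtime image.
docker build -t server-api:latest .
```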

Driving the build: Cake

I love Cake and have contributed some minor things to it. It does support .NET Core. However, the nuget.exe used to drive some critical things, like nuget push, does not. push is actually the only command I need that isn’t on .NET Core, so I standardized on requiring Mono for just the build container.

My base Cake file: build.cake

var target = Argument("target", "Default");
var tag = Argument("tag", "cake");

Task("Restore")
  .Does(() =>
{
    DotNetCoreRestore("src/\" \"test/\" \"integrate/");
});

Task("Build")
    .IsDependentOn("Restore")
  .Does(() =>
{
    DotNetCoreBuild("src/**/project.json\" \"test/**/project.json\" \"integrate/**/project.json");
});

Task("Test")
    .IsDependentOn("Build")
  .Does(() =>
{
    var files = GetFiles("test/**/project.json");
    foreach(var file in files)
    {
        DotNetCoreTest(file.ToString());
    }
});

Task("Publish")
    .IsDependentOn("Test")
  .Does(() =>
{
    var settings = new DotNetCorePublishSettings
    {
        Framework = "netcoreapp1.1",
        Configuration = "Release",
        OutputDirectory = "./publish/",
        VersionSuffix = tag
    };

    DotNetCorePublish("src/Server", settings);
});

Task("Default")
    .IsDependentOn("Test");

RunTarget(target);

I broke out all the steps as I often run Cake for each step during development. You’ll notice that each dotnet command behaves differently. It’s very annoying.

I have a project structure that usually goes like this:

  • src – Source files
  • test – Unit tests for those source files
  • integrate – Integration tests that should run separately from unit tests.
  • misc – Other code stuff

Other things to notice:

  • The Default target just runs Test. I don’t want to accidentally publish.
  • Publish has a hard-coded entry point. I should probably make that an argument.
  • tag is a tag I want to stamp the published build with; I want something unique for each publish. It defaults to cake for local publishes.

The Build Container: Dockerfile.build

I actually started by following the little HOW-TO from the ASP.NET team here:

FROM cl0sey/dotnet-mono-docker:1.1-sdk

ARG TAG=docker
ENV TAG ${TAG}

WORKDIR /app
RUN mkdir /publish

COPY . .
RUN ./build.sh -t publish --scriptargs "--tag=${TAG}"

Notice the source image: cl0sey/dotnet-mono-docker:1.1-sdk

Someone was nice enough to have already made a Docker image with Mono on top of the base microsoft/dotnet:1.1-sdk-projectjson image. The SDK image is what’s needed for all of the dotnet CLI commands beyond just running an app.

Notice:

  • The ARG and ENV declarations for specifying the tag variable: ARG declares a build-time variable that docker build can set with --build-arg, and ENV makes it available as an environment variable in later RUN commands.
  • creating a publish directory.
  • How I pass the tag variable to the Cake script.

The Deployment Container: Dockerfile

FROM microsoft/dotnet:1.1.0-runtime

COPY ./publish /app
WORKDIR /app

EXPOSE 5000

ENV ASPNETCORE_ENVIRONMENT beta

ENTRYPOINT ["dotnet", "Server.dll"]

Notice:

  • I actually use the official runtime image.
  • COPY command to grab the local publish directory and put it in the app directory inside the container.
  • I keep the default 5000 port. Why not? It’s all hidden in AWS.
  • I just declared my environment to be beta instead of staging
  • ENTRYPOINT has to be an array of strings. Server.dll is the executable assembly.

Hanging It All Together: CircleCI

I’m using CircleCI as my CI service because it’s free/cheap. Also, it runs Docker and can do Docker inside Docker. The docker commands will work just about anywhere though.

machine:
  services:
    - docker

dependencies:
  override:
    - docker info

test:
  override:
    - docker build -t build-image --build-arg TAG="${CIRCLE_BRANCH}-${CIRCLE_BUILD_NUM}" -f Dockerfile.build .
    - docker create --name build-cont build-image


deployment:
  beta:
    branch: master
    commands:
    - docker cp build-cont:/app/publish/. publish/
    - docker build -t server-api:latest .
    - docker tag server-api:latest $AWS_ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/server-api:$CIRCLE_BUILD_NUM
    - ./push.sh

Notice:

  • test phase
    • The test phase does docker build on Dockerfile.build. This file does everything, including publish. The image is tagged as build-image.
    • test phase also creates a container called build-cont for possible deployment.
    • My tag is made of the branch name plus the build number. These are CircleCI variables.
  • deployment phase
    • Named beta. I could have more environments for deployment, I guess.
    • locked to the master branch. When I push feature branches, only the test phase runs to test things. Only when merged into master does it deploy.
    • docker cp copies the publish directory out of the build-cont container.
    • Dockerfile is used with docker build and tagged as server-api:latest
    • I also explicitly tag the image with my AWS ECS specific name. CircleCI hides my AWS account id in an environment variable for me.
    • push.sh actually does the push to AWS.

push.sh to AWS ECS

Finally, I want to save my Docker image.

#!/usr/bin/env bash

configure_aws_cli(){
    aws --version
    aws configure set default.region eu-west-1
    aws configure set default.output json
}

push_ecr_image(){
    eval $(aws ecr get-login --region eu-west-1)
    docker push $AWS_ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/server-api:$CIRCLE_BUILD_NUM
}

configure_aws_cli
push_ecr_image

The bash script is copied in part from something else more complicated. You can’t just do the push command from circle.yml because of the need to use eval to log in to AWS. My AWS push creds are also locked in CircleCI environment variables that the aws ecr get-login command expects.