Targeting “unrecognized” portable .NET framework targets with VS2017


1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.TargetFrameworkInference.targets(84,5): error : Cannot infer TargetFrameworkIdentifier and/or TargetFrameworkVersion from TargetFramework='portable-net40+sl5+win8+wpa81+wp8'. They must be specified explicitly.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1111,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v0.0" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

In the Microsoft.NET.TargetFrameworkInference.targets file it helpfully says this:

    Note that this file is only included when $(TargetFramework) is set and so we do not need to check that here.

    Common targets require that $(TargetFrameworkIdentifier) and $(TargetFrameworkVersion) are set by static evaluation
    before they are imported. In common cases (currently netstandard, netcoreapp, or net), we infer them from the short
    names given via TargetFramework to allow for terseness and lack of duplication in project files.

    For other cases, the user must supply them manually.

    For cases where inference is supported, the user need only specify the targets in TargetFrameworks, e.g:
       <TargetFrameworks>net45;netstandard1.0</TargetFrameworks>

    For cases where inference is not supported, identifier, version and profile can be specified explicitly as follows:
       <PropertyGroup Condition="'$(TargetFramework)' == 'portable-net451+win81'">
         <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
         <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>
         <TargetFrameworkProfile>Profile44</TargetFrameworkProfile>
       </PropertyGroup>
       <PropertyGroup Condition="'$(TargetFramework)' == 'xyz1.0'">
         <TargetFrameworkIdentifier>xyz</TargetFrameworkIdentifier>
       </PropertyGroup>

    Note in the xyz1.0 case, which is meant to demonstrate a framework we don't yet recognize, we can still
    infer the version of 1.0. The user can also override it as always we honor a TargetFrameworkIdentifier
    or TargetFrameworkVersion that is already set.

In a project, I was targeting net45, netstandard1.3 and .NETPortable,Version=v4.0,Profile=Profile328.

The automatic migration only goes so far:

Some other properties are added for portable40-net40+sl5+win8+wp8+wpa81, but the end result is that, on build, MSBuild doesn’t know what portable40-net40+sl5+win8+wp8+wpa81 means.

To fix this, translate Profile328 into the explicit properties described in the comments from the targets file. I also used this list from Microsoft as a guide for profile targets.

I added:

<PropertyGroup Condition="'$(TargetFramework)' == 'portable-net40+sl5+win8+wpa81+wp8'">
  <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <TargetFrameworkProfile>Profile328</TargetFrameworkProfile>
</PropertyGroup>

The name portable-net40+sl5+win8+wpa81+wp8 could be anything really, as long as it matches the value used in TargetFrameworks; it’s the XML above that actually supplies the profile, version and identifier to MSBuild.

Here’s the complete working csproj
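For reference, a hedged sketch of the relevant parts of such a csproj (the real file has more, e.g. package references; only the target-framework plumbing is shown here):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFrameworks>net45;netstandard1.3;portable-net40+sl5+win8+wpa81+wp8</TargetFrameworks>
  </PropertyGroup>

  <!-- MSBuild cannot infer these for the PCL target, so spell them out -->
  <PropertyGroup Condition="'$(TargetFramework)' == 'portable-net40+sl5+win8+wpa81+wp8'">
    <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <TargetFrameworkProfile>Profile328</TargetFrameworkProfile>
  </PropertyGroup>

</Project>
```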

Why couldn’t the migration do this for you? I don’t know.

Creating async AutoMapper mappings

I’ve finally come to embrace AutoMapper after a long love-hate relationship over the years. The basic use case was always useful, but there were always edge cases where it fell down for me. I usually assume this was my fault, misusing or misunderstanding it.

In my current usage, I’ve come across the need to use async alongside mapping data to a domain object. Mapping to a domain object isn’t just for DB calls (though those can be async as well). While you might want to do crazier things like an HTTP call for data to map, my use case for this is simple: a Redis cache. The cache contains things from the DB, and the API is rightfully async.

AutoMapper Type Converters

AutoMapper does provide a way to custom-build a type: TypeConverters. However, the API is sync, and it seems unlikely that async support will come to this or any similar API in AutoMapper. The work of maintaining both sync and async code paths is non-trivial, and requiring async code paths for simple mappings does seem silly.

I wish there was a good way to do this but there doesn’t seem to be. Enter AsyncMapper!


This is a library that sits on top of AutoMapper and aims to basically provide async versions of a Type Converter.

AsyncMapper first looks for its own interfaces for the requested mapping. If found, it uses them; if not, it forwards to AutoMapper. Ideally, you’d still use AutoMapper inside your AsyncMappings to do the grunt work of mapping.

My toy example should illustrate what I’m after.
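In the spirit of that toy example, here is a hedged sketch of the idea — an async analogue of a type converter that can await a cache before handing the dumb property copying back to AutoMapper. The interface and type names here are illustrative, not necessarily Narochno.AsyncMapper’s exact API:

```csharp
using System.Threading.Tasks;

// Illustrative async mapping contract (not the library's exact API)
public interface IAsyncMapping<TSource, TDestination>
{
    Task<TDestination> Map(TSource source);
}

// Hypothetical async cache abstraction standing in for Redis
public interface ICache
{
    Task<string> GetStringAsync(string key);
}

public class UserDto { public int Id { get; set; } }
public class User { public int Id { get; set; } public string Name { get; set; } }

// A mapping that genuinely needs to await something
public class UserMapping : IAsyncMapping<UserDto, User>
{
    private readonly ICache _cache;

    public UserMapping(ICache cache) => _cache = cache;

    public async Task<User> Map(UserDto source)
    {
        // await the cache, then build the domain object
        // (in real code the property copying would be delegated to AutoMapper)
        var name = await _cache.GetStringAsync($"user:{source.Id}:name");
        return new User { Id = source.Id, Name = name };
    }
}
```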

The intent is that this is just a small helper library for async operations; AutoMapper is still the primary way to map.

What next?

Look at Narochno.AsyncMapper and see how it looks and feels. There are a few things on the TODO list, but I wouldn’t grow more functionality into this. This library greatly assists the parts of my mapping organization that need async.



MediatR and me

I was only recently introduced to MediatR through the magic of Reddit.

Jimmy Bogard also does the excellent AutoMapper tool, so it was worth looking into. He’s written about his use of it for a long while on his blog.

Flattening the layers

His posts on implementation patterns and dealing with duplication are the real gems though.

My particular problem is that in my three layers in my REST API, I find I’m constantly injecting new classes to try to avoid duplication of logic.  This MSDN post actually articulates the problem I’ve got with repositories and whatnot.

Given my layers:

  • Web (Controllers in ASP.NET Core terms)
  • Business stuff
  • Data Access

I want to keep some separation so I can concentrate testable logic in my business layer. Web should just transform a request and call the relevant business object. The business logic needs data sometimes.

He says “I want MediatR to serve as the outermost window into the actual domain-specific behavior in my application” which is great. The end result is a Controller class that barely does anything except call IMediator.

However, I don’t want a handler that just duplicates my problem and hides it behind MediatR so that only my controller is prettier. How do I organize things to be simpler while still having some layering, with reusability and little duplication?

Handlers calling Handlers

Really what happens is that my “business” layer handlers end up calling other business handlers or data handlers.
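As a hedged sketch of what that pattern looks like (the request and handler types below are made up for illustration, and the handler interface shape varies a little across MediatR versions): a “business” handler receives a request, then sends further requests through the same IMediator to other handlers:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Illustrative request types
public class CreateSite : IRequest<int> { public string Name { get; set; } }
public class NewLocation : IRequest<int> { public string Name { get; set; } public int SiteId { get; set; } }

// "Business" handler that delegates to a "data" handler via IMediator
// (recent MediatR versions use this Handle signature with a CancellationToken)
public class CreateSiteHandler : IRequestHandler<CreateSite, int>
{
    private readonly IMediator _mediator;

    public CreateSiteHandler(IMediator mediator) => _mediator = mediator;

    public async Task<int> Handle(CreateSite request, CancellationToken cancellationToken)
    {
        // ... business logic here ...

        // hand off to the data-layer handler through the mediator
        return await _mediator.Send(
            new NewLocation { Name = request.Name, SiteId = 2 },
            cancellationToken);
    }
}
```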

My unit tests look like a series of these Moq statements:

mediator.Setup(x => x.Send(It.IsAny<Locations.NewLocation>(), CancellationToken.None))
    .Callback<IRequest, CancellationToken>((r, y) =>
    {
        var x = (Locations.NewLocation) r;
        Assert.Equal(newSite.Name, x.Name);
        Assert.Equal(userName, x.CreatedBy);
        Assert.Equal(2, x.SiteId);
        Assert.Equal(4, x.LocationTypeId);
        Assert.Equal(345, x.ReportingUnitId);
    });

I’m validating that I’m passing a message with MediatR and validating the message’s contents.

Is this bad?

It seems like it is.

MediatR basically just divides everything with loosely coupled message passing. It could all end up as a huge message soup with tons of layers of indirection.

Jimmy has a good example of how he uses MediatR, AutoMapper and other things with ASP.NET Core on github:

However, the logic is just a basic CRUD app. Nothing needs to share anything.

Is there anything that can stop that?

Just discipline I guess.

The good: a cache in the pipeline!

(I’m waving my hands with the implementation details of caching as that’s code for another post.)

I put a Redis cache on my reference data from the database.  I had a pattern around my data access but it was a lot of copy/paste.  Now, I have a marker interface for my requests and it just automatically caches because the MediatR pipeline just resolves it.

The cache pipeline handler with a marker interface is declared like this:

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class

Now, any request that goes through MediatR that implements the IUseDistributedCache will go through this pipeline handler.
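A hedged sketch of what the Handle method might do — the cache key scheme and serialization details here are mine, and the pipeline signature shown is the MediatR 3.x one (later versions add a CancellationToken):

```csharp
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;

public interface IUseDistributedCache { }

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class
{
    private readonly IDistributedCache _cache;

    public DistributedCachingHandler(IDistributedCache cache) => _cache = cache;

    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        // naive cache key: request type name plus the serialized request
        var key = $"{typeof(TRequest).Name}:{JsonConvert.SerializeObject(request)}";

        var cached = await _cache.GetStringAsync(key);
        if (cached != null)
            return JsonConvert.DeserializeObject<TResponse>(cached);

        // cache miss: run the real handler, then store the result
        var response = await next();
        await _cache.SetStringAsync(key, JsonConvert.SerializeObject(response));
        return response;
    }
}
```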

Actually, the generics and type resolution aren’t done by MediatR but by your IoC container of choice. I was sticking to the default container in ASP.NET Core. However, its resolution isn’t as sophisticated as StructureMap’s, Autofac’s, etc., and it ends up erroring when trying to create pipeline types generically constrained by the marker interface. So I plugged in Autofac and still use IServiceCollection as I normally would.
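A hedged sketch of that Autofac hookup in Startup.ConfigureServices, assuming the Autofac.Extensions.DependencyInjection package (the registrations shown are illustrative):

```csharp
using System;
using Autofac;
using Autofac.Extensions.DependencyInjection;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddMediatR();   // or register handlers by hand

    var builder = new ContainerBuilder();

    // Autofac honors the generic constraint on the marker interface, so only
    // IUseDistributedCache requests flow through the caching behavior
    builder.RegisterGeneric(typeof(DistributedCachingHandler<,>))
           .As(typeof(IPipelineBehavior<,>));

    builder.Populate(services);   // pull in everything from IServiceCollection
    return new AutofacServiceProvider(builder.Build());
}
```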

The good part 2: FluentValidation and MediatR

(I’m waving my hands with the implementation details of this as that’s code for another post.)

FluentValidation is a good library for creating validation classes for various POCOs; the validation can then be plugged in anywhere. I want it plugged into two places: when ASP.NET Core accepts a model from a REST call (using an ActionFilter), and also in my MediatR pipeline!

I made a MediatR pipeline handler that takes any request/response pair and sends the request through FluentValidation. If any validators are registered, they are run.
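A hedged sketch of such a behavior, assuming validators are registered in the container as IValidator<TRequest> (again using the MediatR 3.x pipeline signature):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using FluentValidation;
using MediatR;

public class ValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
        => _validators = validators;

    public Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        // run every registered validator for this request type
        var failures = _validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Any())
            throw new ValidationException(failures);

        // all good: continue down the pipeline to the real handler
        return next();
    }
}
```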

For the Action Filter, FluentValidation is chained onto your AddMvc call and marks the ModelState as invalid. You can handle this in many ways, but I made another Action Filter that automatically returns when the model is invalid.
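A hedged sketch of both pieces — the AddMvc chaining and the filter. The filter name is mine; the extension methods come from the FluentValidation.AspNetCore package:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

// In Startup.ConfigureServices, chain FluentValidation onto MVC:
//   services.AddMvc(o => o.Filters.Add(typeof(ValidateModelStateFilter)))
//           .AddFluentValidation(fv => fv.RegisterValidatorsFromAssemblyContaining<Startup>());

// Short-circuit with a 400 before the action runs if model binding failed validation
public class ValidateModelStateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
            context.Result = new BadRequestObjectResult(context.ModelState);
    }
}
```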

Keeping secrets safe with ASP.NET Core and Credstash

Originally posted on Medium:

I primarily use Amazon Web Services and .NET Core. Most .NET users tend to look to Azure by default because of Microsoft support. However, I strongly prefer AWS. All information here deals with .NET Core and AWS.


Keeping secrets secure seems to be a pretty hard problem. The option with the biggest mindshare behind it seems to be Hashicorp Vault, but it’s an application with more infrastructure to set up just to get it working.

I really hate having to run 3rd party applications in my own cloud applications. I only do it when I’m forced to. Basically, only when Amazon doesn’t have a matching service or the AWS service isn’t fit for purpose.

However, they do: the Key Management Service (KMS). I’m not going to get into detail about it, but it’s not suitable by itself.

Fortunately, someone else already did some leg work to use KMS: enter Credstash


You can read about Credstash on the github site but basically it’s a command line utility to add and retrieve secrets. It’s python based and perfectly good for doing the admin of secrets. However, I want to use it with my .NET Core applications.

Credstash uses KMS to protect keys and DynamoDB to store encrypted values.
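For reference, administering secrets from the command line looks roughly like this (the key name and encryption context here are examples):

```shell
# store a secret, bound to an encryption context of environment=beta
credstash put ConnectionStrings:Main "Server=db;Password=hunter2" environment=beta

# read it back -- the same context must be supplied to decrypt
credstash get ConnectionStrings:Main environment=beta

# list everything in the table
credstash list
```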

Credstash vs Hashicorp Vault

Reference: Credstash

Vault is really neat and they do some cool things (dynamic secret generation, key-splitting to protect master keys, etc.), but there are still some reasons why you might pick credstash over vault:

  • Nothing to run. If you want to run Vault, you need to run the secret storage backend (Consul or some other datastore), you need to run the Vault server itself, etc. With credstash, there’s nothing to run. All of the data and key storage is handled by AWS services.
  • Lower cost for a small number of secrets. If you just need to store a small handful of secrets, you can easily fit the credstash DDB table in the free tier, and pay ~$1 per month for KMS. So you get good secret management for about a buck a month.
  • Simple operations. Similar to “nothing to run”, you don’t need to worry about getting a quorum of admins together to unseal your master keys, don’t need to worry about monitoring, runbooks for when the secret service goes down, etc. It does expose you to risk of AWS outages, but if you’re running on AWS, you have that anyway.

That said, if you want to do master key splitting, are not running on AWS, care about things like dynamic secret generation, have a trust boundary that’s smaller than an instance, or want to use something other than AWS creds for AuthN/AuthZ, then vault may be a better choice for you.


I created an ASP.NET Core configuration compatible reader for Credstash. It’s fairly simple and so far is working well.
Find it on NuGet and use it like so:

AWSCredentials creds = new StoredProfileAWSCredentials();
if (!env.EnvironmentName.MatchesNoCase("alpha"))
    creds = new InstanceProfileAWSCredentials();
builder.AddCredstash(new CredstashConfigurationOptions()
{
    EncryptionContext = new Dictionary<string, string>()
    {
        {"environment", env.EnvironmentName}
    },
    Region = RegionEndpoint.EUWest1,
    Credentials = creds
});
There’s probably more there than you need but I need it.

For AWS creds, I use locally stored credentials in my profile for development. I call this environment alpha, so don’t sweat that name. On an instance, I want to use IAM profile-based permissions. Usage of this is covered on the Credstash page.

KMS has a concept of EncryptionContexts: key/value pairs that must match for the decryption of a secret to succeed. I use the environment name as an extra value to segment secrets by.


I can finally have something secure without having values hardcoded in a repo somewhere. KMS has an audit trail and Credstash uses an immutable value system to version secrets so that old values are still there.

It’s cheap, easy to setup and works with C# now. Everything I need.

DistributedCache extensions for Data Protection in ASP.NET Core


This contains two simple classes:

  • DistributedCache DataProtection Provider
  • DistributedCache PropertiesDataFormat

DataProtection Provider

When having a distributed and stateless ASP.NET Core web server, you need to have your Data Protection keys saved to a location to be shared among your servers.

The default providers that the ASP.NET Core team provides are here

I was just going to use Redis, but that implementation is hard-coded to Redis. I’m already using the DistributedCache Redis provider, so why not just hook into that? Then I don’t need to configure two different things.



Boom: if you’re using IDistributedCache, you now persist your generated DataProtection keys there.
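A hedged sketch of the core idea — an IXmlRepository backed by IDistributedCache (the real library’s class names and storage scheme may differ; here all keys live under one cache entry):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.AspNetCore.DataProtection.Repositories;
using Microsoft.Extensions.Caching.Distributed;

public class DistributedCacheXmlRepository : IXmlRepository
{
    private const string CacheKey = "DataProtection-Keys";
    private readonly IDistributedCache _cache;

    public DistributedCacheXmlRepository(IDistributedCache cache) => _cache = cache;

    public IReadOnlyCollection<XElement> GetAllElements()
    {
        // all key material is stored as one XML document in the cache
        var xml = _cache.GetString(CacheKey);
        return xml == null
            ? new List<XElement>()
            : XDocument.Parse(xml).Root.Elements().ToList();
    }

    public void StoreElement(XElement element, string friendlyName)
    {
        var existing = _cache.GetString(CacheKey);
        var doc = existing == null
            ? new XDocument(new XElement("keys"))
            : XDocument.Parse(existing);

        doc.Root.Add(element);
        _cache.SetString(CacheKey, doc.ToString());
    }
}
```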

DistributedCache PropertiesDataFormat

Another issue is that the state on the URL used for authentication can get large. Why not use the cache?

This is inspired and mostly copied from:


Useful for any Authentication middleware. You need to hook it into the AuthenticationOptions for your protocol like so:

I’m using CAS Authentication

var dataProtectionProvider = app.ApplicationServices.GetRequiredService<IDataProtectionProvider>();
var distributedCache = app.ApplicationServices.GetRequiredService<IDistributedCache>();

var dataProtector = dataProtectionProvider.CreateProtector(
    typeof(string).FullName, schemeName);

//TODO: think of a better way to create
var dataFormat = new DistributedPropertiesDataFormat(distributedCache, dataProtector);

app.UseCasAuthentication(x =>
{
    x.StateDataFormat = dataFormat;
});
OpenId and OAuth have StateDataFormat in their options. I’m sure others do too.
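For example, with the OpenID Connect middleware the hookup would look something like this (hedged: the ClientId/Authority values are illustrative, and dataFormat is the DistributedPropertiesDataFormat instance created earlier):

```csharp
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    ClientId = "my-client",            // illustrative
    Authority = "https://idp.example", // illustrative
    // swap the default (URL-carried) state format for the cache-backed one
    StateDataFormat = dataFormat
});
```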

.NET Core 1.1 building with Docker and Cake

I’m going to attempt to catalog how I’m using Docker to test and build containers that are for deployment into Amazon ECS.

Build Process

  1. Use the build Dockerfile
    • Uses Cake:
      1. dotnet restore
      2. dotnet build
      3. dotnet test
      4. dotnet publish
  2. Save running image to container
  3. Copy publish directory out of container
  4. Use Dockerfile
    • Copy publish directory into image
  5. Push built image to ECR for ECS to run

Driving the build: Cake

I love Cake and have contributed some minor things to it. It does support .NET Core; however, the nuget.exe used to drive some critical things, like nuget push, does not. push is actually the only command I need that isn’t on .NET Core, so I standardized on requiring Mono for just the build container.

My base Cake file: build.cake

var target = Argument("target", "Default");
var tag = Argument("tag", "cake");

// NOTE: task names and wiring below are reconstructed; the originals were lost in formatting
Task("Restore")
    .Does(() =>
{
    DotNetCoreRestore("\"src/\" \"test/\" \"integrate/\"");
});

Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
{
    DotNetCoreBuild("\"src/**/project.json\" \"test/**/project.json\" \"integrate/**/project.json\"");
});

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
{
    var files = GetFiles("test/**/project.json");
    foreach(var file in files)
    {
        DotNetCoreTest(file.FullPath);
    }
});

Task("Publish")
    .IsDependentOn("Test")
    .Does(() =>
{
    var settings = new DotNetCorePublishSettings
    {
        Framework = "netcoreapp1.1",
        Configuration = "Release",
        OutputDirectory = "./publish/",
        VersionSuffix = tag
    };

    DotNetCorePublish("src/Server", settings);
});

Task("Default")
    .IsDependentOn("Test");

RunTarget(target);


I broke out all the steps as I often run Cake for each step during development. You’ll notice that each dotnet command behaves differently. It’s very annoying.

I have a project structure that usually goes like this:

  • src – Source files
  • test – Unit tests for those source files
  • integrate – Integration tests that should run separately from unit tests.
  • misc – Other code stuff

Other things to notice:

  • Default is test. Don’t want to accidentally publish.
  • publish has a hard-coded entry point. I probably should make that an argument.
  • tag is a tag I want to tag the published build with. I want to see something unique for each publish. I default this with cake for local publishes.

The Build Container:

I actually started with following the little HOW-TO from the ASP.NET team from here:

FROM cl0sey/dotnet-mono-docker:1.1-sdk

ARG TAG=docker
ENV TAG=${TAG}

RUN mkdir /publish

COPY . .
RUN ./ -t publish --scriptargs "--tag=${TAG}"

Notice the source image: cl0sey/dotnet-mono-docker:1.1-sdk

Someone was nice enough to already make a Docker image with Mono on top of the base microsoft/dotnet:1.1-sdk-projectjson image. The SDK image is what is needed for using all of the dotnet cli commands that aren’t just running.


  • ARG and ENV declarations for specifying the tag variable. I think ARG declares it and ENV allows it to be used as a bash-like variable.
  • creating a publish directory.
  • How I pass the tag variable to the Cake script.

The Deployment Container: Dockerfile

FROM microsoft/dotnet:1.1.0-runtime

COPY ./publish /app
WORKDIR /app

# port and environment referenced in the notes below
EXPOSE 5000
ENV ASPNETCORE_ENVIRONMENT beta

ENTRYPOINT ["dotnet", "Server.dll"]


  • I actually use the official runtime image.
  • COPY command to grab the local publish directory and put it in the app directory inside the container.
  • I keep the default 5000 port. Why not? It’s all hidden in AWS.
  • I just declared my environment to be beta instead of staging
  • ENTRYPOINT has to be an array of strings. Server.dll is the executable assembly.

Hanging It All Together: CircleCI

I’m using CircleCI as my CI service because it’s free/cheap. Also, it runs Docker and can do Docker inside Docker. The docker commands will work just about anywhere though.

machine:
  services:
    - docker

dependencies:
  override:
    - docker info

test:
  override:
    - docker build -t build-image --build-arg TAG="${CIRCLE_BRANCH}-${CIRCLE_BUILD_NUM}" -f .
    - docker create --name build-cont build-image

deployment:
  beta:
    branch: master
    commands:
      - docker cp build-cont:/app/publish/. publish/
      - docker build -t server-api:latest .
      - docker tag server-api:latest $$CIRCLE_BUILD_NUM
      - ./


  • test phase
    • The test phase does a docker build of the build Dockerfile, which does everything, including publish. The image is tagged as build-image.
    • test phase also creates a container called build-cont for possible deployment.
    • My tag is made of the branch name plus the build number. These are CircleCI variables.
  • deployment phase
    • named beta. I could have more environments for deployment, I guess.
    • locked to the master branch. When I push feature branches, only the test phase runs to test things. Only when merged into master does it deploy.
    • docker cp copies the publish directory out of the build-cont container.
    • Dockerfile is used with docker build and tagged as server-api:latest
    • I also explicitly tag the image with my AWS ECS specific name. CircleCI hides my AWS account id in an environment variable for me.
    • the push script actually does the push to AWS ECR for ECS.

Finally, I want to save my Docker image.

#!/usr/bin/env bash

    aws --version
    aws configure set default.region eu-west-1
    aws configure set default.output json

    eval $(aws ecr get-login --region eu-west-1)
    docker push $$CIRCLE_BUILD_NUM


The bash script is copied in part from something else more complicated. You can’t just run the push command from circle.yml because of the need to use eval to log in to AWS. My AWS push creds are also locked in CircleCI environment variables, which the aws ecr get-login command expects.