Generating URL slugs in .NET Core

Updated: 5/5/17

  • Better handling of diacritics in sample

I’ve just discovered what a Slug is:

Some systems define a slug as the part of a URL that identifies a page in human-readable keywords.

It is usually the end part of the URL, which can be interpreted as the name of the resource, similar to the basename in a filename or the title of a page. The name is based on the use of the word slug in the news media to indicate a short name given to an article for internal use.

I needed to know this as I’m participating in the RealWorld example projects and I’m doing a back end for ASP.NET Core.

The API spec kept saying slug, and I had a moment of “ohhh, that’s what that is.” Anyway, I needed to be able to generate one. Stack Overflow to the rescue: https://stackoverflow.com/questions/2920744/url-slugify-algorithm-in-c

Also, decoding random characters from a lot of languages isn’t straightforward, so I used one of the best-effort implementations from the linked SO page: https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net

Now, here’s my Slug generator:

//https://stackoverflow.com/questions/2920744/url-slugify-algorithm-in-c
//https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net
using System.Globalization;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

public static class Slug
{
    public static string GenerateSlug(this string phrase)
    {
        string str = phrase.RemoveDiacritics().ToLower();
        // invalid chars           
        str = Regex.Replace(str, @"[^a-z0-9\s-]", "");
        // convert multiple spaces into one space   
        str = Regex.Replace(str, @"\s+", " ").Trim();
        // cut and trim 
        str = str.Substring(0, str.Length <= 45 ? str.Length : 45).Trim();
        str = Regex.Replace(str, @"\s", "-"); // hyphens   
        return str;
    }

    public static string RemoveDiacritics(this string text)
    {
        var s = new string(text.Normalize(NormalizationForm.FormD)
            .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
            .ToArray());

        return s.Normalize(NormalizationForm.FormC);
    }
}
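As a quick sanity check, here’s the kind of output to expect (my own example, not from the linked answers):

var slug = "Héllo, Wörld! This is a Test".GenerateSlug();
// "hello-world-this-is-a-test"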

TimeZoneInfo is Cross Platform?

Short one!

Apparently, TimeZoneInfo is different on Windows and Linux, as cataloged here on GitHub.

There doesn’t seem to be a solution coming in the .NET Core 2.0 timeframe.

Fortunately, a developer on the above issue made a workaround: TimeZoneConverter

Implement this and you’re golden:

using System.Runtime.InteropServices;
using TimeZoneConverter;

public static TimeZoneInfo GetTimeZoneInfo(string windowsOrIanaTimeZoneId)
{
    try
    {
        // Try a direct approach first
        return TimeZoneInfo.FindSystemTimeZoneById(windowsOrIanaTimeZoneId);
    }
    catch
    {
        // We have to convert to the opposite platform
        var tzid = RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
            ? TZConvert.IanaToWindows(windowsOrIanaTimeZoneId)
            : TZConvert.WindowsToIana(windowsOrIanaTimeZoneId);

        // Try with the converted ID
        return TimeZoneInfo.FindSystemTimeZoneById(tzid);
    }
}
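With that in place, either style of ID works on either OS (my usage example; the IDs are standard Windows and IANA zone names):

// Native on Windows; converted to an IANA ID on Linux:
var eastern = GetTimeZoneInfo("Eastern Standard Time");

// Native on Linux; converted to a Windows ID on Windows:
var pacific = GetTimeZoneInfo("America/Los_Angeles");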

Proto.Actor and me

Proto Actor is a new lightweight actor library (and I say library, not framework) by one of the creators of Akka.NET.

Before going further, you might want to read up on why one should consider using the Actor model in the first place. Also, the Akka.NET docs are good for use cases too. If you’re really into it, the original Akka project for the JVM should be given a look as well.

Why use Proto Actor over Akka.NET or Microsoft Orleans?

Akka.NET was the first time I tried using the actor model in any real anger. Coming from a traditional enterprise software development background (dependency injection, unit tests, etc.), I found several things off-putting.

First, the Props concept for creating actors felt wrong. Using DI wasn’t recommended. Testing actors required its own testing framework. Lastly, and worst of all, using async/await is an anti-pattern. Blasphemy!

I never got beyond a proof of concept with Akka.NET, but it felt promising. Still, its being a port of the original Akka, HOCON config and all, didn’t sit right with me.

Next, I took a look at Orleans.

Great! async/await is a first-order thing. Virtual actors (or Grains) make lifetime management just a bit easier.

Downsides: the runtime silos still feel heavyweight and very closely tied to AppDomains. The generation of the serializable types is a bit clunky and very tied to the provided tooling. To be fair, they’re working on fixing much of what I didn’t like, as their port to .NET Core requires much of it.

Again, I didn’t use Orleans for much more than a proof of concept, primarily because of a change in project direction. I almost deployed it to production.

Proto Actor is a library

Proto Actor does not use a runtime. In fact, it’s a set of fine-grained libraries that relies mostly on third-party technologies for the extra stuff (transport, serialization, logging, etc.) that isn’t core to the actor concept.

Being cross-platform is a first-order concern for this project. While I don’t think I would recommend using Go and C# in the same cluster (though you could), the constraint means Proto Actor can only do things that are well known on both platforms. Protobuf and gRPC for transport are well-known, high-performing projects.

In the .NET space, the Microsoft.Extensions namespaces for logging and configuration, which are pluggable with the new ASP.NET Core libraries, mean your favorite logging and config providers can be used.

So what now?

I’ve been using Proto Actor in a WinForms application (I know!) to better model the threading and eventing, not only from the user but also from the devices attached to the application. The interactions basically require queues (actor mailboxes) in order to process everything.

I’ve been using MediatR a lot recently to better organize the features of my cloud applications, and this desktop application is no different. However, what is different is that things are much more stateful.

I have objects managing state that need a queue to read messages from. Guess what that is: an actor!

Each attached device is an actor. Devices have children which model the real-world objects they’re tracking. HTTP connections are modeled as an actor with state. The UI forms are actors that accept messages to give user feedback, and user actions are messages to other actors.
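To make that concrete, here’s a minimal sketch of the shape of a device actor (illustrative names and messages, not my production code, and based on the Proto.Actor C# API as of this writing):

using System.Threading.Tasks;
using Proto;

public class DeviceReading
{
    public double Value { get; set; }
}

public class DeviceActor : IActor
{
    private DeviceReading lastReading; // safe mutable state: only this actor's mailbox touches it

    public Task ReceiveAsync(IContext context)
    {
        switch (context.Message)
        {
            case Started _:
                // initialize, spawn child actors for the real-world objects being tracked, etc.
                break;
            case DeviceReading reading:
                lastReading = reading; // messages arrive one at a time from the mailbox
                break;
        }
        return Actor.Done;
    }
}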

I think it works out nicely. I made a simple dispatcher that lets the WinForms UI synchronization context process messages, so my UI code does not have to worry about calling Invoke.
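Here’s roughly what that dispatcher looks like, a minimal sketch assuming Proto.Actor’s IDispatcher contract (a Throughput hint plus Schedule; check the version you’re on, as the interface may differ):

using System;
using System.Threading;
using System.Threading.Tasks;
using Proto.Mailbox;

public class SynchronizationContextDispatcher : IDispatcher
{
    private readonly SynchronizationContext context;

    // Capture the UI context on the UI thread, e.g. WindowsFormsSynchronizationContext.Current
    public SynchronizationContextDispatcher(SynchronizationContext context)
    {
        this.context = context;
    }

    public int Throughput => 100; // mailbox messages to process before yielding

    public void Schedule(Func<Task> runner)
    {
        // Post the mailbox run onto the UI synchronization context so actor
        // message processing happens on the UI thread, no Invoke required.
        context.Post(_ => runner(), null);
    }
}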

Up next: Proto Actors and DependencyInjection

Targeting “unrecognized” portable .NET framework targets with VS2017

ERROR!

1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.TargetFrameworkInference.targets(84,5): error : Cannot infer TargetFrameworkIdentifier and/or TargetFrameworkVersion from TargetFramework='portable-net40+sl5+win8+wpa81+wp8'. They must be specified explicitly.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1111,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v0.0" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

In the Microsoft.NET.TargetFrameworkInference.targets file it helpfully says this:

<!-- 
    Note that this file is only included when $(TargetFramework) is set and so we do not need to check that here.

    Common targets require that $(TargetFrameworkIdentifier) and $(TargetFrameworkVersion) are set by static evaluation
    before they are imported. In common cases (currently netstandard, netcoreapp, or net), we infer them from the short
    names given via TargetFramework to allow for terseness and lack of duplication in project files.

    For other cases, the user must supply them manually.

    For cases where inference is supported, the user need only specify the targets in TargetFrameworks, e.g:
      <PropertyGroup>
        <TargetFrameworks>net45;netstandard1.0</TargetFrameworks>
      </PropertyGroup>

    For cases where inference is not supported, identifier, version and profile can be specified explicitly as follows:
       <PropertyGroup>
         <TargetFrameworks>portable-net451+win81;xyz1.0</TargetFrameworks>
       <PropertyGroup>
       <PropertyGroup Condition="'$(TargetFramework)' == 'portable-net451+win81'">
         <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
         <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>
         <TargetFrameworkProfile>Profile44</TargetFrameworkProfile>
       </PropertyGroup>
       <PropertyGroup Condition="'$(TargetFramework)' == 'xyz1.0'">
         <TargetFrameworkIdentifier>Xyz</TargetFrameworkVersion>
       <PropertyGroup>

    Note in the xyz1.0 case, which is meant to demonstrate a framework we don't yet recognize, we can still
    infer the version of 1.0. The user can also override it as always we honor a TargetFrameworkIdentifier
    or TargetFrameworkVersion that is already set.
   -->

In a project, I was targeting net45, netstandard1.3, and .NETPortable,Version=v4.0,Profile=Profile328.

The auto migration only goes so far:
net45;netstandard1.3;portable40-net40+sl5+win8+wp8+wpa81

There are some other properties added for portable40-net40+sl5+win8+wp8+wpa81, but the end result is that, on build, MSBuild doesn’t know what portable40-net40+sl5+win8+wp8+wpa81 means.

To fix this, translate Profile328 into what the comments in the targets file describe. I also used this from Microsoft as a guide for profile targets.

I added:

<PropertyGroup Condition="'$(TargetFramework)' == 'portable-net40+sl5+win8+wpa81+wp8'">
    <TargetFrameworkIdentifier>.NETPortable</TargetFrameworkIdentifier>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <TargetFrameworkProfile>Profile328</TargetFrameworkProfile>
</PropertyGroup>

The name portable-net40+sl5+win8+wpa81+wp8 could be anything, really, as long as the entry in TargetFrameworks and the condition match; it’s the PropertyGroup above that actually supplies the profile, version, and identifier to MSBuild.
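For reference, the matching entries in TargetFrameworks look like this (the string just has to match the condition above exactly):

<PropertyGroup>
    <TargetFrameworks>net45;netstandard1.3;portable-net40+sl5+win8+wpa81+wp8</TargetFrameworks>
</PropertyGroup>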

Here’s the complete working csproj

Why couldn’t the migration do this for you? I don’t know.

Creating async AutoMapper mappings

I’ve finally come to embrace AutoMapper after a long love-hate relationship over the years. The basic use case was always useful, but there were always edge cases where it fell down for me. I usually figure this was my fault, from misusing or misunderstanding it.

In my current usage, I’ve come across the need to use async alongside mapping data to a domain object. Mapping to a domain object isn’t just for DB calls (though those can be async as well). While you might want to do crazier things like an HTTP call for data to map, my use case for this is simple: a Redis cache. The cache contains things from the DB, and the API is rightfully async.

AutoMapper Type Converters

AutoMapper does provide a way to custom-build a type: TypeConverters. However, the API is sync, and it seems unlikely that async support will come to this or any similar API in AutoMapper. The work of maintaining sync and async code paths is non-trivial, and requiring async code paths for simple mappings does seem silly.

I wish there was a good way to do this but there doesn’t seem to be. Enter AsyncMapper!

Narochno.AsyncMapper

This is a library that sits on top of AutoMapper and aims to basically provide async versions of a Type Converter.

AsyncMapper first looks for its own interfaces for the requested mapping. If found, it uses that. If not found, it forwards to AutoMapper. Ideally, you’d still use AutoMapper inside AsyncMappings to do the grunt work of mapping as well.

My toy example should illustrate what I’m after.
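In spirit, an async mapping looks something like this (a hypothetical sketch of the idea; the actual Narochno.AsyncMapper interfaces may differ):

using System.Threading.Tasks;
using AutoMapper;
using Microsoft.Extensions.Caching.Distributed;

public class ArticleRecord
{
    public int Id { get; set; }
    public int AuthorId { get; set; }
}

public class ArticleModel
{
    public int Id { get; set; }
    public string AuthorName { get; set; }
}

// Hypothetical async analogue of AutoMapper's ITypeConverter.
public interface IAsyncMapping<TSource, TDestination>
{
    Task<TDestination> Map(TSource source);
}

public class ArticleMapping : IAsyncMapping<ArticleRecord, ArticleModel>
{
    private readonly IDistributedCache cache;
    private readonly IMapper mapper;

    public ArticleMapping(IDistributedCache cache, IMapper mapper)
    {
        this.cache = cache;
        this.mapper = mapper;
    }

    public async Task<ArticleModel> Map(ArticleRecord source)
    {
        // AutoMapper still does the grunt work of the synchronous mapping...
        var model = mapper.Map<ArticleModel>(source);

        // ...while the async part (here, a Redis lookup) happens around it.
        model.AuthorName = await cache.GetStringAsync($"author-name:{source.AuthorId}");
        return model;
    }
}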

The intent is that this is just a small helper library for async operations; AutoMapper is still the primary way to map.

What next?

Look at Narochno.AsyncMapper and see how it looks and feels. There are a few things on the TODO list, but I wouldn’t grow more functionality into it. This library greatly assists the parts of my mapping organization that need async.

MediatR and me

I was only recently introduced to MediatR through the magic of Reddit.

Jimmy Bogard also does the excellent AutoMapper tool, so it was worth looking into. He’s written about his use of it for a long while on his blog.

Flattening the layers

His posts on implementation patterns and dealing with duplication are the real gems though.

My particular problem is that, across the three layers in my REST API, I find I’m constantly injecting new classes to try to avoid duplication of logic. This MSDN post actually articulates the problem I’ve got with repositories and whatnot.

Given my layers:

  • Web (Controllers in ASP.NET Core terms)
  • Business stuff
  • Data Access

I want to keep some separation so I can concentrate on just the testable logic in my business layer. Web should just transform a request and call the relevant business object. The business logic needs data sometimes.

He says, “I want MediatR to serve as the outermost window into the actual domain-specific behavior in my application,” which is great. The end result is a Controller class that barely does anything except call IMediator.
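That shape looks something like this (an illustrative controller, not his or my exact code):

using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

[Route("api/locations")]
public class LocationsController : Controller
{
    private readonly IMediator mediator;

    public LocationsController(IMediator mediator)
    {
        this.mediator = mediator;
    }

    // The controller only turns the HTTP request into a message and sends it on.
    [HttpPost]
    public async Task<IActionResult> Create([FromBody] Locations.NewLocation request)
    {
        var location = await mediator.Send(request);
        return Ok(location);
    }
}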

However, I don’t want a handler that just duplicates my problem and hides it behind MediatR so that only my controller is prettier. How do I organize things to be simpler while still having some layering, with reusability and little duplication?

Handlers calling Handlers

Really what happens is that my “business” layer handlers end up calling other business handlers or data handlers.

My unit tests look like a series of these Moq statements:

mediator.Setup(x => x.Send(It.IsAny<IRequest<Location>>(), CancellationToken.None))
    .Callback<IRequest<Location>, CancellationToken>((r, y) =>
    {
        var x = (Locations.NewLocation) r;
        Assert.Equal(newSite.Name, x.Name);
        Assert.Equal(userName, x.CreatedBy);
        Assert.Equal(2, x.SiteId);
        Assert.Equal(4, x.LocationTypeId);
        Assert.Equal(345, x.ReportingUnitId);
    })
    .ReturnsAsync(location);

I’m validating that I’m passing a message with MediatR and validating the message’s contents.

Is this bad?

It seems like it is.

MediatR basically just divides everything with loosely coupled message passing. It could all end up as a huge message soup with tons of layers of indirection.

Jimmy has a good example of how he uses MediatR, AutoMapper and other things with ASP.NET Core on github: https://github.com/jbogard/ContosoUniversityCore

However, the logic is just a basic CRUD app. Nothing needs to share anything.

Is there anything that can stop that?

Just discipline I guess.

The good: a cache in the pipeline!

(I’m waving my hands with the implementation details of caching as that’s code for another post.)

I put a Redis cache on my reference data from the database. I had a pattern around my data access, but it was a lot of copy/paste. Now I have a marker interface for my requests, and caching happens automatically because the MediatR pipeline resolves it.

The cache pipeline handler with a marker interface is declared like this:

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class

Now, any request that goes through MediatR and implements IUseDistributedCache will go through this pipeline handler.
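Since I’m hand-waving the implementation, here’s only a rough sketch of what that handler could do; IDistributedCache, Newtonsoft.Json, and the key scheme are my assumptions here, not the real code:

using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;

public class DistributedCachingHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : class, IUseDistributedCache
    where TResponse : class
{
    private readonly IDistributedCache cache;

    public DistributedCachingHandler(IDistributedCache cache)
    {
        this.cache = cache;
    }

    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        // Hypothetical key scheme: request type name plus the serialized request.
        var key = $"{typeof(TRequest).Name}:{JsonConvert.SerializeObject(request)}";

        var cached = await cache.GetStringAsync(key);
        if (cached != null)
        {
            return JsonConvert.DeserializeObject<TResponse>(cached);
        }

        // Cache miss: run the rest of the pipeline (ultimately the DB handler), then store.
        var response = await next();
        await cache.SetStringAsync(key, JsonConvert.SerializeObject(response));
        return response;
    }
}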

Actually, the generics and type resolution are not done by MediatR but by your IoC container of choice. I was sticking to the default container in ASP.NET Core. However, its resolution isn’t as sophisticated as StructureMap, Autofac, etc., and it ends up erroring when trying to create pipeline types generically constrained by the marker interface. So now, I’ve plugged in Autofac and still use IServiceCollection as I normally would.
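The wiring ends up looking roughly like this (an assumed Startup sketch using Autofac.Extensions.DependencyInjection; details vary by project):

using System;
using Autofac;
using Autofac.Extensions.DependencyInjection;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var builder = new ContainerBuilder();

    // Autofac honors the IUseDistributedCache constraint when closing the
    // open generic, which is where the default container fell over.
    builder.RegisterGeneric(typeof(DistributedCachingHandler<,>))
        .As(typeof(IPipelineBehavior<,>));

    builder.Populate(services); // keep registering through IServiceCollection as usual
    return new AutofacServiceProvider(builder.Build());
}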

The good part 2: FluentValidation and MediatR

(I’m waving my hands with the implementation details of this as that’s code for another post.)

FluentValidation is a good library for creating validation classes for various POCOs; the validation can then be plugged in anywhere. I want it plugged into two places: when ASP.NET Core is accepting a model from a REST call (using an ActionFilter) and also in my MediatR pipeline!

I made a MediatR pipeline handler that takes any request/response pair and sends it through FluentValidation. If any validators are registered, they are run.
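The shape is something like this (my sketch, again hand-waving the real implementation; throwing FluentValidation’s ValidationException is one choice among several):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using FluentValidation;
using MediatR;

public class ValidationHandler<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> validators;

    // The container injects every registered validator for this request type (possibly none).
    public ValidationHandler(IEnumerable<IValidator<TRequest>> validators)
    {
        this.validators = validators;
    }

    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        var failures = validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .ToList();

        if (failures.Count > 0)
        {
            throw new ValidationException(failures);
        }

        return await next();
    }
}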

For the Action Filter, FluentValidation is chained onto your AddMvc call and marks the ModelState as invalid. You can handle that in many ways, but I made another Action Filter to automatically return when the model is invalid.