Quality Assurance for Code

If you like design patterns, clean code, unit tests...this site is for you.

If not...troll a little bit, your opinions are welcome as well.

Composition over inheritance

Wednesday, September 14, 2016

I’ve heard this phrase a lot. Said it a couple of times too. Most of the discussions on the subject I’ve lost, but I’ll keep trying. Let’s compose, shall we?

I had a problem and decided to attack it with Command Query Responsibility Segregation (CQRS) and Event Sourcing, because CQRS is simpler than pure DDD while keeping many of DDD’s great advantages for dealing with complex business logic…And I think it’s quite simple once you get the hang of it.

My over-engineered solution

Yes!!! I know: I never should. But instead of fighting the bull for real right from the beginning, I prefer rehearsing as much as possible before going out into the arena.


My command stack contains one method per write operation. Each is very simple; the actual business logic is implemented in entities (POCOs). Every persistence call is asynchronous.

class ProductCommands {
    private readonly IWriter<Product> writer;
    private readonly Func<CreateProductCommand, Product> createProduct;

    public ProductCommands(IWriter<Product> writer, 
        Func<CreateProductCommand, Product> createProduct) {
        this.writer = writer;
        this.createProduct = createProduct;
    }

    public async Task Handle(CreateProductCommand cmd) {
        var product = createProduct(cmd);

        await writer.Save(new[] { product });
    }
}

Simple, right? Products are aggregate roots that hold an internal list of modification events. These events are the key to Event Sourcing. There are many implementations out there:

abstract class AggregateRoot {
    private readonly List<DomainEventBase> uncommittedEvents = 
        new List<DomainEventBase>();
    public IEnumerable<DomainEventBase> UncommittedEvents => 
        uncommittedEvents.AsReadOnly();

    public void ClearUncommittedEvents() {
        uncommittedEvents.Clear();
    }
    //...
}

The aggregates keep events for all the changes they have undergone until those events are committed and cleared.
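Just to make the mechanics concrete, an aggregate could record events like this (my sketch; it assumes the elided part of AggregateRoot exposes some protected member, here called Raise, that appends to uncommittedEvents):

class ProductRenamed : DomainEventBase {
    public string NewName { get; set; }
}

class Product : AggregateRoot {
    public string Name { get; private set; }

    public void Rename(string newName) {
        Name = newName;
        // Raise is assumed to be part of the elided AggregateRoot code.
        Raise(new ProductRenamed { NewName = newName });
    }
}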

Composition Root

At some point in my application there should be a place where all the instances are created. There, we will do something like:

var productCommands = new ProductCommands(
    NewProductWriter(),
    ProductFactory.Create);

From this point forward we’ll just need to change the way we create writer instances and the whole application will change its behavior.

private IWriter<Product> NewProductWriter() => 
    new DocumentStoreWriter<Product>();

For the time being let’s say we have this magical DocumentStoreWriter<T> that takes care of my persistence.
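The IWriter<TEntity> abstraction itself is never spelled out here; judging from how DocumentStoreWriter<T> and the decorators below use it, a minimal sketch would be:

// Minimal sketch (my assumption) of the persistence abstraction
// implemented by DocumentStoreWriter<T> and every decorator below.
interface IWriter<TEntity> {
    Task Save(IEnumerable<TEntity> entities);
}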

Dealing with events

Every time I save a product, I must also save its uncommitted events. This will be my first piece, a Decorator.


In real life this process will be asynchronous. The flow though is not very different.

class WriterEventCommittingDecorator<TEntity>: IWriter<TEntity> 
    where TEntity : AggregateRoot {
    private readonly IWriter<TEntity> inner;
    private readonly IEventStorage eventStorage;

    public WriterEventCommittingDecorator(
        IWriter<TEntity> inner, 
        IEventStorage eventStorage) {
        this.inner = inner;
        this.eventStorage = eventStorage;
    }

    public async Task Save(IEnumerable<TEntity> entities) {
        var eagerlyIteratedEntities = entities.ToList();

        var committingTasks = eagerlyIteratedEntities
            .Select(async t => {
                await eventStorage.Save(t.UncommittedEvents);
                t.ClearUncommittedEvents();
            });
        await Task.WhenAll(committingTasks);

        await inner.Save(eagerlyIteratedEntities);
    }
}

My new writer is:

private IWriter<Product> NewProductWriter() => 
    new WriterEventCommittingDecorator<Product>(
        new DocumentStoreWriter<Product>(/*...*/),
        new StandardEventStorage(/*...*/)
    );

Query Dbs

I don’t want (for this particular case) eventual consistency for my query databases, I prefer saving to all of them at once.


Programming this in ProductCommands would be too cumbersome, not to mention inextensible, and we are very likely to change the databases often. This is work for a Composite.


Again, it will be asynchronous.

class CompositeWriter<TEntity>: IWriter<TEntity> {
    private readonly List<IWriter<TEntity>> innerWriters;

    public CompositeWriter(params IWriter<TEntity>[] innerWriters) {
        this.innerWriters = innerWriters.ToList();
    }

    public Task Save(IEnumerable<TEntity> entities) {
        var innerSavingTasks = innerWriters.Select(t => t.Save(entities));

        return Task.WhenAll(innerSavingTasks);
    } 
}

My new writer is:

private IWriter<Product> NewProductWriter() => 
    new WriterEventCommittingDecorator<Product>(
        new CompositeWriter<Product>(
            new InitialStateWriter<Product>(/*...*/),
            new CurrentStateWriter<Product>(/*...*/),
            new ProductSearchDbWriter(/*...*/),
            new ProductReportsDbWriter(/*...*/)
        ),
        new StandardEventStorage(/*...*/)
    );

I have also replaced the magic DocumentStoreWriter<T> with the actual writers that will take care of the job.

I am getting close…

One of my use cases is a standalone import that will take lots (hundreds of thousands) of products and save them all, one time each. It’s a textbook batch use case. Another decorator will do the trick.


Once more…asynchronous. I will use TPL Dataflow to build this one.

class WriterBatchDecorator<TEntity>: IWriter<TEntity> {
    private readonly IWriter<TEntity> inner;
    private readonly BatchBlock<TEntity> batch;

    public WriterBatchDecorator(IWriter<TEntity> inner, int batchSize = 100) {
        this.inner = inner;

        batch = CreateBatchBlock(batchSize);
    }

    private BatchBlock<TEntity> CreateBatchBlock(int size) {
        // Not static: the ActionBlock captures the inner writer instance.
        var save = new ActionBlock<TEntity[]>(
            entities => inner.Save(entities));

        var block = new BatchBlock<TEntity>(size);
        block.LinkTo(save);

        return block;
    }

    public Task Save(IEnumerable<TEntity> entities) {
        var saveTasks = entities
            .Select(e => batch.SendAsync(e));

        return Task.WhenAll(saveTasks);
    }
}
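One caveat worth noting (my addition, not part of the original design): a BatchBlock buffers entities until it has batchSize of them, so a partially filled batch needs an explicit flush at shutdown. A sketch, assuming the decorator also keeps the ActionBlock in a field named save and links it with PropagateCompletion:

// Hypothetical shutdown hook; assumes a save field for the ActionBlock and
// batch.LinkTo(save, new DataflowLinkOptions { PropagateCompletion = true }).
public async Task CompleteAsync() {
    batch.TriggerBatch();   // emit whatever is buffered as a final, short batch
    batch.Complete();       // signal that no more input will arrive
    await save.Completion;  // wait until the inner writer has seen everything
}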

My new writer is:

private IWriter<Product> NewProductWriter() => 
    new WriterEventCommittingDecorator<Product>(
        new WriterBatchDecorator<Product>(
            new CompositeWriter<Product>(
                new InitialStateWriter<Product>(/*...*/),
                new CurrentStateWriter<Product>(/*...*/),
                new ProductSearchDbWriter(/*...*/),
                new ProductReportsDbWriter(/*...*/)
            ),
            BatchSize
        ),
        new StandardEventStorage(/*...*/)
    );

I have put the batch inside the event committing decorator because I want the events to be committed as soon as possible. I could use a similar decorator for the event storage. For the actual interactive solution we might just leave the batch out, which simply means having a different composition root there.

The next thing could be adding some explanatory variables to avoid dizziness.

private IWriter<Product> NewProductWriter() {
    var composite = new CompositeWriter<Product>(
        new InitialStateWriter<Product>(/*...*/),
        new CurrentStateWriter<Product>(/*...*/),
        new ProductSearchDbWriter(/*...*/),
        new ProductReportsDbWriter(/*...*/)
    );

    var batch = new WriterBatchDecorator<Product>(
        composite,
        BatchSize
    );

    var eventStorage = new StandardEventStorage(/*...*/);
    var eventCommitting = new WriterEventCommittingDecorator<Product>(
        batch,
        eventStorage
    );

    return eventCommitting;
}

Now I have everything the user needs. Let’s add one more thing, this time for us programmers: logs. As you might guess by now, it will be yet another decorator:

class WriterLoggingDecorator<TEntity>: IWriter<TEntity> {
    private readonly IWriter<TEntity> inner;
    private readonly ILogger logger;

    public WriterLoggingDecorator(
        IWriter<TEntity> inner,
        ILogger logger) {
        this.inner = inner;
        this.logger = logger;
    }

    public async Task Save(IEnumerable<TEntity> entities) {
        var eagerlyIteratedEntities = entities.ToList();

        var count = eagerlyIteratedEntities.Count;
        logger.Trace($"{typeof(TEntity)} writing {count}");

        try {
            logger.Trace("Begining actual save");

            inner.Save(eagerlyIteratedEntities);

            logger.Trace("Done with actual save");
        } catch (Exception ex) {
            logger.Error(ex);
        }
    }
}

Activating the logger will be as simple as including it in the composition root.

    // ...
    var composite = new CompositeWriter<Product>(
        new InitialStateWriter<Product>(/*...*/),
        new CurrentStateWriter<Product>(/*...*/),
        new ProductSearchDbWriter(/*...*/),
        new ProductReportsDbWriter(/*...*/)
    );

    var logging = new WriterLoggingDecorator<Product>(
        composite, 
        logger);

    var batch = new WriterBatchDecorator<Product>(
        logging,
        BatchSize
    );

The logger has been placed inside the batch, so entities going into the batch will not be logged; only those coming out of it will. Different setups can serve different purposes, and we might want several loggers around, in many places.

Outro

With a set of very small components it was easy to compose an application with the desired behavior. Each component has a simple and well defined responsibility. Once everything is in place, the whole application can be tweaked to improve performance and traceability. It’s simpler to tweak just one place than many (shotgun surgery), and that one place is the composition root. Many new behaviors can be implemented in the form of reusable decorators.

Now…let’s imagine how that would look with inheritance.

Feature branching and CI

Thursday, October 29, 2015

This is me….being run over by a branching strategy.

In some projects I work on, we kind of follow git flow. It is awesome! The problem is that somehow the branches have become a technique to control what gets released (a kind of feature toggle). It means that when features are ready, their branches are put on hold until we decide we can release them.

Feature Branching is a poor man’s modular architecture, instead of building systems with the ability to easy swap in and out features at runtime/deploytime they couple themselves to the source control providing this mechanism through manual merging.

Dan Bodart

An example

This example is taken from Martin Fowler.

Let’s say we have feature branches. Reverend Green and Professor Plum start working on their branches, G and P respectively. In the center there is the develop branch D, where the branches are merged when ready. At some point Reverend Green’s branch is merged, creating commit D4. Now Professor Plum’s branch is out of date and must be synchronized by merging in from develop, which creates commit P5.


What’s the big deal? The bigger this P5 merge is, the more likely Professor Plum’s branch ends up being completely useless. The problem comes from semantic changes in the code: a method that no longer does what it did before, a class that has been removed, some refactoring (methods extracted, arguments added/removed) and so on. The code might even compile, but the purpose of the branch will be lost.

Merge paranoia

Long-lived feature branches take you to a point where most merges are dangerous. Sometimes because they are too big, other times because the branches that are supposed to be released are way too out of date.

All this merge avoidance is called merge paranoia. It’s a recognized condition, caused by a code base in the early stages of the Software Peter principle. It means that the technical debt has become so big that any tiny little change could bring the whole thing down.

This is a problem, but source control can do absolutely nothing about it.

Lost work

I took a repository, created a PowerShell script and extracted, for each open branch, its difference with develop, its first and last commit dates, and two time spans: since the last commit, and between the first and last commits. The numbers were alarming.

Just a brief extract:

Branch  Files changed  Today()-LastCommit.Date (days)  LastCommit.Date-FirstCommit.Date (days)
B01     1435           665                             3
B02     1389           665                             0
B31     302            133                             160
B37     207            101                             77
B38     47             80                              0
B39     30             65                              7
B40     8              58                              0
B41     8              58                              0
B42     7              57                              0
B43     1              56                              0
B44     1              56                              0

There are 44 branches; the ones missing have been hidden to abbreviate. The youngest branch is almost 2 months old, the oldest almost 2 years. Most people don’t remember what they have done after a couple of weeks. The time span between first and last commits gets as big as 5 months. I could safely affirm that from B01 to B39 there is absolutely nothing to do: they are lost…forever. This repository has had 257 branches in total, so that is 15.2% of all the work done here.

Imagine your boss saying: – Didn’t we solve this problem months ago? – Yeah…but we never released. – OK, let’s release it now. – Uh…here is the thing.

So…what do we do?

The time came: fresh blood and a brand new project. Let’s try the thing as it was defined!! For real!! And it turned out: it works!!

We have user stories broken down into tasks. Each task must take less than a day, or it gets broken down further. For each task we create at least one new branch. Those branches exist for a couple of hours. When done, each branch is merged into develop via pull request: peer reviewed, approved, merged and closed.


Merge conflicts are very common but extremely easy to solve. We have tons of unit tests and static code analysis that tell us when we have broken something or increased the technical debt too much.

The productivity boost has been amazing!! But there is still a lot of room for improvement.

References

Software Branching and Parallel Universes
What is trunk based development
Git flow
Feature Branch
Feature Toggles
CI on a Dollar a Day

Variances

Monday, August 31, 2015

How to use variant generics in C#

Intro

Covariance and contravariance have been around for quite a while. They refer to subtyping of complex types, and they are present in many object-oriented languages.

Let’s take the following code:

object[] arr = new string[10]; // Ok for the compiler.

arr[0] = "some string"; // Ok for the compiler and execution.
arr[2] = 1; // Ok for the compiler, execution error.

So string[] is treated as a subtype of object[], and write/read operations have become covariant. It is as if we wrote:

class ArrayOfObject {
    public virtual void Set(int index, object value) { /* ... */ }
    public virtual object Get(int index) { /* ... */ }  
}

class ArrayOfString: ArrayOfObject {
    // Invalid: an override cannot narrow parameter types or change the return type.
    public override void Set(int index, string value) { /* ... */ }
    public override string Get(int index) { /* ... */ }
}

This code won’t compile in C#. The reason is obvious: it leads to execution errors. Somehow, though, it was allowed for arrays.
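The hole that array covariance opens is patched at run time: the CLR type-checks every store into a covariant array. A small illustration:

// The compiler accepts this store, but the CLR checks the element type
// of the actual array at run time and throws.
object[] arr = new string[1];
try {
    arr[0] = 1; // the int is boxed; the store is rejected at run time
} catch (ArrayTypeMismatchException) {
    Console.WriteLine("covariant array store rejected");
}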

Formally

An overriding function’s parameters or return type are said to be covariant if they have a more specific type than those of the function being overridden. They are said to be contravariant if it is the other way around.

It’s always safe to return a more specific type (covariance) and receive (parameters) a less specific one (contravariance).

C# allows variances on interfaces and delegates. So given:

class Base {}
class Desc: Base {}

Contravariant interfaces

interface IContravariant<in T> {
    void Foo(T t);
}

class Contravariant<T>: IContravariant<T> {
    public void Foo(T t) { /* ... */ }
}

IContravariant<Desc> c = new Contravariant<Base>();

Covariant interfaces

interface ICovariant<out T> {
    T Bar();
}

class Covariant<T>: ICovariant<T> {
    public T Bar() { /* ... */ }
}

ICovariant<Base> c = new Covariant<Desc>();

Contravariant delegate parameters

delegate void Contravariant<in T>(T t);

Contravariant<Base> boo = x => {};
Contravariant<Desc> foo = boo;

Covariant delegate return

delegate T Covariant<out T>();

Covariant<Desc> boo = () => default(Desc);
Covariant<Base> foo = boo;

Sum and wrap up

Covariance is safe for outputs, hence the out keyword; contravariance is safe for inputs, hence the in keyword. The compiler will take care of disallowing variance when no modifier is applied to an interface or delegate, allowing the proper variance with the proper modifiers, and rejecting the wrong ones.
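The BCL itself ships variant interfaces and delegates, so IEnumerable<out T> and Action<in T> make for a quick sanity check:

// IEnumerable<out T> is covariant: strings can flow out as objects.
IEnumerable<string> strings = new List<string> { "a", "b" };
IEnumerable<object> objects = strings;

// Action<in T> is contravariant: a handler of objects accepts strings.
Action<object> logAny = o => Console.WriteLine(o);
Action<string> logStrings = logAny;
logStrings("variance at work");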

Bootstraps

Monday, June 1, 2015

Improve your startup sequences with decoupled, convention-based modules: Bootstraps

Intro

I love extensibility. If it comes in automatic-based-on-conventions form, even better. I love plugins, reflection and everything else that would allow my application to grow and change without touching existing code. I know…it’s hard, but I like it too much. So I’ll keep trying…and failing.

A bootstrap is a class that is instantiated only once. It gets executed at some point during the initialization sequence and performs some simple initialization from its constructor.

public class InitializeSomethingBootstrap {
    public InitializeSomethingBootstrap(ISomeDependency s) {
        s.Initialize();
    }
}

It might not look like much, but bear with me a little longer; I will show you some juice.

Real life use case

Let’s take AutoMapper for instance. Before using it, it needs to be configured.

    Mapper.CreateMap<Source, Destination>();

This line could be part of our application startup sequence, but as the domain evolves more of these lines will be necessary. Instead, we can move this kind of code into a bootstrap, one per subdomain. Any time a new subdomain joins the picture there will be absolutely no need to modify existing code.

public class PersonAutoMapperBootstrap {
    public PersonAutoMapperBootstrap() {
        Mapper.CreateMap<Person, PersonDto>();
        Mapper.CreateMap<Person, PersonDetailedDto>();
    }
}

Now…there is something about AutoMapper. I really don’t like static things, except very small stateless functions. But since AutoMapper is the best of its kind and there is no simple way around its static nature, I’d rather abstract it and hide its use away.

public class PersonAutoMapperBootstrap {
    public PersonAutoMapperBootstrap(IMappings mappings) {
        mappings
            .Add(Mapping.From<Person>().To<PersonDto>())
            .Add(Mapping.From<Person>().To<PersonDetailedDto>());
    }
}

Cool, right? The fancy fluent interface is a plus…in real life I might not have the time for it, but now…let’s fly. Behind this IMappings there could be a very simple implementation that just calls the static method. Of course, there could be much more complex mappings, but this ain’t about abstracting AutoMapper, so let’s take that subject up some other time.
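Such an implementation could be as small as this sketch (the shapes of IMapping and IMappings are my assumption, inferred from the bootstrap above and the test below):

// Hypothetical shapes, inferred from usage; not this post's actual code.
public interface IMapping {
    Type SourceType { get; }
    Type DestinationType { get; }
}

public interface IMappings {
    IMappings Add(IMapping mapping);
}

public class AutoMapperMappings : IMappings {
    public IMappings Add(IMapping mapping) {
        // Delegate to the static API we are hiding.
        Mapper.CreateMap(mapping.SourceType, mapping.DestinationType);
        return this; // returning this enables the fluent chaining used above
    }
}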

Testability

Another huge advantage of abstractions is testing. We can provide a fake IMappings and seal this proper behavior with blood over stone (or even better: with unit tests).

public class PersonAutoMapperBootstrapFixture {
    [Theory, AutoMoqData]
    public void It_creates_a_mapping_from_Person_to_PersonDto(
        [Frozen] Mock<IMappings> mappings,
        PersonAutoMapperBootstrap sut) {

        mappings.Verify(m => 
            m.Add(
                It.Is<IMapping>(x => 
                    x.SourceType == typeof(Person)
                    && x.DestinationType == typeof(PersonDto)
                )
            )
        );  
    }
}

This code says that after creating an instance of PersonAutoMapperBootstrap, its dependency IMappings (which in this case is a Mock<IMappings>) will have been called with an IMapping with the specified SourceType and DestinationType. Here I’m using xUnit, Moq and AutoFixture. You will find a lot about testing with these tools on Ploeh’s blog.

Initialization sequence

I use behavior chains for my initialization sequences. With time I might need to add steps, remove old ones…or whatever. This pattern has been very useful for me. For the bootstraps I need two steps.

public class FindBootstraps {
    public class Input {
        public IEnumerable<Assembly> Assemblies { get; set; }
    }
    public class Output {
        public IEnumerable<Type> Bootstraps { get; set; }
    }

    public static Output Run(Input input) {
        return new Output {
            Bootstraps = input.Assemblies
                .SelectMany(a => a.GetTypes())
                .Where(t => t.Name.EndsWith("Bootstrap"))
        };
    }
}

This one finds the bootstrap types: types contained in one of the Assemblies and following the naming convention EndsWith("Bootstrap").

public class ExecuteBootstraps {
    public class Input {
        public IResolver Resolver { get; set; }
        public IEnumerable<Type> Bootstraps { get; set; }
    }
    public class Output {}

    public static Output Run(Input input) {
        foreach (var b in input.Bootstraps)
            input.Resolver.Resolve(b);

        return new Output();
    }
}

This one executes the bootstraps (surprise!!). The find step must have been executed, and the dependency injection container must be completely configured, by the time we get here.
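Wired together at startup, the two steps could be used like this (a sketch; resolver stands for whatever abstraction wraps your DI container):

// Hypothetical startup wiring for the two steps above.
var found = FindBootstraps.Run(new FindBootstraps.Input {
    Assemblies = AppDomain.CurrentDomain.GetAssemblies()
});

ExecuteBootstraps.Run(new ExecuteBootstraps.Input {
    Resolver = resolver, // assumed to be completely configured by now
    Bootstraps = found.Bootstraps
});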

Another real life use case

Registries are great! But they are global resources by definition, and people tend to implement them as static resources. I don’t like statics, remember?

This is a perfect job for the bootstrap:

public class PeopleErrorHandlerBootstrap {
    public PeopleErrorHandlerBootstrap(
        IErrorHandlerRegistry errorHandlers,
        IPeopleErrorHandler peopleErrorHandler) {

        errorHandlers.Add(peopleErrorHandler);
    }   
}

As the application grows, more and more error handlers will join in. We can keep up by just creating more bootstraps. There won’t be any need to modify existing code, and yet we’ll be extending the application.
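The registry itself can then be an ordinary injected instance instead of a static. A minimal sketch (its shape is my assumption; IPeopleErrorHandler is presumed to derive from a common IErrorHandler):

// Hypothetical registry, registered as a single instance in the container.
public interface IErrorHandlerRegistry {
    void Add(IErrorHandler handler);
}

public class ErrorHandlerRegistry : IErrorHandlerRegistry {
    private readonly List<IErrorHandler> handlers = new List<IErrorHandler>();

    public void Add(IErrorHandler handler) => handlers.Add(handler);

    // Consumers iterate the registered handlers through this view.
    public IEnumerable<IErrorHandler> All => handlers;
}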

Execution order

Sometimes it will make sense to execute a bootstrap after another (or several others). If the DAG of prerequisites gets really messy, we should create some convention to make sure they are executed in the proper order, e.g. a PriorityAttribute. For the simple case, specifying the prerequisites as constructor dependencies is enough.

public class SomeBootstrap {
    public SomeBootstrap(SomeOtherBootstrapThatMustBeExecutedBeforeThisOne b) {
        b.DoSomethingUseful();
    }
}

Outro

Bootstraps simplify extensibility in a decoupled, testable manner. They are a simple technique that solves a not-so-small problem. They can be used seamlessly along with many technologies, with no sweat; we just require some automatic discoverability (e.g. via reflection). They are in use in many commercial, real-life projects I have been involved in.

Put your BLL monster in Chains

Friday, May 15, 2015

How to use FubuMVC’s behavior chains pattern (BMVC) to improve BLL maintainability

Introduction

A very popular architecture for enterprise applications is the triplet Application, Business Logic Layer (BLL), Data Access Layer (DAL). For some reason, as time goes by, the Business Layer starts getting fatter and fatter, losing its health in the process. Perhaps I was doing it wrong.

Somehow very well designed code gets old and turns into a headless monster. I have run into a couple of these monsters and have been able to tame them using FubuMVC’s behavior chains: a pattern designed for web applications that I have found useful for breaking down complex BLL objects into nice, maintainable pink ponies.

Paradise beach service

I need an example to make this work. So let’s go to the beach. Spain has some of the best beaches in Europe. Let’s build a web service to search for the dream beach. I want the clients to enter some criteria: province, type of sand, nudist, surf, and some weather conditions, since some people might like sun, others shade, and surfers will certainly want some wind. The service will return the whole matching list.

There would be 2 entry points:

  • Minimal. Results will contain only beach Ids. Clients must have downloaded the JSON beach list.
  • Detailed. Results will contain all information I have about the beaches.

The weather report will be downloaded from a free on-line weather service like OpenWeatherMap. All dependencies will be abstract and constructor injected.

public IEnumerable<BeachMin> SearchMin(SearchRequest request) {
    var candidates = beachDal.GetBeachesMatching(
        request.Location, 
        request.TypeOfSand, 
        request.Features);

    var beachesWithWeatherReport = candidates
        .Select(x => new {
            Beach = x,
            Weather = weather.Get(x.Locality)
        });

    var requestSky = request.Weather.Sky;
    var filteredBeaches = beachesWithWeatherReport
        .Where(x => x.Weather.Wind == request.Weather.Wind)
        .Where(x => (x.Weather.Sky & requestSky) == requestSky)
        .Select(x => x.Beach);

    var orderedByPopularity = filteredBeaches
        .OrderBy(x => x.Popularity);

    return orderedByPopularity
        .Select(x => x.TranslateTo<BeachMin>());
}

This is very simple and might look like good code. But hidden in these few lines is a single responsibility principle violation: I’m fetching data from a DAL and from an external service, filtering, ordering and finally transforming data. There are five reasons for this code to change. It might look OK today, but problems will come later, as the code ages.

Let’s feed it some junk food

In any actual production scenario, this service will need some additions: logging, to see what is going on and get some nice looking graphs; caching, to make it more efficient; and some debug information to help us exterminate infestations. Where would all these behaviors go? To the business layer, of course. Nobody likes to put anything that is not database specific into the DAL, and the web service itself does not have access to what is really going on. So…everything else goes to the BLL. The following might look a little exaggerated, but believe me…it’s not.

public IEnumerable<BeachMin> SearchMin(SearchRequest request) {
    Debug.WriteLine("Entering SearchMin");

    var stopwatch = new Stopwatch();
    stopwatch.Start();  

    Logger.Log("SearchMin.Request", request);   

    Debug.WriteLine("Before calling DAL: {0}", stopwatch.Elapsed);
    var cacheKey = CreateCacheKey(request);
    var candidates = Cache.Contains(cacheKey)
        ? Cache.Get(cacheKey)
        : beachDal.GetBeachesMatching(
            request.Location, 
            request.TypeOfSand, 
            request.Features);
    Cache.Set(cacheKey, candidates);
    Debug.WriteLine("After calling DAL: {0}", stopwatch.Elapsed);

    Logger.Log("SearchMin.Candidates", candidates); 

    Debug.WriteLine(
        "Before calling weather service: {0}", 
        stopwatch.Elapsed);
    var beachesWithWeatherReport = candidates
        .Select(x => new {
            Beach = x,
            Weather = weather.Get(x.Locality)
        });
    Debug.WriteLine("After calling weather service: {0}", stopwatch.Elapsed);

    Logger.Log("SearchMin.Weather", beachesWithWeatherReport);

    Debug.WriteLine("Before filtering: {0}", stopwatch.Elapsed);
    var requestSky = request.Weather.Sky;
    var filteredBeaches = beachesWithWeatherReport
        .Where(x => x.Weather.Wind == request.Weather.Wind)
        .Where(x => (x.Weather.Sky & requestSky) == requestSky)
        .Select(x => x.Beach);
    Debug.WriteLine("After filtering: {1}", stopwatch.Elapsed);

    Logger.Log("SearchMin.Filtered", filteredBeaches);

    Debug.WriteLine("Before ordering by popularity: {0}", stopwatch.Elapsed);
    var orderedByPopularity = filteredBeaches
        .OrderBy(x => x.Popularity);
    Debug.WriteLine("After ordering by popularity: {0}", stopwatch.Elapsed);

    Debug.WriteLine("Exiting SearchMin");

    return orderedByPopularity
        .Select(x => x.TranslateTo<BeachMin>());
}

If you don’t own any code like the previous: bravo!! Lucky you. I have written way too many BLLs that look just like this one. Now, ask yourself: what, exactly, does a “Paradise beach service” have to do with logging, caching and debugging? Easy answer: absolutely nothing.

Usually there wouldn’t be anything wrong with this code. But every application needs maintenance. With time, business requirements will change and I will need to touch it. Then a bug will be found: touch it again. At some point the monster will wake up, and there will be no more good news from that point forward.

Actual business logic

Let’s see what I’m actually doing:

  1. Find candidate beaches. Those in specified province with wanted features and type of sand.
  2. Get weather report about each of the candidates.
  3. Filter out those beaches not matching desired weather.
  4. Order by popularity.
  5. Transform the data into expected output.

This is how you would do it manually with a map and maybe a telephone and a patient operator to get the weather reports. This is exactly what a BLL must do, and nothing else.

I will implement a BLL for each of the previous steps. Each will have just one Execute method with one argument and a return value. Each step will have a meaningful, intention-revealing name, receive an argument of a type with the same name ending in Input, and return a type with the same name ending in Output. Conventions rock!!

public class FindCandidates : IFindCandidates
{   
    private readonly IBeachesDal beachesDal;

    public FindCandidates(IBeachesDal beachesDal)
    {
        this.beachesDal = beachesDal;
    }

    public FindCandidatesOutput Execute(FindCandidatesInput input)
    {
        var beaches = beachesDal.GetCandidateBeaches(
            input.Province, 
            input.TypeOfSand, 
            input.Features);

        return new FindCandidatesOutput
        {
            Beaches = beaches
        };
    }
}

public class GetWeatherReport : IGetWeatherReport
{
    private readonly IWeatherService weather;

    public GetWeatherReport(IWeatherService weather)
    {
        this.weather = weather;
    }

    public GetWeatherReportOutput Execute(GetWeatherReportInput input)
    {
        var beachesWithWeather = input.Beaches
            .Select(NewCandidateBeachWithWeather);

        return new GetWeatherReportOutput
        {
            BeachesWithWeather = beachesWithWeather
        };
    }

    private CandidateBeachWithWeather NewCandidateBeachWithWeather(
        CandidateBeach x)
    {
        var result = x.TranslateTo<CandidateBeachWithWeather>();
        result.Weather = weather.Get(x.Locality);

        return result;
    }
}

public class FilterByWeather : IFilterByWeather
{
    public FilterByWeatherOutput Execute(FilterByWeatherInput input)
    {
        var filtered = input.BeachesWithWeather
            .Where(x => x.Weather.Sky == input.Sky)
            .Where(x => input.MinTemperature <= x.Weather.Temperature 
                && x.Weather.Temperature <= input.MaxTemperature)
            .Where(x => input.MinWindSpeed <= x.Weather.WindSpeed 
                && x.Weather.WindSpeed <= input.MaxWindSpeed);

        return new FilterByWeatherOutput
        {
            Beaches = filtered
        };
    }
}

public class OrderByPopularity : IOrderByPopularity
{
    public OrderByPopularityOutput Execute(OrderByPopularityInput input)
    {
        var orderedByPopularity = input.Beaches.OrderBy(x => x.Popularity);

        return new OrderByPopularityOutput
        {
            Beaches = orderedByPopularity
        };
    }
}

public class TranslateToBeachMin : IConvertToMinResult
{
    public TranslateToBeachMinOutput Execute(
        TranslateToBeachMinInput input)
    {
        return new TranslateToBeachMinOutput
        {
            Beaches = input.Beaches
                .Select(x => x.TranslateTo<BeachMin>())
        };
    }
}

I know what you’re thinking: I took a 15 lines of code (LOC) program and transformed it into 100 or more…You are right. But let’s see what I have: five clean and small BLLs, each representing a part of our previous single BLL. Their dependencies are abstract, which also makes them easy to test thoroughly. Because they are so small, they are easy to manage, maintain, substitute and even reuse. For instance, you don’t really need to have performed a live weather search to get a list of beaches and weather conditions to be filtered; you just need to create the input for any given step and voilà, you can execute that particular step. At the end I added a step to translate CandidateBeach into BeachMin, which is the response I really need for our original service. I also extracted interfaces for each of the steps; it helps with abstractions and some other things I’ll do later.

Chain’em up

public IEnumerable<BeachMin> SearchMin(SearchRequest request) {
    var candidates = findCandidates.Execute(
        new FindCandidatesInput {
            Province = request.Province,
            TypeOfSand = request.TypeOfSand,
            Features = request.Features
        });

    var candidatesWithWeather = getWeatherReport.Execute(
        new GetWeatherReportInput {
            Beaches = candidates.Beaches
        });

    var filtered = filterByWeather.Execute(
        new FilterByWeatherInput {
            BeachesWithWeather = candidatesWithWeather.BeachesWithWeather,
            Sky = request.Sky,
            MinTemperature = request.MinTemperature,
            MaxTemperature = request.MaxTemperature,
            MinWindSpeed = request.MinWindSpeed,
            MaxWindSpeed = request.MaxWindSpeed
        });

    var orderedByPopularity = orderByPopularity.Execute(
        new OrderByPopularityInput {
            Beaches = filtered.Beaches
        });

    var result = translateToBeachMin.Execute(
        new TranslateToBeachMinInput {
            Beaches = orderedByPopularity.Beaches
        });

    return result.Beaches;
}

What do you know? I’m back to 15 LOC, maybe less. I think this code doesn’t even need explaining: I took our steps and chained them into a behavior chain. From now on we will refer to steps as behaviors. I’m kind of back where I started, but now our service depends on external, extensible, reusable and abstract behaviors. Still, it must know them all, which makes adding a new behavior difficult. Another thing: I will have almost identical code for the other entry point. I must do something to improve these two points.

Mechanize it

I know…Sarah Connor wouldn’t agree. I have this tool which takes some objects and automatically chains them together into a function, but first let’s see what a service depending on functions would look like.

public class SearchService : Service {
    private readonly Func<SearchMinRequest, SearchMinResponse> searchMin;
    private readonly Func<SearchDetailsRequest, SearchDetailsResponse> searchDetails;

    public SearchService(
        Func<SearchMinRequest, SearchMinResponse> searchMin,
        Func<SearchDetailsRequest, SearchDetailsResponse> searchDetails) {
        this.searchMin = searchMin;
        this.searchDetails = searchDetails;
    }

    public object Any(SearchMinRequest request) {
        return searchMin(request);
    }

    public object Any(SearchDetailsRequest request) {
        return searchDetails(request);
    }
}

I’m using ServiceStack as the web framework. Basically, both Any methods in the example are web service entry points. As you can see, they delegate the actual work to functions injected through the constructor. At some point, which for ServiceStack is the application configuration, I need to create these functions and register them in the IoC container.

public override void Configure(Container container) {
    //...
    var searchMin = Chain<SearchMinRequest, SearchMinResponse>(
        findCandidates,
        getWeatherReport,
        filterByWeather,
        orderByPopularity,
        translateToBeachMin);

    var searchDetails = Chain<SearchDetailsRequest, SearchDetailsResponse>(
        findCandidates,
        getWeatherReport,
        filterByWeather,
        orderByPopularity,
        addDetails);

    container.Register(searchMin);
    container.Register(searchDetails);
    //...
}

private static Func<TInput, TOutput> Chain<TInput, TOutput>(
    params object[] behaviors) 
    where TInput : new() 
    where TOutput : new() 
{
    return behaviors
        .ExtractBehaviorFunctions()
        .Chain<TInput, TOutput>();
}

There are some points here that are worth mentioning:

  • Each behavior kind of depends on the previous one, but it doesn’t really know it
  • The chain is created from functions, which could be instance or static methods, lambda expressions or even functions defined in another language like F#
  • The ExtractBehaviorFunctions method takes in objects and extracts their Execute method, or throws an exception if there is none. This is my convention; you could define your own (see the sketch below)
  • The Chain method takes in delegates and creates a function by chaining them together. It will throw exceptions if incompatible delegates are used
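For reference, the extraction step could be reconstructed roughly like this (my sketch of the convention, not the actual library code):

// Hypothetical reconstruction: turn each behavior object's Execute method
// into a Func<TInput, TOutput> delegate bound to that instance.
static IEnumerable<Delegate> ExtractBehaviorFunctions(
    this IEnumerable<object> behaviors) =>
    behaviors.Select(b => {
        var execute = b.GetType().GetMethod("Execute");
        if (execute == null)
            throw new InvalidOperationException(
                b.GetType().Name + " has no Execute method");

        var funcType = typeof(Func<,>).MakeGenericType(
            execute.GetParameters()[0].ParameterType,
            execute.ReturnType);

        return Delegate.CreateDelegate(funcType, b, execute);
    });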

Additions

I will enrich our BLLs by means of transparent decorators. Using Castle.DynamicProxy, I will generate types which intercept the calls to our behaviors and add some features, and then register the decorated instances instead of the originals. I will start with caching and debugging. The cache is a trivial 10-minute in-memory one; more complicated solutions can easily be implemented.
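To give a rough idea of what such a generator produces (a sketch of mine, not the project's actual code), a Castle.DynamicProxy caching interceptor could look like this; ICache is assumed to expose the Contains/Get/Set trio used earlier in this post:

// Hypothetical caching interceptor wrapped around a behavior's Execute call.
public class CacheInterceptor : IInterceptor {
    private readonly ICache cache;

    public CacheInterceptor(ICache cache) {
        this.cache = cache;
    }

    public void Intercept(IInvocation invocation) {
        var key = invocation.Method.Name + ":" +
            string.Join("|", invocation.Arguments);

        if (cache.Contains(key)) {
            // Short-circuit: answer from the cache, skip the behavior.
            invocation.ReturnValue = cache.Get(key);
            return;
        }

        invocation.Proceed(); // run the decorated behavior
        cache.Set(key, invocation.ReturnValue);
    }
}

With interceptors like this in hand, the generators get registered: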

    container.Register(new ProxyGenerator());

    container.Register<ICache>(new InMemory10MinCache());

    container.RegisterAutoWired<CacheDecoratorGenerator>();
    CacheDecoratorGenerator = container.Resolve<CacheDecoratorGenerator>();

    container.RegisterAutoWired<DebugDecoratorGenerator>();
    DebugDecoratorGenerator = container.Resolve<DebugDecoratorGenerator>();

With this code our decorator generators are ready; let’s look now at how to decorate the behaviors.

    var findCandidates = DebugDecoratorGenerator.Generate(
        CacheDecoratorGenerator.Generate(
            container.Resolve<IFindCandidates>()));

    var getWeatherReport = DebugDecoratorGenerator.Generate(
            container.Resolve<IGetWeatherReport>());

    var filterByWeather = DebugDecoratorGenerator.Generate(
            container.Resolve<IFilterByWeather>());

    var orderByPopularity = DebugDecoratorGenerator.Generate(
        container.Resolve<IOrderByPopularity>());

    var convertToMinResult = DebugDecoratorGenerator.Generate(
        container.Resolve<IConvertToMinResult>());

Here I decorated every behavior with debugging, and only findCandidates with caching too. It might be interesting to add some caching to the weather report as well, but since the input might be a very big list of beaches, caching at that level won’t work well. Instead, I will add caching to both the DAL and the weather service.

    container.Register(c => DebugDecoratorGenerator.Generate(
        CacheDecoratorGenerator.Generate(
            (IBeachesDal) new BeachesDal(
                c.Resolve<Func<IDbConnection>>()))));

    container.Register(c => DebugDecoratorGenerator.Generate(
        CacheDecoratorGenerator.Generate(
            (IWeatherService) new WeatherService())));

Manual Decorators

Generated decorators are not enough for some tasks, and if you are a friend of IDE debugging they will certainly give you some headaches. There is always the manual choice.

public class FindCandidatesLogDecorator : IFindCandidates
{
    private readonly ILog log;
    private readonly IFindCandidates inner;

    public FindCandidatesLogDecorator(ILog log, IFindCandidates inner)
    {
        this.log = log;
        this.inner = inner;
    }

    public FindCandidatesOutput Execute(FindCandidatesInput input)
    {
        var result = inner.Execute(input);
        log.InfoFormat(
            "Execute({0}) returned {1}", 
            input.ToJson(), 
            result.ToJson());

        return result;
    }
}

By using more powerful IoC containers, like Autofac, you will be able to create more powerful decorators, both automatically generated and manual. You won’t ever have to touch your BLL unless there are business requirement changes or bugs.

When to use

When your BLL is a set of steps that are:

  • Well defined. The responsibilities are clear and have clear boundaries.
  • Independent. The steps don’t know each other.
  • Sequential. The order cannot be changed based on input. All steps must always be executed.

The behavior chain functions are static in nature; they are not meant to be altered during execution. You can, though, create a new function to replace an existing one based on any logic your specific problem requires.
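For example (a sketch; useNewWeatherProvider and getWeatherReportV2 are hypothetical names), the composition root could pick between two prebuilt chains:

// Hypothetical: choose the chain at configuration time, not at run time.
var searchMin = useNewWeatherProvider
    ? Chain<SearchMinRequest, SearchMinResponse>(
        findCandidates, getWeatherReportV2, filterByWeather,
        orderByPopularity, translateToBeachMin)
    : Chain<SearchMinRequest, SearchMinResponse>(
        findCandidates, getWeatherReport, filterByWeather,
        orderByPopularity, translateToBeachMin);

container.Register(searchMin);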

How it works

The generation code isn’t really that interesting: just a lot of hairy statements generating lambda expressions using the wonderful Linq.Expressions. You can still look at it in the source code. Let’s see instead how the generated code works. This is how the generated function looks, more or less.

var generatedFunction = 
    new Func<SearchDetailsRequest, SearchDetailsResponse>(req => {
        // Input
        var valuesSoFar = new Dictionary<string, object>();
        valuesSoFar["Provice"] = req.Provice;
        valuesSoFar["TypeOfSand"] = req.TypeOfSand;
        valuesSoFar["Features"] = req.Features;
        valuesSoFar["Sky"] = req.Sky;
        valuesSoFar["MinTemperature"] = req.MinTemperature;
        valuesSoFar["MaxTemperature"] = req.MaxTemperature;
        valuesSoFar["MinWindSpeed"] = req.MinWindSpeed;
        valuesSoFar["MaxWindSpeed"] = req.MaxWindSpeed;

        // Behavior0: Find candidates
        var input0 = new FindCandidatesInput {
            Province = (string)valuesSoFar["Province"],
            TypeOfSand = (TypeOfSand)valuesSoFar["TypeOfSand"],
            Features = (Features)valuesSoFar["Features"]        
        };
        var output0 = behavior0(input0);
        valuesSoFar["Beaches"] = output0.Beaches;

        // Behavior1: Get weather report
        var input1 = new GetWeatherReportInput {
            Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
        };
        var output1 = behavior1(input1);
        valuesSoFar["Beaches"] = output1.Beaches;

        // Behavior2: Filter by weather
        var behavior2Beaches = valuesSoFar["Beaches"];
        var input2 = new FilterByWeatherInput {
            BeachesWithWeather = 
                (IEnumerable<CandidateBeachWithWeather>)behavior2Beaches,
            Sky = (Sky)valuesSoFar["Sky"],
            MinTemperature = (float)valuesSoFar["MinTemperature"],
            MaxTemperature = (float)valuesSoFar["MaxTemperature"],
            MinWindSpeed = (float)valuesSoFar["MinWindSpeed"],
            MaxWindSpeed = (float)valuesSoFar["MaxWindSpeed"]       
        };
        var output2 = behavior2(input2);
        valuesSoFar["Beaches"] = output2.Beaches;

        // Behavior3: Order by popularity
        var input3 = new OrderByPopularityInput {
            Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
        };
        var output3 = behavior3(input3);
        valuesSoFar["Beaches"] = output3.Beaches;

        // Behavior4: addDetails
        var input4 = new AddDetailsInput {
            Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
        };
        var output4 = behavior4(input4);
        valuesSoFar["Beaches"] = output4.Beaches;

        // Output
        return new SearchDetailsResponse {
             Beaches = (IEnumerable<BeachDetails>)valuesSoFar["Beaches"]
        };
    });

Using the code

What is it good for if you cannot see it working?

Running the server project will start it on the configured port, 52451 by default. Now you need a client program. You can manually create a client project using ServiceStack, or any other web framework. You can also use the included LINQPad file at <project_root>\linqpad\search_for_beaches.linq, which basically does the following:

    var client = new JsonServiceClient("http://localhost:52451/");
    var response = client.Post(new SearchDetailsRequest {
        Province = "Huelva",
        Features = Features.Surf,
        MinTemperature = 0f,
        MaxTemperature = 90f,
        MinWindSpeed = 0f,
        MaxWindSpeed = 90f,
        TypeOfSand = TypeOfSand.White,
        Sky = Sky.Clear
    });

Conclusions

High code quality is very important if you want a maintainable application with a long lifespan. By choosing the right design patterns and applying some techniques and best practices, any tool will work for us and produce really elegant solutions to our problems. If, on the other hand, you learn just how to use the tools, you are gonna end up programming for the tools and not for the ones who sign your paychecks.