Dynamic TDD

This post is about dynamics in C#, TDD, and how we can mix all of this in a fun and fluent way ;-)

I presented this at the last ALT.NET Fr meetup, and after the very good first impressions from the community I now want to share it more widely.

For a long time now, I have been thinking about how dynamic can help with writing tests. Since my friend Thomas Pierrain started the NFluent library, I have kept in mind the idea of introducing dynamic into his library, because I think it could help.

I then came up with some ideas in the form of a small POC that I want to share with you. Instead of a long discourse, I preferred to record the following short videos, because the most interesting part of the concept is how things change constantly in a very small feedback loop. And thanks to Remco Mulder and his continuous testing tool NCrunch, it's even better and quicker ;-)

Introduction: problems, concepts


Dynamic TDD intro

Demo 1: 2-step dynamic implementation


Dynamic TDD Demo 1

Demo 2: Function mocking facilities


Dynamic TDD Demo 2

As I said, this is for now mostly ideas, concepts, and tests, but I think it's important to go deeper in that direction: getting a better feedback loop matters a lot, and we also have to experiment like this to discover new usages.

Any ideas and comments are welcome: you can post comments here, ping me on Twitter (@rhwy), or send me an email at rui at rui dot fr.

Happy coding!

Posted in Articles, Projects

NFluent Extensions

About

NFluent has been designed from the beginning to be easy to use and to provide a great user experience. That means we want the API to be as fluent as possible. An important part of NFluent's value is to allow smooth writing of all the checks. That's why we provide a large set of checks for all the base types.

But in order to make it enjoyable for everyone, we also provide a set of extension points that allow you to create checks for your own types. We hope NFluent provides the core DSL for your tests and enhances the quality you can put into writing them.

Default Checks

Just to refresh the concepts of NFluent checks, let's review some of the basics.

In order to provide a DSL for your tests, the checks are based on the type of the SUT (System Under Test). Then, depending on the type under test, you get a fully fluent writing experience, with only the methods allowed for that type.

For example, if you are testing an integer:

var age = 21; 
Check.That(age).IsPositive(); 
Check.That(age).IsGreaterThan(18); 

Or Dates:

var einstein = new DateTime(1879,3,14);
var anelka = new DateTime(1979,3,14);
Check.That(einstein).IsBefore(anelka);
Check.That(einstein).IsInSameMonthAs(anelka);
//This one doesn't exist but it could ;-)
//Check.That(einstein).StopDoingStupidComparisonsJustBecauseOfMatchingDates(anelka)

Or even better, with lists:

var user = new User(); 
Check.That(user.Roles).ContainsExactly("guest","anonymous"); 

NFluent Extensions

The default fluent checks provided in the core NFluent library are enough to replace your habitual asserts. But sometimes, when you work with a real domain (not an anemic one) and you want to provide a nice experience for the users of your library, it is very valuable to create your own NFluent checks!

This fits very well with recurring checks you need to perform.

The whole secret of NFluent extensions lies in the ICheck&lt;T&gt; returned by Check.That&lt;T&gt;(T sut) (among other things).

The idea is to provide an extension method on the ICheck&lt;T&gt; interface for the T type you want to check. That's it! Don't forget that it is this check type you have to extend, not the T type itself.

In fact, there is another secret to being able to extend your checks… In order not to pollute the IntelliSense experience, the value of the type you are checking is not exposed on the ICheck&lt;T&gt; interface. That's why it needs to be cast to a compatible type:

ICheck<mytype> mycheck = thecheck; 
var runnableCheck = mycheck as IRunnableCheck<mytype>; 
mytype myvalue = runnableCheck.Value; 
//test my Value.

This is now mostly internals, and this cast is shown only for information. Since v0.11 we have a nice helper to do that, and this is how you should use it now:

static void MyExtension(this ICheck<mytype> check)
{
   var runnableCheck =
      ExtensibilityHelper<mytype>.ExtractRunnableCheck(check);

   mytype myvalue = runnableCheck.Value;

   //test my value and throw with a nice message if it is not what you expect
}

From here you can use the Value and check whatever you need.

Another important point is chaining. If your extension is only a simple test, you may return nothing and just throw if you don't get what you expect. On the other hand, if you are building more complex things, it is better to make your checks chainable. If you want to offer the users of your extension a happy, fluent syntax, you may want to be able to chain it with another operator. In this case, you have to encapsulate your code inside a dedicated execute method in the runner. So instead of the previous example, you may use:

static ICheckLink<ICheck<mytype>> MyExtension(this ICheck<mytype> check)
{
   var runnableCheck = ExtensibilityHelper<mytype>.ExtractRunnableCheck(check);
   return runnableCheck.ExecuteCheck(
      () =>
      {
         //do some test and throw if you're not happy
      },
      //add here a negated exception message for the NOT chaining
      );
}

Extend it now!

For example, imagine that your application has users with roles. In most of your business tests, you'll need to check that a user has a certain role.

Our User class for future usage:

public class User
{
   public int Id { get; set; }
   public string Name { get; set; }
   public IEnumerable<string> Roles { get; set; }

   public User(int id, string name, IEnumerable<string> roles = null)
   {
      Id = id;
      Name = name;
      Roles = roles ?? new List<string>();
   }
}

As many of your tests involve checking the existence of a user's roles, it is convenient to create a specific check in order to factor out some code and to be more domain-specific.

You just have to create a static class with your extension method on ICheck<User> like this:

public static class CheckUserExtensions
{
   public static void HasRole(this ICheck<User> user, string role)
   {
      var runnableCheck = ExtensibilityHelper<User>.ExtractRunnableCheck(user);
      User value = runnableCheck.Value;
      Check.That(value).IsNotEqualTo(null);
      Check.That(value.Roles).IsNotEqualTo(null);
      Check.That(value.Roles).Contains(role);
   }
}

Then, everywhere you need it, instead of writing a lot of checks you have real domain wording that makes sense:

[Test]
public void should_do_something_with_users()
{
   var user = new User(1, "rui", new[] { "admin", "editor" });
   Check.That(user).HasRole("admin");
}

Some conclusions

We saw some of the basics of NFluent and also how to create our own checks for our models.

NFluent, by its nature, provides a really nice way to produce tests that make sense, that are easy to write (with dedicated IntelliSense for each type) and easy to read (it's near-plain English). Don't forget that your tests should also be your documentation, and providing fluent, near-plain-English sentences instead of questionable asserts will enhance that.

If you are providing a library to other developers, it can also be very valuable to provide them with a testing library containing dedicated checks for your models. It will enhance their understanding of the domain and also help them write their own tests with the fluent interfaces you provide.

Happy Checking!

Posted in Articles, Technical Posts

From OOP to FP : About dependency injection and higher-order functions

This is the next part of the series "My 2 weeks trip from OOP to FP". To keep track of previous posts, here's a little TOC:

I don't know if this is the case for everybody, but when you start to learn a new language you tend to think about all the things you've learnt with the language you use on a daily basis. You would like to apply all the design patterns and techniques that you've always used with success (or not). This is not always possible, because design patterns are not as universal as you may think: they are tied to a paradigm, and thus only applicable in the context of the paradigm followed by the given language. I'm talking here about the Object-Oriented Paradigm (OOP) and the Functional Paradigm (FP). You can read about this in the Wikipedia definition as well:

…Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply object-orientation or more generally mutable state, are not as applicable in functional programming languages

The same rule applies to some of the S.O.L.I.D. principles. While the application of the SRP may be easily transposed to FP (you just have to replace the word "class" with "function" in the principle's description: "a class should have only a single responsibility"), it's not the case for other principles.

I would like to focus here on a pattern called "Dependency Injection", which is one way of achieving the DIP in SOLID. Before asking why I want to focus on that topic in the context of F#, let's recall what DI allows (from Wikipedia):

Dependency injection is a software design pattern that allows the removal of hard-coded dependencies and makes it possible to change them, whether at run-time or compile-time…

Reading on further :

…This can be used, for example, … or to choose stubs or mock objects in test environments vs. real objects in production environments…

Well, now we have a big picture of the benefits of using the Dependency Injection pattern in an OOP language like C#. It's so popular these days that when a team starts a new project, you rather have to justify why you don't want to use DI than why you're using it. Folks don't even ask themselves whether they need it at all. They just pick a DI framework, throw it into the project and continue without even thinking about it.

But what's important when you start using a functional programming language like F# is that you would like to have the same benefits that DI offers in an OO language:

  • Decoupling by removing hard coded dependencies

Before going further, let's recall the context of what I'm talking about.

Context

In my previous post, "Sharing code between C# and F#", I touched upon the architecture I set up for my C# and F# projects. Comparing both of them, we see clearly that the MVC (C#) project follows the Dependency Inversion Principle and the Calc Engine (F#) part does not. In this post we'll focus on the F# part, which looks like this:

It looks like the classic layered architecture used 10 years ago. But it is straightforward and seemed to me appropriate to start with F#. (I would be grateful to any F# experts to point me in the right direction if this should be done differently.) This is closely related to the nature of the relations that dependencies have in OO and functional languages.

Composition happens on different levels

As I said, one way of achieving decoupling is getting rid of hard-coded dependencies and depending on abstractions. In OO languages, composition is more coarse-grained, as it happens at the object level. Objects are composed into graphs, and the relations between objects are dependencies. That's why respecting the SRP is very important: one object depends on another because of the functionality the latter provides to the calling object. If an object we depend on carries more functionality, it may also be involved in dependencies with other objects, which makes the dependency graph more complicated.

In F#, composition happens at the function level. To me it's fine-grained composition, because well-written functions generally have a single responsibility. Compared to C#, it is as if we were composing methods rather than objects (I'm not saying that a method in C# == a function in F#).
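To make that fine-grained composition concrete, here is a small sketch of my own (the names are invented, not code from the project): two single-responsibility functions are glued together with the composition operator >>, and the "dependency" between them is just function plumbing.

```fsharp
// two small functions, each with a single responsibility
let getAttributes itemId = [ itemId .. itemId + 2 ]
let toApplicants attrs = attrs |> List.map (fun i -> sprintf "applicant%d" i)

// >> composes them into a new function; no object graph is required
let calculate = getAttributes >> toApplicants

printfn "%A" (calculate 10) // ["applicant10"; "applicant11"; "applicant12"]
```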

Knowing these two facts, we may ask ourselves whether the "Dependency Injection" pattern makes any sense in a functional programming language. Dependency Injection is a pattern related to OOP and to the composition of components, which in that case are objects. So I can surely state that:

Dependency injection is nonsense in FP

You may say that you've read through all this post just to discover the statement above, which is obvious to everybody. But I want to stress that when you come to a new programming paradigm, you'd better leave all your habits and patterns behind. Forget about them and try to learn from scratch. You'll find ways to achieve what you want using patterns suited to the given context.

Achieving decoupling in F#

If you look at the schema above, you will notice that I'm dealing here with an Infrastructure layer responsible for database access. I would like not to have a hard dependency on it, because when I'm testing my Facade layer I don't want to deal with a real database and Windows Azure storage. I'm a noob in F# programming, but what I've learnt so far makes me think that I could use higher-order functions and partial application to achieve a kind of decoupling.

Higher-order functions and partial application

To put it simply, a higher-order function is a function that takes a function as a parameter or returns one as its result. Higher-order functions are a way to write generic functional code, which means that the same code can be reused for many similar but distinct purposes.
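As a tiny illustration (my own toy example, not from the project), here is a higher-order function that takes another function as a parameter and applies it twice; the same generic code is reused for two distinct purposes:

```fsharp
// twice is a higher-order function: it takes a function f as a parameter
let twice f x = f (f x)

let addFive x = x + 5
let shout (s: string) = s + "!"

printfn "%d" (twice addFive 1)   // 11
printfn "%s" (twice shout "hey") // hey!!
```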

The concept of partial application, however, is easier to explain with an example. Let's declare a function that takes 2 parameters and returns the sum of both:

> let add a b = a + b;;

val add : a:int -> b:int -> int

The function signature a:int -> b:int -> int indicates exactly that (two integers as parameters and an integer as the return value). In F#, there is a concept called "partial application" that allows the caller not to pass every argument to the function. What happens in that case is that F# creates another function. Let's see another example:

> let add10 = add 10;;

val add10 : (int -> int)

What happens is that we define a function called add10 which adds 10 to whatever its argument will be. This function is the result of a partial application of the add function, because we passed only one argument instead of two. The signature shows it: (int -> int). This concept is very powerful, because we can call the add10 function at some later point in time, when the second parameter becomes available. So now we can call the add10 function this way to obtain a result:

> add10 12;;
val it : int = 22

We call the function passing in 12, and the result is 22, which is correct (10 + 12 = 22). Partial application is also a concept broadly used with pipelining.
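For instance (again my own toy example, not from the project), each partially applied function becomes a one-argument step that slots naturally into a pipeline:

```fsharp
let add a b = a + b
let multiply a b = a * b

// 'add 10' and 'multiply 2' are partial applications of type int -> int,
// so they can be chained with the pipe operator
let result =
    10
    |> add 10      // 20
    |> multiply 2  // 40

printfn "%d" result // 40
```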

But how can this be useful for decoupling between layers?

Let’s check how Dependency Injection is done in C# :

public class Calculator : ICalculator
{
	private readonly IApplicantRepository _applicantRepository;
	private readonly IAttributeRepository _attributeRepository;

	public Calculator(IApplicantRepository applicantRepository, IAttributeRepository attributeRepository)
	{
		if (applicantRepository == null) throw new ArgumentNullException("applicantRepository");
		if (attributeRepository == null) throw new ArgumentNullException("attributeRepository");

		_applicantRepository = applicantRepository;
		_attributeRepository = attributeRepository;
	}

	public IEnumerable<string> Calculate(int itemId)
	{
		var attributes = _attributeRepository.GetAttributesForItem(itemId);
		var applicants = _applicantRepository.GetApplicantsByAttributes(attributes);

		// do some calculation on applicants.

		return applicants.Select(x => x.Name);
	}
}

The code is quite straightforward and I don't think it needs much explanation. The dependencies are passed to the class's constructor and used when the Calculate method is invoked. What's interesting is that if we want to test the Calculate method, we have to create an instance of our calculator, passing faked dependencies to the constructor.

It's something I tried to achieve in my F# implementation. If you look at the schema above, you'll notice that it is interesting to keep decoupling for the Facade and Domain layers. Let's focus on the Facade layer, as it depends on the Infrastructure layer and the persistence.

First of all, let's define our calculate function:

 let calculate getApplicantsFunc getAttributsFunc itemId =
    let attributes = getAttributsFunc itemId
    let applicants = getApplicantsFunc attributes
    applicants

What we have here is a function taking two functions and an itemId as parameters. Its signature is as follows: ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b.

Then we can define our calculator function like this :

let calculator getApplicantsFunc getAttributsFunc = calculate getApplicantsFunc getAttributsFunc

The calculator function is derived from the calculate function, because we pass just two parameters instead of three. We don't pass itemId, so the compiler concludes that calculator must also be a function. This is partial application at play. Let's look at the calculator function's signature: ('a -> 'b) -> ('c -> 'a) -> ('c -> 'b).

We notice that the last part, the return value, indicates that it's a function. What's interesting is that we declare our calculator function the same way we declared our Calculator class in C#: passing in two dependencies that we may want to mock, for unit testing for example.

I've wrapped all these definitions inside a kind of Facade type that will be shared with the C# code. It looks like this:

type Calculator() = 
   
    let calculate getApplicantsFunc getAttributsFunc itemId =
        let attributes = getAttributsFunc itemId
        let applicants = getApplicantsFunc attributes
        applicants

    member v.calculator getApplicantsFunc getAttributsFunc = calculate getApplicantsFunc getAttributsFunc

    interface ICalculator with
        member v.Calculate itemId =
            v.calculator getApplicantFromDb getAttributesFromDb itemId

You notice that the Calculator() type implements the ICalculator interface shared from the C# code. This interface defines one method (function), Calculate, where we call our calculator passing in two functions from the F# persistence layer, getApplicantFromDb and getAttributesFromDb, along with an itemId. The Calculate method has a hard dependency on these two functions from the persistence layer, but it doesn't matter, because that's not the level at which I want to test. It is only here for C# sharing and will only be visible from C#.

What I want to test is my internal calculator defined like this :

member v.calculator getApplicantsFunc getAttributsFunc = calculate getApplicantsFunc getAttributsFunc

Now you can see that all we want is a calculator function with different kinds of dependencies. Let's see how it works. First, I'll show two possible implementations of the dependencies.

A simulation of the real-world dependencies:

let getAttributesFromDb itemId =
    [itemId..(itemId + 10)]

let getApplicantFromDb attributes =
    attributes |> List.map (fun i -> "applicant" + i.ToString())

You have to imagine that these two functions access the real database to get data. Here I've just generated some test data.

We can also have fake functions like these:

let getAttributesFromDbFake itemId =
    [itemId..(itemId + 100)]

let getApplicantFromDbFake attributes =
    attributes |> List.map (fun i -> "stubedApplicant" + i.ToString())

They're quite similar to the previous ones, except that I named them with a Fake suffix, and 100 items are added to the generated list instead of 10. But that's not the point. Let's just say these are the fake functions for testing.

Now I can check how my calculator behaves with the real implementation in F# Interactive:

> let calc = new Facade.Calculator();;

val calc : Facade.Calculator

> let calculator = calc.calculator getApplicantFromDb getAttributesFromDb;;

val calculator : (int -> string list)

> calculator 10;;
val it : string list =
  ["applicant10"; "applicant11"; "applicant12"; "applicant13"; "applicant14";
   "applicant15"; "applicant16"; "applicant17"; "applicant18"; "applicant19";
   "applicant20"]

As you can see, the result comes from the "real" implementation of my functions. But if I want to test my calculator implementation, I can pass in the fake functions:

> let calculator = calc.calculator getApplicantFromDbFake getAttributesFromDbFake;;

val calculator : (int -> string list)

> calculator 10;;
val it : string list =
  ["stubedApplicant10"; "stubedApplicant11"; "stubedApplicant12";
   "stubedApplicant13"; "stubedApplicant14"; "stubedApplicant15";
   "stubedApplicant16"; "stubedApplicant17"; "stubedApplicant18";
   ...]

You can see that the result is quite different and comes from the stubbed functions. We've achieved the same functionality that we are used to in the OOP world with the DIP and Dependency Injection.

In closing

The dependency injection pattern is much more powerful in OOP than what I've described here: lifetime management of dependencies, graph composition, and so on. What we were interested in is the decoupling that adopting the pattern brings to the table. Moreover, lifetime management of dependencies is not necessary in F#, because everything is immutable and everything gets its own copy, so we don't need to worry about this aspect. Graph composition is not very useful in F# either. We've just simulated a concept similar to dependency injection by using higher-order functions and partial application.

As a disclaimer, I would say that I'm still exploring and learning F#. So to a really experienced F# developer, what I've written here might seem totally incorrect. I would be really grateful to hear your feedback, and if I'm wrong I'll update my post. So don't take everything I'm saying here as a standard way of doing things. It worked well for me, but I would be very happy to hear how others do it.

Posted in Articles, Technical Posts

Sharing code between C# and F#

This is the next part of my 2 weeks trip from OOP to FP programming with F#.

You might ask: what's the big deal about sharing code between C# and F#, if both of them rely on the .NET Framework and are compiled to the same IL? And in fact there is no big deal. My concern is how to do it properly from an architectural point of view.

In my last post I made a gentle introduction to my first real-world experience with F# development. In this part I would like to focus on the integration I had to make between my ASP.NET MVC project developed in C# and my recommendation engine algorithm written in F#.

My first concern was about the kind of integration that would be needed from the business point of view. Technically, I could use direct synchronous integration by running the recommendation engine in-process, or rely on eventual consistency with queues or message buses, etc. From the business point of view, it would be better to have results as soon as the user makes a request. The point is that the recommendation engine has to be fast enough to allow this feature; otherwise the user would have to wait too long. It would also be possible with eventual consistency, but I would have to tweak some GUI screens to handle it properly.

Let's KISS. I always like the simplest, most pragmatic take on the whole problem. Before I could make up my mind, I had to benchmark my recommendation engine in terms of calculation speed. I generated test data of about 600,000 items (which is 30 times more than what I expect to have in production) to make the recommendations on. The overall calculation took around 0.5 s, which is fast enough to consider direct synchronous integration. Before reaching 600,000 items in production, I would have time to make another implementation if needed.

Let's look at how the integration could be made:

Let's look at the left side of the schema. It's a classic hexagonal architecture (even if I drew the circles) that you've already seen in many projects. The domain is at the center of the application and everything is wired using the dependency inversion principle. I'm using StructureMap for wiring all the dependencies. In my domain I've defined an interface, ICalculator, which is supposed to be implemented by all kinds of calculators for the recommendation engine. This simple interface looks like this:

public interface ICalculator
{
	void DoCalculation(int itemId);
}

It’s supposed to make a recommendation based on an item.

The more interesting part is on the right side, with the F# implementation. As you noticed, the architecture is quite different from the standard hexagonal architecture on the left side. Here are some reasons:

  1. I've never used F# in real-world scenarios except playing around with "Hello world" examples. But I have a gut feeling that the standard hexagonal architecture based on the DI principle is just nonsense in the FP world.
  2. Composition in F# happens at the function level and not at the object (component) level. So you deal differently with the coupling between layers.
  3. The Facade is supposed to provide an entry point for the C#/F# integration. Its responsibility is also to coordinate internal functionality from the Domain and Infrastructure layers.

So the overall architecture is quite straightforward and simple. I would like to note that while each MVC layer is contained in a separate assembly, every F# layer is defined in the same assembly. I haven't found any reason to make it more complicated.

We have the big picture of the integration points. Let's look at how the calculator is implemented in F#:

type Calculator() =

    interface ICalculator with
        member v.DoCalculation itemId =
            // here will be called functions for doing actual calculations
            ()

Since I want to keep things simple and the calculation will be done in-process, I can inject my ICalculator instance into an MVC controller by configuring my StructureMap container like this:

For<ICalculator>().Use(ctx => ctx.GetInstance<Calculator>()); 

Once you've done this, you can declare the dependency in an MVC controller constructor and the F# implementation will be injected.

I don't know if it's the best way to integrate, but it works for me and it's really simple. In the next post I'll focus on the internal architecture of my F# application and how I tried to achieve decoupling and testability.

Posted in News, Technical Posts

My 2 weeks trip from OOP to FP with F#

Introduction

I have been programming in an OOP language for a long, long time. Really, since the very first betas of the .NET Framework I've always used C# to accomplish almost every programming task. Working for companies that mainly used the .NET platform and C# didn't push me in another direction… It's not that I was not interested in other languages; it's that I was afraid the learning curve would be high and that I would never have enough time to learn everything needed to be efficient and to write code following the best practices of a given language.

I know that the OOP paradigm is not inherent to C#, but my goal was not to learn another OOP language; it was to try something different. Moreover, when you have spent so much time improving your programming skills within the OOP paradigm (AOP, DI, unit tests, DDD tactical patterns, etc.), you start to wonder: how would I do this with another language, platform or programming paradigm? That's maybe the main reason that held me back from trying something else. And a lack of time, of course…

And I'm not talking about just trying a "Hello world" or playing around with some basic stuff and nothing more. I'm talking about making a real app, even if not a very big one, to see how one can deal with aspects like domain logic, infrastructure, cross-cutting concerns, etc.

And the time came when I was able to invest 2 weeks of my time to learn something else in a real-world project. I had to stay on the .NET platform, but I picked a language that has always attracted me: F#.

First contact

The task I had to accomplish was to write an algorithm based on CF (Collaborative Filtering) for building recommendations (for my Polish startup https://www.rocketcv.pl). I thought it was a great opportunity for learning F# and trying to build something useful. I won't go into too much detail in this post about all the impediments I encountered; that will be for the next posts in the series. In this installment I'll talk about my first contact with the language and the FP paradigm.

Tools

You don't need any IDE to program with F# (the same applies to every language and technology on the .NET platform), but I stuck with Visual Studio 2012 for simplicity.

What's great with the F# environment is the REPL (Read-Eval-Print Loop) console. You can immediately write some code and check the outcome without recompiling all the stuff. That's very handy. This way you can very quickly try different options, and then move the code to a unit test project as a first draft.

Microsoft (R) F# Interactive version 11.0.60610.1
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> let simpleList a = [ for i in 1..10 do
                          yield i * a ];;

val simpleList : a:int -> int list

> let initializedList = simpleList 10
;;

val initializedList : int list = [10; 20; 30; 40; 50; 60; 70; 80; 90; 100]

>

Here I'm defining a function that produces a list by multiplying a passed parameter by the current value of the for-in loop. As you can see, the code is straightforward and easy to understand. Even if there is no real value in this snippet, the F# Interactive (REPL console) lets you test your code.

FP Paradigm

Everybody has already heard about the principles of the Functional Programming paradigm and is more or less aware of them. I had also heard about them and knew the most common ones, like immutability, functions as values, function composition, higher-order functions, declarative programming, etc. But when you come from the OOP world, the mind shift is not obvious. You have to put aside everything you learned before and start from a blank page. It's better not to think about other aspects related to programming like AOP, DI, etc.

Immutability

Even though in F# you can use whatever is available in the .NET Framework, including mutable data structures (Dictionary, List), you're not supposed to. You'd be better off not touching them: they're there for interoperability between languages on the .NET platform, and they may be useful only in some narrow scenarios.

Immutability is one of the linchpins of the FP paradigm, so you'd better stick with it. Use the immutable data structures supported in F#, like seq, list, record types and discriminated unions (and avoid reaching for classes at first; it's easy for a beginner to mess everything up by falling back on OOP habits).

Declarative vs Imperative style

The difference between the two styles boils down to the words "what" and "how". The imperative style of OOP tells the machine "how" to do stuff, whereas the functional style tells it "what" you want to do. In the OOP style we talk about the execution of statements, because the program is expressed as a sequence of commands which specify how to achieve the end result. In the functional style we talk about the evaluation of expressions, because the program is an expression that specifies the properties of the result we want to get. You should strive to embrace the declarative style in functional code. Let's compare two simple code snippets. What we want is the list of the products that are covered by the current promotion.

In C# this could be done like that without LINQ :

public List<string> GetPromotionalProducts(List<Product> products)
{
	var filteredProductsInfos = new List<string>();
	foreach (var product in products)
	{
		if (product.IsPromotion)
			filteredProductsInfos.Add(string.Format("Product name : {0} | UT = {1}", product.Label, product.UnitPrice));
	}

	return filteredProductsInfos;
}

The code is the basic set of imperative commands telling how the goal should be achieved by giving the machine instructions on how to do it.

The same could be written with a functional style using LINQ :

public IEnumerable<string> GetPromotionalProducts(List<Product> products)
{
	return from product in products
		   where product.IsPromotion
		   select string.Format("Product name : {0} | UT = {1}", product.Label, product.UnitPrice);
}

The difference is obvious: now we are telling what we want, so the intention is more explicit.

Function = value

Another important aspect of FP is that functions are values. I'll write more about it in the next posts, but let's see a little snippet that illustrates the general idea:

Microsoft (R) F# Interactive version 11.0.60610.1
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> let rec fib n = if n < 2 then 1 else fib (n-2) + fib(n-1);;

val fib : n:int -> int

> let printIt valueToPrint = printfn "%d" valueToPrint;;

val printIt : valueToPrint:int -> unit

> printIt (fib 10);;
89
val it : unit = ()
>

In the first step we define a fibonacci function that takes an integer as a parameter and also returns an integer. In the second step we define another function that just prints a value. At the end we pass the result of the fibonacci function to the print function, which prints it.

This is not a big deal, and in C# you can achieve the same thing with delegates. But there is a subtle difference: in F# a function passed as a parameter is a value, while in C# it would be a delegate.
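To make the comparison concrete, here is a small C# sketch (the Fib/printIt names just mirror the F# session above; this is an illustration, not code from the post):

```csharp
using System;

class FunctionsAsValues
{
    // Same definition as the F# fib above: fib 0 = fib 1 = 1.
    static int Fib(int n) => n < 2 ? 1 : Fib(n - 2) + Fib(n - 1);

    static void Main()
    {
        // In C# the function has to be wrapped in a delegate type
        // (Func<int, int>) before it can be passed around as a value.
        Func<int, int> fib = Fib;
        Action<int> printIt = value => Console.WriteLine(value);

        printIt(fib(10)); // prints 89, like the F# session above
    }
}
```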

Composition

The main building blocks in FP are values and functions. Composition is done with higher-order functions that take other functions as parameters. Because a function is the relation between its input parameters and its output result, it's easier to think about composition in FP than in OOP, where we talk about composition at the object level. I'll write more about it in the next post.
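In C#, the same idea can be sketched with a small, hypothetical Compose helper that chains two Func delegates, roughly what F#'s >> operator gives you for free:

```csharp
using System;

static class Composition
{
    // A hypothetical Compose helper: apply f, then g (F#'s f >> g).
    static Func<A, C> Compose<A, B, C>(this Func<A, B> f, Func<B, C> g)
        => x => g(f(x));

    static void Main()
    {
        Func<int, int> twice = x => x * 2;
        Func<int, int> addOne = x => x + 1;

        // Composing two functions into a new one: double, then increment.
        var doubleThenAddOne = twice.Compose(addOne);
        Console.WriteLine(doubleThenAddOne(10)); // 21
    }
}
```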

Parallelizing immutable programs

I didn't have enough time to explore this field, but since values are immutable and there is no shared state, parallelizing is easier. I'll write more about it in the next post.

In Closing

When you switch from one paradigm to another, you start to think about what you knew from a different perspective. Even if I'm not an experienced F# developer (I'm rather a real noob), there are so many advantages to using FP that learning it is a real pleasure.

What can F# offer a C# developer?

1. Rapid prototyping and experimenting with code, as well as exploring how .NET libraries work. With the REPL console it's very easy, and you can quickly write a sketch of the code. You'll save a lot of time.

2. F# libraries can easily be referenced from C# code, so you can write part of the solution in F#; there is no reason to get rid of C# if you start to develop parts of the system in F#.

And you? What's your experience of switching from OOP to FP?

Posted in News, Technical Posts Tagged with: , , ,

How to integrate ASP.NET WebApi AttributeRouting with HyprLinkr

In my last post, I showed you how to configure ASP.NET WebApi, StructureMap and HyprLinkr.

Today I would like to show you how to use HyprLinkr with AttributeRouting. HyprLinkr works with the standard route configuration created by the Visual Studio project template (the route named "API Default"). If you would like to make it work with another routing framework, you have to tweak HyprLinkr a little. Fortunately, the API is very developer-friendly, and Mark Seemann, the creator of HyprLinkr, has provided the means for doing it very easily. For those who don't know AttributeRouting: it lets you define route templates in attributes directly on controllers and actions, and offers some extra features like hierarchical route definitions and so on.

Usage context

Once you’ve downloaded and configured AttributeRouting inside your WebApi application, your controllers might look something like this :

[RoutePrefix("api/rest")]
public class ItemsController : ApiController
{
	private readonly IItemsRepository _itemsRepository;
	private readonly IResourceLinker _resourceLinker;

	public ItemsController(IItemsRepository itemsRepository, IResourceLinker resourceLinker)
	{
		if (itemsRepository == null) throw new ArgumentNullException("itemsRepository");
		if (resourceLinker == null) throw new ArgumentNullException("resourceLinker");

		_itemsRepository = itemsRepository;
		_resourceLinker = resourceLinker;
	}

	[GET("items/{id}/{config?}")]
	public ItemRepresentation GetItem(int id, string config = "small")
	{
		var uri = _resourceLinker.GetUri<ItemsController>(a => a.GetRelatedItems(id, config, 1, 15)).ToString();
		return _itemsRepository.GetItem(id, config);
	}
}

Tweaking of HyprLinkr

If you recall from my previous post, StructureMap is configured to inject IResourceLinker in order to resolve URLs for hypermedia links. However, with AttributeRouting it won't work out of the box. We need to implement our own IRouteDispatcher and provide that implementation to the IResourceLinker instance resolved by StructureMap. Here's the implementation of IRouteDispatcher for the AttributeRouting framework:

public class AttributeRoutingRouteDispatcher : IRouteDispatcher
{
	private readonly HttpRequestMessage _httpRequestMessage;

	public AttributeRoutingRouteDispatcher(HttpRequestMessage httpRequestMessage)
	{
		if (httpRequestMessage == null) throw new ArgumentNullException("httpRequestMessage");

		_httpRequestMessage = httpRequestMessage;
	}

	public Rouple Dispatch(MethodCallExpression method, IDictionary<string, object> routeValues)
	{
		if (method == null)
			throw new ArgumentNullException("method");

		var newRouteValues = new Dictionary<string, object>(routeValues);

		var controllerName = method
			.Object
			.Type
			.Name
			.ToLowerInvariant()
			.Replace("controller", "");
		newRouteValues["controller"] = controllerName;

		var attributeRoute = method.Method.GetCustomAttributes(true).OfType<HttpRouteAttribute>().FirstOrDefault();

		string routeName = null;

		if (attributeRoute != null)
		{
			// Strip inline constraints (":int?") and optional markers ("?") from the template
			const string patternToMatch = @"(:[^}]*)?\??}";
			var strippedFromTokensUrl = Regex.Replace(attributeRoute.RouteUrl, patternToMatch, "}");
			var matchedRoute = _httpRequestMessage.GetConfiguration().Routes.FirstOrDefault(x => x.RouteTemplate.Contains(strippedFromTokensUrl)) as HttpAttributeRoute;
			if (matchedRoute != null) routeName = matchedRoute.RouteName;
		}

		return new Rouple(routeName, newRouteValues);
	}
}

We need to return a "Rouple" object with a route name and route values in order to let HyprLinkr generate the right URL. As input we only need an instance of HttpRequestMessage; it's necessary to access the configured routes in the route table. When the application runs for the first time, the AttributeRouting framework scans all the controllers, extracts the URL templates from the attributes and adds them to the route table of the WebApi configuration object. We need to match the template URL extracted from the controller action attribute against the routes configured in the route table, then get its name to construct our Rouple object. In order to match a URL against the attribute's template, we apply a regex that strips some special characters from the template (type constraints like ":int?", the optional-parameter marker "?", etc.).
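To make the stripping concrete, here is a standalone sketch (the route template and the exact pattern are illustrative assumptions): it removes inline type constraints and optional markers from parameter placeholders so the template can be compared with the entries in the route table.

```csharp
using System;
using System.Text.RegularExpressions;

class RouteTemplateStripping
{
    static void Main()
    {
        // Remove inline constraints (":int", ":int?") and optional markers ("?")
        // from route parameter placeholders, keeping only the parameter name.
        const string pattern = @"(:[^}]*)?\??}";

        var template = "api/rest/items/{id:int?}/{config?}";
        var stripped = Regex.Replace(template, pattern, "}");

        Console.WriteLine(stripped); // api/rest/items/{id}/{config}
    }
}
```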

Configuring all

In order to construct our Rouple object in the custom dispatcher, we need to tell the AttributeRouting framework to generate route names for each registered route (by default it doesn't, and then we can't use it with HyprLinkr). The configuration is done in the WebApi registration section; AutoGenerateRouteNames should be set to true:

var configuration = new HttpWebConfiguration
{
        AutoGenerateRouteNames = true
};
configuration.AddRoutesFromAssembly(Assembly.GetExecutingAssembly());
config.Routes.MapHttpAttributeRoutes(configuration);

The last step consists of configuring StructureMap to pass the right dependencies into our instance of IResourceLinker:

For<IResourceLinker>().Use(ctx => new RouteLinker(ctx.GetInstance<HttpRequestMessage>(), ctx.GetInstance<AttributeRoutingRouteDispatcher>()));

In closing

As you can see, it's very straightforward to adapt HyprLinkr to work with other route definition frameworks in WebApi. AttributeRouting will be an important feature of the next version of WebApi (v2), so it's good to know how to make it work with HyprLinkr if you're using it as your hypermedia link generation framework.

Posted in News, Technical Posts Tagged with: , , , , ,

Configure StructureMap in ASP.NET WebApi to play nicely with HyprLinkr

There are plenty of blog posts about how to configure StructureMap with ASP.NET WebApi, so I won't go into much detail about that.

The goal of this post is to show you how to configure HyprLinkr so it is correctly injected into your Web API controllers. For those who don't know what HyprLinkr is, here is a little description taken from the project wiki page https://github.com/ploeh/Hyprlinkr :

Hyprlinkr is a small and very focused helper library for the ASP.NET Web API. It does one thing only: it creates URIs according to the application’s route configuration in a type-safe manner.

Let’s set up some context of how HyprLinkr is used.

Usage context

The main component in HyprLinkr is RouteLinker, which implements the IResourceLinker interface and which I want to inject into my Web API controller's constructor.

public class ItemsController : ApiController
{
	private readonly IItemsRepository _itemsRepository;
	private readonly IResourceLinker _resourceLinker;

	public ItemsController(IItemsRepository itemsRepository, IResourceLinker resourceLinker)
	{
		if (itemsRepository == null) throw new ArgumentNullException("itemsRepository");
		if (resourceLinker == null) throw new ArgumentNullException("resourceLinker");

		_itemsRepository = itemsRepository;
		_resourceLinker = resourceLinker;
	}

        // some stuff here
}

What doesn't work?

If you follow best practices, the composition root of ASP.NET WebApi is a custom controller activator (an implementation of IHttpControllerActivator). In my case it's based on StructureMap and is defined as follows:

public class StructureMapControllerActivator : IHttpControllerActivator
{
	private readonly Container _container;

	public StructureMapControllerActivator(Container container)
	{
		if (container == null) throw new ArgumentNullException("container");
		_container = container;
	}

	public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
	{
		try
		{
			return (IHttpController)_container.GetInstance(controllerType);
		}
		catch (Exception e)
		{
			// TODO : Logging
			throw; // rethrow without resetting the stack trace
		}
	}
}

The registration of IResourceLinker in StructureMap is as simple as:

For<IResourceLinker>().Use<RouteLinker>();

The problem is that when you run this code and the first request hits ItemsController, StructureMap throws an exception saying that the mapping for HttpRequestMessage is not defined, and thus the IResourceLinker instance cannot be resolved.

When you look into the RouteLinker code you understand the error: you have to pass at least an instance of HttpRequestMessage to the constructor of RouteLinker. This is understandable, as RouteLinker needs a request context in order to generate URLs for the links.

How to make it work ?

The main issue is that we need to provide an HttpRequestMessage instance before the controller is resolved. The only place where we have an instance of HttpRequestMessage is in the controller activator, so we need somehow to inject it into the StructureMap container before the controller is resolved. Fortunately StructureMap has such a feature, and we could do something like this in the StructureMapControllerActivator:

public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
	try
	{
		_container.Inject(typeof(HttpRequestMessage), request);
		return (IHttpController)_container.GetInstance(controllerType);
	}
	catch (Exception e)
	{
		// TODO : Logging
		throw; // rethrow without resetting the stack trace
	}
}

This might look like a good idea at first sight, except that it's not…

Do you spot an eventual issue ?

What could happen with concurrent requests? The container instance is shared between requests. That means that when an instance of HttpRequestMessage is injected into the container, and before the container resolves the controller, another request could come in and inject its own instance of HttpRequestMessage. Then, when the first controller is resolved, the wrong instance of HttpRequestMessage would be passed into the RouteLinker constructor.
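Here is a toy illustration of that race (a plain dictionary stands in for the shared container; none of this is StructureMap API), with the two requests' steps interleaved sequentially for clarity:

```csharp
using System;
using System.Collections.Generic;

class SharedContainerRace
{
    // One container instance shared by all requests.
    static readonly Dictionary<Type, object> Container = new Dictionary<Type, object>();

    static void Inject<T>(T instance) => Container[typeof(T)] = instance;
    static T Resolve<T>() => (T)Container[typeof(T)];

    static void Main()
    {
        // Request A injects its message...
        Inject("request A's HttpRequestMessage");

        // ...but before A resolves its controller, request B comes in
        // and overwrites the shared registration:
        Inject("request B's HttpRequestMessage");

        // Request A now resolves its controller and gets the wrong message.
        Console.WriteLine(Resolve<string>()); // request B's HttpRequestMessage
    }
}
```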

Another point is that StructureMap maybe doesn't allow multiple injections of the same instance type; I haven't checked that. But anyway, it wouldn't work, for the first reason I mentioned.

StructureMap and nested containers

One handy feature of StructureMap is nested containers. If you don't know what nested containers are, please read Jeremy Miller's post from when they were introduced in StructureMap: http://codebetter.com/jeremymiller/2010/02/10/nested-containers-in-structuremap-2-6-1/. To sum up, we want to create a nested container on each incoming request and inject an instance of HttpRequestMessage there. Each request will have its own nested container from which the controller and all its dependencies will be pulled. Let's refactor our StructureMapControllerActivator:

public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
	try
	{
		var scopedContainer = _container.GetNestedContainer();
		scopedContainer.Inject(typeof(HttpRequestMessage), request);
		request.RegisterForDispose(scopedContainer);
		return (IHttpController)scopedContainer.GetInstance(controllerType);
	}
	catch (Exception e)
	{
		// TODO : Logging
		throw; // rethrow without resetting the stack trace
	}
}

As you can see, all dependencies are pulled from the nested container. However, you might wonder why we need this line:

request.RegisterForDispose(scopedContainer);

In fact, nested containers have a feature that you might not be aware of:

The nested container will track all of the transient objects that it creates.  When the nested container itself is disposed, it will call Dispose() on any of the transient objects that it created.  A normal StructureMap Container does not track the transient objects that it creates

This is important because it means that if you don't dispose of your nested container, you may get memory leaks, as it tracks the resolution of all transient components. Fortunately the Web API framework allows us to register our nested container for disposal.

There is one last thing to do. We have to change our IResourceLinker registration in StructureMap to tell it which constructor to use when RouteLinker is constructed. By default StructureMap selects the "greediest" constructor, i.e. the one with the most arguments.

For<IResourceLinker>().Use(ctx => new RouteLinker(ctx.GetInstance<HttpRequestMessage>()));

Now when you run the application, everything works as expected.

// Thomas

Posted in Articles, Technical Posts Tagged with: , , , ,

DUX – Developer User Experience

Complicated software

Sometimes, I wonder why things are not simple and expressive.

One thing that I have advocated for many years is something I call DUX, the Developer User Experience. It means what it should mean: the user experience while you are doing your development.

We know that good software is hard to achieve and that most of the time people care only about what is visible. Crafting all the parts of the code in the right way is also important; the final end-user experience is not the only part…

When you build frameworks or lower-level components, even if their purpose is to be used by developers building higher-level software, never forget that you still have users! Your users are the developers working with your libraries.

Why do I complain? Because DUX is an important part of what you should care about when you are building libraries. Framework developers usually think about:

  • Providing the functions needed by the framework in a defined priority order (maybe defined by marketing guys…)
  • Providing as many functions as possible, because a richer framework is seen as better than one providing a small set of things.
  • Using some unit tests internally, because most people do.
  • Caring about performance, because a framework should.
  • Fitting as much as possible into the current ecosystem.
  • …certainly a lot of other things

I don't know if all these things are the main reason I prefer open source libraries, but in most cases they produce slightly different results, because they are built by a decentralized, loosely organized team:

  • Functions are provided because people really need them
  • Delivery and functionality are not subordinate to marketing
  • You can verify functionality with the unit tests provided and also see how it works (most of the time this is much more efficient than regular documentation…)

Another good point with open source is that the good and the bad are automatically curated by the community. If your framework doesn't serve enough needs, or if it isn't clear enough about its usage and so on, it will die. Simple and efficient.

Frameworks have to be simple and fun to use. The .NET world has a lot of legacy and history that tends to be complicated by nature, because it has its roots in the enterprise. By contrast, languages like Ruby keep things simple and expressive, mostly because it's a developer-to-developer platform with pragmatism in mind.

Take a simple example. Suppose you are building a shell application. You want to print messages and draw some lines to present things clearly, like this:

 

--------------------------------------------------------------
-- My App (c) 2013
--------------------------------------------------------------

In C#, to build lines like this, you would write the whole line as a string and print it:

Console.WriteLine("--------------------------------------------------------------");

Or with a loop, if you want more reusability:

public string Repeat(string pattern, int timesToRepeat)
{
	string result = "";
	for (int i = 0; i < timesToRepeat; i++)
	{
		result = result + pattern;
	}
	return result;
}
...
Console.WriteLine(Repeat("-",80));

Instead of that, in the ruby world, you just do that:

puts "-" * 80

Which literally means: "put to standard output the string '-' repeated 80 times". Which one sounds better to you?

Attention: please note that I am a .NET developer; we are just talking about DUX here. I am far from saying that Ruby is better than C#, okay?

By the way, C# is a very high-level platform and you have a lot of nicer options, but that's not the point here. In fact, you can get something more fluent with a simple extension method like this:

public static string Repeat(this string pattern, int timesToRepeat)
	{
		StringBuilder sb = new StringBuilder();
		for (int i = 0; i < timesToRepeat; i++)
		{
			sb.Append(pattern);
		}
		return sb.ToString();
	}
...
Console.WriteLine("-".Repeat(80));

If you want a shorter way to achieve that, you could also try:

String.Join("",Enumerable.Repeat("-",80))
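And for the specific case of repeating a single character, C# has a built-in string constructor that comes closest to the Ruby one-liner (it only works for a single char, though, not an arbitrary pattern):

```csharp
using System;

class Line
{
    static void Main()
    {
        // string(char, count) repeats one character count times.
        Console.WriteLine(new string('-', 80));
    }
}
```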

 

Never forget that software is complicated by nature. So don't add to the complication; try to make things clean, even if your audience is developers!

Posted in Articles Tagged with: , , ,

.Net Web dev Mac – Nuget, Monodevelop and Nancy

Nancy with twitter bootstrap

Basic Support

In the Beginning

Managing binaries, dependencies and packages is such an important point that it is unbelievable that it took 10 years for Microsoft to bring something like NuGet to .NET…

If you are running Mono for .NET web development on your Mac, things are no better on this point…

What was painful with MonoDevelop was the lack of support for NuGet.

We have been able to do things by hand since v2 of NuGet (and since 1.7, by the way), because you can use nuget.exe from the Mono command line; but as it isn't integrated with solutions and projects, it only helps you fetch binaries.

But let's see how to set up NuGet on Mono.

Install & configure Nuget

First download the latest exe:

curl -O -L  http://nuget.org/nuget.exe

As NuGet is a .NET binary, you need to launch it via Mono. If your installation is OK (and it should be with the installer), you can launch mono from any path in your shell. Note also that you need to specify framework 4 runtime compatibility to make it work properly. So to start the NuGet exe, just execute this line:

mono --runtime=v4.0 Nuget.exe [then follow your nuget args]

In order to make this work from everywhere, I'd advise you to create an executable shell script that handles it.

From the folder where you downloaded NuGet, copy it to /usr/local/bin:

cp Nuget.exe /usr/local/bin

Create a new file for your command; just name it nuget to get the same semantics:

touch nuget

Edit your file and just copy/paste this code:

#!/bin/sh
# add a simple 'nuget' command to Mac OS X shell under Mono
PATH=/usr/local/bin:$PATH
mono --runtime=v4.0 /usr/local/bin/NuGet.exe $*

Then give the correct execution rights to your file:

sudo chmod +x nuget

Then open a new console and try nuget (with "nuget help", for example). The good point is that you can now use NuGet from everywhere!

You should also have a default config path working with NuGet; if one isn't created automatically, create a fresh one.

Local config should be stored at:

~/.config/NuGet/Nuget.config

Once you have this you can play with NuGet, with your own repositories too, but you're still stuck in the command shell.

If you work with MonoDevelop, you have to create your packages folder and your packages config file, and then reference your binaries manually. It's not an ideal configuration, but it's better than nothing ;-)

Nuget support on Monodevelop

Install

This morning my eyes lit up at a tweet from David Fowler:

David Fowler tweet about nuget monodevelop support


OK, let's see what it's about.

It's an add-in written by Matt Ward. You can find the sources on his GitHub.

 

You can build and install it on your own, but Matt did things well and provided a MonoDevelop add-in channel:

http://mrward.github.com/monodevelop-nuget-addin-repository/3.0.5/main.mrep

Click on MonoDevelop, then Add-in Manager. Select Gallery and manage repositories:

monodevelop addin repositories


Then click [Add] and paste the URL:

Add repository definition url


Once you have it, you should get a new addin available:

new addin


Select it and install.

Let’s try a web project with Nancy!

First, create a new empty ASP.NET web project:

New empty asp.net project

New empty asp.net project

Before playing with NuGet, add a web.config file, because it is not provided by default with this project template:

add web.config


Then you could try to run it, but obviously you'll get nothing.

Now, right click on references:

References with Nuget


So cool! We now have NuGet resources right here! Select Manage Nuget Packages.

You'll get a box just like the one you're used to in Visual Studio (if you are in click-click mode instead of the Package Manager Console, which is a PowerShell host, of course…):

Nuget box


Just start by typing 'Nancy.host', select 'Nancy.Hosting.Aspnet', then Add. You can see in the messages that it correctly handles the download of Nancy.dll too.

Then add a new class to your project with a HomeModule containing a simple test route:

using System;
using Nancy;

namespace Demo
{
	public class HomeModule : NancyModule
	{
		public HomeModule ()
		{
			Get ["/(.*)"] = _ => "Yes it works!";
		}
	}
}

That's supposed to be enough for now. Build and run. Guess what? It doesn't work.

Why? Remember that the NuGet installation features under Visual Studio are full of PowerShell, and transforming files is part of it. So when you install something with MonoDevelop's NuGet add-in, you don't get the transforms, like web.config updates!

UPDATE: Thanks to the author Matt Ward, who points out an explanation for this in the comments: "PowerShell is not actually needed for the web.config transform to be applied. It is not currently working in the MonoDevelop NuGet addin due to a bug. The addin is not detecting the project is a web project and ignoring the transform. Transforms for app.config files are currently working."

Then open your web.config file and update it to set up the Nancy handler:

<?xml version="1.0"?>
<!--
Web.config file for Demo.
The settings that can be used in this file are documented at
http://www.mono-project.com/Config_system.web and

http://msdn2.microsoft.com/en-us/library/b5ysx397.aspx

-->
<configuration>
<system.web>
<compilation defaultLanguage="C#" debug="true" targetFramework="4.0">
<assemblies>
</assemblies>
</compilation>
<customErrors mode="RemoteOnly">
</customErrors>
<authentication mode="None">
</authentication>
<authorization>
<allow users="*" />
</authorization>
<httpHandlers>
<add verb="*" type="Nancy.Hosting.Aspnet.NancyHttpRequestHandler" path="*"/>
</httpHandlers>
<trace enabled="false" localOnly="true" pageOutput="false" requestLimit="10" traceMode="SortByTime" />
<sessionState mode="InProc" cookieless="false" timeout="20" />
<globalization requestEncoding="utf-8" responseEncoding="utf-8" />
<pages>
</pages>
</system.web>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true"/>
<validation validateIntegratedModeConfiguration="false"/>
<handlers>
<add name="Nancy" verb="*" type="Nancy.Hosting.Aspnet.NancyHttpRequestHandler" path="*"/>
</handlers>
</system.webServer>
</configuration>

Re-run it and now it works!

How easy to do .Net web!

Just to finish and show you how cool it is, relaunch the NuGet manager and also add Nancy.ViewEngines.Razor, jQuery and Twitter Bootstrap.

Create the folders Views/Home and add a new Razor view file called "Index.cshtml".

If you're like me and never remember the right syntax for bootstrapping things, keep a visible bookmark to the Twitter Bootstrap page and take this one as an example:

http://twitter.github.com/bootstrap/examples/marketing-narrow.html

Copy/paste the source into your file as an example, then update it to fit your config and paths, and update the main block (just extracts, with the updated parts):

...
<title>Nuget on MonoDevelop</title>
...
<link href="/Content/bootstrap.css" rel="stylesheet">
...
<link href="/Content/bootstrap-responsive.css" rel="stylesheet">
...
<div class="jumbotron">
<h1>Nuget on MonoDevelop!</h1>
<p class="lead">
Get started with .Net Web dev on Mac with Monodevelop, Nuget, Nancy and Bootstrap!
</p>
<p>Generated at @Model.Generated by @Model.Author</p>
<a class="btn btn-large btn-success" href="#">It's alive</a>
</div>

Then update your home module to return the Razor view, with a small model class to test passing data to the view:

public class HomeModule : NancyModule
{
	public HomeModule ()
	{
		Get ["/(.*)"] = _ =>
			View ["Index", new HomeInfo (
				DateTime.Now, "Rui Carvalho")];
	}
}

public class HomeInfo
{
	public DateTime Generated { get; private set; }
	public string Author { get; private set; }

	public HomeInfo (DateTime generated, string author)
	{
		Generated = generated;
		Author = author;
	}
}

Run it and enjoy the happy path of web development with Nancy, NuGet and Mono on your Mac.

Nancy with twitter bootstrap


cheers!

 

Posted in Articles Tagged with: , , , ,

When .gitignore doesn’t want to work

There are times you feel really stupid because you can't do a simple thing… like, for example, getting a .gitignore file to work. I don't know how much time I spent digging into the guts of git commands before I realised what I'd done. As you may know, you can't easily create a .gitignore file from Windows Explorer (or at least I don't know how to do it). Every time I needed this file, I copied it from another place, another project. This time I had the brilliant idea of creating it from the command line like this:

echo "packages/" > .gitignore

Well, that's the problem. The file doesn't work because it's not encoded as UTF-8 (in Windows PowerShell, the > redirection writes UTF-16 by default, which git doesn't parse). Re-save it with the right encoding and everything works as expected.
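One way to sidestep the problem entirely (assuming Git Bash or any POSIX shell is available; printf writes plain bytes, no BOM and no UTF-16):

```shell
# Create .gitignore with plain ASCII/UTF-8 bytes (no UTF-16, no BOM)
printf 'packages/\n' > .gitignore

# Then, inside the repository, you can verify which rule matches:
#   git check-ignore -v packages/
```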

Posted in Technical Posts Tagged with: