
How to access Council of EU data on votes on legislation using SPARQL and AngularJS

One of the areas I've been focusing on lately is the so-called "Semantic Web", in particular Open Data as a way to make governments more transparent and provide data to citizens. From a technical point of view, these data are published in the RDF/Linked Data format.

I'm particularly excited to have worked on the release of what I think is a very important data set that helps understand how decisions are taken in the Council of the European Union.

The Council of the European Union has published how member states have voted since 2010

In April 2015, the Council of the European Union released as open data how Member States vote on legislative acts. In other words, when the Council votes to adopt a legislative act (i.e. a regulation or a directive), the votes of each country are stored and made publicly visible. This means that you can see how your country voted when a given law was adopted, or you can get more aggregate data on trends and voting patterns.

Recently, the Council has also released two additional open datasets containing the metadata of all Council documents and metadata on requests for Council documents.

DiploHack, Open Data Hackathon

Tomorrow and the day after, 29 and 30 April, the Council, together with the Dutch Presidency, is organising DiploHack, a hackathon about open data, in Brussels. The goal of the hackathon is to make use of the Council's open data sets, link them with the other datasets available from other EU institutions, and build something useful for citizens. You can still register for the hackathon.

This post will show you how to access the votes using SPARQL, which is a query language for data published in RDF format, and how to access those data using AngularJS.

A brief introduction to RDF and SPARQL

In the context of the Semantic Web, entities and the relations between entities are represented as triples, which are serialized in a format called "Turtle", in RDF/XML (which is what is usually referred to as RDF) or in many other formats.

You can imagine a "triple" as a database with 3 columns: subject, predicate, object, each represented with a URI. This is a very flexible format that can be used to represent anything. For example, you can say that the author of this blog is myself (uniquely identified by my GitHub account URL and with the name "Simone Chiaretta") and that the topic of this blog is Web Development. The corresponding Turtle serialization (using the simple notation) of these three statements is:

<http://codeclimber.net.nz/>
  <http://purl.org/dc/elements/1.1/creator>
  <https://github.com/simonech> .

<http://codeclimber.net.nz/>
  <http://purl.org/dc/elements/1.1/subject>
  "Web Development" .

<https://github.com/simonech>
  <http://xmlns.com/foaf/0.1/name>
  "Simone Chiaretta" .

Notice the use of URIs to represent entities, which gives them a unique identifier. In this case http://purl.org/dc/elements refers to a URI defined by Dublin Core's Metadata Terms. Another possible way to represent the topic would have been to refer to another URI coming from a managed taxonomy. This way it would have been possible to make "links" with other datasets.

But how do we query these data? With SPARQL.

SPARQL uses a syntax very similar to Turtle, with SQL-like keywords such as SELECT and WHERE. Reusing the example above, one could query for all publications written by Simone Chiaretta. The syntax would be:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/elements/1.1/creator> <https://github.com/simonech> . 
  }

Basically, you build the query by putting a variable in the position you want as result and specifying the other two elements of the triple: a kind of query by example. The other two elements of the triple can also be variables, in case you want to "join" different triples. For example, if we want to search for all publications written by Simone Chiaretta, identified by his name instead of his URI, the query becomes:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/elements/1.1/creator> ?author . 
    ?author <http://xmlns.com/foaf/0.1/name> "Simone Chiaretta" . 
  }

With this basic knowledge, we can now look at how to access the data released by the Council of the European Union about votes on legislative acts.

How the data is modelled and how to query it

The released data include information about an act (title, act number, various document numbers, policy area, etc.), the session in which it was voted (its date, the Council configuration, the number of the Council session) and how each country voted.

Instead of being modelled as a hierarchical graph, the data have been modelled as a Data Cube, in order to make them easier to analyze and to get aggregated data: an "observation" includes all the information in a flat, denormalized structure. So a "line" includes how a country voted on a given act, followed by all the information about the act and the session, replicated once for each country that voted on the act. This approach makes it less space efficient (all act and Council information is replicated every time) but easier and faster to query, as there is no need to "link" different entities with "joins" in order to compute aggregated results.

Simple queries

For example, if you want to know all acts about fisheries, you write:

  SELECT DISTINCT ?act
  where {
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/policyarea>
    <http://data.consilium.europa.eu/data/public_voting/consilium/policyarea/fisheries> .
    
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/act>
    ?act .
  }

The query basically asks: give me all the "observations" whose policy area is fisheries, and then, for those observations, give me their "act".

Notice the DISTINCT clause: this is important because, given the "data cube" approach, every act is replicated 28 times (there are usually 28 countries voting), so we need to take it only once.

The result will be 27 acts, each identified by its URI. You can also execute the query directly in the interactive query tool online, and you will get the results as HTML.

all-acts-on-fisheries

If you want the title of the act, you also need to ask for the "definition" of that URI, which has been mapped using the predicate http://www.w3.org/2004/02/skos/core#definition. So the query becomes:

  SELECT DISTINCT ?act ?title
  where {
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/policyarea>
    <http://data.consilium.europa.eu/data/public_voting/consilium/policyarea/fisheries> .
    
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/act>
    ?act .
  
    ?act
    <http://www.w3.org/2004/02/skos/core#definition>
    ?title .
  }

The result is as shown in the following screenshot (or can be seen online directly).

all-acts-on-fisheries-with-title

More complex aggregation queries

Now that you have the grasp of it, let's do some more interesting aggregated queries. Given the modelling chosen, they are conceptually more complex, but easier to implement.

For example, say you want to know how many times each country voted against the adoption of an act:

  PREFIX eucodim: <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/>
  PREFIX eucoprop: <http://data.consilium.europa.eu/data/public_voting/qb/measureproperty/>
  PREFIX eucovote: <http://data.consilium.europa.eu/data/public_voting/consilium/vote/>
  
  SELECT ?country (COUNT(?act) AS ?count)
  FROM <http://data.consilium.europa.eu/id/dataset/votingresults>
  WHERE {
    ?observation eucodim:country ?country .
    ?observation eucoprop:vote eucovote:votedagainst .
    ?observation eucodim:act ?act .
  }
  GROUP BY ?country
  ORDER BY DESC(?count)

To keep the query concise and readable, I used another SPARQL keyword, PREFIX, to avoid writing the full URIs every time, and grouped the observations by country with GROUP BY. Here are the countries that voted against the adoption of an act, sorted by who voted no the most (using ORDER BY DESC).

who-voted-no

What if you want to see how a country voted on all the acts? It's enough to swap country and vote (grouping by vote this time), and you "pivot" the view of the data, aggregating by vote instead of by country:

  PREFIX eucodim: <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/>
  PREFIX eucoprop: <http://data.consilium.europa.eu/data/public_voting/qb/measureproperty/>
  PREFIX eucocountries: <http://data.consilium.europa.eu/data/public_voting/consilium/country/>
  
  SELECT ?vote (COUNT(?act) AS ?count)
  FROM <http://data.consilium.europa.eu/id/dataset/votingresults>
  WHERE {
    ?observation eucodim:country eucocountries:uk .
    ?observation eucoprop:vote ?vote .
    ?observation eucodim:act ?act .
  }
  GROUP BY ?vote
  ORDER BY DESC(?count)

And you see that the country in the example voted in favor of the adoption 554 times, voted against 45 times, abstained 42 times and didn't participate in the voting 39 times (the latter happens because countries outside the Eurozone do not vote on Euro-related matters).

how-country-voted

The Council's GitHub repository contains more information on the model itself, as well as a list of other SPARQL queries.

How to use all this information from code

Now that you know how to query the dataset via the interactive query tool, you probably want to do something with the data.

There are a few JavaScript libraries that make it easier to interact with SPARQL endpoints and can also navigate graphs, like RDFSTORE-JS or rdflib.js. Or dotNetRDF, if you are looking to do some processing on the server side in .NET.

But if you just want to query a SPARQL endpoint, you can make a standard HTTP GET request, passing the SPARQL query as a parameter. In return you can get the results in a variety of formats, including JSON. The format of this JSON is a W3C standard (like all the other formats described on that page): SPARQL 1.1 Query Results JSON Format.

The last query, when asking for JSON, returns the following result.

json-result

Basically, this JSON format has a head, which tells which variables have been used, followed by the results, which contain a small set of metadata about the query (was it a distinct, was it sorted) and then all the result rows, inside a bindings array. For each variable in a binding, the type and the value are specified.
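Since dotNetRDF was mentioned for server-side processing, here is a minimal C# sketch of the same plain HTTP GET approach. It is mine, not part of the Council demo, and it assumes the Newtonsoft.Json NuGet package; the query string would be any of the SPARQL queries shown above (the variable names in the loop match the last ?vote/?count query):

using System;
using System.Net.Http;
using Newtonsoft.Json.Linq;

class SparqlSample
{
    static void Main()
    {
        var query = "..."; // paste one of the SPARQL queries shown above
        var url = "http://data.consilium.europa.eu/sparql"
                + "?query=" + Uri.EscapeDataString(query)
                + "&format=" + Uri.EscapeDataString("application/sparql-results+json");

        using (var client = new HttpClient())
        {
            var response = JObject.Parse(client.GetStringAsync(url).Result);

            // head.vars lists the variable names; results.bindings holds one
            // entry per result row, with type and value for each variable.
            foreach (var binding in response["results"]["bindings"])
            {
                Console.WriteLine("{0}: {1}",
                    binding["vote"]["value"],   // e.g. .../consilium/vote/votedagainst
                    binding["count"]["value"]); // e.g. 45
            }
        }
    }
}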

Sample request with Angular

Using AngularJS, you can send SPARQL queries using the standard $http.get method. The following sample is part of the open source demo we published on the Council's GitHub repository. The demo allows searching for acts by specifying some of their properties. It is available online at: http://eucouncil.github.io/CouncilVotesOnActsDatasetSample/

First I built an AngularJS Factory to encapsulate the query to the SPARQL endpoint (http://data.consilium.europa.eu/sparql) and the manipulation of results.

angular.module('opendataApp', []).factory('sparqlQuery',function($http){
      return function(query){
        var baseAPI="http://data.consilium.europa.eu/sparql?";
        // URL-encode the query, and ask for the results as JSON
        var requestUrl = baseAPI + "query="+encodeURIComponent(query)+"&format=application%2Fsparql-results%2Bjson";
  
        return $http.get(requestUrl)
        .then(function successCallback(response) {
          var acts = [];
          var bindings = response.data.results.bindings;
          for (var i = 0; i < bindings.length; i++) {
            var binding = bindings[i];
            // Does some processing to put together all properties of an act
          }
          return acts;
          }, function errorCallback(response) {
            // On failure, return an empty result set
            return [];
          });
      };
    })

Then, with this in place, and using another service to concatenate the SPARQL string, I can send the query to the server, get back the results and display them in the page.

  vm.performSearch = function() {
    vm.searching=true;
    vm.noresults=false;
    vm.acts=[];
    vm.sparqlQuery = sparqlGenerator(vm.search); //concatenates string
    sparqlQuery(vm.sparqlQuery).then(function (data){
      vm.acts = data;
      vm.searching=false;
      if(vm.acts.length==0)
       vm.noresults=true;
    });
  };

You can play around with the demo online at: http://eucouncil.github.io/CouncilVotesOnActsDatasetSample/

So, come to the hackathon, and even if you cannot, play with the data and do some nice analysis of it. If you do, please post your links in the comment section.

Voting Simulator Application

On a slightly related topic, if you want to see how agreements are reached and how the actual voting happens, you can play around with the Council Voting Calculator, available on the website, but also as an iOS app and an Android app (both in phone and tablet versions).

Disclaimer: The views expressed are solely those of the writer and may not be regarded as stating an official position of the Council of the EU


Introduction to ASP.NET Core 1.0 video

The video is actually still called Introduction to ASP.NET 5 (I recorded it before the name change from ASP.NET 5 to ASP.NET Core): a few days ago Microsoft TechRewards published the video I produced for Syncfusion about the new open-source web framework by Microsoft.

In the video I go through a quick introduction, followed by the installation procedures, and then show how to create command line tools and simple websites using ASP.NET Core v1.0, using both Visual Studio Code and Visual Studio 2015.

You can read more about the content of my video in the post Video Review: Introduction to ASP.NET 5 with Simone Chiaretta and, of course, watch the video (and take the quiz at the end).

video

Hope you like it, and let me know what you think about it in the comments.

Two Razor view errors you might be making too

Lately I went back to developing web sites with ASP.NET MVC (after quite some time spent on SPAs and Web APIs), and I struggled for some time with some strange Razor view behaviours I couldn't understand. Here are two of them. Hope this post will help you save some time in case you hit the same problems.

Using Generics in Razor views

Generics' syntax has a peculiarity that can interfere with Razor when written inline inside HTML tags: the use of angle brackets. This confuses the Razor parser so much that it thinks there is a missing closing tag.

For example, when trying to write @Model.GetPropertyValue<DateTime>("date") you'll get an error, and Visual Studio will show some squiggles with the following alert.

vs-alert

Basically, it thinks <DateTime> is an HTML tag and wants you to close it.

htmlcompletion

The solution is pretty simple: just put everything inside parentheses, like @(Model.GetPropertyValue<DateTime>("date")).
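Side by side, with the same (Umbraco-style) model call used above:

@* Broken: Razor reads <DateTime> as an unclosed HTML tag *@
<p>@Model.GetPropertyValue<DateTime>("date")</p>

@* Working: the parentheses keep the parser in code mode *@
<p>@(Model.GetPropertyValue<DateTime>("date"))</p>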

Order of execution of Body and Layout views

I wanted to set the current UI culture of my pages on every request, so I wrote a partial view that I included at the top of my layout view: all the text in the layout was correctly translated, while the text coming from the body was not.

After some digging I realized that the execution of a Razor view starts with the view itself (which renders the body) and then goes on with the layout. So my UI culture was set after the body had already been rendered, and I had to move the partial view that was setting the culture to the top of the "main" view.

If you have many views, just put all the initialization code inside a view called _ViewStart.cshtml: the code there is executed before the body is rendered, for every view, so you don't have to add it to each view manually.
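For example, a minimal _ViewStart.cshtml that sets the culture before anything is rendered could look like this (a sketch of mine; the hard-coded culture is just an illustration, in my case the value came from the request):

@using System.Globalization
@using System.Threading
@{
    Layout = "~/Views/Shared/_Layout.cshtml";

    // Runs before the body of every view, so the culture is
    // already set when both body and layout are rendered.
    var culture = new CultureInfo("en-GB"); // illustrative value
    Thread.CurrentThread.CurrentCulture = culture;
    Thread.CurrentThread.CurrentUICulture = culture;
}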

That’s all for now.

ASP.NET 5 is now ASP.NET Core 1.0

A few months before the RTM of the new version of ASP.NET, Microsoft changed its name: what was originally referred to as ASP.NET vNext, and later as ASP.NET 5, is now called ASP.NET Core 1.0.

All the related libraries also change name:

  • .NET Core 5 becomes .NET Core 1.0
  • ASP.NET MVC 6 becomes ASP.NET Core MVC 1.0
  • Entity Framework 7 becomes Entity Framework Core 1.0

I personally think this is a great move, as the old naming was causing a lot of confusion for people who were just glancing at the whole thing from time to time rather than following all its evolution.

Why this is a good move

Calling the next versions v5, v6 and v7 (for ASP.NET, MVC and EF respectively) would have led people to think that they were simply the next versions of the various libraries and frameworks. But they were not:

  • ASP.NET 5 would not have been a replacement for ASP.NET 4.6, because it lacked a lot of its features (WebForms above all)
  • ASP.NET MVC 6 was not a replacement for MVC 5, because you couldn't run it on top of ASP.NET 4.6

So it's a good move to reboot the version number to 1.0 and start a new product from scratch, because this is indeed what ASP.NET 5 was: a completely new product, written from scratch, without backward compatibility, and running on a different runtime.

Calling it 1.0 also opens the way to a future ASP.NET 5 running on the full framework and still supporting WebForms for example.

Calling everything 1.0 also clears up the versioning mess of all the libraries that ran on top of ASP.NET: MVC 5, Web API 2, SignalR, Web Pages 2. Now they'll all be part of the Core family, will all go back to 1.0, and will evolve together with the Core family.

Why I don’t like it that much

But naming and versioning are hard, and this naming has its faults too: you can still run ASP.NET Core 1.0 on top of the "full" .NET Framework 4.6, and the same goes for EF Core 1.0. Will this lead to some confusion? I'm pretty sure it will. Also, if you search on Google for ASP.NET MVC 1.0, you'll have to make sure the v1.0 you are reading about is the "Core" one and not the old version of the "full" ASP.NET MVC.

Personally I'd have gone even further and called it something completely different: Foo 1.0.

But this too would have had its pros and cons:

  • the main point in favour is that we'd finally get rid of the legacy of "Active Server Pages" and lose the bad connotation that ASP.NET WebForms has in other communities. Also, almost any name would be better and more appealing than "ASP.NET Core 1.0 MVC", which is getting very close to the long names we had from Microsoft in the past.
  • the disadvantage of a new name is that it would lose all the ASP branding that has been built over 20 years.

How all the new parts stack up after the name change

Let's try to clear things up a bit. At the bottom level we'll have:

  • the "full" .NET Framework 4.6 which provides base class library and execution runtime for Windows;
  • .NET Core v1, which provides the base class library and many of the other classes. From RC2 it also provides the execution runtime and all related tools (packages, build, etc), everything that was before in DNX. This runs on all OS.

Then at the base web framework level:

  • ASP.NET 4.6, which runs on top of the "full" .NET 4.6
  • ASP.NET Core v1, which runs on top of .NET Core v1 and on top of the "full" .NET 4.6

Then at the higher web libraries level:

  • ASP.NET MVC 5, WebForms and so on, which run on top of ASP.NET 4.6
  • ASP.NET Core v1 MVC, which runs on top of ASP.NET Core v1 (and in RC2 loses the execution runtime and CLI part of it)

As ORMs:

  • EF6 runs on top of "full" .NET 4.6
  • EF Core runs on top of .NET Core v1 and on top of the "full" .NET 4.6

Read more

Many other members of the .NET community wrote about their views on this change; here are some of the posts I found around the net.

What do you think? Like, dislike, love, hate? Let me know in the comments.

Automatically applying styles to a Word document with search and replace

Word as an end-user tool is a very strange topic for me to blog about, but I just discovered a tip that would have saved me countless hours, so I thought I'd share it.

At the moment I'm writing a book (yeah, another one): for my personal convenience I write it in Markdown, so that I can easily push it to GitHub and work on it from different devices, even from a tablet when travelling.

I've synced my private repository to GitBook so that I can easily read it online or export it to PDF or Word, but unfortunately I cannot rely on these features to send the chapters to my publisher. In fact, book publishers have very strict rules when it comes to styles in Word documents. For example, if I want a bullet list, I cannot just click the bullet list button in the toolbar: I have to apply a "bulletlist" style. The same goes for all the other standard styles.

For most of the styles it's not a big deal: I just select the lines I need to re-style, and in 15-20 minutes a 20-page chapter is formatted.

The problem arrives when formatting "inline code": in Markdown, inline code is delimited with back-ticks (`), so each time I need to show something as inline code I have to remove the leading and trailing ticks and then apply the "inlinecode" Word style. This process alone, in a typical chapter, takes away at least a few hours. After a few chapters and hours of frustration I asked my girlfriend for help: working in language translation, she uses Word as her main working tool all day, and she had a solution for this problem. I'm sharing it in case other fellow technical writers need it.

First open the Advanced Find dialog and switch to the Replace tab:

  • In Find, put a kind of simplified regular expression: (`)(*)(`). This means: find any string which starts with a back-tick, has anything in the middle, and ends with a back-tick.
  • In Replace, put \2. This means: replace the match with the content of the second "match group". Also specify the style you want applied, in my case "InlineCode".
  • And remember to check the Use wildcards box, otherwise this won't work.
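For fellow developers, the wildcard pattern is easier to read as a regular expression with capture groups; this little .NET snippet (only an analogy of mine, since the style itself can obviously only be applied inside Word) does the same text transformation:

using System;
using System.Text.RegularExpressions;

class WildcardDemo
{
    static void Main()
    {
        // Word's (`)(*)(`) with \2 is the equivalent of keeping
        // only the second capture group and dropping the ticks.
        var input = "run `git status` to check the repo";
        var output = Regex.Replace(input, "(`)([^`]*)(`)", "$2");
        Console.WriteLine(output); // run git status to check the repo
    }
}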

Let's see it in action on some lines from my upcoming book, starting from the Markdown file:

markdown

Once pasted into Word (with the basic styling applied), it becomes the following (notice all the text with back-ticks):

word

I then apply the magic find&replace:

find

And voilà! In a few seconds, a 20-page Word document is correctly updated: the ticks around inline code are removed and the correct style applied.

It's not my typical content, but I hope you've learnt something you didn't know.

To see all you can do with wildcards, read How to Use Wildcards When Searching in Word 2013.

The next step in automating this process would be writing some code that applies the proper formatting in one go.

Web European Conference registration opens 1st July at 12:00 CET

title

The moment has finally come: tomorrow at midday, Central European Time, it will be possible to start registering for the 2nd Web European Conference.

In the previous edition of the conference we sold out all the tickets available at the time (170) in the first few hours after opening. This year we'll have 400 seats, but just to be sure, remember to set an alarm and get to the registration page on time, so as not to lose the chance to take part in the conference.

Register for the Web European Conference

Speakers and sessions

Tomorrow we'll also close the Call for Presenters, and we'll ask for your opinion on which sessions to include in the conference: you can already see all the proposals on our GitHub repository, and from tomorrow you'll be able to vote for your favorite sessions.

But we already have our two top speakers: Scott Hanselman and Dino Esposito.

speakers


Sponsors

A final word on our sponsors and partners, without whom this conference would not be possible.

sponsors


CodeGarden 2015: recap of day 1

Here I am again, for the third time at Umbraco CodeGarden. For those who do not know what it is, it's the yearly Umbraco developer conference, this year celebrating its 10th anniversary.

Before going to sleep after a long day I just wanted to post my recap of the day.

The Keynote

CodeGarden Keynote

Some numbers on the "size" of the community:

  • almost 200k active developers on the community site
  • almost 300k active public installations of Umbraco
  • over 200k installations of Umbraco v7 in the last year

In addition to giving all these figures, Niels also highlighted some popular packages contributed by the community (Vorto for 1:1 translations, NuPicker and Nested Content for an enhanced editing experience, LePainter, a visual grid editor, and BookShelf, which provides inline contextual help in the backoffice).

Other announcements included the features coming with v7.3 (automatic load balancing, a new API library as a first step towards getting rid of the legacy API, and authentication based on ASP.NET Identity, which enables Twitter, Google and Active Directory login and two-factor authentication via Google) and future features that are currently being experimented with, like a new cache layer, a new content type editor and a full-fledged REST API based on the HAL standard.

Roadmap panel

Immediately after the keynote, 5 members of the core dev team answered questions on specific pain points that users would like addressed in future (v8) releases, and also unveiled HQ's priorities:

  • Improving the UX
  • Fresh start on the code (getting rid of the decade-old original legacy API)
  • Bringing many features of Umbraco.com (the SaaS platform) to on-premises installations (like migrations between environments, synchronization of content and so on)
  • Segmentation, segments-based content variations and personalization

Contributing to the core

After the usual organic lunch, the afternoon started with some Git tips for contributing to the core of Umbraco and making maintainers' lives easier:

  • First squash all commits into one, making sure no typos or "missed file" kind of commits are sent in the pull request. The suggestion was to use the git rebase --interactive command.
  • Then making sure our pull request is based on a pretty recent version of the repository, using the following process:
    • Track upstream git remote add upstream ...
    • Fetch upstream git fetch upstream
    • Rebase your commit on top of the latest version of the repo git rebase upstream/dev-7
  • And finally, resolve all the conflicts that might arise before submitting the pull request

Make Editors Happy

As last year, one of the main tenets of the conference is reminding us developers that content editors deserve love too, and with Umbraco 7 it's very easy to craft data editors tailored to custom editing expectations and flows. But even without going down the path of customization with AngularJS, many things can be done with the core editors and a few selected packages: group properties in tabs, remove from the RTE everything that editors do not need, provide contextual help (maybe consider the uEditorNotes package) and finally use NuPicker and Nested Content to provide a better experience when choosing nodes from the tree and when creating lists of items.

How to sell Umbraco

The day ended with an amazing talk by Theo Paraskevopoulos with tips on how to sell Umbraco as a platform when doing projects. Unfortunately the slides are not published yet, but I will update the post as soon as they are.

Some impressive facts I didn't know: the NFL uses Umbraco for one of their sub-sites (http://operations.NFL.com), and Umbraco, with 0.7%, is the 5th platform in terms of market share in the CMS industry, after WordPress, Drupal, Joomla and DotNetNuke (1%); all the remaining CMSes together account for 1.4%.

Conclusion

The evening ended with a protest march through the streets of Copenhagen, which unfortunately I had to miss due to a broken toe, caused by an injury in a recent triathlon race.

The first day was not super technical, being more soft-skill and UX oriented, but it was very useful anyway, especially for me: my reason for being here is to get a feeling of where Umbraco is going, to see whether it can be used as the CMS platform at my workplace.

Tomorrow's agenda looks more tech-focused.

As soon as the slides and videos are published, I'll update the post.

My new free eBook is out: OWIN Succinctly by Syncfusion

I'm happy to announce that my latest book, OWIN Succinctly, has just been released by Syncfusion within their "Succinctly" series of eBooks.

I wrote this book together with my friend and co-organizer of the 2nd Web European Conference, Ugo Lattanzi, with whom I also gave a talk in Paris last May, also about OWIN.

OWIN is a big inspiration for the new ASP.NET 5 stack, so we decided to write this book both to show how you can use this "philosophy" with the current version of ASP.NET, and to let you know what it could look like in the future with ASP.NET 5.

The book covers all aspects of OWIN, starting with a description of the OWIN specification, moving on to how Katana, Microsoft’s implementation of the specs, works. Later we also show how to use Katana with various web frameworks, how to use authentication and finally how to write custom middleware.

The table of contents is:

  • OWIN
  • Katana
  • Using Katana with Other Web Frameworks
  • Building Custom Middleware
  • Authentication with Katana
  • Appendix

OWIN and the new ASP.NET will be big actors at the 2nd Web European Conference in Milano next 26th of September, so if you want to know more about those technologies, consider taking part in the conference.

A big "thank you" goes to Syncfusion, for giving us the opportunity to reach their audience, and to our technical reviewer Robert Muehsig, whose comments helped make the book even better.

If you have comments or feedback on the book, do not hesitate to write a comment on this post or contact me on Twitter @simonech.

Using Entity Framework within an Owin-hosted Web API with Unity

After quite a long time writing applications without direct interaction with databases, lately I've been working on a pretty simple ASP.NET Web API project that needs to save data to a database. Despite the simplicity of the application, I faced some interesting problems, which I'm going to write about in a few blog posts over the next weeks.

The first of these problems, which I'm going to write about in this post, is how to configure an ASP.NET Web API application to run within OWIN, have its dependencies resolved with Unity, and have the Entity Framework DbContext injected via IoC/DI.

Setting up ASP.NET Web API with Owin

The first thing to do is to get the right packages:

  • first create a new ASP.NET Web Application, choose the Empty template, and tick the Web API option under "Add folders and core references for": this will install all the NuGet packages needed for a Web API project and set up the folder structure;
  • then you need to install the OWIN packages and the OWIN-Web API "bridge": by installing the Microsoft.AspNet.WebApi.Owin package you'll get everything you need;
  • finally, depending on where/how you want to run the Web API project, you also need the NuGet package for the OWIN server of your choice: download Microsoft.Owin.Host.SystemWeb for starters if you want your app to run within IIS.

Once all the core dependencies are ready, you have to configure the OWIN Startup class to fire up Web API: just add an OWIN Startup class from the Visual Studio contextual menu and add the right Web API configuration to the Configuration method.

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();
        // ... Configure your Web API routes
        app.UseWebApi(config);
    }
}

And voilà! You have a Web API running within an OWIN host.
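At this point a minimal controller is enough for a smoke test. This sample is mine, not from the original project, and it assumes you enabled attribute routing with config.MapHttpAttributeRoutes() among the route configurations above:

using System.Web.Http;

public class PingController : ApiController
{
    // GET /api/ping
    [Route("api/ping")]
    public IHttpActionResult Get()
    {
        return Ok("pong");
    }
}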

Adding Unity to Web API within Owin

The next step is adding Unity and configuring it to correctly resolve dependencies in a Web API application. Just add the Unity.AspNet.WebApi NuGet package, and all the needed packages and bootstrapping code will be added to the project. In particular, it will add two important files:

  • UnityConfig class, where the configuration of the Unity container should go
  • UnityWebApiActivator class, which is fired up using WebActivator and registers the Unity dependency resolver for Web API (by saving it into the GlobalConfiguration.Configuration object)

Unfortunately, if you run your application now (and you already have some dependencies injected into your controllers via IoC/DI), nothing will be injected, simply because the DependencyResolver is still empty, despite being set by the Start method of the UnityWebApiActivator: this works fine in a normal Web API application, but not with OWIN, because of the sequence in which the various services are instantiated.

The solution to the problem is pretty easy: just delete the UnityWebApiActivator class and put the same code into the OWIN Configuration method:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();

        // ... Configure your Web API routes
        config.DependencyResolver = new UnityDependencyResolver(UnityConfig.GetConfiguredContainer());

        app.UseWebApi(config);
    }
}

For reference, UnityConfig.GetConfiguredContainer is a static method exposed by the UnityConfig class added by the Unity bootstrapper for ASP.NET Web API NuGet package; it looks like the following snippet (the part within the region is added by the package, while the RegisterTypes method is the one we use to configure our Unity container):

/// <summary>
/// Specifies the Unity configuration for the main container.
/// </summary>
public class UnityConfig
{
    #region Unity Container
    private static Lazy<IUnityContainer> container = new Lazy<IUnityContainer>(() =>
    {
        var container = new UnityContainer();
        RegisterTypes(container);
        return container;
    });

    /// <summary>
    /// Gets the configured Unity container.
    /// </summary>
    public static IUnityContainer GetConfiguredContainer()
    {
        return container.Value;
    }
    #endregion

    public static void RegisterTypes(IUnityContainer container)
    {
        container.RegisterType<IUserRepository, MyUserRepository>();
    }
}

And now Unity too is wired up correctly, and dependencies are resolved.

Having Entity Framework DbContext injected via Unity

Now let's get to the juicy bits: how do we configure Unity to inject the EF6 DbContext so that a new context is created with each request? On the internet there are a lot of samples showing how to do this, as it is a very common pattern with all "heavy" ORMs, but all of them use other IoC/DI frameworks, like Ninject, StructureMap and so on; none use Unity.

Unity doesn't have an InRequestScope/PerRequest object scope like most other IoC/DI frameworks, but it has something slightly different, called HierarchicalLifetimeManager: it basically creates a "singleton" per child Unity container. While this may seem strange, it actually gives a bit more flexibility, as one can create child containers for different occasions: which is exactly what the Unity bootstrapper for ASP.NET Web API does. It introduces a new kind of DependencyResolver, called UnityHierarchicalDependencyResolver, which creates a new child container at the beginning of every request.

Taking into account all the steps covered so far, here is the final OWIN Configuration method:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();
        
        // ... Configure your Web API routes

        config.DependencyResolver = new UnityHierarchicalDependencyResolver(UnityConfig.GetConfiguredContainer());
        
        app.UseWebApi(config);
    }

}

and the Unity registration:

public static void RegisterTypes(IUnityContainer container)
{
    container.RegisterType<IUserRepository, MyUserRepository>();
    container.RegisterType<MyDBContext>(new HierarchicalLifetimeManager());
}

The last line is important: if you forget to specify the HierarchicalLifetimeManager, a new DbContext will be injected into each of your repository/command/query objects, you won't be able to share the context, and everything will fall apart.
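To see what this buys us, here is a hedged sketch of a repository receiving the per-request context (MyUserRepository and MyDBContext are the same placeholder names used in the registration above; the entity and the query are purely illustrative):

using System.Linq;

public class MyUserRepository : IUserRepository
{
    private readonly MyDBContext db;

    // Unity resolves MyDBContext from the per-request child container,
    // so every object built during the same request shares this instance.
    public MyUserRepository(MyDBContext db)
    {
        this.db = db;
    }

    public User GetByName(string userName) // illustrative method
    {
        return db.Users.FirstOrDefault(u => u.UserName == userName);
    }
}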

Wrapping up

Once you get past a few pitfalls (the GlobalConfiguration not being "seen" by the OWIN-hosted Web API, and the configuration of the "per request" scope with Unity), the setup of the infrastructure is pretty easy and is implemented in just 2 lines of code.

Hope this helps; please let me know if you solved the problem with a different approach or if my solution misses something. I'll write a few more posts with my findings in the coming weeks.

How to unset a proxy for a specific git repository or remote

In this post I'll show something I just discovered, which solved a problem I had once we introduced an in-house git repository: how to have many git repositories using proxies, but one that connects directly without a proxy.

Lately we moved our source code repository from a "standard" TFS repo to the git-based TFS repository introduced with TFS 2013. Besides working with GitHub repositories, I now also had to connect to a repository hosted inside the local network and authenticate using the local domain credentials.

All went well from within Visual Studio, but since you cannot do everything from VS, I also needed to connect to the internal repository via the git command line tools. The problem is that it didn’t connect.

After a bit of troubleshooting I realized that the problem was the proxy: I'm behind a corporate firewall, so I had to configure a proxy to connect to GitHub. Unfortunately the proxy was not recognizing my connection as local, so it was trying to resolve it on the internet, and of course it failed.

I had to remove the proxy configuration to connect to my local git-based TFS repository, but then I couldn't connect to the other repositories unless I specified the proxy on each of the repositories that needed it, which was kind of tedious since I need the proxy for all repos except one.

Looking through the git-config documentation I found the solution:

Set to the empty string to disable proxying for that remote.

This not only works when specifying a proxy for a specific remote, but also for the whole repository.

Without further ado, here are the commands for this configuration.

First you specify your global proxy configuration:

$ git config --global --add http.proxy "http://proxy.example.com"

Then you move to the repository for which you want to unset the proxy and add an "empty" proxy.

$ git config --local --add http.proxy ""

And in case you need to specify an empty proxy only for a specific remote:

$ git config --local --add remote.<name>.proxy ""

It took me a day to understand the cause of the problem; I hope this post will help other people in a similar situation.
