
How to debug .NET Core RC2 app with Visual Studio Code on Windows

So, you installed .NET Core RC2, followed the getting started tutorial and got your "Hello World!" printed on your command prompt just by using the CLI.

Then you went the next step and you tried to use Visual Studio Code and the C# extension to edit the application outside of Visual Studio.

And finally you wanted to debug and set a breakpoint inside the application, but you ran into some problems and nothing worked. Here is how to make it work.

Specify the launch configuration

Visual Studio Code needs to know how to launch your application, and this is specified in a launch.json file inside the .vscode folder. From the debug window, click the "gear" icon and Code will create one for you: just choose the right environment, ".NET Core".

Then you must specify the path to your executable in the program property. In the standard hwapp sample app, replace

"program": "${workspaceRoot}/bin/Debug/<target-framework>/<project-name.dll>",

with

"program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/hwapp.dll",
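For reference, a minimal launch.json for this scenario looks roughly like the following. This is a sketch based on what the RC2-era C# extension generates; property values such as "coreclr" may differ in your version of the extension, so treat the generated file as the authoritative one:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Launch (console)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/hwapp.dll",
      "args": [],
      "cwd": "${workspaceRoot}",
      "stopAtEntry": false
    }
  ]
}
```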

There is much more you can specify in the launch.json file. To see all the options have a look at the official doc: Debugging in Visual Studio Code.

Specify the task runner

If you try to debug now you’ll have another warning: “No task runner configured”.

This is because for launching, VS Code has to build the project, and this is done via a task.

But no worries, just click the “Configure Task Runner” button in the info box, choose which task runner you want to use, in this case “.NET Core”, and the tasks.json file will be created for you.
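The generated tasks.json will look roughly like the following (a sketch of the RC2-era default; the exact content depends on the extension version, so rely on the file the extension creates for you):

```json
{
  "version": "0.1.0",
  "command": "dotnet",
  "isShellCommand": true,
  "args": [],
  "tasks": [
    {
      "taskName": "build",
      "args": [],
      "isBuildCommand": true,
      "problemMatcher": "$msCompile"
    }
  ]
}
```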

More info on task runners in VS Code can be found on the offical documentation: Tasks in Visual Studio Code.

Running and debugging

Now you can click the "Start Debugging" button or press F5 and the application runs. Cool…

Now you set a breakpoint and the execution stops where you set it, doesn't it? Well… if you are on Mac or Linux it does. But it doesn't stop if you are on Windows, and the Debug Console says something like:

WARNING: Could not load symbols for 'hwapp.dll'.
'...\hwapp\bin\Debug\netcoreapp1.0\hwapp.pdb' is a Windows PDB.
These are not supported by the cross-platform .NET Core debugger.

Introducing Portable PDBs

In order to be able to debug cross-platform, .NET Core now has a "portable PDB" format, and the newly introduced .NET Core debugger for Visual Studio Code only supports this format. Unfortunately, by default on Windows, the .NET Core build generates standard "Windows PDBs", which are not supported. But the fix is easy: you just have to tell the compiler to generate portable PDBs.

This is done by setting the debugType to portable in the buildOptions section of the project.json file:

  "buildOptions": {
    "debugType": "portable"
  }

And voila! Breakpoints are hit!


The .NET Core RC2 stack has been released, and a new platform download site

Finally, after some months of delay due to the replatforming of DNX on top of the new .NET Core CLI, at the beginning of the week all things RC2 have been released.

There is already plenty of documentation on how to get started, both on the ASP.NET Core Documentation and .NET Core Documentation sites, but in this post I just want to collect all the announcements.


The three main pieces of the puzzle, .NET Core, ASP.NET Core and Entity Framework Core, are all at RC2.

Then there is the Tooling, preview 1: Announcing Web Tooling for ASP.NET Core RC2.

It’s important to understand why one thing is RC2 and the other is preview.

Libraries and runtime are RC2, and will be RTM at the end of June: they are a real RC2, and the team has been working on them for more than 2 years.

The tooling, that is the CLI and the support inside Visual Studio and Visual Studio Code, is still a preview: the team has been working on it, especially the web tooling part, only since the end of last year, and it will become RTM only with the next version of Visual Studio "15".


A lot changed between RC1 and RC2, but do not worry too much: the changes are mainly in the hosting and runtime parts of apps. No major changes in the common APIs… well, maybe some renaming and moving of namespaces.

Here are links to what changed in .NET Core and ASP.NET Core between RC1/DNX and RC2:

New website

But there is more to it. All things .NET can now be downloaded from the, IMHO, super-cool new URL:

From there you can download the standard framework, .NET Core, and mobile development tools for Xamarin.

How to access Council of EU data on votes on legislation using SPARQL and AngularJS

One of the areas I've been focusing on lately is the so-called "Semantic Web", in particular Open Data as a way to make governments more transparent and provide data to citizens. From a technical point of view, these data are distributed using the RDF/LD format.

I'm particularly excited to have worked on the release of what I think is a very important dataset, one that helps understand how decisions are taken in the Council of the European Union.

The Council of the European Union has published how member states have voted since 2010

In April 2015, the Council of the European Union released as open data how Member States vote on legislative acts. In other words, when the Council votes to adopt a legislative act (i.e. a regulation or a directive), the votes of each country are stored and made publicly visible. This means that you can see how your country voted when a given law was adopted, or you can get more aggregate data on trends and voting patterns.

Recently, the Council has also released two additional open datasets containing the metadata of all Council documents and metadata on requests for Council documents.

DiploHack, Open Data Hackathon

Together with the Dutch Presidency, the Council will also organise DiploHack, a hackathon about open data, tomorrow and the day after (29 and 30 April) in Brussels. The goal of the hackathon is to make use of the Council's open data sets, linking them with the other datasets available from other EU institutions, and build something useful for citizens. You can still register for the hackathon.

This post will show you how to access the votes using SPARQL, which is a query language for data published in RDF format, and how to access those data using AngularJS.

A brief introduction to RDF/LD and SPARQL

In the context of the Semantic Web, entities and relations between entities are represented as triples, which are serialized in a format called "Turtle", in RDF/XML (which is what is usually referred to as RDF), or in many other formats.

You can imagine a "triple" as a database row with 3 columns: subject, predicate, object. The subject and predicate are represented by URIs, while the object can be either a URI or a literal value. This is a very flexible format that can be used to represent anything. For example, you can say that the author of this blog is myself (uniquely identified by my GitHub account URL and by the name "Simone Chiaretta") and that the topic of this blog is Web Development. The corresponding Turtle serialization of these three statements would be:

  <> <http://purl.org/dc/terms/creator> <https://github.com/simonech> .

  <> <http://purl.org/dc/terms/subject> "Web Development" .

  <https://github.com/simonech> <http://xmlns.com/foaf/0.1/name> "Simone Chiaretta" .

Notice the use of URIs to represent entities, which gives them a unique identifier. In this case the predicate refers to a URI defined by Dublin Core's Metadata Terms. Another possible way to represent the topic would have been to refer to another URI coming from a managed taxonomy. This way it would have been possible to make "links" with other datasets.

But how do we query these data? We use SPARQL.

SPARQL uses a syntax very similar to Turtle, with SQL-like keywords such as SELECT and WHERE. Using the example above, one could query for all publications written by Simone Chiaretta. The syntax would be:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/terms/creator> <https://github.com/simonech> .
  }

Basically the query is done by putting a variable in the position of the element you want as a result, and by specifying the other two elements of the triple: a kind of query by example. The other two elements of the triple can also be variables, in case you want to "join" different triples. For example, if we want to search for all publications written by Simone Chiaretta, identified by his name instead of his URI, the query would be:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/terms/creator> ?author .
    ?author <http://xmlns.com/foaf/0.1/name> "Simone Chiaretta" .
  }

With this basic knowledge, we can now look at how to access the data released by the Council of the European Union about votes on legislative acts.

How the data is modelled and how to query it

The released data include information about each act (title, act number, various document numbers, policy area, etc.), the session in which it was voted (its date, the Council configuration, the number of the Council session) and how each country voted.

Instead of being modelled as a hierarchical graph, in order to make it easier to analyze and get aggregated data, we modelled it as a Data Cube: an "observation" includes all the information in a flat, denormalized structure. So a "line" includes how one country voted on a given act, followed by all the information about the act and the session, which is then replicated for every country that voted on the act. This approach makes it less space efficient (act and session information is replicated every time) but easier and faster to query, as there is no need to "link" different entities with "joins" in order to compute aggregated results.
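To illustrate why the flat structure makes aggregation cheap, here is a small JavaScript sketch. The field names and values below are purely illustrative, not the dataset's real vocabulary: the point is that counting votes per country is a single pass over the rows, with no joins.

```javascript
// Hypothetical flattened "observations": act/session info is repeated per country
var observations = [
  { country: "uk", vote: "votedagainst", act: "act-1", policyArea: "fisheries" },
  { country: "fr", vote: "infavour",     act: "act-1", policyArea: "fisheries" },
  { country: "uk", vote: "infavour",     act: "act-2", policyArea: "transport" }
];

// One pass, no joins: tally how many times each country cast a given vote
function countVotes(rows, voteType) {
  var counts = {};
  rows.forEach(function (row) {
    if (row.vote === voteType) {
      counts[row.country] = (counts[row.country] || 0) + 1;
    }
  });
  return counts;
}

var against = countVotes(observations, "votedagainst"); // → { uk: 1 }
```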

Simple queries

For example, if you want to know all the acts about fisheries, you write:

  SELECT DISTINCT ?act
  WHERE {
    ?observation <> <> .
    ?observation <> ?act .
  }

The query basically asks: give me all the “observations” whose policy area is fisheries, and then, for these observations, give me their “act”. 

Notice the DISTINCT clause: this is important because, given the "data cube" approach, every act is replicated 28 times (there are usually 28 countries voting), so we need to take it only once.

The result will be 27 acts, each one identified by its URI. You can also execute the query directly in the interactive query tool online, and you will get the results as HTML.


If you want the title of the act, you also need to ask for the "definition" of that URI, which has been mapped using a dedicated predicate. So, the query will become:

  SELECT DISTINCT ?act ?title
  WHERE {
    ?observation <> <> .
    ?observation <> ?act .
    ?act <> ?title .
  }

The result is as shown in the following screenshot (or can be seen online directly).


More complex aggregation queries

Now that you have the grasp of it, let's do some more interesting aggregated queries. Actually, given the modelling done, they are conceptually more complex but easier to implement.

For example, what if you want to know how many times countries voted against the adoption of an act?

  PREFIX eucodim: <>
  PREFIX eucoprop: <>
  PREFIX eucovote: <>
  SELECT COUNT(?act) as ?count ?country
  from <>
  where {
    ?observation eucodim:country ?country .
    ?observation eucoprop:vote eucovote:votedagainst .
    ?observation eucodim:act ?act .
  }
  ORDER BY DESC(?count)

To keep the query concise and readable, I used another SPARQL keyword, PREFIX, to avoid writing the whole URI every time. Here are the countries that voted against the adoption of an act, sorted by who voted "no" the most (using the ORDER BY DESC keyword).


What if you want to see how one country voted across all the acts? Just swap country and vote, and you "pivot" the view of the data, aggregating by vote instead of by country:

  PREFIX eucodim: <>
  PREFIX eucoprop: <>
  PREFIX eucocountries: <>
  SELECT COUNT(?act) as ?count ?vote
  from <>
  where {
    ?observation eucodim:country eucocountries:uk .
    ?observation eucoprop:vote ?vote .
    ?observation eucodim:act ?act .
  }
  ORDER BY DESC(?count)

And you see that the country of the example voted 554 times in favour of adoption, 45 times against, abstained 42 times and didn't participate in the voting 39 times (this happens because countries outside of the Eurozone do not vote on Euro-related matters).


Council’s Github repository contains more information on the model itself as well as a list of other SPARQL queries.

How to use all this information from code

Now that you know how to query the dataset via the interactive query tool, you probably want to do something with the data.

There are a few JavaScript libraries that make it easier to interact with SPARQL endpoints and can also navigate graphs, like RDFSTORE-JS or rdflib.js; or dotNetRDF if you are looking to do some processing on the server side in .NET.

But if you just want to query a SPARQL endpoint, you can make a standard HTTP GET request, passing the SPARQL query as a parameter. In return you can get the results in a variety of formats, including JSON. The format of this JSON is a W3C standard (like all the other formats described on the page): SPARQL 1.1 Query Results JSON Format.

The last query, in JSON format, would have returned the following.


Basically this JSON format has a head, which tells which variables have been used, followed by the results, which contain a small set of metadata about the query (was it a distinct, was it sorted) followed by all the results inside a bindings array. For each variable, the type, URI and value are specified.
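As a sketch, here is how you could flatten such a result into plain JavaScript objects. The sample payload below is hypothetical, but shaped according to the W3C SPARQL 1.1 Query Results JSON Format described above:

```javascript
// Hypothetical result, shaped per the SPARQL 1.1 Query Results JSON Format
var result = {
  head: { vars: ["count", "vote"] },
  results: {
    bindings: [
      { count: { type: "literal", value: "554" },
        vote: { type: "uri", value: "http://example.org/vote/infavour" } }
    ]
  }
};

// Turn each binding into a plain object keyed by the variables in the head
function flatten(result) {
  return result.results.bindings.map(function (binding) {
    var row = {};
    result.head.vars.forEach(function (v) {
      if (binding[v]) row[v] = binding[v].value;
    });
    return row;
  });
}

var rows = flatten(result); // → [{ count: "554", vote: "http://example.org/vote/infavour" }]
```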

Sample request with Angular

Using AngularJS, you can send SPARQL queries using the standard $http.get method. The following sample is part of the open source demo we published on the Council's GitHub repository. The demo allows searching for acts by specifying some properties. It is available online at:

First I built an AngularJS factory to encapsulate the query to the SPARQL endpoint and the manipulation of the results.

angular.module('opendataApp', []).factory('sparqlQuery', function($http) {
  return function(query) {
    var baseAPI = ""; // the URL of the SPARQL endpoint (omitted here)
    var requestUrl = baseAPI + "query=" + query + "&format=application%2Fsparql-results%2Bjson";
    return $http.get(requestUrl)
      .then(function successCallback(response) {
        var acts = [];
        var bindings = response.data.results.bindings;
        for (var i = 0; i < bindings.length; i++) {
          var binding = bindings[i];
          // Does some processing to put together all properties of an act
        }
        return acts;
      }, function errorCallback(response) {
        // Handle the error
      });
  };
});
Then, with this in place and using another service to concatenate the SPARQL string, I can send the query to the server, get back the results and display them in the page.

  vm.performSearch = function() {
    vm.sparqlQuery = sparqlGenerator(/* search criteria */); // concatenates the SPARQL string
    sparqlQuery(vm.sparqlQuery).then(function (data) {
      vm.acts = data;
    });
  };
You can play around with the demo online at:

So, come to the hackathon; and even if you cannot, play with the data and do some nice analysis of it. If you do, please post your links in the comments section.

Voting Simulator Application

On a slightly related topic, if you want to see how agreements are reached and how the actual voting happens, you can play around with the Council Voting Calculator, available on the website, but also as an iOS app and an Android app (both in phone and tablet versions). Following is a screenshot from the iPad version of the app.

Disclaimer: The views expressed are solely those of the writer and may not be regarded as stating an official position of the Council of the EU

Clause de non-responsabilité: Les avis exprimés n'engagent que leur auteur et ne peuvent être considérés comme une position officielle du Conseil de l'UE

Introduction to ASP.NET Core 1.0 video

Actually still called Introduction to ASP.NET 5 (I recorded it before the name change from ASP.NET 5 to ASP.NET Core), a few days ago Microsoft TechRewards published the video I produced for Syncfusion about the new open-source web framework by Microsoft.

In the video I go through a quick introduction, followed by installation procedures, and then show how to create command line tools and simple websites using ASP.NET Core v1.0, using both Visual Studio Code and Visual Studio 2015.

You can read more about the content of my video in the post Video Review: Introduction to ASP.NET 5 with Simone Chiaretta and, of course, watch the video (and take the quiz at the end).


Hope you like it, and let me know what you think about it in the comments.

Two Razor view errors you might be making too

Lately I went back to developing web sites with ASP.NET MVC (after quite some time spent on SPAs and Web APIs), and I struggled for some time with some strange Razor view behaviours I couldn't understand. Here are some of them. Hope this post will save you some time in case you hit the same problems.

Using Generics in Razor views

Generics' syntax has a peculiarity that might interfere when written inline inside HTML tags: the use of angle brackets. This confuses the Razor interpreter so much that it thinks there is a missing closing tag.

For example, when trying to write @Model.GetPropertyValue<DateTime>("date") you'll get an error and Visual Studio will show some squiggles with the following alert.


Basically it thinks <DateTime> is an HTML tag and wants you to close it.


The solution is pretty simple: just put everything inside parentheses, like @(Model.GetPropertyValue<DateTime>("date"))

Order of execution of Body and Layout views

I wanted to set the current UI culture of my pages on every request, so I wrote a partial view that I included at the top of my layout view: all text in the layout was correctly translated, while the text coming from the body was not.

After some digging I realized that the execution of a Razor view starts with the view itself (which renders the body) and then goes on with the layout. So my UI culture was set after the body had already been rendered, and I had to move the partial view that sets the culture to the top of the "main" view.

If you have many views, just put all initialization code inside a view called _ViewStart.cshtml. This way the code is executed before the body is rendered, for every view, and you don't have to add it to each view manually.
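As a sketch, a _ViewStart.cshtml that sets the UI culture could look like the following (the hard-coded culture is just an illustration; in a real app the value would come from the request or user settings):

```cshtml
@{
    // Runs before every view's body is rendered
    Layout = "~/Views/Shared/_Layout.cshtml";
    // Hypothetical: set the UI culture here so the body picks it up
    System.Threading.Thread.CurrentThread.CurrentUICulture =
        new System.Globalization.CultureInfo("fr-FR");
}
```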

That’s all for now.

ASP.NET 5 is now ASP.NET Core 1.0

A few months from the RTM of the new version of ASP.NET, Microsoft changed the name: what was originally referred to as ASP.NET vNext, and later as ASP.NET 5, is now called ASP.NET Core 1.0.

All the related libraries change name as well:

  • .NET Core 5 becomes .NET Core 1.0
  • ASP.NET MVC 6 becomes ASP.NET Core MVC 1.0
  • Entity Framework 7 becomes Entity Framework Core 1.0

I personally think this is a great move, as the old naming was causing a lot of confusion in people who were just looking at the whole thing from time to time and not following the whole evolution.

Why this is a good move

Calling the next versions v5, v6 and v7 (respectively for ASP.NET, MVC and EF) would have led people to think that they were actually the next versions of the various libraries and frameworks. But they were not:

  • ASP.NET 5 would not have been a replacement for ASP.NET 4.6 because it lacks a lot of its features (WebForms above all)
  • ASP.NET MVC 6 was not a replacement for MVC 5 because you couldn't run it on top of ASP.NET 4.6

So it's a good move to reset the version number to 1.0 and start a new product from scratch, because this is indeed what ASP.NET 5 was: a completely new product, written from scratch, without backward compatibility, and running on a different runtime.

Calling it 1.0 also opens the way to a future ASP.NET 5 running on the full framework and still supporting WebForms, for example.

Calling everything 1.0 also clears up the versioning mess of all the libraries that ran on top of ASP.NET: MVC 5, Web API 2, SignalR, Web Pages 2. Now they'll all be part of the Core family, will all go back to 1.0, and will evolve together with the Core family.

Why I don’t like it that much

But naming and versioning are hard, and this naming has its faults too: you can still run ASP.NET Core 1.0 on top of the "full" .NET Framework 4.6, and the same goes for EF Core 1.0. Will this lead to some confusion? I'm pretty sure it will. Also, if you search on Google for ASP.NET MVC 1.0, you'd have to make sure the v1.0 you are reading about is the "Core" one and not the old version of the "full" ASP.NET MVC.

Personally I'd have gone even further, and would have called it something completely different: Foo 1.0.

But this too would have had its pros and cons:

  • the main point in favour is that we'd finally get rid of the legacy of "Active Server Pages" and lose the bad connotation that ASP.NET WebForms has in other communities. Also, any name would be better and more appealing than "ASP.NET Core 1.0 MVC", as this is getting very close to the long names we had from Microsoft in the past.
  • the disadvantage of a new name is that they'd lose all the ASP branding that has been built over 20 years.

How all the new parts stack up after the name change

Let's try to clear things up a bit. At the bottom level we'll have:

  • the "full" .NET Framework 4.6 which provides base class library and execution runtime for Windows;
  • .NET Core v1, which provides the base class library and many of the other classes. From RC2 it also provides the execution runtime and all related tools (packaging, build, etc.), everything that was previously in DNX. This runs on all OSes.

Then as base web framework level:

  • ASP.NET 4.6, runs on top of "full" .NET 4.6
  • ASP.NET Core v1, runs on top of .NET Core v1 and on top of the "full" .NET 4.6

Then at higher web libraries level:

  • ASP.NET MVC 5, WebForms, and so on run on top of ASP.NET 4.6
  • ASP.NET Core v1 MVC, which runs on top of ASP.NET Core v1 (and in RC2 loses the execution runtime and CLI part of it)


  • EF6 runs on top of "full" .NET 4.6
  • EF Core runs on top of .NET Core v1 and on top of the "full" .NET 4.6

Read more

Many other members of the .NET community wrote about their views on this change. Here are some of the posts I found around the net.

What do you think? Like, dislike, love, hate? Let me know in the comments

Automatically applying styles to a Word document with search and replace

Word as an end-user tool is a very strange topic for me to blog about, but I just discovered a tip that would have saved me countless hours. So I thought I'd share it.

At the moment I'm writing a book (yeah, another one): for my personal convenience I write it in Markdown, so that I can easily push it to GitHub and work on it from different devices, even when travelling, via tablet.

I've synced my private repository to GitBook so that I can easily read it online or export it to PDF or Word, but unfortunately I cannot rely on these features to send the chapters to my publisher. In fact, book publishers have very strict rules when it comes to styles in Word documents. For example, if I want a bullet list, I cannot just click the bullet list button in the toolbar; I have to apply a "bulletlist" style. The same goes for all the other standard styles.

For most of the styles it's not a big deal: I just select the lines I need to re-style, and in 15-20 minutes a 20-page chapter is formatted.

The problem arises when formatting "inline code": in Markdown, inline code is delimited with back-ticks (`), so each time I need to show something as inline code I have to remove the leading and trailing ticks and then apply the "inlinecode" Word style. This process alone, in a typical chapter, takes at least a few hours. After a few chapters and hours of frustration I asked my girlfriend for help: working in language translation, she uses Word as her main working tool all day, and she had a solution for this problem. I'm sharing it in case other fellow technical writers need it.

First open the Advanced Find dialog and switch to the Replace tab:

  • In Find you put a kind of simplified regular expression: (`)(*)(`). This means: find any string which starts with a back-tick and ends with a back-tick.
  • In Replace put \2. This means: replace the match with the content of the second "match group", i.e. the text between the ticks. Also specify the style you want applied, in my case "InlineCode".
  • And remember to check the Use wildcards box, otherwise this won't work.

Let's see it in action on some lines from my upcoming book, starting with the Markdown file:


Once pasted into Word (with the basic styling applied) it becomes the following (notice all the text with back-ticks):


I then apply the magic find&replace:


And voila! In a few seconds, 20 pages of Word document are correctly updated: the ticks around inline code are removed and the correct style is applied.


It's not my typical content, but I hope you've learnt something you didn't know.

To see all you can do with wildcards: How to Use Wildcards When Searching in Word 2013

The next step in automating this process would be writing some code that applies the formatting automatically in one go.

Web European Conference registrations open 1st July 12:00 CET


The moment has finally come: tomorrow at midday, Central European Time, it will be possible to start registering for the 2nd Web European Conference.

In the previous edition of the conference we sold out all the tickets available at the time (170) in the first few hours after opening. This year we'll have 400 seats, but just to be sure, remember to set an alarm and get to the registration page on time, so as not to lose the chance to take part in the conference.

Register for the Web European Conference

Speakers and sessions

Tomorrow we'll also close the Call for Presenters, and we'll ask for your opinion on which sessions to include in the conference: you can already see all the proposals on our GitHub repository, and from tomorrow you'll be able to vote for your favourite sessions.

But we already have our two top speakers: Scott Hanselman and Dino Esposito.




A final word on our sponsors and partners, without whom this conference would not be possible.



CodeGarden 2015: recap of day 1

Here I am again, for the third time, at Umbraco CodeGarden. For those who do not know it, it's the yearly Umbraco developer conference, this year celebrating its 10th anniversary.

Before going to sleep after a long day I just wanted to post my recap of the day.

The Keynote

CodeGarden Keynote

Some numbers on the "size" of the community:

  • almost 200k active developers on the community site
  • almost 300k active public installations of Umbraco
  • over 200k installations of Umbraco v7 in the last year

In addition to giving all these figures, Niels also highlighted some popular packages contributed by the community (Vorto for 1-to-1 translations, NuPicker and Nested Content for an enhanced editor experience, LePainter, a visual grid editor, and BookShelf, which provides inline contextual help in the backoffice).

Other announcements included the features coming with v7.3 (automatic load balancing, a new API library as a first step towards getting rid of the legacy API, and authentication based on ASP.NET Identity, which enables Twitter, Google and Active Directory logins as well as two-factor authentication via Google) and future features currently being experimented with, like the new cache layer, a new content type editor and a full-fledged REST API based on the HAL standard.

Roadmap panel

Immediately after the keynote, 5 members of the core dev team answered questions on specific pain-points that users would like addressed in future (v8) releases, and also unveiled HQ's priorities:

  • Improving the UX
  • Fresh start on the code (getting rid of the decade-old original legacy API)
  • Bringing many features of (the SaaS platform) to on-premises installations (like migrations between environments, synchronization of content and so on)
  • Segmentation, segments-based content variations and personalization

Contributing to the core

After the usual organic lunch, the afternoon started with some Git tips to better contribute to the core of Umbraco and make maintainers' life easier:

  • First, squash all commits into one, making sure no typos or "missed file" kinds of commits are sent in the pull request. The suggestion was to use the git rebase --interactive command.
  • Then make sure your pull request is based on a pretty recent version of the repository, using the following process:
    • Track upstream git remote add upstream ...
    • Fetch upstream git fetch upstream
    • Rebase your commit on top of the latest version of the repo git rebase upstream/dev-7
  • And finally, resolve any conflicts that might arise before submitting the pull request

Make Editors Happy

As last year, one of the main tenets of the conference is reminding us developers that content editors deserve love too, and with Umbraco 7 it's very easy to craft data editors tailored to custom editing expectations and flows. But even without going down the path of customization with AngularJS, many things can be done with the core editors and a few selected packages: group properties in tabs, remove from the RTE everything that editors do not need, provide contextual help (maybe consider the uEditorNotes package) and finally use NuPicker and Nested Content to provide a better experience when choosing nodes from the tree and when creating lists of items.

How to sell Umbraco

The day ended with an amazing talk by Theo Paraskevopoulos with tips on how to sell Umbraco as a platform when doing projects. Unfortunately the slides are not published yet, but I will update the post as soon as they are.

Some impressive facts I didn't know: the NFL uses Umbraco for one of their sub-sites, and Umbraco, with 0.7%, is the 5th platform in terms of market share in the CMS industry, after WordPress, Drupal, Joomla and DotNetNuke (1%); all the remaining CMSes together account for 1.4%.


The evening ended with a protest march through the streets of Copenhagen, which unfortunately I had to miss due to a broken toe, caused by an injury in a recent triathlon race.

The first day was not super technical, being more soft-skill and UX oriented, but it was very useful anyway, especially for me, since my reason for being here is to get a feeling of where Umbraco is going, to see if it can be used as the CMS platform at my workplace.

Tomorrow's agenda looks more tech-focused.

As soon as slides and videos are published, I'll update the post.

My new free eBook is out: OWIN Succinctly by Syncfusion

I'm happy to announce that my latest book, OWIN Succinctly, has just been released by Syncfusion, within their "Succinctly" series of eBooks.

I've written this book together with my friend and co-organizer of the 2nd Web European Conference, Ugo Lattanzi, with whom I also gave a talk in Paris last May, also about OWIN.

OWIN is a big inspiration for the new ASP.NET 5 stack, so we decided to write this book both to show how you can use this "philosophy" with the current version of ASP.NET, and to give you an idea of what it could be like in the future with ASP.NET 5.

The book covers all aspects of OWIN, starting with a description of the OWIN specification, then moving on to how Katana, Microsoft's implementation of the spec, works. Later we also show how to use Katana with various web frameworks, how to use authentication, and finally how to write custom middleware.

The table of contents is:

  • OWIN
  • Katana
  • Using Katana with Other Web Frameworks
  • Building Custom Middleware
  • Authentication with Katana
  • Appendix

OWIN and the new ASP.NET will be big actors at the 2nd Web European Conference in Milan on the 26th of September, so if you want to know more about these technologies, consider participating in the conference.

A big "thank you" goes to Syncfusion, for giving us the opportunity to reach their audience, and to our technical reviewer Robert Muehsig, whose comments helped make the book even better.

If you have comments or feedback on the book, do not hesitate to write a comment on this post or contact me on Twitter @simonech.