
From iPhone to Windows Phone and back: why?

I just bought myself a new iPhone 7. This was a long overdue change: my Microsoft Lumia had been acting up for a few months, and with no clear future path for Windows Phone, I didn’t want to buy something that would be useless in a few months.

Back to #iphone after seeing the #smartphone #dream of #Microsoft rise and fall #iphone7


A bit of history

But let’s go back in time and see my phones.

I bought my first smartphone, an iPhone 3G, in June 2008, as soon as the first version of the iPhone that worked in Europe was announced. I went on using iPhones for a few years, upgrading to an iPhone 4 in 2010.

Then Microsoft announced its smartphone operating system, Windows Phone 7, and I almost immediately bought one to test how it worked, with the idea of building some apps. After some time I started using WP as my main phone, and never looked back for a few years, passing from the Samsung Omnia to a Lumia 800 to a Lumia 930 in July 2014. I loved the UI, with dynamic tiles that displayed information at a glance. I also developed a newsreader for Windows Phone 7.

Then Windows Phone 8 came and changed the way apps had to be built. And then Windows Phone 10, with yet another change in how apps had to be built.

While this is not a problem per se for end users, it is a problem for developers, who had to rebuild their apps to be compatible with the new versions of the phones. Sometimes the changes were small, sometimes more fundamental.

This, over time, alienated developers, which in the end meant fewer applications available to end users, which caused fewer users to buy WP phones, which in turn didn’t incentivize developers to spend time updating their apps, and so on. Only the biggest apps and companies, those that couldn’t afford to lose that small 5%-10% of the market (in the EU; in the USA I think it never grew beyond 2-3%), made apps for Windows Phone. Which IMHO was enough for most users: email, calendar, Facebook, Twitter, Instagram, WhatsApp, Snapchat, weather, maps, banking and the occasional game and fitness app.

Why move back to the iPhone? The fitness niche and no future

So why did I move away from Windows Phone to an iPhone? Because I recently started being more involved in sport: I started training for triathlon. And none of the big fitness companies that produce sport watches or devices have apps for Windows Phone. This is because Windows Phone 10 lacks support for connecting to these modern BLE devices (it doesn’t support “code-less” BLE pairing and cannot act as a BLE client to devices).

So I couldn’t sync my sport watch with my phone. The same goes for cyclocomputers and indoor trainers. And since these big players do not support WP, none of the popular fitness apps like Strava, TrainingPeaks and others support it either.

Another reason is my growing interest in connected devices, most of which come from US startups. And given the aforementioned low market share, they obviously don’t spend time making a Windows Phone app.

Yet another, more fundamental, reason is the (non-existent) roadmap for the future of Windows Phone. Microsoft sold its feature phone division at the beginning of 2016, and it has been hinted that they will not make new Lumia phones and will even stop selling what they have in stock. They might produce a Surface Phone, or something else entirely, but this level of uncertainty doesn’t help keep the few users they still have.

Why not an Android?

I have owned an Android phone from Sony for two years, and recently I also used a Galaxy Express for a few weeks when I was in the USA and both my other phones were dead (the Lumia with an unresponsive touch screen and the Sony with a broken glass).

In general, Android feels less polished and a bit too technical from a user’s point of view. There are almost as many apps as for iOS, although not many fitness apps have the same level of quality they have on iOS.

But the thing that annoys me the most is operating system updates. Since every vendor has its own flavour of Android, you don’t get updates as soon as Google releases a new version. And a long time might pass before you do (or it might never happen, as is the case with my Xperia from 2014, which is still on Android 4.4). I know you can install custom ROMs and so on, but I’d rather spend my free time on OSS development and swimming/biking/running, not fiddling with technology that should “just” work.

Do you also think Microsoft’s smartphone adventure is over? Let me know in the comments below.

Umbraco and Me

If I had to summarize my main field of expertise in one word, I would say CMS: since I started working in 1996 I have always built public websites based on CMSes.

When working at Esperia, around the year 2000, I developed our internal CMS, which was used to power lots of very popular, high-traffic websites (at least for the time), like those of all the top Italian soccer teams and various winter sports and soccer portals.

At the time, around 2003-2006, I was also using DotNetNuke to develop some sites for small businesses. It was ok for simple sites, but customizing it and making a site look like the designer envisioned was almost impossible.

Then in 2006 I stopped working professionally with CMSes: I spent almost a year building email apps in New Zealand and then another two and a half years doing “IT consultancy” at Avanade. But on the side I was still working on a smaller CMS, the once famous Subtext blogging engine, which still powers this blog.

Then, a few months after my ASP.NET MVC v1 book was released, I received an email from Niels Hartvig (CEO of Umbraco) asking me if I could go to Copenhagen and give him and the core team a quick start on ASP.NET MVC, because they wanted to rebuild Umbraco using it. At that point I had never used Umbraco, having only briefly evaluated it some years before.

Obviously I said yes, went there, delivered the course and immediately felt like I had known these guys forever. I was also fascinated by how they worked with the community to give the product an edge.


I immediately became engaged with the community, and gave two talks at CodeGarden 10 about ASP.NET MVC.

Coincidentally, as soon as I started my new job after moving to Belgium, I was surprised to find that Umbraco powered one of the main public sites of the organization I had joined. So I also started working with it professionally.

Unfortunately I couldn’t always work with it, as my job requires me to wear many different hats across various projects, but I nevertheless stayed involved in the community as much as I could: I attended various conferences of the Belgian Umbraco User Group (or BUUG) and went to two more CodeGardens, in 2014 and 2015.

Then the Big Bang happened… and a new project came along. For the next few years, Umbraco will be the main product I’ll be working with, as we’ll be rebuilding our entire online presence using this amazing CMS, and most of what we customize will be given back to the community, both as packages and as PRs to the Core.

At first, coming from my background as a “purist” .NET developer, I didn’t much like the mixed approach that required developers to configure the system through the backoffice, as it prevented proper code versioning and CI. But with the help of the great people in the community I solved most of the issues. And now, with Umbraco 7.4, almost all of them are solved, thanks to strongly typed models and some tools that help with versioning the things that are still configured in the backoffice.

Now that I’ll be working full-time with Umbraco, expect to see more from me in the Umbraco community and at conferences. And in case you missed it, I also just gave a talk about ASP.NET Core at CodeGarden 16 (slides and demo are available).

And hopefully soon I’ll be moving my blog from this totally dead Subtext to Articulate on Umbraco.

#h5yr

Slides and demo for my ASP.NET Core talk at Umbraco CodeGarden 2016

Talk notes

Yesterday I had the pleasure to introduce ASP.NET Core to a very crowded and interested room at Umbraco CodeGarden.

I really liked the conference and its amazing OSS community, the best ever, and I got even more hooked on Umbraco, if that’s possible.

Now I just want to list the links and resources I mentioned during my talk.

If you attended my talk, I’d love if you could comment or tweet me (@simonech) and tell me what you thought of it, both about the topic itself and about my presentation.

Apparently the talks were recorded, so I’ll post a link once the videos are online.

How to debug .NET Core RC2 app with Visual Studio Code on Windows

So, you installed .NET Core RC2, you followed the getting started tutorial and you got your “Hello World!” printed on your command prompt just by using the CLI.

Then you went the next step and you tried to use Visual Studio Code and the C# extension to edit the application outside of Visual Studio.

And finally you wanted to debug and set a breakpoint inside the application, but you ran into problems and nothing worked. Here is how to make it work.

Specify the launch configuration

Visual Studio Code needs to know how to launch your application, and this is specified in a launch.json file inside the .vscode folder. From the debug window, click the “gear” icon and Code will create it for you: just choose the right environment, “.NET Core”.

Then you must specify the path to your executable in the program property. In the standard hwapp sample app, replace

"program": "${workspaceRoot}/bin/Debug/<target-framework>/<project-name.dll>",

with

"program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/hwapp.dll",

There is much more you can specify in the launch.json file. To see all the options have a look at the official doc: Debugging in Visual Studio Code.

Specify the task runner

If you try to debug now you’ll have another warning: “No task runner configured”.

This is because for launching, VS Code has to build the project, and this is done via a task.

But no worries, just click the “Configure Task Runner” button in the info box, choose which task runner you want to use, in this case “.NET Core”, and the tasks.json file will be created for you.

More info on task runners in VS Code can be found in the official documentation: Tasks in Visual Studio Code.

Running and debugging

Now you can click the “Start Debugging” button or press F5, and the application runs. Cool…

Now you set a breakpoint and execution stops where you set it, doesn’t it? Well… if you are on Mac or Linux it does. But it doesn’t stop if you are on Windows, and the Debug Console says something like:

WARNING: Could not load symbols for 'hwapp.dll'.
'...\hwapp\bin\Debug\netcoreapp1.0\hwapp.pdb' is a Windows PDB.
These are not supported by the cross-platform .NET Core debugger.

Introducing Portable PDBs

To be able to debug cross-platform, .NET Core now has a “portable PDB” format, and the newly introduced .NET Core debugger for Visual Studio Code only supports this format. Unfortunately, by default on Windows the .NET Core build generates standard “Windows PDBs”, which are not supported. But the fix is easy: you just have to tell the compiler to generate portable PDBs.

This is done by specifying the debugType to be portable.

{
  "buildOptions": {
    "debugType": "portable"
  },
  ...
}

And voila! Breakpoints are hit!


The .NET Core RC2 stack has been released, and a new platform download site

Finally, after some months of delay due to the replatforming of DNX on top of the new .NET Core CLI, at the beginning of the week all things RC2 were released.

There is already a ton of documentation on how to get started, both on the ASP.NET Core Documentation and .NET Core Documentation sites, but in this post I just want to collect all the announcements.

Announcements

The three main pieces of the puzzle, .NET Core, ASP.NET Core and Entity Framework Core, are all at RC2.

Then there is the Tooling, preview 1: Announcing Web Tooling for ASP.NET Core RC2.

It’s important to understand why one thing is RC2 and the other is preview.

The libraries and runtime are RC2 and will be RTM at the end of June: they are a real RC2, as they have been in the works for more than two years.

The tooling, that is, the CLI and the support inside Visual Studio and Visual Studio Code, is still a preview: work on it, especially the web tooling part, only started at the end of last year, and it will become RTM only with the next version of Visual Studio, “15”.

Changes

A lot changed between RC1 and RC2, but don’t worry too much: the changes are mainly in the hosting and runtime parts of apps. There are no major changes in the common APIs… well, maybe some renaming and moving of namespaces.

Here are links to what changed in .NET Core and ASP.NET Core between RC1/DNX and RC2:

New website

But there is more. All things .NET can now be downloaded from the, IMHO, super-cool new URL:

http://dot.net

From there you can download the standard framework, .NET Core, and mobile development tools for Xamarin.

How to access Council of EU data on votes on legislation using SPARQL and AngularJS

One of the areas I've been focusing on lately is the so-called "Semantic Web", in particular Open Data as a way to make governments more transparent and provide data to citizens. From a technical point of view, these data are redistributed using the RDF/LD format.

I’m particularly excited to have worked on the release of what I think is a very important dataset, one that helps in understanding how decisions are taken in the Council of the European Union.

The Council of the European Union has published how member states have voted since 2010

In April 2015, the Council of the European Union released, as open data, how member states vote on legislative acts. In other words, when the Council votes to adopt a legislative act (i.e. a regulation or a directive), the votes of each country are stored and made publicly visible. This means that you can see how your country voted when a given law was adopted, or you can get more aggregate data on trends and voting patterns.

Recently, the Council has also released two additional open datasets containing the metadata of all Council documents and metadata on requests for Council documents.

DiploHack, Open Data Hackathon

Tomorrow and the day after, 29 and 30 April, the Council, together with the Dutch Presidency, is also organising DiploHack, an open data hackathon in Brussels. The goal of the hackathon is to make use of the Council’s open datasets, linking them with all the other datasets available from other EU institutions, to build something useful for citizens. You can still register for the hackathon.

This post will show you how to access the votes using SPARQL, which is a query language for data published in RDF format, and how to access those data using AngularJS.

A brief introduction to RDF/LD and SPARQL

In the context of the Semantic Web, entities and relations between entities are represented as triples, which are serialized in a format called “Turtle”, in RDF/XML (which is what is usually referred to as RDF), or in many other formats.

You can imagine a “triple” as a database row with 3 columns: subject, predicate, object, each represented by a URI. This is a very flexible format that can be used to represent anything. For example, you can say that the author of this blog is myself (uniquely identified by my GitHub account URL and with the name “Simone Chiaretta”) and that the topic of this blog is Web Development. The corresponding serialization in Turtle (using the simple notation) of these three statements is:

<http://codeclimber.net.nz/>
  <http://purl.org/dc/elements/1.1/creator>
  <https://github.com/simonech> .

<http://codeclimber.net.nz/>
  <http://purl.org/dc/elements/1.1/subject>
  "Web Development" .

<https://github.com/simonech>
  <http://xmlns.com/foaf/0.1/name>
  "Simone Chiaretta" .

Notice the use of URIs to represent entities, which gives them a unique identifier. In this case http://purl.org/dc/elements refers to a URI defined by the Dublin Core Metadata Terms. Another possible way to represent the topic would have been to refer to another URI coming from a managed taxonomy. This way it would have been possible to make “links” with other datasets.

But how do we query these data? With SPARQL.

SPARQL uses a syntax very similar to Turtle, plus SQL-like keywords such as SELECT and WHERE. Using the example above, one could query for all publications written by Simone Chiaretta. The syntax would be:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/elements/1.1/creator> <https://github.com/simonech> . 
  }

Basically, the query is built by putting a variable in the position of the element you want as a result, and specifying the other two elements of the triple: a kind of query by example. The other two elements of the triple can also be variables, in case you want to “join” different triples. For example, if we want to search for all publications written by Simone Chiaretta, identified by his name instead of his URI, the query becomes:

  SELECT ?publication
  WHERE {
    ?publication <http://purl.org/dc/elements/1.1/creator> ?author . 
    ?author <http://xmlns.com/foaf/0.1/name> "Simone Chiaretta" . 
  }

With this basic knowledge, we can now look at how to access the data released by the Council of the European Union about votes on legislative acts.

How the data is modelled and how to query it

The released data includes information about each act (title, act number, various document numbers, policy area, etc…), the session in which it was voted (its date, the Council configuration, the number of the Council session) and how each country voted.

Instead of being modelled as a hierarchical graph, the data is modelled as a Data Cube, to make it easier to analyze and to get aggregated data from: an “observation” includes all the information in a flat, denormalized structure. So a “line” includes how a country voted on a given act, followed by all the information about the act and session, which is replicated for every country that voted on the act. This approach makes it less space-efficient (all act and Council information is replicated every time) but easier and faster to query, as there is no need to “join” different entities in order to compute aggregated results.

Simple queries

For example, if you want to know all the acts about fisheries, you write:

  SELECT DISTINCT ?act
  where {
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/policyarea>
    <http://data.consilium.europa.eu/data/public_voting/consilium/policyarea/fisheries> .
    
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/act>
    ?act .
  }

The query basically asks: give me all the “observations” whose policy area is fisheries, and then, for these observations, give me their “act”. 

Notice the DISTINCT clause: this is important because, given the “data cube” approach, every act is replicated 28 times (there are usually 28 countries voting), so we need to take it only once.

The result will be 27 acts, each one identified by its URI. You can also execute the query directly in the interactive query tool online, and you will get the results as HTML.

all-acts-on-fisheries

If you also want the title of the act, you need to ask for the “definition” of that URI, which has been mapped using the predicate http://www.w3.org/2004/02/skos/core#definition. So the query becomes:

  SELECT DISTINCT ?act ?title
  where {
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/policyarea>
    <http://data.consilium.europa.eu/data/public_voting/consilium/policyarea/fisheries> .
    
    ?observation
    <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/act>
    ?act .
  
    ?act
    <http://www.w3.org/2004/02/skos/core#definition>
    ?title .
  }

The result is as shown in the following screenshot (or can be seen online directly).

all-acts-on-fisheries-with-title

More complex aggregation queries

Now that you have the hang of it, let’s do some more interesting aggregated queries. Given the modelling, they are actually conceptually more complex, but easier to implement.

For example, say you want to know how many times each country has voted against the adoption of an act:

  PREFIX eucodim: <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/>
  PREFIX eucoprop: <http://data.consilium.europa.eu/data/public_voting/qb/measureproperty/>
  PREFIX eucovote: <http://data.consilium.europa.eu/data/public_voting/consilium/vote/>
  
  SELECT COUNT(?act) as ?count ?country
  from <http://data.consilium.europa.eu/id/dataset/votingresults>
  where {
    ?observation eucodim:country ?country .
    ?observation eucoprop:vote eucovote:votedagainst .
    ?observation eucodim:act ?act .
  }
  ORDER BY DESC(?count)

To keep the query more concise and readable, I used another SPARQL keyword, PREFIX, to avoid writing the whole URI every time. Here are the countries that voted against the adoption of acts, sorted by who voted no the most (using the ORDER BY DESC keyword).

who-voted-no

What if you want to see how one country voted on all the acts? It’s enough to swap country and vote, and you “pivot” the view of the data, aggregating by vote instead of by country:

  PREFIX eucodim: <http://data.consilium.europa.eu/data/public_voting/qb/dimensionproperty/>
  PREFIX eucoprop: <http://data.consilium.europa.eu/data/public_voting/qb/measureproperty/>
  PREFIX eucocountries: <http://data.consilium.europa.eu/data/public_voting/consilium/country/>
  
  SELECT COUNT(?act) as ?count ?vote
  from <http://data.consilium.europa.eu/id/dataset/votingresults>
  where {
    ?observation eucodim:country eucocountries:uk .
    ?observation eucoprop:vote ?vote .
    ?observation eucodim:act ?act .
  }
  ORDER BY DESC(?count)

And you see that the country in the example voted 554 times in favor of adoption, 45 times against, abstained 42 times and didn’t participate in the vote 39 times (this happens because countries outside the Eurozone do not vote on euro-related matters).

how-country-voted

The Council’s GitHub repository contains more information on the model itself, as well as a list of other SPARQL queries.

How to use all this information from code

Now that you know how to query the dataset via the interactive query tool, you probably want to do something with the data in code.

There are a few JavaScript libraries that make it easier to interact with SPARQL endpoints and can also navigate graphs, like RDFSTORE-JS or rdflib.js. Or dotNetRDF, if you want to do some processing server-side in .NET.

But if you just want to query a SPARQL endpoint, you can simply make a standard HTTP GET request, passing the SPARQL query as a parameter. In return you get the results in a variety of formats, including JSON. The format of this JSON is a W3C standard (like all the other formats described on the page): SPARQL 1.1 Query Results JSON Format.
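As a minimal sketch of the idea (plain JavaScript, no libraries; the endpoint URL and the query are the real ones from this post, but the helper function name is mine), querying the endpoint boils down to building a GET URL with the query and the desired result format URL-encoded:

```javascript
// Build the GET URL for a SPARQL endpoint: the query and the desired
// result format are passed as URL-encoded querystring parameters.
function buildSparqlUrl(endpoint, query) {
  return endpoint +
    '?query=' + encodeURIComponent(query) +
    '&format=' + encodeURIComponent('application/sparql-results+json');
}

var endpoint = 'http://data.consilium.europa.eu/sparql';
var query = 'SELECT ?publication WHERE { ' +
  '?publication <http://purl.org/dc/elements/1.1/creator> ' +
  '<https://github.com/simonech> . }';

var url = buildSparqlUrl(endpoint, query);
// Any HTTP client (XMLHttpRequest, $http, curl, ...) can then GET this
// URL and parse the JSON body of the response.
```

Note the encodeURIComponent call: a SPARQL query is full of characters (?, <, >, spaces) that must be escaped before they can travel in a querystring.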

The last query, with JSON output, returns the following.

json-result

Basically, this JSON format has a head, which lists the variables used in the query, followed by the results, which contain a small set of metadata about the query (was it distinct, was it sorted) and then all the result rows, inside a bindings array. For each bound variable, its type and value (a URI or a literal) are specified.
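To make the structure concrete, here is a hand-written, simplified sketch of such a result object and how to pull the plain values out of the bindings array. The shape follows the W3C format; the numbers and the second country URI are made up for illustration:

```javascript
// A simplified SPARQL JSON result: "head" lists the variables,
// "results.bindings" holds one object per result row, with the
// type and value of each bound variable.
var response = {
  head: { vars: ['count', 'country'] },
  results: {
    bindings: [
      { count:   { type: 'literal', value: '45' },
        country: { type: 'uri',
                   value: 'http://data.consilium.europa.eu/data/public_voting/consilium/country/uk' } },
      { count:   { type: 'literal', value: '16' }, // illustrative values,
        country: { type: 'uri',                    // country code assumed
                   value: 'http://data.consilium.europa.eu/data/public_voting/consilium/country/fr' } }
    ]
  }
};

// Turn the bindings into plain objects: values always arrive as
// strings, so numeric literals need an explicit conversion.
var rows = response.results.bindings.map(function (b) {
  return { count: parseInt(b.count.value, 10), country: b.country.value };
});
```

After this, rows is an ordinary array of plain objects, ready to be displayed or aggregated further.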

Sample request with Angular

Using AngularJS, you can send SPARQL queries with the standard $http.get method. The following sample is part of the open source demo we published in the Council’s GitHub repository. The demo allows searching for acts by specifying some of their properties. It is available online at: http://eucouncil.github.io/CouncilVotesOnActsDatasetSample/

First I built an AngularJS factory to encapsulate the query to the SPARQL endpoint (http://data.consilium.europa.eu/sparql) and the manipulation of the results.

angular.module('opendataApp', []).factory('sparqlQuery', function($http) {
  return function(query) {
    var baseAPI = "http://data.consilium.europa.eu/sparql?";
    // The query must be URL-encoded before being put in the querystring
    var requestUrl = baseAPI + "query=" + encodeURIComponent(query) +
      "&format=application%2Fsparql-results%2Bjson";

    return $http.get(requestUrl)
      .then(function successCallback(response) {
        var acts = [];
        var bindings = response.data.results.bindings;
        for (var i = 0; i < bindings.length; i++) {
          var binding = bindings[i];
          // Does some processing to put together all properties of an act
        }
        return acts;
      }, function errorCallback(response) {
        return []; // on error, resolve with an empty result set
      });
  };
})

Then, with this in place and using another service to concatenate the SPARQL string, I can send the query to the server, get back the results and display them in the page.

  vm.performSearch = function() {
    vm.searching=true;
    vm.noresults=false;
    vm.acts=[];
    vm.sparqlQuery = sparqlGenerator(vm.search); //concatenates string
    sparqlQuery(vm.sparqlQuery).then(function (data){
      vm.acts = data;
      vm.searching=false;
      if(vm.acts.length==0)
       vm.noresults=true;
    });
  };

You can play around with the demo online at: http://eucouncil.github.io/CouncilVotesOnActsDatasetSample/

So, come to the hackathon; and even if you cannot, play with the data and do some nice analysis with it. If you do, please post your links in the comments section.

Voting Simulator Application

On a slightly related topic, if you want to see how agreements are reached and how the actual voting happens, you can play around with the Council Voting Calculator, available on the website, but also as an iOS app and an Android app (both in phone and tablet versions). Following is a screenshot from the iPad version of the app.

Disclaimer: The views expressed are solely those of the writer and may not be regarded as stating an official position of the Council of the EU


Introduction to ASP.NET Core 1.0 video

Actually still called “Introduction to ASP.NET 5” (I recorded it before the name change from ASP.NET 5 to ASP.NET Core), a few days ago Microsoft TechRewards published the video I produced for Syncfusion about the new open source web framework by Microsoft.

In the video I go through a quick introduction, followed by the installation procedures, and then show how to create command line tools and simple websites with ASP.NET Core v1.0, using both Visual Studio Code and Visual Studio 2015.

You can read more about the content of my video in the post Video Review: Introduction to ASP.NET 5 with Simone Chiaretta and, of course, watch the video (and take the quiz at the end).


Hope you like it, and let me know what you think about it in the comments.

Two Razor view errors you might be making too

Lately I went back to developing websites with ASP.NET MVC (after quite some time spent on SPAs and Web APIs), and I struggled for a while with some strange Razor view behaviours I couldn’t understand. Here are two of them. I hope this post saves you some time if you run into the same problems.

Using Generics in Razor views

Generics’ syntax has a peculiarity that can interfere with Razor when written inline inside HTML tags: the use of angle brackets. This confuses the Razor parser so much that it thinks there is a missing closing tag.

For example, when trying to write @Model.GetPropertyValue<DateTime>("date") you’ll get an error, and Visual Studio will show a squiggle with the following alert.

vs-alert

Basically, it thinks <DateTime> is an HTML tag and wants you to close it.

htmlcompletion

The solution is pretty simple: just wrap everything in parentheses, like @(Model.GetPropertyValue<DateTime>("date"))

Order of execution of Body and Layout views

I wanted to set the current UI culture of my pages on every request, so I wrote a partial view and included it at the top of my layout view: all text in the layout was correctly translated, while the text coming from the body was not.

After some digging I realized that the execution of a Razor view starts with the view itself (which renders the body) and then continues with the layout. So my UI culture was being set after the body had already been rendered, and I had to move the partial view that sets the culture to the top of the “main” view.

If you have many views, just put all the initialization code inside a view called _ViewStart.cshtml. This way the code is executed before the body is rendered, for every view, and you don’t have to add it to each view manually.

That’s all for now.

ASP.NET 5 is now ASP.NET Core 1.0

A few months before the RTM of the new version of ASP.NET, Microsoft changed the name: what was originally referred to as ASP.NET vNext, and later as ASP.NET 5, is now called ASP.NET Core 1.0.

All the related libraries change names too:

  • .NET Core 5 becomes .NET Core 1.0
  • ASP.NET MVC 6 becomes ASP.NET Core MVC 1.0
  • Entity Framework 7 becomes Entity Framework Core 1.0

I personally think this is a great move, as the old names were causing a lot of confusion for people who only looked at the whole thing from time to time and weren’t following its every evolution.

Why this is a good move

Calling the next versions v5, v6 and v7 (for ASP.NET, MVC and EF respectively) would have led people to think that they were simply the next versions of the various libraries and frameworks. But they were not:

  • ASP.NET 5 would not have been a replacement for ASP.NET 4.6, because it lacked many of its features (WebForms above all)
  • ASP.NET MVC 6 was not a replacement for MVC 5, because you couldn’t run it on top of ASP.NET 4.6

So it’s a good move to reset the version number to 1.0 and start a new product from scratch, because this is indeed what ASP.NET 5 was: a completely new product, written from scratch, without backward compatibility, and running on a different runtime.

Calling it 1.0 also opens the way to a future ASP.NET 5 running on the full framework and still supporting WebForms, for example.

Calling everything 1.0 also clears up the versioning mess of all the libraries that ran on top of ASP.NET: MVC 5, Web API 2, SignalR, Web Pages 2. Now they’ll all be part of the Core family, will all go back to 1.0, and will evolve together.

Why I don’t like it that much

But naming and versioning are hard, and this naming has its faults too: you can still run ASP.NET Core 1.0 on top of the “full” .NET Framework 4.6, and the same goes for EF Core 1.0. Will this lead to some confusion? I’m pretty sure it will. Also, if you search Google for ASP.NET MVC 1.0, you’ll have to make sure the v1.0 you are reading about is the “Core” one and not the old version of the “full” ASP.NET MVC.

Personally, I’d have gone even further and called it something completely different: Foo 1.0.

But this too would have had its pros and cons:

  • the main point in favour is that we’d finally be getting rid of the legacy of “Active Server Pages” and losing the bad connotations that ASP.NET WebForms has in other communities. Also, almost any name would be better and more appealing than “ASP.NET Core 1.0 MVC”, which is getting very close to the long names we got from Microsoft in the past.
  • the disadvantage of a new name is that it would lose all the ASP branding that has been built up over 20 years.

How all the new parts stack up after the name change

Let’s try to clear things up a bit. At the bottom level we have:

  • the "full" .NET Framework 4.6, which provides the base class library and the execution runtime on Windows;
  • .NET Core v1, which provides the base class library and many of the other classes. From RC2 it also provides the execution runtime and all the related tooling (packaging, build, etc.), everything that was previously in DNX. This runs on all operating systems.

Then as base web framework level:

  • ASP.NET 4.6, which runs on top of the "full" .NET 4.6
  • ASP.NET Core v1, which runs on top of both .NET Core v1 and the "full" .NET 4.6

Then at higher web libraries level:

  • ASP.NET MVC 5, WebForms, and so on, which run on top of ASP.NET 4.6
  • ASP.NET Core v1 MVC, which runs on top of ASP.NET Core v1 (which in RC2 loses its execution runtime and CLI parts, as they move down into .NET Core)

As ORM:

  • EF6, which runs on top of the "full" .NET 4.6
  • EF Core, which runs on top of both .NET Core v1 and the "full" .NET 4.6
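To make the dual targeting concrete, here is a sketch of what an RC-era project.json could look like for an app that runs on both .NET Core and the "full" framework. The package names and versions are illustrative of that period, not a definitive reference:

```json
{
  "dependencies": {
    "Microsoft.AspNetCore.Mvc": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      }
    },
    "net46": {}
  }
}
```

The same project compiles once per entry in "frameworks", which is exactly why ASP.NET Core v1 and EF Core can sit on top of either runtime.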

Read more

Many other members of the .NET community have written about their views on this change. Here are some of the posts I found around the net.

What do you think? Like, dislike, love, hate? Let me know in the comments.

Automatically applying styles to a Word document with search and replace

Word as an end-user tool is a very strange topic for me to blog about, but I just discovered a tip that would have saved me countless hours, so I thought I’d share it.

At the moment I’m writing a book (yeah, another one): for my own convenience I write it in Markdown, so that I can easily push it to GitHub and work on it from different devices, even from a tablet when travelling.

I’ve synced my private repository to Gitbook so that I can easily read it online or export it to PDF or Word, but unfortunately I cannot rely on these features to send the chapters to my publisher. In fact, book publishers have very strict rules when it comes to styles in Word documents. For example, if I want a bullet list, I cannot just click the bullet list button in the toolbar: I have to apply a “bulletlist” style. The same goes for all the other standard styles.

For most of the styles it’s not a big deal: I just select the lines I need to re-style, and in 15-20 minutes a 20-page chapter is formatted.

The problem arrives when formatting “inline code”: in Markdown, inline code is delimited with back-ticks (`), so each time I need to show something as inline code I have to remove the leading and trailing ticks, and then apply the “inlinecode” Word style. This process alone, in a typical chapter, takes at least a few hours. After a few chapters and hours of frustration I asked my girlfriend for help: working in language translation, she uses Word as her main tool all day, and she had a solution for this problem. So I’m sharing it in case other fellow technical writers need it.

First, open the Advanced Find dialog and switch to the Replace tab:

  • In Find, put a kind of simplified regular expression: (`)(*)(`). This means: find any string which starts with a back-tick and ends with a back-tick; each pair of parentheses defines a “match group”.
  • In Replace, put \2. This means: replace the whole match with the content of the second match group (the text between the ticks). Also specify the style you want applied, in my case “InlineCode”.
  • And remember to check the Use wildcards box, otherwise this won’t work.
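For readers more at home with standard regular expressions, the same strip-the-ticks logic can be sketched in Python (the style application itself, of course, only happens inside Word):

```python
import re

def strip_inline_code_ticks(text):
    """Word's wildcard pattern (`)(*)(`) with replacement \\2, expressed
    as a regex: keep the text between the back-ticks, drop the ticks."""
    return re.sub(r"`([^`]*)`", r"\1", text)

print(strip_inline_code_ticks("run `dotnet restore` and then `dotnet run`"))
# → run dotnet restore and then dotnet run
```

Note that `[^`]*` (anything that is not a back-tick) plays the role of Word's `*` wildcard, keeping each match confined to a single code span.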

Let’s see it in action on some lines from the markdown source of my upcoming book:

[Image: the markdown source]

Once pasted into Word (with the basic styling applied) it becomes (notice all the text with back-ticks):

[Image: the Word document, back-ticks still in place]

I then apply the magic find&replace:

[Image: the Find and Replace dialog]

And voilà! In a few seconds, 20 pages of Word document are correctly updated: the ticks around inline code are removed and the correct style is applied.


It’s not my typical content, but I hope you’ve learnt something that you didn’t know.

To see everything you can do with wildcards, read How to Use Wildcards When Searching in Word 2013.

The next step in automating this process would be writing some code that formats everything properly in one go.
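As a starting point, here is a minimal Python sketch of the core logic such a script would need: splitting a markdown line into plain-text and inline-code segments, which could then be written into the document as styled runs. The Word-writing part (for instance via a library such as python-docx) is deliberately left out; this only shows the parsing step:

```python
import re

def split_inline_code(line):
    """Split a markdown line into (text, is_code) segments.

    Back-ticks are stripped; segments alternate between plain
    text and inline code in source order.
    """
    segments = []
    # With a capture group, re.split keeps the captured code spans,
    # so odd-indexed pieces are the contents of `...` spans.
    for i, piece in enumerate(re.split(r"`([^`]*)`", line)):
        if piece:  # skip empty pieces between adjacent matches
            segments.append((piece, i % 2 == 1))
    return segments

print(split_inline_code("run `dotnet restore` then `dotnet run`"))
# → [('run ', False), ('dotnet restore', True), (' then ', False), ('dotnet run', True)]
```

Each `is_code` segment would get the “InlineCode” character style applied when writing the paragraph to Word, and the rest would stay in the default style.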