
How to unset a proxy for a specific git repository or remote

In this post I’ll show something I just discovered, which solved a problem I had once we introduced an in-house git repository: how to have many git repositories that go through a proxy, and one that connects directly without it.

Lately we moved our source code repository from a “standard” TFS repo to the git-based TFS repository introduced with TFS 2013. Besides working with github repositories, I now also had to connect to a repository hosted inside the local network and authenticate using the local domain credentials.

All went well from within Visual Studio, but since you cannot do everything from VS, I also needed to connect to the internal repository via the git command line tools. The problem is that it didn’t connect.

After a bit of troubleshooting I realized that the problem was the proxy: I’m behind a corporate firewall, so I had to configure a proxy to connect to github. Unfortunately the proxy did not recognize my connection as local, so it tried to resolve the address on the internet, and of course it failed.

Removing the proxy configuration let me connect to my local git-based TFS repository, but then I couldn’t connect to the other repositories unless I specified the proxy on each of the repositories that needed it, which was tedious since I need the proxy for all repos except one.

Looking through the git-config documentation I found the solution:

Set to the empty string to disable proxying for that remote.

This works not only when specifying a proxy for a specific remote, but also for the whole repository.

Without further ado, here are the commands for this configuration.

First you specify your global proxy configuration (the address below is a placeholder for your actual proxy server):

$ git config --global --add http.proxy "http://proxyserver:8080"

Then you move to the repository for which you want to unset the proxy and add an "empty" proxy.

$ git config --local --add http.proxy ""

And in case you need to specify an empty proxy only for a specific remote

$ git config --local --add remote.<name>.proxy ""
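Putting the pieces together, here is a minimal end-to-end sketch of the setup. It runs against throwaway locations so it does not touch your real configuration: the proxy address is a placeholder, and the GIT_CONFIG_GLOBAL override needs git 2.32 or later (drop that line to write to your actual ~/.gitconfig instead).

```shell
# Keep the global config in a scratch file for this demo (git >= 2.32).
export GIT_CONFIG_GLOBAL=$(mktemp)

# A scratch repository standing in for the internal, proxy-less repo.
repo=$(mktemp -d)
git init -q "$repo"

# 1. Global proxy used by every repository (placeholder address).
git config --global --add http.proxy "http://proxyserver:8080"

# 2. Empty-string override in the one repo that must bypass the proxy.
git -C "$repo" config --local --add http.proxy ""

# Local config wins over global: empty inside the repo, the proxy elsewhere.
git -C "$repo" config --get http.proxy
git config --get http.proxy
```

Running the last two commands shows an empty value inside the scratch repository and the global proxy everywhere else, which is exactly the behavior described in the git-config documentation.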

It took me a day to understand the cause of the problem; I hope this post will help other people in a similar situation.


Using web api client in synchronous methods and some good async/await resources

Lately I’ve been trying to add a call to a REST service, using the Web API client, to a library that is already used from inside both an ASP.NET page and a WCF service.

The problem with mixing sync and async

Of course I started with async/await code:

HttpResponseMessage response = await client.PostAsJsonAsync(GetUpdateUri(), payload);

Unfortunately both consumers were synchronous, so I had to wait for the async call to complete before going on with the execution. So I did something that proved to be wrong: I blocked on the task returned by the method containing the line above.

Deadlock: this never completes, as ASP.NET blocks the context thread waiting for the Web API call to complete, which in turn waits for that same thread to become free so it can complete.

How to make a synchronous web api call

What I should have done, instead, was change the ASP.NET page to be an async page (using Page.RegisterAsyncTask), also change the WCF service to be async, and use async/await from top to bottom, as is best practice. In my case that was too complex and would have required touching code developed by others and already working, so I simply avoided starting the “async chain”:

HttpResponseMessage response = client.PostAsJsonAsync(GetUpdateUri(), payload).Result;

Async/Await resources

On the other hand I realized I don’t know enough about the async way of coding, so I spent some time reading some good references, mostly by “async guru” Stephen Cleary.

How to roll log files at the beginning of the day using Enterprise Library logging block

It took me a while to find the (very simple) solution to my problem, so I thought it would be a good idea to write a post explaining it.

The problem

I’m logging from my application using the Enterprise Library Logging block, and I wanted to have a new file every day, so I configured the logger with the RollingFlatFileTraceListener. One of the properties you can specify during the registration is the rollInterval attribute, which configures the log to roll every minute, hour, day, week, month or year.

I wanted a new file every day, so I specified “day”: I got a new file per day, but the rolling happened at “random” times during the day. Actually it was not really random: the rolling time moved forward by a few minutes every day.

The solution

After looking around and finding nothing I looked by mistake at the docs of v5. They said:


Logging always occurs to the configured file name, and when roll occurs a new rolled file name is calculated by adding the timestamp pattern to the configured file name.

The need of rolling is calculated before performing a logging operation, so even if the thresholds are exceeded roll will not occur until a new entry is logged.

Both time and size thresholds can be configured, and when the first of them occurs both will be reset.

The elapsed time is calculated from the creation date of the logging file.

Here everything is explained: the elapsed time is one day from the creation of the file, and the new file is actually created on the first log entry after those 24 hours have passed.

I also found out that the rollInterval property is based on an enum, RollInterval, which contains another element that I had previously overlooked: Midnight. This makes the file roll at the beginning of each day, instead of every 24 hours.
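For reference, this is roughly what the listener registration looks like with the Midnight interval (a sketch: the listener name, file name and timestamp pattern are illustrative, while the type and attribute names follow the Enterprise Library configuration schema):

```xml
<listeners>
  <add name="RollingLog"
       type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
       fileName="app.log"
       rollInterval="Midnight"
       timeStampPattern="yyyy-MM-dd" />
</listeners>
```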

Don’t miss Connect(); a virtual conference about Visual Studio vNext

I just came back from the Microsoft MVP summit, where they announced (to MVPs) lots of interesting news about the future of web development with .NET: many things were already well known, like the new ASP.NET vNext and ASP.NET MVC 6, but others were pretty new even for us MVPs. Unfortunately, despite being super-excited about them, I cannot tell you anything, as everything is under NDA.

I can tell you though that this Wednesday and Thursday, there will be a virtual developer conference, Connect();, where many of the news will be revealed to the World.

Specifically, on Wednesday 12th November, Scott Guthrie, “Soma” and Scott Hanselman will give their keynotes. The following day will be a more technical day, with sessions with product team members.

Hopefully there will be recordings for people that due to the time zone differences could not attend.

How to get integrated debugging in Visual Studio with OwinHost in Owin-based Katana web applications

In this blog post I want to share with you a hidden feature of Visual Studio 2013 that enables an integrated debugging experience with OwinHost and other custom hosts.

Options when building Owin-based apps

When you build an Owin-based web application with Katana you have 3 hosting options:

  • Use the System.Web Host, build your app as Web Application and run/debug it inside IIS Express from within Visual Studio;
  • Build your own custom host, build your app as Console Application and run/debug it as custom console application;
  • Use the OwinHost host that is part of the Katana suite, and build your app as Class Library and run it by manually launching the OwinHost.exe app from within the current project folder.

How to debug an Owin-based app running within OwinHost

In the third option I left out debugging on purpose; it was not a mistake. This is because debugging an Owin app that is running in OwinHost is not as straightforward as the other 2 options:

  • First you have to make sure the project is always built into the \bin folder instead of \bin\Debug or \bin\Release as otherwise OwinHost will not work (by default it looks for classes inside the \bin folder);
  • Then you have to manually launch the OwinHost executable, pointing to the folder in which it was installed by NuGet when you downloaded it (typically ..\packages\OwinHost.(version)\tools\OwinHost.exe, and with the just-released version, ..\packages\OwinHost.3.0.0\tools\OwinHost.exe);
  • And finally, if you want to debug the application, you have to attach the debugger to the process, and launch the browser manually.

Not a complicated procedure, but a time-consuming one.

Introducing Visual Studio integration for external hosts

Visual Studio 2013 adds the possibility, in addition to the usual IIS Express and Local IIS, to launch a Web Application with an external URL or by specifying an external hosting process. The OwinHost NuGet package takes advantage of this: when you install it within a Web Application project, it also registers itself as an additional custom host (here below is the important part of the PowerShell install script in the NuGet package):

$serverProvider = $dte.GetObject("CustomWebServerProvider")
$servers = $serverProvider.GetCustomServers($project.Name)
$servers.AddWebServer('OwinHost', $exeDir, '-u {url}', 'http://localhost:12345/', '{projectdir}')

This adds the following line to the .csproj file:

<servers defaultServer="OwinHost">
  <server name="OwinHost"
   cmdArgs="-u {url}"
   workingDir="{projectdir}" />
</servers>

Concretely, what you get is a new entry in the Servers dropdown list in the Web tab of the project properties window of your web application project.

Project Properties Window, Web tab

But since this option is only available inside a Web Application project, you have to create your application using the Empty template of a Web Application project and remove all the unneeded references that come with Web Application projects, like System.Web for example.

But once the references are cleaned up (and you only have to do it once), you can just hit F5 or press the “Debug” button: OwinHost will fire up loading your Owin startup class, your browser will open at the right URL, and the debugger will already be attached to the process, just like with IIS-hosted apps.

Compatibility issues

This new feature only works with Visual Studio 2013, so if you want to open the same project in VS 2012 as well, do not install it, otherwise the project will not load (the NuGet package manager will ask you whether you want to install the server extension or not).

Announcing the 2nd Web European Conference, next Spring, in Italy

We did it in October 2012, and we are doing it again: we are organizing the 2nd Web European Conference for Spring 2015, somewhere in Italy.

The conference will be about Modern Web Development, no matter which platform: .NET, Node.js, Ruby, JavaScript.

Our Manifesto

Modern web development is moving away from all the enterprisey features that used to rule the world of development in the past years, and is going toward a more light-weight and simple approach.

  • Code Craftsmanship: Developers are taking back control of the code they write, of how they write it, and of which tools make them productive.
  • Tiny assemblable frameworks: Gone are the huge monolithic frameworks from one vendor: Modern Web Apps are a mix and match of different frameworks, technologies and languages.
  • BYOT: Bring Your Own Tool: Modern Web Development is about productivity the way you want it, not the way the vendor wants it to be.
  • Mobile First: Web sites are now mostly viewed on mobile devices: web sites and apps must be built with that in mind.

Call for presenters

To make an awesome conference we need awesome speakers, and today we are also launching the call for presenters: if you want to propose a talk that matches the ideas of our Manifesto, you can already submit your proposal. Go to our Github account, fork the c4p repository, add your proposal and submit a pull request.

Call for sponsors

And we also need awesome partners and sponsors: if your company embraces our manifesto and wants to be part of this conference, please contact us via the contact form available on our website.

Subscribe to get more info

We don’t have much more to share at the moment apart from the timeframe, Spring 2015, possibly April, and the location, possibly Milano. But if you want to be the first to know about the conference, and to have the possibility to pre-register before the official registration opens, go to our website and subscribe to the mailing list.

My review of Umbraco CodeGarden 2014 – one word: amazing!

Last week I was in Copenhagen to attend Umbraco CodeGarden: I had been at this conference 4 years earlier, in 2010, when I gave 2 talks at the MVC pre-conference, but this time I was just a normal attendee.

The atmosphere during the event was amazing, exactly as I remembered it from 4 years ago, and the conference looked bigger and better organized. It really shows how one of the biggest features of Umbraco is its community.

I was planning to do a session-by-session review, but I realized that the post would become too long, so I’ll just recap the main takeaways from the conference while highlighting some of the sessions I preferred.


Summarizing in a few lines, I’d say that Umbraco 7 is a mature CMS, both from the content editor and from the developer point of view: for editors because there are so many powerful and easy editing controls (aka Property Editors), and for developers because now you can apply all the best practices of .NET web development (like MVC, DI, Unit Testing) to Umbraco too; in addition, you can build back-office property editors in a much simpler way, making the UX for editors even better.

And now, on with the detailed review…

The future of Umbraco in the Keynote

Given the many tweets with spoilers, that was the most anticipated talk: Niels showed what’s happening with Umbraco.

They announced the release of version 7.2, which includes many new features like the much-anticipated grid editor built by the Catalan company LECOATI, better document type management (mixins and, finally, the possibility to move to another document type with automatic migration) and better support for responsive design.

In a session during the second day, Sky is the Limit, the guys from LECOATI made more demos of the grid editor and some other amazing UI tools that they are bringing into the core of Umbraco and as packages.

They also demoed Umbraco as a Service, running on Azure and focusing more on easing the development and deployment workflow than on scalability. The solution they found is actually pretty clever: every change that developers make via the back office is serialized and then versioned in git, and deploying and moving between environments is done by using git merge features.

Finally they gave a glimpse of the plans for vNext: they plan on moving the core to ASP.NET vNext and the “web/cloud optimized” CLR that doesn’t use System.Web, and they are planning to introduce the concept of node variations, which will allow one-to-one translations of pages, but also showing different content based on other conditions (like referrals, devices and so on).

This was just a very quick review, but to get the full deal, you can watch the video of the keynote:

Best practices for achieving good software design with Umbraco: Our first Umbraco 7 build, Core Internals for Website Development and MVC Purée

Those three sessions were really nice walkthroughs of the core aspects of building a web site in a professional way, applying the best practices of “normal” ASP.NET development as well.

I’ll link to the 3 presentations in a few lines, but the key takeaway of those 3 sessions is that with Umbraco 7 developers have much more freedom and flexibility to do things their own way: for example you can use ASP.NET MVC to build your templates, mapping document types to POCO classes and building the view model out of them. You can also do much better URL interception and generation, to create more (or less) meaningful URLs.

They also showed a few interesting packages and tools to help with Umbraco development: Archetype for making document types more flexible, Glimpse7 and Inspect to get a view of what’s going on in an Umbraco page, and ModelBuilder and UmbracoMapper, which are two different approaches to creating view models out of document types.

Think about the UX of the editors, too… The Dark Side of the Moon and Thinking in Seven

Even if the two sessions had different objectives, they both brought essentially the same message: now that, with Umbraco 7 and AngularJS, it’s so easy to build property editors, you have to focus on building a great UX for the editors too.

Here are some back-end UX tips from the two sessions:

  • Give editors controls that match the way information will be displayed and the way editors work
  • In the back-end focus on the workflow used by editors
  • Build “pickers” instead of having editors copy-paste strings (for example, if they have to select a product from an external application, make a picker that connects via API to the external app, instead of asking them to go into the other application and copy-paste the id of the product)
  • Give editors immediate feedback of what they have entered (for example by showing a preview and giving context to what they entered)
  • Build as much automation as possible
  • But remember that since it involves development time that usually was not used before, this new approach has to be embraced both by PMs and by the client

For The Dark Side of the Moon, the video is already published.

Mobile development on top of Umbraco: Going native with Umbraco and Phonegap

A few good tips came from this session too.

The first is that making a REST service to provide data to a mobile application is incredibly easy in Umbraco 7: just implement a class that inherits from UmbracoApiController (which in turn inherits from the ASP.NET WebAPI controller) to get an ASP.NET WebAPI endpoint, as all the wiring up is done by Umbraco.

The second is the choice of framework used to build the PhoneGap mobile application: he chose Ionic, an HTML5 mobile framework made to work with AngularJS and designed mainly for native apps, rather than for mobile websites like jQuery Mobile.

The last useful tip was: ditch jQuery and use native JavaScript calls and CSS3 animations, as here you are dealing with just one browser (so no need for the compatibility layer that jQuery is) and with devices that have reduced computing power but HW-accelerated graphics (so prefer native animations that can be rendered by the HW acceleration over JS animations that need CPU time).

Here you can find the slides and some notes on the talk Mobile Development with Umbraco and PhoneGap.

The Future of web tooling

The usual demo about the new web tooling available in Visual Studio: SASS, LESS, support for Grunt, Node.js, and so on. But something I never heard anyone else say is that Microsoft is making an effort to support all the possible tools available in the frontend development scene, while heavily betting on AngularJS and Bootstrap. So if you are a .NET developer and still haven’t spent time learning this part of the world, start from those 2 libraries.

What now

Now I understand that I have to contribute back to this great community, and even if at my job I’m not going to use Umbraco any more, I definitely have to try building a web site with Umbraco 7… maybe the new site for the Web.Next Conference, and probably my blog, using the new blog package for Umbraco called Articulate.

And I’ve also seen that I have to learn AngularJS better, and try to pay more attention to the frontend development side.

Well… the post came out long anyway… thank you for reaching the end of the post.

Did you attend the conference too? What are your takeaways and comments?

PS: Videos are still being published; I’ll update the links as soon as they become available.

How a bit of baking paper saved my Cinema Display

Last week I moved my home office, which consists mainly of a desk with a MacBook Pro and a 7-year-old 20” Cinema Display, from the room downstairs to another room upstairs.

Unfortunately, when I plugged everything back in, the Cinema Display didn’t turn on, and the power light was flashing with the “short, long, short” code, which means “Make sure you are using the correct power adapter with the display”. Of course, since I’ve been using the display for 7 years, the power adapter was the right one.

Apparently this problem is very common: looking on Google and on the Apple support forums, there are hundreds of people reporting the same issue. The solutions varied from sending the display in for repair for 400 USD (apparently some problem with the display’s board) to buying the bigger power adapter (the one for a larger display) for 150 USD.

But one guy also suggested this nice zero-cost solution:

It turns out the middle pin in the power connector is a ground. When it read an incorrect voltage it makes the displays inverter turn off to protect the unit. One way to circumvent this is to cover the middle pin in the cord going from the monitor to the power brick. I used a piece of paper in the shape of a “W” covering just the middle pin. I double folded a piece of paper and folded it over the plug and down into the connector and then gently pushed the plug into the power brick. Once i did this the monitor popped on and all was good in the world…

This did the trick, and now I’m happily typing while looking at my Cinema Display.

If the explanation is difficult to visualize, I found a video on YouTube that shows how to do it (it uses some tape instead of paper, but the concept is the same).

Techorama conference day 2

Let’s continue with the review of the second day of Techorama. I wrote about the first day already last week.

What’s New in ASP.NET and VS 2013

Overview of the new features of VS 2013, especially focusing on the features that make web development easier, also for non-MS technologies: Browser Link, SideWaffle templates for just about everything, AngularJS support, Bootstrap CSS, LESS, SASS, and more.

Interesting was the small view of the future, with integrated support for external tools (like Grunt) and ASP.NET vNext, with Project K, merged MVC, WebAPI and SignalR, and cross-platform support.

HTML5 and JS communication API

Nice overview of all the messaging and communication APIs in modern browsers, shown using the JavaScript API directly instead of frameworks: WebSockets, long polling, server-sent events and message passing. Always with an eye on security, introducing CORS and JSONP as well.

I liked the approach of showing how everything works from the “bare metal” point of view. In the afternoon there was the SignalR talk to show the tooling and framework on the .NET side of things.

Building great HTTP based API with MS WebAPI

I've used WebAPI already and the talk was basically an introduction to the framework, but despite that I really liked the speaker and the way he presented the topic.

I also learned a few interesting bits I didn’t know before, like the in-memory OWIN-based server for testing (I’ll probably experiment with this a bit and include the topic in my upcoming book on OWIN) and attribute-based routing.

The History of Programming

That was a really funny talk, more stand-up comedy than IT keynote… a funny overview of all the programming languages invented in the history of programming.

At the end of the talk a nice initiative was presented, whose aim is to teach programming to kids in a fun, peer-taught way: CoderDojo. It’s available in more than 40 countries, Belgium and Italy included (in Milano, for example).

Introducing Nonacat

Nik Molnar gave lots of tips and tools to work efficiently with Github.

Here are some of the tips he showed; in addition, he compiled a list of all those tools on his blog.

  • Markdown and the github flavoured syntax
  • Readme and contributors markdown files are treated by github in a special manner
  • Some 3rd-party sites are built so that, just by replacing the domain “github.com” with theirs, you get additional features, like generated PDFs of markdown files, serving content over the web, or opening the repo in Cloud9
  • Huboard allows a Trello like Kanban/scrum board on top of Github issues
  • Github has lots of keyboard shortcuts to search, create and reply to issues (press ? to see them)
  • You can integrate external services with webhooks, and you can even create your own service that gets notified via webhooks (and to test them you can use services that let you receive requests from Github, like ngrok)
  • Finally, to build services that use the Github API, you can use the Octokit library

Why we ditched TFS and embraced Github, TeamCity, MyGet

This talk was a kind of “lessons learned” from a consultant who introduced the full TFS stack as soon as he entered the project he’s working on, but soon realized that using the source control part of TFS was making the team less effective and making it difficult to follow branching best practices (as working with branches in TFS is such a pain).

So they ditched TFS and moved to a private Github and TeamCity, also leveraging a private MyGet feed to keep builds always up to date with external components.

Of course this is not a silver bullet: the tools for Git are not that well integrated with Visual Studio, and using all those different systems required some work to integrate them. The team also had to change its way of working a bit to better leverage the fast branch switching of Git, and to learn to use pull requests when integrating their code into the main trunk; but after some time the development process had less friction than it had before, and code quality increased, thanks also to the easier peer reviews enabled by pull requests.

He also pointed out that even if the whole team does not move to Git, individual devs can start making the move by using Git locally on their machines and then pushing to TFS using TFS-to-Git tools.

Closing up

Those two days were really full of interesting hints, and I think I’ll now need some weeks to digest all of them and experiment with some of the technologies.

Finally, a great round of applause for the 3 guys behind Techorama: having organized conferences myself, I know how difficult it is, and selling out with 600 people at a 350€ conference is a huge accomplishment for their first edition.

Review of Techorama day 1

For the last two days I’ve been at the Techorama conference, and I have to say it had been a long time since I was that excited about a developer conference in which I was not directly involved as organizer or speaker. So here is a quick review of the sessions I followed. (PS: I’ll link to slides and videos when they get published.)

Faster faster… Async. ASP.NET

Probably the least interesting talk of the day: how to make WebForms faster by using async pages, even in scenarios where you are forced to still use old versions of the framework that do not have async/await support. Great speaking skills, but the topic was just not that interesting for me and the scenario I’m working in.

A frontend workflow

For this one, instead, I have mixed feelings: it was great to see 2 frontend developers talk to a mainly server-side developer audience, showing a glimpse of the world of CSS and frontend development and the tooling they use in a non-Microsoft environment (SASS, bundling and minification using Grunt on Node.js); but since all those features are also available natively in Visual Studio and ASP.NET, I’d have also loved to see how this frontend workflow could be integrated more tightly with the IDE all .NET devs use.

Full stack Web Performance

Delivered by Nik Molnar of Glimpse fame, this talk was about improving the performance of web applications. Performance is affected by many factors, starting with the few seconds of delay possibly introduced by the network, going down to the client-side rendering, which can affect performance in the order of a few milliseconds.

The talk went through all the possible techniques and tools that help increase performance. Something I really wish I had known one year ago was the client-side profiler to troubleshoot sluggish rendering and scrolling performance.

Slides, demos and a list of links to the tools used during the talk:

Zone out, Check in, Move on

Ever wondered why developers are only productive when they are in the zone, and why it takes so much time to get back into the zone after interruptions? In this talk we got to see the reasons behind those and also a possible way of solving them:

  • It takes so much time to get back into the zone because, to write correct software, developers have to “load into memory” a map of the system they are developing. So the easy solution, which is nothing new to “good” developers, is working on independent modules rather than on whole systems: this reduces the number of lines the developer has to keep in his or her head to get a good understanding of the code being written. Basically, apply the Single Responsibility Principle.
  • The second suggestion was to use a DVCS like git instead of a centralized one, because it makes the cost of errors and experimentation very low: branches are easy to create and destroy, and frequent check-ins do not come with a network IO cost. Developers can experiment and try solutions even without completely understanding the system they are working on, and if their intuition was wrong, they haven’t messed up production code, nor spent a lot of time setting up their safety net.
  • The last reason developers get out of the zone is not getting immediate feedback on what they are doing: working on big systems makes testing and debugging slow and introduces dangerous delays that might push developers out of the zone. To prevent this, the easy solution, again nothing new, is to write unit tests that can immediately show whether the code written is breaking something.

To stay productive and reduce the cost of interruptions: write small and focused classes/modules, use git, write unit tests.

Web app security trends

The usual suspects of web security threats: XSS, CSRF, link hijacking, iframe hiding, and so on. The takeaway of this talk, at least for me, was that browsers are now catching up and helping prevent some of these threats, just by honoring some HTTP headers that limit the way external resources are used inside pages. It’s also pretty easy to prevent most attacks, so the key takeaway of the session was: it takes so little to prevent attacks, so do it!

Introduction to Roslyn

The usual deep dive by Bart De Smet, with syntax trees and low-level stuff. Great to see Roslyn in action, as I had heard about it but never really looked into it. Not sure this stuff will be useful for the “normal” web developer, but it will definitely make life easier for tool vendors.

Wrapping up

All in all, I have to say that the most inspiring talk of the day was the one I was not planning to attend, the one about the Zone… It really opened my eyes to the reasons why experienced developers tend to lose excitement, and it also gave me more reasons to try to convince my colleagues to keep methods and classes focused and small.

Let’s now wait for the second day, which will be all about ASP.NET and web development.