
My review of Umbraco CodeGarden 2014 – one word: amazing!

Last week I was in Copenhagen to attend the Umbraco CodeGarden: I had been at this conference once before, in 2010, when I gave two talks at the MVC pre-conference, but this time I was just a regular attendee.

The atmosphere during the event was amazing, exactly as I remembered it from 2010, and the conference looked bigger and better organized. It really shows how one of Umbraco's biggest strengths is its community.

I was planning to do a session-by-session review, but I realized that the post would become too long, so I'll just recap the main takeaways from the conference while highlighting some of the sessions I liked most.

TL;DR

Summarizing in a few lines, I'd say that Umbraco 7 is a mature CMS, both from the content editor's and from the developer's point of view: for editors because there are so many powerful and easy editing controls (aka Property Editors), and for developers because you can now apply all the best practices of .NET web development (like MVC, DI and unit testing) to Umbraco as well. In addition, you can build back-office property editors in a much simpler way, making the editors' UX even better.

And now, on to the detailed review…

The future of Umbraco in the Keynote

Given the many tweets with spoilers, this was the most anticipated talk: Niels showed what's happening with Umbraco.

They announced the release of version 7.2, which includes many new features, like the long-awaited grid editor built by the Catalan company LECOATI, better document type management (mixins and, finally, the possibility to switch a node to another document type with automatic migration) and better support for responsive design.

In a session during the second day, Sky is the Limit, the guys from LECOATI made more demos of the grid editor and some other amazing UI tools that they are bringing into the core of Umbraco and as packages.

They also demoed Umbraco as a Service, running on Azure and focusing more on easing the development and deployment workflow than on scalability. The solution they found is pretty neat, actually: every change that developers make via the back office is serialized and versioned on git, and deployment and moving between environments are done using git merge features.

Finally, they gave a glimpse of the plans for vNext: they plan to move the core to ASP.NET vNext and the "web/cloud optimized" CLR that doesn't use System.Web, and they are planning to introduce the concept of node variations, which will allow one-to-one translations of pages, but also showing different content based on other conditions (like referrer, device and so on).

This was just a very quick review, but to get the full deal, you can watch the video of the keynote: http://stream.umbraco.org/video/9918428/umbraco-codegarden-14-keynote

Best practices for achieving good software design with Umbraco: Our first Umbraco 7 build, Core Internals for Website Development and MVC Purée

Those three sessions were really nice walkthroughs of the core aspects of building a website in a professional way, also applying the best practices of "normal" ASP.NET development.

I'll link to the 3 presentations in a few lines, but the key takeaway of those 3 sessions is that with Umbraco 7 developers have much more freedom and flexibility to do things their own way: for example, you can use ASP.NET MVC to build your templates, mapping document types to POCO classes and building the view model out of them. You can also do much better URL interception and generation, to create more (or less) meaningful URLs.

They also showed a few interesting packages and tools that help with Umbraco development: Archetype, for making document types more flexible; Glimpse7 and Inspect, to see what's going on in an Umbraco page; and ModelBuilder and UmbracoMapper, which are two different approaches to creating view models out of document types.

Think about the UX of the editors, too… The Dark Side of the Moon and Thinking in Seven

Even if the two sessions had different objectives, they both delivered mainly the same message: now that building property editors is so easy with Umbraco 7 and AngularJS, you have to focus on building a great UX for the editors too.

Here are some backend UX tips from the two sessions:

  • Give editors controls that match the way information will be displayed and the way editors work
  • In the back-end focus on the workflow used by editors
  • Build "pickers" instead of copy-pasting strings (for example, if editors have to select a product from an external application, make a picker that connects via API to the external app, instead of asking them to go into the other application and copy-paste the product's id)
  • Give editors immediate feedback of what they have entered (for example by showing a preview and giving context to what they entered)
  • Build as much automation as possible
  • But remember that since this new approach involves development time that usually wasn't budgeted before, it has to be embraced both by PMs and by the client

For The Dark Side of the Moon, the video has already been published.

Mobile development on top of Umbraco: Going native with Umbraco and Phonegap

A few good tips came from this session too.

The first is that making a REST service to provide data to a mobile application is incredibly easy in Umbraco 7: just implement a class that inherits from UmbracoApiController (which in turn inherits from the ASP.NET Web API controller) and you get an ASP.NET Web API endpoint, as all the wiring up is done by Umbraco.

The second is the choice of framework for building the PhoneGap mobile application: he chose Ionic, an HTML5 mobile framework made to work with AngularJS and designed mainly for native-feeling apps rather than mobile websites (unlike jQuery Mobile).

The last useful tip was: ditch jQuery and use native JavaScript calls and CSS3 animations, as here you are dealing with just one browser (so no need for the compatibility layer that jQuery is) and with devices that have reduced computing power but hardware-accelerated graphics (so prefer native animations, which can be rendered by the GPU, over JS animations, which need CPU time).
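The same tip extends beyond the DOM and animations: many jQuery utility helpers have direct native equivalents. A quick sketch of the mapping (the array and functions below are just made-up examples):

```javascript
// Hypothetical data, only to show how jQuery helpers map to native calls.
var items = [1, 2, 3, 4];

// $.grep(items, fn)  ->  items.filter(fn)
var evens = items.filter(function (n) { return n % 2 === 0; });

// $.map(items, fn)   ->  items.map(fn)
var doubled = items.map(function (n) { return n * 2; });

// $('#menu') / $('.item')  ->  document.querySelector('#menu') /
//                              document.querySelectorAll('.item')
console.log(evens, doubled);
```

On a single known browser (the device's WebView), these native calls do the same job without shipping and parsing the whole jQuery library.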

Here you can find the slides and some notes on the talk Mobile Development with Umbraco and PhoneGap.

The Future of ASP.NET web tooling

The usual demo about the new web tooling available in Visual Studio: Sass, LESS, support for Grunt, Node.js, and so on. But something I never heard anyone else say is that Microsoft is making an effort to support all the possible tools available on the frontend development scene, but is betting heavily on AngularJS and Bootstrap. So if you are a .NET developer and still haven't spent time learning this part of the world, start with those two libraries.

What now

Now I understand that I have to contribute back to this great community, and even if I'm not going to use Umbraco at my job any more, I definitely have to try building a website with Umbraco 7… maybe the new site for the Web.Next Conference, and probably my blog, using the new blog package for Umbraco called Articulate.

I've also realized that I have to learn AngularJS properly, and pay more attention to the frontend side of development.

Well… the post came out long anyway… thank you for reading all the way to the end.

Did you attend the conference too? What are your takeaways and comments?

PS: Videos are still being published; I'll update the links as soon as they become available.

How a bit of baking paper saved my Cinema Display

Last week I moved my home office, which consists mainly of a desk with a MacBook Pro and a 7-year-old 20" Cinema Display, from the room downstairs to another room upstairs.

Unfortunately, when I plugged everything back in, the Cinema Display didn't turn on, and the power light was flashing with the "short, long, short" code, which means "Make sure you are using the correct power adapter with the display". Of course, since I'd been using the display for 7 years, the power adapter was the right one.

Apparently this problem is very common: looking on Google and on Apple's support forums, there are hundreds of people reporting the same issue. The solutions varied from sending the display in for repair for 400 USD (apparently some problem with the display's board) to buying the bigger power adapter (the one for a larger display) for 150 USD.

But one guy also suggested this nice zero-cost solution:

It turns out the middle pin in the power connector is a ground. When it read an incorrect voltage it makes the displays inverter turn off to protect the unit. One way to circumvent this is to cover the middle pin in the cord going from the monitor to the power brick. I used a piece of paper in the shape of a “W” covering just the middle pin. I double folded a piece of paper and folded it over the plug and down into the connector and then gently pushed the plug into the power brick. Once i did this the monitor popped on and all was good in the world…

This did the trick, and now I'm happily typing away looking at my Cinema Display.

If the explanation is difficult to visualize, I found a video on YouTube that shows how to do it (it uses tape instead of paper, but the concept is the same).

Techorama conference day 2

Let's continue with the review of the second day of Techorama. I wrote about the first day last week.

What’s New in ASP.NET and VS 2013

Overview of the new features of VS2013, especially those that make web development easier, also for non-MS technologies: Browser Link, SideWaffle templates for just about everything, AngularJS support, Bootstrap, LESS, Sass, and more.

Also interesting was the small glimpse of the future, with integrated support for external tools (like Grunt) and ASP.NET vNext, with Project K, merged MVC, Web API and SignalR, and cross-platform support.

HTML5 and JS communication API

Nice overview of all the messaging and communication APIs in modern browsers, shown using the JavaScript APIs directly instead of frameworks: WebSockets, long polling, server-sent events and message passing. Always with an eye on security, also introducing CORS and JSONP.

I liked the approach of showing how everything works from the "bare metal" point of view. In the afternoon there was the SignalR talk to show the tooling and frameworks on the .NET side of things.
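As a refresher on the "bare metal" of one of those techniques: JSONP works around the same-origin policy by having the server wrap its JSON payload in a call to a client-supplied callback function, which the browser then executes as a script. A minimal sketch of what such a response body looks like (the callback and data names are made up):

```javascript
// Build the body of a JSONP response: the client loads it via a <script>
// tag, so the browser simply calls callbackName with the data as argument.
function jsonpResponse(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ');';
}

// e.g. for a request like /api/data?callback=onData
console.log(jsonpResponse('onData', { user: 'simone', ok: true }));
```

This is also why JSONP needs the security care the talk mentioned: the client ends up executing whatever script the server returns.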

Building great HTTP based API with MS WebAPI

I've used WebAPI already and the talk was basically an introduction to the framework, but despite that I really liked the speaker and the way he presented the topic.

I also learned a few interesting bits I didn't know before, like the in-memory OWIN-based server for testing (I'll probably experiment with this a bit and include the topic in my upcoming book on OWIN) and attribute-based routing.

The History of Programming

That was a really funny talk, more standup comedy than IT keynote: a light-hearted overview of all the programming languages invented in the history of programming.

At the end of the talk a nice initiative was presented, whose aim is to teach programming to kids in a fun, peer-taught way: CoderDojo. It's available in more than 40 countries, Belgium and Italy included (in Milano, for example).

Introducing Nonacat

Nik Molnar gave lots of tips and tools to work efficiently with Github.

Here are some of the tips he showed; in addition, he compiled a list of all those tools on his blog.

  • Markdown and the GitHub Flavored Markdown syntax
  • Readme and contributors markdown files are treated by GitHub in a special manner
  • Some 3rd-party sites are built so that just by replacing the domain "github.com" with theirs you get additional features, like generated PDFs of markdown files, serving files over the web, or opening the repo in Cloud9
  • HuBoard gives you a Trello-like Kanban/Scrum board on top of GitHub issues
  • GitHub has lots of keyboard shortcuts to search, create and reply to issues (press ? to see them)
  • You can integrate external services with webhooks, and you can even create your own service that gets notified via webhooks (and to test them you can use services that let you inspect the requests coming from GitHub, like requestb.in and ngrok)
  • Finally, to build services that use the GitHub API, you can use the Octokit library

Why we ditched TFS and embraced GitHub, TeamCity and MyGet

This talk was a kind of "lessons learned" from a consultant who introduced the full TFS stack as soon as he joined the project he's working on, but soon realized that using the source control part of TFS was making the team less effective and making it difficult to follow branching best practices (as working with branches in TFS is such a pain).

So they ditched TFS and moved to a private GitHub and TeamCity, also leveraging a private MyGet feed to keep builds always up to date with external components.

Of course this is not a silver bullet, since tools for Git are not that well integrated with Visual Studio, and using all those different systems required some work to integrate them. The team also had to change its way of working a bit to better leverage Git's fast branch switching, and learn to use pull requests when integrating code into the main trunk; but after some time the development process had less friction than it had before, and code quality increased, thanks also to the easier peer reviews that pull requests enable.

He also pointed out that even if the whole team doesn't move to Git, individual devs can start making the move by using Git locally on their machines and then pushing to TFS using Git-to-TFS tools.

Closing up

Those two days were really full of interesting hints, and I think I'll now need some weeks to digest all of them and experiment with some of the technologies.

Finally, a great round of applause for the 3 guys behind Techorama: having organized conferences myself I know how difficult it is, and selling out with 600 people at a 350€ conference is a huge accomplishment for their first edition.

Review of Techorama day 1

For the last two days I've been at the Techorama conference, and I have to say it had been a long time since I was that excited about a developer conference in which I was not directly involved as organizer or speaker. So here is a quick review of the sessions I attended. (PS: I'll link to slides and videos when they get published.)

Faster faster… Async. ASP.NET

Probably the least interesting talk of the day for me: how to make WebForms faster by using async pages, even in scenarios where you are forced to use old versions of the framework that don't have async/await support. Great speaking skills, but the topic was just not that relevant for me and the scenario I'm working in.

A frontend workflow

For this one, instead, I have mixed feelings: it was great to see two frontend developers talk to a mainly server-side audience, showing a glimpse of the world of CSS and frontend development and the tooling they use in a non-Microsoft environment (Sass, bundling and minification using Grunt on Node.js). But since all those features are also available natively in Visual Studio and ASP.NET, I'd have loved to see how this frontend workflow could be integrated more tightly with the IDE all .NET devs use.

Full stack Web Performance

Delivered by Nik Molnar of Glimpse fame, this talk was about improving the performance of web applications. Performance is affected by many factors, from the few seconds of delay possibly introduced by the network down to client-side rendering, which affects performance in the order of a few milliseconds.

The talk went through all the possible techniques and tools to help increase performance. Something I really wish I had known a year ago is the client-side profiler for troubleshooting sluggish rendering and scrolling.

Slides, demos and a list of links to the tools used during the talk: http://bit.ly/full-stack-web-perf.

Zone out, Check in, Move on

Ever wondered why developers are only productive when they are in the zone, and why it takes so much time to get back into the zone after an interruption? In this talk we got to see the reasons behind this, and also possible ways of solving it:

  • It takes so much time to get back into the zone because, to write correct software, developers have to "load into memory" a map of the system they are developing. So the easy solution, which is nothing new to "good" developers, is to work on independent modules rather than on whole systems: this reduces the number of lines developers have to keep in their brain to get a good understanding of the code they are writing. Basically, apply the Single Responsibility Principle.
  • The second suggestion was to use a DVCS like Git instead of a centralized one, because it makes the cost of errors and experimentation very low: branches are easy to create and destroy, and frequent check-ins don't come with a network I/O cost. Developers can experiment and try solutions even without completely understanding the system they are working on, and if their intuition was wrong they haven't messed up production code or spent a lot of time setting up a safety net.
  • The last reason why developers get out of the zone is not getting immediate feedback on what they are doing: working on big systems makes testing and debugging slow, and introduces dangerous delays that might pull developers out of the zone. To prevent this, the easy solution, again nothing new, is to write unit tests that immediately show whether the code written is breaking something.

To stay productive and reduce the cost of interruptions: write small and focused classes/modules, use git, write unit tests.
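The "immediate feedback" point is easy to demonstrate: a small, focused module plus a test that runs in milliseconds keeps you in the zone. A trivial sketch (slugify is a made-up example function, not from the talk):

```javascript
// A small, focused module: one responsibility, trivial to hold in your head.
function slugify(title) {
  return title.trim().toLowerCase().replace(/\s+/g, '-');
}

// Immediate feedback: assertions that run instantly, no big system to spin up.
console.assert(slugify('Hello World') === 'hello-world');
console.assert(slugify('  Umbraco  CodeGarden ') === 'umbraco-codegarden');
```

If a change breaks the function, you know within seconds, long before your mental map of the system has faded.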

Web app security trends

The usual suspects of web security threats: XSS, CSRF, link hijacking, iframe hiding, and so on. The takeaway of this talk, at least for me, was that browsers are now catching up and helping prevent some of these threats, just by honoring some HTTP headers that limit the way external resources are used inside pages. It's also pretty easy to prevent most common attacks, so the key takeaway of the session was: it takes so little to prevent attacks, just do it!

Introduction to Roslyn

The usual deep dive from Bart De Smet, with syntax trees and low-level stuff. Great to see Roslyn in action, as I had heard about it but never really looked into it. Not sure this stuff will be useful for the "normal" web developer, but it will definitely make life easier for tool vendors.

Wrapping up

All in all, I have to say that the most inspiring talk of the day was the one I was not planning to attend, the one about the zone… It really opened my eyes to the reasons why experienced developers tend to lose excitement, and also gave me more reasons to try to convince my colleagues to keep methods and classes focused and small.

Let’s now wait for the second day, which will be all about ASP.NET and web development.

Slides and demos for the Owin and Katana talk from NCrafts conf in Paris

Last Friday Ugo and I were in Paris for the NCrafts Conference, organized by ALT.NET France and our friend Rui Carvalho.

We talked about "Owin and Katana": at the beginning I saw many question marks over people's heads, but as the talk went on I saw those question marks turn into light bulbs, and at the end a very lively Q&A session happened, something I had hoped for but frankly didn't expect.

We published the slides on Slideshare:

We also put the demos I showed during the talk on GitHub.
There are 5 demos:

  • 01.OwinIIS: Demo of running Katana on IIS Host
  • 02.OwinHost: Demo of running Katana on OwinHost.exe
  • 03.OwinSelfHost: Demo of running Katana on self host and with custom error page
  • 04.OwinWebAPI: Running WebAPI on top of Katana
  • 05.OwinMiddleware: example of using 3 different middlewares in the OWIN pipeline, each built using a different approach.

I hope Rui gets to organize the conference next year too.
As a bonus, the conference was held in a really nice location, 300 meters from Les Invalides and also very close to the Musée d'Orsay.

If you want to know more about Owin and Katana, we are writing a short e-book that will be available in a month or so, published by Syncfusion as part of its “Succinctly” e-book series.

Speaking about Katana and OWIN at NCrafts Conference in Paris

Lately I’ve been digging deeper into the OWIN specs and in particular into the implementation done by Microsoft: Katana.

For those interested in knowing more about this new lightweight and modular web server, Ugo Lattanzi and I are going to give a talk about it on the 16th of May in Paris at the NCrafts conference, organized by our friend Rui Carvalho.


This conference, despite being in France, will be almost entirely in English and will have prominent members of the .NET developer community speaking, like Greg Young of CQRS fame and Lucene.Net core contributor Itamar Syn-Hershko.

You can have a look at the full agenda, and you still have 3 days (till Wednesday 16th April, afternoon) to secure an early-bird ticket with a 50€ discount on the full price: don't miss this opportunity to attend a developer conference with a view of the Tour Eiffel.

In addition to speaking at NCrafts conf, we are also writing a short e-book that will be published around the same timeframe by Syncfusion as part of its “Succinctly” e-book series.

Why is a 32bit Windows Azure WebSite running as 64bit?

Yesterday I updated my Ghost installation on Azure Websites and my test blog stopped working: I enabled error logging and the error I got was:


Error: Cannot find module './binding\Release\node-v11-win32-x64\node_sqlite3.node'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (D:\home\site\wwwroot\node_modules\sqlite3\lib\sqlite3.js:7:15)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17) 

Ghost uses node-sqlite3, which is a binding to SQLite and thus requires the native binary for the OS and architecture of the server: the download happens during the npm installation and depends on the architecture npm is running on. In my case, since I have a free Windows Azure Web Site, the architecture is 32-bit, so after installing the package I got node_modules\sqlite3\lib\binding\Release\node-v11-win32-ia32\node_sqlite3.node, which seems to be legit.

The dynamic reference is based on the process.arch variable, so you would expect it to be ia32 as well, since my website is configured as 32-bit. So why is the sqlite3 module trying to reference the x64 version?

I did some debugging through the amazing Kudu Debug Console to try and understand why: I tried getting the process.arch variable both via console (which should reflect the condition seen by npm) and via a small node.js app running in the same web site.

Running on the console node -e "console.log(process.arch)" I got ia32.

On the other hand, trying to detect the same variable via the website, I got x64.

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Architecture:'+process.arch+'\n');
}).listen(process.env.PORT, '127.0.0.1');

node-sqlite3 is behaving correctly: it installs ia32 because npm runs as 32-bit, and requires x64 because the website process runs as 64-bit.

I fixed my problem by running npm install sqlite3 --target_arch=x64 via the Kudu debug console, forcing the installation of the 64-bit version of the sqlite binary, but the question still remains: if the website is configured as 32-bit, why does the node.js process run as 64-bit while the console and deploy process run as 32-bit?

UPDATE: I just installed the latest from Ghost master, which references node-sqlite3 version 2.2; it changed the binary dependency installation to node-pre-gyp, which installs the 64-bit version of sqlite, so my problem is solved. Nevertheless, getting the 64-bit version when my website is configured as 32-bit still seems like a bug.

Show me your Visual Studio Extensions

All developers like to personalize their IDE, and even if lately I'm not doing that much coding, I still like to have my Visual Studio 2013 "pimped" with the extensions that make work easier and faster. Here is a list of the extensions I've installed on my VS2013.

ReSharper

I guess this tool doesn’t need introductions: refactorings, code completion, code quality and coding standards checks and many other little helpers that make coding faster.

It's a commercial tool that costs 134€ and comes with a 30-day trial. More info on JetBrains' web site.

NuGet

Another extension that doesn't need an introduction is NuGet. It ships with VS2013, but for older versions of Visual Studio you have to install it manually.

Web Essentials

Visual Studio 2013 already has a lot of HTML/CSS/JS editing features, but Mads Kristensen's Web Essentials adds even more: Zen Coding, JSHint, LESS, Sass, CoffeeScript, Markdown, image optimization, IntelliSense for CSS media queries, robots.txt files, HTML5 AppCache, AngularJS tags and much more.

Basically an essential extension for anyone doing modern web development using .NET.

SideWaffle

Going along with Web Essentials, SideWaffle adds a bunch of project and item templates and snippets for modern web development (and also for Windows Phone and Windows Store apps).

CodeMaid

CodeMaid is an extension that helps clean up and organize your code. Useful even if you already have ReSharper… and vital if you don't. Have a look at the documentation to see all that it does.

Productivity Power Tools

From Microsoft, this extension adds some pretty useful features that help you navigate the IDE faster. Some duplicate features available in ReSharper, but others are unique, so install it even if you have ReSharper. Check the features and download it from the Visual Studio Gallery. There is also a video that explains the new features.

Routing Assistant

Not an extension for everyone, but very useful if you do ASP.NET MVC (oops… maybe it is for everyone, then): it explores your solution and visualizes all the routes in a tree-like interface, and also acts as a routing debugger to help you see which routes are matched by specific URLs.

It's a free product from a software company that sells ASP.NET MVC libraries.

Code Digger

Based on Pex, this extension analyzes the possible execution paths of your code and tries to find edge cases and exceptions. Download it from the Visual Studio Gallery.

Visual Studio Team Foundation Power Tools

An essential extension for anyone “administering” a TFS 2013 installation. Download it.

What other extensions would you recommend?

This was my list of extensions: what am I missing? What others would you recommend? Write it in a comment.

UPDATE: Ugo Lattanzi wrote a similar post about a year and a half ago, with extensions for Visual Studio 2012: most of the extensions mentioned there are still relevant for 2013, like VSCommands, StyleCop and GhostDoc.

How to update Ghost on Windows Azure

While trying Ghost I found myself needing to update the code to keep it in sync with the latest version available on GitHub, and since both getting the updates and the deployment to Azure Web Sites are done via git, I wrote a small shell script that automates the process.

#! /bin/bash
echo Pulling from Azure
git pull azure master
echo Pulling updates from origin
git pull origin
echo Compiling assets
grunt init
grunt prod
echo Committing to local repository
git commit -a -m "Applying changes from origin"
echo Pushing to Azure
git push azure master

A little explanation of what the script does:

  1. It pulls from Azure in case the site has been edited or updated already
  2. It pulls from the master of Ghost to get the latest changes
  3. Executes the grunt build file: compiles Sass, concatenates the JavaScript files and finally minifies them
  4. It commits to the local repository all the changes (including all the generated files)
  5. And finally pushes to the git repository used by the Azure Web Site
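For .NET developers unfamiliar with Grunt, the grunt init and grunt prod commands above run tasks defined in the project's Gruntfile. A heavily simplified, hypothetical Gruntfile showing the shape of such a build (task names and file paths are made up; Ghost's real one is much larger):

```javascript
// Hypothetical Gruntfile.js sketch of a Sass-compile / concat / minify build.
module.exports = function (grunt) {
  grunt.initConfig({
    sass:   { dist: { files: { 'css/site.css': 'scss/site.scss' } } },
    concat: { dist: { src: ['js/src/*.js'], dest: 'js/site.js' } },
    uglify: { dist: { files: { 'js/site.min.js': 'js/site.js' } } }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // "grunt prod" -> compile Sass, concatenate the JS, then minify it.
  grunt.registerTask('prod', ['sass', 'concat', 'uglify']);
};
```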

How to install all web development tools needed for Ghost on Mac

As I wrote last week, I've started playing around with Ghost, and to do so I had to go through extensive maintenance of my MacBook. That also gave me the opportunity to revamp it, installing the tools needed for a modern web development workstation. In this post I won't focus that much on the Ghost installation part, as it's well explained in Scott Hanselman's post about installing Ghost on Azure; I'll focus instead on installing the needed tools using best practices.

Tools needed

Basically you need to install the following tools:

  • Git
  • Node.js
  • Ruby

Git

Git is a prerequisite for installing most of the other tools, as nowadays most automatic installation processes for open-source tools fetch the code directly from a git repository and build it. So we are going to install it first. You might already have it installed: if unsure, just type git --version. If you don't get anything, you have to install it, downloading it from http://git-scm.com/download/mac. Once installed, open a terminal window and type git --version again. You should get something like:

git version 1.8.4.2

This means it’s been installed correctly and, at the time of writing, you are running on the latest available version.

Node.js tools

Ghost is a Node.js application, so you obviously need to install Node.js. But to make life easier in case you want to work with it on other projects, you should also install NVM (a Node version manager), which makes switching between versions very easy.

NVM

To install it, just type this line in a terminal window (no sudo needed, as NVM installs in your home directory):

curl https://raw.github.com/creationix/nvm/master/install.sh | sh

Node.js

Once NVM is installed, you can download and install the latest version of Node.js by typing nvm install 0.10, which gets the latest available 0.10.x release.

Grunt

Needed for building your own Ghost, but also very useful if you do JavaScript development, Grunt is a task runner that can automate almost everything: for .NET developers, it's like MSBuild or NAnt… just running on Node.js. To install it, type the following and you're good to go.

sudo npm install grunt-cli -g

Remember the sudo, as otherwise the installation will fail.

Ruby

Even if you are not going to work with Ruby, installing it is a good idea as many web developer tools, like Sass, require Ruby.

Like we did for Node.js with NVM, we are going to install Ruby via a version manager, called RVM.

curl -sSL https://get.rvm.io | bash -s stable --ruby

The line above will install RVM and, once done, will automatically download and install the latest stable version of Ruby.

Sass and Bourbon

In addition, you'll also have to install Bourbon (which brings in Sass as a dependency): gem install bourbon will do the trick.

Installing Ghost

Now that you have all the tools needed to build Ghost (or any other modern Node.js application), you can go on and install it locally. Just type the following lines in the terminal window.

git clone https://github.com/TryGhost/Ghost.git
cd Ghost
npm install
grunt init
grunt prod

Running Ghost

At this point you have a local version of Ghost: the next step is getting it up and running. Run npm start and then point your browser to http://127.0.0.1:2368

Running Ghost on Azure

Now that you have Ghost running locally, you can install it on Azure… just follow the last part of Scott Hanselman's step-by-step guide and you'll have your blog up and running in no time.