Ben Wong is a Web/iOS Developer from Brisbane, Australia. When he's working he is usually coding in HTML, CSS, JavaScript, ASP.NET (C#/VB.NET) or SQL. When he's not coding he's probably on a basketball court.

How to configure Umbraco ModelsBuilder to generate models in a separate project

When you first install Umbraco, Models Builder is configured by default to run in PureLive models mode and generates the model classes in the ~/App_Data/Models folder in the Umbraco.Web.PublishedContentModels namespace.

If you’re like me and want more control over the namespace and location of your classes, it’s possible. I’ve worked out 2 different ways to make Models Builder generate the models in a separate project with a custom namespace.

Changing the namespace is the same for both methods – set the namespace in a web.config app setting with the key Umbraco.ModelsBuilder.ModelsNamespace.
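For example, in the appSettings section of web.config (the namespace value here is a placeholder – use whatever namespace you want your models in):

```xml
<appSettings>
  <!-- Models Builder will generate the model classes in this namespace -->
  <add key="Umbraco.ModelsBuilder.ModelsNamespace" value="MyProject.Models" />
</appSettings>
```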

The first method uses the Models Builder API and requires you to manually regenerate the models using a custom tool in Visual Studio whenever you update your document types. For this you will need to install the Umbraco.ModelsBuilder.API NuGet package and the Umbraco Models Builder Custom Tool.

Dave Woestenborghs has a good description of how to set this up in his article about Models Builder.

The second method uses LiveAppData models mode. I had to work this method out for myself because there weren’t any articles specifically about setting up LiveAppData to generate models in a separate project. I pieced it together by reading the Install and Configure documentation for Models Builder.

The trick is to set the ModelsDirectory and the AcceptUnsafeModelsDirectory app settings. The directory will need to be set relative to the path of the project that has UmbracoCms installed. The AcceptUnsafeModelsDirectory setting needs to be set to true to allow the models directory to be set to a folder outside of the Umbraco website project.
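Putting that together, the app settings might look something like this (the models project folder name is a placeholder, and the path is relative to the Umbraco website project):

```xml
<appSettings>
  <!-- Regenerate models automatically whenever document types change -->
  <add key="Umbraco.ModelsBuilder.ModelsMode" value="LiveAppData" />
  <!-- Path to the models project, relative to the project with UmbracoCms installed -->
  <add key="Umbraco.ModelsBuilder.ModelsDirectory" value="~/../MyProject.Models" />
  <!-- Required because the models directory is outside the Umbraco website project -->
  <add key="Umbraco.ModelsBuilder.AcceptUnsafeModelsDirectory" value="true" />
</appSettings>
```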

Both methods have their merits, but if your document types are changing frequently you’ll probably want to use the LiveAppData method.

How to set up Gulp to optimise PNG images using Zopfli

Jeff Atwood posted about using Zopfli to optimise PNG images. Here’s how to set up Gulp to do it. I’m assuming you already have a project with Gulp set up that uses gulp-imagemin.

  1. Install imagemin-zopfli by running the following on the command line in the project folder.
  2. In the project’s gulpfile.js, update the task that runs gulp-imagemin to use imagemin-zopfli by setting the use option.
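The two steps above might look like this – step 1 is `npm install --save-dev imagemin-zopfli`, and step 2 is a gulpfile task along these lines (task name and paths are placeholders, and the `more` option trades extra compression time for smaller files):

```javascript
// gulpfile.js – sketch assuming gulp, gulp-imagemin and imagemin-zopfli
// are installed in the project
var gulp = require('gulp');
var imagemin = require('gulp-imagemin');
var zopfli = require('imagemin-zopfli');

gulp.task('images', function () {
  return gulp.src('src/images/**/*.png')
    .pipe(imagemin({
      // Hand PNG optimisation over to Zopfli via the `use` option
      use: [zopfli({ more: true })]
    }))
    .pipe(gulp.dest('dist/images'));
});
```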

I did a quick comparison between the default imagemin PNG optimiser and Zopfli. Zopfli compressed my sample PNG better (59.8% vs 53.8%), but took longer to do it (866ms vs 101ms).

[terminal screenshot: imagemin vs Zopfli compression comparison]

How to prevent a web page from being indexed by search engines

At work, there are times when we need to publish web pages with embargoed content for review by stakeholders. Often it’s too much hassle to add even basic password protection to the page to keep out unwanted visitors. All you really want is to make sure the page doesn’t appear in search results until you want it to. I’ve gotten the panicked phone call to urgently hide a web page too many times, so now I’m documenting how to hide a web page from search engines.

From my 5 minutes of research there are 2 simple ways to prevent search engines like Google, Bing and Yahoo from indexing a page.

  1. Use robots.txt to disallow access to a path on your website
  2. Add a robots meta tag to the web page with noindex

For the first method a simple text file named robots.txt needs to be added to the root folder of your web site with the following content:
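For example, to block a single page (the path here matches the example path discussed below):

```txt
User-agent: *
Disallow: /path/of/webpage-to-hide.html
```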

This essentially tells any compliant robot crawling the site not to crawl (and therefore not index) the page at path /path/of/webpage-to-hide.html. This method is not a good idea for hiding embargoed content, as robots.txt is a publicly viewable file – it would just point out what you don’t want people to see.

For the second method, the following tag needs to be inserted between the head tags of the web page you don’t want to be indexed.
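The standard noindex robots meta tag looks like this:

```html
<meta name="robots" content="noindex">
```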

Using a robots meta tag by itself is a good option, as someone would need to know the page existed before they could discover that you don’t want it indexed.

For more details on using the robots.txt and robots meta tag, check out the related links.

Related Links