Ben Wong is a Web/iOS Developer from Brisbane, Australia. When he's working he is usually coding in HTML, CSS, JavaScript, ASP.NET (C#/VB.NET) or SQL. He also does iOS (iPhone and iPad) app development on the side. When he's not coding he's probably on a basketball court.

How to set up Gulp to optimise PNG images using Zopfli

Jeff Atwood posted about using Zopfli to optimise PNG images. Here’s how to set up Gulp to do it. I’m assuming you already have a project with Gulp set up that uses gulp-imagemin.

  1. Install imagemin-zopfli by running the install command shown below the list on the command line in the project folder.
  2. In the project’s gulpfile.js, update the task that runs gulp-imagemin to use imagemin-zopfli by setting the use option, as in the gulpfile sketch below.
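For step 1, the install command is something like this (assuming npm, with the plugin saved as a dev dependency):

```
npm install --save-dev imagemin-zopfli
```

For step 2, here’s a minimal sketch of the gulpfile task. The task name and the src/dest paths are placeholders; the relevant part is passing imagemin-zopfli to gulp-imagemin via the use option:

```js
var gulp = require('gulp');
var imagemin = require('gulp-imagemin');
var zopfli = require('imagemin-zopfli');

gulp.task('images', function () {
  return gulp.src('src/images/**/*.png')
    .pipe(imagemin({
      // Hand PNG optimisation over to Zopfli instead of the default optimiser
      use: [zopfli()]
    }))
    .pipe(gulp.dest('dist/images'));
});
```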

I did a quick comparison between the default imagemin PNG optimiser and Zopfli. Zopfli compressed my sample PNG better (59.8% vs 53.8%), but took longer to do it (866ms vs 101ms).

[Terminal screenshot: gulp-imagemin vs Zopfli compression output]

How to prevent a web page from being indexed by search engines

At work, there are times when we need to publish web pages with embargoed content for review by stakeholders. Often it’s too much hassle to add basic password protection to the page to keep out unwanted visitors. All you really want is to make sure the page doesn’t appear in search results until you want it to. I’ve gotten the panicked phone call to urgently hide a web page too many times, so now I’m going to document how to hide a web page from search engines.

From my 5 minutes of research there are 2 simple ways to prevent search engines like Google, Bing and Yahoo from indexing a page.

  1. Use robots.txt to disallow access to a path on your website
  2. Add a robots meta tag to the web page with noindex

For the first method, a simple text file named robots.txt needs to be added to the root folder of your web site with the following content:
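The path below is the example path used in the next paragraph; swap it for the page you want hidden:

```
User-agent: *
Disallow: /path/of/webpage-to-hide.html
```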

This essentially tells any robot crawling the site not to index the page at /path/of/webpage-to-hide.html. This method is not a good idea for hiding embargoed content, though, as robots.txt is a publicly viewable file: it would just point out what you don’t want people to see.

For the second method, the following tag needs to be inserted between the head tags of the web page you don’t want to be indexed.
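The tag itself is the standard robots meta tag with a noindex value:

```html
<meta name="robots" content="noindex">
```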

Using a robots meta tag by itself is a good option, as someone would need to know the page existed before they could discover that we don’t want it indexed.

For more details on using the robots.txt and robots meta tag, check out the related links.

Related Links

Always specify the radix parameter for JavaScript parseInt

This week I learned how important the radix parameter of JavaScript’s parseInt is. I implemented my own custom date string parser and discovered that older versions of Firefox behaved differently when running the line shown below.
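A zero-padded date component is enough to show the difference; the literal "08" below (think of the month pulled out of a date string) is an illustrative stand-in:

```js
var month = parseInt("08"); // latest browsers: 8; older Firefox treated the leading 0 as octal and returned 0
```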

All the latest browser versions I tested (IE, Chrome, Firefox, Safari) returned 8, but older versions of Firefox returned 0. My solution was to set the radix to 10.
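With the radix specified, the same call behaves consistently everywhere:

```js
var month = parseInt("08", 10); // radix 10 forces decimal parsing, so this returns 8 in every browser
```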

Mozilla Developer Network’s JavaScript parseInt page recommends always specifying the radix parameter to guarantee predictable behaviour.

How to use ASP.NET FileUpload’s PostedFile.InputStream

This week I started working on a new file uploader for the company website admin. The current version uses the HttpPostedFile SaveAs method to save the file to a temporary directory before uploading to the web server using FTP. I decided to make the new version use HttpPostedFile’s InputStream property to transfer the file in one go instead.

Seemed straightforward enough, but I hit a snag.

My first attempt at the code to upload the file looked something like this:
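Roughly, it streamed PostedFile.InputStream straight into an FtpWebRequest. This is a sketch of that approach, not the original code: the server address, credentials and the fileUpload control name are placeholders.

```csharp
// Assumes: using System.IO; using System.Net;
// fileUpload is the ASP.NET FileUpload control on the page.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/images/" + fileUpload.FileName);
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("username", "password");

using (Stream ftpStream = request.GetRequestStream())
{
    // Copy the uploaded file straight from the request's input stream to the FTP stream.
    fileUpload.PostedFile.InputStream.CopyTo(ftpStream);
}
```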

It seemed to work, but it turned out that while the uploaded image file ended up the right size, the image itself was corrupted.

After reading a bunch of MSDN articles, forum posts, blog posts and StackOverflow answers, I discovered that the seek position needed to be reset before reading the stream.

So the code needed to be like this:
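In terms of the sketch above, that means seeking back to the start of InputStream before copying it:

```csharp
using (Stream ftpStream = request.GetRequestStream())
{
    // Reset the seek position to the start of the uploaded file before reading it,
    // otherwise the bytes written to the FTP stream don't match the original file.
    fileUpload.PostedFile.InputStream.Seek(0, SeekOrigin.Begin);
    fileUpload.PostedFile.InputStream.CopyTo(ftpStream);
}
```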