How I Do Page-Level Meta Tags With DocPad

DocPad is a fantastic Node-based tool for generating static web sites.  I have used it on a couple of simple, content-only sites, and it gives you all of the tools you need for templates, programmability and reusable components.  For sites that don’t really need any server-side functionality, it’s a great alternative that leaves you with a set of static files you can host right on Amazon S3.

In my time with DocPad I have found only one missing feature: built-in support for page-level meta tags.  Search engine optimization is constantly changing, and features such as Open Graph and Twitter Card tags are absolutely necessary for the visibility of your content on social media.  If you’ve combed through the DocPad documentation like I have, then you have no doubt come across DocPad’s meta block as a potential solution.  Strangely, though, the meta block does not let you add meta tags to it the way the style and script blocks let you add styles and scripts.

I came up with another solution, using DocPad’s document metadata, that works great for me.  Here’s how it works:

In Your Layout File

Your layout file should define all of the meta tags that will have the same values across the entire site and then pull in the page level values from the document.  Here’s an example of what I do:

<meta property="og:locale" content="en_US" />
<meta property="og:type" content="<%= if @document.url == '/index.html' then 'website' else 'article' %>" />
<meta property="og:title" content="<%= @document.heading %>" />
<meta property="og:description" content="<%= @document.description %>" />

In the example above, the og:locale tag is set to a static value, since that stays the same across all pages.  The og:type tag chooses between website and article depending on whether the page is the home page.  Finally, the og:title and og:description tags pull their values from custom document metadata, which we will add next.

In Your Documents

No configuration is required to add items to a document’s metadata on the fly.  All you have to do is update each file to define the metadata you need, then save and regenerate.  If you haven’t defined these values for a document yet, it’s no problem; the generated output will simply leave them blank in the meantime.  Here’s the top of an example document:


---
title: "Home"
layout: "default"
isPage: true
heading: "My Home Page"
description: "A description for the home page."
---

…Your HTML content here.

When your site is generated, the heading and description metadata will be pulled into the meta tags that we set up in the layout file.  This method works great for me.  I only define information at the page level that is specific to that page, and it’s right where it should be.

.NET Development on Mac

What did you say?

Just a couple of short years ago, the idea of using a Mac for .NET development would have sounded completely insane, but much has changed.  The .NET Framework has since been open-sourced, and Microsoft has been clear about its intent to make .NET development cross-platform, which it has achieved with the release of ASP.NET 5.

Enter Visual Studio Code

It’s no Visual Studio 2015, but Visual Studio Code is an Electron-based editor with support for .NET along with a plethora of other languages.  It has a debugger, IntelliSense and many other features that you would expect from Visual Studio.  While I’m sure that there will be a full, cross-platform version of Visual Studio in the future, Visual Studio Code is a great option for Mac and Linux users.  (Tip:  If you ever need to do a web search for Visual Studio Code-related content, search for VSCode instead.  That will return results specific to Visual Studio Code, rather than Visual Studio.)

Staying Connected

Visual Studio Code has Git support built in, but if you’re like me, you also work on .NET projects over FTP.  While Visual Studio provides FTP support, VSCode does not at this time, and plugin support is still coming in a future release.  I have found a great option for using VSCode over FTP with the help of Transmit.  Transmit allows you to mount an FTP site as a drive; VSCode can then open that drive as a working folder and take over from there, while Transmit handles transferring files in the background.  It is an all-around great FTP client.

Where’s the Remote?

As a part of my switch to the Mac, I was almost embarrassed to discover so late that Microsoft Remote Desktop is so much better on the Mac than it is on Windows.  Remote Desktop on the Mac has a much better way of managing saved connections than the simple drop-down list in the Windows version.  My favorite feature by far is that each open connection displays as a new desktop in Exposé, making it very easy to manage all of your open desktops.

For All Else, There’s Virtualization

Sometimes you just have to work in Windows.  There’s no way of getting around it, and besides, Windows 8.1 is great and only getting better with Windows 10, so why would you want to?  Using Boot Camp you can install Windows on a separate partition on your Mac, but then you need to completely shut down your OS X session whenever you need to do work in Windows.  These days that just isn’t necessary, as virtualization is as good as working on a machine natively.  With Windows installed in VirtualBox, an adequate amount of memory assigned and the VM in full-screen mode, I can hardly tell the difference from working directly on a Windows machine.

I’d love to hear about your experiences moving to the Mac for development.  Leave a comment below.

CodeCalculated Web Site Launch!

For the launch of CodeCalculated, I have built an entirely new web site.  I wanted to throw together a blog post that goes in depth on the technologies that were used, along with a little bit about the development workflow.

The web site was built using DocPad, a fantastic, Node-based static site generator that I am growing to love.  DocPad takes a set of template files, static content and HTML documents, which it combines to create a full-fledged web site.  Your template contains all of the repeating information on your site, such as the navigation, footer and stylesheets.  A document file represents each page of your site.  You just tell DocPad which template to use, provide the page content in your document, and the output is a set of flattened pages ready to be pushed out to your web host.  I keep an instance of DocPad running with a web server pointed to the output directory so that I can view my changes instantly.

The front end is much more basic.  I chose Bootstrap, with some slight style modifications to achieve my look.  I tend to waver back and forth between Bootstrap and Foundation, but for now I’m back to Bootstrap.  I find it much easier to customize, and it contains more of the visual elements that I like to use, such as the glyph icons.  With either framework you get a set of scaffolding components that make responsive design a piece of cake.

At the moment I do not have any true back-end functionality on the web site, so I can host it easily on Amazon S3.  The process for setting up a bucket for hosting static web sites on Amazon S3 is very easy; you can check that out here if you are interested.  In the end, it’s the most affordable static hosting solution available, and you only pay for what you use.

The web site was coded entirely using Atom on the Mac.  Some graphics work was done using Paint.NET.  That’s it: a very simple toolchain for development.

Take a look at the web site at codecalculated.com and let me know if you have any feedback.

Paste XML as Classes Missing In Visual Studio

Just wanted to pass along a quick tip for working with the Paste Special features in Visual Studio.  You may be aware that Visual Studio has options for generating classes based on XML and JSON under the Edit > Paste Special menu.

The Paste Special menu and options in Visual Studio 2013.

These options will generate VB.NET or C# classes that work with the built-in .NET serialization libraries.  However, you may only see the “Paste JSON as Classes” option in the menu in your Visual Studio window.  This is because the XML option is only available in projects targeting .NET Framework 4.5 and above.  Change your project’s target framework version and the option will become available.
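
For a sense of what the feature produces, here is a rough, hand-simplified sketch.  The “order” document and the class shape below are purely illustrative; the actual generated code mirrors whatever XML you paste, but it is decorated with System.Xml.Serialization attributes and plugs into XmlSerializer in the same way.

using System.IO;
using System.Xml.Serialization;

// A simplified stand-in for the generated classes (the real output will differ in detail).
[XmlRoot("order")]
public class Order
{
    [XmlAttribute("id")]
    public int Id { get; set; }

    [XmlElement("item")]
    public string Item { get; set; }
}

public static class OrderReader
{
    public static Order Load(string path)
    {
        // The generated classes work directly with the built-in XmlSerializer.
        var serializer = new XmlSerializer(typeof(Order));
        using (var stream = File.OpenRead(path))
        {
            return (Order)serializer.Deserialize(stream);
        }
    }
}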

If you are not able to upgrade your project, you could always create a new .NET 4.5 project, generate the classes there and then copy them into the destination project.  You may need to correct some version-specific errors, but this option will still get you most of the way there.

SuperSecretary

The following is the beginning of a series of posts with detailed information on my open-source personal projects.  The intention is to provide information about each application, its development, design decisions and lessons learned.  Enjoy!

Overview

I don’t recall exactly when I started working on SuperSecretary, but it began when I looked at a folder of assorted photos on my machine and set out to find a way to keep them under control. I looked for software to sort photos by date taken and couldn’t find anything that met my criteria. It didn’t seem that there were any options that were user-friendly, and there certainly weren’t any that were free.

I started building SuperSecretary specifically as an app to manage photos, but quickly realized that the concept could be expanded to manage all types of files. There are even some features, such as sorting music files by ID3 tags, that are on my to-do list for future versions.

Design

I knew from the beginning that I wanted the application to eventually support plugins so that users could handle files in any manner they deemed necessary. This led me to the creation of the “Handler” in SuperSecretary. It is essentially a simple, verbatim implementation of the Strategy pattern. Each handler has a Do method that receives the path to the file being acted upon, along with some options that have been selected by the user. The handler returns the name of the folder that the file should be sorted into.

For example, when the user chooses to sort photos by the Date taken attribute, the system uses a DateTakenHandler. The sorting engine loops through the files that need to be sorted and runs the DateTakenHandler for each one. The DateTakenHandler retrieves the Date taken property from the EXIF data and formats it based on the format selected by the user. When the user chooses to sort based on multiple attributes, each handler returns its result and the system moves on to the next handler. The handlers all implement the same interface, which is what allows a plugin system to be implemented.
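
To make the design concrete, here is a rough sketch of that contract. The real interface in SuperSecretary may differ; the Do signature, the options dictionary and the EXIF helper below are assumptions for illustration only.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Globalization;
using System.Text;

// Sketch of the handler contract: each handler turns one file into one folder name.
public interface IHandler
{
    string Do(string filePath, IDictionary<string, string> options);
}

public class DateTakenHandler : IHandler
{
    public string Do(string filePath, IDictionary<string, string> options)
    {
        // Read the EXIF "Date taken" value and format it with a user-selected format string.
        DateTime dateTaken = ReadDateTaken(filePath);
        string format = options.ContainsKey("DateFormat") ? options["DateFormat"] : "yyyy-MM-dd";
        return dateTaken.ToString(format);
    }

    private static DateTime ReadDateTaken(string filePath)
    {
        // EXIF property 0x9003 holds "Date taken" in the form "yyyy:MM:dd HH:mm:ss".
        using (var image = Image.FromFile(filePath))
        {
            var property = image.GetPropertyItem(0x9003);
            string raw = Encoding.ASCII.GetString(property.Value).TrimEnd('\0');
            return DateTime.ParseExact(raw, "yyyy:MM:dd HH:mm:ss", CultureInfo.InvariantCulture);
        }
    }
}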

The only thing I dislike about this design is that it is not the most efficient option. The system loops through each file, then loops through each of the handlers and acts on the result before moving on to the next file. However, some operations are more efficient when grouped together. For example, if the user chooses to sort on two EXIF attributes, say Camera Model and Camera Maker, it makes sense to retrieve that information in a single read. Batching those operations together, though, would make the system less extensible, so ultimately I chose the more modular approach.
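
Conceptually, the engine’s main loop looks something like the sketch below. It builds on the handler interface sketched above and is a simplification, not the actual SuperSecretary code.

using System.Collections.Generic;
using System.IO;

public static class SortingLoopSketch
{
    public static void Sort(IEnumerable<string> files, IEnumerable<IHandler> handlers,
        IDictionary<string, string> options, string rootFolder)
    {
        foreach (var file in files)
        {
            // Each selected handler contributes one level of the destination path.
            var parts = new List<string> { rootFolder };
            foreach (var handler in handlers)
            {
                parts.Add(handler.Do(file, options));
            }

            string destination = Path.Combine(parts.ToArray());
            Directory.CreateDirectory(destination);
            File.Move(file, Path.Combine(destination, Path.GetFileName(file)));
        }
    }
}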

Development

I started development on a proof of concept right away. Since the application was meant to be a small utility that I would use myself, I did not put much effort into researching features or deciding what options to provide right off the bat. I kept the user interface as compact and simple as possible.

After completing the proof of concept, I began to think about the bigger picture of all of the things that could be possible. I pondered a plugin system and the possibility of multiple interfaces, including a console front end for automating the application as a scheduled task. I even started building a WinRT front end, which didn’t go so well. More on that in another post.

All of these end-user-focused features led to many development decisions. I refactored the application into two projects: a core library and the Windows Forms application. I focused on keeping the Windows Forms view as thin as possible and moved all of the logic into the core library.  I also moved the detailed output of each run to an update event that a front end can subscribe to in order to display information in the UI, a log file, or whatever manner suits the target platform.
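
A minimal sketch of that update event, using names of my own choosing rather than the actual SuperSecretary API, might look like this:

using System;

public class EngineUpdateEventArgs : EventArgs
{
    public EngineUpdateEventArgs(string message) { Message = message; }
    public string Message { get; private set; }
}

public class SortEngine
{
    public event EventHandler<EngineUpdateEventArgs> Update;

    protected void OnUpdate(string message)
    {
        var handler = Update;
        if (handler != null)
        {
            handler(this, new EngineUpdateEventArgs(message));
        }
    }
}

// Each front end decides how to surface the messages, for example:
// engine.Update += (s, e) => Console.WriteLine(e.Message);        // console front end
// engine.Update += (s, e) => outputTextBox.AppendText(e.Message); // Windows Forms view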

Packaging

Releasing a desktop application as a product was an entirely new concept for me. All applications need a few common elements, one of which is a deployment package. I had very little knowledge of creating a Windows installer. I really only knew that Visual Studio Installer Projects had recently been discontinued and that I needed to find a new option.

I discovered the WiX Toolset, a fantastic framework for building MSI installers. I admit that my implementation is very simple, but to this point I have not run into anything that I needed that I could not achieve using WiX.

The other common element that an application needs is assets and branding. I come from a primarily web-focused background, working as part of a team, so I had never had complete responsibility for creating the assets for a project. Icons, logos, screenshots and information were all things that I had never created and compiled from scratch. I am not a designer, so I tackled those to the best of my ability. Paint.NET became my friend.

Distribution

I initially built the Razium web site specifically to host an installer for SuperSecretary. I created the site in a format focused on end users as the target audience, rather than on developers. I will do a post in the future entirely focused on the creation of the Razium site, since there are some interesting tidbits there.

Originally I hosted the files there as well, but I have since moved them to SourceForge. Hosting on SourceForge obviously requires that your application be open source, but for those that are, it provides the added benefits of handling bandwidth costs and the exposure that you get from being included in their library. In the first day on SourceForge I had more downloads than in the whole time that SuperSecretary was up on the Razium site.

Thanks for reading.  For more information on SuperSecretary, or to download version 1.1, click here.  To view the source code or fork the project, visit the GitHub repository.

Razium

I have taken a little bit of time away from updating the blog lately to work on a few personal projects. Some of these were ideas that started years ago and never got into any kind of working order; others were utilities that I wanted to create simply because there was no free option available. A few of these projects have been open sourced and in development for some time. A glance at my GitHub history shows that my development schedule is a little scattered, to say the least. A good summary would be that I work on what I want when I want to, and I develop selective amnesia about the rest of the projects until I find my way back to them. After all, they are personal projects, right?

To break from my habits, I have been putting some effort into seeing a couple of these projects through to usable applications. I have done that with the 1.0 release of SuperSecretary, my file-sorting application, and I am near that point with MembershipManager, my ASP.NET Membership utility. Most developers are used to relying on GitHub for open source projects, but since these applications are more than just developer tools, it makes sense to have something more than a GitHub repository where the source code can be pulled and built. Enter Razium.

Razium is a site where I will provide more end-user-targeted information about these projects. This may be lists of features, screenshots or comparisons, depending on the project. It will also be where your average user goes to download compiled binaries and installers for the applications (via SourceForge). The site is very basic at the moment and only contains detailed information for SuperSecretary, but it will be evolving over the coming months.

In the near future, I will be posting blog entries about each of my open source projects. I hope to provide a brief overview of the current state of each of the projects with some challenges and lessons learned from development to date. I am excited to document and share the knowledge that I have gained throughout the process of turning an open source project into a product. Stay tuned!

Automatically Resize a Facebook Page Tab App’s Height to Prevent Scrolling

Facebook Page Tab Apps allow you to show any web content in an app that displays on your Facebook page.  This could be anything from a contact form to a survey, or even a newsletter signup form.  The content is displayed on Facebook through an IFrame, but it resides on your server, allowing you to connect to any necessary databases or web services to create a rich user experience.

If you have created a Page Tab App, you know that they are restricted to a maximum width of 810 pixels, but no maximum height is specified in Facebook’s documentation.  During testing, you may find that you have to scroll both in the Facebook window and in the IFrame to view all of your app’s content, resulting in very unfriendly behavior for the end user.

With a small snippet of JavaScript and HTML, you can automatically resize your Page Tab App’s IFrame to fit your content.  You just need to include Facebook’s JavaScript SDK and make one call to its Canvas API.

Here is the code:

<div id="fb-root"></div>
<script>
  window.fbAsyncInit = function() {
    FB.init({
      appId  : '{PASTE_YOUR_APP_ID_HERE}',
      status : true,
      xfbml  : true
    });
    FB.Canvas.setAutoGrow();
  };

  (function(d, s, id){
    var js, fjs = d.getElementsByTagName(s)[0];
    if (d.getElementById(id)) { return; }
    js = d.createElement(s); js.id = id;
    js.src = "//connect.facebook.net/en_US/all.js";
    fjs.parentNode.insertBefore(js, fjs);
  }(document, 'script', 'facebook-jssdk'));
</script>

Just include this code on your page and Facebook will automatically resize your app to the correct height, so you no longer have to scroll inside the IFrame. The great thing about the setAutoGrow method is that it sets a timer that resizes your app’s canvas at a 100ms interval, so it will keep up if you load any dynamic content on your page. Nice and easy!

Manually Repairing Crashed Tables for WordPress

Any number of server issues can cause your MySQL tables to crash, requiring your instance of WordPress to be repaired.  You may see this issue emerge in the form of the error message “wp_options table marked as crashed”, though it can occur with any table, not just wp_options.

At this point, WordPress will try to walk you through the use of its own database repair tool, which requires first adding this line to your wp-config.php file:

define('WP_ALLOW_REPAIR', true);

This will allow the tool to run, and you should be able to repair your database with no trouble.  However, you may instead see the message “Failed to repair the wp_options table. Error: Table is marked as crashed”.  What happens when the WordPress repair tool is unable to fix the crashed table?  If you have phpMyAdmin on your database server (you likely do), you can repair the table manually.

Log in to phpMyAdmin and select the table you need to repair from the left-hand navigation.  On the details page for that table, scroll down to the bottom and you will see a section titled “Table Maintenance”.

The Repair Table Link in phpMyAdmin

From the “Table Maintenance” section, click the “Repair table” link.  You should see a message that the table was repaired successfully.  You will need to do this individually for any table that the WordPress repair tool reports it is unable to repair.  Once this is complete, you will be able to access your WordPress instance again.

Thanks for reading!  Feel free to leave a question or a comment.

Generating an Image of a PDF Page

I recently completed a project that required a thumbnail image to be generated automatically from the first page of every PDF file uploaded to the system.  I was rather surprised to find that there was no drop-in solution for such a thing.  There are many libraries out there that can create a PDF file from HTML content or an image, but no standalone libraries that can go from a PDF to an image.  After testing a couple of methods, I found what I believe to be the easiest and least invasive way to implement it in a web application.

What You Will Need

To generate images from PDF in your project, you will need a couple of things.

Ghostscript, a set of libraries for working with PDF files.  You will need to download the version specific to your environment (32-bit or 64-bit).  You can download those here.

GhostscriptSharp, a wrapper for using the Ghostscript libraries in .NET.  You can download it here.  It is written in C#, so if you are using VB.NET you will need to configure a code sub-directory in your Web.config to use the file in your project.

Setup

You will need to do a couple of things to get the Ghostscript components functioning before you write any code.  First, extract the gsdll file to a location on your system (either gsdll32.dll or gsdll64.dll, depending on your CPU).

Then, you need to modify GhostscriptSharp.cs to specify the path to the Ghostscript library that you extracted.  Look for this code starting at line 12 of the GhostscriptSharp.cs file:

#region Hooks into Ghostscript DLL
[DllImport("gsdll64.dll", EntryPoint = "gsapi_new_instance")]
private static extern int CreateAPIInstance(out IntPtr pinstance, IntPtr caller_handle);

[DllImport("gsdll64.dll", EntryPoint = "gsapi_init_with_args")]
private static extern int InitAPI(IntPtr instance, int argc, string[] argv);

[DllImport("gsdll64.dll", EntryPoint = "gsapi_exit")]
private static extern int ExitAPI(IntPtr instance);

[DllImport("gsdll64.dll", EntryPoint = "gsapi_delete_instance")]
private static extern void DeleteAPIInstance(IntPtr instance);
#endregion

You will need to change the paths in the DllImport attributes to the location of the file on your machine.  The DllImportAttribute only accepts a constant string, so you cannot pass in a variable holding the path or use a Web.config value.  Unfortunately, that also prevents you from using Server.MapPath to build a path to your project’s bin folder.
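
For example, if you extracted the 64-bit DLL to C:\gs (an example location, not a requirement), the first import would become:

[DllImport(@"C:\gs\gsdll64.dll", EntryPoint = "gsapi_new_instance")]
private static extern int CreateAPIInstance(out IntPtr pinstance, IntPtr caller_handle);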

The Code

The actual implementation of the image generation is very simple once you have reached this point.  You just need to make a call to the GhostscriptWrapper.GeneratePageThumb method with the path to the PDF and the path where the image should be saved.  You also specify the page number you want to generate from and the height and width of the final image.  A call to GeneratePageThumb might look like this:

// Creates a 100 x 100 thumbnail of page 1.
GhostscriptWrapper.GeneratePageThumb(pdfFileName, outputFileName, 1, 100, 100);

You will also need to import the GhostscriptSharp namespace in your code file.  That is all there is to it!  After running your application, you will see the image file in the specified path.
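
Putting it together, a minimal helper might look like this (the class and method names here are mine, purely for illustration):

using GhostscriptSharp;

public static class PdfThumbnailer
{
    // Creates a 100 x 100 JPG thumbnail from page 1 of the given PDF.
    public static void CreateThumbnail(string pdfFileName, string outputFileName)
    {
        GhostscriptWrapper.GeneratePageThumb(pdfFileName, outputFileName, 1, 100, 100);
    }
}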

It’s important to note that GeneratePageThumb is a shortcut method that will only generate a JPG image.  If you need more control over the output of your image, you will need to use the GenerateOutput method and pass in a GhostscriptSettings object that contains all of your required values.  The GhostscriptSharp link above provides more detailed examples if you need them.

Thanks for reading!  If you have any troubles with the process, feel free to leave a comment below.

Fixing reCAPTCHA.net 404 Errors

Last week, Google decommissioned the reCAPTCHA API components that were hosted on recaptcha.net.  According to reCAPTCHA support, the change was supposed to have occurred back in April, but for one reason or another the site has only now been replaced with a 404 error.

The good news is that Google has not made any changes to the reCAPTCHA API itself, so you just need to change the reCAPTCHA path from the old location to the new location hosted with Google.

In your application, all instances of this URL:

http://api.recaptcha.net

need to be replaced with this:

http://www.google.com/recaptcha/api

You will need to change all references to that URL, including the recaptcha.js or recaptcha_ajax.js files and the /verify URL.  It is as simple as that.  Keep in mind that if your site uses SSL, you need to change the “http” in the URL to “https”.

Many Joomla users have been affected by this problem as well.  Luckily, a proposed fix has already been committed to Joomla’s GitHub repository.  To view those changes or to download the updated file, click here.  If you’re not confident enough to fix the issue on your own, a formal patch should be on its way from Joomla soon.

Feel free to leave a comment if you have any questions.  Thanks for reading!