I was working on setting up a DRBD cluster to use for NFS storage for VMware. Although I had done this numerous times on Gentoo-based distributions, this was the first time I was using CentOS. Getting DRBD installed and configured was pretty simple. In this example, /dev/sdb is my physical (underlying) device.

DRBD

First step is to add the ELRepo repository which contains the packages for DRBD.

rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

Next do the install.

yum install -y kmod-drbd84 drbd84-utils

Now we can configure our DRBD resource.

Improving Performance

At first, the network performance was poor even after raising the network MTU to 9000. We were averaging about 40MB/s, less than a third of the roughly 125MB/s theoretical maximum of our 1Gb network.
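
For reference, a 1Gb link tops out at roughly 125MB/s of raw throughput before protocol overhead. The arithmetic (my numbers, not from the original benchmark run):

```shell
# Theoretical ceiling of a 1Gb link, and the fraction we were achieving.
awk 'BEGIN {
  ceiling = 1000 / 8                       # 1 Gb/s = 1000 Mb/s = 125 MB/s
  printf "ceiling:  %.0f MB/s\n", ceiling
  printf "achieved: %.0f%%\n", 40 / ceiling * 100
}'
```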

version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04

 1: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:45725696 dw:45724672 dr:0 al:0 bm:0 lo:2 pe:2 ua:1 ap:0 ep:1 wo:f oos:5196995292
        [>....................] sync'ed:  0.2% (5075188/5081664)M
        finish: 36:11:07 speed: 39,880 (39,224) want: 50,280 K/sec

At that speed, the initial sync was going to take 36+ hours!!! But after a little bit of tweaking of the configuration based on our underlying hardware, we achieved a 2.5x performance increase.
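
That estimate is easy to sanity-check from the status output above: the oos: field is the out-of-sync amount in KiB and the speed is in KiB/s, so the remaining time is simply one divided by the other:

```shell
# oos (KiB remaining) / sync speed (KiB/s) / 3600 = hours remaining.
awk 'BEGIN { printf "%.1f hours\n", 5196995292 / 39880 / 3600 }'
# → 36.2 hours, in line with the "finish: 36:11:07" shown by /proc/drbd
```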

version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04

 1: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:13627448 dw:13626368 dr:608 al:0 bm:0 lo:1 pe:0 ua:1 ap:0 ep:1 wo:d oos:5026573020
        [>....................] sync'ed:  0.3% (4908760/4922068)M
        finish: 13:36:03 speed: 102,656 (89,644) want: 102,400 K/sec
2.5x performance increase!

That’s MUCH better! The little blips in the graph are from when I was playing around with settings. In the end, the initial sync still took 11 hours for the 5TB disk to complete.

Below is the final result of our configuration file: /etc/drbd.d/nfs-mirror.res.

resource nfs-mirror {
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;

    become-primary-on both;
  }

  net {
    protocol C;

    allow-two-primaries;

    after-sb-0pri discard-least-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri violently-as0p;

    rr-conflict disconnect;

    max-buffers 8000;
    max-epoch-size 8000;
    sndbuf-size 512k;
  }

  disk {
    al-extents 3389;

    disk-barrier no;
    disk-flushes no;
  }

  syncer {
    rate 100M;
    verify-alg sha1;
  }

  on host1 {
    device minor 1;
    disk /dev/sdb;
    address 192.168.55.1:7789;
    meta-disk internal;
  }

  on host2 {
    device minor 1;
    disk /dev/sdb;
    address 192.168.55.2:7789;
    meta-disk internal;
  }
}

Now that we had DRBD configured, it was time to set up our NFS servers.

Creating our LVM Volumes

And that’s when the fun began…

Instead of dealing with the complexities of a clustered file system (e.g. OCFS2 or GFS) that would allow a true primary/primary mode, we decided to split the storage in half, with each ESXi host mounting one of the volumes. In the event of a problem with one of the NFS servers, the remaining server could take over the duties of the other, since it had a real-time, up-to-date copy of the other NFS partition containing our VMs. This post doesn’t cover the automatic fail-over of those resources.

Note: A previously built cluster using LXC containers ran an OCFS2 filesystem on top of DRBD. At first glance, OCFS2 ran wonderfully, but then we started having weird out-of-space errors even though there were plenty of free inodes and plenty of actual space. In short, with OCFS2 you need to make sure the applications you intend to run are “cluster-aware” and use the proper API calls for kernel locks, writes, etc.

Setting up LVM volumes with XFS filesystems on top was pretty simple. We’re going to use LVM on top of DRBD; optionally, you could instead run DRBD on top of LVM.

pvcreate /dev/drbd/by-res/nfs-mirror

vgcreate nfs /dev/drbd/by-res/nfs-mirror

lvcreate -l 639980 --name 1 nfs
lvcreate -l 639980 --name 2 nfs

mkfs.xfs /dev/nfs/1
mkfs.xfs /dev/nfs/2
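
A note on those -l 639980 values (my arithmetic, assuming LVM's default 4 MiB physical extent size): lvcreate -l allocates in extents, so each logical volume works out to roughly half of the ~5TB DRBD device:

```shell
# 639980 extents x 4 MiB per extent, expressed in TiB.
awk 'BEGIN { printf "%.2f TiB per volume\n", 639980 * 4 / 1024 / 1024 }'
# → 2.44 TiB each, i.e. the mirror split in half
```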

So far so good. After a quick reboot, we check our drbd status and find the following.

version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04

 1: cs:Connected ro:Secondary/Secondary ds:Diskless/Diskless C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Uh oh! It’s showing the backing disk as missing (Diskless), yet I can see the LVM disk recognized and available. Attempting to force one of the nodes to become primary results in the following dmesg error.

block drbd1: State change failed: Need access to UpToDate data
block drbd1:   state = { cs:Connected ro:Secondary/Secondary ds:Diskless/Diskless r----- }
block drbd1:  wanted = { cs:Connected ro:Primary/Secondary ds:Diskless/Diskless r----- }

I remembered having problems with LVM and DRBD before, so a quick Google search turned up the “Nested LVM configuration with DRBD” documentation. So we make the following change to the filter setting in our /etc/lvm/lvm.conf file.

filter = [ "a|sd.*|", "a|drbd.*|", "r|.*|" ]

The above is a list of regular expressions that LVM uses to decide which block devices to scan for LVM physical volumes. Essentially it says to accept (a) all sd* devices (sda, sdb, sdc, etc.) as well as any drbd device (drbd0, drbd1, etc.), and then to reject (r) everything else.

Another reboot and the problem persists. Since there was no data on the devices yet, I decided to wipe the LVM configuration information.

dd if=/dev/zero of=/dev/drbd1 bs=1M count=1000

And then reboot.

CentOS: LVM & DRBD Incompatibility

Suddenly on reboot, I again see the device doing a sync. DRBD is working again and shows the disk online.

version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04

 1: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:2197564 dw:2196480 dr:604 al:0 bm:0 lo:2 pe:6 ua:1 ap:0 ep:1 wo:d oos:395996
        [===============>....] sync'ed: 84.9% (395996/2592476)K
        finish: 0:00:06 speed: 64,512 (62,756) want: 71,200 K/sec

OK. It’s working again! LVM must be incompatible with DRBD on CentOS?

No. Let’s step back and think this through. We know that LVM scans the block devices looking for configured LVM volumes to initialize. It scans our underlying DRBD device (/dev/sdb), sees the LVM partitions, and maps them. Then DRBD comes along and attempts to grab a handle to the same device, only to find that someone else (LVM) got there first. Hence the disk is unavailable: LVM has it locked.

That makes logical sense. Let’s see if our theory is correct:

[root@host ~]# ls /etc/rc.d/rc3.d/ -la
total 8
lrwxrwxrwx.  1 root root   22 Oct 21 12:47 S02lvm2-monitor -> ../init.d/lvm2-monitor
lrwxrwxrwx   1 root root   14 Oct 22 10:35 S70drbd -> ../init.d/drbd

Yes. LVM (S02) scans and monitors before DRBD is initialized (S70). So how do we fix it…
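
SysV init starts the SNN scripts in lexicographic order, so the numeric prefix alone decides who wins the race. A trivial illustration:

```shell
# rc runs the S* scripts in sorted order; S02 beats S70, so
# lvm2-monitor grabs /dev/sdb before DRBD has a chance to attach it.
printf 'S70drbd\nS02lvm2-monitor\n' | sort
```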

The Solution

One solution would be to start DRBD before LVM is initialized, but that could cause other timing issues, and more importantly, a “yum update” could overwrite our changes to the init scripts. Instead, let’s go back into our /etc/lvm/lvm.conf file and see if we can fix the filter parameter.

Because DRBD’s underlying block device is /dev/sdb, how about we explicitly exclude it from the list? The filter works on a first-match basis: the first regular expression that matches a block device determines the action (accept or reject), and the remaining patterns are ignored. So the correct filter would be:

filter = [ "r|sdb|", "a|sd.*|", "a|drbd.*|", "r|.*|"]

Essentially, the above filter explicitly rejects DRBD’s underlying block device (/dev/sdb), then accepts any SCSI hard disks, followed by DRBD devices, and finally rejects everything else.
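
To convince yourself of the first-match-wins behavior, the filter logic can be emulated outside of LVM. The helper below is hypothetical (grep -E standing in for LVM's regex engine), not LVM code:

```shell
# Emulate LVM's filter: report the action (a=accept, r=reject) of the
# FIRST pattern that matches the device name; later patterns are ignored.
lvm_filter() {
  for rule in "r:sdb" "a:sd.*" "a:drbd.*" "r:.*"; do
    action="${rule%%:*}"
    pattern="${rule#*:}"
    if printf '%s' "$1" | grep -Eq "$pattern"; then
      echo "$action"
      return
    fi
  done
}

lvm_filter sdb     # → r (DRBD's backing disk is skipped)
lvm_filter sda     # → a (other SCSI disks are scanned)
lvm_filter drbd1   # → a (DRBD devices are scanned)
lvm_filter loop0   # → r (everything else is rejected)
```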

After a reboot of both nodes, everything stays up to date and active.

SUCCESS! Now off to finish setting up the NFS storage space…

Good luck and hopefully this helped you solve a diskless/diskless DRBD problem that wasn’t due to network connectivity problems (or an actual failed disk).

Oracle / Sun License Agreement

Many Linux distributions do not include an easy RPM installation of the official Sun/Oracle Java JDK/JRE, requiring users to manually download the RPM from Oracle. Unfortunately, before you download, Oracle requires you to accept their Oracle Binary Code License Agreement for Java SE. Therein lies the problem.

Because most of my Linux servers do not have an X Windows environment installed, I’m forced to use command line tools: curl or wget. A few text-based browsers exist (Lynx, Links), but they’re typically not installed by default and often come with their own additional dependencies. I prefer to keep my servers as minimal as possible; “less is more…”. In the past, I’ve accepted the license on a client machine and then used SCP to transfer the file. Not ideal… but it works. Let’s see if we can do better.

So we browse to the page on our client workstation, accept the agreement, and notice that the URLs for the downloads change: http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz.

PERFECT! Let’s download that:

[user@server ~]# wget http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
--2014-10-14 11:01:45--  http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
Resolving download.oracle.com... 205.213.110.138, 205.213.110.139
Connecting to download.oracle.com|205.213.110.138|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm [following]
--2014-10-14 11:01:45--  https://edelivery.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
Resolving edelivery.oracle.com... 172.226.99.109
Connecting to edelivery.oracle.com|172.226.99.109|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/errors/download-fail-1505220.html [following]
--2014-10-14 11:01:45--  http://download.oracle.com/errors/download-fail-1505220.html
Connecting to download.oracle.com|205.213.110.138|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5307 (5.2K) 
Saving to: jdk-7u67-linux-x64.rpm

100%[==========================================================================================>] 5,307       --.-K/s   in 0s

2014-10-14 11:01:45 (218 MB/s) - jdk-7u67-linux-x64.rpm

It looks like it worked… but if you look closer, you’ll notice that our 100+ MB file is only about 5K in size:

[root@nexus1 t]# ls -la
total 16
drwxr-xr-x. 2 root root 4096 Oct 14 11:01 .
dr-xr-x---. 5 root root 4096 Oct 14 11:01 ..
-rw-r--r--. 1 root root 5307 Mar 20  2012 jdk-7u67-linux-x64.rpm

Viewing the contents of the file shows that we actually downloaded an HTML page that includes the following message:

In order to download products from Oracle Technology Network you must agree to the OTN license terms.

*SIGH*. Nothing is ever easy…

Let’s look at the HTML/JavaScript for that download page and see if we can figure out how it works; perhaps we can reverse engineer it? The interesting piece of code runs when the Accept License Agreement radio button is clicked.

<form name="agreementFormjdk-7u67-oth-JPR" method="post" action="radio" class="lic_form">
  <input type="radio" value="on" name="agreementjdk-7u67-oth-JPR" onclick="acceptAgreement(window.self, 'jdk-7u67-oth-JPR');"> &nbsp;Accept License Agreement&nbsp;&nbsp;&nbsp; 
  <input type="radio" value="on" name="agreementjdk-7u67-oth-JPR" onclick="declineAgreement(window.self, 'jdk-7u67-oth-JPR');" checked="checked"> &nbsp; Decline License Agreement
</form>

A call is made to the acceptAgreement function.

As we wade through the page, which horribly pollutes the global namespace and doesn’t follow any JavaScript best practices, we come across our function:

// Dynamically generated download page for OTN. 
// Aurelio Garcia-Ribeyro, 2012-05-21, based off of pre-existing code for OTN license acceptance
function acceptAgreement(windowRef, part){
	var doc = windowRef.document;
	disableDownloadAnchors(doc, false, part);
	hideAgreementDiv(doc, part);
	writeSessionCookie( 'oraclelicense', 'accept-securebackup-cookie' );
}

So basically, the download links use a handler that looks for a cookie called ‘oraclelicense’. That we can work with: we just need to send that cookie header with our command line request.

Let’s try it using wget:

[user@server ~]# wget --header='Cookie: oraclelicense=accept-securebackup-cookie' http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
--2014-10-14 11:12:39--  http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
Resolving download.oracle.com... 205.213.110.138, 205.213.110.139
Connecting to download.oracle.com|205.213.110.138|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm [following]
--2014-10-14 11:12:39--  https://edelivery.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm
Resolving edelivery.oracle.com... 172.226.99.109
Connecting to edelivery.oracle.com|172.226.99.109|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm?AuthParam=1413303279_659b15372dcaf37e8073becb5f049d60 [following]
--2014-10-14 11:12:39--  http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm?AuthParam=1413303279_659b15372dcaf37e8073becb5f049d60
Reusing existing connection to download.oracle.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 126857158 (121M) [application/x-redhat-package-manager]
Saving to: jdk-7u67-linux-x64.rpm

100%[==========================================================================================>] 126,857,158 28.1M/s   in 4.4s

2014-10-14 11:12:44 (27.5 MB/s) - jdk-7u67-linux-x64.rpm

SUCCESS!!! We now have the real file. Ideally, you would do an MD5 check to verify the contents (if an MD5 file were published…).

For quick reference, the command to set the cookie for the GET operation is:

wget --header='Cookie: oraclelicense=accept-securebackup-cookie' <download url>
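
If you prefer curl, the same cookie header works there too. This is a sketch: the 2014 download URL below has long since been retired by Oracle, so expect it to fail and substitute a current URL.

```shell
# curl equivalent: -L follows Oracle's redirect chain, -O keeps the
# remote filename, -f fails cleanly instead of saving an error page.
URL="http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.rpm"
COOKIE="Cookie: oraclelicense=accept-securebackup-cookie"
curl -fSL -m 30 -H "$COOKIE" -O "$URL" || echo "download failed (URL retired)"
```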

Mocking the HttpContext ServerVariables

As everyone knows, mocking the HttpContext and its associated classes is a nightmare and is best avoided. I recently joined a different team at work that was still running a lot of .NET 1.0 code. Most of the code was poorly designed and highly coupled, having been written primarily by developers without proper object-oriented design training. Calling this code “spaghetti code” would be an insult to spaghetti code.

How bad? Most methods are over 1000 lines long, filled with nested if/else statements and copy/pasted code. Extracting the business logic from one method and class resulted in 15 new classes. Here is a quick example of the code quality.

if (_username.ToLower().PadRight(12, ' ').Substring(0, 7).Equals("demo123")) {
}

Lots of useless string parsing and casting to wade through… but that isn’t the point of this post. Long story short: as I modularize this code, I keep running into the dreaded HttpContext.Current integrated throughout. Before making extensive changes, I wanted unit tests to ensure I wasn’t breaking existing functionality as I modified the code. So the first thing I did was inject the HttpContext as a dependency into the class. Although far from ideal, it at least allows me to run the code outside of IIS.

Here is my test helper to get the HttpContext:

/// <summary>
/// Retrieves an HttpContext for testing.
/// </summary>
/// <returns>An HttpContext for testing.</returns>
internal HttpContext GetHttpContext(string url = "http://127.0.0.1/")
{
  var request = new HttpRequest(String.Empty, url, String.Empty);
  var response = new HttpResponse(new StringWriter());
  var context = new HttpContext(request, response);
  return(context);
}

Unfortunately, the code has numerous references to Request.ServerVariables. If you try to add to this NameValueCollection, you’ll find that it is a read-only collection. Here is the decompiled code:

public sealed class HttpRequest
{
  private HttpServerVarsCollection _serverVariables;
  public NameValueCollection ServerVariables
  {
    get
    {
      if (HttpRuntime.HasAspNetHostingPermission(AspNetHostingPermissionLevel.Low)) {
        return this.GetServerVars();
      }
      return this.GetServerVarsWithDemand();
    }
  }
  private NameValueCollection GetServerVars()
  {
    if (this._serverVariables == null) {
      this._serverVariables = new HttpServerVarsCollection(this._wr, this);
      if (!(this._wr is IIS7WorkerRequest)) {
        this._serverVariables.MakeReadOnly();
      }
    }
    return this._serverVariables;
  }
}

We can’t override ServerVariables since it’s not virtual and there is no setter. Digging deeper, we find an internal HttpServerVarsCollection class with the following Add signature:

public override void Add(string name, string value)
{
  throw new NotSupportedException();
}

The rabbit hole keeps getting deeper. Fortunately we find the AddStatic method which gives us some hope:

internal void AddStatic(string name, string value)
{
  if (value == null) {
    value = string.Empty;
  }
  base.InvalidateCachedArrays();
  base.BaseAdd(name, new HttpServerVarsCollectionEntry(name, value));
}

That looks promising. So let’s try making this work using reflection.

  var field = request.GetType()
                     .GetField("_serverVariables", BindingFlags.Instance | BindingFlags.NonPublic);
  if (field != null) {
    var variables = field.GetValue(request);
    var type = field.FieldType;
    if (variables == null) {
      var constructor = type.GetConstructor(BindingFlags.Instance | BindingFlags.NonPublic, null,
                                            new[] { typeof(HttpWorkerRequest), typeof(HttpRequest) }, null);
      variables = constructor.Invoke(new[] { null, request });
    }
    type.GetProperty("IsReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
        .SetValue(variables, false, null);
    var addStatic = type.GetMethod("AddStatic", BindingFlags.Instance | BindingFlags.NonPublic);
    addStatic.Invoke(variables, new[] { "REMOTE_ADDR", "127.0.0.1" });
    addStatic.Invoke(variables, new[] { "HTTP_USER_AGENT", "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36" });
  }

SUCCESS! But we can do even better. How about we turn this code into extension methods on the HttpRequest object and clean things up a little?

/// <summary>
/// Extension methods for the HttpRequest class.
/// </summary>
public static class HttpRequestExtensions
{
  /// <summary>
  /// Adds the name/value pair to the ServerVariables for the HttpRequest.
  /// </summary>
  /// <param name="request">The request to append the variables to.</param>
  /// <param name="name">The name of the variable.</param>
  /// <param name="value">The value of the variable.</param>
  public static void AddServerVariable(this HttpRequest request, string name, string value)
  {
    if (request == null) return;

    AddServerVariables(request, new Dictionary<string, string>() {
      { name, value }
    });
  }

  /// <summary>
  /// Adds the name/value pairs to the ServerVariables for the HttpRequest.
  /// </summary>
  /// <param name="request">The request to append the variables to.</param>
  /// <param name="collection">The collection of name/value pairs to add.</param>
  public static void AddServerVariables(this HttpRequest request, NameValueCollection collection)
  {
    if (request == null) return;
    if (collection == null) return;

    AddServerVariables(request, collection.AllKeys
                                          .ToDictionary(k => k, k => collection[k]));
  }

  /// <summary>
  /// Adds the name/value pairs to the ServerVariables for the HttpRequest.
  /// </summary>
  /// <param name="request">The request to append the variables to.</param>
  /// <param name="dictionary">The dictionary containing the pairs to add.</param>
  public static void AddServerVariables(this HttpRequest request, IDictionary<string,string> dictionary)
  {
    if (request == null) return;
    if (dictionary == null) return;

    var field = request.GetType()
                       .GetField("_serverVariables", BindingFlags.Instance | BindingFlags.NonPublic);
    if (field != null) {
      var type = field.FieldType;

      var serverVariables = field.GetValue(request);
      if (serverVariables == null) {
        var constructor = type.GetConstructor(BindingFlags.Instance | BindingFlags.NonPublic, null,
                                              new[] { typeof(HttpWorkerRequest), typeof(HttpRequest) }, null);
        serverVariables = constructor.Invoke(new[] { null, request });
        field.SetValue(request, serverVariables);
      }
      var addStatic = type.GetMethod("AddStatic", BindingFlags.Instance | BindingFlags.NonPublic);

      ((NameValueCollection) serverVariables).MakeWriteable();
      foreach (var item in dictionary) {
        addStatic.Invoke(serverVariables, new[] { item.Key, item.Value });
      }
      ((NameValueCollection)serverVariables).MakeReadOnly();
    }
  }
}

You might have noticed that I also created a NameValueCollection extension to modify the IsReadOnly property. Of course, use this with care; “with great power comes great responsibility”. The creator of the NameValueCollection you’re consuming likely set the IsReadOnly property for a reason…

/// <summary>
/// Extension methods for the NameValueCollection class.
/// </summary>
public static class NameValueCollectionExtensions
{
  /// <summary>
  /// Retrieves the IsReadOnly property from the NameValueCollection.
  /// </summary>
  /// <param name="collection">The collection to retrieve the propertyInfo from.</param>
  /// <param name="bindingFlags">The optional BindingFlags to use. If not specified, defaults to Instance|NonPublic.</param>
  /// <returns>The PropertyInfo for the IsReadOnly property.</returns>
  private static PropertyInfo GetIsReadOnlyProperty(this NameValueCollection collection, BindingFlags bindingFlags = BindingFlags.Instance | BindingFlags.NonPublic)
  {
    if (collection == null) return (null);
    return(collection.GetType().GetProperty("IsReadOnly", bindingFlags));
  }

  /// <summary>
  /// Sets the IsReadOnly property to the specified value.
  /// </summary>
  /// <param name="collection">The collection to modify.</param>
  /// <param name="isReadOnly">The value to set.</param>
  private static void SetIsReadOnly(this NameValueCollection collection, bool isReadOnly)
  {
    if (collection == null) return;

    var property = GetIsReadOnlyProperty(collection);
    if (property != null) {
      property.SetValue(collection, isReadOnly, null);
    }
  }

  /// <summary>
  /// Makes the specified collection writable via reflection.
  /// </summary>
  /// <param name="collection">The collection to make writable.</param>
  public static void MakeWriteable(this NameValueCollection collection)
  {
    SetIsReadOnly(collection, false);
  }

  /// <summary>
  /// Makes the specified collection readonly via reflection.
  /// </summary>
  /// <param name="collection">The collection to make readonly.</param>
  public static void MakeReadOnly(this NameValueCollection collection)
  {
    SetIsReadOnly(collection, true);
  }
}

And there you have it: a way to add ServerVariables. Keep in mind that this code is extremely fragile, because it uses reflection to access the internal workings of code we don’t control. Below are examples of using the extension methods.

public class Example
{
  public void Test() 
  {
    string url = "http://127.0.0.1";
    var request = new HttpRequest(String.Empty, url, String.Empty);
    request.AddServerVariable("REMOTE_ADDR", "127.0.0.1");

    // or
    
    request.AddServerVariables(new Dictionary<string, string>() {
      { "REMOTE_ADDR", "127.0.0.1" },
      { "HTTP_USER_AGENT", "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36" }
    });
  }
}

I hope you find this useful and it can save you some time.

Backporting .NET 4.0 Code to .NET 3.5

I’m working on a project to bridge the gap between some legacy infrastructure code and newer infrastructure code built on more recent versions of the .NET Framework. The legacy code is .NET 3.5, while the newer code is a mixture of .NET 4.0 and 4.5; the 4.5 usage is currently limited to modules implementing Windows Identity Foundation (WIF). Thankfully, the bridging code doesn’t require those features, so we can isolate just the relevant infrastructure code.

While writing the bridging code, I needed to compile the relevant portions of the newer infrastructure against .NET 3.5 for compatibility. A couple of problems were encountered while backporting:

Usage of .NET 4.0 specific methods

There were two compilation problems:

String.IsNullOrWhiteSpace()

The first step was to fix the missing IsNullOrWhiteSpace() method. To do this, I simply created my own extension method and related tests.

  /// <summary>
  /// Extensions to the string class.
  /// </summary>
  public static class StringExtensions
  {
    /// <summary>
    /// Indicates whether a specified string is null, empty, or consists only of white-space characters.
    /// </summary>
    /// <param name="value">The string to test.</param>
    /// <returns>true if the value parameter is null or String.Empty, or if value consists exclusively of white-space characters. </returns>
    public static bool IsNullOrWhiteSpace(this string value)
    {
#if NET35
      // The IsNullOrWhiteSpace method was added in .NET 4.0
      return (String.IsNullOrEmpty(value) || (value.Trim().Length == 0));
#else
      return (String.IsNullOrWhiteSpace(value));
#endif
    }
  }

Inside our *.csproj file, I set a constant based on the target framework. There are lots of ways to do this, but the simplest is to hard-code it in the relevant PropertyGroup:

  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug-Net35|AnyCPU'">
    <DebugSymbols>true</DebugSymbols>
    <OutputPath>bin\Debug-Net35\</OutputPath>
    <DefineConstants>DEBUG;TRACE;NET35</DefineConstants>
    <DebugType>full</DebugType>
    <PlatformTarget>AnyCPU</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>

The line of interest is:

<DefineConstants>DEBUG;TRACE;NET35</DefineConstants>

And lastly, we have our tests to make sure our implementation functions correctly:

  [TestClass]
  public class StringExtensionTests
  {
    [TestMethod]
    public void IsNullOrWhiteSpace_With_Null_Returns_True()
    {
      Assert.IsTrue(((string) null).IsNullOrWhiteSpace());      
    }

    [TestMethod]
    public void IsNullOrWhiteSpace_With_EmptyString_Returns_True()
    {
      Assert.IsTrue(String.Empty.IsNullOrWhiteSpace());
    }

    [TestMethod]
    public void IsNullOrWhiteSpace_With_Spaces_Returns_True()
    {
      Assert.IsTrue("   ".IsNullOrWhiteSpace());
    }

    [TestMethod]
    public void IsNullOrWhiteSpace_With_Linefeeds_Returns_True()
    {
      Assert.IsTrue("\r\n".IsNullOrWhiteSpace());
    }

    [TestMethod]
    public void IsNullOrWhiteSpace_With_Whitepace_Returns_True()
    {
      Assert.IsTrue(" \r\n   \t ".IsNullOrWhiteSpace());
    }

    [TestMethod]
    public void IsNullOrWhiteSpace_With_Nonwhitespace_Returns_False()
    {
      Assert.IsFalse("   NO WHITESPACE HERE  ".IsNullOrWhiteSpace());
    }
  }

Fixing the calling code is pretty simple. We add a using directive for our namespace to enable the extension methods, and then convert the static calls to extension-method calls:

Before
String.IsNullOrWhiteSpace(where)
After
where.IsNullOrWhiteSpace()

String.Join()

In .NET 3.5, the String.Join() method has 2 overloads:

Join(String, String[])
Join(String, String[], Int32, Int32)

In .NET 4.0, there are five overloads. The new overloads mostly allow an object (instead of a string) to be passed, along with the ability to use an IEnumerable:

Join(String, IEnumerable<String>)
Join<T>(String, IEnumerable<T>)
Join(String, Object[])
Join(String, String[])
Join(String, String[], Int32, Int32)

Unfortunately, the newer code contained a number of LINQ statements that passed the query result as an IEnumerable<T>. The first fix was simply to add .ToArray() to the end of the IEnumerable:

Before
IEnumerable<T> items;
Func<T, object> output;
String.Join(",", items.Select((item) => (output != null) ? output.Invoke(item) : item))
After
String.Join(",", items.Select((item) => (output != null) ? output.Invoke(item) : item).ToArray())

Unfortunately, that isn’t enough, since we still need to convert our type T to a string. Based on the documentation, and viewing the decompiled source for the method, we can see that T is converted by simply calling .ToString() on each item. So we can fix that by adding .ToString() to each of the ternary results:

String.Join(",", items.Select((item) => (output != null) ? output.Invoke(item).ToString() : item.ToString()).ToArray())

BUT… what if the invoked output or item is null? We can handle that by using Convert.ToString(), which copes with nulls. And instead of applying it to each ternary result, we’ll just convert the final result of the ternary, making the code more readable and maintainable.

String.Join(",", items.Select((item) => Convert.ToString((output != null) ? output.Invoke(item) : item)))

External library dependencies

Now, the next step is getting conditional compilation working for our 3rd-party library references. Many of the tutorials you’ll find show string matching against the $(TargetFrameworkVersion) build property:

<PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
    <DefineConstants>NET35</DefineConstants>
</PropertyGroup>

Ideally, we’d like to do a simple numeric comparison (i.e. $(TargetFrameworkVersion) > 3.5). Unfortunately, because $(TargetFrameworkVersion) is prefixed with a ‘v’, it’s not interpreted as a number. Not to worry: in newer versions of MSBuild, we can make calls into the .NET library.

To start, we’re going to create a PropertyGroup AFTER the existing property groups. MSBuild evaluates properties top-down, so a property referenced before it has been declared evaluates to an empty string; hence the new group must come last. In it, we’ll create our own custom property called TargetFrameworkVersionNumber that we can use in our conditional expressions. The simplest option is to hard-code the value.

<PropertyGroup>
  <TargetFrameworkVersionNumber>2.0</TargetFrameworkVersionNumber>
</PropertyGroup>

While hard-coding the value might work in simple scenarios, it’s tedious to maintain. Ideally, we’d like to derive the value from $(TargetFrameworkVersion). While there are simpler ways to drop the leading ‘v’, such as a Substring() call, what fun would that be? We’re going to use a regular expression, because regular expressions unlock so many more possibilities.

<PropertyGroup>
  <TargetFrameworkVersionNumber>$([System.Text.RegularExpressions.Regex]::Replace($(TargetFrameworkVersion), '[^\d\.]+', '', System.Text.RegularExpressions.RegexOptions.IgnoreCase))</TargetFrameworkVersionNumber>
</PropertyGroup>

The above regular expression, “[^\d\.]+”, simply matches anything that isn’t a digit or a decimal point (period) and replaces those matches with an empty string. If you’re not already using regular expressions in your code, they’re worth their weight in gold to learn, and most modern languages, including JavaScript, support them.
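For illustration, here’s the same substitution outside of MSBuild, as a small JavaScript sketch, showing that the pattern leaves only the digits and decimal point behind:

```javascript
// Strip everything that isn't a digit or a period, mirroring the
// MSBuild Regex.Replace call above (illustrative JavaScript, not MSBuild).
function toVersionNumber(targetFrameworkVersion) {
  return targetFrameworkVersion.replace(/[^\d.]+/g, '');
}

console.log(toVersionNumber('v3.5'));  // → "3.5"
console.log(toVersionNumber('v4.0'));  // → "4.0"
```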

At this point, if we add a Target element with a Message to our *.csproj file, we can output and inspect the values.

<Target Name="BeforeBuild">
  <Message Text="$(TargetFrameworkVersionNumber)" Importance="High" />
</Target>

You should now see the value output during compilation. Next, let’s create our conditional compilation statement.

<ItemGroup Condition=" $(TargetFrameworkVersionNumber) >= 3.5 ">
  <Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\packages\Newtonsoft.Json.4.5.7\lib\net35\Newtonsoft.Json.dll</HintPath>
  </Reference>
</ItemGroup>

Unfortunately, if you run or open this build file, you’ll get an error that you can’t compare the string “” with a number. Even though our value looks like a number, internally MSBuild is treating it like a string. MSBuild is supposed to automatically convert between strings and numbers, but “my mileage varied”… Not to worry, we can use another .NET library function call to convert that string to a number for us.

<ItemGroup Condition=" $([System.Single]::Parse($(TargetFrameworkVersionNumber))) <= 3.5 ">
  <Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\packages\Newtonsoft.Json.4.5.7\lib\net35\Newtonsoft.Json.dll</HintPath>
  </Reference>
</ItemGroup>

Here you see we made another call to $([System.Single]::Parse( )).

Unfortunately, now you’ll get a new error: “The project file could not be loaded. ‘<’, hexadecimal value 0x3C, is an invalid attribute character.” Thankfully, the fix is simple: you just need to encode the ‘<‘ as ‘&lt;’ as follows:

<ItemGroup Condition=" $([System.Single]::Parse($(TargetFrameworkVersionNumber))) &lt;= 3.5 ">
  <Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\packages\Newtonsoft.Json.4.5.7\lib\net35\Newtonsoft.Json.dll</HintPath>
  </Reference>
</ItemGroup>

Now, under each ItemGroup, you can add the custom references that are framework version dependent. You don’t need to repeat the ItemGroup for each Reference you want to add for each framework; just add additional Reference items accordingly. Alternatively, you can use MSBuild’s Choose/When/Otherwise construct to keep both variants together:

<Choose>
  <When Condition=" $([System.Single]::Parse($(TargetFrameworkVersionNumber))) &lt;= 3.5 ">
    <ItemGroup>
      <Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\Newtonsoft.Json.4.5.7\lib\net35\Newtonsoft.Json.dll</HintPath>
      </Reference>
    </ItemGroup>
  </When>
  <Otherwise>
    <ItemGroup>
      <Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\Newtonsoft.Json.4.5.7\lib\net40\Newtonsoft.Json.dll</HintPath>
      </Reference>
    </ItemGroup>
  </Otherwise>
</Choose>

Hopefully that helps you when attempting to backport a library or do conditional compilation in your projects.

Disclaimer: I am neither an expert on, nor a fan of, SharePoint. The simple fact is that I abhor SharePoint.

Unfortunately, I was brought into a project that required a SharePoint list to be updated via an external web service. There are two ways to accomplish this:

  • Use the SharePoint Client Object Model (client.svc) interface and get the rich client interface similar to writing applications that are hosted within SharePoint. This requires that the Microsoft.SharePoint.Client libraries are included or available to your application.
  • Use the WCF Data Services / REST interface (listdata.svc) to interact with the lists on the SharePoint site. Unlike the Client Object Model, you’ll be limited to CRUD (Create, Read, Update, Delete) operations on the list data. Note: If the list columns change, or new lists are added to the site, the service reference will need to be updated since the generated code is type-safe.

For this project, we opted to go with the WCF Data Services approach since we only needed to populate and update existing list data. In Visual Studio it’s easy to add the WCF web service reference, which will create the proxy for you. Basically, take your site URL and append ‘_vti_bin/listdata.svc’ to the end of it.

When creating a new column in SharePoint, you have the following options for the ‘type’:

  • Single line of text
  • Multiple lines of text
  • Choice (menu to choose from)
  • Number (1, 1.0, 100)
  • Currency ($, ¥, €)
  • Date and Time
  • Lookup (information already on this site)
  • Yes/No (check box)
  • Person or Group
  • Hyperlink or Picture
  • Calculated (calculation based on other columns)
  • External Data
  • Managed Metadata

When using the WCF proxy, the Choice and Lookup column types identified above require special treatment for both reading and writing the values. Let’s look at reading these values via the WCF proxy.

For this example setup, we’re going to have a ‘Products’ list with the following columns:

  • Shipping: Choice (Multi-value)
  • Color: Choice (single value)
  • Category: Lookup (Multi-value)
  • Manufacturer: Lookup (single value)

Two additional lookup lists were created called ‘Categories’ and ‘Manufacturers’. Nothing special about these lookup lists, just used the default ‘Id’ and ‘Title’ columns to store our data.

For the purpose of this demo, the SharePoint site name was ‘WCF Test’. In our Visual Studio project, the web service reference was named ‘StoreSite’.

Reading Lists

First we’ll create a little utility function to get our DataContext.

/// <summary>
/// Get the WCF data context proxy.
/// </summary>
/// <param name="url">The optional Url for the sharepoint site listdata.svc endpoint.</param>
/// <returns>The DataContext to operate on.</returns>
static WCFTestDataContext GetDataContext(string url = null)
{
  if (url == null) url = ConfigurationManager.AppSettings["SharePointSiteURL"];

  var context = new StoreSite.WCFTestDataContext(new Uri(url));
  context.Credentials = CredentialCache.DefaultNetworkCredentials;
  return context;
}

The first step is reading the results from the ‘Products’ list. Go ahead and create some sample data manually via the SharePoint site.

/// <summary>
/// Displays the products to the console.
/// </summary>
static void DisplayProducts()
{
  var context = GetDataContext();
  var products = context.Products;
  foreach (var product in products) {
    Console.WriteLine("[{0}] {1} @ {2}", product.Id, product.Title, product.Created);
    Console.WriteLine("  Manufacturer: [{0}] {1}", product.Manufacturer.Id, product.Manufacturer.Title);
    Console.WriteLine("  Color:        {0}", product.Color.Value);
    Console.WriteLine("  Category:     {0}", String.Join(",", product.Category.Select(c => String.Format("[{0}] {1}", c.Id, c.Title))));
    Console.WriteLine("  Shipping:     {0}", String.Join(",", product.Shipping.Select(s => s.Value)));
  }
}

If you run this, you’ll get a System.NullReferenceException. If you trace through the debugger, you’ll find that all of the choice and lookup columns are null and/or don’t contain any items. Essentially, SharePoint is not joining in that data, in order to avoid unnecessary data transfer and keep query performance optimal. We have to explicitly tell SharePoint that we want that data included in our results. At the same time, we’ll make our output code a little more robust with some null checking.

Replace the previous function with the following:

/// <summary>
/// Displays the products to the console.
/// </summary>
static void DisplayProducts()
{
  var context = GetDataContext();
  var products = context.Products.Expand(p => p.Manufacturer)
                                 .Expand(p => p.Category)
                                 .Expand(p => p.Shipping)
                                 .Expand(p => p.Color);
  foreach (var product in products) {
    Console.WriteLine("[{0}] {1} @ {2}", product.Id, product.Title, product.Created);
    if (product.Manufacturer != null) {
      Console.WriteLine("  Manufacturer: [{0}] {1}", product.Manufacturer.Id, product.Manufacturer.Title);
    }
    if (product.Color != null) {
      Console.WriteLine("  Color:        {0}", product.Color.Value);
    }
    if (product.Category != null) {
      Console.WriteLine("  Category:     {0}", String.Join(",", product.Category.Select(c => String.Format("[{0}] {1}", c.Id, c.Title))));
    }
    if (product.Shipping != null) {
      Console.WriteLine("  Shipping:     {0}", String.Join(",", product.Shipping.Select(s => s.Value)));
    }
  }
}

The key in the above code is the .Expand() function.

  var products = context.Products.Expand(p => p.Manufacturer)
                                 .Expand(p => p.Category)
                                 .Expand(p => p.Shipping)
                                 .Expand(p => p.Color);

It basically instructs SharePoint to also return that auxiliary data in the results. With that change, we should now get our expected results. For each choice and lookup field that you want included or populated, you need to include a corresponding .Expand() statement with a lambda selector.

[1] Bosch 10-in Table Saw @ 7/1/2014 11:51:35 AM
  Manufacturer: [6] Bosch
  Color:        Green
  Category:     [8] Tools,[9] Saws
  Shipping:     FedEx,UPS

Creation

To create a new item, we’ll need to create a new ProductsItem. Once your ProductsItem is created, you’ll need to add it to your context, which keeps track of all changes.

var context = GetDataContext();
var product = new ProductsItem() {
  Title = "Bosch 10-in Table Saw"
};
context.AddToProducts(product);

// TODO: Assign additional properties / fields

context.SaveChanges();

The above code snippet will be used for each specialized example. Note: The new product MUST be added to the context (tracked) before you can attach or link the choice and lookup columns.

Modifying Single Choice Columns

Modifying a single choice column is pretty simple. In our case, our ‘Color’ field is a single choice. Essentially, we just need to set the value using the corresponding *Value property. Using the Visual Studio IntelliSense, you’ll see that ProductsItem has both a ‘Color’ and a ‘ColorValue’ property. We can simply set the ‘ColorValue’ property.

// Color: Choice (Single)
product.ColorValue = "Green";

You can get the list of all the available choices with the following:

foreach (var color in context.ProductsColor) {
  Console.WriteLine("{0}", color.Value);
}

Note: You can set the ‘ColorValue’ string to anything; it doesn’t have to exist in the choice list, although the native SharePoint tools and editor will likely not be happy and may lose the custom value on a subsequent edit.

Modifying Multiple Choice Columns

For a multiple choice column, the ‘ProductsItem’ class contains a ‘Shipping’ field of type DataServiceCollection<>, which includes a convenient .Add() method. You might think that you only need to do the following:

var product = new ProductsItem();
var ups = ProductsShippingValue.CreateProductsShippingValue("UPS");
product.Shipping.Add(ups);

Unfortunately, the above won’t generate an error, but neither will your data be saved. Go ahead and try it.

To save this item, we must get an existing ProductsShippingValue which is already being tracked by the context or create a new one and manually attach it to the context.

Use / Retrieve Existing Tracked Context

The following code shows how to query the list of available choices and add values to the multi-choice column.

// Shipping: Choice (Multiple)
var ups = context.ProductsShipping.Where(s => s.Value == "UPS").FirstOrDefault();
var fedex = context.ProductsShipping.Where(s => s.Value == "FedEx").FirstOrDefault();
product.Shipping.Add(ups);
product.Shipping.Add(fedex);
context.AddLink(product, "Shipping", ups);
context.AddLink(product, "Shipping", fedex);

Essentially, we look up one of the available choice values (which is already being tracked), add it to the multi-choice column, and then notify the DataContext that the values are “linked”. Please note that the above code should be made more robust by checking for null values, etc.

Create New Entity and Track It

The above example has the overhead of running a remote query. Since we’re just matching on a predetermined or known string, we can instead manually create our ProductsShippingValue and accomplish the same thing. The only difference is that we need to make our context start tracking our new item, which we accomplish by “attaching” it. Otherwise the code is nearly identical.

// Shipping: Choice (Multiple)
var ups = ProductsShippingValue.CreateProductsShippingValue("UPS");
var fedex = ProductsShippingValue.CreateProductsShippingValue("FedEx");
context.AttachTo("ProductsShipping", ups);
context.AttachTo("ProductsShipping", fedex);
product.Shipping.Add(ups);
product.Shipping.Add(fedex);
context.AddLink(product, "Shipping", ups);
context.AddLink(product, "Shipping", fedex);

Either option works, although I would argue that the latter option, while more code, makes more sense since you’re matching and selecting based on predetermined strings. An enumeration would probably be ideal for this and could be streamlined with some extension method overloads.

Modifying Single Lookup Columns

To modify a lookup field with a single value, we simply set the corresponding ‘*Id’ property to the Id of our lookup value. You can also look up this value dynamically, as the following example demonstrates:

// Manufacturer: Lookup (Single)
var manufacturer = context.Manufacturers.Where(m => m.Title == "Bosch").FirstOrDefault();
product.ManufacturerId = manufacturer.Id;

For brevity, null checks and other exceptions were omitted and should be included in your production code.

Modifying Multiple Lookup Columns

Setting a multiple lookup column is very similar to setting a multi-choice column value. We can either query the existing lookup value which will already be “tracked” by the DataContext or we can manually create our items.

Use / Retrieve Existing Tracked Context

We query the existing lookup values, although we’re really only interested in their Id values.

// Category: Lookup (Multiple)
var tools = context.Categories.Where(m => m.Title == "Tools").FirstOrDefault();
var saws = context.Categories.Where(m => m.Title == "Saws").FirstOrDefault();
product.Category.Add(tools);
product.Category.Add(saws);
context.AddLink(product, "Category", tools);
context.AddLink(product, "Category", saws);

Create New Entity and Track It

We can also create our objects manually, assuming we know their Id fields. When creating the CategoriesItem, the only thing we need to set is the Id field.

// Category: Lookup (Multiple)
var tools = new CategoriesItem() { Id = 8 };
var saws = new CategoriesItem() { Id = 9 };
context.AttachTo("Categories", tools);
context.AttachTo("Categories", saws);
product.Category.Add(tools);
product.Category.Add(saws);
context.AddLink(product, "Category", tools);
context.AddLink(product, "Category", saws);

Remarks

Hopefully this helps save you time. When I first did this, it took me several days of digging, searching and experimenting before I found the right references and ordering. Special thanks to the following post on the MSDN forums, which really helped to get things going in the right direction.

And yes, SharePoint sucks…


Below is the complete example of adding a product with basic error checking:

class Program
{
  /// <summary>
  /// Get the WCF data context proxy.
  /// </summary>
  /// <param name="url">The optional Url for the sharepoint site listdata.svc endpoint.</param>
  /// <returns>The WCFDataContext to operate on.</returns>
  static WCFTestDataContext GetDataContext(string url = null)
  {
    if (url == null) url = ConfigurationManager.AppSettings["SharePointSiteURL"];

    var context = new StoreSite.WCFTestDataContext(new Uri(url));
    context.Credentials = CredentialCache.DefaultNetworkCredentials;
    return context;
  }

  /// <summary>
  /// Displays the products to the console.
  /// </summary>
  static void DisplayProducts()
  {
    var context = GetDataContext();
    var products = context.Products.Expand(p => p.Manufacturer)
                                   .Expand(p => p.Category)
                                   .Expand(p => p.Shipping)
                                   .Expand(p => p.Color);
    foreach (var product in products) {
      Console.WriteLine("[{0}] {1} @ {2}", product.Id, product.Title, product.Created);
      if (product.Manufacturer != null) {
        Console.WriteLine("  Manufacturer: [{0}] {1}", product.Manufacturer.Id, product.Manufacturer.Title);
      }
      if (product.Color != null) {
        Console.WriteLine("  Color:        {0}", product.Color.Value);
      }
      if (product.Category != null) {
        Console.WriteLine("  Category:     {0}", String.Join(",", product.Category.Select(c => String.Format("[{0}] {1}", c.Id, c.Title))));
      }
      if (product.Shipping != null) {
        Console.WriteLine("  Shipping:     {0}", String.Join(",", product.Shipping.Select(s => s.Value)));
      }
    }
  }

  private static void Main(string[] args)
  {
    var context = GetDataContext();
    var product = new ProductsItem() {
      Title = "Bosch 10-in Table Saw"
    };
    context.AddToProducts(product);

    // Color: Choice (Single)
    product.ColorValue = "Teale";
    foreach (var color in context.ProductsColor) {
      Console.WriteLine("{0}", color.Value);
    }

    // Shipping: Choice (Multiple)
    var ups = context.ProductsShipping.Where(s => s.Value == "UPS").FirstOrDefault();
    var fedex = context.ProductsShipping.Where(s => s.Value == "FedEx").FirstOrDefault();
    //var ups = ProductsShippingValue.CreateProductsShippingValue("UPS");
    //var fedex = ProductsShippingValue.CreateProductsShippingValue("FedEx");
    //context.AttachTo("ProductsShipping", ups);
    //context.AttachTo("ProductsShipping", fedex);
    if (ups != null) {
      product.Shipping.Add(ups);
      context.AddLink(product, "Shipping", ups);
    }
    if (fedex != null) {
      product.Shipping.Add(fedex);
      context.AddLink(product, "Shipping", fedex);
    }

    // Manufacturer: Lookup (Single)
    var manufacturer = context.Manufacturers.Where(m => m.Title == "Bosch").FirstOrDefault();
    if (manufacturer != null) {
      product.ManufacturerId = manufacturer.Id;
    }

    // Category: Lookup (Multiple)
    var tools = new CategoriesItem() { Id = 8 };
    var saws = new CategoriesItem() { Id = 9 };
    context.AttachTo("Categories", tools);
    context.AttachTo("Categories", saws);
    //var tools = context.Categories.Where(m => m.Title == "Tools").FirstOrDefault();
    //var saws = context.Categories.Where(m => m.Title == "Saws").FirstOrDefault();
    if (tools != null) {
      product.Category.Add(tools);
      context.AddLink(product, "Category", tools);
    }
    if (saws != null) {
      product.Category.Add(saws);
      context.AddLink(product, "Category", saws);
    }

    Console.WriteLine("Adding new product '{0}'...", product.Title);
    context.SaveChanges();

    DisplayProducts();
  }
}

If you deal with SSL certificates long enough, eventually you’ll run into a trust issue or error regardless of what programming language you are using. Dealing with SSL certificates can be confusing at first, but hopefully this article can simplify a few things or at least get you past the error.

Recently, a user reported the following error when using my node-activedirectory plugin. The error was:

throw er; // Unhandled 'error' event
^
Error: CERT_UNTRUSTED
at SecurePair.<anonymous> (tls.js:1370:32)
at SecurePair.EventEmitter.emit (events.js:92:17)
at SecurePair.maybeInitFinished (tls.js:982:10)
at CleartextStream.read [as _read]
at CleartextStream.Readable.read (_stream_readable.js:320:10)
at EncryptedStream.write [as _write]
at doWrite (_stream_writable.js:226:10)
at writeOrBuffer (_stream_writable.js:216:5)
at EncryptedStream.Writable.write (_stream_writable.js:183:11)
at write (_stream_readable.js:583:24)

Well, what does that mean? At first you might think something is wrong with your code, or perhaps there’s a problem with the library you are using. No, that’s not the problem. The basic problem is that the SSL certificate the remote server is sending you is not “trusted” by your computer, or it has potentially been tampered with (i.e. a man-in-the-middle attack).

What do you mean by “trusted”? I just want my connection encrypted!

Default List of Trusted Certificates
The “trust” is a key part of what makes SSL work. Your computer comes preconfigured to “trust” a number of certificate authorities (CAs). When someone needs an SSL certificate, they typically send a request to a certificate authority to “sign” their certificate, which creates a “chain of trust”: if you trust a certificate, you also trust every certificate below it, i.e. every certificate it has “signed” and verified for integrity. Although your computer was preconfigured with a set of trusted certificate authorities, you can add to or remove from that list as needed. A certificate can also be signed by itself; this is referred to as a self-signed certificate and is common for quick testing and for securing internal resources when a public key infrastructure (PKI) doesn’t exist. There is also a method for revoking bad or compromised certificates, which is beyond the scope of this article; just know that this is typically referred to as a “certificate revocation list”, or CRL for short.

That brief introduction was only a 300,000-foot view and barely does the topic justice, but it should be sufficient background to understand the error. To learn even more, visit the following links:

TL;DR – How Do I Fix It?

To fix the problem, you typically have three options:

  1. Don’t use SSL for the connection. But that’s probably not a good idea…
  2. Disable certificate trust verification in your application framework. Another bad idea, since it makes a man-in-the-middle attack possible. Sometimes this is acceptable for quick testing and proofs of concept, but NEVER do this in production!!!
  3. Verify, import and trust the certificate authority or individual certificate.

Of those options, let’s do it the right way and get our trust properly established.

Getting the certificate

The first step is getting the public certificate (or preferably the certificate authority for that certificate) that you want to trust. You don’t need the private key that goes with that certificate and no admin that knew what they were doing would give it to you anyway. There are a number of different ways to get this information depending on what type of access you have to the server that contains the SSL certificate.

Access To The Server

If you have direct access to the server, you can export the certificate to a file. To get started, follow these steps:

Export Certificate via Microsoft Management Console (mmc).
  1. On the server that has the certificate you want to export, start the Microsoft Management Console. Press Windows-R on your keyboard to bring up the “Run…” command line. Then enter ‘mmc’ and press enter.
  2. File -> Add/Remove snap in…
  3. Choose ‘Certificates’ from the list and click on the ‘Add >’ button.
  4. Choose ‘Computer Account’ and click on ‘Next >’.
  5. Choose ‘Local computer: (the computer this console is running on)’. Click on ‘Finish’.
  6. Find the certificate in the list that you want to export. It is most likely located under ‘Personal > Certificates’ or ‘Trusted Root Certification Authorities > Certificates’. Right click on the certificate you want to export, then choose ‘All Tasks’ -> ‘Export…’.
  7. This starts the Certificate Export Wizard. Click ‘Next >’.
  8. If the certificate also has a corresponding private key, choose ‘No, do not export the private key’. As discussed previously, you do NOT want to export the private key or give it to anyone else. Click on ‘Next >’.
  9. Choose ‘Base-64 encoded X.509 (.CER)’ and click on ‘Next >’. Note: You could choose any of the other formats, depending on where you will be using the certificate. I personally find the Base-64 encoded certificate to be more compatible across platforms.
  10. Choose a location and filename to save the file to. Make sure it ends with either ‘*.cer’ or ‘*.crt’ so that the Windows operating system will recognize the type of file. Click on ‘Next >’.
  11. Click on ‘Finish’.
  12. You should receive a confirmation that the certificate was exported successfully. Click on ‘OK’.
No Access To The Server

If you don’t have direct access to the server, you can still get the public certificate that is presented to you. If you’re trying to get a certificate for an HTTP/HTTPS server, you can easily view the certificate and save it to file. However, if you’re trying to get the SSL certificate from LDAPS or perhaps IMAPS, we’ll need to use the OpenSSL utilities to view the certificates. OpenSSL isn’t installed by default on Windows machines, however if you have access to a Linux server, typically the openssl tools will be available. For Windows, we can download the OpenSSL binaries or use Cygwin to install GNU / POSIX compatible binaries.

OpenSSL Tools

Once you have the openssl binaries available, we can use the following command to view the SSL certificates for any type of SSL connection. The following example views the certificates for an LDAPS (LDAP over SSL) connection on the default port. Please note that Active Directory and LDAP are closely related: Active Directory is Microsoft’s directory service, and LDAP is the protocol used to talk to it.

openssl s_client -showcerts -host remoteserver.domain.name -port 636

Running that command, you should see something similar to the following:

CONNECTED(00000003)
---
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
   i:/C=US/O=Google Inc/CN=Google Internet Authority G2
-----BEGIN CERTIFICATE-----
MIIEdjCCA16gAwIBAgIICNMg30SopiMwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE
BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl
cm5ldCBBdXRob3JpdHkgRzIwHhcNMTQwNTIyMTEyNTU4WhcNMTQwODIwMDAwMDAw
WjBoMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEXMBUGA1UEAwwOd3d3
Lmdvb2dsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCNJSwk
PPpMgw/J/diF5cAqGbmNe/Bih1rLVsfBwJDS3zunxMI1IAhoudccuQd0h4OWYcGc
z1Y8aNvpPz+3qY0GUvQcVGLh8JydQJI8eBlXL9v8J/uK2GBT/37Bkcga94DqpOLG
9n5Fvsd6F87+jpuCyDXW1hv6aNr4uiyFwa7I3HlTSr6BauM+aS0PXUTJSBi0BG73
gJbpTB/MgFlILp3x5bYSpn+3eSdME4EKEq42uy/oVHFrXsgZA6/lmWMiM/Is530x
FJfu0Bz7OgPRYsAiGiGjhPyPUs4oTOQERq2j9cIM4OXHVtZqehESE6noDvlNhptA
R+6lpPoDgQp2O5BjAgMBAAGjggFBMIIBPTAdBgNVHSUEFjAUBggrBgEFBQcDAQYI
KwYBBQUHAwIwGQYDVR0RBBIwEIIOd3d3Lmdvb2dsZS5jb20waAYIKwYBBQUHAQEE
XDBaMCsGCCsGAQUFBzAChh9odHRwOi8vcGtpLmdvb2dsZS5jb20vR0lBRzIuY3J0
MCsGCCsGAQUFBzABhh9odHRwOi8vY2xpZW50czEuZ29vZ2xlLmNvbS9vY3NwMB0G
A1UdDgQWBBQAC/6CV31piwAaimR5f1OK/QdilTAMBgNVHRMBAf8EAjAAMB8GA1Ud
IwQYMBaAFErdBhYbvPZotXb1gba7Yhq6WoEvMBcGA1UdIAQQMA4wDAYKKwYBBAHW
eQIFATAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtpLmdvb2dsZS5jb20vR0lB
RzIuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQB3JODw4UTUX3xJyr55rbd5EIeMQjcs
sKRvH/oJmEcIl1hrOaiNnpEbQis+2N5YR2PMMU825iO30L66hIswPOxfFiBeb1ZM
TWPJG5WXx7fctFPDXbJ3Zjq3cANIaX8Vlu4nSayNEhKnNuZog1YVSrg3Mu3E+8Ln
bzt+pZa9iTH821hu/il3TmRzXndT3dEz+n1XkrT3F9NBL3ZyYceDU5uB9fo7x25H
pLP/8pxIqu3+AcoGqNmJpxSfWlaqKXqd3TZ++edHtgTO5t8KV65GgCQ9+Wl0amtG
odT0vGI0eRPRl6s+Nnk6Aguz4bkRPsYTuEVJdEd3F+f9kxHrVWI0c+J4
-----END CERTIFICATE-----
 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
-----BEGIN CERTIFICATE-----
MIIEBDCCAuygAwIBAgIDAjppMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
YWwgQ0EwHhcNMTMwNDA1MTUxNTU1WhcNMTUwNDA0MTUxNTU1WjBJMQswCQYDVQQG
EwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzElMCMGA1UEAxMcR29vZ2xlIEludGVy
bmV0IEF1dGhvcml0eSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AJwqBHdc2FCROgajguDYUEi8iT/xGXAaiEZ+4I/F8YnOIe5a/mENtzJEiaB0C1NP
VaTOgmKV7utZX8bhBYASxF6UP7xbSDj0U/ck5vuR6RXEz/RTDfRK/J9U3n2+oGtv
h8DQUB8oMANA2ghzUWx//zo8pzcGjr1LEQTrfSTe5vn8MXH7lNVg8y5Kr0LSy+rE
ahqyzFPdFUuLH8gZYR/Nnag+YyuENWllhMgZxUYi+FOVvuOAShDGKuy6lyARxzmZ
EASg8GF6lSWMTlJ14rbtCMoU/M4iarNOz0YDl5cDfsCx3nuvRTPPuj5xt970JSXC
DTWJnZ37DhF5iR43xa+OcmkCAwEAAaOB+zCB+DAfBgNVHSMEGDAWgBTAephojYn7
qwVkDBF9qn1luMrMTjAdBgNVHQ4EFgQUSt0GFhu89mi1dvWBtrtiGrpagS8wEgYD
VR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwOgYDVR0fBDMwMTAvoC2g
K4YpaHR0cDovL2NybC5nZW90cnVzdC5jb20vY3Jscy9ndGdsb2JhbC5jcmwwPQYI
KwYBBQUHAQEEMTAvMC0GCCsGAQUFBzABhiFodHRwOi8vZ3RnbG9iYWwtb2NzcC5n
ZW90cnVzdC5jb20wFwYDVR0gBBAwDjAMBgorBgEEAdZ5AgUBMA0GCSqGSIb3DQEB
BQUAA4IBAQA21waAESetKhSbOHezI6B1WLuxfoNCunLaHtiONgaX4PCVOzf9G0JY
/iLIa704XtE7JW4S615ndkZAkNoUyHgN7ZVm2o6Gb4ChulYylYbc3GrKBIxbf/a/
zG+FA1jDaFETzf3I93k9mTXwVqO94FntT0QJo544evZG0R0SnU++0ED8Vf4GXjza
HFa9llF7b1cq26KqltyMdMKVvvBulRP/F/A8rLIQjcxz++iPAsbw+zOzlTvjwsto
WHPbqCRiOwY1nQ2pM714A5AuTHhdUDqB1O6gyHA43LL5Z/qHQF1hwFGPa4NrzQU6
yuGnBXj8ytqU0CwIPX4WecigUCAkVDNx
-----END CERTIFICATE-----
 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
   i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
-----BEGIN CERTIFICATE-----
MIIDfTCCAuagAwIBAgIDErvmMA0GCSqGSIb3DQEBBQUAME4xCzAJBgNVBAYTAlVT
MRAwDgYDVQQKEwdFcXVpZmF4MS0wKwYDVQQLEyRFcXVpZmF4IFNlY3VyZSBDZXJ0
aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDIwNTIxMDQwMDAwWhcNMTgwODIxMDQwMDAw
WjBCMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UE
AxMSR2VvVHJ1c3QgR2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA2swYYzD99BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9m
OSm9BXiLnTjoBbdqfnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIu
T8rxh0PBFpVXLVDviS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6c
JmTM386DGXHKTubU1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmR
Cw7+OC7RHQWa9k0+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5asz
PeE4uwc2hGKceeoWMPRfwCvocWvk+QIDAQABo4HwMIHtMB8GA1UdIwQYMBaAFEjm
aPkr0rKV10fYIyAQTzOYkJ/UMB0GA1UdDgQWBBTAephojYn7qwVkDBF9qn1luMrM
TjAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjA6BgNVHR8EMzAxMC+g
LaArhilodHRwOi8vY3JsLmdlb3RydXN0LmNvbS9jcmxzL3NlY3VyZWNhLmNybDBO
BgNVHSAERzBFMEMGBFUdIAAwOzA5BggrBgEFBQcCARYtaHR0cHM6Ly93d3cuZ2Vv
dHJ1c3QuY29tL3Jlc291cmNlcy9yZXBvc2l0b3J5MA0GCSqGSIb3DQEBBQUAA4GB
AHbhEm5OSxYShjAGsoEIz/AIx8dxfmbuwu3UOx//8PDITtZDOLC5MH0Y0FWDomrL
NhGc6Ehmo21/uBPUR/6LWlxz/K7ZGzIZOKuXNBSqltLroxwUCEm2u+WR74M26x1W
b8ravHNjkOR/ez4iyz0H7V84dJzjA1BOoa+Y7mHyhD8S
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
issuer=/C=US/O=Google Inc/CN=Google Internet Authority G2
---
No client certificate CA names sent
---
SSL handshake has read 3231 bytes and written 432 bytes
---
New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 2048 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : RC4-SHA
    Session-ID: D915A1902D9384B5E11F80BF76037C61048F24C763802E9C5CE9F56684F713B2
    Session-ID-ctx: 
    Master-Key: DB66D3BC903435A25D3C436D39E4ED2E513731ED7DF3F6BB6AA8C7245ED70433B61546CB0BA3F1465D8B87A978CB5715
    Key-Arg   : None
    Start Time: 1401542147
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---

Note: This example was run against a public Google WWW / HTTPS server.

In the example output above, we see a full certificate authority chain. Typically when working with SSL, you want to trust the certificate authority that signed the actual certificate. You could trust each certificate individually, however this doesn’t scale beyond a handful of servers. Each BEGIN CERTIFICATE / END CERTIFICATE block contains the Base64-encoded version of that SSL certificate. Once you find the certificate you want, copy the BEGIN / END block inclusively and save it to a file with a *.crt or *.cer extension.

-----BEGIN CERTIFICATE-----
MIIEBDCCAuygAwIBAgIDAjppMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
YWwgQ0EwHhcNMTMwNDA1MTUxNTU1WhcNMTUwNDA0MTUxNTU1WjBJMQswCQYDVQQG
EwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzElMCMGA1UEAxMcR29vZ2xlIEludGVy
bmV0IEF1dGhvcml0eSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AJwqBHdc2FCROgajguDYUEi8iT/xGXAaiEZ+4I/F8YnOIe5a/mENtzJEiaB0C1NP
VaTOgmKV7utZX8bhBYASxF6UP7xbSDj0U/ck5vuR6RXEz/RTDfRK/J9U3n2+oGtv
h8DQUB8oMANA2ghzUWx//zo8pzcGjr1LEQTrfSTe5vn8MXH7lNVg8y5Kr0LSy+rE
ahqyzFPdFUuLH8gZYR/Nnag+YyuENWllhMgZxUYi+FOVvuOAShDGKuy6lyARxzmZ
EASg8GF6lSWMTlJ14rbtCMoU/M4iarNOz0YDl5cDfsCx3nuvRTPPuj5xt970JSXC
DTWJnZ37DhF5iR43xa+OcmkCAwEAAaOB+zCB+DAfBgNVHSMEGDAWgBTAephojYn7
qwVkDBF9qn1luMrMTjAdBgNVHQ4EFgQUSt0GFhu89mi1dvWBtrtiGrpagS8wEgYD
VR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwOgYDVR0fBDMwMTAvoC2g
K4YpaHR0cDovL2NybC5nZW90cnVzdC5jb20vY3Jscy9ndGdsb2JhbC5jcmwwPQYI
KwYBBQUHAQEEMTAvMC0GCCsGAQUFBzABhiFodHRwOi8vZ3RnbG9iYWwtb2NzcC5n
ZW90cnVzdC5jb20wFwYDVR0gBBAwDjAMBgorBgEEAdZ5AgUBMA0GCSqGSIb3DQEB
BQUAA4IBAQA21waAESetKhSbOHezI6B1WLuxfoNCunLaHtiONgaX4PCVOzf9G0JY
/iLIa704XtE7JW4S615ndkZAkNoUyHgN7ZVm2o6Gb4ChulYylYbc3GrKBIxbf/a/
zG+FA1jDaFETzf3I93k9mTXwVqO94FntT0QJo544evZG0R0SnU++0ED8Vf4GXjza
HFa9llF7b1cq26KqltyMdMKVvvBulRP/F/A8rLIQjcxz++iPAsbw+zOzlTvjwsto
WHPbqCRiOwY1nQ2pM714A5AuTHhdUDqB1O6gyHA43LL5Z/qHQF1hwFGPa4NrzQU6
yuGnBXj8ytqU0CwIPX4WecigUCAkVDNx
-----END CERTIFICATE-----
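If you need to do this extraction in a script rather than by hand, the BEGIN / END blocks are easy to pull out programmatically. Here’s a small JavaScript sketch (the sample text is a stand-in for real `openssl s_client -showcerts` output):

```javascript
// Extract every BEGIN/END CERTIFICATE block from captured text.
// The non-greedy [\s\S]*? matches across newlines, stopping at each END marker.
function extractPemBlocks(text) {
  var matches = text.match(/-----BEGIN CERTIFICATE-----[\s\S]*?-----END CERTIFICATE-----/g);
  return matches || [];
}

// Stand-in for real s_client output with two certificates in the chain.
var sample = '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' +
             'some other output\n' +
             '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n';

var blocks = extractPemBlocks(sample);
// blocks[0] is the server certificate, blocks[1] the next certificate in the chain;
// save the one you want to a *.crt or *.cer file.
```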

You now have the public certificate of the remote server or certificate authority. Our next step will be to import that certificate into the Trusted Root Certification Authorities store.

Trusting / Importing a Certificate

Once you have your certificate, you can go ahead and import it and trust it. Follow these steps:

  1. In Windows, double-click (or open) the certificate that you extracted earlier. Click on the ‘Install Certificate…’ button.
  2. In the Certificate Import Wizard, click on ‘Next >’.
  3. Choose ‘Place all certificates in the following store’. Then click on the ‘Browse…’ button.
  4. Click on ‘Trusted Root Certification Authorities’ and then click on the ‘OK’ button.
  5. Click on ‘Next >’.
  6. Click on ‘Finish’.
  7. Depending on the certificate, you may receive a security warning asking you to confirm that you have the right certificate. Click on ‘Yes’ to continue.
  8. Click on ‘OK’.

The certificate should now be installed and trusted on your local computer.

Note, the above steps import the certificate for your account only. To add it for the computer (and all users), you would need to use the Microsoft Management Console (MMC) and its ‘All Tasks -> Import…’ option.

Hopefully this has helped. Please note that if you’re using a Java application or doing application development with Java, its trusted keystore information is stored separately from the operating system and is managed through the ‘keytool’ utility. We’ll save that topic for another day…

If you’ve just deployed a .svc file to a server (or your local IIS server) and you get an error that the .svc MIME type isn’t recognized (or the type was blocked), then you need to do the following:

  1. Ensure that the .NET 3.5.1 Framework is installed.
  2. Ensure that “Windows Communication Foundation HTTP Activation” is enabled and installed. This can be accessed via Programs and Features in the Control Panel under “Turn Windows features on or off”.

  3. Execute the following command with elevated privileges (as administrator):
    "%WINDIR%\Microsoft.Net\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -i
         

    When you run the WCF service model registration, your machine web.config file will be updated. Below is output from the above command:

    Microsoft(R) Windows Communication Foundation Installation Utility
    [Microsoft (R) Windows (R) Communication Foundation, Version 3.0.4506.5420]
    Copyright (c) Microsoft Corporation.  All rights reserved.
    
    Installing: Machine.config Section Groups and Handlers (WOW64)
    Installing: Machine.config Section Groups and Handlers
    Installing: System.Web Build Provider (WOW64)
    Installing: System.Web Compilation Assemblies (WOW64)
    Installing: HTTP Handlers (WOW64)
    Installing: HTTP Modules (WOW64)
    Installing: System.Web Build Provider
    Installing: System.Web Compilation Assemblies
    Installing: HTTP Handlers
    Installing: HTTP Modules
    Installing: Protocol node for protocol net.tcp (WOW64)
    Installing: TransportConfiguration node for protocol net.tcp (WOW64)
    Installing: ListenerAdapter node for protocol net.tcp
    Installing: Protocol node for protocol net.tcp
    Installing: TransportConfiguration node for protocol net.tcp
    Installing: Protocol node for protocol net.pipe (WOW64)
    Installing: TransportConfiguration node for protocol net.pipe (WOW64)
    Installing: ListenerAdapter node for protocol net.pipe
    Installing: Protocol node for protocol net.pipe
    Installing: TransportConfiguration node for protocol net.pipe
    Installing: Protocol node for protocol net.msmq (WOW64)
    Installing: TransportConfiguration node for protocol net.msmq (WOW64)
    Installing: ListenerAdapter node for protocol net.msmq
    Installing: Protocol node for protocol net.msmq
    Installing: TransportConfiguration node for protocol net.msmq
    Installing: Protocol node for protocol msmq.formatname (WOW64)
    Installing: TransportConfiguration node for protocol msmq.formatname (WOW64)
    Installing: ListenerAdapter node for protocol msmq.formatname
    Installing: Protocol node for protocol msmq.formatname
    Installing: TransportConfiguration node for protocol msmq.formatname
    Installing: HTTP Modules (WAS)
    Installing: HTTP Handlers (WAS)
    

In the process of working through this problem, I first attempted to add the WCF HTTP Activation feature, which unfortunately corrupted my machine web.config file. That prevented access to the IIS Manager and its associated application pools due to the error:

The configuration section 'system.serviceModel' cannot be read because it is missing a section declaration  

It broke everything .NET related… GREAT!!!

Basically, for some reason my machine web.config was missing the configuration section registrations for the WCF serviceModel. Running the ServiceModelReg.exe tool ensured that those base registrations were entered correctly. Once that was updated, adding the new feature completed successfully.

If after those changes, you are receiving the following error:

Could not load type ‘System.ServiceModel.Activation.HttpModule’ from assembly ‘System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’

This error can occur when there are multiple versions of the .NET Framework on the computer that is running IIS, and IIS was installed after .NET Framework 4.0 or before the Service Model in Windows Communication Foundation was registered. [MSDN]

To fix this error, start a command prompt as an administrator. And run the following commands:

cd %WINDIR%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis.exe -iru
iisreset

On my ActiveDirectory (AD) node.js plugin, a user requested support for LDAP referrals and chasing (#8). Unfortunately, I didn’t have access to a partitioned AD installation to test this.

The user wasn’t able to give me much additional information or test scenarios to work with, so I took it upon myself to install a partitioned AD environment… along with some heavy reading about how Microsoft implemented their referrals and partitioning. I probably still don’t understand it…

One interesting thing is that, by default, Active Directory creates two DNS application partitions (one forest-wide, one domain-wide) plus the configuration partition, which allows those areas to be replicated independently. The resulting referrals look like the following, although you should replace domain.com with your own context.

  • ldap://ForestDnsZones.domain.com/dc=domain,dc=com
  • ldap://DomainDnsZones.domain.com/dc=domain,dc=com
  • ldap://dc.domain.com/CN=Configuration,dc=domain,dc=com

With referral chasing, you basically end up sending the same original query to each of the referrals in order to get “everything”. That means one LDAP query can quickly turn into an “N+1 select” problem, with extra network overhead and slower responses. To get around the problem, I just created a couple of regular expressions to exclude and ignore those referrals.

var defaultReferrals = {
  enabled: false,
  // Active directory returns the following partitions as default referrals which we don't want to follow
  exclude: [
    'ldaps?://ForestDnsZones\\..*/.*',
    'ldaps?://DomainDnsZones\\..*/.*',
    'ldaps?://.*/CN=Configuration,.*'
  ]
};
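As a sketch of how those patterns get applied, here is a hypothetical helper (not part of the plugin’s actual API) that decides whether a referral URI should be chased:

```javascript
// The default exclude patterns from the configuration above.
var exclude = [
  'ldaps?://ForestDnsZones\\..*/.*',
  'ldaps?://DomainDnsZones\\..*/.*',
  'ldaps?://.*/CN=Configuration,.*'
];

// Hypothetical helper: chase the referral only if it matches none of the
// exclude patterns (case-insensitive, since LDAP DNs are case-insensitive).
function shouldChaseReferral(referralUri) {
  return !exclude.some(function (pattern) {
    return new RegExp(pattern, 'i').test(referralUri);
  });
}

var skip  = shouldChaseReferral('ldap://ForestDnsZones.domain.com/dc=domain,dc=com'); // false
var chase = shouldChaseReferral('ldap://dc.domain.com/dc=domain,dc=com');             // true
```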

The other way around this problem is to use the Global Catalog (GC) instead of direct LDAP queries. Essentially the GC just listens on a different port (3268) but any LDAP search requests will be for the entire “forest”.

Who doesn’t love LINQ? Who doesn’t love extension methods in .NET?

Unfortunately, Microsoft could have made things easier for developers by handling nulls gracefully. What do I mean? Take the following code as an example:

IEnumerable<int> numbers = null;
if (numbers.Any()) {
}

As a programmer, you might expect a NullReferenceException from the above code. In fact, because Any() is an extension method, you get an ArgumentNullException, but either way it blows up. Below is the correct way to write that check:

IEnumerable<int> numbers = null;
if ((numbers != null) && (numbers.Any())) {
}

Simple and logical fix, but it’s just “noisy”. When dealing with an extension method, I disagree with how null references were handled in the LINQ libraries. Any extension function should detect null and return early (when possible). I believe that Microsoft’s view and defense is that an IEnumerable should never be “null” but instead be an empty collection.

Below is the decompiled implementation of the .Any() LINQ extension method (courtesy of .Net Reflector v6).

[__DynamicallyInvokable]
public static bool Any<TSource>(this IEnumerable<TSource> source)
{
  if (source == null) {
    throw Error.ArgumentNull("source");
  }
  using (IEnumerator<TSource> enumerator = source.GetEnumerator()) {
    if (enumerator.MoveNext()) {
      return true;
    }
  }
  return false;
}

The following simple change to the above method would remove all of that extra “noise” and make our code easier to read.

[__DynamicallyInvokable]
public static bool Any<TSource>(this IEnumerable<TSource> source)
{
  if (source == null) return(false);
  using (IEnumerator<TSource> enumerator = source.GetEnumerator()) {
    if (enumerator.MoveNext()) {
      return true;
    }
  }
  return false;
}

Of course, the other solution (and recommended best practice) is to not return null from a function that returns an IEnumerable, but to instead return an empty collection. Unfortunately, when dealing with other people’s code or libraries you may not have that luxury. Below is a simple and efficient example of how to NOT return null for an IEnumerable result. Please note that the example is contrived and a String.Split function already exists.

public IEnumerable<string> Split(string input, string value) {
  if (input == null) return(new string[0]);
  return(input.Split(new[] { value }, StringSplitOptions.None));
}

So when you’re writing your own extension methods in .NET, do the world a favor and handle null gracefully.

HTML5

If you’re lucky enough to be able to switch to HTML5, you’ll be able to use the new built-in date picker controls.

<input type="date" id="mydate" value="2014-03-14" />

If you’re viewing this page with an HTML5-capable browser, the input above will render as a native date picker control.

No special jQuery plugins or custom calendar controls necessary! With HTML5, there are a total of three different date and date/time picker controls:

  • date – The date (no timezone).
  • datetime – The date/time (with time zone).
  • datetime-local – The date/time (no timezone).

JavaScript Date/Time Parsing

If you’ve ever worked client-side with JavaScript, you’ll know that any new Date() call returns a Date object that is localized based on the timezone set on the computer. You can also create a new date from an existing date or by parsing a string as follows:

  var date = new Date("Tue Mar 04 2014 16:00:00 GMT-0800");
  console.log(date);

With the above example, assuming you are located in CST (or a different timezone than PST), you’ll find the following output:

Tue Mar 04 2014 18:00:00 GMT-0600 (Central Standard Time)

Notice that the time has automatically corrected itself to the local timezone. OK, that’s good to know, but how does that affect me?

Try the following time:

  var date = new Date("Tue Mar 04 2014 00:00:00 GMT-0000");
  console.log(date);

Assuming you’re in the western hemisphere and you don’t live near the prime meridian, you should see something like the following:

Mon Mar 03 2014 18:00:00 GMT-0600 (Central Standard Time)

Did you catch that? The date has moved back to March 03 instead of March 04 because of the timezone conversion. OK, you get it, enough with the geography and math lessons… why should you care?
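Before moving on, note that the shift above is just the same instant rendered in a different timezone; the underlying epoch milliseconds never change. A quick sketch that holds in any timezone:

```javascript
// Parsing 'GMT-0000' pins the instant to UTC midnight on March 04.
var d = new Date('Tue Mar 04 2014 00:00:00 GMT-0000');

// The epoch value is absolute: it equals UTC midnight March 04 everywhere,
// even though d's local rendering may show March 03 in the western hemisphere.
var sameInstant = (d.getTime() === Date.UTC(2014, 2, 4)); // true
```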

HTML5 Date Input

When using the <input type="date" /> control, the value that is posted and returned from the input control is formatted as YYYY-MM-DD. Since we have a date, we can assume that we can just let the JavaScript Date object parse it for us.

  var date = new Date('2014-03-04');
  // Mon Mar 03 2014 18:00:00 GMT-0600 (Central Standard Time)

What?!?! Why is it March 03 again? Let’s try playing around with the Date object a little bit and see what happens.

  var date = new Date(2014, 2, 4); // Recall that month is 0 based, where 0=January
  // Tue Mar 04 2014 00:00:00 GMT-0600 (Central Standard Time)
  var date = new Date('03/04/2014');
  // Tue Mar 04 2014 00:00:00 GMT-0600 (Central Standard Time)

OK. Now that is what I’m expecting!!! So basically, if we provide the string as YYYY-MM-DD, the Date object parses it as a UTC (GMT) time and then displays it localized. However, if we pass it in as MM/DD/YYYY, it is parsed directly as local time.
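A minimal way to dodge the UTC interpretation is to split the YYYY-MM-DD string yourself and construct the Date from its parts (a quick sketch, not a full parser):

```javascript
// Parse a 'YYYY-MM-DD' string as *local* time by constructing from parts.
// (new Date('2014-03-04') would instead be interpreted as UTC midnight.)
function parseLocalDate(s) {
  var parts = s.split('-');
  return new Date(+parts[0], +parts[1] - 1, +parts[2]); // month is 0-based
}

var d = parseLocalDate('2014-03-04');
// d is local midnight on March 04, in every timezone.
```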

So how do we handle this for our HTML5 input control to get the date we expect? The following naive JavaScript will attempt to parse the input received and create a localized Date object. It will also attempt to parse a standard date in the MM/DD/YYYY format if HTML5 isn’t supported.

/**
 * Parses the date from the string input.
 * @param {Number|Date|string} date The value to be parsed.
 * @param {Function|Date} [defaultValue] The default value to use if the date cannot be parsed.
 * @returns {Date} The parsed date value. If the date is invalid or can't be parsed, then the defaultValue is returned.
 */
function parseDate(date, defaultValue) {
  if (! date) return(getDefaultValue());
  if (date instanceof Date) return(date);
  if (typeof(date) === 'number') return(new Date(date));

  /**
   * Gets the default value.
   * @returns {Date}
   */
  function getDefaultValue() {
    return((typeof(defaultValue) === 'function') ? defaultValue() : defaultValue);
  }

  /**
   * Returns the date if it represents a valid point in time, otherwise null.
   * @returns {Date|null}
   */
  function validDate(value) {
    return(isNaN(value.getTime()) ? null : value);
  }

  var results;
  // YYYY-MM-DD (construct from parts so the date is local, not UTC)
  if ((results = /(\d{4})[-\/\\](\d{1,2})[-\/\\](\d{1,2})/.exec(date))) {
    return(validDate(new Date(results[1], parseInt(results[2], 10) - 1, results[3])) || getDefaultValue());
  }
  // MM/DD/YYYY
  if ((results = /(\d{1,2})[-\/\\](\d{1,2})[-\/\\](\d{4})/.exec(date))) {
    return(validDate(new Date(results[3], parseInt(results[1], 10) - 1, results[2])) || getDefaultValue());
  }
  return(validDate(new Date(date)) || getDefaultValue());
}

So the next time you find yourself struggling with dates and times in JavaScript, be wary of how your input string is formatted and parsed, and of the timezone involved.
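Going the other direction is worth a note too: when pushing a localized Date back into an <input type="date"> control, toISOString() converts to UTC first and can shift the day. A small sketch that formats from the local components instead:

```javascript
// Format a local Date as the 'YYYY-MM-DD' value an <input type="date"> expects.
// Using local components avoids the UTC day-shift that toISOString() can cause.
function toDateInputValue(date) {
  function pad(n) { return (n < 10 ? '0' : '') + n; }
  return date.getFullYear() + '-' + pad(date.getMonth() + 1) + '-' + pad(date.getDate());
}

var value = toDateInputValue(new Date(2014, 2, 4)); // '2014-03-04'
```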