Monthly Archives: June 2014

Avoid embarrassment. Perform testing.

This was originally going to be a post about the evolution of the IT department and how if we wish to stay employed and interested in our jobs we must move along with the rest of the industry.

Then an email arrived in the inbox of a colleague.

First, some background. As you can see from my LinkedIn profile (I hate those things too; unfortunately, until I write more blog posts it’ll be essential to my ongoing employment), I’ve recently started working at The Royal Society of Chemistry after a few years owning and running a small IT consultancy.

As a provider of services when I worked for Avanade, IBM and myself, I adopted a number of provider-to-client behaviours that I believe should be adopted by all IT departments. One of these was performing managed testing, then handing over both the tests and the results to the customer.

I’ll assume that if you’re reading this blog you’re familiar with a software development lifecycle and software testing. I see testing like this:

  • Does this unit do what I want it to do (unit testing);
  • Does this requirement do what we think you want it to do (system testing);
  • Does it do what it used to do (regression testing);
  • Does it do what you want it to do (acceptance testing).

Sometimes we merge these (I’m thinking system and regression) and sometimes we do them informally, but we should do them before moving from one stage to the next. Sometimes we do additional types of testing (load, integration and penetration), but we should always be doing the four types above.

As a provider of services, when my team (or sometimes just me!) handed something over to a customer, we wrote a test plan and executed any tests we thought appropriate. Broadly, though, the test plan included manual and automated tests which covered every area of the application; your requirements help to identify those areas.

Sometimes it was: click here, do this, install this framework. Does it still work? Uninstall that. Does it still work?

Sometimes it was: does this web service return this result?

It depended on the requirements.

In short, though: we had a test plan, we created tests, and we executed them. We then passed everything over to the customer when testing was completed.

Yes, we handed it all over. The tests, what failed, how many bugs, how many cycles. Everything. My main reason for doing this initially was a feeling that the data belonged to the customer: we charged them to create it, we executed it, they owned it.

I must admit, I didn’t think of that as being revolutionary or some extraordinary insight. I still don’t think it is. But if you’re not doing this today for your customers you should be. As I said, they paid for it, and it’ll help them feel a sense of ownership of the product (which they should have!) as well as show how a third party can add value – those tests are a gold mine of useful information for regression, future requirements, and user manuals.

This brings me back to that email that arrived in the inbox of a colleague that prompted this blog post and a slightly ranty tweet.

My colleague (a .NET developer who’s moving into a more TA/BA role; it’s all very woolly) has been working with a third-party developer to create a new application which is going onto our website. We’ve spent a fair bit of cash on this, and they have charged us for 30+ days of system testing.

It arrived, and it didn’t work.

Despite the test plan being charged for, along with someone to execute the tests (30+ days!), they state “we only provide unit test” and “you’re the quality gate”.

Not acceptable. While I’d never advocate going straight from supplier to production, I expect the application to hang together and meet the broad set of requirements we’ve provided (or, if you’re the supplier, the requirements that have been provided to you).

If you’re an IT department you should be behaving in this way already. If you’re an IT department using a third party supplier you need to make sure these steps are performed before letting your users near any new application. Allow time to do it and allow time to check up on your suppliers.

Now, all I have to do is convince the business that they should be the owners of the test plan.

– Alan Smith

A Simple A* Path-Finding Example in C#

A few weeks back, I needed a path-finding solution for a little home project of mine, a game in which a character must move from location to location without walking through walls or other obstacles. A bit of research showed that an algorithm called A* (pronounced “A Star”) underpins most solutions to this problem—not just in games but also in other applications such as satellite navigation devices.

Browse or download the example project

Header Image

There must be thousands of pages on the Internet discussing this topic. The write-up I’ve used most frequently for guidance is Patrick Lester’s article, A* Pathfinding for Beginners. I’d recommend that article as a good place to get to grips with the fundamentals. The Wikipedia article, A* search algorithm, is also a helpful resource.

Before I start, take note that this is a really dumb, stripped-back, bare-bones implementation that won’t always give the best result. It’s very much a learning exercise for me, so it’s likely to be wrong in places. What I hope it will do is provide a working example that’s easy to follow and extend. There are much better example implementations around and I’ll include links to a couple of them at the bottom of the page.

First Steps

I’ll start with a grid of booleans, where false means the location is blocked (e.g. a wall) and true means it’s clear. Then I’ll identify two points on the grid: a start location and a finish location.

Here’s a representation of my sample grid. It’s 7×5 and includes an L-shaped wall with a one-node-wide gap along the bottom.

A simple grid with a start and end location
Figure 1: A simple grid with a start and end location

The above information will be used to initialise the path-finding class. I create a class called SearchParameters to be the container for this information.

public class SearchParameters
{
    public Point StartLocation { get; set; }
    public Point EndLocation { get; set; }
    public bool[,] Map { get; set; }
    ...
}
Note: I’m using the Point structure from the System.Drawing assembly to store grid coordinates.

Internally, the algorithm needs to hold a little more information about each node. It needs to keep a record of a few things as it goes along:

G: The length of the path from the start node to this node.
H: The straight-line distance from this node to the end node.
F: An estimate of the total distance if taking this route. It’s calculated simply using F = G + H.

Figure 2 gives a visual representation of how these values are calculated for the node immediately to the right of the start node. The distance along the path so far (G) is 1 step. Using Pythagoras’ theorem, the estimated distance from here to the end node (H) is 3 steps. Adding these two together gives the total estimated ‘cost’ in taking this path (F) of 4 steps.
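
To make the arithmetic concrete, here’s a minimal, self-contained sketch of those three values for the example above. The coordinates (start at 1,2; finish at 5,2; the node under consideration at 2,2) are taken from the walkthrough, and plain integers are used in place of System.Drawing.Point to keep the sketch standalone:

```csharp
using System;

public static class AStarCosts
{
    // Straight-line distance between two grid locations (Pythagoras' theorem).
    // The Node.GetTraversalCost method used later in the listings plays the same role.
    public static float GetTraversalCost(int fromX, int fromY, int toX, int toY)
    {
        float deltaX = toX - fromX;
        float deltaY = toY - fromY;
        return (float)Math.Sqrt(deltaX * deltaX + deltaY * deltaY);
    }

    public static void Main()
    {
        // Locations from the walkthrough: start at 1,2; finish at 5,2;
        // the node under consideration at 2,2 (immediately right of the start)
        float g = GetTraversalCost(1, 2, 2, 2); // path length so far: 1
        float h = GetTraversalCost(2, 2, 5, 2); // straight-line estimate: 3
        float f = g + h;                        // total estimated cost: 4

        Console.WriteLine("G={0}, H={1}, F={2}", g, h, f); // prints G=1, H=3, F=4
    }
}
```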

Calculating G, H and F
Figure 2: Calculating G, H and F

The calculations are repeated for each adjacent node.

The 'total estimated cost' (F) is calculated for each adjacent node
Figure 3: The ‘total estimated cost’ (F) is calculated for each adjacent node

Now that the F-value for each node is known, it can be used to work out which path to try out first by putting all the options in a list and sorting them by F. Clearly, the node at grid location 2,2 with F=4 is the best bet.

Adjacent nodes sorted by F-value
Figure 4: The adjacent nodes are sorted in ascending order of F-value

This seems like a good place to introduce the Search(...) method that’s core to the implementation. The code listing below is incomplete but I’ll build upon it later. It shows the method taking currentNode as its starting point, getting a list of any adjacent nodes that are walkable, and then sorting the list by F-value before iterating over its contents.

private bool Search(Node currentNode)
{
    ...
    List<Node> nextNodes = GetAdjacentWalkableNodes(currentNode);
    nextNodes.Sort((node1, node2) => node1.F.CompareTo(node2.F));
    foreach (var nextNode in nextNodes)
    {
        ...
    }
    ...
}

So now the process is repeated using the node at location 2,2. However, there’s some more information that needs to be recorded about each node before moving on much further.

Any nodes that have been added to an ‘adjacent nodes’ list like the one above are marked as ‘Open’, i.e. they’re considered an open option for the search. However, as soon as a node becomes part of a path, it’s marked as ‘Closed’ and it remains closed even if that path ends up being discarded. Marking a node as closed means it won’t be considered again.

The search moves first to the node with the lowest F-value
Figure 5: The search moves first to the node with the lowest F-value

In addition, every ‘Open’ node is given a reference to its ‘Parent’ node so that the path to get there can be traced back to the start. The starting node doesn’t have a parent.

So now the node needs to store the following information:

G: The length of the path from the start node to this node.
H: The straight-line distance from this node to the end node.
F: Estimated total distance/cost.
Open/closed state: Can be one of three states: not tested yet; open; closed.
Parent node: The previous node in this path. Always null for the starting node.
Is walkable: Boolean value indicating whether the node can be used.
Location: Keep a record of this node’s location in order to calculate distance to other locations.

In code, it looks something like this:

public class Node
{
    public Point Location { get; private set; }
    public bool IsWalkable { get; set; }
    public float G { get; private set; }
    public float H { get; private set; }
    public float F { get { return this.G + this.H; } }
    public NodeState State { get; set; }
    public Node ParentNode { get { ... } set { ... } }
    ...
}

public enum NodeState { Untested, Open, Closed }
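
The listing above elides the bodies of the ParentNode property. One reasonable way to implement the setter (an assumption on my part; the downloadable project may differ) is to recalculate G whenever the parent changes, since G is the length of the path via that parent. A cut-down, self-contained Node illustrates the idea:

```csharp
using System;

public class Node
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public float G { get; private set; }

    private Node parentNode;

    public Node(int x, int y) { this.X = x; this.Y = y; }

    // Setting the parent recalculates G: the cost of the path to this node
    // is the parent's G plus the cost of the step from the parent to here
    public Node ParentNode
    {
        get { return this.parentNode; }
        set
        {
            this.parentNode = value;
            this.G = this.parentNode.G + GetTraversalCost(this, this.parentNode);
        }
    }

    private static float GetTraversalCost(Node from, Node to)
    {
        float dx = to.X - from.X;
        float dy = to.Y - from.Y;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }

    public static void Main()
    {
        var start = new Node(1, 2);  // G defaults to 0 at the start node
        var next = new Node(2, 2);
        next.ParentNode = start;     // G becomes 0 + 1 = 1
        Console.WriteLine(next.G);   // prints 1
    }
}
```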

The next part shows where the ‘Open’ and ‘Closed’ node states come into play. Just as before, the algorithm needs to calculate the G-, H- and F-values of each adjacent node. This time, though, the ‘Closed’ node at 1,2 is ignored, along with the non-walkable nodes to the right. This leaves four ‘Open’ nodes that have already had their G-, H- and F-values calculated on the basis that the node at 1,2 was their direct parent.

This seems like a reasonable point to show the implementation of GetAdjacentWalkableNodes(...). For each adjacent node, it filters out the ones that are outside the grid’s boundaries, that aren’t walkable, that are ‘Closed’, and that are already on an ‘Open’ list and can’t be reached more efficiently via the current route. The complete listing is as follows:

private List<Node> GetAdjacentWalkableNodes(Node fromNode)
{
    List<Node> walkableNodes = new List<Node>();
    IEnumerable<Point> nextLocations = GetAdjacentLocations(fromNode.Location);

    foreach (var location in nextLocations)
    {
        int x = location.X;
        int y = location.Y;

        // Stay within the grid's boundaries
        if (x < 0 || x >= this.width || y < 0 || y >= this.height)
            continue;

        Node node = this.nodes[x, y];
        // Ignore non-walkable nodes
        if (!node.IsWalkable)
            continue;

        // Ignore already-closed nodes
        if (node.State == NodeState.Closed)
            continue;

        // Already-open nodes are only added to the list if their G-value is lower going via this route.
        if (node.State == NodeState.Open)
        {
            float traversalCost = Node.GetTraversalCost(node.Location, node.ParentNode.Location);
            float gTemp = fromNode.G + traversalCost;
            if (gTemp < node.G)
            {
                node.ParentNode = fromNode;
                walkableNodes.Add(node);
            }
        }
        else
        {
            // If it's untested, set the parent and flag it as 'Open' for consideration
            node.ParentNode = fromNode;
            node.State = NodeState.Open;
            walkableNodes.Add(node);
        }
    }

    return walkableNodes;
}

Applying the above code to the example scenario, it performs a ‘what-if’ calculation to determine whether it’s more efficient to reach any of the adjacent nodes via location 2,2 than via their existing parent. Note that H doesn’t need to be recalculated.

The adjacent nodes can be reached more efficiently via a different route
Figure 6: The adjacent nodes can be reached more efficiently via a different route

It’s clear from these ‘what-if’ F-values that it’s less efficient to go via this node than via the starting node to reach any of the four locations accessible from 2,2.

Note: If it turns out to be more efficient to reach an Open node via the current node, the Open node’s parent is changed to the current node and the ‘what-if’ G- and F- values are applied to it. Unfortunately, this situation isn’t encountered in this walkthrough.

A dead end has been reached, so what now?

Searching Beyond the First Node

In the sample code, I’ve used a recursive Search(...) method. A dead end situation is identified by the absence of any nodes to move to from the current location. To put it another way, the search will only continue along a given path if there’s somewhere to go next. I’ll expand on the earlier code listing for Search(...).

private bool Search(Node currentNode)
{
    currentNode.State = NodeState.Closed;
    List<Node> nextNodes = GetAdjacentWalkableNodes(currentNode);
    nextNodes.Sort((node1, node2) => node1.F.CompareTo(node2.F));
    foreach (var nextNode in nextNodes)
    {
        ...
        if (Search(nextNode)) // Note: Recurses back into Search(Node)
            return true;
    }
    return false;
}

If a dead end is detected, Search(...) simply returns false so that control is returned to the next level up in the call stack, and that’s exactly what happens here.

Returning to the search from the starting node, the next choice could be either at location 2,1 or at location 2,3 since they both have the same F-value and share #2 position in the list of adjacent nodes. For brevity’s sake, let’s assume the algorithm chooses the node at 2,1 next.

Try the next-best option
Figure 7: Try the next-best option

The Open node at 1,1 is left alone since there’s no advantage going via this route. That leaves three possibilities, with the node at 3,0 achieving the lowest F-value.

Crossing the corner to location 3,0
Figure 8: Crossing the corner to location 3,0

Note: This implementation permits the corners of obstacles to be crossed as shown in Figure 8. While it’s arguably valid in this situation, it probably wouldn’t be considered valid if there were an obstacle at location 2,0 (to the lower-left of the blue line). It’s worth considering how to deal with corners and diagonal obstacles if implementing A*.
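
One simple way to forbid cutting corners (not part of the sample project; just a sketch of the idea raised in the note above) is to reject a diagonal step whenever either of the two orthogonal cells it squeezes between is blocked:

```csharp
using System;

public static class CornerRules
{
    // A diagonal move is only allowed when both orthogonally adjacent cells
    // it passes between are walkable. map[x, y] == true means the cell is
    // clear, matching the convention used in the walkthrough.
    public static bool IsDiagonalMoveAllowed(bool[,] map, int fromX, int fromY, int toX, int toY)
    {
        if (Math.Abs(toX - fromX) != 1 || Math.Abs(toY - fromY) != 1)
            return true; // not a diagonal move, so no corner to cut

        return map[fromX, toY] && map[toX, fromY];
    }

    public static void Main()
    {
        bool[,] map = new bool[3, 3];
        for (int x = 0; x < 3; x++)
            for (int y = 0; y < 3; y++)
                map[x, y] = true;

        map[1, 0] = false; // an obstacle beside the diagonal from 0,0 to 1,1

        Console.WriteLine(IsDiagonalMoveAllowed(map, 0, 0, 1, 1)); // False: would cut the corner at 1,0
        Console.WriteLine(IsDiagonalMoveAllowed(map, 1, 1, 2, 2)); // True: both 1,2 and 2,1 are clear
    }
}
```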

From 3,0 there’s only one possible next node: the one at 4,0.

Crossing to location 4,0
Figure 9: Crossing to location 4,0

Of the two next options, crossing the corner to 5,1 is the best.

Crossing the corner to location 5,1
Figure 10: Crossing the corner to location 5,1

There are now five options. Unsurprisingly, the node with the lowest F-value is also the finish node.

Reaching the finish node
Figure 11: Reaching the finish node

I’ll expand upon Search(...) one last time. The additional code checks whether the last node has been reached and returns true if it has. GetAdjacentWalkableNodes(...) has already set the parent node on the finish node, so nothing else needs to happen inside this method.

private bool Search(Node currentNode)
{
    currentNode.State = NodeState.Closed;
    List<Node> nextNodes = GetAdjacentWalkableNodes(currentNode);
    nextNodes.Sort((node1, node2) => node1.F.CompareTo(node2.F));
    foreach (var nextNode in nextNodes)
    {
        if (nextNode.Location == this.endNode.Location)
        {
            return true;
        }
        else
        {
            if (Search(nextNode)) // Note: Recurses back into Search(Node)
                return true;
        }
    }
    return false;
}

Back to the example scenario: the new condition is hit, so the method returns true. This brings the application all the way back up the call stack to where Search(...) was first called. Returning true lets the caller know that a path was found.

Compiling the Path

Building a list of the nodes that comprise the path is simple: starting at the finish node, follow the line of ancestors, adding the location of each to a new list, until a null parent is reached (i.e. the start node has been reached).

Follow successive parent nodes to build the list of locations along the path
Figure 12: Follow successive parent nodes to build the list of locations along the path

The result is a list of locations running backwards from the finish location to the start location. The list is reversed so that they can be returned in the correct order. It can all be wrapped up in a public method along these lines:

public List<Point> FindPath()
{
    List<Point> path = new List<Point>();
    bool success = Search(startNode);
    if (success)
    {
        Node node = this.endNode;
        while (node.ParentNode != null)
        {
            path.Add(node.Location);
            node = node.ParentNode;
        }
        path.Reverse();
    }
    return path;
}

If a path isn’t found, it just returns an empty list; otherwise, it returns a list of locations starting with the first location adjacent to the starting point and ending with the finish point.

The sample project includes a console application that shows the algorithm being run across three different grids: one that’s completely open; one with the L-shaped obstacle used in this walkthrough; and one in which a wall prevents a path from being found.

Screenshot of the sample application
Figure 13: Screenshot of the sample application

Final Notes

The goal of this blog post is to show the fundamentals of A* through a really simple C# implementation. As a result of trying to keep the code short and easy to follow, it’s a bit inefficient and it doesn’t produce particularly good routes. Improving its efficiency shouldn’t be too hard: use lazy instantiation when building the Node grid, or use fields rather than properties (although, of course, this goes against general .NET design guidelines; see the MSDN article Properties (C# Programming Guide)). The other articles linked from this page highlight further optimisations.
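
As a sketch of the lazy-instantiation idea (an assumption about how it might be done, not code from the sample project), nodes can be created on first access instead of building the whole grid up front:

```csharp
using System;

public class Node
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public bool IsWalkable { get; private set; }

    public Node(int x, int y, bool isWalkable)
    {
        this.X = x; this.Y = y; this.IsWalkable = isWalkable;
    }
}

public class NodeGrid
{
    private readonly Node[,] nodes;
    private readonly bool[,] map;

    public NodeGrid(bool[,] map)
    {
        this.map = map;
        // Allocate the array of references only; no Node objects are created yet
        this.nodes = new Node[map.GetLength(0), map.GetLength(1)];
    }

    // Create each Node the first time it's requested, then reuse it
    public Node GetNode(int x, int y)
    {
        if (this.nodes[x, y] == null)
            this.nodes[x, y] = new Node(x, y, this.map[x, y]);
        return this.nodes[x, y];
    }
}

public static class Program
{
    public static void Main()
    {
        var grid = new NodeGrid(new bool[7, 5]); // same 7x5 shape as the walkthrough
        Node node = grid.GetNode(2, 2);
        Console.WriteLine(ReferenceEquals(node, grid.GetNode(2, 2))); // True: same instance returned
    }
}
```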

As for ways to find better routes, there are plenty of C# examples around that are far better and richer than this one. CastorTiu has a really nice demo solution on CodeProject, A* algorithm implementation in C#, that animates the search algorithm and allows the user to tweak a few settings.

I also spent a while examining Woong Gyu La’s solution on CodeProject, EpPathFinding.cs- A Fast Path Finding Algorithm (Jump Point Search) in C# (grid-based). It has a nice, clear GUI and allows a few settings to be tweaked. I’d recommend taking a look.

– Mike Clift

t: @mclift

A Lambda Expression Pattern for WCF Clients

I’ve sometimes ended up repeating the same blocks of boilerplate code when consuming WCF services: open the connection; do the work; close the connection; wrap it all in a try-catch; close or abort the connection differently depending on the type of exception thrown. I wanted to put the boilerplate code somewhere else to keep the rest of my service-calling code as clean and DRY as possible. This post discusses an approach I’ve used to achieve that.

Download the example project

Screenshot of server and client applications

There were a few key things I wanted from the pattern:

  • I don’t want to worry about the connection code. I just want to call the service and move on.
  • Exceptions must bubble up—unmodified—so that they can be handled appropriately.
  • If an exception does get thrown, the connection needs to be closed or aborted as appropriate. It’s important to do this correctly since connections left hanging around can very quickly kill your application—for more on this, see the MSDN article: Expected Exceptions.

I decided to use a lambda expression pattern so I could replace code like this:

Code snippet

…with something like this:

Code snippet

It’s worth pointing out that I prefer to use a shared services assembly if it’s feasible to do so, and that’s the approach I’ve taken in this example. It’s a topic for another blog post, but things can get complicated with svcutil-generated proxies if you have DataContract classes that are shared across multiple endpoints.

The example shows a really simple service that takes a name and returns a greeting such as “Hello, Mike!” The service contract looks like this:

[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string SayHello(string name);
}

If you’ve made it this far, I expect you can guess what the implementation looks like. The example solution includes a really simple Console application to host the service on TCP port 8000 using the following WCF configuration:

<system.serviceModel>
  <services>
    <service name="GreetingServices.GreetingService">
      <endpoint
        address="net.tcp://localhost:8000/GreetingService"
        binding="netTcpBinding"
        contract="GreetingServices.IGreetingService"/>
    </service>
  </services>
</system.serviceModel>

On the client side, there’s a similar TCP/IP binding to the service:

<system.serviceModel>
  <client>
    <endpoint
      name="GreetingServices.IGreetingService"
      address="net.tcp://localhost:8000/GreetingService"
      binding="netTcpBinding"
      contract="GreetingServices.IGreetingService"/>
  </client>
</system.serviceModel>

Now for the important bit: the services will be called using a lambda expression, so the first thing is to declare a delegate through which the client logic will be passed.

public delegate void DoWithServiceDelegate<T>(T serviceClient);

Next, a “Services” class will present the proxy to the service. It contains a private method that executes a generic delegate against a generic service client:

private static void WithServiceClient<T>(T serviceClient, DoWithServiceDelegate<T> serviceDelegate)
{
    try
    {
        serviceDelegate(serviceClient);
        ((IClientChannel)serviceClient).Close();
    }
    catch (TimeoutException)
    {
        // Abort the connection if it times out
        ((IClientChannel)serviceClient).Abort();
        throw;
    }
    catch (CommunicationException)
    {
        // Abort the connection if there's a connection-level failure
        ((IClientChannel)serviceClient).Abort();
        throw;
    }
    catch (Exception)
    {
        // The connection should be valid for other exception types, so close it normally
        ((IClientChannel)serviceClient).Close();
        throw;
    }
}

The key points covered by the above code are:

  • Make sure the connection gets closed properly, even if something throws an exception.
  • Allow exceptions to bubble up to the surface intact.

Finally, the service proxy is presented through a public method:

public static void WithGreetingService(DoWithServiceDelegate<IGreetingService> serviceDelegate)
{
    // Create the connection to IGreetingService
    var channelFactory = new ChannelFactory<IGreetingService>("*");
    IGreetingService channel = channelFactory.CreateChannel();

    // Execute the delegated logic
    WithServiceClient(channel, serviceDelegate);
}

Each new service will need its own public method similar to the WithGreetingService(…) method but, unless the project contains a really high number of endpoints, this shouldn’t become too cumbersome. To make a really simple, one-line call to the service:

Services.WithGreetingService(service => service.SomeMethod(…));
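
Because the delegate returns void, service calls that produce a result need to capture it, and a closure handles this neatly. Here’s a self-contained sketch of the capture pattern; the FakeGreetingService stand-in is mine (so the sketch runs without a WCF host), but the delegate and interface shapes match the ones above:

```csharp
using System;

public delegate void DoWithServiceDelegate<T>(T serviceClient);

public interface IGreetingService
{
    string SayHello(string name);
}

// A stand-in implementation so this sketch runs without a WCF host
public class FakeGreetingService : IGreetingService
{
    public string SayHello(string name) { return "Hello, " + name + "!"; }
}

public static class Services
{
    public static void WithGreetingService(DoWithServiceDelegate<IGreetingService> serviceDelegate)
    {
        // In the real pattern this creates a channel and hands it to WithServiceClient
        serviceDelegate(new FakeGreetingService());
    }
}

public static class Program
{
    public static void Main()
    {
        // Capture the return value into a local variable via the closure
        string greeting = null;
        Services.WithGreetingService(service => greeting = service.SayHello("Mike"));
        Console.WriteLine(greeting); // prints: Hello, Mike!
    }
}
```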

– Mike Clift

JSON Services: A Comparison of WCF and Web API

This blog post shows two implementations of a really simple JSON calculator service: one using WCF and one using Web API. To show the services in action, there’s also a single webpage that exposes the calculator services through a Web form.

Download the example project

Blog post header

WCF

I’ve used a self-hosted WCF service for this example as I want control over the URL through which it’s accessed. I start by creating a new Console Application and adding references to System.Runtime.Serialization, System.ServiceModel and System.ServiceModel.Web.

The service will expose a method to add two numbers together, so I add a class to contain the parameters to this method:

[DataContract]
public class AddParameters
{
    [DataMember(Name = "left")]
    public decimal Left { get; set; }

    [DataMember(Name = "right")]
    public decimal Right { get; set; }
}

I also create a class to contain the result of the calculation.

[DataContract]
public class CalculationResult
{
    [DataMember(Name = "result")]
    public decimal Result { get; set; }
}

Now I’m ready to define the WCF service interface…

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    CalculationResult Add(AddParameters addParameters);
}

…and its implementation.

public class Calculator : ICalculator
{
    [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, 
        ResponseFormat = WebMessageFormat.Json,
        BodyStyle = WebMessageBodyStyle.Wrapped)]
    public CalculationResult Add(AddParameters addParameters)
    {
        decimal result = addParameters.Left + addParameters.Right;

        return new CalculationResult
        {
            Result = result
        };
    }
}

The code to create the service host is pretty straightforward. In Program.cs, I update the Main(…) method:

static void Main(string[] args)
{
    using (ServiceHost serviceHost = new ServiceHost(typeof(Calculator)))
    {
        serviceHost.Open();

        Console.WriteLine("The Calculator service is available at:");
        foreach (var endpoint in serviceHost.Description.Endpoints)
        {
            Console.WriteLine(endpoint.Address);
        }

        Console.WriteLine();
        Console.WriteLine("Press any key to exit...");
        Console.ReadKey();
    }
}

Next, I add the WCF service configuration that’ll allow the service to be called from within a webpage.

<system.serviceModel>
  <services>
    <service name="WcfJsonServices.Calculator">
      <endpoint address="http://localhost:8080/Calculator"
                binding="webHttpBinding"
                contract="WcfJsonServices.ICalculator"/>
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior>
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>

Finally, since I’m going to run this on a system with UAC enabled, I give my user account the necessary permission to create an HTTP service on port 8080. To do this, I use the “Run as Administrator” option to open a new Command Prompt window and then enter the following command:

netsh http add urlacl url=http://+:8080/ user="Mike"

WebAPI

To make the demo hang together as simply as possible, I’m going to host my client webpage and my Web API service in the same project. I start by creating a new ASP.NET Web Application project and choosing the Web API project template when prompted.

As per the WCF example, I create classes to contain my input and output parameters for my Add service. I create them both under the Models folder.

public class AddParameters
{
    public decimal Left { get; set; }

    public decimal Right { get; set; }
}

public class CalculationResult
{
    public decimal Result { get; set; }
}

Notice that the above classes don’t need any serialisation attributes, since the Web API framework automatically handles the JSON serialisation.

Next, I create a new controller class that’ll expose the addition service. I create the AddController class under the Controllers folder.

public class AddController : ApiController
{
    public CalculationResult Post(AddParameters addParameters)
    {
        decimal result = addParameters.Left + addParameters.Right;

        return new CalculationResult
        {
            Result = result
        };
    }
}

Note that there’s no need to declare a separate service interface. Just inherit from ApiController and ensure the method name begins with the HTTP verb (“POST” in this case) that the client is expected to use.

I don’t need to add any special HTTP routes as the default handles it nicely—the new addition service has the URL /api/Add (see the default mappings in the WebApiConfig class under the App_Start folder) and it accepts POST requests.
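
For reference, the default route registered by the Web API project template looks like the following (this is the template-generated mapping as I understand it; your WebApiConfig may differ):

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // The template's default mapping: /api/{controller} with an optional id,
        // which is how a POST to /api/Add reaches AddController.Post(...)
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```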

The Client Web Page

To test the two services side-by-side, I want a single Web page that lets me fire JSON requests at each service and displays the results they return. As I’m going to use jQuery to do most of the client-side work, I first add the following line into the <head> section of _Layout.cshtml, which is located under /Views/Shared:

@Scripts.Render("~/bundles/jquery")

Now, I delete the contents of Index.cshtml, which is located under /Views/Home, and import my default layout.

@{
    ViewBag.Title = "Home Page";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

Now I add the UI components that’ll allow users to generate the service requests.

<div class="jumbotron">
    <h1>JSON Client Samples</h1>
</div>
<div class="row">
    <div class="row">
        <h2>Use WCF</h2>
        <p>
            Add <input type="text" id="wcfLeft" />
            to <input type="text" id="wcfRight" />
            <input type="button" id="wcfAdd" value="=" />
            <span id="wcfAnswer">&nbsp;</span>
        </p>
    </div>
    <div class="row">
        <h2>Use WebAPI</h2>
        <p>
            Add <input type="text" id="waLeft" />
            to <input type="text" id="waRight" />
            <input type="button" id="waAdd" value="=" />
            <span id="waAnswer">&nbsp;</span>
        </p>
    </div>
    <div>
        <p id="error"></p>
    </div>
</div>

Now I add a script block and begin wiring up the Web UI with the JSON services. I start by wiring up each of the two buttons so that the “click” event will cause a method to be called with the values of the adjacent text boxes. For WCF, wcfAdd(…) will be called. For Web API, waAdd(…) will be called.

$(function () {
    // Bind the WCF button
    $("#wcfAdd").bind("click", function () {
        var left = $("#wcfLeft")[0].value;
        var right = $("#wcfRight")[0].value;
        wcfAdd(left, right);
    });

    // Bind the WebAPI button
    $("#waAdd").bind("click", function () {
        var left = $("#waLeft")[0].value;
        var right = $("#waRight")[0].value;
        waAdd(left, right);
    });
});

The function that calls WCF begins with some object literal notation to set up the parameters expected by the service (just a single parameter in this case: addParameters). The parameters are then converted into a JSON string. Finally, the JSON is POSTed to the service. On success, the calculation result is displayed; on failure, the user is notified that something went wrong.

// Call the WCF service to add the numbers together
function wcfAdd(left, right) {
    // Prepare the instance of AddParameters with the two numbers
    var parameters = {
        addParameters: {
            left: left,
            right: right
        }
    };
    // Convert to an escaped JSON string
    var json = JSON.stringify(parameters);
    // Make the request
    $.ajax({
        type: "POST",
        url: "http://localhost:8080/Calculator/Add",
        data: json,
        contentType: "application/json; charset=utf-8",
        success: function (result) {
            $("#wcfAnswer").text(result.AddResult.result);
        },
        error: function (xhr, msg, ex) {
            handleError(xhr, msg, ex);
        }
    });
}
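For reference, this is roughly what goes over the wire for the call above. The request body is the wrapped object produced by JSON.stringify, and the response wraps the value in an AddResult object, which is why the success handler reads result.AddResult.result. The literal values here are illustrative:

```javascript
// Request body for left = 2, right = 3, matching the wrapped
// parameter object built in wcfAdd:
var wcfRequest = JSON.stringify({ addParameters: { left: 2, right: 3 } });
// → '{"addParameters":{"left":2,"right":3}}'

// Shape of the WCF response consumed by the success handler:
var wcfResponse = { AddResult: { result: 5 } };
```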

The Web API version doesn’t need to explicitly define addParameters as a named object literal. Instead, it just needs to create an object with the correct members: Left and Right. The rest is almost identical to the WCF version, the only differences being the service URL and the structure of the returned result.

// Call the WebAPI service to add the numbers together
function waAdd(left, right) {
    // Prepare the instance of AddParameters with the two numbers
    var parameters = {
        Left: left,
        Right: right
    };
    // Convert to an escaped JSON string
    var json = JSON.stringify(parameters);
    // Make the request
    $.ajax({
        type: "POST",
        url: "/api/Add",
        data: json,
        contentType: "application/json; charset=utf-8",
        success: function (result) {
            $("#waAnswer").text(result.Result);
        },
        error: function (xhr, msg, ex) {
            handleError(xhr, msg, ex);
        }
    });
}
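The Web API payload is correspondingly flatter: the request body is the bare object with Left and Right members, and the response exposes the value directly as Result. Again, the literal values are illustrative:

```javascript
// Request body for left = 2, right = 3, matching waAdd:
var waRequest = JSON.stringify({ Left: 2, Right: 3 });
// → '{"Left":2,"Right":3}'

// Shape of the Web API response consumed by the success handler:
var waResponse = { Result: 5 };
```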

Finally, I add a method to notify the user of any errors.

function handleError(xhr, msg, ex) {
    $("#error").html(xhr.responseText);
    alert("Call failed: [" + xhr.status + "] " + ex);
}

Demo

The WCF service application needs to be running for the demo to work. When the Web application is launched, a simple form allows the user to call each service.

[Screenshot of the test Web page]

Conclusions

I’ve shown how to create a JSON service that’s accessible from a Web page with jQuery by using two different technologies: WCF and Web API. Going through this exercise has highlighted a few things for me about each technology.

WCF: Pros and Cons

  • Pro: I like that WCF can be hosted in a number of different ways, from a stand-alone console application to an IIS application.
  • Con: Perhaps because of WCF’s flexibility in the transports and message formats it supports, getting the configuration right can be difficult. Worse, it doesn’t help you when things aren’t set up correctly: you don’t get a chance to intercept an exception, you get no hints about what the service is expecting and, most of the time, you just get a boilerplate HTTP error with no clues as to what’s wrong.

Web API: Pros and Cons

  • Pro: It’s really simple to get up and running, especially if you’re already used to the ASP.NET MVC framework.
  • Pro: Web API benefits from the transparency of the ASP.NET MVC framework: when something isn’t configured correctly or isn’t being called in the right way, there’s still usually somewhere where you can set a breakpoint or intercept an exception so you can see what’s going on. Having the Request and Response properties right there can also be really useful whilst debugging.
  • Con: This is a very pernickety point, but Web API feels weaker as an API presentation framework. WCF’s requirement that every service be declared as an interface makes it very clear where your API’s boundaries lie. Web API not only drops the requirement to declare interfaces, it also obscures things by mapping HTTP verbs to methods based on their names and signatures. Of course, this convention-over-configuration approach is one of the cornerstones of the technology and brings many advantages, but it also ties the API to HTTP.

At the time of writing I’m a newbie to Web API, but if I were consuming JSON services from the browser, I’d lean toward Web API as the framework of choice. This would be especially true if the consuming Web application were already an ASP.NET MVC project: the benefits of using the same underlying technology for the Web application and the JSON services would be difficult to ignore.

Of course, this post has focused on a very small, specific scenario. WCF is a much broader technology than Web API in terms of the transports, methods of communication and types of hosting it supports, whilst Web API expands upon the very successful ASP.NET MVC to make it easy to write services for browsers. There’s a great blog post by Ido Flatow, “WCF or ASP.NET Web APIs? My two cents on the subject”, that gives a little bit of the history behind these two technologies and presents a range of scenarios in which one might be a better fit than the other.

For more information about Web API, the Official ASP.NET site is a good place to start. For WCF, visit the WCF portal on MSDN.

  – Mike Clift