Thursday, December 19, 2013

How to get ref parameter values in RhinoMocks

Mocking ref (and out) parameters with RhinoMocks is a bit tedious. I ran into something that could possibly be considered a bug; at the very least it's not expected behavior. The good news is I found a work-around. Hopefully this will help someone else (including, perhaps, my future self).

The basic problem is: Ref parameter values passed to mocked methods have the default values specified in the mock setup and not the values passed by the calling code.

The background

Take the following interface as an example.
interface ISomething
{
    void DoStuff(string a);
}
Now suppose this is mocked and some things need to be done based on the value of parameter a.

Normally this is easily done:
public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(Arg<string>.Is.Anything))
          .Do((Action<string>)(arg =>
               {
                    // Perform general action with arg
                    
                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               }));
}
Every time something calls mockSomething.DoStuff, the code passed to the Do method will be executed and the parameter arg will contain whatever value was passed to DoStuff. This is routine stuff for RhinoMocks and works as expected.

The setup

Now suppose the parameter for DoStuff was a ref.
interface ISomething
{
    void DoStuff(ref string a);
}
This is where things get a bit dicey. The interface is still mocked and some things still need to be done based on the value of parameter a. So, the argument constraints get some minor syntax changes to satisfy the compiler's ref requirements:
private delegate void DoStuffDelegate(ref string a);

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .Do((DoStuffDelegate)((ref string arg) =>
               {
                    // Perform general action with arg
                    
                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               }));
}

The problem

The above code compiles. And it runs. It just doesn't run correctly. The problem is the second parameter to the Arg<T>.Ref method, which specifies the value the ref parameter will be set to when the stub returns. RhinoMocks sets the parameter to that return value before calling Do's delegate. In other words, in this example, arg will always be string.Empty. The code in Do is never called with the values the original caller passed to DoStuff.

Looking at the call stack, I could see the original method call with the correct parameter values. Then it went into the RhinoMock and proxy code and then Do's method was called, clearly with the unexpected value.

Looking for solutions

Digging around, I found the WhenCalled method. This appears to be a bit earlier in the mock/proxy processing so I changed the test.
public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .WhenCalled(invocation =>
               {
                    var arg = (string)invocation.Arguments[0];

                    // Perform general action with arg
                    
                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               });
}
Nope. This didn't work either. The value for Arguments[0] has already been set to the return value.

While searching around, I found other people asking about the same issue. In their cases they found alternative solutions based on constraints on when their methods were called and what values the parameters could have. When the values are known, they can be hard coded: instead of using Is.Anything() as above, Is.Equal("abc") can be used with "abc" as the second parameter. Then everything is fine.
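As a sketch of that known-value workaround (assuming the same ISomething interface and stub setup as above):

```csharp
// When the incoming value is known, the ref "return" can simply echo it,
// so the value the caller sees is unchanged by the mock.
mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Equal("abc"), "abc").Dummy));
```

This only works because the constraint and the return value agree; it falls apart as soon as the constraint matches more than one possible value.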

I was semi-successful with the special case by using this technique. But the general action still needed Is.NotEqual("abc"), and that ran into the same problem as Is.Anything(): I didn't know the original value of arg.

The solution

Widening my search, I stumbled upon an old article by Ayende talking about the Callback method. He considered it for weird edge case parameter checking and indicated it shouldn't generally be used. As far as I could make out from his write-up, its purpose is to serve as an alternative to the Arg constraints when they aren't sufficient.

Having nothing to lose, I changed my code to give it a try:
private delegate bool DoStuffCallbackDelegate(ref string a);

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .Callback((DoStuffCallbackDelegate)((ref string arg) =>
               {
                    // Perform general action with arg
                    
                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }

                    return true;
               }));
}
Hurrah! This worked!

Since Callback's intent is to determine if the constraint on the stub is valid, it gets the original parameter values, rather than the return result.

And, yes, I realize I'm misusing the intent of Callback. But when nothing else works, you go with what does.

The conclusion

Since this met my needs, I stopped here. If this were a function rather than a void method, I suppose a Do or Return method would have to be chained after Callback in order to return a valid value for the stubbed function. Also note, if this stub should be ignored, false can be returned from Callback instead of true. This would allow other stubs for the same method to be handled differently.
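For illustration, a sketch of that chaining, assuming a hypothetical variant of DoStuff that returns an int (this shape is untested; it follows from how RhinoMocks' fluent API composes):

```csharp
// Hypothetical: int DoStuff(ref string a). Callback still receives the
// original caller-supplied ref value; Return supplies the function result.
private delegate bool DoStuffCallbackDelegate(ref string a);

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .Callback((DoStuffCallbackDelegate)((ref string arg) =>
               {
                    // Inspect arg here with its original caller-supplied value

                    return true; // accept the call; false would skip this stub
               }))
          .Return(42); // the value the stubbed function hands back to the caller
}
```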

I'm not sure if this is a bug, intended behavior, or an unconsidered edge case with, at least to some, unexpected behavior. Both the Out and Ref methods have the return result. It makes sense for it to be mandatory for Out, but I think it should be optional for Ref. I can see cases where you'd want to stub in a particular value all the time; the current syntax supports this well. But I can also see cases where the value shouldn't be changed, at least by the Ref handling code. An overloaded version of Ref without the parameter would work well. In any case, I don't think the value should be set before WhenCalled and Do are invoked. At a minimum it should be set after, and better yet, only if the original value hasn't been changed by WhenCalled or Do.

Well, that's my latest story regarding RhinoMocks. I hope it helps someone.

Friday, December 6, 2013

How to generate an alert tone from a computer

I recently read a Code Project thread that got into a discussion about beeps, internal speakers and alerting the user. This reminded me of a story from a previous life...

It was the early 90s and I worked for a company that provided software to E-911 dispatch centers. There was a machine room with racks of computers and then the actual dispatch center which only had user I/O devices. The monitors and keyboards were run through special extension hardware that allowed VGA, keyboard, mouse, serial and parallel ports to be removed from the computer by a fair distance. It seems like the system would drive them up to 100 feet away, but my memory could be faulty. In any case, we had all these extension boxes scattered throughout the center but all the built-in computer speakers were back in the server room.

A feature of the software allowed dispatchers to place calls in a pending queue. This was for low priority calls that didn't have resources available for immediate dispatch. After a timeout, things in the queue would start escalating. First they'd change color. Then they'd start flashing. The Chief wanted the next level to be an audible alert. And, with this being the fire department, audible alert didn't mean a simple "beep." It needed to be impossible to ignore, klaxon-loud and obnoxious. And of course there wasn't any budget for any significant hardware. I thought about it on the drive home and had an idea as I fell asleep.

The next morning I stopped by Radio Shack and picked up a $10 monophonic amplifier. Something simple. (Amazingly, I just searched for it and they still sell the same model 20+ years later, albeit at a slightly higher price.) I also got a 1/8" patch cord and power adapter. When I got to the dispatch center, I cut off one end of the patch cord and connected the leads to the ground and transmit pins on an RS-232 connector on one of the extension boxes. Then I plugged in the amplifier, turned the volume down low, created a short text file with some random characters and cat'ed it to the appropriate /dev/tty port. An earful of noise rewarded my efforts.

Now I knew my idea would work. Playing around with different repeating character sequences gave different patterns. Changing the baud rate would change the frequency. Eventually I came up with a combination that worked pretty well. It was loud, obnoxious and impossible to ignore. The Chief loved it. And it cost less than $20 and an hour or two of work.
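A minimal sketch of the trick (the device name and baud rate below are placeholders; the real values depend on the hardware):

```shell
# Generate a burst of alternating-bit characters. 'U' is 0x55 (binary
# 01010101), so the TX line toggles every bit, producing a tone at
# roughly half the baud rate.
printf 'U%.0s' $(seq 1 512) > tone.txt
wc -c < tone.txt

# On the dispatch hardware you would then slow the port down and replay it:
#   stty -F /dev/ttyS1 300 raw    # hypothetical device; lower baud = lower pitch
#   cat tone.txt > /dev/ttyS1     # the amplifier on TX/GND does the rest
```

Different characters give different duty cycles, and longer files give longer bursts, which is why playing with character sequences changed the pattern.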

Monday, November 11, 2013

When is the right time to select a source code control tool?


"If you’re about to start a new project it’s a good time to consider what version control solution you’re going to use."
-- Article discussing TFS vs Git
I read the above statement in an article recently and my brain threw an exception when it parsed it. You know how that works. You're scanning along the text and all of a sudden, half a paragraph later, an interrupt is raised "Wait, did I really just read that?" You stop and go back over it.

There are plenty of articles covering how to choose a source code control tool. Many places compare and contrast all the options,[1] both open and closed source. You can find articles that discuss the pros and cons of distributed VCSs vs centralized ones. However, I don't recall ever seeing one about when to choose one. The following is my opinion on this seemingly under-discussed topic.

The quote above is one of many similar statements I've seen over the years. It wouldn't surprise me if many project startup policies include choice of VCS as one of their bullet points. So, I'm not picking on this particular article, it just presents some common wisdom that I don't consider all that wise and I'm using it as a jumping off point to open the discussion.

First some context. Most of my career has been in developing shrink wrap software. The projects I work on are small parts of larger products that in turn are one of many that make up a company's product line.

The VCS contains vital enterprise assets. It should contain everything needed to build the current products for a company: the source code, build scripts, installer scripts, and developer environment configuration scripts. It should also contain all the supporting documentation for users, both external and internal. Because it maintains versions of all these files, it also contains metadata about all of them. It tracks who changed what, when and, if good change set notes are used, why. It may also be linked to an issue tracker, putting those changes in a larger context than just the source code. There may be a CRM system that gets linked to it. It is one piece of a larger whole.

For a development group to start a new project and consider what VCS they're going to use is like the accounting department opening a new checking account and considering what accounting package they're going to track the transactions in. The accounting department will have an existing system in place that tracks more than simple checking registry entries. They would not change this just for a single account. In the same way, the development group should have existing systems in place to handle not just VCS but all the support systems that surround a project. They should keep all the projects in the same ecosystem.

Keeping everything in one place does a number of things. It decreases startup time; there's one less decision to make. It reduces training; developers don't need to become proficient in yet another tool. It eliminates confusion about "where do I go to get X?" It enhances cohesion among projects; it's easier to share code that may be common. It reduces maintenance costs; there's one less server that needs to be provisioned, integrated, maintained, backed up, and so on.

In my opinion, the choice of VCS is something that should be made long before any particular project starts and should not be within the purview of any single project. So, when I read something that says a new project is a good time to choose a version control system, my initial, totally biased reaction is to scream "NO!"[2]


1. And in fact that was the focus of the above article.
2. And the same could be said, to varying degrees, of any software project support tool, be it VCS as discussed here or issue tracking or process control or documentation tools or build tools or UI widget libraries or IDEs or -- the list can go on and on.

Friday, May 3, 2013

Extension methods are cool

You are creating a vocabulary, not writing a program. Be a poet for a moment.

-- Kent Beck

When Microsoft first introduced extension methods to C#, my first reaction was "eh". I viewed them as a novelty without much use. As time has worn on, I've come to appreciate them more and more. Their biggest win for me is to add features to basic system defined types and to fix what I consider deficiencies in the .Net libraries.

Static methods that take as a parameter an instance of the class they are defined in really annoy me. One frequent irritation: string.IsNullOrEmpty(). Every time I go to use it, I start writing the variable I want to test and then realize the method is static, so I have to go back and insert the "string.IsNullOrEmpty" at the front. It is simply one of many similar methods spread throughout the framework.

Shortly after extension methods were added, one day while I was again grumbling at the IsNullOrEmpty implementation, I realized this would be an easy thing to fix. About five minutes later, after figuring out the syntax for extension methods, I had something like:
public static class StringExtensions
{
     public static bool IsNullOrEmpty(this string target)
     {
          return string.IsNullOrEmpty(target);
     }
}

Now I could write tests in the much more natural (for me) "if (someString.IsNullOrEmpty())..." format. Personally I find this much easier to read.

Flush with this success I immediately added another library function I remembered from Delphi that, in my opinion, helps with readability:

public static class ObjectExtensions
{
     public static bool IsAssigned(this object target)
     {
          return target != null;
     }
}

Instead of "if (someObj != null)..." I could now say "if (someObj.IsAssigned())..."

There is a real downside with this though: Resharper does not recognize this as a test for null and reports a possible null value warning on subsequent accesses.

I'll admit, these aren't earth-shaking, industry-changing algorithms. But in day-in, day-out coding, I find the resulting code much easier to read.

Today I threw together a couple extensions to simplify work with Points, Rectangles and ranges:
public static class Extensions
{
     public static Point Center(this Rectangle bounds)
     {
          return new Point(bounds.Left + bounds.Width / 2, bounds.Top + bounds.Height / 2);
     }

     public static double DistanceTo(this Point p1, Point p2)
     {
          return Math.Sqrt(Math.Pow(p1.X - p2.X, 2) + Math.Pow(p1.Y - p2.Y, 2));
     }

     public static int ConstrainTo(this int constrainedValue, int min, int max)
     {
          return Math.Max(min, Math.Min(constrainedValue, max));
     }
}

With these, I transformed a method where the purpose was lost in all the notation to one where the purpose was eminently clear. Extension methods truly enable the craftsman to apply the opening quote from Kent Beck. They easily allow the developer to introduce, at the application level, domain vocabulary to system and other 3rd party classes.
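As a hedged illustration of that transformation (the names here are invented for the example, not the actual production code):

```csharp
// Before: the intent is buried in coordinate arithmetic.
if (Math.Sqrt(Math.Pow(cursor.X - (box.Left + box.Width / 2), 2) +
              Math.Pow(cursor.Y - (box.Top + box.Height / 2), 2)) < snapRadius)
{
     Snap(cursor);
}

// After: the domain vocabulary does the talking.
if (cursor.DistanceTo(box.Center()) < snapRadius)
{
     Snap(cursor);
}
```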

Yes, extension methods are pretty cool!

Friday, April 12, 2013

Unit Tests Not Found in VS2012 and How It Was Fixed (partially)

I am setting up a virtual machine for a new upcoming project. It will use some core libraries that have already been developed in-house with Visual Studio 2012 as the development environment. The team that developed the existing libraries did a good job to make sure there was a suite of unit tests that worked and was reasonably comprehensive. So, after I installed VS2012 and Resharper I retrieved the code from the repository, compiled it and tried to run the unit tests.

The Resharper test runner (the one I normally use) listed all the tests and reported their status as "Inconclusive. Test not run."

The test runner built into Visual Studio didn't list any tests at all. Zip. Nada. Nothing.

A web search revealed some posts indicating there was a compatibility issue between VS2012 and Resharper that was supposedly fixed in VS2012's Update 2. I downloaded and installed the update.

After this, Resharper listed all the tests and reported their status as "Pending" without doing anything further. A change, but arguably not as good as before. To exit with things pending just seems strange. At least the previous "Inconclusive" state gives a sense that something's wrong, even if it doesn't say what.

On the other hand, Microsoft's test runner... still did the same thing. Absolutely no indication that any tests even existed.

After quite a bit more searching, chanting various incantations and sacrificing not a few livestock, I stumbled upon an ancient forum post from early last summer when VS2012 was still gestating as an RC. It indicated there was a bug with unit tests when they existed on a network share.

Hmm. I wanted my VM to be just the build environment and had placed the source code on a shared folder that was mapped alongside all the other source code on the host machine's disk. I wondered what would happen if I moved the code to the C: drive.

As soon as I loaded the project from its new location on C:, Microsoft's test runner immediately found and ran the tests.

Unfortunately, Resharper's test runner still doesn't work. It continues to set the tests to "pending" and never does anything else. Apparently this is a known issue that hopefully will be fixed soon.

(Hopefully I've added enough keywords to this that others will be able to find this problem and solution without having to spill blood all over the keyboard.)