Saturday, August 23, 2014

What you know and have impacts your design



Description

You can only design from what you know and/or have available to you. Here's a short video showing the evolution of an electric switch design based on new information.

Transcript

Shop equipment frequently has two push-button toggle switches to turn things on and off. I recently worked on a project where I wanted to replicate this behavior to control an appliance, and to do it inexpensively. My hope was to get by with junk-bin parts.

Looking around at my spare parts, I found a project box, a couple momentary switches and a relay with contacts rated above what I wanted to control. So I started with these items.

It was a double-pole relay. My thought was to use one pole of the relay to hold itself energized and the other pole to switch my device. So, one switch energizes the relay and the relay then holds itself closed. This should allow me to turn on the device.

I figured a transistor could be placed between the supply voltage and the relay's coil, with the base wired to the supply voltage. The "off" switch could connect the base of the transistor to ground. In this configuration, the transistor would normally be on, allowing the relay to be energized. But when the button is pressed, the transistor would cut power to the coil, switching the relay off.

I wired it all together and it worked!

The relay circuit was too big to fit in the project box though, so I ran the switches through a four-conductor wire and put the circuit in a separate box next to the switched item.

Even though it worked, I wasn't too wild about having high voltage and low voltage running through the different poles of the relay. It seemed like a pretty bad hack.

While I worked on other aspects of the project, I ran across something called a "PowerSwitch Tail." This looks like a short extension cord, but it has a twist: it has a solid-state relay built into it. Normally the outlets are turned off, but place a low voltage across its control connectors and the outlets will turn on.

This seemed like a much better solution than my relay hack.

So I completely rethought my approach.

I could have simply replaced the high-voltage side of the relay circuit with a low-voltage source going to the power tail's connectors. This would work, but now the relay seemed like overkill: I had a low-voltage circuit controlling another low-voltage circuit through a relay.

Instead I replaced the relay-based circuit with a flip-flop circuit. These can be built from discrete components, but I had a quad NAND gate chip in the parts box, and using it would shrink the size significantly.

I wired up the new circuit and it worked. A much cleaner approach.

As I worked on other aspects of the project, I had a realization: the flip-flop circuit was small enough to fit in the project box with the switches. If I reconfigured the four-conductor wire to carry voltage, ground and signal, I could move the circuit into the box.

In the end, I did this, as it allows future changes to the switching mechanism. Right now I have the two-push-button toggle configuration. But at some point I'm thinking I want a current-detecting switch to turn one appliance on and off based on another appliance being on or off. This is easily done with the new configuration but wasn't so convenient with the old.

So, that's my story about how this design evolved over time as I discovered new resources and thought about the problem at a deeper level.

Catch you next time.

Wednesday, April 9, 2014

Are enumerations evil?

Or, How to refactor an enumeration to multiple classes

I ran across this article about how to refactor a switch statement to a data structure in JavaScript a while ago. It's not a terrible refactoring technique. I've used something similar with dictionaries in C#, where the key was an enumeration and the value was an Action (or Func). It can be a minor refactoring to a single method that helps clean up the code. Then this week I had a conversation with a fellow developer about the appropriateness of an enumeration in existing code from a couple different libraries he was reading.

This led me to wonder afresh whether enumerated types are something we as a development community should treat as an anti-pattern. In the days before class types, they were a means for the compiler to force integers to known values and to provide more readable code for the developer. The compiler could check values at compile time and catch usage errors earlier. I have worked in languages without them and embraced them when they became available. At that point they were a good thing.

Many times enumerations are used to change the behavior of some code based on a variable's value. This is usually done through switch or if-then-else statements, or, as discussed above, by using some sort of dictionary-like structure to store the behavior associated with each enumerated value.
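For illustration, here's a minimal sketch of that dictionary approach in C# (the enum, class and actions are invented for the example):
using System;
using System.Collections.Generic;

enum Shape { Circle, Square }

class Renderer
{
  // Behavior keyed by enum value takes the place of a switch statement.
  static readonly Dictionary<Shape, Action> drawActions =
    new Dictionary<Shape, Action>
    {
      { Shape.Circle, () => Console.WriteLine("drawing a circle") },
      { Shape.Square, () => Console.WriteLine("drawing a square") },
    };

  public void Draw(Shape shape)
  {
    drawActions[shape]();
  }
}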

The problem is this pattern tends to get replicated. The class using the enumeration needs to do different things based on the various values the variable can have, and those different things spread throughout the class (or classes) using the enumeration. Then, when a value is added to the enumeration, every place the variable is tested needs to be updated to handle the new value.

This can lead to a number of problems. With behavior based on the enum scattered throughout classes, the intent can be both obfuscated and duplicated. When adding a value to the enumeration, it's easy to miss a place that needs new behavior, introducing hard-to-detect bugs. The code is fragile in the face of changes. And because the state and behavior associated with the enumeration are mixed in with the class (or classes) using it, the Single Responsibility Principle is frequently violated.

Refactoring to a data structure as mentioned above is a good first step; it addresses a number of the problems. But in today's object-oriented world, there is a better way. In many (perhaps most) cases it's better to refactor to multiple classes, one for each value in the enumeration. Fortunately, this is fairly easily done.

Here's the original starting code:
enum SomeItemEnum { one, two, three }

class A
{
  SomeItemEnum someItemValue;
  void SomeItemUser()
  {
    switch (someItemValue)
    {
      case SomeItemEnum.one:
        // Some complicated code for case one
        break;
      case SomeItemEnum.two:
        // Some complicated code for case two
        break;
      case SomeItemEnum.three:
        // Some complicated code for case three
        break;
    }
  }
}
First, a base class is created to stand in for the enumeration.
enum SomeItemEnum { one, two, three }

class SomeItemEnumBase
{
}

class A
{
  SomeItemEnum someItemValue;
  void SomeItemUser()
  {
    switch (someItemValue)
    {
      case SomeItemEnum.one:
        // Some complicated code for case one
        break;
      case SomeItemEnum.two:
        // Some complicated code for case two
        break;
      case SomeItemEnum.three:
        // Some complicated code for case three
        break;
    }
  }
}
Then, for each place where something needs to be done based on the value of the enumeration, a method is added to the base class. If there is no default behavior for a method it should be abstract; otherwise it should be virtual.
enum SomeItemEnum { one, two, three }

abstract class SomeItemEnumBase
{
  public abstract void ComplicatedCode();
}

class A
{
  SomeItemEnum someItemValue;
  void SomeItemUser()
  {
    switch (someItemValue)
    {
      case SomeItemEnum.one:
        // Some complicated code for case one
        break;
      case SomeItemEnum.two:
        // Some complicated code for case two
        break;
      case SomeItemEnum.three:
        // Some complicated code for case three
        break;
    }
  }
}
A child class is made for each value of the enumeration, with the value-specific behavior moved into the appropriate overridden method.
enum SomeItemEnum { one, two, three }

abstract class SomeItemEnumBase
{
  public abstract void ComplicatedCode();
}

class SomeItemEnumOne : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case one
  }
}

class SomeItemEnumTwo : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case two
  }
}

class SomeItemEnumThree : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case three
  }
}

class A
{
  SomeItemEnum someItemValue;
  void SomeItemUser()
  {
    switch (someItemValue)
    {
      case SomeItemEnum.one:
        // Some complicated code for case one
        break;
      case SomeItemEnum.two:
        // Some complicated code for case two
        break;
      case SomeItemEnum.three:
        // Some complicated code for case three
        break;
    }
  }
}
Change the type of the class's variable from the enumeration to the base type. (At this intermediate step the switch no longer compiles, since it now switches on a class type; it gets replaced in the next step.)
enum SomeItemEnum { one, two, three }

abstract class SomeItemEnumBase
{
  public abstract void ComplicatedCode();
}

class SomeItemEnumOne : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case one
  }
}

class SomeItemEnumTwo : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case two
  }
}

class SomeItemEnumThree : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case three
  }
}

class A
{
  SomeItemEnumBase someItemValue;
  void SomeItemUser()
  {
    // This switch no longer compiles; it is replaced in the next step.
    switch (someItemValue)
    {
      case SomeItemEnum.one:
        // Some complicated code for case one
        break;
      case SomeItemEnum.two:
        // Some complicated code for case two
        break;
      case SomeItemEnum.three:
        // Some complicated code for case three
        break;
    }
  }
}
Instead of assigning discrete enumeration values to the variable, an instance of the appropriate class is now assigned.

Old method:
someItemValue = SomeItemEnum.one;
someItemValue = SomeItemEnum.two;
New method:
someItemValue = new SomeItemEnumOne();
someItemValue = new SomeItemEnumTwo();
All the switch/if-else statements can now be changed to simple method calls that have polymorphic behavior based on the class type.
enum SomeItemEnum { one, two, three }

abstract class SomeItemEnumBase
{
  public abstract void ComplicatedCode();
}

class SomeItemEnumOne : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case one
  }
}

class SomeItemEnumTwo : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case two
  }
}

class SomeItemEnumThree : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case three
  }
}

class A
{
  SomeItemEnumBase someItemValue;
  void SomeItemUser()
  {
    someItemValue.ComplicatedCode();
  }
}
And finally, the unused enumeration declaration can be removed.
abstract class SomeItemEnumBase
{
  public abstract void ComplicatedCode();
}

class SomeItemEnumOne : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case one
  }
}

class SomeItemEnumTwo : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case two
  }
}

class SomeItemEnumThree : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case three
  }
}

class A
{
  SomeItemEnumBase someItemValue;
  void SomeItemUser()
  {
    someItemValue.ComplicatedCode();
  }
}
Now, when a new value is added, only the places that actually care about the new value specifically (e.g., where it is assigned) need to be touched. As a bonus, the compiler will complain about any unimplemented abstract methods, ensuring all required behavior is implemented.
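For example, supporting a hypothetical fourth value now means writing one new class and touching only the code that selects it:
class SomeItemEnumFour : SomeItemEnumBase
{
  public override void ComplicatedCode()
  {
    // Some complicated code for case four
  }
}

// The only other change is wherever the value gets chosen:
someItemValue = new SomeItemEnumFour();
Class A and the existing value classes remain untouched.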

Finally, admittedly, in this toy example the final result is more complicated and harder to follow than the original. If production code is as simple as this example, it doesn't make sense to make this change. However, when class A is larger and more complex, the gain in clarity and robustness can be significant. In my experience the complex case is far more common than the simple one, so in general my conclusion is that enumerations border on evil.

Friday, March 14, 2014

Weird compiler error: C2360: initialization is skipped by label

I ran into a new error the other day that was non-obvious at first glance.

I had code structured something like (greatly simplified by way of example):
void MyFunction()
{
     int a = GetValueOfA();
     switch(a)
     {
          case 1:
               int b = 2;
               DoSomething(b);
               break;
          case 2:
               int c = 3;
               DoSomethingElse(c);
               break;
     }
}
This gave me an error on the second case: "error C2360: initialization of 'b' is skipped by 'case' label." *

What?!? This is one of those messages that is technically accurate once you understand what it's saying, but utterly useless at first glance for explaining what is going on or how to fix it. Or perhaps I'm slow. I stared at it for a bit before I comprehended what it was trying to tell me.

The root of the issue is that, while the case statements appear to be in their own scope, they aren't. (I'm sure I knew this at some point, but the memory was overwritten long ago. Or perhaps I have too many languages floating around in my head.) A variable declared inside a switch is in scope from its declaration to the end of the whole switch statement, not just its own case. Therefore, in the code above, b and c are visible in every case that follows their declarations. The error says that when a = 2, b is in scope but its initialization has been jumped over.

All the solutions involve changing the scope of b and c. There are a few immediately obvious ones:

1) Put braces around the contents of the case statements to limit their scope.
void MyFunction()
{
     int a = GetValueOfA();
     switch(a)
     {
          case 1:
               {
                    int b = 2;
                    DoSomething(b);
               }
               break;
          case 2:
               {
                    int c = 3;
                    DoSomethingElse(c);
               }
               break;
     }
}
2) Put the contents of the case statements in their own functions, another way of limiting their scope.
void PerformCase1()
{
     int b = 2;
     DoSomething(b);
}

void PerformCase2()
{
     int c = 3;
     DoSomethingElse(c);
}

void MyFunction()
{
     int a = GetValueOfA();
     switch(a)
     {
          case 1:
               PerformCase1();
               break;
          case 2:
               PerformCase2();
               break;
     }
}
3) Move the declarations to before the switch statement, separating declaration from initialization.
void MyFunction()
{
     int b, c;
     int a = GetValueOfA();
     switch(a)
     {
          case 1:
               b = 2;
               DoSomething(b);
               break;
          case 2:
               c = 3;
               DoSomethingElse(c);
               break;
     }
}
Another option would be to restructure the code so the switch isn't necessary at all. In general this would be my favored solution, although in this specific case of legacy code it would have involved more work than the budget allowed.

* Actually, my notes say I also got a second error on the switch stating "transfer of control bypasses initialization of variable b," but I could not reproduce it. Perhaps I simplified too much. Or perhaps it was a different version of the compiler. Or different option settings. Or something else entirely.

Monday, February 24, 2014

Constant integer values and multi-language COM interop

I recently moved some code from a legacy C++ application into a COM library for more general use. The original code was duplicated a couple times in different C++ applications. Then a need arose to use this code in C#. To clean up both the duplication and make it available to .Net applications, we decided to untangle it from the original applications and move it into its own COM library.

One of the issues I ran into that took a bit to figure out involved constant values.

The old C code had several things defined as integers with associated constant definitions to handle bit-mapped values. Some might argue these should be converted to enumerations but we wanted to minimize changes to existing code structure and so decided to keep them as integer constants. The issue with this was how to move them to a common location for all COM clients to access.

It took a bit of research to find the answer as I didn't find a complete answer in one place. Hopefully this article will help fill that gap.

IDL files can have #define statements, and this works for C code: the MIDL compiler carries them through as #defines in the generated intermediate .h files. The problem is that when .Net creates an Interop assembly, they are ignored. This means they don't exist for use in .Net languages.

Next I used const definitions in the library. Again, this worked for C code but the Interop assembly in the .Net world did not have them.

I moved the const definitions into an interface. Still, C was perfectly happy, but the Interop ignored them.

I searched around some more. I ran into some forum discussions indicating it was impossible.

Finally I found a reference that pointed me in the right direction. IDL files can contain modules, something I hadn't run into before. I put the const declarations in a module section in the IDL file. In the intermediate .h file for C++ these are simply constants, and callers are still happy. But when the .Net Interop assembly is created, they are not ignored: the Interop generator maps the module to an abstract class bearing the module's name, and the constants become static consts inside that class. This makes them available for any .Net user to access.

So, the IDL file ended up looking like this:
library SomeComLibrary
{
    module Constants
    {
        const DWORD UniqueValue = 0xFFFFFFFF;
    };
};
C's intermediate .h file looks like this:
#ifndef __Constants_MODULE_DEFINED__
#define __Constants_MODULE_DEFINED__
/* module Constants */
const DWORD UniqueValue = 0xffffffff;
#endif /* __Constants_MODULE_DEFINED__ */
And the .Net Interop file looks like this:
namespace SomeComLibrary
{
    public abstract class Constants
    {
        public const uint UniqueValue = 0xFFFFFFFF;
    }
}
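From there, a .Net client can reference the constant like any other (a trivial usage sketch):
// C# client code
uint value = SomeComLibrary.Constants.UniqueValue;  // 0xFFFFFFFF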
The solution ended up being quite easy, but finding it was a bit of a challenge.

Things I searched for trying to find a solution to this problem included:
  • idl const not in .net
  • idl const not in interop
  • idl const not in C# interop
  • midl keywords
  • idl const .net

Thursday, December 19, 2013

How to get Ref parameter values in RhinoMocks

Mocking ref (and out) parameters with RhinoMocks is a bit tedious. I ran into something that could possibly be considered a bug; at the very least it's not really expected behavior. The good news is I found a work-around. Hopefully this will help someone else (including perhaps my future self).

The basic problem is: Ref parameter values passed to mocked methods have the default values specified in the mock setup and not the values passed by the calling code.

The background

Take the following interface as an example.
interface ISomething
{
    void DoStuff(string a);
}
Now suppose this is mocked and some things need to be done based on the value of parameter a.

Normally this is easily done:
private ISomething mockSomething;

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(Arg<string>.Is.Anything))
          .Do((Action<string>)(arg =>
               {
                    // Perform general action with arg

                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               }));
}
Every time something calls mockSomething.DoStuff, the code passed to the Do method will be executed and the parameter arg will contain whatever value was passed to DoStuff. This is routine stuff for RhinoMocks and works as expected.

The setup

Now suppose the parameter for DoStuff was a ref.
interface ISomething
{
    void DoStuff(ref string a);
}
This is where things get a bit dicey. The interface is still mocked and some things still need to be done based on the value of parameter a. So the argument constraints get some minor syntax changes to handle the ref plumbing for the compiler:
private delegate void DoStuffDelegate(ref string a);

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     // Is here is Rhino.Mocks.Constraints.Is, not Arg<T>.Is
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .Do((DoStuffDelegate)((ref string arg) =>
               {
                    // Perform general action with arg

                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               }));
}

The problem

The above code compiles. And it runs. It just doesn't run correctly. The problem is that second parameter to the Arg<T>.Ref method. It specifies a return result for the ref value. The trouble is RhinoMocks sets the parameter to the return result before calling Do's method. In other words, in this example, arg will always be string.Empty. The code in Do will never be called with the values sent to DoStuff by the original caller.

Looking at the call stack, I could see the original method call with the correct parameter values. Then it went into the RhinoMock and proxy code and then Do's method was called, clearly with the unexpected value.

Looking for solutions

Digging around, I found the WhenCalled method. This appears to run a bit earlier in the mock/proxy processing, so I changed the test.
public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .WhenCalled(invocation =>
               {
                    var arg = (string)invocation.Arguments[0];

                    // Perform general action with arg

                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }
               });
}
Nope. This didn't work either. The value for Arguments[0] has already been set to the return value.

While searching around, I found other people asking about the same issue. In their cases they found alternative solutions based on constraints of when their methods were called and what values the parameters could have. With known values, the constants can simply be hard-coded. For example, instead of using Is.Anything() as above, Is.Equal("abc") can be used with "abc" as the second parameter. Everything is fine.
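Here's a sketch of that hard-coded variant, using the same hypothetical setup as above:
mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Equal("abc"), "abc").Dummy))
     .Do((DoStuffDelegate)((ref string arg) =>
          {
               // arg is known to be "abc" here, and "abc" is written back
          }));
Since the write-back value matches the only value the constraint accepts, the clobbering doesn't matter.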

I was semi-successful with the special case by using this technique. But then I needed to perform the general action using Is.NotEqual("abc"), and I ran into the same problem as with Is.Anything(): I didn't know the original value of arg.

The solution

Widening my search, I stumbled upon an old article by Ayende discussing the Callback method. He considered it for weird edge-case parameter checking and indicated it shouldn't generally be used. As far as I could make out from his write-up, its purpose is to serve as an alternative to the Arg constraints when they aren't sufficient.

Having nothing to lose, I changed my code to give it a try:
private delegate bool DoStuffCallbackDelegate(ref string a);

public void Initialize()
{
     mockSomething = MockRepository.GenerateStub<ISomething>();
     mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
          .Callback((DoStuffCallbackDelegate)((ref string arg) =>
               {
                    // Perform general action with arg

                    if (arg == "abc")
                    {
                         // Special case action for arg
                    }

                    return true;
               }));
}
Hurrah! This worked!

Since Callback's intent is to determine whether the constraint on the stub is satisfied, it gets the original parameter values rather than the return result.

And, yes, I realize I'm misusing the intent of Callback. But when nothing else works, you go with what does.

The conclusion

Since this met my needs, I stopped here. If DoStuff were a function rather than a void method, I suppose a Return method would have to be chained onto the setup after the Callback in order to provide a valid return value for the stub. Also note, if this stub should be ignored, false can be returned from Callback instead of true; this would allow other stubs for the same method to be handled differently.
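Here's an untested sketch of that chaining, assuming a hypothetical variant of DoStuff that returns bool:
mockSomething.Stub(s => s.DoStuff(ref Arg<string>.Ref(Is.Anything(), string.Empty).Dummy))
     .Callback((DoStuffCallbackDelegate)((ref string arg) =>
          {
               // Act on the original value of arg
               return true;   // accept this stub
          }))
     .Return(true);            // value the stubbed function returns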

I'm not sure if this is a bug, intended behavior or an unconsidered edge case that has, at least to some, unexpected behavior. Both the Out and Ref methods take the return result. It makes sense for it to be mandatory for Out, but I think it should be optional for Ref. I can see cases where you'd want to stub in a particular value all the time; the current syntax supports this well. But I can also see cases where the value shouldn't be changed, at least by the Ref handling code. An overloaded version of Ref without the parameter would work well. In any case, I don't think the value should be set before WhenCalled and Do are invoked. At a minimum it should be set after, and better yet only if the original value hasn't been changed by WhenCalled or Do.

Well, that's my latest story regarding RhinoMocks. I hope it helps someone.

Friday, December 6, 2013

How to generate an alert tone from a computer

I recently read a Code Project thread that got into a discussion about beeps, internal speakers and alerting the user. This reminded me of a story from a previous life...

It was the early 90s and I worked for a company that provided software to E-911 dispatch centers. There was a machine room with racks of computers and then the actual dispatch center which only had user I/O devices. The monitors and keyboards were run through special extension hardware that allowed VGA, keyboard, mouse, serial and parallel ports to be removed from the computer by a fair distance. It seems like the system would drive them up to 100 feet away, but my memory could be faulty. In any case, we had all these extension boxes scattered throughout the center but all the built-in computer speakers were back in the server room.

A feature of the software allowed dispatchers to place calls in a pending queue. This was for low priority calls that didn't have resources available for immediate dispatch. After a timeout, things in the queue would start escalating. First they'd change color. Then they'd start flashing. The Chief wanted the next level to be an audible alert. And, with this being the fire department, audible alert didn't mean a simple "beep." It needed to be impossible to ignore, klaxon-loud and obnoxious. And of course there wasn't any budget for any significant hardware. I thought about it on the drive home and had an idea as I fell asleep.

The next morning I stopped by Radio Shack and picked up a $10 monophonic amplifier. Something simple. (Amazingly, I just searched for it and they still sell the same model 20+ years later, albeit at a slightly higher price.) I also got a 1/8" patch cord and power adapter. When I got to the dispatch center, I cut off one end of the patch cord and connected the leads to the ground and transmit pins on an RS-232 connector on one of the extension boxes. Then I plugged in the amplifier, turned the volume down low, created a short text file with some random characters and cat'ed it to the appropriate /dev/tty port. An earful of noise rewarded my efforts.

Now I knew my idea would work. Playing around with different repeating character sequences gave different patterns. Changing the baud rate would change the frequency. Eventually I came up with a combination that worked pretty well. It was loud, obnoxious and impossible to ignore. The Chief loved it. And it cost less than $20 and an hour or two of work.
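For the curious, the math behind the pitch: with 8-N-1 framing, each bit on the line lasts 1/baud seconds, so a stream of bytes whose bits alternate (0x55, say) toggles the transmit pin on every bit and yields a square wave at roughly half the baud rate, e.g. around 4800 Hz at 9600 baud. Other characters and rates produce different, harsher waveforms.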

Monday, November 11, 2013

When is the right time to select a source code control tool?


"If you’re about to start a new project it’s a good time to consider what version control solution you’re going to use." -- Article discussing TFS vs Git
I read the above statement in an article recently and my brain threw an exception when it parsed it. You know how that works. You're scanning along the text and all of a sudden, half a paragraph later, an interrupt is raised "Wait, did I really just read that?" You stop and go back over it.

There are plenty of articles covering how to choose a source code control tool. Many places compare and contrast all the options,[1] both open and closed source. You can find articles that discuss the pros and cons of distributed VCSs vs centralized ones. However, I don't recall ever seeing one about when to choose one. The following is my opinion on this seemingly under-discussed topic.

The quote above is one of many similar statements I've seen over the years. It wouldn't surprise me if many project startup policies include choice of VCS as one of their bullet points. So I'm not picking on this particular article; it just presents some common wisdom that I don't consider all that wise, and I'm using it as a jumping-off point to open the discussion.

First some context. Most of my career has been in developing shrink wrap software. The projects I work on are small parts of larger products that in turn are one of many that make up a company's product line.

The VCS contains vital enterprise assets. It should contain everything needed to build a company's current products: the source code, build scripts, installer scripts, and developer environment configuration scripts. It should also contain all the supporting documentation for users, both external and internal. Because it maintains versions of all these files, it also contains metadata about all of them. It tracks who changed what, when and, if good change set notes are used, why. It may also be linked to an issue tracker, putting those changes in a larger context than just the source code. There may be a CRM system that gets linked to it. It is one piece of a larger whole.

For a development group to start a new project and consider what VCS they're going to use is like the accounting department opening a new checking account and considering what accounting package they're going to track the transactions in. The accounting department will have an existing system in place that tracks more than simple checking registry entries. They would not change this just for a single account. In the same way, the development group should have existing systems in place to handle not just VCS but all the support systems that surround a project. They should keep all the projects in the same ecosystem.

Keeping everything in one place does a number of things. It decreases startup time; there's one less decision to make. It reduces training; developers don't need to become proficient in yet another tool. It eliminates confusion about "where do I go to get X?" It enhances cohesion among projects; it's easier to share code that may be common. It reduces maintenance costs; there's one less server that needs to be provisioned, integrated, maintained, backed up, and so on.

In my opinion, the choice of VCS is something that should be made long before any particular project starts and should not be within the purview of any single project. So, when I read something that says a new project is a good time to choose a version control system, my initial, totally biased reaction is to scream "NO!"[2]


1. And in fact that was the focus of the above article.
2. And the same could be said, to varying degrees, of any software project support tool, be it VCS as discussed here or issue tracking or process control or documentation tools or build tools or UI widget libraries or IDEs or -- the list can go on and on.