Thursday, April 18, 2013

Desperate Housewives and Software Engineering


Perception is reality

-- Bree from Desperate Housewives

Over the years I have noticed – and become convinced – that the success of software projects has surprisingly little to do with software technologies and much more to do with our perception of the world. Software projects seldom fail because Java was used instead of C++ or because Perl was used instead of Python. The hard truth is that – project management aside – software projects succeed because of our ability to align our perception of the world with the problems in the domain in which we work, as well as with the underlying technologies we use.

When thinking about perception and reality in IT development I come to think of the TV soap opera Desperate Housewives, where one of the main characters, Bree, in an episode condensed one of the cornerstones of software engineering into three simple words: perception is reality. The interpretation I attach to Bree's statement is that each person views and understands the stuff that is out there – the real world – through their perception of it.

What Bree stated may sound like a triviality, but those three magic words are the stardust of which software is made. Success in software is not fundamentally based on technical gadgetry or the latest hardware. It is based on how we view the real world – our perception of reality. Unfortunately this point is more often than not missed in IT shops.

Most people, myself included, are used to real things and do not want to learn a new skill such as dealing directly with information that is better suited to a computer. Our brains were trained through evolution to react to the stuff we call reality and will fight very hard when forced to adapt to new artificial concepts. With modern software techniques we can avoid torturing our brains by creating software that generates illusions of familiar objects from reality – or our perception of reality – instead of making us deal with bits and bytes.

Now, what have illusions to do with perception and modeling? Everything. Our perception must be presented by a computer to a user, and for that we need to trick the computer into generating an illusion of what we want the user to perceive. To create the illusion we must first model our perception of the real world.

You may wonder why I keep emphasizing – and separating – the real world from our perception of the world. It's simple: in many IT departments there is a view that the two are one and the same. Not so – the real world can be viewed from many angles and in different ways, where some features are emphasized in favor of others. Understanding that the two are not the same helps us avoid the dogmatic view that one picture of the world is right whereas all others are wrong.

To perform the feat of modeling our perception of the world, as a developer I must know and understand the illusion before I can write the code that lets the computer present the illusion to a user. In this sense a developer must work at two levels simultaneously: behind the illusion and at the level of the illusion.

It is becoming clear that one way or another we must transfigure our perception of the real world into something a computer can understand. It is important to realize that in the process we will remove large chunks of the information that forms our view of reality. After all, no software project attempts to create an illusion of everything – if that were the case, a single DOIT function would do the job.

Now – I can understand you might wonder why I go on rambling about the real world, perception, illusion and modeling. It is simple: I just want to turn the focus away from the almost obsessive focus on technologies in today's IT development. I have seldom seen a software project go bad because of the choice of a technology. However, I have seen MANY projects go over budget, fail or simply be cancelled because developers, designers, architects and managers believed that the latest products from the main software vendors could solve all problems when in fact, with few exceptions, the products often created far more problems than they solved.

I have to say thanks to Bree for pointing out a cornerstone of software engineering. It is a shame, though, that a fictional character from a TV soap opera has to teach the IT community a fundamental principle that more often than not is ignored in IT shops.

Before closing this entry - this final quote has always been of great help when trying to get a perspective on what we are really doing when developing software systems:


He said that for a sorcerer, the world of everyday life is not real, or out there, as we believe it is.
For a sorcerer, reality, or the world as we all know it, is only a description.

-- Carlos Castaneda, Journey to Ixtlan: The Lessons of Don Juan

Friday, April 12, 2013

Why the obsession with object Life Cycles?


The less we understand something, the more variables we require to explain it.

-- Ackoff, The Art of Problem Solving


The obsession with designing IT systems around the notion of life cycles of business entities has bothered me for many years. This is not specific to IT; it applies to modeling at any level of an enterprise.

Let me give an example of the absurdity of modeling a business by describing the life cycles of its business entities.

Say I'm planning to fly to London early tomorrow morning. A person of normal constitution would plan the trip according to something closely following the sequence:

· Set the alarm clock
· Go to bed
· When the alarm clock rings, get out of bed
· Eat breakfast
· ...
· Go to the car, start it and drive to the airport
· Get on the plane
· ...

I can say with almost certainty that few, if any, persons would describe their plan in terms of the life cycles of the alarm clock, the car, the plane and so on. What I mean is that few would describe the trip in terms of the internal state transitions of the alarm clock, the car and the plane. This sounds pretty obvious, but it is not uncommon to do just that in many large IT shops such as banks.

Let's look at an example. A financial instruction in a bank can be seen as an object that transitions between states. For example, an instruction may be in the NOT-VALIDATED state, then transition to the INVALID state, continuing to the REPAIRED, VALID and PROCESSED states and so on. Valid transitions are often described in a state diagram where events trigger state transitions. The state diagram is what is normally called the life cycle of the instruction.

Now, let's see what's wrong with this picture.

If I describe my business in terms of the life cycles of business entities I'll run into trouble. The reason is simple: the life cycle of a business object is a consequence of one or more processes managing the object. Making the life cycles of many business objects work smoothly together is a non-trivial task that inevitably creates a rat's nest of design and code hacks.

When fusing a life cycle together with an entity I effectively grab the entity and eliminate the chance that it can easily be used in some other processing or, even worse, modified to serve the business better. After all, I'll have to attach a state attribute to the entity to keep track of where in the life cycle the entity is. If one day I decide that the entity is to be used in some other processing scheme, I'll have to add another state variable to the object which, by the way, must cooperate with the first one.

Clearly, this situation is unmanageable – so what's the solution? Let's start by looking at business processes as a potential solution.

There are two common views of what a business process is. The first one is simple. A business process is a sequence of events that are ordered in time. Processes are commonplace in everyone’s life. Waking up in the morning, starting the coffee machine, eating breakfast, showering and driving to work is part of a process many of us have to endure in everyday life. Whether the events are planned ahead or not, the events form a sequence that is ordered in time. They form a process.

A second – more meaningful – view of what a business process is goes as follows: a business process exists for the purpose of achieving goals that have been set up by a business. Processes manipulate business entities in such a way that they optimize some state of the business. Typically that state is called profit. In other words, the purpose of a business process is to move entities in a business toward what would be seen as an optimal state for the business. Essentially, a process describes how you want the world – i.e. your business – to behave.

Does this not slap object orientation right in the face – having manager objects (processes) manage other objects? If we really want to get into semantics we can always say that a business process IS AN object that knows what needs to be done and when. Separating the what and when from the how follows good IT practice: the process knows when and what needs to be done, whereas a business object knows how to change its internal state in a consistent manner. Isn't that what software design is all about – avoiding tight technical coupling between concerns that are not conceptually tightly coupled?

And by the way, why is object orientation treated as a sacred cow? Whatever OO means, it should not be the guiding principle when modeling or developing software. The guiding principle should be to create simple and clean software.

Now, why am I writing a blog entry about all this? The reason is that more often than not designers, managers and architects enforce the view that business entities have life cycles. In a naive, simplistic view business entities do have life cycles, but only as a consequence of the execution of one or more business processes. It must not be the life cycles of entities that drive processes but rather the opposite: processes must drive the sequences of state changes of objects that are often called life cycles.

In the end it all boils down to what perspective we want to take. Should we model our business from a bird's-eye view of the management of our business entities? Or should we take a bottom-up approach, starting with the intricate details of business entities and working our way up? Viewing myself as lazy – hating to keep track of lots of state variables – I would go for the former approach anytime!


Wednesday, April 10, 2013

Elegance in C++ parameter passing



You have disengaged from the planetary brain and no longer serve a useful purpose

-- The Outer Limits, last episode of 3rd season


There is no doubt that C++ metaprogramming is here to stay. Ignoring it as too complex or too convoluted will over time render anyone's C++ programming skills outdated and obsolete.

With the C++11 standard at our fingertips many metaprogramming functions are readily available. I believe that with a set of standard metafunctions incorporated into the C++11 standard, together with the drift of C++ applications toward metaprogramming techniques, a host of new clever and possibly insidious idioms will appear.

In this post I'll show a metaprogramming hack I picked up on the web that is simple yet elegant. I did not invent this idiom and unfortunately I cannot remember where I got it from, so regrettably I can't give credit to anyone.

A problem that can easily be solved using metaprogramming is to transparently modify the parameter type when calling a function. Why would anyone want to do that, you may ask? The reason is simple: integral types should be passed by value, whereas more complex types should be passed by const reference – unless, of course, they can be modified by the function. This allows (some) compilers to pass parameters in registers as opposed to on the stack. Passing parameters in registers is significantly faster, and metaprogramming techniques help us do just that.

By using a few metafunctions from the C++ library it is surprisingly easy to manipulate types so that we automatically pass them the right way. The following example illustrates how it can be done:


Here it goes!

#include <iostream>
#include <string>
#include <type_traits>
using namespace std;

// type converter for passing parameters
template<typename T>
struct param{
  typedef typename conditional<is_integral<T>::value,
    T,                                             // (1)
    typename add_lvalue_reference<const T>::type   // (2)
  >::type type;                                    // (3)
};

// a sample function for testing the type converter (4)
template<typename T>
void foo(typename param<T>::type t){
  cerr<<boolalpha<<"parameter type is reference: "<<is_reference<decltype(t)>::value<<endl;
}

int main(){                                        // (5)
  int i=5;
  string s="Hello";

  foo<int>(i);
  foo<string>(s);
}

The output is:

parameter type is reference: false
parameter type is reference: true

The param metafunction is written in the standard way, where the result is retrieved through the nested type member of the metafunction. Point (1) is selected when the type is an integral type; otherwise point (2) is selected. At point (2) a new type is constructed by creating a const l-value reference from the template parameter.

That's really all there is to it. A small sample program (5) using the test function (4) shows that the type generator works correctly.

Now, in terms of performance, does it really make a difference? To test whether there really is a performance gain from passing an integral type by value, I will modify the foo function slightly to prevent the gcc optimizer from inlining it, by prepending a gcc-specific attribute. The test harness is simple and somewhat artificial, but I want to get some indication of the kind of performance enhancement I can expect.

First I'll modify the foo() function – renamed foo_optimized() to match the test program – so it is not inlined by the compiler. I'll also have to call a function in a different translation unit – bar() – or gcc will still inline the calls:

void bar();   // defined in a different translation unit

template<typename T>
__attribute__ ((noinline)) void foo_optimized(typename param<T>::type t){
  bar();
}

The function bar() is defined in a different translation unit:

void bar(){}


The test program is a simple main() that runs two tests. The first one calls foo_normal(), which takes its parameter by const reference, whereas the second calls foo_optimized(), which makes use of the param type conversion metafunction. foo_normal() is defined as:

template<typename T>
__attribute__ ((noinline)) void foo_normal(T const&t){
  bar();
}

Here is the test program, using the C++11 std::chrono library to measure time:

int main(){
  using Clock=chrono::high_resolution_clock;
  using TP=Clock::time_point;
  using DURATION=Clock::duration;
  using PARAM=unsigned long long;

  Clock clock;
  const size_t niter=1000000000;

  TP tpStart1{clock.now()};
  for(PARAM i=0;i<niter;++i)foo_normal<PARAM>(i);
  auto duration1=chrono::duration_cast<DURATION>(clock.now()-tpStart1);
  auto t1=chrono::duration_cast<chrono::duration<double,std::milli>>(duration1).count();

  TP tpStart2{clock.now()};
  for(PARAM i=0;i<niter;++i)foo_optimized<PARAM>(i);
  auto duration2=chrono::duration_cast<DURATION>(clock.now()-tpStart2);
  auto t2=chrono::duration_cast<chrono::duration<double,std::milli>>(duration2).count();

  cerr<<"milli sec time passing integer by reference: "<<t1<<endl;
  cerr<<"milli sec time passing integer by value: "<<t2<<endl;
  cerr<<"%time decrease: "<<100*(t1-t2)/t1<<"%"<<endl;
}

As you can see, most of the code is just time measurements. Normally this would be managed by some StopWatch class, but here I prefer to make it explicit.

The printout when executed on a virtual Linux box, compiled with the -O3 flag, is:

milli sec time passing integer by reference: 4984.19
milli sec time passing integer by value: 3388.31
%time decrease: 32.0189%

Now, a 32% decrease in execution time is not too bad I would say!