Thursday, May 1, 2008

Model Execution Platform - build vs assemble

As one decides to embark on the model driven development journey, one has to answer how a platform would be created to execute the model. By executing the model I mean converting it into runnable code or interpreting it. Performance is something I have not personally seen examples of in MDD. Till we really get a handle on performance it is difficult to decide which way one should go - build or assemble. Given that performance goals are honored in both approaches, what would be the next deciding factor?

Building the whole framework from scratch is just impractical given time to market. Assembling open source and commercial software doesn't work either, as these pieces do not fit together seamlessly. There are MDD platforms being built, but I still see organizations unwilling to bet on them.

The approach some cautious organizations take, then, is to pick open source / commercial products and write code around them to create the platform. And slowly one goes overboard developing one's own versions of what is already available, just because as implementation gets deeper the stakes get higher and the requirements from each new component increase.

There is one bottleneck which is increasingly becoming important, and it is the learning curve. The learning curve needs to be managed as one builds proprietary stuff. It is important for the following reasons,

1. A small learning curve means higher productivity. There is more room left in your RAM to look into things other than managing your platform.
2. Faster ramp-up of new hires means tolerance to churn.
3. The ramp-up experience also ends up positioning the platform. An easier platform gives the impression of a well thought out and well designed platform.
4. A smaller learning curve might be the only way as systems become complex.

Now a smaller learning curve can be achieved either by hiding details from people or by using well documented, well designed components. Hiding details and presenting a good front to the system takes a lot of effort, not only in development but in maintenance too. Well documented and well designed components that conform to standards shorten the learning curve. Open source components have the benefit of good documentation and almost always adhere to standards. If you pay for a similar commercial component, chances are higher that it will be of good quality, but documentation is almost always better for open source.

Model is the Front Gate to your code

Treat the model as the front gate to your Code House. A guest/newcomer to your house enters through the front gate, is welcomed by the garden on both sides, and is guided by the path that takes her to the main door. She rings the bell, finds a drawing room, and then you take her around the house.

Contrast that with the current state of out-of-sync, half-finished design documents, which are like a backyard entry. You don't know which room you will land up in, and you don't know how many rooms you will have to navigate yourself till you find someone who can show you around. And all the other ways in are like entering through windows, the roof or the chimney.

This thinking will go a long way in making your code more maintainable. Besides cost benefits (a smaller learning curve and fewer bugs), it has soft benefits like developer morale. It motivates developers to innovate. So, use models wherever possible just to organize code, just to create that front door.

Domain Driven Design and Model Driven Design

Here is an article that talks about both,

http://domaindrivendesign.org/discussion/blog/evans_eric_ddd_and_mdd.html

While MDD can be applied while staying at a distance from the domain, DDD takes the concept further, saying that the domain should be at the centre. MDD should then be applied to the different pieces of software.

UML as a modeling language

UML has emerged as the language for modeling. It's the common language software developers speak to communicate with each other. It is very good for modeling a software system and is mostly targeted at software engineers. But modeling a system is mostly the domain of the end user or domain expert. Then comes the need for defining a language for the domain which is to be modeled. UML provides for profiles, which is like defining a language for modeling.

That is where I have a question. Are we not using UML for the sake of using it? How easy will it be for domain experts to model, or even understand, models created with UML profiles? The notion of a stereotype is understandable for a software engineer who treats the class concept as fundamental. But a domain expert doesn't need to know what a class is. I guess it is because of this that we have languages like RDF and OWL to model resources and real things. These languages are specifically designed for the domain they will be used for. Of course you can always create a UML model for anything described using these languages, and it should be done if we have enough of a toolset to create products from UML.

If you have to describe a banking system you need concepts like Account, Customer, Interest etc. and a way to use these to describe a particular banking system. The domain expert doesn't need to know any more. Once done, this may be translated to UML models which could lead to the product.
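
As a minimal sketch of what that vocabulary could look like on the software side (the names and shape here are my own illustration, not any particular MDD toolset), the concepts could be captured as plain Java types that a later step maps onto UML models or generated code:

    // Minimal sketch of a banking domain vocabulary (illustrative names only).
    // A domain expert works with these concepts; a generator could later map
    // them onto UML models or executable code.
    public class BankingDomainSketch {

        static class Customer {
            final String name;
            Customer(String name) { this.name = name; }
        }

        static class Account {
            final Customer owner;
            double balance;
            double interestRate; // e.g. 0.04 for 4% per year

            Account(Customer owner, double balance, double interestRate) {
                this.owner = owner;
                this.balance = balance;
                this.interestRate = interestRate;
            }

            // "Interest" expressed as a relationship between balance and rate.
            double yearlyInterest() {
                return balance * interestRate;
            }
        }

        public static void main(String[] args) {
            Account account = new Account(new Customer("Asha"), 1000.0, 0.04);
            System.out.println(account.owner.name + " earns " + account.yearlyInterest());
        }
    }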


Modeling is human centric and needs to be simplified for that particular use.

Abstraction and “What”

If you look at the way our software programming has evolved, abstraction would be the word that ties most of it together. We started with assembly language and its linear sequence of instructions, abstracted blocks of code into functions and created languages like C. We simulated concurrency with another abstraction, threads. Object orientation helped us manage much more complex projects by shifting the focus from problem solving to describing a system. It has been a journey of abstracting out concepts, and the direction is “what”. With every abstraction we come closer to specifying “what” we want rather than “how” it will be done. Although both are important, “how” is much more reusable than “what”, and we have done a good job of “how”. It will always awe us, the ways in which we use “how” and permute different “hows” to create different “whats”. So, there will always be many more “whats” than “hows”.
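
A small, self-contained illustration of that direction (my own example, not tied to MDD): sorting a list by spelling out every step, versus declaring the desired order and letting the library own the “how”.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class WhatVsHow {
        public static void main(String[] args) {
            List<String> names = new ArrayList<String>();
            Collections.addAll(names, "Ravi", "Asha", "Meera");

            // "How": spell out every step of a bubble sort ourselves.
            for (int i = 0; i < names.size() - 1; i++) {
                for (int j = 0; j < names.size() - 1 - i; j++) {
                    if (names.get(j).compareTo(names.get(j + 1)) > 0) {
                        Collections.swap(names, j, j + 1);
                    }
                }
            }

            // "What": declare the desired order; the library owns the "how".
            Collections.sort(names);

            System.out.println(names);
        }
    }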

Highly specialized DSLs (Domain Specific Languages) are being used nowadays, and this brings us closer to describing the “what” of a system with minimum effort. This is the reason we developed languages like mathematics. I remember my first job involved creating simulators. Mathematical models of system elements were provided, and then any system with any combination of those elements could be modeled and simulated/solved. We are following the same path to handle complexity – the model way.

A highly specialized modeling language should allow domain experts to define the “what”. Software should then be written to interpret or otherwise consume that model to realize the “what”. This means software development should start with defining the modeling language and then branch into two tasks: one to develop the software for the modeling language, and the other to define the “what” using the modeling language.
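
To make the split concrete, here is a minimal sketch; the model format, names and rates are invented for illustration. The “what” is a couple of declarative lines a domain expert could write, and the small interpreter below is the software that realizes it.

    import java.util.HashMap;
    import java.util.Map;

    // The "what" is a tiny declarative model (invented format);
    // the interpreter below is the "how" that realizes it.
    public class TinyModelInterpreter {

        public static void main(String[] args) {
            // The model a domain expert might write: account type -> yearly interest rate.
            String[] model = {
                "savings earns 4%",
                "fixed-deposit earns 7%"
            };

            // The software side: parse the model once...
            Map<String, Double> rates = new HashMap<String, Double>();
            for (String rule : model) {
                String[] parts = rule.split(" earns ");
                double rate = Double.parseDouble(parts[1].replace("%", "")) / 100.0;
                rates.put(parts[0], rate);
            }

            // ...and interpret it against concrete data.
            double balance = 1000.0;
            System.out.println("savings interest: " + balance * rates.get("savings"));
        }
    }

The two halves can then evolve on their own schedules: new rules go into the model, while the interpreter stays put.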

The “whats” will always change, as change is the only constant, at least in this real world. If we develop software targeting the “what” we will always create software that ages, that may not even satisfy the “what” in the first iteration because the “what” changed by the time the product was out. At times the “what” is not definable in one shot; it evolves and can’t be verified until the product is made. The modeling language, on the other hand, remains more or less constant, is very well defined and opens up a multitude of possibilities.

In fact the “whats” should be in the public domain, shared and jointly owned. The software that realizes the “whats” may be owned by companies so that better realizations are always being created.

Is Model an end?

The creation of the universe is an example that fits the current discussion. The big bang model says that it started with an explosion; intelligent design points at a God-like entity. These models can explain the currently observable phenomena. Is that the end of the analysis?

Or, because we have multiple models to explain the current situation, and we believe that there has to be only one truth, we should dig deeper to eliminate models. That means we have to play with a model and predict things beyond current observations, look for inconsistencies with the existing models, and then choose the best model and live with it till an observation invalidates it or a superior model is proposed.

But if there were no models, would you still do all this? Well, there would be no need for all the applications, but to believe in a model we would still like to dig deeper and validate it for extreme situations that are of no immediate necessity. One could say that understanding is the end, so a model is not an end in itself. That is, models should be proposed, evolved, scrapped and recreated for the end goal of understanding.

What do I mean by model?

By modeling I mean identifying and specifying the concepts and relationships of the domain being modeled. Modeling a domain not only represents the collective knowledge, but aids understanding and, in advanced usage, can help prediction. Most of our science and mathematics is aimed at modeling the world. What is Newton's law of gravitation but a modeling of the real world for its mechanical properties?
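
To read that example as a model: the concepts the law identifies are two masses m_1 and m_2 and the distance r between them, and the relationship it declares (with G the gravitational constant) is

    F = \frac{G \, m_1 \, m_2}{r^2}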

Scientific experiments start with a hypothesis and then verify it, and a hypothesis is but a model: it identifies the concepts that are relevant to the experiment and it declares relationships between those concepts. The aim of the experiment is to verify the relationships.

Modeling is common to many domains, and current movements such as the semantic web are about modeling domains so that software can process them. It is with this thought that I start this blog.

Small teams vs organizational uniformity

Companies like Google have small teams that use their own set of technologies and churn out products faster. And this has always worked. And there are companies that prefer organizational uniformity. If one builds an MDD platform, most probably one is going with organizational uniformity.

What I have seen is that standardizing on technology and platform tends to cut innovation. Broadly for two reasons,

1. When new people come on board, they bring a new pair of eyes. Given a chance to use what they know and develop what they see lacking in the offerings, innovation is higher. In uniformity-based environments they end up spending time learning (and unlearning what they knew) and most of the time end up as good as (or as bad as) the existing employees.
2. This might not look that important, but my experience says it is. Being handed directions from the top, right at the time of joining, sets up a "take it all" attitude in people. They stop questioning or thinking out of the box. Not too many things should be etched in stone.

But the maintainability of a uniform platform outweighs the other options. And what needs to be managed is, again, the learning curve.