In the previous posts, we discussed *what* MDA aims to do. Now I'm going to show you *how* MDA aims to do it.

First, a heads-up: I will need to use some more *technical terms*, and since a long explanation of each would be cumbersome here, if you are in doubt about one you can look it up on the Glossary page or just google it for more details. OK?

The MDA Framework is, in essence, the set of concepts you need in order to understand MDA.

So, try to read and understand the figure below.

Simplified MDA Framework.

It is a simplified version of the MDA Framework, and it is very straightforward to see what it does.

First, a Computation Independent Model (CIM), a model of the business rules of the system in focus that specifies no computer implementation details, is transformed into a Platform Independent Model (PIM), a high-level model tied to computational concepts but not to any platform-specific implementation. Second, the PIM is transformed into a Platform Specific Model (PSM), a model of an implementation on a specific computing platform. Third, the model with platform-specific information (the PSM) is translated into code.

Performing these steps, in an ideal scenario, would be enough to have a running system.
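To make these levels concrete, here is a tiny invented example that follows a single business rule down the chain. The rule, the class names, and the UML-to-Java mapping are my own assumptions for illustration only:

```java
// CIM (business level): "Every customer has a name and may place orders."
// Pure business vocabulary -- no computing concepts at all.

// PIM (computation level, e.g. expressed in UML):
//   class Customer { name : String } --- 0..* ---> class Order
// Computational, but says nothing about Java, databases, or frameworks.

// PSM/code (platform level): the same concept mapped onto plain Java.
import java.util.ArrayList;
import java.util.List;

public class Customer {
    private String name;                                   // UML attribute -> Java field
    private final List<Order> orders = new ArrayList<>();  // UML association -> collection

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<Order> getOrders() { return orders; }
}

class Order { /* details omitted */ }
```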

So, things seem very straightforward in MDA, don't they? It is not quite as simple as the last figure suggests, but not much more complicated either, as you can see in the next one. Below is a more detailed version of the MDA Framework.

MDA Framework.

Have you completely understood it? If your answer is "For sure, I have!", feel free to stop here, but if it is something like "Not at all" or "A little bit", let me explain.

Let's read this figure from the bottom up, OK? You can see that the process PIM->PSM->Code is the same as the one presented in the previous figure (Simplified MDA Framework), except that the CIM is omitted here for the sake of simplicity.

The input and output models of each transformation are written in some language, e.g. UML for PIMs and Java for PSMs. In our figure, the PIM is written in language 1 and the PSM is written in language 2.

The transformation from PIM to PSM is performed by a transformation tool that uses a transformation definition. This definition is a set of transformation rules that map elements of language 1 to elements of language 2. For instance, in a simple transformation from UML to Java, UML classes could be mapped to Java classes, UML attributes to Java fields, and UML operations to Java methods.
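Real transformation definitions are written in dedicated languages (OMG's QVT standard, or ATL, for instance), but as a toy sketch of the idea only, with mini-metamodels I invented for this post, such a rule boils down to something like:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical mini-metamodels for the two languages (not real MDA APIs).
record UmlAttribute(String name, String type) {}
record UmlClass(String name, List<UmlAttribute> attributes) {}
record JavaField(String name, String type) {}
record JavaClass(String name, List<JavaField> fields) {}

public class Uml2Java {
    // One transformation rule: map a UML class to a Java class,
    // and each UML attribute to a Java field.
    static JavaClass transform(UmlClass source) {
        List<JavaField> fields = source.attributes().stream()
                .map(a -> new JavaField(a.name(), a.type()))
                .collect(Collectors.toList());
        return new JavaClass(source.name(), fields);
    }

    public static void main(String[] args) {
        UmlClass pim = new UmlClass("Customer",
                List.of(new UmlAttribute("name", "String")));
        JavaClass psm = transform(pim);   // PIM -> PSM
        System.out.println(psm);
    }
}
```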

Just like models, transformation definitions are written in some language, called a transformation definition language.

For each model language (language 1 and language 2) and transformation language (the transformation definition language) there is a language that describes it, generally called a metamodel. If, in a Java model, you have an if statement, then in the metamodel of the Java language you have a description of such a statement, and the same holds for every element of the Java language.

Just as models are described by metamodels, metamodels are described by metametamodels, and metametamodels generally describe themselves. The OMG standard metametamodel is the Meta Object Facility (MOF). Another metametamodel is Ecore, an Eclipse standard, since it is the metametamodel for implementations in the Eclipse Modeling Framework (EMF), the framework that supports MDA solutions on the Eclipse platform.
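For example, with EMF you can define a small metamodel programmatically through the Ecore API. The sketch below (the "library"/"Book" metamodel is invented; it assumes the EMF libraries are on the classpath) builds a metaclass with one attribute:

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class TinyMetamodel {
    public static EPackage build() {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        EPackage pkg = f.createEPackage();        // container for the metamodel
        pkg.setName("library");
        pkg.setNsURI("http://example.org/library");
        pkg.setNsPrefix("lib");

        EClass book = f.createEClass();           // a metaclass: "Book"
        book.setName("Book");
        pkg.getEClassifiers().add(book);

        EAttribute title = f.createEAttribute();  // a meta-attribute: "title"
        title.setName("title");
        title.setEType(EcorePackage.Literals.ESTRING);
        book.getEStructuralFeatures().add(title);

        return pkg;
    }
}
```

Note that Ecore is itself described in Ecore, which is exactly the self-description property mentioned above.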

Since OMG has published the MOF2Text specification, we have included, on the right side of the figure, the part concerning the transformation from models to concrete code.

This kind of transformation is also performed by a transformation tool that uses a transformation definition written in some transformation specification language that conforms to MOF. The main difference between model-to-model (M2M) transformations and model-to-text (M2T) transformations is that the latter do not declare an output metamodel, as the output is textual code.
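In real tools this is done with template languages (MOF2Text, implemented by Acceleo, for example), but stripped of the templates the idea is just "walk the model, emit strings". A minimal sketch, reusing the same kind of toy PSM types as before (all names invented):

```java
import java.util.List;

public class Java2Text {
    // Hypothetical PSM mini-metamodel, as in the earlier sketch.
    record JavaField(String name, String type) {}
    record JavaClass(String name, List<JavaField> fields) {}

    // One M2T rule: walk the PSM and emit plain text.
    static String generate(JavaClass model) {
        StringBuilder out = new StringBuilder();
        out.append("public class ").append(model.name()).append(" {\n");
        for (JavaField field : model.fields()) {
            out.append("    private ").append(field.type())
               .append(" ").append(field.name()).append(";\n");
        }
        out.append("}\n");
        return out.toString();   // just a string: no output metamodel involved
    }

    public static void main(String[] args) {
        JavaClass psm = new JavaClass("Customer",
                List.of(new JavaField("name", "String")));
        System.out.print(generate(psm));
    }
}
```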

Well, this is the MDA framework. I hope you understood it.

There are a lot of benefits to implementing MDA. The first step to getting started is to educate management about the risks and the advantages. This article will help you with that task. In the process, I will point out some of the subtler benefits of MDA and show how implementing MDA will change the way you do business in ways you never imagined.

MDA's best-known benefit is the fact that code can be generated directly from the model: any additions and changes to the model will automatically be reflected the next time the code is generated. People are often skeptical of code produced by a generator, since generators in the past have produced code that is either inefficient or does not provide much value. There are technologies available now that allow meaningful code (code that will compile and run business logic) to be generated from a model.

Using a code generator provides additional benefits. A code generator can be a single point of reference used to update code in multiple locations. For example, suppose there were a requirement in the system that stated, "The first name shall be 30 characters or less." There can be multiple locations where code must be written to realize this requirement; validation logic is a good example. The UI layer will need validation logic to ensure that the first name entered is not longer than 30 characters, and the same validation may need to be performed by a web service and again at the database layer when saving the data.
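As a deliberately simplified sketch of that idea (the names and emitted snippets are invented, not any particular generator's output), the generator holds the rule once and emits it for every layer that needs it:

```java
public class NameRuleGenerator {
    static final int MAX_FIRST_NAME = 30;   // the single point of reference

    // Emit the UI-layer check.
    static String uiValidation() {
        return "if (firstName.length() > " + MAX_FIRST_NAME + ") showError();";
    }

    // Emit the same rule for the web-service layer.
    static String serviceValidation() {
        return "if (firstName.length() > " + MAX_FIRST_NAME
                + ") throw new ValidationException();";
    }

    // Emit the matching database column definition.
    static String databaseColumn() {
        return "first_name VARCHAR(" + MAX_FIRST_NAME + ")";
    }

    public static void main(String[] args) {
        // Change MAX_FIRST_NAME once and regenerate: all layers stay consistent.
        System.out.println(uiValidation());
        System.out.println(serviceValidation());
        System.out.println(databaseColumn());
    }
}
```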

Code produced by a code generator can also be used to enforce coding standards. Since everyone has their own coding style, some developers will inevitably write more or less efficient code than others. Having the code come out of the generator ensures that the efficient coding practices your development group has agreed upon are implemented and enforced. Additionally, code created by the generator is written in a consistent manner.

Organizations will find that once they begin using a code generator, their focus shifts from writing all their code by hand to identifying opportunities to move more hand-written code into the generator. When code needs to be optimized or refactored, that effort should be done within the generator as well.

What is not always mentioned is that automating more tasks through code generation frees up capacity among your engineering and quality assurance resources. These resources tend to be the bottleneck because they are the last to perform their work before the product ships to the customer. Having redundant tasks removed from their plate allows them to get involved sooner in the process and spend more time solving problems.

Code generation is the best-known benefit, but there are several others as well. Working within a model requires a paradigm shift in your way of thinking. Once you realize that your various artifacts and documents (e.g. requirements, use cases, features, wireframe diagrams, behavior diagrams, test cases, even code) are containers for information, it opens up the capabilities of your development process. Entering all that project artifact information into the model streamlines your development process.

  • Looking into a model gives you the ability to see your system at various levels: you can see a big-picture view of your system and then drill down into greater levels of detail.
  • With information living in the model rather than in documents, there is no extra overhead wasted on updating and moving/protecting documents (e.g. checking them in and out of version control). Documentation is also completed along the way, as part of the development effort.
  • Using a model allows your organization to balance the need to document your system for the future against the need to get the code out the door without wasting time.
  • People enjoy working in models. Let's face it: writing documents is not much fun. Documenting the various aspects of your system is a lot more enjoyable when working with a graphical interface.
  • Models are also a much better tool for sharing information. Using UML diagrams, team members are able to speak a common language. The graphical presentation allows "business people" to understand a diagram and "technical people" to derive actual benefit from it.
  • Working in a model inspires teamwork – team members can make modifications "on the fly" in the model right there during a meeting. There is no need to write down notes and then update documentation later.
  • Capturing information in a central repository creates a center of truth – since the model is the source and location where all project information is stored, there is one place where every stakeholder knows to go to find answers and enter their information. Conversely, anything not in the model cannot support code generation and, in a sense, "does not exist". This creates clarity among team members about what they have to do and where to find the information they need.
  • Traceability – having all the information in one place lends further advantages. Some modeling tools are equipped with a relational database to persist the model data, which is very powerful because it allows data to be related to other data. With these relationships created, the data about your project can be used to manage your project. For example, suppose you model the relationships between your requirements, use cases, wireframe diagrams, the methods that "power" those wireframe diagrams, and test cases. You now have a very powerful information architecture that you can query to answer questions about those objects and their progress during development. Answers to questions like "How many test cases do I have per use case?" can help you determine whether you have the appropriate level of test coverage. How much have we completed since the last sprint? If I need to modify this requirement, what would be the impact? All these questions can now be answered with facts from the model, not with estimates from your project resources (see the query sketch just after this list).
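Here is a toy sketch of such a query (the record types are invented; a real modeling tool would expose this through its own repository or query API). Once the relationships are data, counting test cases per use case is a one-liner:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TraceabilityQuery {
    record UseCase(String name) {}
    record TestCase(String name, UseCase covers) {}

    public static void main(String[] args) {
        UseCase login = new UseCase("Login");
        UseCase search = new UseCase("Search");
        List<TestCase> tests = List.of(
                new TestCase("valid password", login),
                new TestCase("locked account", login),
                new TestCase("empty query", search));

        // "How many test cases do I have per use case?"
        Map<String, Long> coverage = tests.stream()
                .collect(Collectors.groupingBy(t -> t.covers().name(),
                                               Collectors.counting()));
        System.out.println(coverage);   // {Login=2, Search=1}
    }
}
```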

 

In the figure above, the model becomes the center of truth: the place where stakeholders find information and where they solve problems.

Using a model to support MDA changes the way organizations conduct their development activities. Not only are there efficiencies gained from being able to generate code; using a model also creates a central "information hub". All team members know where to find the information they need and exactly what they need to do. The model becomes the place where work is done and where team members collaborate to solve problems.

Here is something to show your boss to demonstrate the benefits of MDA.

http://www.canoniccorp.com/pdf/Canonic-IBSCaseStudy.pdf

If you have any additional questions about starting an MDA project or questions about MDA in general, you can contact me through my company contact form http://canoniccorp.com/company-contact.aspx. I hope this helps and good luck in beginning your journey to MDA.

About the Author:

Eric Keich is a Principal Consultant at Canonic Corp. He has been involved in bringing modeling techniques to companies for 8 years and has been involved in highly successful, enterprise-class MDA implementations. For more information about Canonic’s Model Driven Business® visit www.canoniccorp.com

In the last post, we started to discuss the present software development life cycle. If you want to recall what it looks like but would rather not reread the whole post, taking a look at the classic picture below is enough.


It is about the mismatches between system requirements and implementation. This and other important issues were discussed in the last post.

The problem with the present SDLC is that, in some fashion, the implementation drifts as far from the requirements as the code drifts from the models.

So, how do we make code and models follow each other? This is the big question MDA aims to answer.

MDA aims to shift the development focus from code to models, in such a way that models become the code and developers no longer work on code, but on models. Then, once developers have the right, well-specified models, those models are automatically transformed into code.

This approach improves productivity, since building high-level models is faster than coding and, to a certain extent, the models represent the code. Automatic code generation, though, seems to be the main key to improving productivity.

Portability becomes more achievable because, once you have a higher-level model (a Platform Independent Model), you can transform it into lower-level models (Platform Specific Models) for many other target platforms without having to rebuild or rewrite all your code.

Interoperability is achieved by means of bridges provided between models. Bridges allow platform-specific and platform-independent implementations to interoperate.

As for maintenance, it becomes more manageable since changes are no longer made to the code, but to the models. And since a model also plays the documentation role, the documentation is always up to date.

The higher-level approach MDA brings to software development can help keep the specification (as a high-level model) close to the implementation (as code), and that makes customers happier.

 

We've explained *what* MDA aims to do, but if you are interested in *how* it can be done, wait for the next post, where the MDA Framework will be presented.

The first step in our journey through Model Driven Architecture technology is to analyze why MDA must succeed in the coming years. And the best way to do this is to analyze why the present software life cycle doesn't.

Software development seems to have improved greatly over the last few decades, which is quite remarkable given the ever more complex systems we build today. However, costs, maintenance, and deadlines are still challenges in the software industry. Software is still a very expensive product.

Taking an iterative and incremental approach, the present software life cycle can be understood through the figure below.

Present Software Development Life Cycle

Let's go through a quick, shallow explanation (if you are already familiar with these concepts, you can skip the phase descriptions below):
Analysis: The phase where the requirements are defined; here you'll learn *what* the system you are developing is supposed to do. It is a well-defined picture of the problem your system aims to solve. In the most common cases, the final results are represented by CRC cards or UML use case diagrams.


Design: This phase aims to develop a solution for the problem you have to solve. The solution is often a computational model that describes, ideally, the best way to meet the requirements defined in the previous phase. Here, you'll learn *how* the problem is solved.


The focus of these first two phases is on models rather than on implementation details, although the design model can encompass some of them.


Implementation: This phase is focused on coding the solution from the design phase. Here, code is produced to run on real hardware, using specific technologies for specific platforms.


Testing: The phase where the produced code is validated against the requirements and the design class model. The system's robustness depends, in great part, on the tests. However, when a system passes the tests it doesn't mean the system is right; it only means it passes the tests. Even so, tests are essential to achieving a robust system.


Maintenance: This is a special phase, because you can tell whether the system was well specified, designed, coded, tested, and (a very important issue) documented by how easy its maintenance is to manage. When a piece of software is hard to maintain, it is due, among other reasons, to bad technology choices, bad design, dirty code, or even bad documentation.

So, what’s the problem with the present life cycle?

The problem is that the first two phases are focused on modeling while the last three are focused on coding. Thus, when you need to make changes to your software, you update the code, frequently letting the models and documentation fall out of date. So ask yourself: how valuable are models and documentation that no longer represent what they are supposed to, the way they are supposed to? Not very, don't you agree? And as the models don't follow the evolution of the software life cycle, they become obsolete.

Thus, we can list some of the major negative consequences:

  • The models no longer provide a reliable understanding of the system;
  • If some of the development team's members leave and must be replaced, how can the new members understand the system? By looking at the code? Where would they start?
  • Updating the documentation becomes the boring activity it actually is.

So, the further the code drifts from the models, the less useful the models become.

The most obvious troubles lie in the maintenance phase, but if we think about the implementation phase, how is code built? Code is often written by hand, and although code-generating tools exist, they do very shallow work; the big picture is still painted by the programmer, so a lot of "monkey work" has to be done. The testing phase is in almost the same situation.

And what if you need to change the target platform or the programming language for the sake of efficiency, so that the chosen implementation technology must be replaced? At this point, when the models are no longer reliable, the only thing you have left is the requirements… So you have to do almost all the work again. "Oh, my God!!! More and more costs!!!"

And what if your system is now meant to be integrated with other systems, but no interoperability solutions were considered at development time?

Well, we could spend our whole lives here talking about software development problems, and I apologize for taking such a pessimistic approach… So I will stop here, and in the next post I will present the more hopeful approach Model Driven Architecture can offer us.

See you soon!

Hi there! This is just the starting point of what I believe will be a great journey in Model Driven Architecture research.

I'd like to post here some interesting points of view and information about this worthwhile new technology. This space is intended for those who also want to contribute and/or believe in the MDA promise.

If you're interested in MDA, I invite you to enjoy and contribute to this blog.
