Fluid Code
A blog about refactoring, .Net and all things agile by Danijel Arsenovski

Parameterized tests in JUnit

February 12th, 2013 by admin

Parameterized tests in JUnit can be very useful when writing tests based on tabular data. This type of test can save you from writing a lot of duplicate or boilerplate code. While there is a fair number of articles on the subject on the Internet, I wasn’t able to find a code sample that you can simply copy into your project and execute. So, here it goes.
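A minimal, self-contained sketch of such a test, using the classic Fibonacci fixture from the JUnit documentation (this assumes JUnit 4.11 or later on the classpath):

```java
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class FibonacciTest {

    // The name attribute produces readable titles such as "fib(3) = 2";
    // it is available since JUnit 4.11. On earlier versions, drop it and
    // the tests are reported by index only.
    @Parameters(name = "fib({0}) = {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 5 }
        });
    }

    private final int input;
    private final int expected;

    // The runner instantiates the class once per data row,
    // passing the row values to this constructor.
    public FibonacciTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void testFibonacci() {
        assertEquals(expected, fib(input));
    }

    // The code under test; inlined here so the sample compiles standalone.
    static int fib(int n) {
        return n <= 1 ? n : fib(n - 1) + fib(n - 2);
    }
}
```

Run it like any other JUnit test class; the runner displays one titled entry per data row.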

A Few Notes
The name attribute of the @Parameterized.Parameters annotation is available since JUnit 4.11 and is used to generate more readable test titles your IDE will display when executing tests. This makes it easier to see which test case failed without looking into the detailed test trace.
It seems that this version has still not propagated to the Maven repositories, so I added the 4.11-beta-1 version of JUnit as a Maven dependency in order to get all of this working. If you are using an earlier version of JUnit, simply eliminate the name attribute. If you would still like to improve how the test title is reported, you can take a look at a few alternatives proposed in this Stack Overflow question:
http://stackoverflow.com/questions/650894/changing-names-of-parameterized-tests


Posted in tdd | No Comments »

Grails as a DDD Platform

August 17th, 2011 by admin

Grails is a par excellence platform for implementing applications in the Domain-Driven Design (DDD) style. At the center of the Grails approach are Domain Classes that drive the whole development process. As you are probably guessing, the choice of the word domain in Grails is no coincidence.

You start by defining your Domain Classes and then you can let Grails do all the heavy lifting, providing persistence and generating the GUI. It’s worth noting that the DDD book was written before Grails and similar frameworks were created, so much of the problematic it deals with concerns issues that the framework has since resolved or greatly reduced.

Some DDD concepts addressed by Grails

I will use the DDD pattern summary to address the different DDD elements (quotes italicized in the text below).

Domain Model

The domain model is structured through Domain Classes, Services, Repositories and other DDD patterns. Let’s take a look at each of these in detail.

Entities

“When an object is distinguished by its identity, rather than its attributes, make this primary to its definition in the model”

These are Domain Classes in Grails. They come with persistence already resolved through GORM. The model can be finely tuned using the GORM DSL. Take a look at the hasOne vs. belongsTo properties: they can be used to define the lifecycle of entities and their relationships. belongsTo results in cascading deletes to related entities, while hasOne does not. So, if you have a Car object, you can say that a Motor “belongsTo” a Car; in that case Car is the Aggregate Root and Motor a member of its aggregate.

Value Objects

“When you care only about the attributes of an element of the model, classify it as a VALUE OBJECT. Make it express the meaning of the attributes it conveys and give it related functionality. Treat the VALUE OBJECT as immutable. Don’t give it any identity…”

In Grails, you can use the “embedded” property in a GORM mapping to manage a value object. A value object can be accessed only through the entity it belongs to, does not have its own ID, and is mapped to the same table as that entity. Groovy also supports the @Immutable annotation, but I am not sure how well it plays with Grails.

Services

“When a significant process or transformation in the domain is not a natural responsibility of an ENTITY or VALUE OBJECT, add an operation to the model as a standalone interface declared as a SERVICE. Make the SERVICE stateless.”

Just like Entities, Services are natively supported in Grails. You place your Grails Service inside the services directory of your Grails project. Services come with the following out of the box:

  • Dependency Injection
  • Transaction Support
  • A simple mechanism for exposing services as web services, so that they can be accessed remotely.

Modules

“Choose MODULES that tell the story of the system and contain a cohesive set of concepts.” The Grails plug-in mechanism provides this and much more: a very simple way to install and create plugins, rules for how an application can override plugins, etc.

Aggregates

“Cluster the ENTITIES and VALUE OBJECTS into AGGREGATES and define boundaries around each. Choose one ENTITY to be the root of each AGGREGATE, and control all access to the objects inside the boundary through the root. Allow external objects to hold references to the root only.”

I already mentioned some lifecycle control mechanisms. You can use Grails Services and the language’s access-control mechanisms to enforce access control. You can have a Grails Service playing the role of a DDD Repository that permits access to the Aggregate Root only. While Controllers in Grails can access GORM operations on Entities directly, I’d argue that for a better layered design, Controllers should be injected with Services that delegate to the GORM Active Record operations.

Factories

“Shift the responsibility for creating instances of complex objects and AGGREGATES to a separate object, which may itself have no responsibility in the domain model but is still part of the domain design.”

Groovy builders are an excellent alternative for constructing complex objects through a rich DSL. In DDD, Factory is a looser term and does not translate directly to the GoF Abstract Factory or Factory Method patterns. Groovy builders are a DSL implementation of the GoF Builder pattern.

Repositories

“For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface. Provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store. Provide methods that select objects based on some criteria and return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology. Provide repositories only for AGGREGATE roots that actually need direct access. Keep the client focused on the model, delegating all object storage and access to the REPOSITORIES.”

A Grails Service can be used to implement a dedicated Repository object that simply delegates its operations to GORM. Persistence is resolved with GORM magic: each Domain Class provides a set of dynamic methods that cover the typical CRUD operations, including ad-hoc querying.

Assertions

“State post-conditions of operations and invariants of classes and AGGREGATES. If ASSERTIONS cannot be coded directly in your programming language, write automated unit tests for them.”

  • Take a look at the Groovy @Invariant, @Requires and @Ensures annotations; these can be used to declare DbC-style invariants, preconditions and postconditions.
  • When you create your domain classes with the Grails command line, test classes are created automatically; these are another mechanism for expressing assertions in your domain.

Declarative Style of Design

“A supple design can make it possible for the client code to use a declarative style of design. To illustrate, the next section will bring together some of the patterns in this chapter to make the SPECIFICATION more supple and declarative.”

This is where Grails excels, because of the dynamic nature of the Groovy language and its Builder pattern support for creating custom DSLs.

Layered Architecture

This comes “out of the box” with Grails through the proposed “Convention over Configuration” application structure, in the form of a layered, MVC-based implementation.

*Originally published as an answer on Stack Overflow: http://bit.ly/mWtLFc


Posted in Uncategorized | No Comments »

“Programming Through Configuration”: a Bitter Cure for a Broken Deployment Pipeline

July 21st, 2011 by admin

I was curious when our client asked us to put our web flow definition files into the database. We used these files to define basic navigation flows in our web application. To put it simply, our homemade framework was conceptually very similar to Spring Web Flow. Such a flow definition is a first-class programming artifact, and putting it into a database doesn’t make much sense. You do not need to maintain it the way you maintain some business entity of the application, since modifying it amounts to deploying a new version of the application. What’s more, the client was asking for the file to be saved in the database in unstructured form, in a textual field.

Actually, storing the flow file in the database will provoke more than one problem. For example, performing a rollback becomes very unreliable, since you now need to keep the database synchronized with the rest of the artifacts; had the flow definition stayed with them, all you would need to do is deploy the old war. If the database fails, the application will not be able to perform even the simplest navigation. Then there are security issues (you cannot sign the code anymore), probable performance issues, etc. Even after being confronted with these issues, the client was still adamant that he needed this feature.

What was behind this request? As it happens, our client was working in a large corporation with strict, rigid policies in the IT department. Those policies defined a number of steps that had to be performed before a new application could be put into production. And the process was terribly slow: even the simplest change to an application could take more than a month. Our client, working in a certain department, had to follow corporate policies he was unable to change, but needed to move much faster.

So how would placing the file in the database help? Well, since the application files would stay intact, he could simply update the database to change the application, “no questions asked”. Of course, this means outwitting policies that were put in place supposedly to provide quality and security to IT operations. In this case, the client is at the mercy of corporate policies, and pushing application logic into the database is the only way for him to do his job.

Another client has a similar story. A number of cryptic flags in the database are used to “configure” core business rules. There is no versioning or rollback strategy in place, security is poor (binaries are not signed), the logic is difficult to understand, etc. As in the case above, the procedures for getting a new version of the code base into production are quite complicated. There is a separate DB Admin Department in charge of performing Q.C. on a proposed DB structure; in reality, Q.C. consists of verifying that certain cryptic prefixes are used when naming fields and tables. In this case, the team is much more empowered and could probably fix the deployment procedures. Unfortunately, the idea that the deployment pipeline could be automated and made fast enough to render the configurable-logic mechanisms useless looks completely unrealistic to them.

Using configuration to store application logic has a number of downsides:

  • Cryptic code that is difficult to interpret and maintain
  • Poor security, since configuration-code is generally not signed and access to modifying it is less restricted
  • Generally no versioning or rollback mechanisms in place
  • Probable performance issues

Conclusion? Fix and automate your deployment pipeline. Make use of tools and techniques like Continuous Integration, Continuous Deployment and automated testing. Make the development and deployment process work for you, not against you.
Finally, “Make your configuration complex enough and you will end up implementing a new programming language – poorly!” a wise man once said. (Or maybe it was just me?)


Posted in Agile, Programming | No Comments »

Requirements are Perishable Goods

December 4th, 2010 by admin
A prospective client of mine is a local electronic payment hub whose principal clients are other companies that use their services. They are a relatively large company (by local standards) and have invested a lot in engineering and formalizing the whole process. At the moment, they are appraised at CMMI level 3 and are going for level 5.

Operations are heavily compartmentalized, with a high degree of specialization between the departments. For example, there is a “backend factory” doing development on the legacy backend system, a “frontend factory”, “UAT”, “PMO”, “QA”, an “Architecture” department, etc. The way they implemented the process can best be described as textbook waterfall. It goes like this:
  • An analyst receives the requirement and enters it into the flow.
  • A requirements committee assesses the requirement and produces an estimate:
    • Non-requirements are sent to the help desk or refused.
    • Requirements are classified as projects or simple requirements.
    • Information is sent to all departments in order to obtain their estimates (effort and delivery date).
    • The estimate total ($ and delivery date) is produced by summing all individual estimates and sent to the client.
    • After the client accepts the estimate, a Project Manager is assigned to the requirement.
  • The Project Manager’s role is to track the individual tasks the requirement was broken into, and to prod and beg departments to deliver their tasks first. As far as I could see, there is no transparent mechanism for managing task priorities. The initially estimated delivery date seems to be the main factor, with more priority given to those who are furthest behind the deadline. Then project managers can “scale up” their requests, occasionally reaching the CEO himself. In the end, it is employee seniority that decides task priority.
  • The different factories implement the tasks once they can get their hands on them, according to the prioritization just explained.
  • Since each factory is specialized and works on a single application layer, there is often some integration work to be done. This work is (reasonably enough, right?) performed by the Integration department.
  • After all this, the requirement reaches the UAT department and, if it is not turned back (as often happens), it is ready for production.
The reason why they are reaching out is that the process takes too long. For the moment, they are mostly preoccupied with two symptoms:
  • There are a lot of defects caught by QA (or “UAT”).
  • Producing the initial estimate takes more than a month on average! This is a constant complaint from customers.
During the course of our initial interview, some other interesting facts came up.
  • There are a high number of “frozen” projects: projects that are put on hold at some moment in time. It is not clear how many of these are ever resumed (I doubt that many are).
  • There are a high number of projects that see poor (even zero) use once they reach production.
  • While a client can opt out of a project in the very early phase, this is not so once the project is set in motion and man-hours have been spent on it.
Here is the rub: being an electronic hub, the company generates most of its income from a “per transaction” charge. Charging transactions is their main line of business and their source of profit. While they do charge client companies for implementing different projects, they know that the main income will be generated through the additional traffic these projects create.

Imagine the loss generated when projects are frozen, canceled or unused in production! How much profit and growth would the company achieve if all the software they produce were to generate transactions in production! The whole process simply reeks of waste. By insisting on implementing the initial requirements without allowing them to change later on, not only are they making life more difficult for the client, they are shooting themselves in the foot. If these requirements do not result in software used in production by the final client (not just deployed so that the project can be marked as finished and payment collected), they are losing money they could earn from transaction charges. And by prioritizing requirements based on how late they are, they make it more probable that these will reach production too late. Talk about a waterfall double whammy!

I can only imagine the person in charge of the project on the client side. All the trouble he would have to go through if he discovered that a requirement hadn’t been specified well initially! Or maybe the project takes so long that the initial requirements lose their value. If he were to cancel or change a requirement, this would surely provoke a number of questions from his superiors. It’s much better to keep quiet and get the project to a “successful” finish, even though no one will ever use it.

The solution? Let customers change their requirements as they go along, prioritize requirements by their potential for generating transactions in production, work on customer development, form multidisciplinary teams that can react quickly, etc. In one word, make the whole process much more agile. Taking into account all the interests involved and all the politics generated in a company as large and compartmentalized as this one, that is easier said than done.

As I mentioned, the prospective client has a formalized, engineered process, and while it is lacking in aspects of quality, it is relatively mature. But if you take too long to get things done, being formal and disciplined is just not enough.

Posted in Agile | No Comments »

The truth about CMMI certification: It does not exist!

May 1st, 2010 by admin

No matter where you look, everyone talking about CMMI will sooner or later mention the “c” word. Even Google, when you enter “CMMI” as a search term, will try to help you by presenting “CMMI Certification” as a related search.

No wonder, then, the surprise when you learn that no such thing as a CMMI Certification exists! The Software Engineering Institute (SEI) of Carnegie Mellon University issues a mere “appraisal”. Well, you might say, appraisal, certification, what’s the difference? In the end, it’s the same thing and no reason to make a fuss about it.

Let’s see what the SEI has to say about it:

“The SEI does not certify the results of any appraisal nor is there an official accreditation body for CMMI. True certification of appraisal results would involve the ongoing monitoring of organizations’ capabilities, a shelf life for appraisal results, and other administrative elements. When an organization is appraised against the CMMI model, their Lead Appraiser’s findings may indicate that the organization is operating at a particular “maturity level.” The SCAMPI appraisal method maturity ratings are 1 through 5.

The SEI does not have a defined requirement for periodic follow-up after appraisals, nor does it accept legal responsibility for the performance of appraised organizations. All of these characteristics are required for a program that would provide certification of appraisal results. However, CMMI Appraisal results do expire after a period of three years.”

Source: SEI website

While the difference is subtle, it is by no means innocuous, as the SEI website explains clearly. The SEI does not accept any legal responsibility for the performance of companies it has awarded an appraisal report, nor does it monitor them directly.

As you can see, the choice of words was not random, but carefully made. No reason, then, to be surprised by reports of companies that get CMMI “certified” using only one department or team in order to get the logo, of companies that soon go back to their old ways after certification, of companies whose only motivation is winning a contract or a tender, etc. Ever heard of the LCPBCs? If not, you can google it; I will not spoil the fun :)

In the end, one cannot but wonder how some agile circles were unable to avoid the certification trap, when even some of the organizations that inspired the whole movement were smarter than that.


Posted in Agile | No Comments »

It’s NOT OK to cut corners, but it’s OK to cut features.

February 15th, 2010 by admin

I just had my say in a discussion in the LIDNUG group on LinkedIn. The question put up for discussion was “Is it OK to cut corners to meet a deadline?”. Most of the answers (or at least the way I interpreted them) say that you generally shouldn’t, but that sometimes you just might have to compromise, especially if it can be justified from a business point of view. I think that from an agile point of view the reply is quite obvious. Here is what I had to say:

It’s NOT OK to cut corners, but it’s OK to cut features.

I guess that sums up a great deal of what agile development is all about! You do very short iterations, but you do them properly (no cutting corners). Once you start a new iteration, your client can invent new features, eliminate old ones, reprioritize everything, etc. (This is what I actually mean by “cutting features”.) As it happens, most of the time the client will realize that some features are not needed, that others are, and will accept that he can live without the “nice to haves” as long as the core features are done right and without bugs. As a matter of fact, it is difficult (not to say impossible) to know all the features you will need right at the start of the project. So why should you cut corners to deliver a feature you are not even sure is needed? This doesn’t necessarily have to do with the 80-20 rule; it’s more about looking at the software as “work in progress”. You can release the first version once you have implemented the minimum set of core features that do something useful.

Once you and the client change the mentality from “features initially put into the contract” to “real business value delivered”, you will have no need to cut corners. Switching to short iterations, with users participating in planning and accepting the results of each feature, has had a profound effect on how my team operates and has enabled us to really uphold the quality aspect of our software.


Posted in Agile | No Comments »

SOLID design principles explained

December 28th, 2009 by admin

Take a look at this post at lostechies.com.
Good one :)


Posted in Uncategorized | No Comments »

Telerik announces (yet another) .NET refactoring tool

December 3rd, 2009 by admin

Does this say anything about refactoring adoption among .NET developers? Maybe. It definitely says something about the state of the art of Microsoft’s refactoring tools (thanks, Jeff). Refactoring support in Visual Studio 2010 lags miles behind the refactoring support in free tools like Eclipse or NetBeans. JustCode features here.


Posted in Refactoring, Refactoring in C#, Refactoring in VB, Visual Studio 2010 | No Comments »

Visual Studio and TDD: Better late than never

September 11th, 2009 by admin

After hearing UML touted as the “next big thing” in Visual Studio 2010, I must admit I was less than elated. Since I am hardly a “new kid on the block”, I freely admit that I remember the quirky diagramming tool called Visual Modeler that shipped with Visual Studio 6.0. (“Ten years after” already?) Had it been 1999, I guess UML might even sound, well… intriguing.
Fortunately, I recently came across this video:
TDD with Visual Studio
Believe it or not, the 2010 version of Visual Studio should finally provide a lot less friction for TDD developers. It can generate class and member stubs based on client code. More surprisingly, there is an integrated test runner that does not “fall apart” (just pick your song!) if you have tests written in some third-party unit testing framework. (In the video, Karen shows executing MbUnit tests with the VS test runner.)
TDD is hardly news these days, but it hardly feels as passé as UML!


Posted in Agile, Programming, tdd, Visual Studio 2010 | No Comments »

YAGNI is not SLACKING!

June 4th, 2009 by admin

YAGNI is a clever rule. It says that by doing less you are doing yourself and everyone else a favor. In some schools of agile thought, like Lean, it has become a whole philosophy (although they are keener on some strange-sounding Japanese words, mind you).

Only a decade or so ago, the general approach was quite different. You’d try to make your system as encompassing, flexible and configurable as possible. You’d design your system for tomorrow’s needs.

I will give you a quick example. Imagine you need to program a feature on your website where you export some data to an Excel file. In the old days, you’d analyze the problem and design your classes, say, as a hierarchy where ExcelExporter and PdfExporter inherit from a BaseExporter class. You do not need to export data to PDF just yet, but you think you might need it some day, so you’d better be ready.
These days, however, you live in the present. You just program the ExcelExporter. If one day you need the PDF export feature, you will reorganize your classes so that both ExcelExporter and PdfExporter inherit from a BaseExporter class containing the features common to both child exporter classes, avoiding duplication. If you never come to need the feature, you leave your ExcelExporter alone. You implement new features as they are needed: just-in-time.
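The just-in-time version can be sketched in a few lines of Java. The class name follows the example above; the tab-separated output is just a stand-in for real spreadsheet generation:

```java
import java.util.List;

// YAGNI version: only the exporter you need today, no speculative hierarchy.
// PdfExporter and BaseExporter would appear later, extracted only when a
// second format is actually required.
class ExcelExporter {

    // Builds a minimal tab-separated representation of the rows.
    String export(List<String[]> rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join("\t", row)).append('\n');
        }
        return sb.toString();
    }
}
```

When the PDF requirement actually materializes, an Extract Superclass refactoring introduces BaseExporter; until then, this one class is all the design you need.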

But there is a catch. In order to be able to move just-in-time, you need a well designed, mercilessly refactored code base covered with tests. Without refactoring or automated testing, you are probably better off doing things the old-fashioned way.
I think YAGNI is great and I try to follow it mercilessly. However, sometimes I hear the YAGNI principle invoked in a way that clearly misinterprets it. For example, “Maybe you do not need to refactor this code just yet” or “Maybe you do not need all those unit tests”. The thing is, to be able to do YAGNI, you need to have your code refactored and covered with tests. You need continuous integration and automated builds. Without these practices, once you need to implement a new feature in JIT fashion, things will inevitably start to break. One way to avoid “maybe you do not need good quality software” kinds of dilemmas is to make practices like TDD, refactoring and continuous integration an integral part of your development process. Then there is no need to wonder whether you might leave out unit tests: since you are doing TDD, they are in place already; and since you refactor all the time, there is no way to leave that out.

Remember, YAGNI applies to features, not to quality! One way to use YAGNI properly is to think about Technical Debt. Is the decision NOT to do something resulting in Technical Debt? Technical Debt has to be paid off with interest, and with software the rates are extremely steep. If you are getting into debt, you are not doing YAGNI, you are plain SLACKING!


Posted in Agile, Programming, Refactoring | 4 Comments »
