Companies nowadays use more and more Commercial Off The Shelf (COTS) software products. I have seen many projects that introduce a big software product into a larger IT organization. My feeling is that it does not always help the organization and that it is usually a very costly endeavour.
I'm looking for explanations for why these projects cause so much effort. One explanation can be found in the number of dependencies a software product has to other IT systems. The more interfaces a COTS product has, the more complex the integration. But why is it more complex than integrating software written from scratch? There are two reasons:
We know less about the internals of a COTS product. Since it is not written by the integrating developers themselves, when a problem occurs during integration, it is more difficult to find its cause.
The COTS product implements more than required. A product must be adaptable to many organizations. It comes with features that one organization finds useful and others don't, but it contains all of them. It will also have some kind of variability built in, to adapt it to all the different situations.
For these two reasons, the variety of a COTS integration is higher. If a company that integrates COTS software has to deal with a larger variety, the organization itself must have an even higher variety to be able to control it. This follows from Ashby's Law of Requisite Variety, a very important law in cybernetics: if one system wants to control another system, it must have at least as much variety as the system it controls.
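To make "variety" a bit more concrete, here is a toy calculation with made-up numbers: if we count variety as the number of distinguishable states a system can be in, and each interface to a neighbouring system can be in one of a few states, the combined variety multiplies with every additional interface.

```python
# Illustrative sketch only; the numbers are hypothetical.
# Variety = number of distinguishable states. If each interface can be in
# one of `states_per_interface` states, the combined variety multiplies.

def variety(states_per_interface, interfaces):
    return states_per_interface ** interfaces

print(variety(4, 3))  # in-house system, 3 interfaces -> 64 states
print(variety(4, 8))  # COTS product, 8 interfaces -> 65536 states
```

The point of the sketch is only that variety grows exponentially with the number of interfaces, which is why a COTS product with many interfaces is so much harder to keep under control.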
So there are two ways of dealing with it:
Decrease the variety of the system that you want to control. In software development we do that, for instance, with type checking or information hiding.
Increase the variety of the system that wants to control the other. In software development we do that with self-organized teams. Such teams can have a higher variety than a team led by a command-and-control style manager.
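The first option can be shown in code. A sketch with a hypothetical `Order` domain: a free-form string field can hold practically unlimited values (huge variety), while an enum plus a hidden field makes illegal states unrepresentable, so the component's variety shrinks to the few states we actually want.

```python
# Hypothetical example: type checking + information hiding reduce variety
# by making illegal states unrepresentable.

from enum import Enum

class OrderState(Enum):
    NEW = "new"
    PAID = "paid"
    SHIPPED = "shipped"

class Order:
    def __init__(self):
        self._state = OrderState.NEW  # hidden: callers cannot set it freely

    @property
    def state(self):
        return self._state

    def pay(self):
        # only the legal NEW -> PAID transition is possible
        if self._state is not OrderState.NEW:
            raise ValueError("can only pay a new order")
        self._state = OrderState.PAID

order = Order()
order.pay()
print(order.state)  # OrderState.PAID
```

Had `state` been a public string, every caller could put the order into any of an unbounded number of states; the controlling system (the rest of the code, and ultimately the team) would need far more variety to cope with that.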
Okay - so what we know is that a COTS product will likely have a higher variety. If a COTS product has fewer interfaces to other systems and components, the integration will not amplify the uncertainty in the project as much, and it may still be manageable.
I have found a post on the State of the .NET Culture. It reminds me of a project which I did recently in the role of the Scrum Master. There were some .NET developers on the team. We took over a large website that ran on SharePoint and had been developed by another supplier for a fixed price. Our goal was to get it under control and to deliver regular releases to the live environment. The code was not very good and there were no automated tests. We started with the first Sprints and could see very soon that we could not deliver the expected velocity.
The developers had no experience with the XP software engineering practices: no test-driven development, no refactoring. Although Visual Studio has some built-in refactorings, they did not use them. We tried to find .NET developers on the market with this background, but were not successful. What I also found difficult to handle was SharePoint. Before a developer can even start developing, you need a Windows Server with a SharePoint installation on it, which was difficult to set up. All that led us to the conclusion that we should replatform the website to Java or PHP.
Yes, of course the software was of bad quality in the first place. Maybe with the right skills to build good .NET software one can be a well-paid developer, but Microsoft makes it very hard to achieve this. With a JDK, Eclipse and Maven I can be much quicker. But maybe that's possible on .NET too?
I found the blog of Bredex, a small company in the north of Germany where I worked ten years ago. They started blogging last year and have quite a few nice posts. One post discusses whether feature-driven development is a double-edged sword. Alex wrote that in one project they experienced that focusing on feature delivery caused problems with software quality. Refactorings that must be done to keep the software maintainable are not done, because the team wants to deliver features. Yes - that is the main goal of agile software development: to deliver high business value quickly.
The recent financial crises should teach us another aspect: we can live a high-quality life at the cost of the future. Some like to overspend and increase their debt. You can find the same in software projects. If you try to deliver as many features as you can, you will increase your technical debt. Even without much pressure from the business department, a team can be overcommitted and will deliver features at the cost of quality. The bad quality can be visible to the customer through too many defects in the delivery, or it can be invisible, hidden in an awkward design. The latter is technical debt. The former - high defect rates - is usually tackled with proper conventional quality assurance. Technical debt can be tackled with continuous refactoring.
If you are planning a sprint and you have delivered 15 story points while your technical debt is not increasing, then you are running at a reasonable velocity. But if the quality metrics show an increase in technical debt, then you are going too fast: you should put fewer story points into your next sprint and plan tasks to refactor or clean up the code.
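This rule of thumb can be sketched as a tiny planning heuristic. Everything here is an assumption for illustration: the 20% reduction factor and the idea of a single numeric "debt" metric are hypothetical, not a fixed rule.

```python
# Hypothetical planning heuristic: reduce the next sprint's commitment
# when quality metrics show that technical debt grew during the sprint.

def next_sprint_commitment(delivered_points, debt_before, debt_after,
                           reduction=0.2):
    """Return how many story points to plan for the next sprint."""
    if debt_after > debt_before:
        # debt grew: slow down and free capacity for refactoring tasks
        return round(delivered_points * (1 - reduction))
    return delivered_points  # debt stable or shrinking: keep the pace

print(next_sprint_commitment(15, debt_before=40, debt_after=48))  # 12
print(next_sprint_commitment(15, debt_before=40, debt_after=40))  # 15
```

The freed-up three points in the first case are exactly the capacity you would fill with refactoring and clean-up tasks.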
Some questions pop into my mind now: How do I explain this to the customer? If I report the velocity, they will see a drop in it. So is it better not to report velocity at all? Or to start at a low pace, assuming that we can get faster in later sprints? Maybe we can report the quality metrics too - defect rates and technical debt. It really depends on the customer's nature.
In the current project the customer wants to have a simple Performance Compare Test. The customer wants to be sure that the changes we made in the software will not worsen the performance. Since the effort for a full-blown load and performance test is very high, this Performance Compare Test should be very simple.
The solution I found is the following: We will use Selenium IDE to record and play a script. This script will be run against the old and the new version of the software. Because we only want to see whether our changes have worsened the performance, this is acceptable. Selenium IDE has a button "Play with Selenium TestRunner". In this mode Selenium logs timestamps. We will use these timestamps to measure how long it takes to click through the application. In the following image you can see the log from Selenium.
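The arithmetic on the timestamps is simple enough to sketch. Note that the log format below is an assumption for illustration (a leading `HH:MM:SS.mmm` timestamp per line); adapt the parsing to whatever your Selenium TestRunner log actually emits.

```python
# Sketch of the timestamp arithmetic. The log line format is assumed,
# not the real Selenium TestRunner format - adjust the parsing as needed.

from datetime import datetime

def elapsed_seconds(log_lines, fmt="%H:%M:%S.%f"):
    """Difference between the first and last timestamp in the log."""
    stamps = [datetime.strptime(line.split(" ", 1)[0], fmt)
              for line in log_lines if line.strip()]
    return (stamps[-1] - stamps[0]).total_seconds()

log = [
    "14:05:01.120 info: open /start",
    "14:05:03.870 info: clickAndWait link=Products",
    "14:05:09.420 info: clickAndWait link=Checkout",
]
print(elapsed_seconds(log))  # 8.3
```

Running the same script against the old and the new version and comparing the two elapsed times is the whole Performance Compare Test.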
The advantages of this:
Simple to install and easy to use, even on a developer's machine
You still have to plan the tests, e.g. what test data you want to use and how you want to click through the application. And of course, this will not replace a full-blown load and performance test.
After having read "Getting Things Done" (GTD) by David Allen, I realized that one can merge the ideas of GTD and agile software development processes. In GTD, David Allen proposes several goal levels: the vision of life leads to long-term goals, which lead to projects, weekly goals and so forth. I've tried to map this idea to a process like Scrum, starting from the bottom:
During the daily scrum meeting the team defines a daily goal. This goal is influenced by the goal of the sprint.
In the sprint planning the team sets the goal of the iteration. If the iteration length is one week, it can be compared to the weekly planning in GTD.
The project goal is formed by the product owner. The goal is not precisely defined; it is more a vision that is refined over the iterations.
So - what's the next level? It's the strategic level. It is defined by the business vision and influences the IT strategy and the activities in enterprise architecture.
If we introduce an agile process, we should address all goal levels. I think it falls short if the agile process is introduced only up to the project level. But how does the strategic level influence the project level? Do we need a strategic backlog, like a product backlog?