Friday, November 4, 2011

SCRUMHockey

Not so long ago I started playing ice hockey, joining a team as a complete rookie. About a month ago, I also joined a new firm as an external developer on a project, working in a Scrum team. In my experience there are many similarities in how good software teams and good ice hockey teams work.

The first time I was on the ice this year was a disaster for me. The guys there were all well trained, had been playing hockey forever and made a good team; and then I arrived with really basic skills and no training at all. It was pretty devastating to be such a klutz, but I made some observations about how the team reacted and what measures they took to improve the team's performance (and of course mine too).

The team started the training as usual and I was just dropped in at the deep end. They let me suffer for a while, but then one of the best players on the team, who also had some coaching experience, stood beside me and started to give me advice on how to do the exercises better.

The first training match I played was completely ad hoc. I was asked which position I would like to play and just tried to keep up with the others (with rather less than more success). After the match the team held some discussions; I was briefed again and got some technical and strategic hints on how to make the most of my lousy abilities. Of course the others also gave hints during the game, but they were more concentrated on playing it and making the most of it.

So that is actually how we train in the team, and after a few trainings I was struck by how similar it is to the way a good Scrum team should work.

When I joined the new firm about a month ago, I had a very similar experience. I was new to the project and to the team, and the whole framework they used was also pretty new to me. So after two days of training (a very basic overview of the architecture) and basic project information, I started a sprint with the team. In the first sprint I was the one who got the small issues, mainly bugfixes, to warm up and get an overview of the modules integrated into the project. For the first two issues I pair programmed with a senior team member, and for the rest it was mainly a question-and-answer game.

In the second sprint I was already working on more complex issues, and now I'm implementing additional features for some user stories. In the sprint reviews and retrospectives we discussed the tooling and the way we should process our issues. We covered some coding practices and some testing questions, and I asked open questions to get a better view of the project.

My conclusion from this story is that a team is successful if there is strong cohesion. All the success is about teamplay! In sport this is self-evident most of the time, but why is it so often overlooked by software teams? Why do teams still rely on the estimates of some 'superhero' and let one senior write specifications and stories all alone? Why do managers believe that a team review or a sprint planning meeting with all the team members is a waste of time and/or resources, and try to undermine it or pull the majority of the team out of those meetings?

In my eyes the only way to succeed with software projects is to enforce teamwork. And the riskier the project, the more this rule applies!

Tuesday, September 27, 2011

Detailed software specifications

I've been working at a mid-sized software company on a product for the last two years. I've had the luck to work with some really good fellows and learned a lot about how to write good code. On the other hand, I also learned how to fail.

I don't want to bother you with the whole story; I just want to pick one aspect which I consider the major cause of failure with a product or project:
BIG REQUIREMENT UP FRONT

It is a fairly big product, with about 30 developers working on it (two to four Scrum teams). The company spent a lot of effort on gathering requirements (some man-years), resulting in a handful of requirements documents (about 3-4 kg if you print them). It's really impressive, and an honorable job to write that much; however, after the requirements phase was closed we started creating technical specifications.

We were partially allowed to write some proofs of concept, but we spent about 6-7 months (with about 3-6 chaps) writing specifications, attending workshops and negotiating with field experts (not with customers, but with experienced developers with exhaustive knowledge of the business domain).

Since we were intended to be agile (sorry for saying this), in the meantime we started to implement the features/requirements that were already specified in detail. The implementation was written partly by the same people who wrote the specification and partly by others.

I had the privilege of both writing and implementing specifications, and I must say I recognized how senseless those specifications are. We have hundreds of thousands of pages of documentation divided into numerous Word documents lying around on the internal network. Each of them was reviewed and released and corrected and reworked, and still, after a year, there barely exists a chapter which describes what the software really does!

There are three situations I saw when implementing a part of such a detailed specification:

The developer says: "What the hell is this? This will never work!" ...

1. ... "But who cares, it's written by analyst XY and reviewed by this and that committee, so who cares..."

2. ... A new meeting is initiated with the product owner and the field experts to clarify the specification. This of course takes about a week to organize, and actually ends up with a whole new specification because the "old" one has a lot of unanswered questions that were forgotten by everybody, and so on...

3. ... Some of the developers discuss the topic during development and make some corrections to the specification.

I don't want to discuss these options one by one, but the quintessence of each of them is: the detailed specification never hit the target, and I dare say the reason is that in such a complex domain it is simply not possible to specify selected requirements up front! It will always fail to some degree!

OK, I know this seems like just a big complaint, but I wrote it to show you the problems we really faced with this kind of process.

So my actual answer to the question:


How could we make it better?

is
AGILE MODEL DRIVEN DEVELOPMENT

It is not a kind of magic, just the agile approach to requirements elicitation, and it is part of agile release management. The lifecycle for software projects using AMDD is as follows:

Read the original article at www.agilemodeling.com

So without explaining a lot (there are many articles about this, written much better than I could write them), the clear advantage is that there is no such huge amount of wasted time. The specification and the requirements are just as accurate as you need them to be to start development.

Of course this does not explicitly mean that no exhaustive specifications are written, but if you really timebox your iterations, you simply do not have the time for detailed specifications.

It feels really odd to many developers at first, but it works. It works even better if you consider creating other kinds of specifications like executable test cases, wiki pages, whiteboards, etc. (You can read more about media richness theory and its advantages at modernanalyst.com, or in even more detail at www.agilemodeling.com.)
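To give an idea of what an executable test case as a specification can look like, here is a minimal sketch using JUnit 4; the ShippingCostCalculator class and its pricing rules are purely hypothetical and not taken from our project.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A specification expressed as an executable test: it states the expected behavior
// and fails loudly as soon as the implementation drifts away from it.
public class ShippingCostSpecification {

    @Test
    public void ordersAboveOneHundredEuroShipForFree() {
        ShippingCostCalculator calculator = new ShippingCostCalculator();
        assertEquals(0.0, calculator.costFor(120.0), 0.001);
    }

    @Test
    public void smallOrdersPayAFlatFee() {
        ShippingCostCalculator calculator = new ShippingCostCalculator();
        assertEquals(4.90, calculator.costFor(30.0), 0.001);
    }
}

// Minimal class under test, only here to make the example self-contained.
class ShippingCostCalculator {
    double costFor(double orderValue) {
        return orderValue >= 100.0 ? 0.0 : 4.90;
    }
}

Such a test reads almost like a requirement, but unlike a Word document it is executed with every build.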

Do not forget: the real aim of planning is to reach a shared understanding of a specific problem. If your planning process does not increase the shared understanding of the problem, then it fails!

In my experience, the BRUF approach with detailed specifications doesn't really increase the shared understanding of problems.

I'll cut this post here, and I would really like to get some feedback from you!

What are your experiences with your specifications?
What kind of specifications do you use?
etc,etc,etc...

Sunday, August 14, 2011

Building the right team(s)

The system being produced will tend to have a structure that mirrors the structure of the group that is producing it, whether or not this was intended. One should take advantage of this fact and then deliberately design the group structure so as to achieve the desired system structure. (Conway's Law)

If Conway's Law really does apply, one of the most important things you should do is build the right teams.

The first crucial question is the size of the team. Scrum suggests building small teams of 5 +/- 2 people. Amazon.com introduced the "two pizza rule" for building their software teams.

One may ask: Why should we build a small team, when a big team has some clear advantages: 
  • they can include members with more diverse skills, experiences and approaches
  • they are less exposed to the loss of a key person
  • they can provide more opportunities for individuals to specialize in a technology, or in a subset of the application.
These are really impressive properties of large teams, but what about the small ones? Here are some of the advantages of small teams:
  • less social loafing
  • more chance of constructive interaction
  • nobody is going to fade in the background
  • harmful over-specialization is less likely to occur
  • less time is spent coordinating effort
Without discussing these properties in detail, let's move along to the next problem: the product/project you need to accomplish has a pretty small time budget, and its business domain is too complex to finish with a small team.

The question is: how do you build small teams that cooperate on building a complex system?

When building the teams, keep Conway's Law in mind. You either build feature teams or component teams.

Feature teams are responsible for the end-to-end delivery of working (tested) features, whereas component teams work on some part of the system, such as a persistence framework, the business layer or perhaps a GUI framework.

Organizing a multi-team project into feature teams has many advantages. Since a feature team delivers end-to-end functionality, its members work through all the layers of the architecture, which maximizes learning about the architecture and design of the product. A feature team includes all the skills needed to go from an idea to a running, tested feature, and so it ensures that these individuals communicate on a daily basis.

Component teams are used to deliver software to another team on the project rather than directly to users. Take care that a component team only builds components that a feature team asks for. Since in this case the feature team is a kind of product owner, it must prioritize and also review the work of the component team. Guessing about future requirements is dangerous and might lead to developing unnecessary, or even worse, unusable components.

If you recognize that the team structure is impeding your ability to use Scrum, that issue should be raised during an end-of-sprint retrospective. You should prefer stable teams over the course of a project, but do not stick to a structure if it does not work! Take care of personal conflicts too. Social issues might reduce the productivity of the team in the short term and can lead individuals to quit in the long term.

Sunday, July 31, 2011

The power of immutability

The final keyword in Java has a wide scope of usage. It can be used with:

  • classes - public final class Foo {...}
  • methods - public final void Bar() {...}
  • member variables - private final int a;
  • local variables - final int xMin = 2;
In general, the final keyword makes the corresponding class/method/variable immutable.
For classes and methods, immutability means that they cannot be subclassed or overridden, respectively, whereas for variables it means that after initialization their value cannot be changed.
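As a quick illustration (a hypothetical example of my own, not one of the snippets below), final applied to a class and to a method looks like this:

// A final class cannot be subclassed.
public final class Money {
    private final long cents;

    public Money(long cents) {
        this.cents = cents;
    }

    public long getCents() {
        return cents;
    }
}

// A final method cannot be overridden in subclasses.
class Account {
    public final String currency() {
        return "EUR";
    }
}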

A final member variable must be assigned exactly once, either at its declaration or in the constructor of the corresponding class. After initialization, its value is frozen.


Consider the following snippet:

public class Foo {
  protected final int xMin;
  protected final int xMax = 10;

  public Foo() {
    xMin = 5;
    xMax = 10; // This is of course a compiler error: xMax is already initialized
  }
}

public class Bar extends Foo {
  public Bar() {
    super();
    xMax = 15; // Compiler error
  }
}


I tend to use final quite intensively in my code, for two reasons:
  • it reduces the possible states of a class
  • it makes multithreading a lot easier
The possible states of a class are reduced, since the number of its members that might change (and put the class into a different state) is reduced. This makes your classes much easier to understand, but also to test. Furthermore, it reduces the possibility of failures. I saw an extreme example of how bad it can get.

If you use final, it is good to know how it really works. Consider the following snippet:
import java.util.ArrayList;
import java.util.List;

public class Foo {
  private final List<String> names = new ArrayList<String>();

  public void addName(String name) {
    names.add(name); // this works: only the reference is final, the list itself stays mutable
  }

  public void releaseList() {
    names = null; // compiler error
  }
}
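If you also want the contents of the list to be protected, and not just the reference, one option (a sketch of my own, not part of the original snippet) is to hand out an unmodifiable view of it:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class NameRegistry {
    private final List<String> names = new ArrayList<String>();

    public void addName(String name) {
        names.add(name);
    }

    // Callers get a read-only view; calling add() on it throws UnsupportedOperationException.
    public List<String> getNames() {
        return Collections.unmodifiableList(names);
    }
}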

Some time ago I received an issue that I had to fix in a module I was not familiar with. I tracked the bug down to a particular class and saw that it had more than 20 member variables and about 15-20 methods operating on them. In this particular case it was not just an issue of immutability, but also of poor or even missing design. After some basic refactoring (aiming only to reduce the possible states of the class), about 16 of the member variables were turned into final variables. This step made the class much easier to understand and to manage.

If you go further, it even makes sense to use immutable (final) local variables within a method. They provide hints about your intentions, and might help in the future if you need to change the method or fix bugs.

Now let's consider the impact of immutability on multithreading:

One of the most difficult things about writing concurrent programs is deciding how to protect mutable shared state. Java provides a locking primitive, synchronized, but locking primitives are difficult to use. You have to worry about


  • data corruption (if you lock too little)
  • deadlock (if you lock too much)
  • performance degradation (even if you get it right)
Using immutable objects, you don't need to worry about synchronization at all. Functional languages such as Scala strongly encourage immutability, and are really powerful in multithreaded applications.
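To make this concrete, here is a minimal sketch (hypothetical names) of an immutable value object that can be shared freely between threads without any locking:

// All fields are final, the class is final and no method mutates state,
// so instances can be published to and read from any number of threads safely.
public final class Temperature {
    private final double celsius;

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    public double getCelsius() {
        return celsius;
    }

    // "Modification" returns a new instance instead of changing this one.
    public Temperature plus(double delta) {
        return new Temperature(celsius + delta);
    }
}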

Tuesday, July 26, 2011

My thoughts on assertions in Java

An assertion is a statement in the Java™ programming language that enables you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light.


Oracle gives clear statements on where assertions should be used and in which situations you should avoid them; however, I tend to use assertions in a slightly different way.

Assertions are not just a good tool for checking invariants, pre- and postconditions in your code; they also help your colleagues when they work with your code. You can think of assertions as a kind of security cordon.

They tell a lot to someone who needs to fix bugs or implement an additional feature in or on top of your code. In many cases they are even more useful than comments (OK, I'm clearly not a fan of comments... just see my corresponding post).

Oracle says: Do not use assertions for argument checking in public methods.
You should follow this for your public API. You really should. But I decided to use assertions in my library-internal classes for public methods too. In my team we use assertions pretty intensively, even in public methods and for argument checking, and they have helped us a lot. I prefer them over exceptions. You can read about exceptions in an old post by Joel Spolsky:

"... In fact they are significantly worse than goto's:
They are invisible in the source code.
They create too many possible exit points for a function.
..."


Consider the following code snippet:
public class CommandExecutor<T extends Target> {
    private final T targetOfExecution;

    public CommandExecutor(T targetOfExecution) {
        assert targetOfExecution != null;

        this.targetOfExecution = targetOfExecution;
        // ...
    }

    // ...

    public void execute(Command<T> c) {
        // prepare the target for command execution;
        // there is no need to check targetOfExecution again

        // I prefer to get an assertion failure during development and testing
        // instead of waiting for a NullPointerException at runtime
        assert c != null;

        c.execute(targetOfExecution);

        // do some post operations
    }
}

Assuming that this class is internal to your library and not part of the public API, the assertion in the constructor will do you a favor.

Unlike exceptions, assertions can be turned off, so you can combine them with return values to improve the quality of your source.
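As a reminder, assertions are disabled by default and are switched on per JVM invocation with the standard -ea/-da switches (the class and package names below are placeholders):

java -ea com.example.Main                            (enable assertions everywhere)
java -ea:com.example.internal... com.example.Main    (enable them only for one package and its subpackages)
java -da com.example.Main                            (explicitly disable assertions, which is the default)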

Furthermore, by combining assertions with immutability, the execute method becomes more readable, since you can save some null checks.

I would be happy to read your thoughts on this topic.

Sunday, July 24, 2011

Why will your project fail without a PO

How many times have you heard from your PO about a feature: "Oh, that is cool, but it's not the way I imagined it"?
If you hear this statement frequently you should be keen on making some improvements to your processes, but if you hear it even once per sprint (at the review) you are definitely in big trouble.

In this case the following questions should come up during the retrospective:

- Do our stories have clearly defined DoD?
If the stories don't have a DoD, or it is not well defined, you are likely to miss the real target of the story. If you have questions about some of the DoD criteria, you must ask the PO to negotiate them. Don't be afraid of discussing removing or adding criteria to the list, since the DoD should be negotiable anyway. Keep in mind that a detailed discussion during the sprint planning meeting might uncover some details and even lead to splitting up stories (see some patterns on splitting stories).

- Why did our PO not complain about it during the sprint?
It's important to negotiate the stories with your PO. If you or the team have questions about a story or feel uncertain about some DoD criteria, the PO must be available to clarify the problematic issues. If you have the feeling that the PO is not available for the team, or that the team often has to wait for a meeting with the PO, that is clearly a big impediment, and your Scrum master must take care of it.

After all, questions about the stories are likely to keep coming up, and the product owner must be available for the team.
A good, committed team will depend more on the PO than on the Scrum master, whereas new teams are likely to need the Scrum master more and the PO less.

Saturday, July 23, 2011

Simple product backlog

There are a lot of complex tools to manage a product backlog, and they do a lot more. You probably know GreenHopper for JIRA or IceScrum, just to mention two of them.
These tools are good, but a bit heavyweight if you are looking for a simple way to manage your product backlog for a small or mid-sized project.

For my purposes, I've made a simple Excel sheet (or rather a LibreOffice Calc sheet) that I'd like to share with you.

It has some additional features like:
  • Progress estimation (sprint estimation) based on recorded or assumed velocity (the basic arithmetic is sketched below)
  • Velocity chart
  • Story burn-down chart
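The estimation itself is nothing fancy: the remaining story points are divided by the (recorded or assumed) velocity and rounded up. A minimal sketch of that arithmetic, written as my own summary rather than a copy of the spreadsheet formulas:

public final class BacklogEstimator {

    // remaining story points divided by velocity, rounded up to whole sprints
    public static int estimateRemainingSprints(int remainingStoryPoints, double velocity) {
        if (velocity <= 0) {
            throw new IllegalArgumentException("velocity must be positive");
        }
        return (int) Math.ceil(remainingStoryPoints / velocity);
    }

    public static void main(String[] args) {
        // e.g. 85 story points left with an average velocity of 20 points per sprint
        System.out.println(estimateRemainingSprints(85, 20.0)); // prints 5
    }
}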

I'm really interested in any feedback or suggestions for improvement. I've uploaded an Excel version too, but I can't test it, so if you want to be sure, just use the ODS file.


ProductBacklog.xls
ProductBacklog.ods