Adding external/custom jars into Maven project

One of the strongest points of Maven is that it automatically manages project dependencies. The developer just needs to specify which dependencies, and in which versions, are needed, and Maven takes care of the rest, including downloading them, storing them at the right location and packaging them into the final artifact (e.g. WAR). This is very convenient and almost completely removes the need to keep additional jars in the lib/ subdirectory of the project.

However, this assumes that all required dependencies are available in one or more public repositories. That is usually the case, but sometimes you may need to use a jar which, for some reason, is not available there. Luckily, there are a few popular approaches to overcome this problem, described below.

Adding jar to public Maven repository

Theoretically, the best way would be to add the jar to a public Maven repository. However, if the jar is proprietary, it is usually impossible to get permission from the company to do so.

Using system dependency

The second method is to add the required dependency with the system scope and additionally provide an absolute path to the jar file placed somewhere on the local disk:

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>evalpostfix</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <systemPath>${basedir}/lib/evalpostfix-1.0.jar</systemPath>
  </dependency>
</dependencies>

The problem with this approach is that the dependency will be completely ignored during packaging, and forcing Maven to add it to the final artifact (e.g. WAR) would result in a very clumsy POM file.

Installing jar into local Maven repository

A much better solution is to manually install the required dependency into the local repository using the command:

$ mvn install:install-file -Dfile=<path-to-file> \
    -DgroupId=<group-id> -DartifactId=<artifact-id> \
    -Dversion=<version> -Dpackaging=<packaging>

For example, adding the external jar evalpostfix-1.0.jar to the local repository could look like this:

$ mvn install:install-file -Dfile=evalpostfix-1.0.jar \
     -DgroupId=com.example -DartifactId=evalpostfix \
     -Dversion=1.0 -Dpackaging=jar
(...)
[INFO] --- maven-install-plugin:2.4:install-file (default-cli) @ standalone-pom ---
[INFO] Installing /home/robert/informatyka/softwarecave/infixtopostfix/target/evalpostfix-1.0.jar to /home/robert/.m2/repository/com/example/evalpostfix/1.0/evalpostfix-1.0.jar
[INFO] Installing /tmp/mvninstall2671284263455462989.pom to /home/robert/.m2/repository/com/example/evalpostfix/1.0/evalpostfix-1.0.pom
(...)

Once the dependency is available in the local repository it can be added to POM file like any other dependency:

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>evalpostfix</artifactId>
    <version>1.0</version>
  </dependency>
</dependencies>

This solution is still inconvenient because every new developer working on the project would have to run the mvn install:install-file command on his or her own workstation.

Using internal Maven repository in a company

One of the best ideas is to set up an internal Maven repository in the company for storing such dependencies. The repository should be available to every developer working on the project through HTTP or another protocol supported by Maven. Of course, the repository server does not have to be accessible from outside the company.

The required dependencies should be installed on the repository server using the mvn install:install-file command:

$ mvn install:install-file -Dfile=evalpostfix-1.0.jar \
      -DgroupId=com.example -DartifactId=evalpostfix \
      -Dversion=1.0 -Dpackaging=jar \
      -DlocalRepositoryPath=/opt/mvn-repository/

The only difference from the command in the previous section is that it additionally specifies the path on the repository server where the jars and metadata should be stored.

Once it is finished, the dependency can be added to the POM file. Additionally, the location of the new repository server is provided:

<repositories>
  <repository>
    <id>Internal company repository</id>
    <url>http://mvnrepo.company.com/</url>
  </repository>
</repositories>
(...)
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>evalpostfix</artifactId>
    <version>1.0</version>
  </dependency>
</dependencies>

The advantages of this approach should be clearly visible:

  • new developers can start building the project without any additional preparation tasks
  • no need to send jars through email or IM, or to download them from the Internet
  • reduced or completely removed need for build instructions
  • all external jars are managed in a single place
  • dependencies and the server can be shared by multiple projects

Using in-project Maven repository

The idea is quite similar to using an internal repository server, but this time the repository is stored in a directory (e.g. called lib) located in the project root directory. After creating the directory and installing jar files there using the mvn install:install-file command, the dependencies and the repository can be referenced from the POM file:

<repositories>
  <repository>
    <id>In-project repository</id>
    <url>file://${basedir}/lib</url>
  </repository>
</repositories>
(...)
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>evalpostfix</artifactId>
    <version>1.0</version>
  </dependency>
</dependencies>
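
For illustration, installing the external jar into such an in-project repository could look like this (a sketch, assuming the command is run from the project root; the optional createChecksum flag also generates the checksum files):

$ mvn install:install-file -Dfile=evalpostfix-1.0.jar \
      -DgroupId=com.example -DartifactId=evalpostfix \
      -Dversion=1.0 -Dpackaging=jar \
      -DlocalRepositoryPath=lib -DcreateChecksum=true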

The created repository, including jars, POM files and checksums, must be stored in a version control system so that it is available to other developers. The biggest issue with this solution is that it clutters the VCS repository with files that should never be placed there (e.g. jars).

Conclusion

Choosing the right solution is not always easy. Personally, I would first try to add the jar to a public or at least an internal company Maven repository. If that is not possible, I would go for an in-project Maven repository and use the other methods as a last resort.


Integrating Hibernate with Spring

When building a web application, we will sooner or later need to store data entered by users and retrieve it afterwards. In most cases the best place to keep such data is a database, which additionally provides many useful features like transactions.

Therefore, in this article I would like to show how to extend our previous Spring MVC application to use Hibernate to access a database and manage transactions. The configuration details depend slightly on the database being used; in our case it will be an Oracle 11gR2 database.

Configuring Hibernate session factory

Before we start, we have to configure a Hibernate session factory in our Spring configuration file:

<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="packagesToScan" value="com.example.springhibernate" />
    <property name="hibernateProperties">
        <props>
            <prop key="dialect">
                org.hibernate.dialect.Oracle10gDialect
            </prop>
            <prop key="hibernate.show_sql">
                true
            </prop>
            <prop key="hibernate.hbm2ddl.auto">
                create
            </prop>
        </props>
    </property>
</bean>

In property dataSource we refer to the data source configured in our application server and exposed via JNDI with name java:/orcl:

    <jee:jndi-lookup id="dataSource" jndi-name="java:/orcl" />

The second property, packagesToScan, specifies the Java package to automatically scan for annotated entity classes. This way it is no longer necessary to prepare a Hibernate mapping file.

Finally, the third property, hibernateProperties, gives us the possibility to configure various Hibernate properties. In hibernate.dialect we specify that we use an Oracle database, then we instruct Hibernate to print the issued SQL commands to the server log and to generate the necessary objects (like tables) in the database. Please note that the last option should not be used in production code because it may drop already existing tables in the database. We use it only to simplify the example.

Configuring transaction support

Because we plan to use Hibernate transactions and declare them using annotations, we have to add two additional elements to our Spring configuration:

<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory" />
</bean>

<tx:annotation-driven/>

The first one instructs Spring to instantiate a HibernateTransactionManager and associate it with the previously configured Hibernate session factory. The second one tells Spring to scan all classes for the @Transactional annotation at the class or method level and to execute the annotated methods using the transaction manager.
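
Note that the <tx:annotation-driven/> element assumes the tx namespace is declared in the Spring configuration file. A minimal sketch of such a declaration (the exact schema locations may differ between Spring versions) could look like this:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
    <!-- bean definitions, transactionManager and <tx:annotation-driven/> go here -->
</beans>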

Using annotations

After we have finished the configuration, we can add standard persistence annotations to our Person entity class:

package com.example.springhibernate;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

@Entity
@Table(name = "springhibernate_person")
@NamedQuery(name = "Person.selectAll", query = "select o from Person o")
public class Person implements Serializable {
    private static final long serialVersionUID = 3297423984732894L;
    
    @Id
    @GeneratedValue
    private int id;
    private String firstName;
    private String lastName;
    private Integer age;
    // constructor, setters and getters
}

To access the database we create a simple repository class annotated with @Repository:

package com.example.springhibernate;

import java.io.Serializable;
import java.util.List;
import org.hibernate.Query;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
@Transactional
public class PersonList implements Serializable {
    private static final long serialVersionUID = 324589274837L;

    @Autowired
    private SessionFactory sessionFactory;
    
    @Transactional
    public void addPerson(Person person) {
        sessionFactory.getCurrentSession().save(person);
    }
    
    @Transactional
    public List<Person> getAll() {
        Query query = sessionFactory.getCurrentSession().getNamedQuery("Person.selectAll");
        return (List<Person>) query.list();
    }
}

In this class we obtain a reference to the correct SessionFactory instance using plain @Autowired annotation. Additionally, both methods of this class are annotated with @Transactional to inform Spring that these methods should be executed in a transaction.

Conclusion

Configuring Hibernate in Spring is not very difficult, but it still requires setting several environment-specific parameters in XML configuration files. The rest of the code is totally independent of them, so the changes necessary to switch to another database or application server are limited to a few lines of configuration.

The example above uses version 4 of Hibernate, but switching to version 3 is pretty simple and requires only replacing hibernate4 with hibernate3 in two places in the Spring XML file.

The complete source code of the example can be found at GitHub.


Repeating annotations in Java 8

In Java 7 and earlier, attaching more than one annotation of the same type to the same part of the code (e.g. a class or a method) was forbidden. Therefore, developers had to group them together into a single container annotation as a workaround:

@Authors({
    @Author(name = "John"),
    @Author(name = "George")
})
public class Book { ... }

Java 8 brought a small yet very useful improvement called repeating annotations which allows us to rewrite the same code without explicitly using the container annotation:

@Author(name = "John")
@Author(name = "George")
public class Book { ... }

For compatibility reasons, the container annotation is still used but this time the Java compiler is responsible for wrapping the repeating annotations into a container annotation.

Declaring repeatable annotation type

User-defined annotations are not repeatable by default and have to be annotated with @Repeatable annotation:

package com.example.customannotation;

import java.lang.annotation.Repeatable;

@Repeatable(value = Authors.class)
public @interface Author {
    String name() default "";
}

The value element of the @Repeatable annotation specifies the type of the container annotation:

package com.example.customannotation;

public @interface Authors {
    Author[] value();
}

When the repeating annotation Author is used multiple times on the same part of the code, the Java compiler automatically creates the container annotation Authors and stores all of the repeated Author annotations in its value element.

This is the minimal working configuration, but in most cases you will probably also want to specify at least the target and retention of the annotation type.

Accessing annotations via reflection

Repeating annotations can be accessed in two ways. The first one is to get their container annotation using the getAnnotation() or getDeclaredAnnotation() methods of the AnnotatedElement interface:

Authors authors = klazz.getAnnotation(Authors.class);
for (Author author : authors.value())
    System.out.println("Author=" + author.name());

The second method relies on the newly introduced getAnnotationsByType() and getDeclaredAnnotationsByType() methods, which automatically look through the container annotation, extract the repeating annotations and return them all at once as an array:

Author[] authors = klazz.getAnnotationsByType(Author.class);
for (Author author : authors)
    System.out.println("Author=" + author.name());

Conclusion

Repeating annotations are a small addition to Java which simplifies the usage of some annotations in various frameworks (especially Hibernate and JPA). Because the feature is quite new, you may still need to use container annotations explicitly for some time (until @Repeatable is added to all the necessary annotation types).


Custom annotations in Java

Java developers are not limited to using built-in annotations only, but can also create their own annotations to provide additional functionality. For example, many Java frameworks define custom annotations to provide (or at least simplify the usage of) such functionality as:

  • unit testing
  • ORM mapping
  • bean validation
  • Java class to XML mapping
  • web-service description

Of course, the list is not even one percent complete. In this article I would like to describe how to create custom annotations and later access them through reflection.

Creating custom annotation

Annotation type definition is very similar to an interface definition:

public @interface Version {
    int major();
    int minor() default 0;
    String date();
}

The most visible difference is the usage of the interface keyword preceded by the at sign (@) when defining an annotation type. The similar syntax is not a coincidence because, in fact, annotations are visible to the virtual machine as plain interfaces extending the Annotation interface, and the annotation elements are visible as abstract methods. It is also possible to create static fields, static classes and enums inside an annotation. It is, however, impossible to create a new annotation type by extending (inheriting from) an existing annotation type.

Another important difference is the ability to specify a default value for an annotation element. If the element has a default value, it can, but does not have to, be specified when using the annotation. If it is not specified, the default value is used. The default value must be a constant and can never be null. The latter requirement is somewhat inconvenient and forces programmers to use other default values like “” or Void.class.
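
For example, with the Version annotation defined above, the minor element may be omitted, in which case it defaults to 0 (the annotated class and the element values below are purely illustrative):

@Version(major = 2, date = "2014-05-11")
public class Calculator { ... }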

Additionally, annotation elements cannot have arguments, cannot define thrown exceptions, cannot be generic and their element types are limited to:

  • primitive types like int, long, double or boolean
  • String class
  • Class class with optional bounds
  • enum types
  • annotation types
  • an array containing one of the above types

Here is another annotation using most of the types above:

public @interface ClassInfo {
    enum AccessLevel { PUBLIC, PROTECTED, PACKAGE_PROTECTED, PRIVATE};

    String author();
    Version version();
    AccessLevel accessLevel() default AccessLevel.PACKAGE_PROTECTED;
    String[] reviewers() default { };
    Class<?>[] testCases() default { };
}

Please note that in the second element we refer to the previously defined Version annotation. In the next one we use an enum type (defined within the same annotation) as the element type, and we also provide one of its values as a default value. The last two elements can be assigned an array – if they are not set, they default to an empty array.
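
A class annotated with ClassInfo could then look like this (all names and values are purely illustrative). The accessLevel element is omitted here, so its default value AccessLevel.PACKAGE_PROTECTED is used:

@ClassInfo(
    author = "John",
    version = @Version(major = 1, date = "2014-05-11"),
    reviewers = { "George", "Ann" },
    testCases = { ReportGeneratorTest.class }
)
public class ReportGenerator { ... }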

Meta-annotations

Java provides several meta-annotations – annotations which can be applied to other annotations. The custom annotation can be annotated with one or more such meta-annotations to provide additional information about how the custom annotation can be used.

@Target

@Target annotation restricts to which source code elements the custom annotation can be applied. The value of the @Target annotation is an array containing one or more of the following values:

  • ElementType.ANNOTATION_TYPE – can be applied to another annotation type (creates meta-annotation)
  • ElementType.CONSTRUCTOR – can be applied to a constructor
  • ElementType.FIELD – can be applied to a field (includes enum constants)
  • ElementType.LOCAL_VARIABLE – can be applied to a local variable
  • ElementType.METHOD – can be applied to a method
  • ElementType.PACKAGE – can be applied to a package (placed in package-info.java file)
  • ElementType.PARAMETER – can be applied to a method parameter
  • ElementType.TYPE – can be applied to a type (class, interface, enum or annotation)
  • ElementType.TYPE_PARAMETER – can be applied to a type parameter (new concept in Java 8)
  • ElementType.TYPE_USE – can be applied to a use of type (new concept in Java 8)

If the @Target annotation is missing, the annotation can be applied in all declaration contexts except type parameters (and it cannot be used in type contexts).

@Retention

@Retention annotation indicates how the custom annotation is stored. Its value can be one of the following values:

  • RetentionPolicy.SOURCE – annotations are analyzed by the compiler only and are never stored into class files
  • RetentionPolicy.CLASS – annotations are stored into class files but are not retained by the virtual machine at run-time
  • RetentionPolicy.RUNTIME – annotations are stored into class files and are retained by the virtual machine at run-time so they are available via reflection

If the @Retention annotation is missing, the value defaults to RetentionPolicy.CLASS. In most cases the RetentionPolicy.RUNTIME policy is used in order to be able to examine the annotations at run-time.

@Documented

The @Documented annotation indicates whether the custom annotation should appear on the annotated elements in Javadoc documentation. If @Documented is applied to the custom annotation, all classes annotated with the custom annotation will be marked as such in the Javadoc documentation. If @Documented is missing, the Javadoc documentation may contain information about the custom annotation itself (depending on its access modifiers and Javadoc parameters) but won’t contain information about which classes were annotated with it.

@Inherited

The @Inherited annotation indicates whether the custom annotation is inherited from the superclass. This meta-annotation has no effect if the custom annotation is applied to anything other than a class. By default, annotations are not inherited.
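
As a quick sketch (the annotation and class names below are made up), an annotation marked with @Inherited is also visible on subclasses via reflection:

import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Inherited
@Retention(RetentionPolicy.RUNTIME)
@interface Audited { }

@Audited
class BaseService { }

class ReportService extends BaseService { }

// ReportService.class.getAnnotation(Audited.class) returns the Audited instance
// inherited from BaseService, even though ReportService itself is not annotated.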

@Repeatable

@Repeatable annotation indicates whether the custom annotation can be applied to the same source code element multiple times. By default the same annotation type can be used only once on the same source code element.

Accessing annotations via reflection

Information about annotations applied to classes, methods and many other elements can be extracted using the AnnotatedElement interface, which is implemented by the following reflective classes: Class, Constructor, Field, Method, Package and Parameter. The presence of an annotation can be checked using the isAnnotationPresent() method, and the actual annotations can be retrieved using methods such as getAnnotation(), getAnnotations() and a few more.

The element values can be accessed by calling appropriate methods (named the same as annotation elements) on the returned instances of Annotation interface.

If the annotations have a retention policy different from RetentionPolicy.RUNTIME, they won’t be accessible through reflection.
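
As a small sketch, assuming the Version annotation above is additionally declared with @Retention(RetentionPolicy.RUNTIME) and that some hypothetical Calculator class is annotated with it, its elements could be read like this:

if (Calculator.class.isAnnotationPresent(Version.class)) {
    Version version = Calculator.class.getAnnotation(Version.class);
    System.out.println("Version " + version.major() + "." + version.minor()
            + " released on " + version.date());
}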

Example

As an example we will create a very, very simple annotation-based unit test framework. The methods to test will be annotated using the following annotation:

package com.example.customannotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface MyTest {
    String name() default "";
    MyTestState state() default MyTestState.ACTIVE;
    Class<? extends Throwable> expected() default None.class;
    
    static class None extends Throwable {
    }
}

Annotation MyTest uses @Retention(RetentionPolicy.RUNTIME) to make it accessible through reflection at run-time and @Target(ElementType.METHOD) to restrict its usage only to methods. Because we cannot use null as a default value for expected element, we create an empty class None and set its class object as a default value. Additionally, we allow the tests to be enabled or disabled using this enumeration:

package com.example.customannotation;

public enum MyTestState {
    ACTIVE, INACTIVE
}

Once we have the annotation ready, we can apply it to the test methods:

package com.example.customannotation;

import static com.example.customannotation.MyAsserts.*;

public class SimpleTestCase {

    @MyTest(name = "test1WithCustomName", state = MyTestState.ACTIVE)
    public void test1() {
        assertEquals(2, 1 + 1);
        assertEquals(Integer.parseInt("-3"), -3);
    }
    
    @MyTest(expected = NumberFormatException.class)
    public void test2() {
        Integer.parseInt("1.23ddd");
    }
    
    @MyTest(state = MyTestState.INACTIVE)
    public void test3() {
        throw new IllegalStateException("Test case is inactive");
    }
}

The last step is to create a simple test runner which accepts a list of classes to test:

package com.example.customannotation;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class MyTestRunner {

    public void run(Class<?>... klasses) {
        for (Class<?> testClass : klasses) {
            runTestClass(testClass);
        }
    }
    
    private void runTestClass(Class<?> klass) {
        for (Method method : klass.getMethods()) {
            MyTest annotation = method.getAnnotation(MyTest.class);
            if (annotation != null)
                runTestMethod(klass, method, annotation);
        }
    }

    private void runTestMethod(Class<?> klass, Method method, MyTest annotation) {
        if (annotation.state() != MyTestState.ACTIVE)
            return;
        try {
            System.out.println("Running test: " + getTestName(method, annotation));
            Object testInstance = klass.newInstance();
            method.invoke(testInstance);
            System.out.println("SUCCESS");
        } catch (InstantiationException e) {
            System.err.println("FAILED: Failed to instantiate class " + klass.getName());
        } catch (IllegalAccessException e) {
            System.err.println("FAILED: Failed to call test method " + method.getName());
        } catch (InvocationTargetException e) {
            checkThrowable(annotation, e.getCause());
        }
    }

    private static String getTestName(Method method, MyTest annotation) {
        return !annotation.name().isEmpty() ? annotation.name() : method.getName();
    }
    
    private void checkThrowable(MyTest annotation, Throwable th) {
        if (annotation.expected() == th.getClass())
            System.out.println("SUCCESS");
        else
            System.out.println("FAILED: " + th.getMessage());
    }
}

and use the runner to execute the tests:

package com.example.customannotation;

import java.io.IOException;

public class Main {

    public static void main(String[] args) throws IOException {
        MyTestRunner runner = new MyTestRunner();
        runner.run(SimpleTestCase.class);
    }

}

The Method.getAnnotation() method is used to extract the MyTest annotation (if present) from each method. Later, the elements of the MyTest annotation are accessed in the getTestName() and checkThrowable() methods. The lack of null checks in both methods is normal because annotation elements cannot be null.

Conclusion

Creating custom annotations is not a very common task because most of the time we are just using existing annotations defined in various frameworks. However, sometimes it may be necessary to create our own annotation to extend an existing framework (e.g. Bean Validation or Spring). To keep the article concise I have almost silently omitted several rarely used concepts like repeating annotations or annotations on type uses. I am going to cover them in the near future.

The complete source code for the example is available at GitHub.

Another example of custom annotations is described in the article Custom bean validation constraints.


Annotation basics in Java

Annotations are a kind of metadata attached to various parts of the source code in Java. Although they do not directly affect how the code works, they are processed and used by different tools to provide additional functionality or services to the application. The typical use cases are:

  • instructions to the Java compiler – checking various assumptions (e.g. whether a method was correctly overridden), suppressing warnings, marking and reporting usage of deprecated code and more
  • instructions to the environment – generating code, XML files and more
  • runtime processing – taking different actions at runtime depending on the presence and the contents of the annotation

Annotations are used extensively in various frameworks to simplify source code, reduce the length of configuration, provide loose coupling between components and much more. The notable frameworks using annotations are Java EE, Spring, Hibernate and JUnit.

Basic usage

An annotation can be attached to a source code element (e.g. a class or a method) by placing its name, preceded by an at sign character (@), before the element to annotate:

@Override
void doAction() { ... }

For many annotation types it is also possible to provide additional elements in parentheses:

@SuppressWarnings(value = "unchecked")
void doAction() { ... }

If only one element named value is provided, its name may be omitted as shown below:

@SuppressWarnings("unchecked")
void doAction() { ... }

If needed, multiple annotations of different types may be attached to the same source code element:

@Entity
@Table(name = "PEOPLE_PERSON")
public class Person implements Serializable { ... }

If there are at least two annotations of the same type (also called repeating annotations), they have to be grouped into one container annotation. In the code below, two different @NamedQuery annotations could not be attached directly to the same class, so they were put inside a @NamedQueries annotation:

@Entity
@Table(name = "PEOPLE_PERSON")
@NamedQueries({
    @NamedQuery(name = "selectAllPersons",
                query = "select o from Person o"),
    @NamedQuery(name = "countAllPersons",
                query = "select count(o) from Person o")
})
public class Person implements Serializable { ... }

Since Java 8 it is possible to use repeating annotations without explicit grouping, but it requires a small change to the definition of each annotation type. Therefore, it may not yet be possible in many cases.

Common annotations

Java SE comes with a few predefined annotations. Some of them, used by the Java compiler, are described below.

@Override

The @Override annotation informs the compiler that the given method is intended to override a method declared in a superclass. If the method with this annotation fails to override any method (e.g. due to an incompatible signature), the Java compiler raises an error. Although using this annotation is not necessary (omitting it does not cause any compilation errors), it is very useful for detecting possible issues during large modifications or refactoring.

class MyFile implements AutoCloseable {
    @Override
    public void close() throws Exception { ...  }
}

@Deprecated

The @Deprecated annotation informs that the given element (e.g. a constructor, class or method) is deprecated and its usage is discouraged. When a deprecated element is used somewhere in the source code, a warning is reported during compilation. Many IDEs also provide a visual indication to the developer by striking through all deprecated elements and their usages.

The @Deprecated annotation is closely related to the Javadoc @deprecated tag. If the @Deprecated annotation is used, it is also a good idea to add a @deprecated tag to the Javadoc comment explaining why the element was deprecated and what can be used in place of it.

/**
 * Closes the file.
 * @deprecated Does not report errors.
 *             Use closeFile() instead.
 */
@Deprecated
void close() { ... }

@SuppressWarnings

The @SuppressWarnings annotation instructs the compiler to suppress the given warnings for the annotated element and all of its children. At the moment there are only two official categories of warnings (described in the Java Language Specification) which can be suppressed:

  • deprecation – suppresses warnings about usage of deprecated elements
  • unchecked – suppresses warnings about usage of raw (unchecked) types

However, IDEs (e.g. Eclipse or IntelliJ IDEA) and Java compilers can, and usually do, implement their own custom categories. For example, you can list all categories supported by the Oracle Java compiler using the command:

$ javac -X

and searching for ‘-Xlint’ option.
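
Because the value element of @SuppressWarnings is an array of strings, several categories can be suppressed at once (the method below is just an example):

@SuppressWarnings({"unchecked", "deprecation"})
void readLegacyData() { ... }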

Here is sample code which suppresses the warning regarding the usage of the deprecated LineNumberInputStream class:

@SuppressWarnings("deprecation")
void read() {
    LineNumberInputStream is = new LineNumberInputStream(System.in);
}

@SafeVarargs

The @SafeVarargs annotation informs the compiler that the annotated method or constructor does not perform any potentially unsafe operations on its varargs arguments. This annotation is generally used to suppress unchecked warnings related to the usage of varargs in code like this:

@SafeVarargs
static <T> void print(T... args) {
    for (T t : args)
        System.out.println(t);
}
   
void callPrint() {
    print(new ArrayList<Integer>(), new ArrayList<Long>());
}

Looking at the body of the print() method, we don’t see any dangerous casts or assignments, so we can safely add the @SafeVarargs annotation. Without @SafeVarargs the compiler would generate warnings similar to these:

warning: [unchecked] Possible heap pollution from parameterized vararg type T
warning: [unchecked] unchecked generic array creation for varargs parameter of type ArrayList...

@FunctionalInterface

The @FunctionalInterface annotation (available since Java 8) informs the Java compiler that the given interface is intended to be a functional interface. A functional interface is an interface with exactly one abstract method (not counting default methods and abstract methods overriding methods of the Object class) which can be used together with lambda expressions. If an interface annotated with @FunctionalInterface contains more or fewer than one abstract method, the Java compiler raises an error. Although using this annotation is not necessary (omitting it does not cause any compilation errors), it is very useful for detecting possible issues during bigger modifications or refactoring.

In the example below, removing the existing method test() or adding a new abstract method (except the kinds mentioned above) would result in a compilation error:

@FunctionalInterface
public interface MyPredicate<T> {
    boolean test(T data);
}

Annotations and marker interfaces

A marker interface is simply an interface without any methods. Although it does not define any behavior in the typical sense, it carries type information (whether the class implements the given marker interface or not) which can be used by various mechanisms in Java to perform some special handling. Two widely known examples of such interfaces are Serializable and Cloneable.

Most probably annotations would be better suited for marking a class as serializable or cloneable, but they were not available in Java at the time serialization and cloning were introduced. Nowadays, we may think of marker interfaces as very limited and unattractive predecessors of annotations.

Conclusion

These days, annotations are used almost everywhere in Java, so it is very important to know them. Although this article covers only the basic usage of a few existing annotations, it should be enough to start using other annotations available in various frameworks.


Facelets ui:repeat tag

The Facelets ui:repeat tag can be used as a substitute for the h:dataTable tag from the JSF HTML library or the c:forEach tag from the JSTL core library. While using h:dataTable is still the preferred method to create tables, ui:repeat gives the developer more control over the table structure and thus allows several limitations of h:dataTable to be overcome. As usual this comes with a higher level of verbosity, but the difference is not very significant. In this post we will convert the project from the Data tables in JSF post to use the ui:repeat tag. Additionally, we will add row numbers to the table so it will look like this:

[Screenshot: the table rendered with ui:repeat, including a row number column]

General usage

The ui:repeat tag iterates over a collection (a scalar object, an array, a java.util.List or a java.sql.ResultSet) provided by the value attribute and inserts its contents into the final page once for every iteration. The var attribute is used to name, and later access the properties of, every element of the collection. The most interesting part of the ui:repeat tag is the optional varStatus attribute, which names an object that can be queried for additional data.

Here is an example of a table with 3 columns (row number, name and value):

<table>
  <ui:repeat value="#{rows}" var="row" varStatus="status">
    <tr>
      <td>#{status.index + 1}</td>
      <td>#{row.name}</td>
      <td>#{row.value}</td>
    </tr>
  </ui:repeat>
</table>

Selecting rows

The ui:repeat tag has three additional attributes which control which rows should be printed (see the sketch after the list):

  • offset – the index from which the iteration should start. The default value is 0.
  • step – the step between this and next index. The default value is 1.
  • size – the number of iterations. The default value is calculated from the size of the collection, offset and step: (collection size – offset) / step
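
For instance, a minimal sketch (reusing the headers.entries collection from the example later in this post) which renders at most five entries, starting from the third one and taking every second element, could look like this:

<ui:repeat value="#{headers.entries}" var="entry" offset="2" step="2" size="5">
    <li>#{entry.name}: #{entry.value}</li>
</ui:repeat>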

Status variable

At any point of the iteration we can query the object defined by the varStatus attribute to get the following data:

  • begin – the index from which the iteration started (corresponds to offset attribute of ui:repeat tag)
  • step – the step between this and the next index (corresponds to the step attribute of the ui:repeat tag)
  • end – the index of the next element past the last one (equals offset + step * size)
  • index – the index of the current row
  • first, last, even, odd – useful logical values for assigning CSS styles to rows

Example

Because we want to show a table with header, footer and caption, we have to create them explicitly:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html" 
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:ui="http://java.sun.com/jsf/facelets">
    <h:head>
        <title>#{msgs.httpHeaders}</title>
        <h:outputStylesheet library="default" name="css/styles.css" />
    </h:head>
    <h:body>
        <table class="table">
            <tr class="tableHeader" >
                <th>#{msgs.id}</th>
                <th>#{msgs.name}</th>
                <th>#{msgs.value}</th>
            </tr>
            <ui:repeat value="#{headers.entries}" var="entry" varStatus="status">
                <tr class="#{status.even ? 'tableRow' : 'tableRowAlt'}">
                    <td>
                        #{status.index + 1}
                    </td>
                    <td>
                        <h:outputText value="#{entry.name}" />
                    </td>
                    <td>
                        <h:outputText value="#{entry.value}" />
                    </td>
                    <f:facet name="caption"></f:facet>
                </tr>
            </ui:repeat>
            <tr class="tableFooter">
                <td></td>
                <td>#{msgs.stopDash}</td>
                <td>#{msgs.stopDash}</td>
            </tr>
            <caption class="tableCaption">#{msgs.httpHeadersCaption}</caption>
        </table>
    </h:body>
</html>

A single row of the table is represented using the following name-value class:

package com.example.jsfrepeat;

public class HeaderEntry {

    private String name;
    private String value;

    public HeaderEntry(String name, String value) {
        this.name = name;
        this.value = value;
    }

    public String getName() {
        return name;
    }

    public String getValue() {
        return value;
    }

}

The list of all HTTP headers (table rows) is fetched from HttpServletRequest and exposed via the getEntries() method:

package com.example.jsfrepeat;

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.enterprise.context.RequestScoped;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;

@Named
@RequestScoped
public class Headers {

    private List<HeaderEntry> entries;
    
    @PostConstruct
    public void init() {
        entries = new ArrayList<>();
        ExternalContext context = FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = (HttpServletRequest) context.getRequest();
        
        Enumeration<String> namesIt = request.getHeaderNames();
        while (namesIt.hasMoreElements()) {
            String name = namesIt.nextElement();
            Enumeration<String> valueIt = request.getHeaders(name);
            while (valueIt.hasMoreElements()) {
                String value = valueIt.nextElement();
                entries.add(new HeaderEntry(name, value));
            }
        }
    }
    
    public List<HeaderEntry> getEntries() {
        return entries;
    }
}

Conclusion

Creating tables using the ui:repeat tag is more straightforward than using the h:dataTable tag and also gives the developer more control over the final table structure. However, when this additional control is not needed, it is usually much better to use the h:dataTable tag, which clearly shows the developer’s intent.

The complete source code for the example was tested on JBoss AS 7.1 and is available at GitHub.


Data tables in JSF

The data table is one of the most advanced UI components provided by JSF. This component is used to represent data in table form and can contain multiple rows and columns and, optionally, a header, footer and caption. In this post I would like to explain how to use it to render a simple table like this:

[Screenshot: the table rendered with h:dataTable]

Basic table

The usage of the data table is pretty simple. First, the data table has to be defined using the h:dataTable element. This element should have at least two attributes to iterate through the data. The first one is value, which should point to one of the following:

  • single object
  • array
  • instance of java.util.List
  • instance of java.sql.ResultSet
  • instance of javax.servlet.jsp.jstl.sql.Result
  • instance of javax.faces.model.DataModel

JSF iterates through every element of the object pointed to by this attribute and assigns each element to the variable specified by the second attribute, var. Later, the properties can be extracted from this variable and put into the table. Typically, the data provided to a data table is in the form of an array or a list. If the data references a single scalar object, only one row will be rendered, which is not very useful.

Inside the h:dataTable element we should put as many h:column elements as there are individual columns. Each h:column element can contain several JSF components which will appear in a single cell of the table. The final structure of a simple table looks like this:

<h:dataTable value="#{rows}" var="row">
  <h:column>
    <!-- components in the first column -->
    #{row.columnValue1}
  </h:column>
  <h:column>
    <!-- components in the second column -->
    #{row.columnValue2}
  </h:column>
  <!-- more columns -->
</h:dataTable>

Facets

This simple data table can be extended to contain optional header, footer and caption using f:facet elements:

<h:dataTable value="#{rows}" var="row"
    headerClass="headerClass1"
    footerClass="footerClass1"
    captionClass="captionClass1">
  <h:column>
    <f:facet name="header">
      <!-- first column header contents -->
    </f:facet>
    <f:facet name="footer">
      <!-- first column footer contents -->
    </f:facet>
    <!-- components in the first column -->
    #{row.columnValue1}
  </h:column>
  <h:column>
    <f:facet name="header">
      <!-- second column header contents -->
    </f:facet>
    <f:facet name="footer">
      <!--  second column footer contents -->
    </f:facet>
    <!-- components in the second column -->
    #{row.columnValue2}
  </h:column>
  <!-- more columns -->
  <f:facet name="caption">
    <!-- table caption contents -->
  </f:facet>
</h:dataTable>

Facets for the header and footer are placed inside h:column elements, while the facet for the caption is put directly inside h:dataTable. Additionally, we can specify CSS classes to be used by the header (headerClass), the footer (footerClass) and the caption (captionClass) of the table. There is also a captionStyle attribute which specifies an inline style for the caption.

Styles for rows and columns

CSS styles for individual columns and rows can be defined using the columnClasses and rowClasses attributes of h:dataTable respectively; the former is applied to the cells of each column and the latter to whole rows.

Both attributes contain a list of comma-separated CSS classes. The first CSS class is used for the first column/row, the second one for the second column/row and so on. If the number of CSS classes is less than the number of columns/rows in a table, the classes are used repeatedly for the next columns/rows. For example if we specify only two classes, the first one is used for odd columns/rows and the second one for even columns/rows.

To apply the CSS style to the whole table we can use standard style or styleClass attributes.

Sorting and paging

The h:dataTable element does not provide any real support for sorting elements in columns or splitting data into multiple pages, so it has to be done manually (e.g. by defining a non-standard component or using appropriate database queries to sort data and fetch only a portion of it). It contains the first and rows attributes to show only a portion of the rows, but they are not very useful in practice, especially for very large tables.

Example

Equipped with this knowledge, we can create a table with a list of all HTTP headers sent by a web browser to the server. A single row of the table is represented using the following name-value class:

package com.example.jsfdatatable;

public class HeaderEntry {

    private String name;
    private String value;

    public HeaderEntry(String name, String value) {
        this.name = name;
        this.value = value;
    }

    public String getName() {
        return name;
    }

    public String getValue() {
        return value;
    }

}

The list of all HTTP headers (table rows) is fetched from HttpServletRequest and exposed via the getEntries() method:

package com.example.jsfdatatable;

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.enterprise.context.RequestScoped;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;

@Named
@RequestScoped
public class Headers {

    private List<HeaderEntry> entries;
    
    @PostConstruct
    public void init() {
        entries = new ArrayList<>();
        ExternalContext context = FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = (HttpServletRequest) context.getRequest();
        
        Enumeration<String> namesIt = request.getHeaderNames();
        while (namesIt.hasMoreElements()) {
            String name = namesIt.nextElement();
            Enumeration<String> valueIt = request.getHeaders(name);
            while (valueIt.hasMoreElements()) {
                String value = valueIt.nextElement();
                entries.add(new HeaderEntry(name, value));
            }
        }
    }
    
    public List<HeaderEntry> getEntries() {
        return entries;
    }
}

The whole JSF web page contains a table with two columns (Name and Value), header, footer and caption. We use rowClasses to apply separate styles for odd and even rows:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html" 
      xmlns:f="http://java.sun.com/jsf/core">
    <h:head>
        <title>#{msgs.httpHeaders}</title>
        <h:outputStylesheet library="default" name="css/styles.css" />
    </h:head>
    <h:body>
        <h:dataTable value="#{headers.entries}" var="entry"
                     styleClass="table"
                     rowClasses="tableRowOdd, tableRowEven"
                     headerClass="tableHeader"
                     footerClass="tableFooter"
                     captionClass="tableCaption" >
            <h:column>
                <f:facet name="header">#{msgs.name}</f:facet>
                <f:facet name="footer">#{msgs.stopDash}</f:facet>
                <h:outputText value="#{entry.name}" />
            </h:column>
            <h:column>
                <f:facet name="header">#{msgs.value}</f:facet>
                <f:facet name="footer">#{msgs.stopDash}</f:facet>
                <h:outputText value="#{entry.value}" />
            </h:column>
            <f:facet name="caption">#{msgs.httpHeadersCaption}</f:facet>
        </h:dataTable>
    </h:body>
</html>

Conclusion

The h:dataTable element is very useful for rendering HTML tables. While it does not provide any advanced features, it can be easily extended or used as a part of a bigger structure which provides such support.

Even though we used only plain text inside cells, they can actually contain more advanced components like graphics, radio buttons, check boxes, lists and so on.

The complete example was tested on JBoss AS 7.1 and is available at GitHub.


Parameterized unit tests in JUnit

Sometimes you may want to execute a series of tests which differ only by input values and expected results. Instead of writing each test separately, it is much better to abstract the actual tests into a single class and provide it with a list of all input values and expected results. JUnit 4 introduced a standard and easy solution to this problem called parameterized tests.

Structure of a parameterized test

In order to use a parameterized test, the test class must be annotated with the @RunWith(Parameterized.class) annotation to inform JUnit that a custom test runner should be used instead of the standard one. This custom test runner places several requirements on the test class. First, the class has to provide a public static method annotated with the @Parameters annotation and returning a collection of test data elements (which in turn are stored in arrays). Additionally, the test class should have a single constructor which accepts the test data elements from such an array. Typically, the constructor just stores all of its arguments in the appropriate fields of the class so they can later be accessed by the test methods.

When a parameterized test is executed, a new instance of the test class is created for the cross-product of each test method and each element of the collection of test data. The instance of the test class is produced by passing all test data elements from the array as arguments to the constructor. Then the appropriate test method is run.

Example

Let’s consider the following class with a single method:

package com.example.junitparameterizedtests;

public class OneBitsCounter {

    int getCount(long value) {
        value = value - ((value >> 1) & 0x5555555555555555L);
        value = (value & 0x3333333333333333L)
                + ((value >> 2) & 0x3333333333333333L);
        value = ((value + (value >> 4)) & 0x0F0F0F0F0F0F0F0FL);
        return (int) ((value * (0x0101010101010101L)) >> 56);
    }
}

The method is supposed to return the number of ‘1’ bits in the binary representation of the value. The code of the method is not obvious, so we would like to check it for several input values:

package com.example.junitparameterizedtests;

import java.util.Arrays;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class OneBitsCounterTestCase {

    @Parameters(name = "{index}: test({0}) expected={1}")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 0b0, 0},
            { 0b001, 1},
            { 0b11011, 4},
            { 0b111111111111111111111111111, 27},
            { 0b0111010111111111111111111111010101111111L, 34}
        });
    }
    
    private long value;
    private int oneBitsCount;
    
    public OneBitsCounterTestCase(long value, int oneBitsCount) {
        this.value = value;
        this.oneBitsCount = oneBitsCount;
    }
    
    @Test
    public void testGetCount() {
        OneBitsCounter counter = new OneBitsCounter();
        assertEquals(oneBitsCount, counter.getCount(value));
    }
    
}

The static data() method returns five arrays containing test data elements. For each array a new instance of the test class is created using the two-argument constructor. Once the object is created, the actual test method is run.

Naming individual tests

Since version 4.11 of JUnit it is possible to provide an individual name for each test using a simple name pattern in the @Parameters annotation. The name can contain the following placeholders:

  • {index} – current index of test data elements
  • {0}, {1}, {2}, … – corresponding test data element

This naming can be very useful to quickly identify the failing test.
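
With the pattern used in the example above, the generated display names would look roughly like this (illustrative, not exact runner output):

0: test(0) expected=0
1: test(1) expected=1
2: test(27) expected=4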

Conclusion

Support for parameterized tests is a simple yet very useful feature of JUnit enabling us to run the same test for many different sets of values. The main reason to use them is to reduce the size of source code and remove code duplication.

The complete source code of the example can be found at GitHub.


Evaluating postfix expressions

The standard notation used to represent mathematical expressions is called infix notation. You should be very familiar with it already because it is used almost exclusively in books and taught in schools. Just to be clear, a typical example of an infix expression is:

(2 + 3) - 7 / 9

However, there exist two other, significantly less popular, notations called prefix and postfix. In this article we will concentrate on the latter and describe what it is and how to evaluate it using a computer.

Postfix notation

Postfix notation (also known as Reverse Polish Notation, or RPN for short) is a mathematical notation in which an operator follows all of its operands. It differs from infix notation, in which operators are placed between their operands. The previously mentioned infix expression can be represented in postfix notation like this:

2 3 + 7 9 / -

To evaluate this expression we take the first two numbers, 2 and 3, add them and remember the result; then we take the next two numbers, 7 and 9, divide them and remember the result. Finally, we take the two remembered values and subtract them to obtain the final result.

While postfix notation may seem less natural and straightforward, it has several advantages which made it popular in computing. The main reason is that postfix expressions are generally easier to calculate on computers than the equivalent infix expressions and do not require any brackets to define the order of operations (assuming that every operator has a fixed number of operands). Additionally, the ease of processing results in significantly simpler and more efficient algorithms. This made postfix notation very popular for representing intermediate results of computations.

Algorithm

The algorithm to evaluate any postfix expression is based on a stack and is pretty simple:

  1. Initialize empty stack
  2. For every token in the postfix expression (scanned from left to right):
    1. If the token is an operand (number), push it on the stack
    2. Otherwise, if the token is an operator (or function):
      1. Check if the stack contains the sufficient number of values (usually two) for given operator
      2. If there are not enough values, finish the algorithm with an error
      3. Pop the appropriate number of values from the stack
      4. Evaluate the operator using the popped values and push the single result on the stack
  3. If the stack contains only one value, return it as a final result of the calculation
  4. Otherwise, finish the algorithm with an error

Example

As an example we will try to evaluate the following postfix expression:

2 3 4 + * 6 -

which can be represented in infix notation like this:

2 * (3 + 4) - 6

The exact steps of the algorithm are put in the table below:

Input token     | Operation           | Stack contents (top on the right) | Details
2               | Push on the stack   | 2                                 |
3               | Push on the stack   | 2, 3                              |
4               | Push on the stack   | 2, 3, 4                           |
+               | Add                 | 2, 7                              | Pop two values: 3 and 4 and push the result 7 on the stack
*               | Multiply            | 14                                | Pop two values: 2 and 7 and push the result 14 on the stack
6               | Push on the stack   | 14, 6                             |
-               | Subtract            | 8                                 | Pop two values: 14 and 6 and push the result 8 on the stack
(End of tokens) | (Return the result) | 8                                 | Pop the only value 8 and return it

The contents of the stack in the Stack contents column are represented from left to right, with the rightmost value being on the top of the stack. When there are no more tokens in the input, the contents of the stack are checked. If there is only one value, it is the result of the calculation. If there are no values or if there are many, the input expression was not a valid postfix expression.

Source code

The algorithm can be easily implemented in Java using LinkedList as a stack implementation:

package com.example.evalpostfix;

import java.util.Deque;
import java.util.LinkedList;
import java.util.Scanner;

public class PostfixEvaluator {

    private Deque<Double> args;

    public PostfixEvaluator() {
        args = new LinkedList<>();
    }

    public double evaluate(String expr) {
        args.clear();
        try (Scanner scanner = new Scanner(expr)) {
            while (scanner.hasNext()) {
                String token = scanner.next();
                processToken(token);
            }
        }

        if (args.size() == 1) {
            return args.pop();
        } else {
            throw new IllegalArgumentException("Invalid number of operators");
        }
    }

    private void processToken(String token) {
        switch (token) {
            case "+":
                addArgs();
                break;
            case "-":
                subArgs();
                break;
            case "*":
                mulArgs();
                break;
            case "/":
                divArgs();
                break;
            default:
                try {
                    double arg = Double.parseDouble(token);
                    args.push(arg);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("Invalid number: " + token, e);
                }
        }
    }

    private void addArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 + arg2);
    }

    private void subArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 - arg2);
    }

    private void mulArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 * arg2);
    }

    private void divArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 / arg2);
    }

    private void checkArgumentsSize() {
        if (args.size() < 2) {
            throw new IllegalArgumentException("Not enough parameters for operation");
        }
    }
}

The code allows any double value as an operand and can be easily extended to support additional binary operators (e.g. modulo, power), unary operations (e.g. factorial, square root) or functions (e.g. logarithm, trigonometric functions).
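
For illustration only, the following is a minimal, hypothetical sketch of such an extension; the class name ExtendedPostfixEvaluator and the helper requireArgs are made up for this example and are not part of the project above. A binary operator such as modulo pops two values, while a unary operation such as square root pops just one:

package com.example.evalpostfix;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Scanner;

// Hypothetical sketch, not part of the original project: the same stack-based
// approach extended with a binary modulo operator (%) and a unary square root (sqrt).
public class ExtendedPostfixEvaluator {

    private final Deque<Double> args = new ArrayDeque<>();

    public double evaluate(String expr) {
        args.clear();
        try (Scanner scanner = new Scanner(expr)) {
            while (scanner.hasNext()) {
                processToken(scanner.next());
            }
        }
        if (args.size() != 1) {
            throw new IllegalArgumentException("Invalid number of operators");
        }
        return args.pop();
    }

    private void processToken(String token) {
        switch (token) {
            case "%": {     // binary operator: pops two values, pushes one result
                requireArgs(2);
                double divisor = args.pop();
                double dividend = args.pop();
                args.push(dividend % divisor);
                break;
            }
            case "sqrt": {  // unary operation: pops a single value
                requireArgs(1);
                args.push(Math.sqrt(args.pop()));
                break;
            }
            default:        // everything else is treated as a number in this sketch
                args.push(Double.parseDouble(token));
        }
    }

    private void requireArgs(int needed) {
        if (args.size() < needed) {
            throw new IllegalArgumentException("Not enough parameters for operation");
        }
    }
}

Operand parsing still goes through Double.parseDouble, so an invalid token in this sketch simply results in a NumberFormatException instead of the friendlier message used in the full class.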

Conclusion

Evaluating postfix expressions is a simple example showing how useful a stack is when evaluating mathematical expressions. If you are interested in evaluating infix expressions, you can check the Shunting-yard algorithm.

You can find the complete source code with tests at GitHub.


Git: branching and merging

A branch is a core concept in Git and many other version control systems. Generally speaking, a branch is a line of development which is parallel to and independent of all other lines, but which still shares the same history with all other branches if you look far enough back in time. Because branches are independent, changes applied to one branch do not automatically propagate to other branches and are not visible there. This way development can be carried out in parallel by many contributors without them disturbing each other.

Mainline

Git by default (after creating a repository) comes with a single branch named master. This branch represents the main development line (often called the mainline) of the repository and is the branch from which new branches are typically created. It is also the usual destination branch into which other branches are merged.

Of course, new branches don’t have to be created directly from this branch or merged directly into it; they may also be created from other custom branches and merged into them.

Feature branches

A typical reason to create a branch is to have a “private space” to develop a new feature without disturbing other people’s work and without being disturbed by them. It also keeps the mainline free from questionable and incomplete code. Only after the feature is finished and tested on the feature branch is it merged as a whole into the mainline. Usually, the feature branch can be safely deleted after merging.

Bug-fix branches

Another reason to create a branch is to develop a bug-fix. Because the same bug is often present in several branches (the mainline and a few releases), it is generally easier and faster to create a new branch with the bug-fix and merge it into all branches where the bug is present. It is completely fine if such a branch contains only a single commit.
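
For instance, assuming a hypothetical bugfix1-branch created from master and a release branch named release-1.0 that also contains the bug (both names are made up for illustration), the flow could look roughly like this:

$ git checkout -b bugfix1-branch master
$ git commit -a -m "Fix the bug"
$ git checkout master
$ git merge bugfix1-branch
$ git checkout release-1.0
$ git merge bugfix1-branch
$ git branch -d bugfix1-branch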

Experimental branches

A branch can also be created to experiment with a tentative idea without affecting other people’s work. If the experiment turns out well, the experimental branch may be merged back. If it fails, the experimental branch can simply be deleted without any merging.

Release branches

Branches are also used to take a snapshot of the mainline at some point in time and prepare it for release. Usually, only bug-fixes and small improvements are added to a release branch in order to stabilize it and get ready for final testing and build.

Creating branches

Creating a branch in Git is very easy:

$ git branch feature1-branch

The created branch feature1-branch is a child of the current branch and has exactly the same history as its parent up to the moment of branching. The command above merely creates the branch, so if you want to work on it, you have to switch to it using the command:

$ git checkout feature1-branch
Switched to branch 'feature1-branch'

After this you can safely commit changes to the new branch, just as you would on the mainline.

There is also a shorthand command which creates a branch and immediately switches to it:

$ git checkout -b feature1-branch
Switched to a new branch 'feature1-branch'

As you probably know, Git has a notion of local and remote repositories. The branch we have just created is a local one and exists only in the local repository. Usually, you will want to push the new branch to a remote repository so that other team members can access it and work on it:

$ git push -u origin feature1-branch
Total 0 (delta 0), reused 0 (delta 0)
To file:///home/robert/tmp/git2/
 * [new branch]      feature1-branch -> feature1-branch
Branch feature1-branch set up to track remote branch feature1-branch from origin.

Option -u ensures that the local branch tracks the new remote branch. This way Git knows which remote branch to fetch and merge when you later run the argument-less git pull command.

Listing branches

With the git branch command you can also see all local branches:

$ git branch
* feature1-branch
  master

all remote branches:

$ git branch -r
  origin/HEAD -> origin/master
  origin/feature1-branch
  origin/master

or just all (local and remote) branches:

$ git branch -a
* feature1-branch
  master
  remotes/origin/HEAD -> origin/master
  remotes/origin/feature1-branch
  remotes/origin/master

The branch annotated with an asterisk is the current branch. Additionally, Git provides an option to list branches which are already merged into the current branch (either directly or indirectly):

$ git branch --merged
  feature1-branch
* master
  test

There is also the opposite option --no-merged. These two options are very useful for determining which branches can be safely deleted from the repository.
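
For example, on a hypothetical repository where experimental1-branch has not yet been merged into the current branch, the output could look like this:

$ git branch --no-merged
  experimental1-branch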

Switching between branches

As shown before, switching between branches is done using the git checkout command:

$ git checkout feature1-branch
Switched to branch 'feature1-branch'

All untracked files and uncommitted local changes in the working tree are left untouched, so they can later be committed to the new branch. If the target branch is not found in the local repository but a branch with the same name exists in exactly one remote repository, the command creates a local branch tracking the remote one and switches to it.
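
For example, checking out a branch that so far exists only in the remote repository (here a made-up feature2-branch) might produce output similar to this:

$ git checkout feature2-branch
Branch feature2-branch set up to track remote branch feature2-branch from origin.
Switched to a new branch 'feature2-branch'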

Deleting branches

If a branch was fully merged and is no longer needed, it can be deleted with command:

$ git branch -d bugfix1-branch
Deleted branch bugfix1-branch (was b12dd4e).

If it was not merged and we don’t plan to do so for some reason, we can remove the branch forcefully:

$ git branch -D experimental1-branch
Deleted branch experimental1-branch (was b12dd4e).

These commands operate on the local repository only, so after removing a local branch it is usually a good idea to remove the same branch from the remote repository as well:

$ git push origin :experimental1-branch
To file:///home/robert/tmp/git2/
 - [deleted]         experimental1-branch

While this looks almost like pushing a new branch to a repository, there is a subtle difference: the colon before the branch name (actually an empty branch name before the colon) tells Git that the branch should be removed rather than created. If this syntax seems too obscure, an equivalent alternative may be used:

$ git push origin --delete experimental1-branch
To file:///home/robert/tmp/git2/
 - [deleted]         experimental1-branch

Synchronizing with remote

Even if a local branch tracks a branch in the remote repository, changes committed to the local branch won’t automatically appear in the remote one. Changes made to the current local branch can be pushed to the remote repository (possibly together with changes to other branches, depending on the Git version and configuration) using the command:

$ git push

Additionally, to fetch changes made by other developers to the tracked remote branch and apply them to the current local one, use the pull command:

$ git pull

When many people work on a project, you may end up in a situation where somebody has removed a branch from the remote repository but you still see it in the output of git branch -a, even after running git pull many times. This is because git pull does not automatically prune branches that no longer exist on the remote; it has to be done manually:

$ git remote prune origin
Pruning origin
URL: file:///home/robert/tmp/git2/
 * [pruned] origin/feature7-branch

Merging

When a new child branch is created from a parent branch, they have exactly the same history. But once you start applying changes to one of them, their histories start to diverge. At some point you may decide that you want to share some of the changes from one branch (usually the child) with another one (usually the parent). This concept is commonly called merging in version control systems. After merging, the changes from the source branch become available and visible in the destination branch.

To merge one branch into another, you have to switch to the destination branch and then run the git merge command with the name of the source branch to merge in:

$ git merge feature11-branch 
Updating b12dd4e..0b7f55a
Fast-forward
 ABC.txt | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 ABC.txt

Of course, after this the source branch may be deleted and the changes to the destination branch should be pushed to a remote repository.
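
Assuming the merge above, that cleanup could be as simple as this (the push output is omitted here):

$ git branch -d feature11-branch
$ git push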

In case Git complains about conflicts during the merge operation, you can refer to the article explaining how to resolve merge conflicts.

Conclusion

Branching and merging are among the most important concepts in version control systems and every developer should know them. In this article I have concentrated on the basics, which should be enough in most cases. For details you can always consult the git-branch, git-checkout and git-merge manual pages.
