Data tables in JSF

The data table is one of the most advanced UI components provided by JSF. It is used to present data in a table form and can contain multiple rows and columns and, optionally, a header, a footer and a caption. In this post I would like to explain how to use it to render a simple table like this:

[screenshot: the rendered JSF data table]

Basic table

The usage of the data table is pretty simple. First, the data table has to be defined using the h:dataTable element. This element needs at least two attributes to iterate through the data. The first one is value, which should point to one of the following:

  • single object
  • array
  • instance of java.util.List
  • instance of java.sql.ResultSet
  • instance of javax.servlet.jsp.jstl.sql.Result
  • instance of javax.faces.model.DataModel

JSF iterates through every element of the object pointed to by this attribute and assigns each element to the variable specified by the second attribute, var. Later, the properties can be extracted from this variable and put into the table. Typically, the data provided to a data table is in the form of an array or a list. If the data references a single scalar object, only one row will be rendered, which is not very useful.

Inside the h:dataTable element we should put as many h:column elements as there are individual columns. Each h:column element can contain several JSF components which will appear in a single cell of the table. The final structure of a simple table looks like this:

<h:dataTable value="#{rows}" var="row">
  <h:column>
    <!-- components in the first column -->
    #{row.columnValue1}
  </h:column>
  <h:column>
    <!-- components in the second column -->
    #{row.columnValue2}
  </h:column>
  <!-- more columns -->
</h:dataTable>

Facets

This simple data table can be extended with an optional header, footer and caption using f:facet elements:

<h:dataTable value="#{rows}" var="row"
    headerClass="headerClass1"
    footerClass="footerClass1"
    captionClass="captionClass1">
  <h:column>
    <f:facet name="header">
      <!-- first column header contents -->
    </f:facet>
    <f:facet name="footer">
      <!-- first column footer contents -->
    </f:facet>
    <!-- components in the first column -->
    #{row.columnValue1}
  </h:column>
  <h:column>
    <f:facet name="header">
      <!-- second column header contents -->
    </f:facet>
    <f:facet name="footer">
      <!--  second column footer contents -->
    </f:facet>
    <!-- components in the second column -->
    #{row.columnValue2}
  </h:column>
  <!-- more columns -->
  <f:facet name="caption">
    <!-- table caption contents -->
  </f:facet>
</h:dataTable>

Facets for the header and footer are placed inside h:column elements while the facet for the caption is put directly inside h:dataTable. Additionally, we can specify CSS classes to be used by the header (headerClass), footer (footerClass) and caption (captionClass) of the table. There is also a captionStyle attribute which specifies an inline style for the caption.

Styles for rows and columns

CSS styles for individual columns and rows can be defined using the columnClasses and rowClasses attributes of h:dataTable respectively. These attributes are independent of each other: rowClasses is applied to the rows (tr elements) while columnClasses is applied to the cells (td elements), so they can even be combined on the same table.

Both attributes take a comma-separated list of CSS classes. The first class is used for the first column/row, the second one for the second column/row and so on. If the number of CSS classes is smaller than the number of columns/rows in the table, the classes are reused cyclically for the remaining columns/rows. For example, if we specify only two classes, the first one is used for the odd columns/rows and the second one for the even ones.
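For instance, the classic zebra-striped table needs only two row classes (the class names below are arbitrary and have to exist in our stylesheet):

<h:dataTable value="#{rows}" var="row"
    rowClasses="oddRow, evenRow">
  <!-- columns -->
</h:dataTable>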

To apply a CSS style to the whole table we can use the standard style or styleClass attributes.

Sorting and paging

The h:dataTable element does not provide any real support for sorting elements in columns or splitting data into multiple pages, so it has to be done manually (e.g. by defining a non-standard component or using appropriate database queries to sort the data and fetch only a portion of it). It does contain first and rows attributes to display only a portion of the rows, but on its own this is not very useful in practice, especially for very large tables.
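For example, a table displaying only the third page of ten rows could be declared like this (a minimal sketch):

<h:dataTable value="#{headers.entries}" var="entry"
    first="20" rows="10">
  <!-- columns -->
</h:dataTable>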

Example

Equipped with this knowledge we can create a table with a list of all HTTP headers sent by a web browser to the server. A single row of the table is represented by the following name-value class:

package com.example.jsfdatatable;

public class HeaderEntry {

    private String name;
    private String value;

    public HeaderEntry(String name, String value) {
        this.name = name;
        this.value = value;
    }

    public String getName() {
        return name;
    }

    public String getValue() {
        return value;
    }

}

The list of all HTTP headers (table rows) is fetched from HttpServletRequest by the getEntries() method:

package com.example.jsfdatatable;

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import javax.enterprise.context.RequestScoped;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;

@Named
@RequestScoped
public class Headers {

    public List<HeaderEntry> getEntries() {
        List<HeaderEntry> result = new ArrayList<>();
        ExternalContext context = FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = (HttpServletRequest) context.getRequest();
        
        Enumeration<String> namesIt = request.getHeaderNames();
        while (namesIt.hasMoreElements()) {
            String name = namesIt.nextElement();
            Enumeration<String> valueIt = request.getHeaders(name);
            while (valueIt.hasMoreElements()) {
                String value = valueIt.nextElement();
                result.add(new HeaderEntry(name, value));
            }
        }
        return result;
    }
}

The whole JSF web page contains a table with two columns (Name and Value), header, footer and caption. We use rowClasses to apply separate styles for odd and even rows:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html" 
      xmlns:f="http://java.sun.com/jsf/core">
    <h:head>
        <title>#{msgs.httpHeaders}</title>
        <h:outputStylesheet library="css" name="styles.css" />
    </h:head>
    <h:body>
        <h:dataTable value="#{headers.entries}" var="entry"
                     styleClass="table"
                     rowClasses="tableRowOdd, tableRowEven"
                     headerClass="tableHeader"
                     footerClass="tableFooter"
                     captionClass="tableCaption" >
            <h:column>
                <f:facet name="header">#{msgs.name}</f:facet>
                <f:facet name="footer">#{msgs.stopDash}</f:facet>
                <h:outputText value="#{entry.name}" />
            </h:column>
            <h:column>
                <f:facet name="header">#{msgs.value}</f:facet>
                <f:facet name="footer">#{msgs.stopDash}</f:facet>
                <h:outputText value="#{entry.value}" />
            </h:column>
            <f:facet name="caption">#{msgs.httpHeadersCaption}</f:facet>
        </h:dataTable>
    </h:body>
</html>

Conclusion

The h:dataTable element is very useful for rendering HTML tables. While it does not provide any advanced features, it can be easily extended or used as a part of a bigger structure which provides such support.

Even though we used only plain text inside cells, they can actually contain more advanced components like graphics, radio buttons, check boxes, lists and so on.

The complete example was tested on JBoss AS 7.1 and is available at GitHub.


Parameterized unit tests in JUnit

Sometimes you may want to execute a series of tests which differ only by input values and expected results. Instead of writing each test separately, it is much better to abstract the actual test into a single class and provide it with a list of all input values and expected results. JUnit 4 introduced a standard and easy solution to this problem called parameterized tests.

Structure of a parameterized test

In order to use a parameterized test, the test class must be annotated with the @RunWith(Parameterized.class) annotation to inform JUnit that a custom test runner should be used instead of the standard one. This custom runner imposes several requirements on the test class. First, the class has to provide a public static method annotated with @Parameters which returns a collection of test data elements (which in turn are stored in arrays). Additionally, the test class should have a single constructor which accepts the test data elements from such an array. Typically, the constructor just stores all of its arguments in fields of the class so they can later be accessed by the test methods.

When a parameterized test is executed, a new instance of the test class is created for the cross-product of all test methods and all elements of the collection of test data. Each instance is produced by passing the test data elements of one array as arguments to the constructor. Then the appropriate test method is run.

Example

Let’s consider the following class with a single method:

package com.example.junitparameterizedtests;

public class OneBitsCounter {

    int getCount(long value) {
        value = value - ((value >> 1) & 0x5555555555555555L);
        value = (value & 0x3333333333333333L)
                + ((value >> 2) & 0x3333333333333333L);
        value = ((value + (value >> 4)) & 0x0F0F0F0F0F0F0F0FL);
        return (int) ((value * (0x0101010101010101L)) >> 56);
    }
}

The method is supposed to return the number of '1' bits in the binary representation of the value. The code of the method is not obvious, so we would like to check it for several input values:

package com.example.junitparameterizedtests;

import java.util.Arrays;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class OneBitsCounterTestCase {

    @Parameters(name = "{index}: test({0}) expected={1}")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 0b0, 0},
            { 0b001, 1},
            { 0b11011, 4},
            { 0b111111111111111111111111111, 27},
            { 0b0111010111111111111111111111010101111111L, 34}
        });
    }
    
    private long value;
    private int oneBitsCount;
    
    public OneBitsCounterTestCase(long value, int oneBitsCount) {
        this.value = value;
        this.oneBitsCount = oneBitsCount;
    }
    
    @Test
    public void testGetCount() {
        OneBitsCounter counter = new OneBitsCounter();
        assertEquals(oneBitsCount, counter.getCount(value));
    }
    
}

The static data() method returns five arrays of test data elements. For each array, a new instance of the test class is created using the two-argument constructor. Once the object is created, the actual test method is run.

Naming individual tests

Since version 4.11 of JUnit it is possible to give each test an individual name using a simple name pattern in the @Parameters annotation. The name can contain the following placeholders:

  • {index} – current index of test data elements
  • {0}, {1}, {2}, … – corresponding test data element

This naming can be very useful to quickly identify the failing test.

Conclusion

Support for parameterized tests is a simple yet very useful feature of JUnit enabling us to run the same test for many different sets of values. The main reason to use them is to reduce the size of source code and remove code duplication.

The complete source code of the example can be found at GitHub.


Evaluating postfix expressions

The standard notation used to represent mathematical expressions is called infix notation. You should be very familiar with it already because it is almost exclusively used in books and taught in schools. Just to be clear, a typical example of an infix expression is:

(2 + 3) - 7 / 9

However, there exist two other, significantly less popular notations called prefix and postfix. In this article we will concentrate on the latter and describe what it is and how to evaluate it using a computer.

Postfix notation

Postfix notation (also known as Reverse Polish Notation, or RPN for short) is a mathematical notation in which operators follow their operands. It differs from infix notation, in which operators are placed between their operands. The previously mentioned infix expression can be represented in postfix notation like this:

2 3 + 7 9 / -

To evaluate this expression we take the first two numbers 2 and 3, add them and remember the result; then we take the next two numbers 7 and 9, divide the first by the second and remember the result. Finally, we take the two remembered values and subtract the second from the first to obtain the final result.

While postfix notation may seem less natural and straightforward, it has several advantages which have made it popular in computing. The main one is that postfix expressions are generally easier to evaluate on computers than the equivalent infix expressions and do not require any brackets to define the order of operations (assuming that every operator has a fixed number of operands). Additionally, the ease of processing results in significantly simpler and more efficient algorithms. This made postfix notation very popular for representing intermediate results of computations.

Algorithm

The algorithm to evaluate any postfix expression is based on a stack and is pretty simple:

  1. Initialize an empty stack
  2. For every token in the postfix expression (scanned from left to right):
    1. If the token is an operand (number), push it on the stack
    2. Otherwise, if the token is an operator (or function):
      1. Check if the stack contains a sufficient number of values (usually two) for the given operator
      2. If there are not enough values, finish the algorithm with an error
      3. Pop the appropriate number of values from the stack
      4. Evaluate the operator using the popped values and push the single result on the stack
  3. If the stack contains exactly one value, return it as the final result of the calculation
  4. Otherwise, finish the algorithm with an error

Example

As an example we will try to evaluate the following postfix expression:

2 3 4 + * 6 -

which can be represented in infix notation like this:

2 * (3 + 4) - 6

The exact steps of the algorithm are put in the table below:

Input token | Operation           | Stack contents (top on the right) | Details
2           | Push on the stack   | 2                                 |
3           | Push on the stack   | 2, 3                              |
4           | Push on the stack   | 2, 3, 4                           |
+           | Add                 | 2, 7                              | Pop the two values 3 and 4 and push the result 7 on the stack
*           | Multiply            | 14                                | Pop the two values 2 and 7 and push the result 14 on the stack
6           | Push on the stack   | 14, 6                             |
-           | Subtract            | 8                                 | Pop the two values 14 and 6 and push the result 8 on the stack
(end of tokens) | Return the result | 8                               | Pop the only value 8 and return it

The contents of the stack in the Stack contents column are listed from left to right with the rightmost values on the top of the stack. When there are no more tokens in the input, the contents of the stack are checked. If there is exactly one value, it is the result of the calculation. If there are no values or more than one, the input was not a valid postfix expression.

Source code

The algorithm can be easily implemented in Java using LinkedList as a stack implementation:

package com.example.evalpostfix;

import java.util.Deque;
import java.util.LinkedList;
import java.util.Scanner;

public class PostfixEvaluator {

    private Deque<Double> args;

    public PostfixEvaluator() {
        args = new LinkedList<>();
    }

    public double evaluate(String expr) {
        args.clear();
        try (Scanner scanner = new Scanner(expr)) {
            while (scanner.hasNext()) {
                String token = scanner.next();
                processToken(token);
            }
        }

        if (args.size() == 1) {
            return args.pop();
        } else {
            throw new IllegalArgumentException("Invalid number of operators");
        }
    }

    private void processToken(String token) {
        switch (token) {
            case "+":
                addArgs();
                break;
            case "-":
                subArgs();
                break;
            case "*":
                mulArgs();
                break;
            case "/":
                divArgs();
                break;
            default:
                try {
                    double arg = Double.parseDouble(token);
                    args.push(arg);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("Invalid number: " + token, e);
                }
        }
    }

    private void addArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 + arg2);
    }

    private void subArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 - arg2);
    }

    private void mulArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 * arg2);
    }

    private void divArgs() {
        checkArgumentsSize();
        double arg2 = args.pop();
        double arg1 = args.pop();
        args.push(arg1 / arg2);
    }

    private void checkArgumentsSize() {
        if (args.size() < 2) {
            throw new IllegalArgumentException("Not enough parameters for operation");
        }
    }
}

The code allows any double value as an operand and can be easily extended to support additional binary operators (e.g. modulo, power), unary operators (e.g. factorial, square root) or functions (e.g. logarithm, trigonometric functions).
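For instance, supporting a unary square-root operator would only require one more case in processToken() and a small helper method (a sketch only; the sqrtArg name is made up here):

    case "sqrt":
        sqrtArg();
        break;

private void sqrtArg() {
    if (args.isEmpty()) {
        throw new IllegalArgumentException("Not enough parameters for operation");
    }
    // pop the single argument and push its square root back on the stack
    args.push(Math.sqrt(args.pop()));
}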

Conclusion

Evaluating postfix expressions is a very simple example presenting the usefulness of a stack in evaluating mathematical expressions. If you are interested in evaluating infix expressions, you can check the Shunting-yard algorithm.

You can find the complete source code with tests at GitHub.


Git: branching and merging

A branch is a core concept in Git and many other version control systems. Generally speaking, a branch is a line of development which is parallel to and independent of all other lines but which still shares the same history with all other branches if you look far enough back in time. Because the branches are independent, the changes applied to one branch do not automatically propagate to other branches and are not visible there. This way development can be done in parallel by many contributors without disturbing each other.

Mainline

Git by default (after creating a repository) comes with a single branch named master. This branch represents the main development line (often called the mainline) of the repository and is the branch from which new branches are usually created. It is also the usual destination branch into which other branches are merged.

Of course, new branches don’t have to be created directly off this branch and merged directly into it but may also be created from other custom branches and merged into them.

Feature branches

A typical reason to create a branch is to have a “private space” to develop a new feature without disturbing other people’s work and being disturbed by them. It also keeps the mainline free from questionable and incomplete code. Only after the feature is finished and tested on the feature branch is it merged as a whole into the mainline. Usually, after merging, the feature branch can be safely deleted.

Bug-fix branches

Another reason to create a branch is to develop a bug-fix. Because the same bug is often present in many different branches (the mainline and a few release branches), it is generally easier and faster to create a new branch with the bug-fix and merge it into all branches where the bug is present. It is completely fine if such a branch contains only a single commit.
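A typical session could look like this (the branch names are made up for illustration):

$ git checkout -b bugfix1-branch master
... fix the bug and commit it ...
$ git checkout release-1.0
$ git merge bugfix1-branch
$ git checkout master
$ git merge bugfix1-branch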

Experimental branches

A branch can also be created to experiment with a tentative idea without affecting other people’s work. If the experiment turns out well, the experimental branch may be merged back. If it fails, the experimental branch can be deleted without any merging.

Release branches

Branches are also used to take a snapshot of the mainline at some point in time and prepare it for release. Usually, only bug-fixes and small improvements are added to a release branch in order to stabilize it and get ready for final testing and build.

Creating branches

Creating a branch in Git is very easy:

$ git branch feature1-branch

The created branch feature1-branch is a child of the current branch and has exactly the same history up to the moment of branching. The command above merely creates the branch, so if you want to work on it, you have to switch to it using the command:

$ git checkout feature1-branch
Switched to branch 'feature1-branch'

After this you can safely commit changes to the new branch similarly as you would do with the mainline.

There is also a shorthand command which creates a branch and immediately switches to it:

$ git checkout -b feature1-branch
Switched to a new branch 'feature1-branch'

As you should know Git has a notion of local and remote repositories. The branch we have just created is a local one and is present only in a local repository. Usually, you would want to push this new branch to a remote repository so that other team members can access and work on it:

$ git push -u origin feature1-branch
Total 0 (delta 0), reused 0 (delta 0)
To file:///home/robert/tmp/git2/
 * [new branch]      feature1-branch -> feature1-branch
Branch feature1-branch set up to track remote branch feature1-branch from origin.

Option -u ensures that the local branch tracks the new remote branch. This way Git is able to find the right local branch when pulling the changes from a remote repository using argument-less git pull command.

Listing branches

With git branch command you can also see all local branches:

$ git branch
* feature1-branch
  master

all remote branches:

$  git branch -r
  origin/HEAD -> origin/master
  origin/feature1-branch
  origin/master

or just all (local and remote) branches:

$  git branch -a
* feature1-branch
  master
  remotes/origin/HEAD -> origin/master
  remotes/origin/feature1-branch
  remotes/origin/master

The branch annotated with an asterisk is the current branch. Additionally, Git provides an option to list branches which are already merged into the current branch (either directly or indirectly):

$ git branch --merged
  feature1-branch
* master
  test

There is also an opposite option --no-merged. These two options are very useful for determining which branches can be safely deleted from the repository.

Switching between branches

As shown before switching between branches is done using git checkout command:

$ git checkout feature1-branch
Switched to branch 'feature1-branch'

All untracked files and local uncommitted changes in the working tree are left untouched so they can be later committed to the new branch. If the target branch is not found in the local repository but there exists a tracking branch in exactly one remote repository, the command creates a local branch pointing to the remote one and switches to it.

Deleting branches

If a branch was fully merged and is no longer needed, it can be deleted with command:

$ git branch -d bugfix1-branch
Deleted branch bugfix1-branch (was b12dd4e).

If it was not merged and we don’t plan to do so for some reason, we can remove the branch forcefully:

$ git branch -D experimental1-branch
Deleted branch experimental1-branch (was b12dd4e).

These commands operate on the local repository only so after removing a local branch, it is usually a good idea to remove the same branch from the remote repository:

$ git push origin :experimental1-branch
To file:///home/robert/tmp/git2/
 - [deleted]         experimental1-branch

While this looks almost like pushing a new branch to a repository, there is a slight difference: a colon before the branch name (actually an empty branch name before the colon) which informs Git that the branch should be deleted rather than created. If this command seems too obscure, an alternative may be used:

$ git push origin --delete experimental1-branch
To file:///home/robert/tmp/git2/
 - [deleted]         experimental1-branch

Synchronizing with remote

Even if a local branch has a tracking remote branch in the remote repository, the changes committed to the local branch won’t automatically appear in the remote one. Changes made to the current local branch can be pushed to the remote repository (possibly with changes to other branches depending on Git version and configuration) using command:

$ git push

Additionally, to fetch changes made by other developers in a remote tracking branch and apply them to the current local one, the pull command can be used:

$ git pull

When many people work on a project, you may end up in a situation where somebody removed a branch from the remote repository but you still see it in the output of the git branch -a command even after running git pull many times. This is because git pull does not automatically prune no-longer-existing branches; it has to be done manually:

$ git remote prune origin
Pruning origin
URL: file:///home/robert/tmp/git2/
 * [pruned] origin/feature7-branch

Merging

When a new child branch is created based on a parent branch, they have exactly the same history. But once you start applying changes to one of them, their histories start to diverge. At some point you may decide that you want to share some of the changes from one of the branches (usually child branch) with another one (usually parent branch). This concept is commonly called merging in version control systems. After merging, the changes from the source branch will become available and visible in the destination branch.

To merge a branch into another one, you have to switch to the destination branch and then run git merge command with the name of the source branch to merge in:

$ git merge feature11-branch 
Updating b12dd4e..0b7f55a
Fast-forward
 ABC.txt | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 ABC.txt

Of course, after this the source branch may be deleted and the changes to the destination branch should be pushed to a remote repository.

In case Git complains about conflicts during merge operation, you can refer to the article explaining how to resolve merge conflicts.

Conclusion

Branching and merging are among the most important concepts in version control systems and every developer should know them. In this article I have concentrated on the basics, which should be enough in most cases. For details you can always consult the git-branch, git-checkout and git-merge manual pages.


Custom bean validation constraints

The Bean Validation API defines several built-in constraint annotations which are very useful in many situations. However, there are still cases where the standard constraints are not enough and you have to create your own custom constraint. With Bean Validation this task is pretty easy and straightforward.

In the article Validating HTML forms in Spring using Bean Validation we built a simple Spring MVC application using built-in constraints only. This time we will introduce a new text form field for entering a favourite day of the week (in place of the age field) and add a custom constraint to this field. The new field will be checked to determine whether it is a part of the workweek or the weekend, depending on the attributes set in the constraint. Additionally, the constraint will allow specifying whether the comparison is case-sensitive or not.

Defining custom annotation

The first step is creating a custom annotation @DayOfWeek which represents the custom constraint:

package com.example.beanvalidationcustomconstraint;

import java.lang.annotation.Documented;
import static java.lang.annotation.ElementType.ANNOTATION_TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import java.lang.annotation.Retention;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.Payload;

@Documented
@Constraint(validatedBy = DayOfWeekValidator.class)
@Target({ METHOD, FIELD, ANNOTATION_TYPE })
@Retention(RUNTIME)
public @interface DayOfWeek {
    String message() default "{com.example.beanvalidationcustomconstraint.DayOfWeek.message}";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
    DayOfWeekType[] value() default { };
    boolean ignoreCase() default false;
}

The annotation contains three mandatory attributes: message, groups and payload. The first one specifies the message to show (or a reference to it) if the validation fails. In this case the message attribute references the actual message stored in the ValidationMessages.properties file or one of its internationalized versions. The groups attribute allows the definition of validation groups but we won’t use any in this example. The last one, payload, specifies extra data to be used by the clients of this constraint; we do not use it in this example either.

There are also two other attributes which are more interesting from our point of view and are used to provide additional settings for the custom constraint. The value is the default attribute (used when no attribute name is specified when applying the annotation) and in our case it holds an array of allowed day types:

package com.example.beanvalidationcustomconstraint;

public enum DayOfWeekType {
    WORKWEEK,
    WEEKEND
}

The ignoreCase attribute specifies whether the constraint should use case-sensitive or case-insensitive string comparisons. If these attributes are not specified, they default to an empty array and false, respectively.

We also annotate the newly created annotation with @Documented to include it in the JavaDoc of the elements it annotates, with @Constraint to indicate that this is a Bean Validation constraint annotation and to specify the validator associated with it, with @Target to declare that the annotation can be attached to methods, fields and other annotations, and with @Retention to make the annotation available at runtime via reflection.

Defining the validator

The created annotation does not contain the logic which performs the actual validation; instead it refers to the DayOfWeekValidator class using the @Constraint annotation. The actual validator looks like this:

package com.example.beanvalidationcustomconstraint;

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class DayOfWeekValidator implements ConstraintValidator<DayOfWeek, String> {
    private DayOfWeekType[] allowedTypes;
    private boolean ignoreCase;
    
    @Override
    public void initialize(DayOfWeek constraint) {
        allowedTypes = constraint.value();
        ignoreCase = constraint.ignoreCase();
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        if (value == null)
            return true;
        
        for (DayOfWeekType type : allowedTypes) {
            switch (type) {
                case WORKWEEK:
                    if (isWorkWeek(value))
                        return true;
                    break;
                case WEEKEND:
                    if (isWeekEnd(value))
                        return true;
            }
        }
        return false;
    }

    private boolean isWorkWeek(String value) {
        return equalsDay(value, "Monday") || equalsDay(value, "Tuesday")
                || equalsDay(value, "Wednesday") || equalsDay(value, "Thursday")
                || equalsDay(value, "Friday");
    }

    private boolean isWeekEnd(String value) {
        return equalsDay(value, "Saturday") || equalsDay(value, "Sunday");
    }

    private boolean equalsDay(String value1, String value2) {
        return ignoreCase ? value1.equalsIgnoreCase(value2) : value1.equals(value2);
    }

}

The custom validator implements the generic ConstraintValidator interface with two type parameters: the type of the custom constraint annotation and the type of the element which can be validated using this validator. Then we implement the initialize() method, which fetches the attributes/settings of the custom constraint, and the isValid() method, which performs the actual validation and returns true if the validation finished successfully or false otherwise.

Using the constraint

Once we have the annotation and the validator ready, we can use it in the same way as any other built-in constraint:

package com.example.beanvalidationcustomconstraint;

import java.io.Serializable;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Person implements Serializable {
    private static final long serialVersionUID = 3297423984732894L;
    
    @Size(min = 1, max = 20, message = "{firstNameInvalid}")
    private String firstName;
    @Size(min = 1, max = 40, message = "{lastNameInvalid}")
    private String lastName;

    @NotNull
    @DayOfWeek(value = DayOfWeekType.WEEKEND, ignoreCase = true)
    private String favouriteDayOfWeek;

    // constructor, setters and getters
}

In this case we allow only Saturday and Sunday (ignoring letter case) as the value of the favouriteDayOfWeek field. Because we use Spring MVC, the validation will take place when the user tries to submit the form.

Defining custom constraints using composition

Sometimes we don’t even need to define a validator for a custom constraint. This is possible if we can represent our custom constraint as a conjunction of already existing constraints. In this example we define a constraint which is met only if the @NotNull, @Min and @Max constraints are all met:

package com.example.beanvalidationcustomconstraint;

import java.lang.annotation.Documented;
import static java.lang.annotation.ElementType.ANNOTATION_TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import java.lang.annotation.Retention;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.Payload;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

@NotNull
@Min(0)
@Max(10)
@Documented
@Constraint(validatedBy = {})
@Target({ METHOD, FIELD, ANNOTATION_TYPE })
@Retention(RUNTIME)
public @interface Range {
    String message() default "Range is not valid";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};

}

This is especially useful if the same combination of constraints is applied to many different fields or methods.

Conclusion

Bean Validation is very extensible and allows us to define virtually any custom constraint and use it in the same way as the built-in ones.

The sample code for this example was tested with JBoss and is available at GitHub.


Common exception misuses in Java (and not only)

Exceptions were introduced in many programming languages as a standard method to report and handle errors. If you have ever used functions that return special values (usually -1 or NULL) to indicate an error, you should know how easy it is to forget to check this value and completely ignore the error. One great advantage of exceptions is that it is generally “hard to forget about them”. It means that if you don’t handle the exception somewhere in the code, it will abort the execution of the application and will also appear on a console or in logs.

Although exceptions were introduced into mainstream programming languages many years ago and many books have been written about them, they are still often misused. In this article I try to describe several questionable practices regarding the usage of exceptions which are usually better avoided.

Using too many try-catch blocks

Sometimes you may see code like this:

Writer writer = null;
try {
     writer = new FileWriter("/tmp/a");
} catch(FileNotFoundException e) {
     // handle error; and return from the method
} catch (IOException ex) {
     // handle error; and return from the method
}       
try {
     writer.write("Line1");
} catch (IOException e) {
     // handle error; close the file and return from the method
}
try {
     writer.close();
} catch (IOException e) {
     // handle error; and return from the method
}

Generally there is nothing really wrong with this code as far as exceptions are concerned, but it would be much better if all the instructions were put into a single try-catch block:

Writer writer = null;
try {
    writer = new FileWriter("/tmp/a");
    writer.write("Line1");
} catch (FileNotFoundException e) {
    // handle error
} catch (IOException ex) {
    // handle error
} finally {
    if (writer != null) {
        try {
            writer.close();
        } catch (IOException e) {
            // handle or log the error from closing
        }
    }
}

It reduces the length of the source code and improves its readability. Merging try-catch blocks changes the behaviour of the application, but usually the change is not very significant and is totally acceptable. In some cases we may still prefer to use multiple try-catch blocks (e.g. to provide better error reporting).
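On Java 7 and later the same logic can be written even more compactly using try-with-resources, which closes the writer automatically (a minimal sketch of the same operation):

try (Writer writer = new FileWriter("/tmp/a")) {
    writer.write("Line1");
} catch (IOException e) {
    // handle error
}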

Using exceptions for control flow

Exceptions were invented for error reporting and handling only, and should not be used for other purposes like control flow. Here is an example of such misuse:

try {
    Iterator<String> it = list.iterator();
    while (true) {
       String value = it.next();
       // do something with value
    }
} catch (NoSuchElementException e) {
    // OK, end of the list
}

Instead of catching NoSuchElementException, the code should check whether there is a next element available in the iterator before accessing it. This check would completely prevent the mentioned exception from appearing.

While the code above is ugly, it also has another problem. Throwing and catching an exception is generally a very expensive operation in most (all?) programming languages. If the code above runs very often (especially in some loop), it may greatly slow down your application.
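For reference, the exception-free version is both shorter and faster:

Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String value = it.next();
    // do something with value
}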

Using wrong exception class

Java provides its own exception hierarchy with many predefined exception classes. The exceptions in Java can be divided into unchecked and checked ones. The former should be used for reporting programmer errors like dereferencing a null value, accessing elements outside of array bounds or fetching objects from an empty collection. Generally, unchecked exceptions can be easily avoided by adding a simple condition in the code before calling a method. The latter are generally unpredictable and usually not much can be done to prevent them. Examples of checked exceptions are I/O or file parsing errors.

A common mistake is throwing a checked exception from a method when an unchecked one should be used. The programmers using this method will be forced to catch the exception but there would be nothing they could do to handle it (except maybe rethrowing it as an unchecked exception). This results in unnecessary and hard-to-read code.

The opposite is also possible. If a method throws an unchecked exception for an error which must be handled in some way, the programmers simply will not catch this exception, which in turn may abort the execution of the application.

Meaningless messages

One of the common mistakes is throwing an exception without any message or with a message which does not describe the cause of the problem:

if (list.isEmpty())
    throw new IllegalArgumentException();
if (list.size() != array.length)
    throw new IllegalArgumentException("Wrong data");

Such exceptions are generally useless. Even if you have the full source code of the class from which the exception is thrown, it may still not explain what exactly is wrong. It is generally better to put some additional data into the exception message so it will be much easier to find the root cause once the exception appears:

if (list.isEmpty())
    throw new IllegalArgumentException("List is empty");
if (list.size() != array.length)
    throw new IllegalArgumentException("List and array have different sizes: " + list.size() + " " + array.length);

Catching unchecked exceptions

In most cases unchecked exceptions should not be caught at all but prevented using a simple condition. You can see an example of such an issue above in the paragraph Using exceptions for control flow. Whenever there is a catch block for an unchecked exception in the code, it should usually be removed and replaced by a proper condition.

Another problem with catching unchecked exceptions is that it hides programming errors and therefore makes it more difficult to find out why a certain thing does not work. The same is also true when catching instances of the general Exception and Throwable classes.

As always, there are some situations in which catching unchecked exceptions is acceptable. The first is when there is no easy way to ensure that the exception won’t happen. For example there is no easy method to check whether the string representation of a number is valid before calling Integer.parseInt(). Therefore, it is much easier to call the method and catch the unchecked exception if it happens.
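For example (portText stands for an arbitrary user-provided string here):

int port;
try {
    port = Integer.parseInt(portText);
} catch (NumberFormatException e) {
    // the string is not a valid number - fall back to a default
    port = 8080;
}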

The other situation is that you are running some external code which cannot be validated beforehand and you don’t want the unchecked exceptions thrown by this code to abort the execution of your application. This is what web application servers do.

Reporting exceptions late

Sometimes programmers are afraid to throw an exception when an error happens. Instead they often return a null value, an empty string, an empty collection or just ignore the error. For example the method below returns null whenever it is impossible to return the correct value:

public String getSecondElement(List<String> list) {
    if (list.size() >= 2)
        return list.get(1);
    else
        return null;
}

While it may be convenient, there is a risk associated with this solution. The returned null value may be passed to some other part of the code and cause a NullPointerException there. Additionally, the place where that exception is raised may be very distant from the place where the original problem occurred, which makes it much harder to find the root cause.

If we raised an exception (even an unchecked one) at the first place we notice the problem (in our case in the getSecondElement() method), it would be much easier to find the root cause and fix it.
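A version which fails fast could look like this (note the descriptive message, as recommended earlier):

public String getSecondElement(List<String> list) {
    if (list.size() < 2)
        throw new IllegalArgumentException("List has fewer than 2 elements: " + list.size());
    return list.get(1);
}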

Handling exceptions early

Some programmers feel obliged to catch and handle every exception in the same place (a method or a class) where it is raised. Usually, at this place we don’t have enough knowledge about the bigger operation this method or class is a part of and therefore we cannot handle the exceptions properly. If we are unsure how to handle a particular exception, it is usually much better to pass it up to the caller because higher-level methods have more knowledge about the context and can better revert or retry the operation or inform the user about the error.

Ignoring exceptions

In my opinion the worst thing we could do with exceptions is ignoring them:

public void loadDrumKit(String name) {
    try {
        // here comes code for loading from file
    } catch (IOException e) {
        // ignore it - we can use the old drum kit
    }
}

When the exception sooner or later happens, we will have no information about why the application started to misbehave. As an absolute minimum we should log the caught exception with the full stack trace so that we can find the root cause after checking the logs. The best way to handle the exception would be to either pass it up to the caller or clearly inform the user about the problem (of course, additional logging is welcome).
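A minimal improvement over silently ignoring the exception, sketched here with java.util.logging (the LOGGER field is assumed to exist in the class):

public void loadDrumKit(String name) {
    try {
        // here comes code for loading from file
    } catch (IOException e) {
        // at the very least keep the stack trace in the logs
        LOGGER.log(Level.WARNING, "Could not load drum kit: " + name, e);
    }
}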

Conclusion

Handling of exceptions is never easy but still we should not misuse them and take short-cuts just because it is easier. Otherwise, it may affect the application stability and predictability which will reduce the overall customer satisfaction. Of course, the things written above are just guidelines for exception handling and there are situations where it is completely OK (or even better) to deviate from them.


Views in Java Collections Framework

A view in the Java Collections Framework is a lightweight object which implements the Collection or Map interface but is not a real collection in the traditional sense. In fact, a view does not store objects itself but references another collection, array or single object and uses it to provide the data to the user.

Empty view

To start with views, let’s take a look at the simplest ones which represent empty collections. In the Collections class you can find the emptyList(), emptySet() and emptyMap() methods which create an empty instance of List, Set or Map respectively:

List<String> clearList = Collections.emptyList();
Set<String> clearSet = Collections.emptySet();
Map<String, Integer> clearMap = Collections.emptyMap();

The returned instance is immutable, so trying to add an element to it will result in an exception. However, this kind of empty collection is very convenient if some API requires a collection but for some reason we don’t want to pass any objects to it.
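For example, assuming a method processItems(List<String> items) exists somewhere in our code (the name is made up for illustration), we can call it without building a throw-away empty list:

processItems(Collections.<String>emptyList());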

View of a single object

Very often we need a collection with only one element. Achieving this with views is very easy. We can create such a list, set or map by calling the singletonList(), singleton() or singletonMap() method respectively:

List<String> oneList = Collections.singletonList("elem");
Set<String> oneSet = Collections.singleton("elem");
Map<String, Integer> oneMap = Collections.singletonMap("one", 1);

It is also possible to create a list which contains the specified element a given number of times:

List<String> nTimesList = Collections.nCopies(9, "elem");

The created collections are immutable, similarly to the empty views. Additionally, these views do not have the overhead of a typical collection and are easier to create.

View of an array

If you have ever needed to repack elements from an array into a list just to call a single method, you may appreciate the asList() method from the Arrays class which creates a list view backed by an array:

String[] monthArray = new String[12];
List<String> monthList = Arrays.asList(monthArray);

The returned list has a fixed size, which means it is impossible to add elements to it or remove elements from it. But it is still possible to read and modify the elements inside the view using the get() and set() methods.
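Because the view is backed by the array, modifications write through to it:

monthList.set(0, "January");
// monthArray[0] is now "January"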

Since Java 5 it is also possible to use varargs in asList() method:

List<String> months = Arrays.asList("July", "August");

View of a portion of a collection

We can also create a view of a portion of a list:

List<String> nextFive = list.subList(5, 10);

The returned view contains the 5 elements of the original list between index 5, inclusive, and index 10, exclusive. The view is also mutable, so any modification of the view (e.g. adding or removing elements) will also affect the original list.
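This makes subList() handy for range operations on the original list, for example removing a whole block of elements at once:

list.subList(5, 10).clear();
// elements 5-9 are now removed from the original list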

Similar functionality is also available on SortedSet using the methods:

SortedSet<E>    headSet(E toElement);
SortedSet<E>    subSet(E fromElement, E toElement);
SortedSet<E>    tailSet(E fromElement);

and on SortedMap:

SortedMap<K,V>  headMap(K toKey);
SortedMap<K,V>  subMap(K fromKey, K toKey);
SortedMap<K,V>  tailMap(K fromKey);

There are even more such methods in NavigableSet and NavigableMap interfaces.

Views of keys, values and entries

You are probably aware of the keySet(), entrySet() and values() methods of the Map interface. They also return views instead of real collections, which makes them very efficient.

Unmodifiable views

The Collections class provides methods which create an unmodifiable view for many collection types:

List<T>         unmodifiableList(List<? extends T> list);
Map<K,V>        unmodifiableMap(Map<? extends K,? extends V> m);
Set<T>          unmodifiableSet(Set<? extends T> s);
SortedMap<K,V>  unmodifiableSortedMap(SortedMap<K,? extends V> m);
SortedSet<T>    unmodifiableSortedSet(SortedSet<T> s);

If somebody tries to add or remove elements through the view, it will throw an exception. This behaviour is very useful if we want to ensure that a given method will not modify the collection. However, it is still possible to modify the elements contained in the collection.

Creating an unmodifiable view does not make the original collection unmodifiable. It is still possible to change it using the original reference.
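A short demonstration of both points:

List<String> original = new ArrayList<>(Arrays.asList("a", "b"));
List<String> view = Collections.unmodifiableList(original);
// view.add("c") would throw UnsupportedOperationException
original.add("c"); // succeeds and is immediately visible through the view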

Synchronized views

Similarly to unmodifiable views we can create synchronized views using methods:

Collection<T>   synchronizedCollection(Collection<T> c);
List<T>         synchronizedList(List<T> list);
Map<K,V>        synchronizedMap(Map<K,V> m);
Set<T>          synchronizedSet(Set<T> s);
SortedMap<K,V>  synchronizedSortedMap(SortedMap<K,V> m);
SortedSet<T>    synchronizedSortedSet(SortedSet<T> s);

Every method of a synchronized view is synchronized, which makes the view thread-safe. Of course, you should no longer hold or use the reference to the original collection because it would allow unsynchronized access to it.
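One caveat worth remembering (documented in the Collections class itself): iterating over a synchronized view still has to be synchronized manually because an iteration consists of many separate method calls:

List<String> syncList = Collections.synchronizedList(new ArrayList<String>());
synchronized (syncList) {
    for (String element : syncList) {
        // process element
    }
}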

Conclusion

I have tried to mention the most popular and useful views in the Java Collections Framework. They greatly simplify some common tasks, reduce the number of times when repacking is needed and also reduce the amount of code to type. I hope you will find them useful too.
