Immutable objects

Most objects in Java are mutable, which means that their state/fields can be changed after the object is created. Examples include ArrayList, Calendar and StringBuilder.

Immutable objects

An object is immutable if its state/fields cannot be changed after the object is constructed. In practice this means that:

  1. The object class must not have any mutating methods such as setters.
  2. Every object contained within this object is also immutable (or otherwise protected from modification).
  3. If the object contains a collection, it must be impossible to add new elements, remove existing elements or modify the value of any element. Every element of the collection must also be immutable (or otherwise protected from modification).
  4. Every field of the object is final and private.
  5. The object class is declared as final.
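The rules above can be sketched in a minimal example (the class below is hypothetical, written purely for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Circle {                     // rule 5: the class is declared final
    private final double radius;         // rule 4: fields are private and final
    private final List<String> tags;     // rule 3: collection protected from modification

    Circle(double radius, List<String> tags) {
        this.radius = radius;
        // defensive copy wrapped as unmodifiable, so callers cannot change our state
        this.tags = Collections.unmodifiableList(new ArrayList<>(tags));
    }

    double getRadius() { return radius; }   // rule 1: getters only, no setters
    List<String> getTags() { return tags; }
}
```

Any attempt to modify the returned list throws UnsupportedOperationException, and changes to the list passed to the constructor do not affect the object.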

The most notable examples of immutable classes are String, Integer and the other primitive-type wrappers, and LocalDateTime and the other classes from the Java 8 Date and Time API.

Benefits of immutable objects

Reliance on immutable objects is a widely accepted strategy for creating simple and reliable code. A few of the most notable benefits of immutable objects are listed below:

  1. They are simpler to construct, test, use and reason about because they can never change. The class invariant is checked once and never violated afterwards. There is no need for defensive copying.
  2. Immutable objects are excellent candidates for Map and Set keys because they cannot change value while in the collection; key modification is simply impossible.
  3. The internal state of the object stays consistent even after an exception is thrown. This is not the case for many mutable objects.
  4. Immutable objects can be easily and safely cached because they are never going to change.
  5. Immutable objects are automatically thread-safe, so the whole problem of thread synchronization disappears. This makes them particularly useful in concurrent applications.

Cost of immutable objects

Immutability may have a performance cost, because a completely new object needs to be created instead of updating an existing object in place. However, this cost is often overestimated and can be reduced by the elimination of thread synchronization, easier/faster garbage collection and specialized builders.
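As an illustration of "updating" without mutation, an immutable class can offer "with" methods that return a modified copy (a sketch; the class and method names are made up):

```java
final class Temperature {
    private final double celsius;

    Temperature(double celsius) { this.celsius = celsius; }

    double celsius() { return celsius; }

    // a "wither": returns a new object instead of updating this one in place
    Temperature withCelsius(double newValue) { return new Temperature(newValue); }
}
```

The original object is untouched; callers that still hold a reference to it see a consistent value.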


Delete directory with contents in Java

Removing an empty directory in Java is as simple as calling the File.delete() (standard IO) or Files.delete() (NIO) method. However, if the directory is not empty (for example, it contains one or more files or subdirectories), these methods will refuse to remove it. In this post I want to present a few ways to recursively remove a directory together with its contents.

Standard recursion (Java 6 and before)

The first method recursively removes files and directories from the directory tree, starting from the leaves. Because it uses the old I/O class File for operating on files and directories, this method can be used in any Java version.

void deleteDirectoryRecursionJava6(File file) throws IOException {
  if (file.isDirectory()) {
    File[] entries = file.listFiles();
    if (entries != null) {
      for (File entry : entries) {
        deleteDirectoryRecursionJava6(entry);
      }
    }
  }
  if (!file.delete()) {
    throw new IOException("Failed to delete " + file);
  }
}

Standard recursion using NIO (since Java 7)

Java 7 introduced an improved API for I/O operations (also known as NIO.2). Once we decide to use it, the first method can be changed as follows:

void deleteDirectoryRecursion(Path path) throws IOException {
  if (Files.isDirectory(path, LinkOption.NOFOLLOW_LINKS)) {
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(path)) {
      for (Path entry : entries) {
        deleteDirectoryRecursion(entry);
      }
    }
  }
  Files.delete(path);
}

Walk tree (since Java 7)

Additionally, Java 7 introduced the new method Files.walkFileTree(), which traverses the directory tree using the visitor design pattern. This method can easily be used to recursively delete a directory:

void deleteDirectoryWalkTree(Path path) throws IOException {
  FileVisitor<Path> visitor = new SimpleFileVisitor<Path>() {
            
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
      Files.delete(file);
      return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
      Files.delete(file);
      return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
      if (exc != null) {
        throw exc;
      }
      Files.delete(dir);
      return FileVisitResult.CONTINUE;
    }
  };
  Files.walkFileTree(path, visitor);
}

Streams and NIO2 (since Java 8)

Since Java 8 we can use the Files.walk() method, which the official documentation describes as follows:

Return a Stream that is lazily populated with Path by walking the file tree rooted at a given starting file. The file tree is traversed depth-first, the elements in the stream are Path objects that are obtained as if by resolving the relative path against start.

The stream has to be sorted in reverse order first, so that every directory is deleted only after its contents. The final code looks like this:

void deleteDirectoryStream(Path path) throws IOException {
  Files.walk(path)
    .sorted(Comparator.reverseOrder())
    .map(Path::toFile)
    .forEach(File::delete);
}

However, this code has two drawbacks:

  • Sorting the stream requires all stream elements to be present in memory at the same time. This may significantly increase memory consumption for deep directory trees.
  • There is no error handling because the return value of File.delete() is ignored. This can be improved by using a custom lambda inside forEach().
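One possible way (a sketch, not the only option) to address the second drawback is to keep working with Path, wrap each failure in an UncheckedIOException inside the lambda, and unwrap it back into a checked IOException:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

class DeleteWithErrors {

    // Variant of deleteDirectoryStream() that reports failures instead of ignoring them.
    static void deleteDirectoryStreamChecked(Path path) throws IOException {
        try (Stream<Path> walk = Files.walk(path)) {  // close the stream to release resources
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> {
                    try {
                        Files.delete(p);  // throws a descriptive exception on failure
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        } catch (UncheckedIOException e) {
            throw e.getCause();  // unwrap back into a checked IOException
        }
    }
}
```

Note that this still keeps all elements in memory while sorting, so the first drawback remains.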

Apache Commons IO

Finally, there is a one-liner solution for the impatient. Just add the Maven dependency:

<dependency>
   <groupId>commons-io</groupId>
   <artifactId>commons-io</artifactId>
   <version>2.6</version>
</dependency>

and call this single method:

FileUtils.deleteDirectory(file);

That’s all.

Conclusion

All of the above methods should do the job. Personally, I prefer the last one because it is the simplest to use. The source code is available on GitHub.


How to delete file in Java

File management (good old CRUD: create, read, update, delete) is a quite common operation in software development. In this short post I would like to present two ways of removing files in Java.

Method available in every Java version

Every Java version provides the delete() method in the File class, which can be used to delete a file:

File filePath = new File("SomeFileToDelete.txt");
boolean success = filePath.delete();

The delete() method returns a boolean value indicating whether the file was removed successfully. If the file did not exist before the call, the method returns false.

This method can also delete an empty directory. If the directory does not exist before the call or is not empty, the method returns false.

It is important to note that this method does not throw any exception in case of failure (except SecurityException). Additionally, it has no way to report why the delete operation failed.

New method since Java 7

Because of the above limitations, the new static method delete() in the Files class was introduced in Java 7:

Path filePath = Paths.get("SomeFileToDelete.txt");
Files.delete(filePath);

The static method Files.delete() deletes a file, an empty directory or a link (not the file it points to). The real improvement over the previous method is that it properly utilizes exceptions and reports more information about the root cause when the file/directory/link cannot be removed. The following exceptions can be thrown:

  • NoSuchFileException if the file/directory/link does not exist
  • DirectoryNotEmptyException if the file is a directory and is not empty
  • IOException if an IO error occurs (e.g. missing file permissions)
  • SecurityException if the operation is not allowed by SecurityManager

There is an additional method, Files.deleteIfExists(), which also deletes the file but does not throw NoSuchFileException if the file does not exist. This method can still throw the other exceptions above to indicate an error.
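A small sketch of the difference (the helper method is made up for illustration): Files.deleteIfExists() returns a boolean instead of throwing when the file is already gone.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class DeleteIfExistsDemo {

    // Returns true if the file was deleted,
    // false if there was nothing to delete.
    static boolean tryDelete(Path path) throws IOException {
        return Files.deleteIfExists(path);
    }
}
```

This is convenient for cleanup code where a missing file is not an error condition.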

Common problems

Sometimes the delete operation may fail with the following error message:

The process cannot access the file because it is being used by another process

This error is quite common on Windows and means that some process or application is still using the file (e.g. reading from or writing to it), so the operating system blocks the removal. In order to remove the file successfully, it needs to be closed first.
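A common way to make sure the file is closed before deletion is try-with-resources (a sketch; the method name is illustrative):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class CloseThenDelete {

    // Write to a file, then remove it.
    static void writeAndDelete(Path path) throws IOException {
        try (BufferedWriter writer = Files.newBufferedWriter(path)) {
            writer.write("temporary content");
        } // the writer is closed here, releasing the OS-level file handle
        Files.delete(path); // safe now: this JVM no longer holds the file open
    }
}
```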


Mapping a map of simple types in JPA using @ElementCollection

In the previous post I showed how to easily map a collection (e.g. Set or List) of simple types in JPA. Today I would like to present how to achieve something similar for Java maps like HashMap or TreeMap.

Mapping a map

Assume that there is a requirement to keep a quantity for each item sold in a store. If an item can be uniquely identified by its name alone (no additional information about the item is needed), the mapping can be defined as a plain Map<String, Integer> with a few JPA annotations:

@Entity
public class Store {
  @Id
  protected long id;

  @ElementCollection
  @CollectionTable(name = "item_quantity",
        joinColumns = { @JoinColumn(name = "store_id") })
  @MapKeyColumn(name = "item")
  @Column(name = "quantity")
  protected Map<String, Integer> itemQuantityMap = new HashMap<>();
}

The first annotation, @ElementCollection, is required and informs the JPA provider that a map is used. The rest of the annotations are optional and are used to customize the schema.

The @CollectionTable annotation specifies the name of the DB table where the map keys and values are stored. In this case the new table is named item_quantity. This new table refers to the parent entity Store through a foreign key. The definition of the column holding the foreign key is specified in joinColumns parameter.

The @Column annotation defines the properties (e.g. name) of the column in the new table where the values of the Map are stored.

The @MapKeyColumn does something similar to @Column but for the keys of the map. It defines the properties of the column where the keys of the Map are stored.

Enumerated types in key

The key of the map can be an enumerated type. In this case the JPA provider stores the ordinal of the enum in the database table by default. If the name of the enum should be used instead, this can be changed with the @MapKeyEnumerated annotation like this:

   ...
   @MapKeyEnumerated(EnumType.STRING)
   protected Map<TypeEnum, Integer> map;

In practice, @MapKeyEnumerated can be treated as the equivalent of @Enumerated for map keys.

Date types in key

If the key of the map represents a date or time (e.g. java.util.Date), the @MapKeyTemporal annotation is required. This annotation is the equivalent of @Temporal for map keys.
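Analogously to the enum example above, the mapping could look like this (a sketch; the field name and the chosen temporal type are illustrative):

```java
   ...
   @MapKeyTemporal(TemporalType.DATE)
   protected Map<Date, Integer> map;
```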


Mapping collection of simple type in JPA using @ElementCollection

The JPA framework provides good support for mapping collections of value types. The value types can be either simple types like Integer or String, or custom embeddable types. In this short post I would like to present the two most popular mappings with simple types.

Mapping a set

Assume that there is a requirement to store a collection of unique names in an entity. In JPA this can be as simple as defining a plain Set of String in the entity class and adding a few annotations:

@Entity
public class Author {
  @Id
  protected long id;

  @ElementCollection
  @CollectionTable(name = "author_name",
        joinColumns = { @JoinColumn(name = "author_id") })
  @Column(name = "name")
  protected Set<String> names = new HashSet<>();
}

The first annotation, @ElementCollection, is required and informs the JPA provider that a collection of value types is used. The rest of the annotations are optional and are used to customize the schema.

The @CollectionTable annotation specifies the name of the DB table where the collection values are stored. In this case the new table is named author_name. This new table refers to the parent entity Author through a foreign key. The definition of the column holding the foreign key is specified in joinColumns parameter.

The @Column annotation defines the properties (e.g. name) of the column in the new table where the values of the Set are stored.

Mapping a list

Common Set implementations in Java (HashSet and TreeSet) do not allow duplicates and do not preserve insertion order. If one of these features is required, a List can be used instead of a Set:

@Entity
public class Author {
  @Id
  protected long id;

  @ElementCollection
  @CollectionTable(name = "author_name",
        joinColumns = { @JoinColumn(name = "author_id") })
  @Column(name = "name")
  protected List<String> names = new ArrayList<>();
}

Actually, the example is almost exactly the same as above. The list can contain duplicates, and they will be persisted to the database.

Preserving order in a list

Even if you store items in a list in a particular order, the order will not be restored when the entity is loaded from the database again. In fact, the order of the items in the list is DBMS-dependent. To overcome this, we can add an extra annotation:

  @OrderColumn(name = "order_idx")   // optional; preserves order of the list

The annotation defines a new column in the collection table which is used for ordering only. The JPA provider (e.g. Hibernate) manages this column internally: on persist or update it fills the column with the position of each item, and on retrieval it uses the column to order the items.

Preserving the order using @OrderColumn may seem like a good idea, but in practice it may cause problems. The order may suit one application view, while other views may require a different ordering. The second problem is that keeping the order reduces performance, because the JPA provider has to internally read and update the values in the order column. Additionally, the order is usually not part of the application data/domain, so there may be no good reason to keep it in the database.

Due to the above issues it is good to weigh the advantages and disadvantages in each particular case before using @OrderColumn.


Importing WSDL with Java and Maven

SOAP web services are often used in commercial software. If we plan to use an existing SOAP web service, we should receive a WSDL file which defines the contract between the web service and its clients. This contract defines at least: the methods provided by the web service, the arguments of each method and their types, the exception specification for the methods, and the definitions of additional XSD types.

The JDK provides the wsimport executable, which can generate Java source files based on the information in a WSDL file. In practice we use a build tool to do this automatically. In this post I would like to show how to import a WSDL file in a Maven project.

Storing WSDL file

After receiving the WSDL file we should put it in a location accessible to Maven. Usually the WSDL file is placed under the src/main/resources folder or one of its subdirectories.

In our case the situation is more complicated because we have two files. One is the WSDL file periodictableaccess.wsdl that we want to import. The second is the XSD file periodictablebase.xsd, which is imported inside the WSDL file like this:

<xsd:import schemaLocation="periodictablebase.xsd" namespace="http://www.example.org/periodictablebase"></xsd:import>

With such an import, the WSDL file and its XSD file should be placed in the same directory. We will choose the src/main/resources/wsdl directory.

Configuring JAX-WS Maven plugin

To import the WSDL file we have to put the following plugin definition in pom.xml file:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <version>2.4.1</version>
  <executions>
    <execution>
      <id>periodictableaccessws</id>
      <goals>
        <goal>wsimport</goal>
      </goals>
      <configuration>
        <wsdlDirectory>src/main/resources/wsdl</wsdlDirectory>
        <wsdlFiles>
           <wsdlFile>periodictableaccess.wsdl</wsdlFile>
        </wsdlFiles>
        <packageName>com.example.periodictable.ws.access</packageName>
        <vmArgs>
           <vmArg>-Djavax.xml.accessExternalSchema=all</vmArg>
        </vmArgs>
      </configuration>
   </execution>
 </executions>
</plugin>

Let’s go over it step by step. With groupId, artifactId and version we reference the Maven plugin and its version that we want to use. In this case it is jaxws-maven-plugin from Codehaus.

Next we define the executions element. This element can contain multiple execution elements – one for each independent wsimport execution. In our case we have only one WSDL file, so we execute wsimport only once.

Inside the execution element we define a unique ID of the execution. Obviously, if you have multiple execution elements, they should have different IDs.

We should also define the goals that we want to use. There are four of them:

  • wsimport – generates JAX-WS artifacts used by JAX-WS clients and services from a WSDL file
  • wsimport-test – same as wsimport but for tests
  • wsgen – generates JAX-WS artifacts used by JAX-WS clients and services from a service endpoint implementation class
  • wsgen-test – same as wsgen but for tests

Finally, we should provide the configuration used to generate the JAX-WS artifacts. The wsdlDirectory element defines the directory where the WSDL files are placed, and wsdlFiles specifies exactly which WSDL files from this directory should be imported. We can specify more than one WSDL file if needed.

The packageName element defines the Java package in which the generated artifacts should be placed. It is advisable to put the generated artifacts in their own separate package to prevent name collisions.

Normally, that would be everything, but in our case the WSDL file is not standalone and imports an XML schema from a separate file. Since Java 8 such access is subject to restrictions and may result in the following error during the build:

schema_reference: Failed to read schema document 'periodictablebase.xsd', because 'file' access is not allowed due to restriction set by the accessExternalSchema property. 

To prevent the error we should define the javax.xml.accessExternalSchema property. We can do this in different ways; in this case we pass the property definition as an argument to the wsimport execution.

Here we used only a small subset of the parameters provided by jaxws-maven-plugin. You can read more about the plugin and its parameters in the official documentation.

Please notice that we do not specify the path to the XSD file in the pom.xml file. It is enough that the WSDL file knows the location of the XSD file.

Running JAX-WS Maven plugin

When you run the build in Maven (e.g. mvn install), you should notice the following plugin execution in the output:

--- jaxws-maven-plugin:2.4.1:wsimport (periodictableaccessws) @ com.example.wsimp ---
Processing: file:/home/robert/informatyka/tests/com.example.wsimp/src/main/resources/wsdl/periodictableaccess.wsdl
jaxws:wsimport args: [-keep, -s, '/home/robert/informatyka/tests/com.example.wsimp/target/generated-sources/wsimport', -d, '/home/robert/informatyka/tests/com.example.wsimp/target/classes', -encoding, UTF-8, -Xnocompile, -p, com.example.periodictable.ws.access, "file:/home/robert/informatyka/tests/com.example.wsimp/src/main/resources/wsdl/periodictableaccess.wsdl"]
parsing WSDL...

Generating code...

If you take a close look, you can notice that:

  • The generated Java source files (*.java files) are put somewhere under target/generated-sources/wsimport directory.
  • The generated Java class files (*.class files) are put under target/classes directory.

The JAX-WS Maven plugin is bound to the Maven lifecycle phase generate-sources. This phase runs almost at the very beginning of the build to ensure that all generated classes are available for the compile phase.

More information about wsimport can be found in this technote.

Importing multiple WSDL files

If you have many WSDL files to import, there are two possibilities. The easiest one is to specify multiple wsdlFile elements inside wsdlFiles in pom.xml. However, the generated classes will then be put in the same Java package, which may result in name conflicts. The better option is to create multiple execution elements in pom.xml. This way we can specify a separate Java package for each imported WSDL to prevent name conflicts.


Objects utility class in Java

Today I would like to quickly mention the java.util.Objects class. The JavaDoc documentation for this class says:

This class consists of static utility methods for operating on objects. These utilities include null-safe or null-tolerant methods for computing the hash code of an object, returning a string for an object, and comparing two objects.

That’s all. There is nothing special or difficult in this class – similar methods have already been written thousands of times by developers all over the world. Yet I find this class very useful because it simplifies source code and removes unnecessary duplication. Let’s take a glimpse at its methods.

Validating input arguments

The Objects class provides 3 methods to verify that an argument passed to a method is not null.

static <T> T	Objects.requireNonNull(T obj)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException.

static <T> T	Objects.requireNonNull(T obj, String message)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException with the specified message.

static <T> T	Objects.requireNonNull(T obj, Supplier<String> messageSupplier)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException with the message provided by the supplier. Although it looks more complicated than the previous method, it has the advantage that it creates the error message lazily – only when it is actually needed. Therefore, it may be a bit faster in certain situations.
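A short sketch of the Supplier variant (the class and method are made up for illustration):

```java
import java.util.Objects;

class Greeter {

    static String greet(String name) {
        // the message string is only built if name is actually null
        Objects.requireNonNull(name, () -> "Argument 'name' must not be null.");
        return "Hello, " + name;
    }
}
```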

These basic methods reduce the following 3 lines of code:

if (arg == null) {
    throw new NullPointerException("Argument is null.");
}

to just one:

Objects.requireNonNull(arg, "Argument is null.");

These methods are used often in JDK source code.

Null-safe output

There are also 2 very simple methods for converting objects to strings.

static String	Objects.toString(Object o, String nullDefault)

If the passed object is not null, returns o.toString(). Otherwise, returns nullDefault value.

static String	Objects.toString(Object o)

The same as Objects.toString(o, "null").
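A quick sketch of the two-argument variant in action (the helper is hypothetical):

```java
import java.util.Objects;

class ToStringDemo {

    // Never throws a NullPointerException, even for null input.
    static String describe(Object o) {
        return Objects.toString(o, "<missing>");
    }
}
```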

Null and non-null predicates

There are two self-explanatory methods:

static boolean	Objects.isNull(Object obj)
static boolean	Objects.nonNull(Object obj)

The only reason for their existence is that they can be used as predicates in the stream API, like this:

myList.stream().map(...).filter(Objects::nonNull)...
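A complete, runnable variant of the snippet above might look like this (the class and method names are made up):

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

class NonNullFilterDemo {

    // Drops null elements before further processing.
    static List<String> upperCaseNonNull(List<String> input) {
        return input.stream()
                .filter(Objects::nonNull)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }
}
```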

Hash code and equals

Let’s consider the following class with typical hashCode and equals methods:

class Mapping {
    private String name;   // never null
    private Integer value; // can be null

    public Mapping(String name, Integer value) {
        Objects.requireNonNull(name);
        this.name = name;
        this.value = value;
    }

    @Override
    public int hashCode() {
        int result = name.hashCode();
        result += 3 * (value != null ? value.hashCode() : 0);
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final Mapping other = (Mapping) obj;
        return name.equals(other.name)
                && ((value == null && other.value == null) || (value != null && value.equals(other.value)));
    }
}

We can assume that name is never null, but value can sometimes be null, so we have to put some conditional code in the hashCode and equals methods. There is nothing hard here, but it takes a moment to verify that the implementation of these methods is correct.

The Objects class provides a few convenient methods to simplify the code:

static int	Objects.hashCode(Object o)

If the object is not null, returns its hash code. Otherwise, returns 0.

static int	Objects.hash(Object... values)

Generates a hash code for a sequence of values. The values may contain nulls.

static boolean	Objects.equals(Object a, Object b)

Compares two values in a null-safe way. If both are not null, calls the equals method on the first argument. If both are null, returns true. Otherwise, returns false.

With the first method we can simplify the hashCode method to:

public int hashCode() {
    int result = name.hashCode();
    result += 3 * Objects.hashCode(value);
    return result;
}

and with the second we can reduce the body of the method to a single line:

public int hashCode() {
    return Objects.hash(name, value);
}

We can do a similar thing for equals:

public boolean equals(Object obj) {
   if (obj == null) {
       return false;
   }
   if (getClass() != obj.getClass()) {
       return false;
   }
   final Mapping other = (Mapping) obj;
   return Objects.equals(name, other.name)
        && Objects.equals(value, other.value);
}

Most of the conditional code is removed. Isn’t it simpler?

Other

There are two additional methods in the Objects class.

static boolean	Objects.deepEquals(Object a, Object b)

If both arguments are arrays, behaves like Arrays.deepEquals(). If both arguments are null, returns true. If only one argument is null, returns false. Otherwise, calls equals on the first argument.

static <T> int	Objects.compare(T a, T b, Comparator<? super T> c)

If both arguments are the same reference (including both being null), returns 0. Otherwise, compares the arguments using the provided comparator.
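A small sketch of Objects.compare() with a standard comparator (the helper method is made up):

```java
import java.util.Comparator;
import java.util.Objects;

class CompareDemo {

    // Two identical references (including two nulls) yield 0;
    // otherwise the comparator decides.
    static int compareNames(String a, String b) {
        return Objects.compare(a, b, Comparator.naturalOrder());
    }
}
```

Note that if exactly one argument is null, the comparator itself is called and Comparator.naturalOrder() would throw a NullPointerException in that case.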

Conclusion

This class is a nice addition in Java 7 and reduces some unnecessary boilerplate. Newer IDE versions can even use this class to generate simpler and more concise hashCode and equals methods than before.

We can go even further and generate these methods (and more) on the fly using Lombok. But that is a topic for a separate post.
