How to delete a file in Java

File management (good old CRUD: create, read, update, delete) is quite a common operation in software development. In this short post I would like to present two ways of removing files in Java.

Method available in every Java version

Every Java version provides the delete() method in the File class, which can be used to delete a file:

File filePath = new File("SomeFileToDelete.txt");
boolean success = filePath.delete();

The delete() method returns a boolean value indicating whether the file was removed successfully. If the file did not exist before the call, the method returns false.

This method can also delete an empty directory. If the directory does not exist before the call or is not empty, the method returns false.

It is important to note that this method does not throw any exception in case of failure (except SecurityException). Additionally, it provides no way to tell why the delete operation failed.
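A minimal sketch that creates a file, deletes it, and then tries again to show the opaque false result:

```java
import;
import;

public class DeleteWithFile {
    public static void main(String[] args) throws IOException {
        File filePath = new File("SomeFileToDelete.txt");
        filePath.createNewFile();             // make sure there is something to delete
        boolean success = filePath.delete();  // true only if the file was actually removed
        System.out.println(success ? "deleted" : "delete failed");
        // a second delete returns false: the file is already gone,
        // and there is no exception or reason code to tell us why
        System.out.println(filePath.delete());
    }
}
```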

New method since Java 7

Because of the above limitations, a new static method delete() in the Files class was introduced in Java 7:

Path filePath = Paths.get("SomeFileToDelete.txt");
Files.delete(filePath);

The static method Files.delete() deletes a file, empty directory or a link (not the file pointed to). The real improvement over the previous method is that it properly utilizes exceptions and reports more information about the root cause if the file/directory/link cannot be removed for some reason. The following exceptions can be reported:

  • NoSuchFileException if the file/directory/link does not exist
  • DirectoryNotEmptyException if the file is a directory and is not empty
  • IOException if an IO error occurs (e.g. missing file permissions)
  • SecurityException if the operation is not allowed by the SecurityManager

There is an additional method, Files.deleteIfExists(), which also deletes the file but does not throw NoSuchFileException if the file does not exist. This method can still throw the other exceptions listed above to indicate an error.
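A minimal sketch contrasting the two methods, assuming SomeFileToDelete.txt does not exist in the working directory:

```java
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteWithFiles {
    public static void main(String[] args) throws Exception {
        Path filePath = Paths.get("SomeFileToDelete.txt");
        try {
            Files.delete(filePath);           // throws when the file is missing
        } catch (NoSuchFileException e) {
            System.out.println("no such file: " + e.getFile());
        }
        // deleteIfExists swallows only the "missing file" case
        System.out.println("removed = " + Files.deleteIfExists(filePath));
    }
}
```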

Common problems

Sometimes the delete operation may fail with the following error message:

The process cannot access the file because it is being used by another process

This error is common on Windows and means that some process or application is still using the file (e.g. reading from or writing to it). Therefore, the Windows operating system blocks the file removal. In order to remove the file successfully, it needs to be closed first.

Posted in Java, Uncategorized

Mapping a map of simple types in JPA using @ElementCollection

In the previous post I showed how to easily map a collection (e.g. Set or List) of simple types in JPA. Today I would like to present how to achieve something similar for Java maps like HashMap or TreeMap.

Mapping a map

Assume that there is a requirement to keep a quantity for each item sold in a store. If the item can be uniquely identified by its name alone (no additional information about the item is needed), it can be defined as a plain Map<String, Integer> with a few JPA annotations:

public class Store {
  protected long id;

  @ElementCollection
  @CollectionTable(name = "item_quantity",
        joinColumns = { @JoinColumn(name = "store_id") })
  @MapKeyColumn(name = "item")
  @Column(name = "quantity")
  protected Map<String, Integer> itemQuantityMap = new HashMap<>();
}

The first annotation, @ElementCollection, is required and informs the JPA provider that a map of value types is used. The rest of the annotations are optional and are used to customize the schema.

The @CollectionTable annotation specifies the name of the DB table where the map keys and values are stored. In this case the new table is named item_quantity. This new table refers to the parent entity Store through a foreign key. The definition of the column holding the foreign key is specified in the joinColumns parameter.

The @Column annotation defines the properties (e.g. name) of the column in the new table where the values of the Map are stored.

The @MapKeyColumn does something similar to @Column but for the keys of the map. It defines the properties of the column where the keys of the Map are stored.

Enumerated types in key

The key of the map can be of an enumerated type. In this case the JPA provider stores the ordinal of the enum in the database table by default. If the name of the enum should be used instead, it can be changed using the @MapKeyEnumerated annotation like this:

   @ElementCollection
   @MapKeyEnumerated(EnumType.STRING)
   protected Map<TypeEnum, Integer> map;

In practice, @MapKeyEnumerated can be treated as the equivalent of @Enumerated for map keys.

Date types in key

If the key of the map represents a date or time (e.g. java.util.Date), the annotation @MapKeyTemporal is required. This annotation is the equivalent of @Temporal for map keys.
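For example, a sketch with a java.util.Date key (the field and column names are made up for illustration):

```java
@ElementCollection
@MapKeyTemporal(TemporalType.DATE)      // store only the date part of the key
@MapKeyColumn(name = "sale_date")
@Column(name = "total_quantity")
protected Map<Date, Integer> quantityByDay = new HashMap<>();
```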

Posted in Hibernate, JPA

Mapping collection of simple type in JPA using @ElementCollection

The JPA framework provides good support for mapping collections of value types. The value types can be either simple types like Integer or String, or custom embeddable types. In this short post I would like to present the two most popular mappings with simple types.

Mapping a set

Assume that there is a requirement to store a collection of unique names in an entity. In JPA it can be as simple as defining a plain Set of String in the entity class and adding a few annotations:

public class Author {
  protected long id;

  @ElementCollection
  @CollectionTable(name = "author_name",
        joinColumns = { @JoinColumn(name = "author_id") })
  @Column(name = "name")
  protected Set<String> names = new HashSet<>();
}

The first annotation, @ElementCollection, is required and informs the JPA provider that a collection of value types is used. The rest of the annotations are optional and are used to customize the schema.

The @CollectionTable annotation specifies the name of the DB table where the collection values are stored. In this case the new table is named author_name. This new table refers to the parent entity Author through a foreign key. The definition of the column holding the foreign key is specified in the joinColumns parameter.

The @Column annotation defines the properties (e.g. name) of the column in the new table where the values of the Set are stored.

Mapping a list

Common Set implementations (HashSet and TreeSet) in Java do not allow duplicates and do not preserve the insertion order. If one of these features is required, a List can be used instead of a Set:

public class Author {
  protected long id;

  @ElementCollection
  @CollectionTable(name = "author_name",
        joinColumns = { @JoinColumn(name = "author_id") })
  @Column(name = "name")
  protected List<String> names = new ArrayList<>();
}

Actually, the example is almost exactly the same as above. The list can contain duplicates and they will be persisted to the database.

Preserving order in a list

Even if you store items in a list in a particular order, the order will not be restored when the entity is loaded again from the database. In fact, the order of the items in the list is DBMS-dependent. To overcome this, we can add an extra annotation:

  @OrderColumn(name = "order_idx")   // optional; preserves order of the list

The annotation defines the new column in the collection table which is used for ordering only. JPA provider (e.g. Hibernate) manages this column internally. On persist or update it updates the column with the position of the item and on retrieve it uses the column to order items.

Preserving the order using @OrderColumn may seem like a good idea but in practice it may cause many problems. The order may suit one application view while other views may need a different ordering. The second problem is that keeping the order reduces performance, because the JPA provider has to internally read and update values in the order column. Additionally, the order is usually not part of the application data/domain, so there may be no good reason to keep it in the database.

Due to the above issues it is good to weigh the advantages and disadvantages in each particular case before using @OrderColumn.

Posted in Database, JPA

Importing WSDL with Java and Maven

SOAP web services are often used in commercial software. If we plan to use an existing SOAP web service, we should receive a WSDL file which defines the contract between the web service and its clients. This contract defines at least: the methods provided by the web service, the arguments of each method and their types, the exception specifications for the methods, and definitions of additional XSD types.

The JDK provides the wsimport executable, which can generate Java source files based on the information in the WSDL file. In practice we use a build tool to do this automatically. In this post I would like to show how to import a WSDL file in a Maven project.

Storing WSDL file

After receiving the WSDL file we should put it in a location accessible to Maven. Usually the WSDL file is placed under the src/main/resources folder or one of its subdirectories.

In our case the situation is more complicated because we have two files. One is the WSDL file periodictableaccess.wsdl that we want to import. The second is the XSD file periodictablebase.xsd, which is imported inside the WSDL file like this:

<xsd:import schemaLocation="periodictablebase.xsd" namespace=""></xsd:import>

With such an import, the WSDL file and its XSD file should be placed in the same directory. We will choose the src/main/resources/wsdl directory.

Configuring JAX-WS Maven plugin

To import the WSDL file we have to put the following plugin definition in the pom.xml file:
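A minimal sketch of such a plugin definition, reconstructed from the parameters discussed below (the packageName value is illustrative; the execution ID and plugin version are taken from the build output shown later):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <version>2.4.1</version>
  <executions>
    <execution>
      <id>periodictableaccessws</id>
      <goals>
        <goal>wsimport</goal>
      </goals>
      <configuration>
        <wsdlDirectory>${project.basedir}/src/main/resources/wsdl</wsdlDirectory>
        <wsdlFiles>
          <wsdlFile>periodictableaccess.wsdl</wsdlFile>
        </wsdlFiles>
        <packageName>com.example.periodictable</packageName>
        <vmArgs>
          <vmArg>-Djavax.xml.accessExternalSchema=all</vmArg>
        </vmArgs>
      </configuration>
    </execution>
  </executions>
</plugin>
```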


Let’s go over it step by step. With groupId, artifactId and version we reference the Maven plugin and its version that we want to use. In this case it is jaxws-maven-plugin from Codehaus.

Next we define the executions element. This element can contain multiple execution elements – one for each independent wsimport execution. In our case we have only one WSDL file and we execute wsimport only once.

Inside the execution element we define a unique ID for the execution. Obviously, if you have multiple execution elements, they should have different IDs.

We should also define the goals that we want to use. There are four of them:

  • wsimport – generates JAX-WS artifacts used by JAX-WS clients and services from a WSDL file
  • wsimport-test – the same as wsimport but for tests
  • wsgen – generates JAX-WS artifacts used by JAX-WS clients and services from a service endpoint implementation class
  • wsgen-test – the same as wsgen but for tests

Finally, we should provide the configuration used to generate the JAX-WS artifacts. The wsdlDirectory element defines the directory where the WSDL files are placed, and wsdlFiles specifies exactly which WSDL files from this directory should be imported. We can specify more than one WSDL file if needed.

The packageName element defines the Java package in which the generated artifacts should be placed. It is advisable to put the generated artifacts in their own separate package to prevent name collisions.

Normally, this would be everything, but in our case the WSDL file is not standalone and imports an XML schema from a separate file. Since Java 8 such access is subject to restrictions and may result in the following error during the build:

schema_reference: Failed to read schema document 'periodictablebase.xsd', because 'file' access is not allowed due to restriction set by the accessExternalSchema property. 

To prevent the error we should define the javax.xml.accessExternalSchema property. We can do so in different ways. In this case we pass the property definition as an argument to the wsimport execution.

Here we used only a small subset of the parameters provided by the jaxws-maven-plugin Maven plugin. You can read more about the plugin and its parameters in the official documentation.

Please notice that we do not specify the path to the XSD file in the pom.xml file. It is enough that the WSDL file knows the location of the XSD file.

Running JAX-WS Maven plugin

When you run the build in Maven (e.g. mvn install), you should notice the following plugin execution in the output:

--- jaxws-maven-plugin:2.4.1:wsimport (periodictableaccessws) @ com.example.wsimp ---
Processing: file:/home/robert/informatyka/tests/com.example.wsimp/src/main/resources/wsdl/periodictableaccess.wsdl
jaxws:wsimport args: [-keep, -s, '/home/robert/informatyka/tests/com.example.wsimp/target/generated-sources/wsimport', -d, '/home/robert/informatyka/tests/com.example.wsimp/target/classes', -encoding, UTF-8, -Xnocompile, -p,, "file:/home/robert/informatyka/tests/com.example.wsimp/src/main/resources/wsdl/periodictableaccess.wsdl"]
parsing WSDL...

Generating code...

If you take a close look, you can notice that:

  • The generated Java source files (*.java files) are put somewhere under target/generated-sources/wsimport directory.
  • The generated Java class files (*.class files) are put under target/classes directory.

The JAX-WS Maven plugin is bound to the Maven lifecycle phase generate-sources. This phase runs almost at the very beginning of the build to ensure that all generated classes are already present for the compile phase.

More information about wsimport can be found in this technote.

Importing multiple WSDL files

If you have many WSDL files to import, there are two possibilities. The easiest one is to specify multiple wsdlFile elements inside wsdlFiles in the pom.xml file. The generated classes will then be put in the same Java package, which may result in name conflicts. The better option is to create multiple execution elements in the pom.xml file. This way we can specify a separate Java package for each imported WSDL to prevent name conflicts.
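A sketch of the second option, with hypothetical WSDL file names and packages:

```xml
<executions>
  <execution>
    <id>import-service-one</id>
    <goals><goal>wsimport</goal></goals>
    <configuration>
      <wsdlFiles><wsdlFile>serviceone.wsdl</wsdlFile></wsdlFiles>
      <packageName></packageName>
    </configuration>
  </execution>
  <execution>
    <id>import-service-two</id>
    <goals><goal>wsimport</goal></goals>
    <configuration>
      <wsdlFiles><wsdlFile>servicetwo.wsdl</wsdlFile></wsdlFiles>
      <packageName>com.example.two</packageName>
    </configuration>
  </execution>
</executions>
```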

Posted in Java, Java EE, Maven, Web-Services, XML

Objects utility class in Java

Today I would like to quickly mention the java.util.Objects class. The JavaDoc documentation for this class says:

This class consists of static utility methods for operating on objects. These utilities include null-safe or null-tolerant methods for computing the hash code of an object, returning a string for an object, and comparing two objects.

That’s all. There is nothing special or difficult in this class – similar methods have been written thousands of times by developers all over the world. Yet I find this class very useful because it simplifies source code and removes unnecessary duplication. Let’s take a glimpse at its methods.

Validating input arguments

The Objects class provides three methods to verify that an argument passed to a method is not null.

static <T> T	Objects.requireNonNull(T obj)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException.

static <T> T	Objects.requireNonNull(T obj, String message)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException with the specified message.

static <T> T	Objects.requireNonNull(T obj, Supplier<String> messageSupplier)

Checks if the passed object is not null. If it is null, the method throws a NullPointerException with the message provided by the supplier. Although it looks more complicated than the previous method, it has the advantage that it creates the error message lazily – only when it is actually needed. Therefore, it may be a bit faster in certain situations.

These basic methods reduce the following three lines of code:

if (arg == null) {
    throw new NullPointerException("Argument is null.");
}

to just one:

Objects.requireNonNull(arg, "Argument is null.");

These methods are often used in the JDK source code.
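A minimal runnable sketch of the message and supplier variants:

```java
import java.util.Objects;

public class RequireNonNullDemo {
    public static void main(String[] args) {
        String arg = "value";
        // the supplier runs only when the argument is null, so here it is never invoked
        String checked = Objects.requireNonNull(arg, () -> "Argument is null.");
        System.out.println(checked);
        try {
            Objects.requireNonNull(null, "Argument is null.");
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
    }
}
```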

Null-safe output

There are also two very simple methods for converting objects to strings.

static String	Objects.toString(Object o, String nullDefault)

If the passed object is not null, returns o.toString(). Otherwise, returns nullDefault value.

static String	Objects.toString(Object o)

The same as Objects.toString(o, "null").

Null and non-null predicates

There are two self-explanatory methods:

static boolean	Objects.isNull(Object obj)
static boolean	Objects.nonNull(Object obj)

The only reason for their existence is that they can be used as predicates in the stream API like this:
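For instance, a minimal sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class NonNullFilter {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Alice", null, "Bob", null);
        // Objects::nonNull works as a ready-made Predicate for filter()
        List<String> present = names.stream()
                .filter(Objects::nonNull)
                .collect(Collectors.toList());
        System.out.println(present);
    }
}
```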

Hash code and equals

Let’s consider the following class with typical hashCode and equals methods:

class Mapping {
        private String name;    // never null
        private Integer value;  // can be null

        public Mapping(String name, Integer value) {
   = name;
            this.value = value;
        }

        public int hashCode() {
            int result = name.hashCode();
            result += 3 * (value != null ? value.hashCode() : 0);
            return result;
        }

        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (getClass() != obj.getClass()) {
                return false;
            }
            final Mapping other = (Mapping) obj;
            return name.equals(
                    && ((value == null && other.value == null) || (value != null && value.equals(other.value)));
        }
}

We can assume that name is never null, but value can sometimes be null, so we have to put some conditional code in the hashCode and equals methods. Nothing here is hard, but it takes a moment to verify that the implementation of these methods is correct.

The Objects class provides a few convenient methods to simplify the code:

static int	Objects.hashCode(Object o)

If the object is not null, returns its hash code. Otherwise, returns 0.

static int	Objects.hash(Object... values)

Generates hash code for the sequence of values. The values can contain nulls.

static boolean	Objects.equals(Object a, Object b)

Compares two values in a null-safe way. If both are not null, calls the equals method on the first argument. If both are null, returns true. Otherwise, returns false.

With the first method we can simplify the hashCode method to:

public int hashCode() {
    int result = name.hashCode();
    result += 3 * Objects.hashCode(value);
    return result;
}

and with the second we can reduce the method body to a single line:

public int hashCode() {
    return Objects.hash(name, value);
}

We can do a similar thing for equals:

public boolean equals(Object obj) {
   if (obj == null) {
       return false;
   }
   if (getClass() != obj.getClass()) {
       return false;
   }
   final Mapping other = (Mapping) obj;
   return Objects.equals(name,
        && Objects.equals(value, other.value);
}

Most of the conditional code is removed. Isn’t it simpler?


There are two additional methods in Objects class.

static boolean	Objects.deepEquals(Object a, Object b)

If both arguments are arrays, behaves like Arrays.deepEquals(). If both arguments are null, returns true. If one argument is null, returns false. Otherwise, calls equals on the first argument.

static <T> int, T b, Comparator<? super T> c)

If both arguments are the same reference (including both being null), returns 0. Otherwise, compares the arguments using the provided comparator.
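A minimal sketch of both methods:

```java
import java.util.Comparator;
import java.util.Objects;

public class CompareDemo {
    public static void main(String[] args) {
        Comparator<String> byLength = Comparator.comparingInt(String::length);
        // same reference (here: both null), so the comparator is not even called
        System.out.println(, null, byLength));
        // otherwise delegates to the comparator
        System.out.println("ab", "xyz", byLength));
        // deepEquals compares array contents, unlike plain equals
        System.out.println(Objects.deepEquals(new int[]{1, 2}, new int[]{1, 2}));
    }
}
```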


This class is a nice addition in Java 7 and reduces some unnecessary boilerplate. New versions of IDEs can even use this class to generate simpler and more concise hashCode and equals methods than before.

We can go even further and generate these methods (and more) on the fly using Lombok. But that is a topic for a separate post.

Posted in Java

Default and static methods in interfaces in Java 8

Before Java 8, interfaces could only contain static fields (usually simple constants) and abstract methods. Java 8 added the ability to define concrete (default and static) methods in interfaces. This new language feature is used extensively in the Java core packages.

Static methods

A static method in an interface looks the same as in a normal class:

public interface Checker {
    public static boolean isNull(Object obj) {
        return obj == null;
    }
}

The main reason to add static methods to interfaces is to keep related utility methods in one place so that they can be easily used by implementing classes, by default methods in subinterfaces, or by users of the interface.
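A runnable sketch (the Checker interface is nested in a demo class so the snippet is self-contained):

```java
public class StaticInterfaceMethodDemo {
    interface Checker {
        static boolean isNull(Object obj) {
            return obj == null;
        }
    }

    public static void main(String[] args) {
        // static interface methods are invoked through the interface name
        System.out.println(Checker.isNull(null));
        System.out.println(Checker.isNull("x"));
    }
}
```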

Default methods

A default method looks like a typical class method but is defined inside an interface and carries the default specifier. Let’s look at the Collection.removeIf() default method:

default boolean removeIf(Predicate<? super E> filter) {
        boolean removed = false;
        final Iterator<E> each = iterator();
        while (each.hasNext()) {
            if (filter.test( {
                each.remove();
                removed = true;
            }
        }
        return removed;
}

Default method can access everything that is defined within this interface or is inherited by this interface, including:

  • reference to this
  • all abstract methods defined in this or super-interfaces
  • all default or static methods defined in this or super-interfaces
  • all static fields defined in this or super-interfaces

Default methods allow adding new functionality to existing interfaces without breaking existing implementations – they preserve backwards compatibility. A class that implements an interface with a default method gets the default implementation from the interface, but it can still override it.
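A minimal sketch showing removeIf in action on an ordinary ArrayList:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveIfDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        // removeIf is a default method, so every existing List implementation
        // gained it in Java 8 without any change to those classes
        boolean removed = numbers.removeIf(n -> n % 2 == 0);
        System.out.println(removed);
        System.out.println(numbers);
    }
}
```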

Default and static methods in functional interfaces

A functional interface can contain multiple default and static methods and still be functional. In fact, default and static methods are not abstract and do not count towards the limit of exactly one abstract method. Here is an example:

public interface Comparator<T> {
    int compare(T o1, T o2);

    default Comparator<T> reversed() {
        return Collections.reverseOrder(this);
    }

    public static <T> Comparator<T> nullsFirst(Comparator<? super T> comparator) {
        return new Comparators.NullComparator<>(true, comparator);
    }
}

Extending interfaces which contain default methods

If we create a new interface extending an interface which contains a default method, we have three possibilities:

  • Not mention the default method in the new interface. This way the new interface inherits the default method from the parent.
  • Override the default method by redefining it in the new interface and providing a new method body. All subclasses and subinterfaces will use the new definition of the default method.
  • Declare the default method as abstract in the new interface. This way the default method must be overridden in subclasses or subinterfaces of the new interface.

Default method ambiguity

Sometimes we may want to implement two interfaces which contain default methods with the same method signature (name, parameters, and so on):

public interface InterfaceOne {
    default void doSomething() {
        // some code
    }
}

public interface InterfaceTwo {
    default void doSomething() {
        // some code
    }
}

public class MyClass implements InterfaceOne, InterfaceTwo {
    // does not compile: doSomething() is inherited from both interfaces
}

In this rare case the compilation will fail because the Java compiler does not know which implementation of the default method it should choose for the class. To resolve the issue we have to explicitly redefine/redeclare the default method in the class. We have two possibilities here.

The first one is to simply override the default method in the class and provide a new method body:

public class MyClass implements InterfaceOne, InterfaceTwo {
    public void doSomething() {
        // some code
    }
}

Please note that we are no longer using the default keyword. We can also use the following syntax:

    InterfaceOne.super.doSomething();

to call the default implementation from one of the implemented interfaces.
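Putting the pieces together, a self-contained sketch of the resolution (interfaces nested in a demo class for brevity; the method bodies return strings so the chosen default is visible):

```java
public class DefaultMethodAmbiguityDemo {
    interface InterfaceOne {
        default String doSomething() { return "one"; }
    }

    interface InterfaceTwo {
        default String doSomething() { return "two"; }
    }

    // without this override the class would not compile
    static class MyClass implements InterfaceOne, InterfaceTwo {
        @Override
        public String doSomething() {
            // pick one inherited default explicitly
            return InterfaceOne.super.doSomething();
        }
    }

    public static void main(String[] args) {
        System.out.println(new MyClass().doSomething());
    }
}
```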

Alternatively, we can redeclare the default method in the class as abstract:

public abstract class MyClass implements InterfaceOne, InterfaceTwo {
        public abstract void doSomething();
}

As a result the class must also be made abstract. This way we somewhat “postpone” the problem, because a concrete subclass will have to redefine this default method.


Many static and default methods have been added to existing interfaces since Java 8 to simplify their usage and promote code reuse. Some of these interfaces include: Iterator, Iterable, Comparator, Collection.

Posted in Java

Database schema creation in JPA using SQL scripts

Recent versions of JPA provide a feature to automatically create database objects (like tables, sequences or indexes) and load initial data into the database on application deployment, and also to remove them after the application is undeployed.

All that is needed is to define several properties in the persistence.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="">
  <persistence-unit name="mainPU" transaction-type="JTA">
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
      <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
      <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
      <property name="javax.persistence.schema-generation.create-source" value="script"/>
      <property name="javax.persistence.schema-generation.create-script-source" value="dbscripts/create.sql"/>
      <property name="javax.persistence.schema-generation.drop-source" value="script"/>
      <property name="javax.persistence.schema-generation.drop-script-source" value="dbscripts/drop.sql"/>
      <property name="javax.persistence.sql-load-script-source" value="dbscripts/load.sql"/>
      <property name="hibernate.hbm2ddl.import_files_sql_extractor" value="org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor"/>
      <property name="hibernate.show_sql" value="true"/>
    </properties>
  </persistence-unit>
</persistence>


The property javax.persistence.schema-generation.database.action defines which action should be taken on the database when the application is deployed:

  • none – Takes no action on database. Nothing will be created or dropped.
  • create – JPA provider will create the database schema on application deployment.
  • drop – JPA provider will drop the database schema on application deployment.
  • drop-and-create – JPA provider will first drop the old database schema and then will create the database schema on application deployment.

If the property javax.persistence.schema-generation.database.action is not specified, none is assumed by default. In practice drop-and-create is very useful in simple test applications, and none in real production applications in which the database schema is created elsewhere.

The property javax.persistence.schema-generation.create-source informs the JPA provider what should be used as the source of the database schema:

  • metadata – JPA provider will use entity metadata (e.g. annotations) to generate the database schema. This is the default.
  • script – JPA provider will run provided SQL script to create database schema. The script should create tables, indexes, sequences and other necessary database artifacts.
  • metadata-then-script – The combination of metadata and then script in that order.
  • script-then-metadata – The combination of script and then metadata in that order.

Finally, the property javax.persistence.schema-generation.create-script-source specifies the location of the SQL script to run on application deployment. The location can be a file URL but usually is a relative path to the SQL script packaged in the application JAR/WAR.

The properties javax.persistence.schema-generation.drop-source and javax.persistence.schema-generation.drop-script-source have similar values and meaning as their create* counterparts, but of course they are used to drop the database schema.

There is also one additional property, javax.persistence.sql-load-script-source, which can be used to load initial data into the database tables. This SQL script is run after the database schema has been created.


By default, Hibernate requires that each SQL statement in the script occupies a single line. In short, it means that an SQL statement cannot be split into multiple lines for better readability, which is common for CREATE TABLE commands. This inconvenience can be resolved by specifying the following Hibernate-specific property:

<property name="hibernate.hbm2ddl.import_files_sql_extractor" value="org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor" />


The above properties are very useful for simple test applications which do not require the database data to survive the application undeployment. In production applications the property javax.persistence.schema-generation.database.action should be set to none to prevent the loss of data from the database in case the application is temporarily undeployed.

The sample application using these properties is available at

Posted in Hibernate, Java, Java EE, JPA