Spring Cache: Introduction and use

This article will explain the cache mechanism and present some practical aspects of Spring Cache. Starting from the ground up, step by step, it will walk you through how to add annotations and set parameters to configure and start using the Spring data caching feature. It will also discuss the use and configuration of external providers (EhCache, Redis) in more detail. Read on to make sure your app is fast and efficient!

What is cache?

Cache is a mechanism that stores data that have already been used, so that they can be retrieved more quickly if they are needed again in the near future. This reduces the number of database queries and external service calls.
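The idea can be sketched in a few lines of plain Java (findUser and the dbCalls counter are made-up illustrations, not part of any API): the expensive lookup runs once per key, and every repeat is served from the map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A toy cache: the expensive call runs once per key; repeats hit the map.
class ToyCache {
    private static final Map<Integer, String> cache = new ConcurrentHashMap<>();
    static int dbCalls = 0; // counts how often the "database" is really hit

    static String findUser(int id) {
        return cache.computeIfAbsent(id, k -> {
            dbCalls++;              // simulate the slow database query
            return "user-" + k;
        });
    }
}
```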

Cache memory is employed in nearly all systems. In computers, it can be found in processors, RAM memory, as well as various mobile, online and desktop apps.

When creating an application, you can use many different caches; most focus on caching database data, with the main goal of optimising queries and making the app faster. What I would like to focus on, however, are the possible applications of the cache mechanism provided by the Spring framework, i.e. Spring Cache.

What is Spring Cache?

Spring provides support for cache abstraction, i.e. it allows you to use an abstract API to access the cache. The cache can be flexibly configured by adding relevant dependencies and annotations. If you want to use an external provider, too, all you need to do is add the relevant dependencies and configuration parameters.

What makes this library special is that it can be used at the method level in your app by means of annotations. Spring developers also made sure that the cache could be used in multi-threaded apps.

In Spring Cache, you can utilise one or several CacheManagers (cache management interfaces) at the same time. You can add data storage or cache lifetime configurations.

CacheManagers supported by Spring can be divided into ones that:  

  • rely on Spring-internal mechanisms, e.g. ConcurrentMapCacheManager, SimpleCacheManager; 
  • support external providers, e.g. CaffeineCacheManager, EhCacheCacheManager, JCacheCacheManager, RedisCacheManager.  

How to set up and use Spring Cache?

Spring Cache is easy and intuitive to set up and use. Let’s see for ourselves. My example is an app that uses Spring Boot and Maven, so we will begin the setup process by specifying the relevant dependencies in the pom.xml file.

<dependency> 
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-cache</artifactId> 
</dependency> 

If your app uses Spring only, you should use:

<dependency> 
    <groupId>org.springframework</groupId> 
    <artifactId>spring-context</artifactId> 
    <version>5.1.8.RELEASE</version> 
</dependency> 

As I mentioned before, the setup and use of Spring Cache is done through annotations. Of course, you can also fall back on the older method, i.e. configuring it in an XML file, but I think annotations are much more intuitive and easier to understand than adding line after line of XML.

@EnableCaching and @Cacheable annotations

The first annotation we should use is @EnableCaching in our configuration class:

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        Cache postsCache = new ConcurrentMapCache("posts");
        cacheManager.setCaches(Arrays.asList(postsCache));
        return cacheManager;
    }
}

This will enable data to be cached. In Spring Cache, this usually happens in the app service layer, but sometimes cache is added at the data access layer (e.g. if several services use the same methods). 

We will begin our adventure with caching by adding @Cacheable to selected methods. This annotation is used on methods that retrieve data but don’t modify it.

@Cacheable has several parameters that can and should be set up.

@Cacheable(cacheNames = "PostsWithComments", key = "#page")
public List<Post> getPostsWithComments(int page, Sort.Direction sort) {
    List<Post> allPosts = postRepository.findAllPosts(
            PageRequest.of(page, PAGE_SIZE, Sort.by(sort, "id")));
    List<Long> ids = allPosts.stream()
            .map(Post::getId)
            .collect(Collectors.toList());
    List<Comment> comments = commentRepository.findAllByPostIdIn(ids);
    allPosts.forEach(
            post -> post.setComment(extractComments(comments, post.getId())));
    return allPosts;
}

@Cacheable(cacheNames = "SinglePost")
public Post getSinglePost(long id) {
    return postRepository.findById(id).orElseThrow();
}

cacheNames parameter

One parameter you need to set is cacheNames (or its alias, value). It defines the name of the cache, which is useful, e.g., whenever you want to configure an external provider or several different CacheManagers.

key parameter

Another parameter is key, the key under which an entry is stored in your cache. The default key generator in Spring Cache is SimpleKeyGenerator, which uses all of a method’s parameters as the key. However, sometimes you don’t need all parameters; you can define a specific key using the Spring Expression Language (SpEL), as shown in this example: key = "#page".

You can also call a method in the key expression, as long as it is available on the object in question, e.g. key = "#post.hashCode()".

keyGenerator parameter

Other parameters of @Cacheable include keyGenerator, which allows you to define your own key generator; cacheManager; and condition, where you can use SpEL to specify when the cache should store data (e.g. condition = "#name.length() < 32"); and several more.

As for the keyGenerator: Spring Cache’s default key generator can create problems when, e.g., two methods use the same cache and take the same parameter values, which by default serve as the key.

This can cause collisions: an entry cached by one method may be overwritten or returned by the other. This is why it is good practice to define your own key or add your own keyGenerator.
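The collision is easy to reproduce in plain Java, without any Spring machinery. In this sketch (getPost and getAuthor are made-up stand-ins for two annotated service methods), both methods share one cache and use only the parameter value as the key, so the second call silently returns the first method’s result:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Two "methods" sharing one cache whose key is only the parameter value,
// mimicking the default SimpleKeyGenerator behaviour.
class KeyCollisionDemo {
    private static final ConcurrentMap<Object, Object> sharedCache = new ConcurrentHashMap<>();

    static Object getPost(long id) {
        return sharedCache.computeIfAbsent(id, k -> "post-" + k);
    }

    static Object getAuthor(long id) {
        // same parameter value -> same key -> silently returns the cached post
        return sharedCache.computeIfAbsent(id, k -> "author-" + k);
    }
}
```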

To do so, you need to create your own class that overrides the default key generation. It must implement the KeyGenerator interface, whose generate() method defines how keys are assigned.

public class CustomKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object target, Method method, Object... params) {
        return target.getClass().getSimpleName() + "_"
                + method.getName() + "_"
                + StringUtils.arrayToDelimitedString(params, "_");
    }
}

You can use your generator in two ways; one is to declare a bean in your configuration class:

@Configuration
public class CacheConfig extends CachingConfigurerSupport {

    @Bean("customKeyGenerator")
    public KeyGenerator keyGenerator() {
        return new CustomKeyGenerator();
    }
}

Note that the configuration class extends CachingConfigurerSupport, which allows you to register your bean as the default keyGenerator; in the same way, you can also provide your own cacheManager or cacheResolver.

The other way is to add the relevant parameters on a method that uses the @Cacheable annotation:  

@Override
@Cacheable(value = "authors", keyGenerator = "customKeyGenerator")
public List<AuthorDto> getAll() {
    return authorRepository.findAll().stream()
            .map(AuthorMapper::toDto)
            .collect(Collectors.toList());
}

With your own keyGenerator in place, you can still use the default key generator elsewhere: each method may specify a different key generator, so you are not limited to just one.

How does Spring Cache work?

Now, let’s look at how Spring Cache works. The Spring Cache mechanism is based on aspects (Aspect-Oriented Programming): a dedicated aspect is responsible for handling the cache.

It reacts to an annotation, in this case @Cacheable, on a given method, and creates a dedicated cache. The class responsible for the whole action is CacheInterceptor, which extends the CacheAspectSupport class and implements MethodInterceptor.

The entire process and mechanism is not easy to explain, but one thing you should know is that CacheInterceptor simply calls the relevant superclass methods in the correct order. 
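Conceptually, though, the interceptor’s job around each annotated method boils down to a check-call-store sequence. Here is a hand-rolled sketch of that order (no real AOP involved; invokeWithCache is a made-up helper, not Spring API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A hand-rolled sketch of what the caching aspect does around a method call:
// look the key up first; only on a miss invoke the real method and store the result.
class CacheAroundAdvice {
    private static final Map<Object, Object> cache = new ConcurrentHashMap<>();
    static int realInvocations = 0;

    static Object invokeWithCache(Object key, Function<Object, Object> realMethod) {
        Object cached = cache.get(key);
        if (cached != null) {
            return cached;                      // cache hit: the real method is skipped
        }
        Object result = realMethod.apply(key);  // cache miss: call through
        realInvocations++;
        cache.put(key, result);                 // store for next time
        return result;
    }
}
```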

If you want to know how this works in detail, I encourage you to read the Spring documentation – this is probably the best source of information.

I already mentioned that the default cache manager in Spring Cache is ConcurrentMapCacheManager. It keeps its caches in a ConcurrentHashMap, and each individual cache is a ConcurrentMapCache. That class in turn saves its entries in a ConcurrentHashMap held in its store field.

public class ConcurrentMapCacheManager implements CacheManager, BeanClassLoaderAware {

    private final ConcurrentMap<String, Cache> cacheMap =
            new ConcurrentHashMap<>(16);

    private boolean dynamic = true;
    private boolean allowNullValues = true;
    private boolean storeByValue = false;

    @Nullable
    private SerializationDelegate serialization;

    public ConcurrentMapCacheManager() {
    }

    // ...
}

public class ConcurrentMapCache extends AbstractValueAdaptingCache {

    private final String name;
    private final ConcurrentMap<Object, Object> store;

    @Nullable
    private final SerializationDelegate serialization;

    public ConcurrentMapCache(String name) {
        this(name, new ConcurrentHashMap<>(256), true);
    }

    public ConcurrentMapCache(String name, boolean allowNullValues) {
        this(name, new ConcurrentHashMap<>(256), allowNullValues);
    }

    // ...
}

Essentially, this is what makes your cache work. Unfortunately, a default cache manager does not give you much leeway to manage your cache: you can’t decide to clear it after a certain time, you can’t set a data storage limit, so all the cache management operations are on your side and need to be performed manually. 
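For instance, if you wanted entries to disappear after a while with the default manager, you would have to schedule the clearing yourself. A minimal sketch of such hand-rolled eviction (the cache map and the scheduling helper are illustrative, not Spring API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hand-rolled eviction for a ConcurrentHashMap-backed cache: since the
// default manager has no TTL support, clearing must be done by the application.
class ManualEviction {
    // the same kind of store ConcurrentMapCache uses internally
    static final ConcurrentMap<Object, Object> posts = new ConcurrentHashMap<>();

    // Clears the whole cache every `periodSeconds` seconds.
    static ScheduledExecutorService scheduleClear(long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(posts::clear, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```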

In this case, what you can do is use additional Spring Cache annotations.

@CachePut annotation

One such annotation worth mentioning is @CachePut. The annotation is added on methods that modify data. If a method with this annotation is called, it will update a cache entry; this guarantees you will always have the latest modifications available for use. 

Just like with the @Cacheable annotation, it is important to add parameters such as cacheNames and key.

In this example, we want the id to be our key, so we must extract it from result (a special SpEL variable that refers to the object returned by the method; here, result is postEdited):

@Transactional
@CachePut(cacheNames = "SinglePost", key = "#result.id")
public Post editPost(Post post) {
    Post postEdited = postRepository.findById(post.getId()).orElseThrow();
    postEdited.setTitle(post.getTitle());
    postEdited.setContent(post.getContent());
    return postEdited;
}

@CacheEvict annotation

The last annotation I would like to mention here is @CacheEvict, which allows you to clear your cache, and is added on methods that delete data. 

This means that whenever a method that deletes some data is called, your cache will also be cleared of that entry. This helps save cache storage space, because no redundant data is kept; it also helps keep the main data source consistent with your cache.

Usually, adding the cache name and key is obligatory, but the key can be skipped here, because it defaults to the id method parameter:

@CacheEvict(cacheNames = "SinglePost")
public void deletePost(long id) {
    postRepository.deleteById(id);
}

As you can see, a default cache manager only allows you to control the cache to a limited extent, just enough to ensure its correct function. 

External providers

Thanks to external providers, you can make your cache more flexible, e.g. define the data volume that may be stored in the cache or specify the time after which the cache should be cleared. An extra advantage of external providers is that they extend cache lifetime beyond that of the app. 

This is an option available e.g. with Redis (about which I will talk at greater length below), which is independent of the app. 

Your app can be restarted but the data will still be stored in the cache, whereas the ConcurrentHashMap is cleared whenever the app restarts. In Spring, the supported providers are listed in the CacheConfigurations class:

final class CacheConfigurations {

    private static final Map<CacheType, Class<?>> MAPPINGS;

    static {
        Map<CacheType, Class<?>> mappings = new EnumMap<>(CacheType.class);
        mappings.put(CacheType.GENERIC, GenericCacheConfiguration.class);
        mappings.put(CacheType.EHCACHE, EhCacheCacheConfiguration.class);
        mappings.put(CacheType.HAZELCAST, HazelcastCacheConfiguration.class);
        mappings.put(CacheType.INFINISPAN, InfinispanCacheConfiguration.class);
        mappings.put(CacheType.JCACHE, JCacheCacheConfiguration.class);
        mappings.put(CacheType.COUCHBASE, CouchbaseCacheConfiguration.class);
        mappings.put(CacheType.REDIS, RedisCacheConfiguration.class);
        mappings.put(CacheType.CAFFEINE, CaffeineCacheConfiguration.class);
        mappings.put(CacheType.SIMPLE, SimpleCacheConfiguration.class);
        mappings.put(CacheType.NONE, NoOpCacheConfiguration.class);
        MAPPINGS = Collections.unmodifiableMap(mappings);
    }

    private CacheConfigurations() {
    }

    // ...
}
Since the default cache manager does not give you much freedom to configure your cache, let’s use an external provider, set it up to match your needs, and tap its potential to manage stored data more easily and efficiently.

How to add an external provider?

In our example, we will use EhCache, which offers three data storage methods: 

  • MemoryStore – on-heap storage, i.e. data is stored in the JVM heap, with the downside that the data is subject to the Garbage Collector;
  • OffHeapStore – off-heap storage; the Garbage Collector no longer touches the data, but access is slower than with heap storage;
  • DiskStore – data is stored on disk.

In addition, EhCache utilises the Least Recently Used (LRU) algorithm, which means that the least recently used elements are the first to be deleted from the cache whenever it fills up.
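The LRU policy itself fits in a few lines of plain Java via LinkedHashMap’s access-order mode; this toy LruCache only illustrates the eviction rule, not how EhCache actually implements it:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: an access-ordered LinkedHashMap that drops the
// least recently used entry once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```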

EhCache configuration

Let’s move on to EhCache configuration and its basic applications. The first thing you need to do is add dependencies in your pom.xml file: 

<dependency> 
    <groupId>org.ehcache</groupId> 
    <artifactId>ehcache</artifactId> 
    <version>3.5.3</version> 
</dependency> 
<dependency> 
    <groupId>javax.cache</groupId> 
    <artifactId>cache-api</artifactId> 
    <version>1.1.1</version> 
</dependency> 

And add a new property in application.properties:

spring.cache.jcache.config=classpath:ehcache.xml

This property points to the EhCache configuration file, which needs to be added next. In our example, the file is called ehcache.xml and starts like this:

<config xmlns="http://www.ehcache.org/v3"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:jsr107="http://www.ehcache.org/v3/jsr107"
        xsi:schemaLocation="http://www.ehcache.org/v3
                            https://www.ehcache.org/schema/ehcache-core-3.0.xsd">

    <!-- cache templates and cache definitions go here -->

</config>

The whole cache configuration will now proceed from here. EhCache allows you to create a specific template you can use in several caches. 

In our example, we have a default template, in which one setting is expiry, where you can define the time after which your entry should be deleted from the cache (ttl) or the time after which an entry should be removed if it is not accessed (tti).

The next thing to set up is the heap, or the number of elements to be stored in the cache.

In this case, you can choose from more configuration options; to find out what they are, you can read the documentation. I want to focus on just the basic features of this provider here.

The next stage is to apply your template to a specific cache with <cache alias="..." uses-template="...">, where you must provide the cache name (this is why adding a name in the @Cacheable annotation is so important!) and the template to use. Of course, you can also configure one specific cache separately, such as SinglePost.
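Putting those pieces together, a minimal ehcache.xml along these lines is one way to sketch it (the 60-second ttl and 100-entry heap are arbitrary example values; check the element names against the EhCache 3 schema for your version):

```xml
<config xmlns="http://www.ehcache.org/v3">

    <!-- reusable template: entries expire after 60 seconds, at most 100 held on the heap -->
    <cache-template name="default">
        <expiry>
            <ttl unit="seconds">60</ttl>
        </expiry>
        <heap unit="entries">100</heap>
    </cache-template>

    <!-- the alias must match the name used in @Cacheable(cacheNames = "SinglePost") -->
    <cache alias="SinglePost" uses-template="default"/>

</config>
```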

Any alternatives to EhCache?

Another external provider worth mentioning here is Redis, an in-memory key-value store that is increasingly the default choice for data caching. It serves as a data store that can be used both as a database and as a cache.

Using Redis

To use Redis, you can launch it with a Docker image or install it directly in your system. Let’s assume you have already done that and focus only on its use. 

Before you start using Redis as data cache, think about the basic configuration parameters, such as maximum storage, clearing algorithms and persistence. 

1. Maximum memory – by default, Redis comes without any storage limits in 64-bit systems, but in 32-bit systems, the storage limit is set at 3 GB. 

2. Clearing algorithms – when the cache reaches its storage limit, old data is removed to make space for the new. There are two clearing algorithms:
  • Least Recently Used (LRU) – whenever storage space fills up, the keys that were used longest ago (the oldest) will be deleted first;
  • Least Frequently Used (LFU) – Redis counts how many times a key has been used; more frequently used keys will be kept longer than those used, e.g., just once.

3. Persistence – at the start, cache memory is always empty; sometimes, for reasons beyond our control, we may lose connection or need to restart Redis, so a good idea is to use snapshots that will help us retrieve cache data.

Configuring Cache in Redis

Redis has two snapshot types:  

  • RDB – takes snapshots at specific intervals; it is up to you to define in the configuration how often this happens (e.g. in seconds) and after how many key changes. The snapshots are saved in a .rdb file. When Redis is restarted after a failure or switch-off, it will only restore the last saved snapshot, which may not always match the data immediately before the incident;
  • AOF – takes a snapshot after each save operation received by the server, which means data is always updated, and if Redis is restarted, it will restore the cache to its state before failure/switch-off.  

I encourage you to read the documentation, where the upsides and downsides of these snapshot types are discussed.
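As a sketch, the two snapshot types map to redis.conf directives along these lines (the 60-second interval and 100-key threshold are arbitrary example values):

```
# RDB: write a snapshot to dump.rdb if at least 100 keys changed within 60 seconds
save 60 100

# AOF: log every write operation and replay the log on restart
appendonly yes
appendfsync everysec
```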

To configure the cache in Redis, you need to set the above parameters in the configuration file, redis.conf, which is on the Redis side, and not in your project like in EhCache.

#memory limit up to 128MB
maxmemory 128mb
#remove the last recently used (LRU) keys first
maxmemory-policy allkeys-lru

In this example, max. data storage limit was set to 128 MB and the LRU clearing algorithm was configured.

Of course, configuration should begin with adding dependencies in pom.xml:

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

The next step is to add properties in application.properties:

spring.cache.type=redis
spring.redis.host=localhost
spring.redis.port=6379

These settings will define the location of the cache service. 

The cache is used in the same way as with the default cacheManager in Spring Cache. Put simply, you use the annotations discussed above: @EnableCaching (in your configuration class), @Cacheable, @CachePut, @CacheEvict. In this example, each annotation also has its key parameter defined explicitly using SpEL:

@Cacheable(value = "author", key = "#id")
@GetMapping(value = "/author/{id}")
public ResponseEntity<AuthorDto> get(@PathVariable long id) throws NotFoundException {
    log.info("User tries to get author with id: {}", id);
    AuthorDto authorDto = authorService.get(id);
    return new ResponseEntity<>(authorDto, HttpStatus.OK);
}

@CachePut(value = "author", key = "#id")
@PutMapping(value = "/author/{id}")
public ResponseEntity<AuthorDto> update(@Valid @RequestBody AuthorDto authorDto, @PathVariable long id) throws NotFoundException {
    log.info("User tries to update author with id: {}", id);
    authorService.update(id, authorDto);
    return new ResponseEntity<>(authorDto, HttpStatus.OK);
}

Redis also allows you to track various statistics, such as:
  • hit/miss ratio – the ratio of key hits to key misses;
  • latency – the maximum time between request and response;
  • evicted keys – keys deleted after maximum storage capacity has been reached.

Conclusion

As you can see, Spring Cache is an extremely useful tool for developers looking for more effective data access management in Spring applications.

Using this mechanism in the right app layers can significantly speed up data reads and reduce the number of database queries, which ultimately improves app speed and performance.

It is worth stressing that Spring Cache is very flexible and can be integrated with a variety of providers, which means it is a versatile tool for anyone who wants to optimise and streamline their systems.