Simple concurrent in-memory cache for web applications using FutureTask

March 11th, 2009

In-memory caches can be extremely useful for small web applications where you don’t want a full-blown caching system like ehCache or simply can’t afford one. I recently had such a requirement and I must say that I kind of made a mess of it. The requirement was to cache User objects so that we didn’t have to make too many calls to the database. Let me just say outright that while such a cache is not the best idea in the world, it isn’t the worst either. When you simply need to cache a few objects to reduce the load on the database in a moderately loaded web app, this implementation works just fine.

The service layer was my choice for the cache. I use the Controller -> Service -> DAO model in all my webapps, mainly because it keeps the code clean and also because it makes it much easier to manage transactions across DAOs. My cache lived in the service layer and was implemented something like this:

@Service
public class UserServiceImpl implements UserService {

	@Autowired
	private UserDao	userDao;
	private Log		logger	= LogFactory.getLog(getClass());
	private Map<Integer, User> cache = new HashMap<Integer, User>();
	
	@Override
	public User get(int userId) {
		User user = cache.get(userId);
		if (user == null) {
			user = userDao.getUserById(userId);
			cache.put(userId, user);
		}
		return user;
	}

	public UserDao getUserDao() {
		return userDao;
	}

	public void setUserDao(UserDao userDao) {
		this.userDao = userDao;
	}
}

That’s it, the simplest kind of cache. I use Spring for all my projects, so all the beans are Spring-managed and Spring takes care of autowiring the DAO into my service. But there are two problems with such a cache:

  • it’s not thread-safe
  • two threads may ask for the same user from the cache, not find it and then try to load the user info from the database twice

I know “not thread-safe” implies the second point automatically, but still. Either way, this is about the worst implementation one could choose, and I did just that, until I read Java Concurrency in Practice by Brian Goetz (with Doug Lea among the co-authors). The cache implementation in the book is worth taking a look at: it’s simple, completely thread-safe and requires very little code.
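Before looking at that implementation, it’s worth noting why simply swapping the HashMap for a ConcurrentHashMap of User objects isn’t enough. A rough sketch of such a halfway version (hypothetical, just for illustration):

	// thread-safe map, but two threads that both miss the cache can still
	// load the same user from the database twice before either puts it in
	private ConcurrentMap<Integer, User> cache = new ConcurrentHashMap<Integer, User>();

	public User get(int userId) {
		User user = cache.get(userId);
		if (user == null) {
			// both threads can reach this point before either has populated
			// the map, so getUserById() may run more than once for the same id
			user = userDao.getUserById(userId);
			if (user != null) {
				// ConcurrentHashMap does not accept null values
				cache.putIfAbsent(userId, user);
			}
		}
		return user;
	}

The map itself is now safe, but the check-then-load sequence is not atomic, and that is exactly the gap the FutureTask-based version closes.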

Basic flow of the implementation:

  1. Get a FutureTask from the ConcurrentHashMap
  2. If the task doesn’t exist, create one, put it in the map and run it
  3. If the task exists, try to get the value of the User object from it
  4. If the task has finished, the User object is returned immediately
  5. If the task is still running, the thread blocks until the User object is available

The new implementation:

@Service
public class UserServiceImpl implements UserService {

	@Autowired
	private UserDao						userDao;
	private Log						logger	= LogFactory.getLog(getClass());
	private ConcurrentMap<Integer, FutureTask<User>>	cache	= new ConcurrentHashMap<Integer, FutureTask<User>>();

	@Override
	public User get(final int userId) {
		FutureTask<User> f = cache.get(userId);
		if (f == null) {
			// create a new callable
			Callable<User> callable = new Callable<User>() {

				@Override
				public User call() throws Exception {
					return userDao.getUserById(userId);
				}

			};
			FutureTask<User> ft = new FutureTask<User>(callable);
			f = cache.putIfAbsent(userId, ft);
			if (f == null) {
				f = ft;
				ft.run();
			}
		}
		try {
			return f.get();
		} catch (CancellationException e) {
			// the task was cancelled; remove it so a later call can retry
			cache.remove(userId, f);
		} catch (InterruptedException e) {
			// restore the interrupt status instead of swallowing it
			Thread.currentThread().interrupt();
		} catch (ExecutionException e) {
			// the DAO call failed; evict the task so the failure isn't cached forever
			cache.remove(userId, f);
			logger.error("Could not load user " + userId, e.getCause());
		}
		// we only get here if the lookup was cancelled, interrupted or failed
		return null;
	}
}

In the above few lines of code we have managed to implement a small cache which runs within our web app and requires no extra infrastructure. A bit of explanation as to why it’s implemented the way it is. We don’t actually store the User objects in the cache; instead we store FutureTask objects. Whenever we need a User from the cache, we fetch the FutureTask from the map and then the User object from the FutureTask. If the FutureTask for a particular user is null, we create a new FutureTask and put it in the cache. Notice that we don’t put the object into the map directly but call putIfAbsent. Because putIfAbsent is atomic, even if two threads concurrently try to insert a FutureTask for the same user, only one will succeed; the losing thread simply reaches f.get() and blocks until the User object has been loaded and is returned. No matter how many threads ask for the same user, only one call to the database is made and only one FutureTask will ever exist and execute for each user.
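To convince yourself that concurrent callers really do trigger only a single database hit, something like the following rough sketch can be used. It assumes a User(int) constructor and a UserServiceImpl constructor for injecting the DAO by hand outside of Spring, neither of which is shown above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleLoadSketch {

	public static void main(String[] args) throws Exception {
		final AtomicInteger dbCalls = new AtomicInteger();
		// counting stand-in for the real DAO; assumes UserDao declares only getUserById(int)
		UserDao countingDao = new UserDao() {
			@Override
			public User getUserById(int userId) {
				dbCalls.incrementAndGet();
				return new User(userId); // hypothetical constructor
			}
		};
		// hypothetical constructor for wiring the DAO in by hand
		final UserService service = new UserServiceImpl(countingDao);

		ExecutorService pool = Executors.newFixedThreadPool(10);
		for (int i = 0; i < 10; i++) {
			pool.submit(new Runnable() {
				public void run() {
					service.get(42); // every thread asks for the same user
				}
			});
		}
		pool.shutdown();
		pool.awaitTermination(5, TimeUnit.SECONDS);

		// with the FutureTask-based cache this should print 1
		System.out.println("DAO calls: " + dbCalls.get());
	}
}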

The exceptions themselves provide a great deal of information. If a task is cancelled and get() is called on it, a CancellationException is thrown. An ExecutionException is thrown when the underlying task itself throws an exception; the getCause() method of the ExecutionException returns the exception that the task threw.
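If you would rather propagate that underlying exception to the caller than return null, a small helper in the spirit of the book’s launderThrowable() can be used. This is just a sketch, not part of the code above:

	// rethrow the task's real exception instead of returning null
	private static RuntimeException launder(Throwable cause) {
		if (cause instanceof RuntimeException) {
			return (RuntimeException) cause;   // unchecked: hand back as-is
		}
		if (cause instanceof Error) {
			throw (Error) cause;               // errors should propagate untouched
		}
		return new IllegalStateException("Unexpected checked exception", cause);
	}

	// usage inside get():
	// } catch (ExecutionException e) {
	//     cache.remove(userId, f);   // don't keep the failed task around
	//     throw launder(e.getCause());
	// }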

It is also worth noting that because a ConcurrentHashMap is used here, multiple threads can write to the map at the same time (16 by default, one per internal lock segment), which makes it far more concurrent than a Hashtable or a synchronized HashMap, where every operation locks the entire table. That’s it, your in-memory implementation of a simple cache for your lightly loaded web app is ready. Feel free to leave a comment if you have any questions.
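One last note on that concurrency level: if your write load is heavier, it can be tuned explicitly when the map is created. A minimal sketch using the standard three-argument constructor:

	// initial capacity 256, load factor 0.75, concurrency level 32:
	// the map is split into roughly 32 internal segments, so up to 32
	// threads can write at once without contending for the same lock
	private ConcurrentMap<Integer, FutureTask<User>> cache =
			new ConcurrentHashMap<Integer, FutureTask<User>>(256, 0.75f, 32);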

  1. Florian
    December 17th, 2010 at 15:25

    Is there any way you see to handle stale items and a max size for the cache?

  2. suresh
    January 26th, 2011 at 11:07

    I have a similar kind of requirement in my project, but the caching should be time-sensitive: the cached data should expire every 4 hours. Could you please help me with how to implement time-based caching? Thanks in advance.

