tl;dr If you need to speed up your C# application you probably need to add caching, and the easiest way to do that is to use the open source library I wrote called LazyCache.

Do I need to cache?

Lots of apps don't need caching because web servers and databases are fast. However many apps get to a point where performance becomes an issue, and adding caching is one easy way to get a significant performance boost. It is also usually simpler than re-architecting your solution for scalability, and cheaper than more hardware.

Caching on the web

Most apps use many forms of caching for performance: web servers set cache headers so browsers can cache files, databases cache query plans, and ASP.NET offers output caching for page/controller generated text. You can even host your entire site behind the might of Cloudflare, a global content cache. All of these are useful, and often necessary, but it can be easier to take some method(s) in your code whose results can be used more than once and cache them.

The accidental cache

You don't need to use a formal cache library or system to do caching - lots of developers do it instinctively. Have you ever saved the result of a database query in a static property for later use? Or held on to the result of a slow method in a property of a long lived object? Then you have cached it. But did you think about when to dispose of that reference so it could be garbage collected? When does that cached value become stale and need refreshing? Rather than hand roll all that logic, let someone else do it for you.
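A hand-rolled version of that "accidental cache" might look something like the sketch below - `ProductLookup`, `Product` and `database` are hypothetical names for illustration:

```csharp
// an "accidental cache": a static field holding a query result
public static class ProductLookup {
    private static List<Product> allProducts; // rooted forever - never garbage collected
    private static DateTime loadedAt;         // needed to decide when the data is stale

    public static List<Product> GetAll() {
        if (allProducts == null || DateTime.UtcNow - loadedAt > TimeSpan.FromMinutes(20)) {
            allProducts = database.Products.ToList(); // the slow query we wanted to avoid
            loadedAt = DateTime.UtcNow;
        }
        return allProducts;
    }
}
```

Even this tiny example has to answer the staleness question itself, and it still holds the memory forever and ignores thread safety.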

Just a dictionary

Most caches are interfaced as a dictionary - a key/value store where you save the results of some function under a string identifier and then retrieve them later using the same key. The results might be saved in memory, on disk, or on another server - it does not matter - with the same key you can get the results you saved beforehand significantly faster than you can generate them again. You also gain the benefit that you have reduced the load on the original data source. For example in C# most caches look like:

// save the results in the cache ...
cache["all-products"] = database.Products.ToList();  
// ... some time later I need them again
var cachedProducts = (List<Product>) cache["all-products"];  

However most caches don't guarantee to keep your object cached forever. Much like your pub drinking buddy, they will hold your drink while you go to the loo, but if you take too long it might not be there when you come back and ask for it. If you want something stored forever, don't use a cache.

A static dictionary, System.Web.Caching and MemoryCache

A naive implementation of a key based cache might look like this:

public static class MyFirstCache {  
    public static Dictionary<string, object> Items = new Dictionary<string, object>();
}

So you can add any CLR object to the cache, remove it, and get hold of it later. It will be stored in memory and is easily accessible. But the garbage collector will never be able to clean it up, and if you add too much to the cache you will run out of memory.

The .NET Framework provides a couple of dictionary based caches that are more sophisticated and will evict old cache entries automatically if the machine needs memory. In ASP.NET you can use System.Web.Caching.Cache, which has been around for years, and in more recent versions of the framework you can use System.Runtime.Caching.MemoryCache. Both of these work great, but suffer from two flaws that LazyCache will solve for you.

Cache-Aside pattern

The Cache-Aside pattern is the common pattern to use with a dictionary based cache. It says that every time you need to cache the result of a function (say a database call) you should check the cache by key, on a miss execute the method, then add the result to the cache so it is available the next time you check the cache with that key, saving you executing the function again. The problem is that this code snippet gets scattered throughout your code every time you need to cache something. Why can't you do it all in one line of code?
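Sketched with System.Runtime.Caching.MemoryCache, the pattern looks something like this (the "latest-posts" key and 20 minute lifetime are just example choices):

```csharp
using System.Runtime.Caching;

// check cache by key > execute the method > add result to cache
var posts = MemoryCache.Default.Get("latest-posts") as IList<Post>;
if (posts == null) {
    posts = GetLatestPosts(); // cache miss - run the slow function
    MemoryCache.Default.Set("latest-posts", posts,
        DateTimeOffset.Now.AddMinutes(20)); // keep it for next time
}
```

Four or five lines, repeated for every method you want to cache.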

Locking and concurrency

The second problem is concurrency: if two requests check the cache for the same key at the same time and there is no cached value, then you might end up generating the result twice, which is inefficient and loads your source more than necessary. Because the built in caches don't accept delegates/lambdas, you cannot perform a 'GetOrAdd' against the cache in one go. To ensure the second thread waits for the first thread to generate the result you must implement locking, ideally double-checked locking. This adds to the code you must write - code that LazyCache has built and tested already - and again it can get scattered throughout the application.
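To be safe under concurrency you end up wrapping the cache-aside snippet in double-checked locking, roughly like the sketch below (again using MemoryCache, an example key and an example lifetime):

```csharp
private static readonly object cacheLock = new object();

IList<Post> GetLatestPostsCached() {
    var posts = MemoryCache.Default.Get("latest-posts") as IList<Post>;
    if (posts == null) {
        lock (cacheLock) {
            // check again - another thread may have filled the cache
            // while we were waiting for the lock
            posts = MemoryCache.Default.Get("latest-posts") as IList<Post>;
            if (posts == null) {
                posts = GetLatestPosts();
                MemoryCache.Default.Set("latest-posts", posts,
                    DateTimeOffset.Now.AddMinutes(20));
            }
        }
    }
    return posts;
}
```

All of that boilerplate disappears once the cache itself accepts the delegate.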

Getting started with LazyCache

LazyCache makes it really easy to add caching to your application. First install it from nuget: Install-Package LazyCache.

Then select a function whose results you can cache. The method should be slow enough that its result is worth caching, and its results should be reusable in a later execution under the same key.

Let's say that when building a new blog you want to cache the most recent ten posts stored in the database, and you get these using Entity Framework. Say you already have the below code:

IList<Post> GetLatestPosts() {  
    // assumes a date column to order by; take the ten newest posts
    return dataContext.Posts.OrderByDescending(p => p.DateCreated).Take(10).ToList();
}

// get the most recent posts for the homepage
viewModel.posts = GetLatestPosts();  

To add LazyCache first create the CachingService.

IAppCache cache = new CachingService();  

By default all instances share the same underlying cache store, so you can create a new CachingService whenever you need one, but it is designed ready for dependency injection and would suit a singleton scope.
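For example, with a container such as Simple Injector the registration might look like this (the exact syntax varies by container):

```csharp
// hypothetical composition root - register one shared cache for the whole app
container.Register<IAppCache, CachingService>(Lifestyle.Singleton);
```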

Next, replace the direct call to GetLatestPosts() with the result of the CachingService's GetOrAdd method:

viewModel.posts = cache.GetOrAdd("latest-posts", () => GetLatestPosts());  

Notice we passed the key and a delegate to fetch the posts in one method call. If the posts are already in the cache (for 20 minutes by default) the lambda never gets executed, saving the EF database call. If not, LazyCache executes the lambda, puts the result in the cache for later, and returns it to the caller. Notice also that we did not need to cast the cached results because the compiler can infer the type from the lambda function.
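If the 20 minute default does not suit you, GetOrAdd has overloads that take an expiry - check the LazyCache docs for the exact signatures, but something along these lines:

```csharp
// cache for five minutes instead of the default duration
viewModel.posts = cache.GetOrAdd(
    "latest-posts",
    () => GetLatestPosts(),
    DateTimeOffset.Now.AddMinutes(5));
```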

Benefits of LazyCache

  • Quick to add to your app & a developer friendly API.
  • Built in lazy locking so your cacheable delegates only get executed once per cache miss.
  • Dependency injection friendly.
  • No added cost in external services or servers.
  • Uses MemoryCache under the hood, which is a proven part of .NET.
  • Extensible - swap out to Redis or Cassandra at a later time but keep the same code and API.
  • Open source.

What about [insert your serious caching system here]?

There are lots of caching solutions out there - memcached, Redis, Azure Cache etc. All of them require extra money or extra servers. However RAM is cheap and you are probably not using all of it at the moment anyway. Unless you know upfront that you need a massive cache, I recommend starting with LazyCache because it is so quick to get started. You can always change the underlying cache provider later on without breaking your application.

More info