Caching is an incredibly important concept: you take the result of a lengthy process and store it, so you have it on hand later. This cuts down on the time required to repeat the same process in the future.
Imagine you own a walk-up coffee bar with two servers serving coffee. At this coffee bar, you offer 50 different combinations of coffee. Now imagine that people frequently order vanilla almond milk lattes (VAMLs). What you might do is prep a bunch of lattes ahead of time, so your servers don't have to make every individual order from scratch.
In essence, this is caching: taking an operation that would take a long time and storing the results so you can serve subsequent requests more quickly.
Let's look at a couple of different types of caching and see why this isn't as easy as just prepping a certain number ahead of time.
Fragment Caching (Frag Caching)
In our coffee shop...
Going back to our coffee example: let's say our milk sours after one hour, so if we make too much, we have to throw it away, and the cost is lost. Likewise, if we decided to cache everything, making 50 of each of our coffee combinations ahead of time, we'd be throwing a lot away.
So, instead, what we could do is look at the components that make up the latte. Our simple recipe would look like:
1 shot of espresso (1m prep time)
1 shot of syrup (10s prep time)
1 cup of foamed milk (1m prep time)
Then we could cache the results of individual pieces:
Prep a bunch of pre-measured shots of espresso
Pre-measure individual cups of milk
And since most people are ordering VAMLs, premix the espresso and syrup
Now, when a server gets an order, they can make a VAML in half the time!
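The coffee version of fragment caching can be sketched in a few lines of Python. This is a toy in-memory cache; the names `fragment_cache`, `get_or_make`, and `make_vaml` are illustrative, and a real system would use a shared store like memcached or Redis.

```python
# Minimal fragment-caching sketch: each component of the drink is cached
# under its own key, so a composite order is assembled from cheap lookups
# instead of remade from scratch every time.
fragment_cache = {}

def get_or_make(key, make):
    """Return the cached fragment, making (and caching) it on a miss."""
    if key not in fragment_cache:
        fragment_cache[key] = make()
    return fragment_cache[key]

def make_vaml():
    # Only the fragments missing from the cache get remade.
    espresso = get_or_make("espresso", lambda: "espresso shot")
    syrup = get_or_make("vanilla_syrup", lambda: "vanilla syrup")
    milk = get_or_make("foamed_almond_milk", lambda: "foamed almond milk")
    return [espresso, syrup, milk]
```

After the first order, a second VAML is assembled entirely from cached pieces.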
And, on Instagram...
Another example would be an Instagram post feed. If you think of a post, it has a bunch of information attached; for this example, let's say it's:
The post itself ( photo, caption, location, tags )
Count of how many people liked it
Here's a simplified solution:
We need to do a database query for the post anyway, so all of that information comes back with the query. The count of people who liked the post is the expensive part: it requires querying the likes for every post and counting them. However, if you can detect whenever a like is added or removed, you can update the cached count for that post right then, at the moment the person liked it.
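A sketch of that write-time update in Python (the names `like_counts`, `get_like_count`, and the handlers are made up for illustration; a real feed would keep these counters in something like Redis):

```python
like_counts = {}  # stand-in for a shared cache such as Redis

def get_like_count(post_id, count_from_db):
    """Return the cached count, falling back to the expensive query on a miss."""
    if post_id not in like_counts:
        like_counts[post_id] = count_from_db(post_id)  # expensive, runs once
    return like_counts[post_id]

def on_like_added(post_id):
    # Cheap O(1) adjustment at write time instead of a re-count at read time.
    if post_id in like_counts:
        like_counts[post_id] += 1

def on_like_removed(post_id):
    if post_id in like_counts:
        like_counts[post_id] -= 1
```

The expensive count runs once to warm the cache; every like after that is a single increment.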
Mitigating the Herd
Back at the coffee shop...
We're serving a record number of people, and the line is moving quickly. Everyone is happy. Now our server turns around and grabs the last pre-made VAML... The next server turns around, realizes there are none left, and has to make a single coffee for their customer, and now we're back to being slow. What would be better is if the first server had seen that they were running low and started pre-making another 50 VAMLs.
In essence, this is herd mitigation. One server pulls some information, sees that it's still a hot commodity but about to run out, and refreshes it, letting the other servers continue to serve while only a single person waits.
In code, it would look something like this (assuming a cache client with get/set and an expensive_function to recompute the value):
from datetime import datetime, timedelta

CACHE_TIME = timedelta(hours=1)
MITIGATE_HERD_AT = timedelta(minutes=5)

def get_from_cache(key):
    # Grab the cached value
    value, expires_at = cache.get(key)
    # If we're getting close to the end, then mitigate the herd
    if datetime.now() >= expires_at - MITIGATE_HERD_AT:
        # Put the old value back in the cache for a little extra time, while
        # *this* server figures out the newest value
        cache.set(key, (value, expires_at + MITIGATE_HERD_AT))
        # Perform the operation to be cached - like making 50 VAMLs
        value = expensive_function()
        # Put that result back in the cache so all the servers use it
        cache.set(key, (value, datetime.now() + CACHE_TIME))
    return value
Dynamic data is incredibly important. We've all been on a customer support call where the representative says "it takes the system a few minutes to update". Something has been cached here and isn't updating. That may be acceptable in some scenarios, but not when you're talking about fields like aviation, health, finance, or even breaking news.
So you have two options:
Expire the information after a certain amount of time
We could prep 50 VAMLs, knowing the milk spoils after an hour, and throw them away when it does
Instagram could recalculate the number of likes you had every hour
You can see from the examples above how this would be less than ideal. However, there are scenarios where it works well. One might be Spotify: they don't want to recalculate your suggested songs every time someone adds a new song or you hit like. They can get away with refreshing after some delay, and everything will be fine.
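The first option, time-based expiry, can be sketched as a toy in-memory cache (the names `ttl_cache`, `ttl_get`, and `ttl_set` are made up; real cache stores expose this as a TTL on each key):

```python
import time

CACHE_SECONDS = 3600  # the milk "spoils" after an hour
ttl_cache = {}

def ttl_set(key, value):
    # Store the value alongside the time it was made.
    ttl_cache[key] = (value, time.time())

def ttl_get(key):
    """Return the cached value, or None if it's missing or has spoiled."""
    entry = ttl_cache.get(key)
    if entry is None:
        return None
    value, stored_at = entry
    if time.time() - stored_at > CACHE_SECONDS:
        del ttl_cache[key]  # spoiled: throw it away
        return None
    return value
```

A miss (or a spoiled entry) is the caller's cue to recompute and `ttl_set` a fresh value.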
Or, you could throw away just the part you know went bad.
This dynamic approach is ideally what you're going for. You don't want to wait an hour to see how many likes your post has on Instagram; you want to see it when it happens.
The complexity here becomes detecting that data has changed. Most information passed around is composed of lots of smaller pieces joined together to give a result. This is why the fragment caching discussed above is so important.
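Targeted invalidation over cached fragments might look like this sketch (the key scheme and names are illustrative): each fragment of a post lives under its own key, so when likes change, we throw away only the likes fragment and the rest of the post stays cached.

```python
post_cache = {
    "post:42:body": "photo + caption + location + tags",
    "post:42:likes": 10,
}

def invalidate(key):
    # Remove one fragment; other fragments stay warm.
    post_cache.pop(key, None)

def on_post_liked(post_id):
    invalidate(f"post:{post_id}:likes")  # only this fragment went bad
```

The next read of the likes fragment misses and recomputes, while the post body is still served from the cache.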
In the example above, our servers (coffee baristas) are analogous to our web servers. They have a queue of people lining up (people viewing your webpage), and they need to get coffee (information) to them. If that coffee (information) takes a long time to make, the queue gets longer and people start leaving (the pages become very slow). So if you can anticipate the orders ahead of time (what information is needed), that work can be cached, making your business much more efficient.
The tricky part is making a caching system abstract enough that engineers don't have to continually remember which pieces of information might be cached and which aren't. As the quote at the beginning of this article says, one of the hardest problems is figuring out that your cache is stale.
At ClearSummit, we built a Starter Kit that distills our decades of industry knowledge to give startups a high-quality platform out of the box. It allows us to build flexible, award-winning platforms and gives our clients room to grow their engineering and product teams by starting from a platform with best practices ingrained from scaling startups to exit, reducing ongoing overhead for small businesses, and transforming enterprise codebases.
Let’s build something great.
Together, we can assemble and execute a plan to hit your key objectives with a software product that looks, feels, and is a top-of-the-line technology experience.