August 27, 2006

Scheduling jobs in a web farm or clustered services

I have migrated my blogs; this article has moved to the new location.

I had asked a question here about this topic. The answer I got really surprised me. I was told to consider Grid computing. Anyway, my thoughts on the topic are below.

Even though we write 3-tier applications that are clustered and load balanced using hardware load balancers, we are often left wondering what to do about scheduled jobs and long-running jobs, which cannot be web pages.
The options that we face are:
1) Make them SQL Server jobs. Databases are typically clustered; even if not, they must be available for the application to work, so it makes sense to tie the failure dependency to the database. The challenge we face here is that it is really not advisable to have custom DLLs running on a potentially shared database machine.
2) Make them EXEs triggered by the Windows scheduler. The problem here is that the Windows scheduler is not cluster aware. If this route is taken, we either need to live with manual failover of the jobs, or we need to schedule the job on multiple machines and implement some kind of locking, possibly using the database, to ensure that only one job runs at a time.
3) Look at a clustered scheduler, including the Windows cluster APIs in case you have OS-level clustering at the web/app server tier. In my experience it is rare to have a cluster at this level, but if there is one, you should be ready to exploit it. All OS clusters, including Veritas, provide cluster services programming. You usually have two options: you can make the job/service part of the machine healthcheck, so that if your job fails the cluster fails over; or you can make the job run only on the primary node of the cluster. It is probably the second option we are looking for. There are also third-party clustered schedulers available, mostly commercial.
4) Windows services: here again, we can take advantage of OS cluster services to make the service run on the primary node of the cluster only. Alternatively, we can code a lock at the database level so that only one service is active.
5) Grid computing APIs: grid computing tools, acting as glorified schedulers, can ensure that the job runs successfully and exactly once.
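The database lock from options 2 and 4 can be sketched quickly. Below, a hypothetical in-memory JobLock class stands in for a single-row lock table in the shared database (the equivalent SQL is in the comments); all names are made up:

```java
import java.util.concurrent.atomic.AtomicReference;

// Stands in for a single-row JobLock table in the shared database.
// The real thing would be one atomic statement, e.g.:
//   UPDATE JobLock SET holder = @me, acquiredAt = GETDATE()
//   WHERE jobName = 'nightly-report' AND holder IS NULL
// and "1 row affected" means this node won the lock.
class JobLock {
    private final AtomicReference<String> holder = new AtomicReference<>(null);

    // Each scheduled instance calls this; only one node wins.
    boolean tryAcquire(String nodeName) {
        return holder.compareAndSet(null, nodeName);
    }

    // Release only if we are the current holder.
    void release(String nodeName) {
        holder.compareAndSet(nodeName, null);
    }

    public static void main(String[] args) {
        JobLock lock = new JobLock();
        // The same job fires on two machines at the same time...
        System.out.println("web1 runs job: " + lock.tryAcquire("web1")); // true
        System.out.println("web2 runs job: " + lock.tryAcquire("web2")); // false
        lock.release("web1");
    }
}
```

In the real database version you would also want a timeout on `acquiredAt`, so a crashed node does not hold the lock forever.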

Getting around "You are being redirected to an unsecure site"

Doing a Response.Redirect from an HTTPS page to an HTTP page is not considered good, as users get the warning "you are being redirected to an unsecure site".
One way I have used to avoid this warning is to do the redirect with client-side JavaScript.
i.e. if I want to go from https://myserver/a.aspx to http://myserver/b.aspx, I do the following
From a.aspx, I do a Response.Redirect to https://myserver/redirect.aspx?targeturl=http://myserver/b.aspx
Then, in a page-onload JavaScript method on redirect.aspx, I call something like window.location.replace(targeturl).
This makes the client browser move to the HTTP page.
While coding such a redirect, you may want to consider that the application may be deployed in different environments with different server URLs. Hence you may want to read the hostname from what the client has specified in the request, rather than hard-coding it.

Sizing for CMS


I will try to complement Apoorv's blog on Portals and Content Management.
In one of his recent blogs, Apoorv mentioned sizing. That is one thing I happen to have worked a lot on.
I will list a few things I noticed about the content management systems I worked on:
- It is the database access that kills.
- SQL Queries on the presentation layer tend to be a lot heavier than the queries on the Content management backend
- Unless and until you have a very simple presentation - it will always make sense to cache the presentation as HTML pages and serve that to customers instead of dynamic pages (of course everyone knows that)
- If the update frequency or volume is large (say, more than a page a minute on average) and the database gets large, even the publishing process takes its toll. It is good to have aggressive archiving for the content. In case archiving is not feasible (after all, it is a content management system), a replicated database for presentation may be the only option.
- Coming to sizing, you are likely to get better projections by benchmarking against existing applications.
- For benchmarking against an existing application, you need the page views per second for the most frequently used dynamic pages, and the database size.
- If you have a benchmark, you can halve the performance for every 10-times increase in data size.
- Typically, if you can serve 7 pages per second with 1 CPU of application server and 2 CPUs of database server for a non-cached application, it is considered good performance. CMS pages which update only "one content item" can typically achieve this kind of performance for a database holding 10 to 50 thousand content items.
- XML processing is usually a big killer, so if you are transferring large structured documents using web services and a document size is expected to be more than 20 KB, then you really have to look at the performance. As per a benchmark I am doing now, 1 CPU can consume a web service returning 1 MB of data only 3 times a second. This is with no processing at all, just a web service call using a regular SOAP client. So as a rule of thumb, if you are making web service calls, it is a good start to halve the above benchmark of 7 pages per CPU on the app server to form a target to aim for.
- Some CMS have object or XML databases. I am not sure how you can size for them once the content grows beyond a certain size.
- Search engines fall in a different league. I am not sure how to size for them.
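The halving rule above lends itself to a quick projection. A Java sketch, using the baseline from the text (7 pages per second at roughly 50,000 items); the formula and helper name are mine:

```java
// Rule of thumb from above: throughput halves for every 10x growth in data size,
// i.e. projected = base / 2^(log10(newItems / baseItems)).
class SizingRule {
    // basePagesPerSec was measured at baseItems; project throughput for newItems.
    static double project(double basePagesPerSec, double baseItems, double newItems) {
        double tenfoldSteps = Math.log10(newItems / baseItems);
        return basePagesPerSec / Math.pow(2, tenfoldSteps);
    }

    public static void main(String[] args) {
        // 7 pages/sec benchmarked at 50,000 content items,
        // projected at 500,000 items (one 10x step).
        System.out.println(SizingRule.project(7, 50_000, 500_000)); // prints 3.5
    }
}
```

This is only a starting point for capacity planning; a real benchmark on your own data beats any rule of thumb.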

August 26, 2006

Cache - access and expiry


In my previous post on cache implementation I talked about where to keep the cache. Now I will talk about how to access the cache and how to expire it.

Before we go into those topics, it is very important to consider the tolerance for stale data, for maximum optimization.
Let's say we cannot tolerate stale data at all; say the application sells a hotel room priced in Thai Baht converted to Sterling. Depending on the rate I get from my bank and the date of stay, I get a price.
So every time anyone requests a room, I have to check the price like
PriceInGBP=PriceInTHB* ( Select ExchangeRate where fromCurrency=THB and ToCurrency=GBP and ValidityFromDate<=12Jan and ValidityToDate >= 12Jan and isLive=True)
- Since I am selling in the future, I have to look up rates for that date.
Now, if I cannot tolerate stale data, the best I can do is put a trigger on the exchange rate table that updates an ExchangeRateVersionNumber table. The exchange rate version number is updated on any change to the exchange rate table.
Thus my application changes to
If CachedExchangeRateVersionNumber = (select * from ExchangeRateVersionNumber), use the cached exchange rate; else run the above query to fetch the exchange rate.
Here the query on the database, and hence the load on it, is much smaller, aiding scalability.

However, at least one query needs to be fired every time.
Now let's assume that we could tolerate stale data for 1 minute. We could change getExchangeRate to:
If CachedExchangeRateTimeStamp >= currentDateTime - 1 minute, then use the cached rate; otherwise run the above.
This results in significantly faster execution, as it saves round trips to the database.
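Both variants of getExchangeRate above can be sketched together. In this hypothetical Java class, the two suppliers stand in for the real queries (the single-cell version lookup and the full rate query); with a TTL of zero you get the no-stale-data behaviour, with a positive TTL you get the 1-minute tolerance:

```java
import java.util.function.DoubleSupplier;
import java.util.function.LongSupplier;

// Sketch of the two staleness strategies above. The suppliers are
// placeholders for the real queries:
//   versionQuery -> select * from ExchangeRateVersionNumber (single cell)
//   rateQuery    -> the full exchange-rate lookup
class ExchangeRateCache {
    private final LongSupplier versionQuery;
    private final DoubleSupplier rateQuery;
    private final long ttlMillis;          // 0 = no tolerance: check version on every get
    private long cachedVersion = -1;
    private double cachedRate;
    private long lastCheckMillis = 0;

    ExchangeRateCache(LongSupplier versionQuery, DoubleSupplier rateQuery, long ttlMillis) {
        this.versionQuery = versionQuery;
        this.rateQuery = rateQuery;
        this.ttlMillis = ttlMillis;
    }

    synchronized double getRate() {
        long now = System.currentTimeMillis();
        if (now - lastCheckMillis >= ttlMillis) {      // TTL window elapsed...
            lastCheckMillis = now;
            long dbVersion = versionQuery.getAsLong(); // ...cheap single-row check
            if (dbVersion != cachedVersion) {          // stale: run the heavy query
                cachedVersion = dbVersion;
                cachedRate = rateQuery.getAsDouble();
            }
        }
        return cachedRate;
    }

    public static void main(String[] args) {
        ExchangeRateCache cache = new ExchangeRateCache(() -> 1L, () -> 54.2, 0);
        System.out.println(cache.getRate());
    }
}
```

Note that even in the "no tolerance" mode, the heavy rate query runs only when the version actually changes; the per-request cost is the single-cell version read.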

Let's now take the scenario where the exchange rate table is updated by a feed from my bank, instead of being entered manually. Here, if I cache, I have to tolerate stale data, even if only for a minute. There can be no trigger (unless my bank supports one) to help check validity.

Now let's come back to accessing the cache. We want to read from the cache simultaneously via multiple threads; however, when the cache is being updated, we need all the threads to stop.
While implementing such a lookup, we need to be careful not to lock/synchronize while reading, i.e. we must not force different threads to read in sequence.

In a typical implementation, I would prefer a hashtable, as it already provides such thread safety (provided there is only one writing thread).

The granularity of this lock should be the same as the granularity of the cache update.
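One way to get "many concurrent readers, stop-everything writer" semantics is a read-write lock. A Java sketch using the standard ReentrantReadWriteLock; the class and method names are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Readers proceed in parallel; a cache refresh blocks all of them
// only for the duration of the write.
class LockedCache<K, V> {
    private final Map<K, V> items = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    V get(K key) {
        lock.readLock().lock();      // shared: readers do not queue behind each other
        try { return items.get(key); }
        finally { lock.readLock().unlock(); }
    }

    // Lock granularity here is the whole cache, matching a wholesale refresh.
    void refreshAll(Map<K, V> fresh) {
        lock.writeLock().lock();     // exclusive: all readers stop here
        try { items.clear(); items.putAll(fresh); }
        finally { lock.writeLock().unlock(); }
    }

    public static void main(String[] args) {
        LockedCache<String, Double> cache = new LockedCache<>();
        cache.refreshAll(Map.of("THB-GBP", 0.0145));
        System.out.println(cache.get("THB-GBP"));
    }
}
```

If the update granularity were a single item rather than the whole cache, the lock could be per item or per partition instead.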

Now looking at expiry:
We have already discussed that in some cases we need a garbage-collector-like daemon clearing the cached items. This will also be required in the case of an LRU cache.
If the number of items in the cache is limited, we don't need such a daemon; expiry checking happens while fetching from the cache.
In some other cases, we will expire and seed the cache at pre-defined times.
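For the LRU case specifically, Java's LinkedHashMap in access order can evict the least recently used entry on insert, without a separate daemon. A minimal sketch; the capacity and class name are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// accessOrder=true makes iteration order "least recently used first";
// removeEldestEntry evicts automatically on insert once we exceed capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // true = order by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("book-1", "details");
        cache.put("book-2", "details");
        cache.get("book-1");           // touch book-1
        cache.put("book-3", "details"); // evicts book-2
        System.out.println(cache.keySet());
    }
}
```

Note this class is not thread safe on its own; in a multi-threaded cache it would sit behind a lock like the one discussed above.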

Here are some questions which I answered in a recent post on MSDN forums:
1. At what moment should we invalidate the cache: at the start of the transaction, just before updating the database, or at some other time?
It really depends on what kind of data you are caching. If you are caching flight information as in the Expedia example above, the transaction is not in your control, hence you have no option but to have a slight tolerance for stale data.
Assume your own system is updating the data that is cached, and that it lives in a common datastore like a SQL database. If I also assume a clustered environment, there are multiple instances of the cache: one per app pool per machine. Hence the component updating the data has no way of reliably marking the cache invalid in all the different instances of the in-memory cache.
This forces all the different caches to have their own "listeners" checking for cache updates. Talking implementation rather than patterns: if your application can tolerate stale data, you could have a background thread updating the cache periodically from the database. If your application cannot tolerate a stale cache, you will have to check the DB status before each read from the cache. In the DB you could use triggers to populate the cache status in a single-cell table (say, a last-updated timestamp), which the cache compares with its own copy, refreshing itself if required. This way, the load on the database is much less than it would be if the entire cache were retrieved each time.
So in this case, updating the cache-invalid flag as part of the atomic transaction will help.
2. When should we lock the cache?
In either case, we should lock the cache while the update check or the update itself is happening. The granularity of the lock depends on the granularity at which we want to refresh the data.
3. How should we overcome the data-retrieval latency, which ultimately leads to stale data in the cache for some time?
For externally maintained data, it may not be possible. For self-maintained data, a sample implementation is discussed above.
4. If a thread is making changes to data in the database, how can we make sure the cache gets refreshed so that other threads get the fresh value?
I think I see your dilemma now. I would say keep the cache as a static hashtable or equivalent and always get data from it for all your threads (something like (MyObject)Cache.getFromCache(itemId)), and lock the getFromCache method when you are checking for updates.
However, if you are looking at code which is definitely going to be non-clustered, you could put data in threads and use events and delegates to trigger the cache: an implementation of the observer pattern or a state machine. Use a state machine if you have a dependency like
Thread A has cached data -> class B depends on that state of class A -> class C depends on the state of class B,
where you want the change to be propagated all the way to C before you release the locks.
5. Should we update the cache inside the transaction or after completion of the transaction (in both the stale-data-tolerant and intolerant scenarios)?
It should be part of the atomic transaction in either case.
6. If one thread is reading or writing a value in the cache, how do we block other threads from accessing that cache object?
Keep the cache in one static class, and lock the read-from-cache method **when the update or check for update is happening** (I am not talking about synchronous one-at-a-time access). You can do an explicit lock of the object/method, or you can use a structure like Hashtable.Synchronized, which already has this implementation: it locks all getters while setters are running.
I would say the following will fit your purpose best, based on what I think your requirement is (self-maintained data in a database, potential clustering, potentially no tolerance for stale data, TTL and not LRU):
1) Choose something like a static synchronized hashtable for your cache.
2) For a cluster, you would need to poll to get changes.
3) Poll for changes at a timeout, or at every get, depending on whether you can tolerate stale data or not.
4) Define a coarse granularity of cache expiry (otherwise polling for cache expiry can offset any gains from having the cache in the first place).
If I were writing a generic cache, I would probably go for three different implementations: externally maintained items, self-maintained items, and an LRU cache; and maybe a fourth one for non-clustered applications, like games.

Do post a comment and I will try to keep up with responses.

Cache - Implementation

In my previous post on cache concepts I indicated that caches are defined by how we want to expire them. I also talked about the granularity of cache.

In this post I will look at where to keep the cache.

When it comes to caching implementation, the considerations are: where should we keep the cache, how do we access it, and how do we keep it updated. Accessing and updating the cache will form the topics of subsequent posts.

Where to keep the cache.

We all agree that the closer the cache is to the consumer, the more optimal it is.
Hence, for a browser-based application, the most optimal way would be to cache in users' browsers. This can be done by having simple GET URLs and by specifying appropriate meta tags and HTTP headers indicating a cache expiry time.
The problem is that this is unpredictable: whether the browsers cache at all, and whether they actually expire the cache when we want them to, depends on the browser and the user's settings. It is, however, by all means favorable for items which are not subject to change (typically images do not change even if the text of an article does). The second problem is that this is cached per user, not across users.
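As a sketch of what "specifying appropriate HTTP headers" might look like, here is a small example using the JDK's built-in HTTP server; the path, lifetime, and class name are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Serve an image-like resource with an explicit cache lifetime so that
// browsers and proxies are allowed to keep a copy.
class CacheHeaderDemo {
    static String fetchCacheControl() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/logo.gif", exchange -> {
            byte[] body = "GIF...".getBytes();
            // One day of cacheability for a rarely-changing asset.
            exchange.getResponseHeaders().set("Cache-Control", "public, max-age=86400");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            // Act as the client and read back the header the browser would see.
            HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://localhost:" + server.getAddress().getPort() + "/logo.gif").openConnection();
            String value = conn.getHeaderField("Cache-Control");
            conn.disconnect();
            return value;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchCacheControl());
    }
}
```

Whether the browser honors the header is, as noted above, ultimately up to the browser and the user's settings.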

The second closest place is proxy servers. They have the same advantages and problems as the browsers, so I will not go into them. Typically, we want to defeat the browser and proxy cache, and I will probably write a separate post on that.

The next nearest location is the web servers (I will go ahead and assume we are talking about a 3-tier application which is clustered at each level).
Here we have the option of using a daemon job that acts as a user, accesses the dynamic page, and places the resulting page as a static HTML page. The links point to this HTML page, either directly or via an ISAPI/NSAPI plugin which works directly on the web server.
This is the most optimal form of caching on the server: serving such pages requires only the same processing as a static HTML file. It is an ideal candidate for web sites which are accessible without login, or which have an "all or nothing" login with no user-profile-based access.

The next location is within the web app, the most popular location for caching content. Most popular web platforms provide an option to cache parts of a web page, either as a part of the language (like the JSP cache tag) or using third-party components like OSCache.
This is highly efficient, as we are caching computed HTML with the least possible amount of processing on the server end, while still allowing personalized/dynamic content for the rest of the page.
It is worth noting that such a cache is not clustered: there will be an instance of the cached item on each web server instance. This poses a challenge while refreshing the cache in the case of event-driven expiry. It can also lead to inconsistent results for a short period in the case of TTL caches (two users hitting different web servers could see different results).

So where is such a cache applicable? Well, almost everywhere. Let's say you have an application which requires authentication and personalizes parts of the site, like navigation based on access rights. Here we can cache the "relatively" static parts of the page.

The third option is to cache objects in memory. Almost all applications cache reference data in memory, on the web-app layer, the app layer, or both. The cached objects are placed in some kind of singleton or static object. The datastore is structured depending on the granularity of cache expiry, and also on whether there is a limited set of cache items or a growing/very large number of them. Let's take a few examples to understand this better.
As in the last article:
1) Departure boards of all London airports. Feeds come from different airports, the number of airports is limited, and all flights for an airport get updated together. Here it might make sense to have one "hashtable" or similar per airport, with flights as items within them. On receiving a new feed, the entire hashtable for that airport may be cleared. So cache clearing is done on retrieving a new feed, and there is no worry about the cache running away with all available memory.
2) Flight results at a travel search site or similar: here the flight results depend on the number and ages of passengers, departure and return dates, preferences like economy or business class, etc. It is likely that the same query may never be repeated; if I searched for today's flights, that query will never recur from tomorrow. In such a case, the cached result will just sit and eat away the memory. Such cases require cached items to be specifically recorded in a list, with the items in the list cleared by a background thread, very much like garbage collection in managed .NET code or Java. The datastore in such a case may be a static hashtable plus a table of object references and expiry times, sorted by expiry time.
3) Inventory for an e-commerce shop. The item summary, including inventory, may be cached at the search-results level. On navigating to item details, or on attempting to buy, the cached inventory for the specific item may be refreshed. The datastore in this case may be something like a resultset.
4) Reference data: In case of any change, we may want to refresh all reference data elements.
The datastore in this case may be a static hashtable.
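Example 1 (one hashtable per airport, cleared wholesale on a new feed) might look like the Java sketch below; airport codes and field names are made up:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// One Hashtable per airport; a feed replaces the airport's table wholesale,
// which matches the cache-expiry granularity discussed above.
class DepartureBoardCache {
    private final Map<String, Hashtable<String, String>> byAirport = new HashMap<>();

    // A new feed for one airport throws away that airport's entries only.
    synchronized void applyFeed(String airport, Map<String, String> flightStatuses) {
        byAirport.put(airport, new Hashtable<>(flightStatuses));
    }

    synchronized String status(String airport, String flight) {
        Hashtable<String, String> board = byAirport.get(airport);
        return board == null ? null : board.get(flight);
    }

    public static void main(String[] args) {
        DepartureBoardCache cache = new DepartureBoardCache();
        cache.applyFeed("LHR", Map.of("BA123", "Boarding"));
        System.out.println(cache.status("LHR", "BA123"));
    }
}
```

There is deliberately no per-flight comparison here; as argued above, that processing could outweigh the benefit of the cache.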

Remember that these object caches will be one per server instance (1 per JVM, 1 per app pool, etc.).

The final option is to cache objects in the database. There are two distinct scenarios where we do that:
1) We create materialized views and hence "cache" the table values in a way that is optimal to retrieve.
2) Data fetched from external sources is stored in the database for subsequent use.

Cache is kept in the database in multiple scenarios, such as:
a) No tolerance of stale data. The database cache is a single place and hence can be updated (read: observer pattern, state machines, and related patterns) as soon as the source data gets updated.
b) Grid computing, a 2-tier application, or a very large web farm, in which case it might be sub-optimal to have the items cached at each node.
c) Data comes from external interfaces at a very high cost, so one fetch per JVM/CLR may be too inefficient. Hence the item is cached in the database, and it may additionally be cached on the individual machines.

In the next post on cache, I will explore in detail accessing and updating cache, especially event driven ones.

Cache - the concepts

In the early days of the Internet, when memory was expensive, CPUs were less powerful, and the dreams were big while budgets were small, caching was perhaps the biggest buzz in software architecture. Today we have at least 10x more powerful CPUs, memory is at least 10 times cheaper, and hardware and software can be scaled many times over (who would have imagined Windows supporting 64 processors?). Cache is still there, and is supported out of the box in some web programming languages!
I came across this question in the MSDN architecture forum, which triggered this note.

There are three distinct kinds of scenarios which need different kinds of caching and cache expiry.
1) Most Frequently Used: imagine you are building an application with millions of items in the catalogue, and you can fetch details about all of them at runtime from the database. We know that the book descriptions are not going to change very often, so we do not need to hit the database every time we show the details of a book or other item. However, the sheer size of the database is so large that we cannot (no, not even now) contemplate keeping all of it in the memory of the application server. The solution here is to keep the most frequently used items in the cache and remove the rest. The cache expiry algorithm is to remove the least recently used item from the cache; these caches are called LRU caches, for Least Recently Used. It is funny how they are named after the mechanism of expiry and not how elements are cached.
If we think a bit more, we will realize how old this concept is. Computer architecture, memory management, virtual memory: a quick look at Wikipedia for virtual memory shows that this concept has been around since 1959! Phew!

2) Time to Live: there is plenty of data for which resources are abundant and we can cache and manage everything in memory. If you have subscribed to RSS feeds via any reader, you know that the data you see when you launch these pages may not be the latest. These readers do not go to the source very frequently for updates; they get the feeds and keep them in memory for about 2 hours, or whatever time you specify. This kind of caching is named "Time to Live", or TTL, because the item in memory has a certain time to live before it expires. You can see this kind of cache on eBay (you will notice that while viewing a list of items, the bid price does not always reflect what is happening inside, especially for items which are ending soon).

3) Event based: here we cache the data until an external event forces the cache to be expired. Some online travel sites cache the hotel availability information until they get an error while booking saying that there are no rooms available; this is a trigger for cache expiry. Similarly, B2C news sites have an explicit "publishing" action which clears the cache and lets users see the latest articles.

In all three scenarios, the characteristics of cache are in fact the characteristics of cache expiry.

The next thing to consider is the Granularity of cache.
Many times, information comes in packets, and hence the cache should also expire in packets.
Let's say you have an application showing the departure boards of all London airports. It is likely that you will get a feed every "x" minutes from each airport, and when you do, you want to expire the status of all flights of that airport and reload them. It might be too much effort to compare individual flights and change them selectively; in some cases, this processing may outweigh the benefit of the cache in the first place.

It is usually easy to identify the parameters for caching, and for a different value of any of those parameters we can assign a different "cache set". For example, search results depend on the query parameters typed by the user. They may also depend on the language option and any other structured search field selected by the user. So caching in that case will have all the structured search options as parameters, and if any of them changes, the cached value cannot be used and a query has to be run.
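The "cache set per parameter combination" idea can be made concrete by composing the cache key from every structured input. A Java sketch with hypothetical search fields; the real query is passed in as a function:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Every structured search option participates in the key; change any one
// parameter and the lookup misses, so the real query runs.
class SearchCache {
    private final Map<String, String> results = new HashMap<>();
    private final Function<String, String> search; // stands in for the real query

    SearchCache(Function<String, String> search) { this.search = search; }

    // Hypothetical parameters: free-text query, language, category filter.
    static String key(String query, String language, String category) {
        return query + "|" + language + "|" + category;
    }

    String resultsFor(String query, String language, String category) {
        // Run the search only on a cache miss for this parameter combination.
        return results.computeIfAbsent(key(query, language, category), search);
    }

    public static void main(String[] args) {
        SearchCache cache = new SearchCache(k -> "results for " + k);
        System.out.println(cache.resultsFor("hotels", "en", "travel"));
    }
}
```

If any parameter is left out of the key, two different searches would collide on one cache set, which is exactly the bug this construction avoids.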

I will talk more about caching and cache implementation in my next post.