On Wed, 2013-03-06 at 22:23 +0100, Bernhard Bitsch wrote:
> I don't think we should make a new solution based on squid's caching,
> too.
>
> The existing Update Accelerator is written as a rewriter module to
> squid.
>
> This model is strong enough to realize the function "caching of
> frequent file requests".

Before we jump right ahead to discuss technical details, I would like
someone to check out whether we can easily control squid's cache to store
our files, so that we don't have to manage our own one.

> My first idea for a redesign of the accelerator was to generalize the
> conditions for caching.
>
> At the moment, all conditions can be described by the pattern
>
>   if URI match set of (sample URIs and REs)_1
>      & URI !match set of (sample URIs and REs)_2 then
>       check(URI)
>   fi
>
> This can be enhanced if the sets of URIs and REs are condensed into
> two regular expressions for each caching class, currently called
> "vendor".
>
> Then the check for caching is just a loop over all classes.
>
> A second enhancement can be achieved if the most frequently requested
> checks are made first. The loop terminates on the first match.

The latest version of PCRE comes with a fast JIT compiler for regular
expressions. We should take advantage of that instead of running through
loops.

I agree that all URLs should be configurable.

-Michael
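
To make the proposal above a bit more concrete, here is a minimal sketch in C against the classic PCRE library (the JIT study option exists since PCRE 8.20) of a per-class include/exclude regex check, compiled once and looped over with first-match-wins semantics. The class names, patterns and helper functions are made up for illustration only; the existing Update Accelerator is a squid rewriter script, not C code. Ordering the class table by request frequency would give the "most requested checks first" enhancement for free.

    /* Illustrative sketch only: one include/exclude regex pair per caching
     * class ("vendor"), compiled once with PCRE's JIT, then checked in a
     * loop that terminates on the first match. */
    #include <pcre.h>
    #include <stdio.h>
    #include <string.h>

    struct cache_class {
        const char *name;        /* e.g. "windows-update" -- made up */
        const char *include_re;  /* URIs that belong to this class */
        const char *exclude_re;  /* URIs never to cache despite matching */
        pcre       *inc;
        pcre_extra *inc_extra;   /* JIT-compiled code from pcre_study() */
        pcre       *exc;
        pcre_extra *exc_extra;
    };

    static struct cache_class classes[] = {
        /* Most frequently requested classes first, so the loop exits early. */
        { "windows-update",
          "^http://.*\\.windowsupdate\\.com/.*\\.(cab|exe|psf)$",
          "\\.asp$", NULL, NULL, NULL, NULL },
        { "adobe",
          "^http://.*\\.adobe\\.com/.*\\.(exe|msi|msp)$",
          NULL, NULL, NULL, NULL, NULL },
    };
    static const int nclasses = sizeof(classes) / sizeof(classes[0]);

    static pcre *compile_jit(const char *pattern, pcre_extra **extra)
    {
        const char *err;
        int erroff;
        pcre *re = pcre_compile(pattern, 0, &err, &erroff, NULL);
        if (!re) {
            fprintf(stderr, "bad pattern %s: %s\n", pattern, err);
            return NULL;
        }
        /* PCRE_STUDY_JIT_COMPILE is available since PCRE 8.20. */
        *extra = pcre_study(re, PCRE_STUDY_JIT_COMPILE, &err);
        return re;
    }

    /* Returns the matching class, or NULL if the URI should not be cached. */
    static struct cache_class *match_class(const char *uri)
    {
        int ovec[30];
        int len = (int)strlen(uri);

        for (int i = 0; i < nclasses; i++) {
            struct cache_class *c = &classes[i];
            if (!c->inc)
                continue;
            if (pcre_exec(c->inc, c->inc_extra, uri, len, 0, 0, ovec, 30) < 0)
                continue;                   /* include pattern did not match */
            if (c->exc &&
                pcre_exec(c->exc, c->exc_extra, uri, len, 0, 0, ovec, 30) >= 0)
                return NULL;                /* explicitly excluded */
            return c;                       /* first match wins */
        }
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < nclasses; i++) {
            classes[i].inc = compile_jit(classes[i].include_re,
                                         &classes[i].inc_extra);
            if (classes[i].exclude_re)
                classes[i].exc = compile_jit(classes[i].exclude_re,
                                             &classes[i].exc_extra);
        }

        const char *uri = "http://download.windowsupdate.com/foo/bar.cab";
        struct cache_class *c = match_class(uri);
        printf("%s -> %s\n", uri, c ? c->name : "not cached");
        return 0;
    }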