From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Tremer
To: development@lists.ipfire.org
Subject: Re: Aw: Fwd: Update-Accelerator 3.0
Date: Wed, 06 Mar 2013 23:08:11 +0100
Message-ID: <1362607691.1828.8.camel@hughes.tremer.info>
In-Reply-To:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============1815561190087461111=="
List-Id:

--===============1815561190087461111==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On Wed, 2013-03-06 at 22:23 +0100, Bernhard Bitsch wrote:
> I don't think we should build a new solution based on squid's caching
> either.
>
> The existing Update Accelerator is written as a rewriter module for
> squid.
>
> This model is strong enough to implement the function "caching of
> frequently requested files".

Before we jump right ahead to discuss technical details, I would like
someone to check whether we can easily control squid's cache to store
our files, so that we don't have to manage a cache of our own.

> My first idea for a redesign of the accelerator was to generalize the
> conditions for caching.
>
> At the moment, all conditions can be described by the pattern
>
>   if URI matches set of (sample URIs and REs)_1 and URI does not
>   match set of (sample URIs and REs)_2 then
>       check(URI)
>   fi
>
> This can be enhanced if the sets of URIs and REs are condensed into
> two regular expressions for each caching class, currently called
> "vendor".
>
> Then the check for caching is just a loop over all classes.
>
> A second enhancement can be achieved if the most frequently requested
> checks are made first. The loop terminates on the first match.

The latest version of PCRE comes with a fast JIT compiler for regular
expressions. We should take advantage of that instead of running
through loops of individual expressions.

I agree that all URLs should be configurable.

-Michael

--===============1815561190087461111==--
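[Editor's note: the class-based matching loop Bernhard describes could be sketched roughly as below. This is a minimal Python illustration of the idea only; the class names, the sample patterns, and the `cacheable_class` helper are assumptions for the sketch, not code from the actual Update Accelerator.]

```python
import re

# Each caching class ("vendor") condenses its sample URIs and
# expressions into ONE include regex and ONE exclude regex, as
# proposed in the mail. The list is ordered so that the most
# frequently matched classes come first; the loop stops at the
# first match. Patterns below are purely illustrative.
CLASSES = [
    ("microsoft",
     re.compile(r"\.windowsupdate\.com/.*\.(cab|exe|psf)$"),
     re.compile(r"/no-cache/")),
    ("adobe",
     re.compile(r"ardownload\.adobe\.com/.*\.(exe|msi)$"),
     re.compile(r"$^")),  # matches nothing -> no exclusions
]

def cacheable_class(uri):
    """Return the name of the first caching class whose include
    regex matches the URI and whose exclude regex does not,
    or None if no class wants to cache this URI."""
    for name, include, exclude in CLASSES:
        if include.search(uri) and not exclude.search(uri):
            return name
    return None
```

Python's `re` compiles the patterns once up front; in the actual accelerator, PCRE's JIT compiler (as Michael suggests) would fill that role, so the per-request cost is two compiled-regex probes per class until the first hit.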