* Re: Aw: Fwd: Update-Accelerator 3.0
[not found] <trinity-4fddf3b5-ccc4-4ab6-aa82-e7c38081966d-1362605012586@3capp-gmx-bs43>
@ 2013-03-06 22:08 ` Michael Tremer
2013-03-07 9:36 ` Aw: " Bernhard Bitsch
0 siblings, 1 reply; 5+ messages in thread
From: Michael Tremer @ 2013-03-06 22:08 UTC (permalink / raw)
To: development
[-- Attachment #1: Type: text/plain, Size: 1368 bytes --]
On Wed, 2013-03-06 at 22:23 +0100, Bernhard Bitsch wrote:
> I don't think we should build a new solution based on squid's caching
> either.
>
> The existing Update Accelerator is written as a rewriter module for
> squid.
>
> This model is strong enough to realize the function "caching of
> frequent file requests".
Before we jump right ahead into discussing technical details, I would
like someone to check whether we can easily control squid's cache to
store our files, so that we don't have to manage our own.
> My first idea for a redesign of the accelerator was to generalize the
> conditions for caching.
>
> At the moment, all conditions can be described by the pattern
>
> if URI matches set of (sample URIs and REs)_1 and URI does not match
> set of (sample URIs and REs)_2 then
>
> check(URI)
>
> fi
>
> This can be enhanced if the sets of URIs and REs are condensed to
> two regular expressions for each caching class, currently called
> "vendor".
>
> Then the check for caching is just a loop over all classes.
>
> A second enhancement can be achieved if the most frequently requested
> checks are made first. The loop terminates at the first match.
The latest version of PCRE comes with a fast JIT compiler for regular
expressions. We should take advantage of that instead of running through
loops.
I agree that all URLs should be configurable.
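Bernhard's quoted pattern and the per-class loop could be sketched roughly like this. The vendor names and patterns below are made up purely for illustration, and Python's `re` module stands in for PCRE:

```python
import re

# Hypothetical caching classes ("vendors"): each has one "match" RE and
# one "exclude" RE, condensed from the sets of sample URIs and REs.
# The list is ordered so the most frequently requested classes come first.
VENDOR_RULES = [
    ("windows-update", re.compile(r"\.windowsupdate\.com/.*\.(cab|exe|psf)$"),
                       re.compile(r"/selfupdate/")),
    ("debian",         re.compile(r"/debian/pool/.*\.deb$"),
                       re.compile(r"/dists/")),
]

def classify(uri):
    """Return the first caching class whose rules accept the URI, else None."""
    for vendor, match_re, exclude_re in VENDOR_RULES:
        if match_re.search(uri) and not exclude_re.search(uri):
            return vendor  # first match terminates the loop
    return None

print(classify("http://dl.windowsupdate.com/pub/foo.cab"))  # prints "windows-update"
```

Ordering the list by request frequency gives the early-termination benefit Bernhard describes, while each per-class RE stays small enough to be compiled (and JIT-compiled) once at startup.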
-Michael
^ permalink raw reply [flat|nested] 5+ messages in thread
* Aw: Re: Fwd: Update-Accelerator 3.0
2013-03-06 22:08 ` Aw: Fwd: Update-Accelerator 3.0 Michael Tremer
@ 2013-03-07 9:36 ` Bernhard Bitsch
2013-03-07 13:15 ` Jörn-Ingo Weigert
0 siblings, 1 reply; 5+ messages in thread
From: Bernhard Bitsch @ 2013-03-07 9:36 UTC (permalink / raw)
To: development
[-- Attachment #1: Type: text/plain, Size: 2265 bytes --]
> Sent: Wednesday, 6 March 2013 at 23:08
> From: "Michael Tremer" <michael.tremer(a)ipfire.org>
> To: "Bernhard Bitsch" <Bernhard.Bitsch(a)gmx.de>
> Cc: "development(a)lists.ipfire.org" <development(a)lists.ipfire.org>
> Subject: Re: Aw: Fwd: Update-Accelerator 3.0
>
> On Wed, 2013-03-06 at 22:23 +0100, Bernhard Bitsch wrote:
>
> > I don't think we should build a new solution based on squid's
> > caching either.
> >
> > The existing Update Accelerator is written as a rewriter module for
> > squid.
> >
> > This model is strong enough to realize the function "caching of
> > frequent file requests".
>
> Before we jump right ahead into discussing technical details, I would
> like someone to check whether we can easily control squid's cache to
> store our files, so that we don't have to manage our own.
>
>
No problem. But this solution must give us the possibility to manage the file store from the WUI.
I don't want to lose this feature.
> > My first idea for a redesign of the accelerator was to generalize
> > the conditions for caching.
> >
> > At the moment, all conditions can be described by the pattern
> >
> > if URI matches set of (sample URIs and REs)_1 and URI does not
> > match set of (sample URIs and REs)_2 then
> >
> > check(URI)
> >
> > fi
> >
> > This can be enhanced if the sets of URIs and REs are condensed to
> > two regular expressions for each caching class, currently called
> > "vendor".
> >
> > Then the check for caching is just a loop over all classes.
> >
> > A second enhancement can be achieved if the most frequently
> > requested checks are made first. The loop terminates at the first
> > match.
>
> The latest version of PCRE comes with a fast JIT compiler for regular
> expressions. We should take advantage of that instead of running
> through loops.
>
A JIT compiler does not make the loops avoidable (Perl uses one, too):
the storage application must loop over the various categories.
From a short look at PCRE, I could not find a way to efficiently assemble several single REs/URIs into one.
This is necessary if we want the user to be able to extend the rule set. A main problem in the current implementation is extending it by adding a new alternative.
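One hedged sketch of how several single REs could be assembled into one user-extensible expression is to join the alternatives with `|` and recompile whenever one is added. The patterns are illustrative only, and Python's `re` module stands in for PCRE:

```python
import re

class RuleSet:
    """Holds the per-class alternatives and keeps one combined, compiled RE."""

    def __init__(self, alternatives):
        self.alternatives = list(alternatives)
        self._compile()

    def _compile(self):
        # Each alternative becomes a non-capturing group in one alternation,
        # so adding a rule never disturbs the meaning of the existing ones.
        self.combined = re.compile("|".join("(?:%s)" % a for a in self.alternatives))

    def add(self, pattern):
        re.compile(pattern)  # reject broken user input before accepting it
        self.alternatives.append(pattern)
        self._compile()

    def matches(self, uri):
        return self.combined.search(uri) is not None

rules = RuleSet([r"\.deb$", r"\.rpm$"])
rules.add(r"\.cab$")  # the user extends the rule set with a new alternative
print(rules.matches("http://example.org/pool/main/bash.deb"))  # prints True
```

Wrapping every alternative in a non-capturing group keeps the alternatives independent of each other, which is exactly what makes adding a new alternative cheap and safe.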
- Bernhard
* Re: Re: Fwd: Update-Accelerator 3.0
2013-03-07 9:36 ` Aw: " Bernhard Bitsch
@ 2013-03-07 13:15 ` Jörn-Ingo Weigert
2013-03-12 12:25 ` Michael Tremer
0 siblings, 1 reply; 5+ messages in thread
From: Jörn-Ingo Weigert @ 2013-03-07 13:15 UTC (permalink / raw)
To: development
[-- Attachment #1: Type: text/plain, Size: 4145 bytes --]
Hi, I would also prefer the current approach of a separate repository.
Maybe it's easy, but I don't know whether it is possible to move squid's
cache, or parts of it, to other locations, or to maintain it via the
cache manager. So the current way of storing it separately seems to me
the best way.
Having the possibility to set debug options and to move the update cache
to another location via the WebUI would also be a good feature, with a
UI to manage the sources and files, but with a less IO-intensive way to
store and access the metadata, and to handle the (update) files properly
during download and on deletion.
Some points on what is missing and what may be implemented I wrote on
the wiki page.
But I would generally object to implementing any function that breaks
current security features, like MITM SSL, or to reimplementing squid
functionality that is already handled better inside squid.
Dynamic content is, by its nature, handled better inside squid and its
logic. Implementing features that try to cache social content, like the
one Fajar mentioned, is a bad idea, unless the content can be identified
and checked over a longer time; otherwise we would be implementing a
second LRU cache inside squid, which already exists.
Caching FB content over a longer time is almost pointless, because they
frequently change their interfaces, paths, and naming, e.g. to prevent
hackers from misusing their platform.
With that in mind, there is only a small part of the content that can be
cached, and it currently is (by squid).
Kind regards,
Ingo
2013/3/7 Bernhard Bitsch <Bernhard.Bitsch(a)gmx.de>:
>
>
>> Sent: Wednesday, 6 March 2013 at 23:08
>> From: "Michael Tremer" <michael.tremer(a)ipfire.org>
>> To: "Bernhard Bitsch" <Bernhard.Bitsch(a)gmx.de>
>> Cc: "development(a)lists.ipfire.org" <development(a)lists.ipfire.org>
>> Subject: Re: Aw: Fwd: Update-Accelerator 3.0
>>
>> On Wed, 2013-03-06 at 22:23 +0100, Bernhard Bitsch wrote:
>>
>> > I don't think we should build a new solution based on squid's
>> > caching either.
>> >
>> > The existing Update Accelerator is written as a rewriter module
>> > for squid.
>> >
>> > This model is strong enough to realize the function "caching of
>> > frequent file requests".
>>
>> Before we jump right ahead into discussing technical details, I would
>> like someone to check whether we can easily control squid's cache to
>> store our files, so that we don't have to manage our own.
>>
>>
> No problem. But this solution must give us the possibility to manage the file store from the WUI.
> I don't want to lose this feature.
>
>> > My first idea for a redesign of the accelerator was to generalize
>> > the conditions for caching.
>> >
>> > At the moment, all conditions can be described by the pattern
>> >
>> > if URI matches set of (sample URIs and REs)_1 and URI does not
>> > match set of (sample URIs and REs)_2 then
>> >
>> > check(URI)
>> >
>> > fi
>> >
>> > This can be enhanced if the sets of URIs and REs are condensed to
>> > two regular expressions for each caching class, currently called
>> > "vendor".
>> >
>> > Then the check for caching is just a loop over all classes.
>> >
>> > A second enhancement can be achieved if the most frequently
>> > requested checks are made first. The loop terminates at the first
>> > match.
>>
>> The latest version of PCRE comes with a fast JIT compiler for regular
>> expressions. We should take advantage of that instead of running
>> through loops.
>>
>
> A JIT compiler does not make the loops avoidable (Perl uses one, too):
> the storage application must loop over the various categories.
> From a short look at PCRE, I could not find a way to efficiently assemble several single REs/URIs into one.
> This is necessary if we want the user to be able to extend the rule set. A main problem in the current implementation is extending it by adding a new alternative.
>
> - Bernhard
> _______________________________________________
> Development mailing list
> Development(a)lists.ipfire.org
> http://lists.ipfire.org/mailman/listinfo/development
* Re: Re: Fwd: Update-Accelerator 3.0
2013-03-07 13:15 ` Jörn-Ingo Weigert
@ 2013-03-12 12:25 ` Michael Tremer
2013-03-12 12:44 ` Aw: " Bernhard Bitsch
0 siblings, 1 reply; 5+ messages in thread
From: Michael Tremer @ 2013-03-12 12:25 UTC (permalink / raw)
To: development
[-- Attachment #1: Type: text/plain, Size: 696 bytes --]
Hey,
Please, guys, let's not drop the ball on this.
I have seen that some discussed this topic on the wiki pages, which
is ... interesting, but please let's keep this in an orderly fashion.
What I want you to do is write a list on the wiki of all the features
we want, and which features from the current version we don't want or
need anymore. Maybe Jörn-Ingo can make a list of the current features.
After that, we will agree on the proposed features and see how we can
implement them. We are not going into too much technical detail until we
have reached that point.
And once we have that finished, we will start planning who implements
what, and so on. Details when we get there.
-Michael
* Aw: Re: Re: Fwd: Update-Accelerator 3.0
2013-03-12 12:25 ` Michael Tremer
@ 2013-03-12 12:44 ` Bernhard Bitsch
0 siblings, 0 replies; 5+ messages in thread
From: Bernhard Bitsch @ 2013-03-12 12:44 UTC (permalink / raw)
To: development
[-- Attachment #1: Type: text/plain, Size: 364 bytes --]
Michael,
basically you're right, but development is already underway at the moment,
and the intermediate results aren't too bad.
Hence my more technical details.
And yes, we should collect the proposals for features; mainly the topic "source and kind of cached material" is relevant for the decision about the technical implementation.
-Bernhard
end of thread, other threads:[~2013-03-12 12:44 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
[not found] <trinity-4fddf3b5-ccc4-4ab6-aa82-e7c38081966d-1362605012586@3capp-gmx-bs43>
2013-03-06 22:08 ` Aw: Fwd: Update-Accelerator 3.0 Michael Tremer
2013-03-07 9:36 ` Aw: " Bernhard Bitsch
2013-03-07 13:15 ` Jörn-Ingo Weigert
2013-03-12 12:25 ` Michael Tremer
2013-03-12 12:44 ` Aw: " Bernhard Bitsch