Hi,

it is great that you take part in the discussion.

On Wed, 2013-03-06 at 20:13 +0100, Jan Behrens wrote:
> I agree with Fajar's intention!
> We need a way more dynamic content caching capability.

No, I think that is exactly the wrong way. Dynamic content should not
be cached because it is _dynamic_. It is very unlikely that someone
else will get the same response to a request. This is also what
squid's stock configuration assumes:

    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

The only thing that makes sense is to cache _static_ content like
avatars, videos and (big) pictures. But actually this is the proxy's
task, which apparently does not work very well for some things.

> I think the way to go is an addon written for squid which does the
> needed work.
> Here is a point to start reading about it:
> http://wiki.squid-cache.org/Features/AddonHelpers#HTTP_Redirection

What work? Re-implementing an extra cache is not a good idea, at
least not for small files. (A minimal sketch of such an addon helper
follows below.)

> In the company I work in, it is as follows:
> Most of the traffic is secured (HTTPS).
> Just think about hosting providers like Dropbox, Google Drive and
> so on. There probably are files which are needed by many people and
> then downloaded by them, and you can't legally cache them. That's a
> shame.

Files transferred over HTTPS cannot be cached by the proxy. This is
not for legal reasons; it is simply technically impossible.
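For reference, the addon helper interface that Jan's link describes is
a very simple line protocol: squid writes one request per line to the
helper's stdin and reads the decision back on stdout. Here is a minimal
sketch of such a rewriter, assuming the classic (pre-3.4) helper
protocol; the host "updcache.local" and the list of file types are
made-up examples for illustration, not anything that exists in IPFire:

    #!/usr/bin/env python3
    # Toy url_rewrite_program helper - illustrative sketch only.
    # Wired into squid.conf with something like:
    #   url_rewrite_program /usr/local/bin/rewrite.py
    import sys
    from urllib.parse import urlparse

    # File types worth bouncing to a local mirror (an assumption)
    CACHED_SUFFIXES = (".rpm", ".deb", ".iso", ".exe")

    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        url = fields[0]  # first field on each line is the URL
        path = urlparse(url).path
        if path.endswith(CACHED_SUFFIXES):
            # "302:" asks squid to redirect the client to the
            # hypothetical local cache host
            sys.stdout.write("302:http://updcache.local%s\n" % path)
        else:
            sys.stdout.write("\n")  # empty line = leave URL alone
        sys.stdout.flush()  # squid waits for one reply per request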
>
> 2013/3/6 Michael Tremer <michael.tremer@ipfire.org>
> Hello,
>
> On Wed, 2013-03-06 at 16:57 +0800, Fajar Ramadhan wrote:
> > Hello there, replying inline
> >
> > > Any other ideas?
> >
> > Hyper Cache
>
>
>         That's a possibility. I didn't know that anyone is still
>         using the word hyper :D
>
> > > >> Michael from IPFire.org told me that you may have some
> > > >> requirements or ideas for an improved update accelerator
> >
> > Firstly, this idea is not part of the update accelerator thingie.
>
>
>         Well, we are thinking about a rewrite, so every idea is
>         welcome. Nobody promises that it will be implemented, but in
>         the process of searching for the real purpose of the update
>         accelerator, feel free to write anything on your mind if you
>         think it is worth considering.
>
> > > >> because we plan to extend the current version (at this point
> > > >> it looks like a complete rewrite o_O)
> >
> > A complete rewrite, maybe :)
>
> ?
>
> > My idea basically comes from squid 2.7's ability to cache dynamic
> > content using the built-in storeurl feature:
> > http://www.squid-cache.org/Doc/config/storeurl_rewrite_program/
>
>
>         As we are looking into the (far) future, we cannot possibly
>         stick to an old version of squid. Even version 3.1, which
>         currently runs in IPFire 2, is "old" right now.
>
>         Maybe it is also a good idea to design this without
>         considering squid as the default thing to work with. It
>         should be possible to drop squid and use another proxy
>         server - although I really don't have plans for that right
>         now, because squid is the best proxy server one can have.
>
> > The wiki has an example of how to use storeurl:
> > http://wiki.squid-cache.org/Features/StoreUrlRewrite
> >
> > We already know that squid 2.7 is obsolete - but this feature was
> > extremely useful for users on slow connections (just like me in
> > Indonesia, where bandwidth is expensive). The built-in storeurl
> > feature in squid 2.7 can manipulate the caching of dynamically
> > hosted content (dynamic content). Take Facebook's CDN as an
> > example:
>
>
>         It is interesting that this has not been ported to squid
>         3.x. Apparently, the reason is that the implementation was
>         poorly written, and so people thought about replacing it
>         entirely. It also looks like there are not many users of
>         this feature.
>
> > If squid has already cached one of these pictures, then all the
> > same pictures from hprofile-ak-prn1, hprofile-ak-prn2,
> > hprofile-ak-prn3 ... hprofile-ak-prnX will result in cache hits -
> > squid does not need to fetch the same content from different CDN
> > URLs, since it is already in the cache and the request got
> > rewritten by storeurl. All content from Facebook such as
> > javascript, css, images, even sounds and videos will have a very
> > high chance of getting hits from squid.
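To make the quoted idea concrete: a storeurl helper is a tiny program
that maps each requested URL onto a canonical "store URL", which squid
then uses as the cache key. A rough sketch in Python, assuming squid
2.7's classic helper protocol - the regex, the canonical host and the
squid.conf lines are illustrative guesses, not Facebook's real CDN
layout:

    #!/usr/bin/env python3
    # Toy storeurl_rewrite_program helper for squid 2.7.
    # Wired up with something like:
    #   storeurl_rewrite_program /usr/local/bin/storeurl.py
    #   acl store_rewrite_list urlpath_regex ^/hprofile
    #   storeurl_access allow store_rewrite_list
    import re
    import sys

    # hprofile-ak-prn1 ... hprofile-ak-prnX -> one shared cache key
    CDN = re.compile(r"^http://hprofile-ak-[a-z0-9]+\.fbcdn\.net(/.*)$")

    def store_url(url):
        m = CDN.match(url)
        if m:
            # made-up internal-only host marking the shared key
            return "http://hprofile-ak.fbcdn.net.SQUIDINTERNAL" + m.group(1)
        return url  # no match: keep the original cache key

    # Input: "URL client_ip/fqdn user method ..." - one per line
    for line in sys.stdin:
        fields = line.split()
        if fields:
            sys.stdout.write(store_url(fields[0]) + "\n")
            sys.stdout.flush()

With something like this, identical objects fetched from any of the
numbered CDN hosts collapse onto one cache entry, so the second
request is a hit regardless of which mirror served it.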
>
>
>         Looking at the usage data you provided further below, the
>         important stuff to cache is big files. That is not only
>         video but also all sorts of downloads. Nowadays the
>         javascript code of sites like Facebook is one or two
>         megabytes in size*.
>
>         * Didn't check. Read this somewhere, some time ago.
>
>         What I get from this is that we should design the rewrite to
>         literally cache anything.
>
>         A technical question from me: why can't we use squid's
>         internal cache for this, instead of coding our own caching
>         proxy that is then queried by the real caching proxy? I
>         think that even with a very fast implementation of our own,
>         squid will always be much faster.
>
> > This method works on almost all websites that serve dynamic
> > content to their visitors: Youtube videos (all resolutions),
> > blogger.com content, online game patch files, google maps, ads,
> > imeem, etc. This is something that you cannot do with squid 3.x.
>
>
> This cannot be done with squid 3 AT THE MOMENT.
>
> > Another approach to make it work on squid 3 is using ICAP - I'm
> > not familiar with this one since I have never used it. You can
> > find some reference on using ICAP to cache dynamic content here
> > (to me it seems difficult to do):
> > http://www.squid-cache.org/mail-archive/squid-users/201206/0074.html
>
>
>         As pointed out earlier, I like ICAP. The protocol has a lot
>         of advantages and makes us independent from squid (not in
>         order to replace it, but so that we are not dependent on a
>         certain version - they all talk ICAP).
>
>         Can someone find out whether somebody has already
>         implemented this kind of thing?
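For reference, the squid side of an ICAP setup is only a few
configuration lines, and the service itself can live anywhere on the
network. Below is a very rough sketch of the smallest possible ICAP
service - one that answers OPTIONS and then waves every response
through unmodified with a 204. It is a protocol-plumbing illustration
only: it ignores Preview handling and the encapsulated HTTP bodies
that a real RFC 3507 implementation has to deal with, and the service
name and port are made up:

    #!/usr/bin/env python3
    # Minimal "do nothing" ICAP service - illustrative sketch only.
    # squid would point at it with something like:
    #   icap_enable on
    #   icap_service dyncache respmod_precache icap://127.0.0.1:1344/respmod
    #   adaptation_access dyncache allow all
    import socketserver

    ISTAG = b'"dyncache-0.1"'  # clients cache decisions under this tag

    class ICAPHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Read the ICAP request line plus headers; a real service
            # must also consume the encapsulated HTTP message.
            request_line = self.rfile.readline()
            while self.rfile.readline() not in (b"\r\n", b"\n", b""):
                pass
            if request_line.startswith(b"OPTIONS"):
                self.wfile.write(
                    b"ICAP/1.0 200 OK\r\n"
                    b"Methods: RESPMOD\r\n"
                    b"ISTag: " + ISTAG + b"\r\n"
                    b"Allow: 204\r\n"
                    b"Encapsulated: null-body=0\r\n\r\n")
            else:
                # 204 = "looked at it, change nothing"
                self.wfile.write(
                    b"ICAP/1.0 204 No Content\r\n"
                    b"ISTag: " + ISTAG + b"\r\n\r\n")

    if __name__ == "__main__":
        socketserver.ThreadingTCPServer(("0.0.0.0", 1344),
                                        ICAPHandler).serve_forever()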
>
>         Thank you,
> -Michael