From: Michael Tremer <michael.tremer@ipfire.org>
To: development@lists.ipfire.org
Subject: Re: Update-Accelerator 3.0
Date: Wed, 06 Mar 2013 15:09:58 +0100 [thread overview]
Message-ID: <1362578998.4044.22.camel@rice-oxley.tremer.info> (raw)
In-Reply-To: <SNT136-W471D0DEDB59EA0B88832CEC5E40@phx.gbl>
Hello,
On Wed, 2013-03-06 at 16:57 +0800, Fajar Ramadhan wrote:
> Hello there, replying inline
>
> > Any other ideas?
>
> Hyper Cache
That's a possibility. I didn't know that anyone still uses the word
hyper :D
> > >> Michael from IPFire.org told me that you may have some
> > >> requirements or ideas for an improved update accelerator
>
> Firstly, this idea is not part of the update accelerator thingie.
Well, we are thinking about a rewrite, so every idea is welcome. Nobody
can promise that it will be implemented, but while we are searching for
the real purpose of the update accelerator, feel free to write down
anything on your mind that you think is worth considering.
> > >> cause we plan to extend the
> > >> current version (at this point it look like a complete rewrite
> > >> o_O)
>
> complete rewrite, maybe :)
?
> My idea is basically out from squid 2.7 abilities to cache dynamic
> contents using built-in storeurl feature.
> http://www.squid-cache.org/Doc/config/storeurl_rewrite_program/
As we are looking into the (far) future, we cannot possibly stick to an
old version of squid. Even version 3.1, which currently ships in
IPFire 2, is already "old".
Maybe it is also a good idea to design this without assuming squid as
the proxy it works with. It should be possible to drop squid and use
another proxy server - although I really don't have any plans to do
that right now, because squid is the best proxy server one can have.
> The wiki has an example of how to use storeurl:
> http://wiki.squid-cache.org/Features/StoreUrlRewrite
>
> We already know that squid 2.7 is obsolete - but this feature was
> extremely useful for users on slow connections (just like me in
> Indonesia, where bandwidth is expensive). The built-in storeurl
> feature in squid 2.7 can manipulate how dynamically hosted content
> (dynamic content) is cached. Example for Facebook's
> CDN :
It is interesting that this has not been ported to squid 3.x.
Apparently, the reason is that the implementation was poorly written,
so people thought about replacing it entirely. It also looks like this
feature does not have many users.
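For anyone who has not seen one: a storeurl helper is just a
line-oriented filter on stdin/stdout, which is also why the feature was
easy to misuse. The following is a minimal sketch, not the actual squid
implementation; the CDN pattern is made up for illustration, and it
assumes the squid 2.7 convention that the helper reads one request per
line and answers with the canonical store URL (or echoes the URL back
unchanged):

```python
#!/usr/bin/env python
# Sketch of a squid 2.7 storeurl_rewrite helper. squid writes one request
# per line to stdin (roughly "URL client_ip/fqdn ident method") and reads
# the canonical store URL back from stdout, one line per request.
import re
import sys

# Illustrative pattern only: collapse the numbered Facebook CDN shards
# (hprofile-ak-prn1, hprofile-ak-prn2, ...) into a single store key so
# identical objects from different shards hit the same cache entry.
FBCDN = re.compile(r'^(https?://)hprofile-ak-prn\d+(\.[^/]+/.+)$')

def store_url(url):
    m = FBCDN.match(url)
    if m:
        return m.group(1) + 'hprofile-ak-prn' + m.group(2)
    return url  # no rewrite: store under the original URL

def main():
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(store_url(fields[0]) + '\n')
        sys.stdout.flush()  # squid waits for each answer, so do not buffer

if __name__ == '__main__':
    main()
```

Such a helper would be wired up in squid.conf (squid 2.7 only) via
storeurl_rewrite_program and a matching storeurl_access rule.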
> If squid has already cached one of these pictures, then all of the
> same pictures from hprofile-ak-prn1, hprofile-ak-prn2, hprofile-ak-prn3
> ..... hprofile-ak-prnX will result in a cache hit - squid does not
> need to fetch the same content from different CDN URLs, since it is
> already in the cache and the request gets rewritten by storeurl. All
> content from Facebook such as javascript, css, images, even sound and
> videos will have a very high chance of getting hits from squid.
Looking at the usage data you provided further below, the important
stuff to cache is big files. That is not only video and all sorts of
downloads. Nowadays the javascript code of sites like Facebook is one
or two megabytes* in size.
* Didn't check. Read this somewhere, some time ago.
What I get from this is that we should design the rewrite to literally
cache anything.
A technical question from me: Why can't we use squid's internal cache
for this, instead of coding our own caching proxy that is then queried
by the real caching proxy? I think that even against a very fast
implementation of our own, squid will always be much faster.
> This method works on almost all sites that serve dynamic content to
> their visitors: Youtube videos (all resolutions), blogger.com
> contents, online games patch files, google maps, ads, imeem, etc.
> This is something that you cannot do with squid 3.x.
This cannot be done with squid 3 AT THE MOMENT.
> Another approach to make it work on squid 3 is using ICAP - I'm not
> familiar with this one since I never used it. You can see some
> reference about ICAP to cache dynamic contents here (for me it seems
> difficult to do it) :
> http://www.squid-cache.org/mail-archive/squid-users/201206/0074.html
As pointed out earlier, I like ICAP. The protocol has a lot of
advantages and makes us independent of squid (not in order to replace
it, but so that we are not tied to a certain version - they all talk
ICAP).
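To give a feel for how small the ICAP surface is: below is a minimal,
hypothetical sketch of the OPTIONS handshake (RFC 3507) that any ICAP
service has to answer before the proxy sends it work. The service name
is made up, and a real REQMOD service would of course also have to
parse the encapsulated HTTP request - this only shows the capability
exchange:

```python
import socket
import threading

# Minimal ICAP OPTIONS response (RFC 3507). "demo-store-rewriter" is a
# made-up service name; Encapsulated: null-body=0 says there is no body.
OPTIONS_RESPONSE = (
    b"ICAP/1.0 200 OK\r\n"
    b"Methods: REQMOD\r\n"
    b"Service: demo-store-rewriter\r\n"
    b"Encapsulated: null-body=0\r\n"
    b"\r\n"
)

def serve_one(srv):
    # Accept a single connection, read the ICAP request head up to the
    # blank line, and answer an OPTIONS request with our capabilities.
    conn, _ = srv.accept()
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
    if buf.startswith(b"OPTIONS "):
        conn.sendall(OPTIONS_RESPONSE)
    conn.close()

def start_server():
    # Bind an ephemeral port on localhost and serve one request in the
    # background; returns the port number the client should connect to.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=serve_one, args=(srv,), daemon=True).start()
    return srv.getsockname()[1]
```

squid would point at such a service with its icap_service directive;
the actual rewriting logic would then live in the REQMOD handler.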
Can someone find out whether somebody has already implemented this kind
of thing?
Terima kasih,
-Michael