@Devel: You are also welcome to share some ideas for the upcoming version of update-accelerator
Hello Fajar,
I would like to say hello and get in contact with you.
Michael from IPFire.org told me that you may have some requirements or ideas for an improved update accelerator, because we plan to extend the current version (at this point it looks like a complete rewrite o_O).
So how is your current situation? It would be great if you could give some details about your current system, your users, what you use the update accelerator for, the kinds of files you currently store, and the space they currently use.
What could be improved or changed in an upcoming version? What bothers you at the moment?
To collect ideas & requirements I've added a wiki page on wiki.ipfire.org where ideas & current problems can be shared: http://wiki.ipfire.org/en/development/update-accelerator
It would be great if you could participate in building a great new version!
Kind regards,
Joern-Ingo Weigert
Community Developer, IPFire.org
Hello,
On Tue, 2013-03-05 at 13:50 +0100, Jörn-Ingo Weigert wrote:
@Devel: You are also welcome to share some ideas for the upcoming version of update-accelerator
Here is an idea from me: I would like to rename "Update Accelerator", because:
- It is hard to spell and a lot of people mispronounce it.
- It is a very long word. That's not only inconvenient to type but also bloats navigation bars and so on.
- It really does not make clear what the software does, because it does not accelerate the download of an update (at least not in the first place), so it would be better to use the word "cache" here.
- If we enhance the functionality it won't be limited to "updates", so that won't fit anymore either.
- We don't want to confuse it with the piece of software that is an add-on for IPCop.
I could find even more reasons, but I haven't found a better name so far.
To collect ideas & requirements I've added a wiki page on wiki.ipfire.org where ideas & current problems can be shared: http://wiki.ipfire.org/en/development/update-accelerator
Please move this to the "new" development area over here:
http://wiki.ipfire.org/devel/proxy/update-accelerator
The pages in /en/development are pretty outdated or wrong. We also wanted to move the development stuff away from the part of the wiki that is translated, because development documentation is not worth translating given the limited number of people who read it.
-Michael
Hmm, so here are some naming ideas:
- file store (which describes it exactly)
- update cache (not really, because it is not limited to updates)
- cache accelerator (not really, because it doesn't really speed up the first request, as Michael mentioned)
- file speedster
Any other ideas?
Ingo
Hello there, replying inline
Any other ideas?
Hyper Cache
Hello Fajar,
I would like to say hello and get in contact with you.
Hello there, nice to get in contact with you too. Greeting from Indonesia.
Michael from IPFire.org told me that you may have some requirements or ideas for an improved update accelerator
Firstly, this idea is not part of the update accelerator thingie.
because we plan to extend the current version (at this point it looks like a complete rewrite o_O)
complete rewrite, maybe :)
My idea basically comes from squid 2.7's ability to cache dynamic content using the built-in storeurl feature. http://www.squid-cache.org/Doc/config/storeurl_rewrite_program/
The wiki has an example of how to use storeurl: http://wiki.squid-cache.org/Features/StoreUrlRewrite
We already know that squid 2.7 is obsolete - but this feature is extremely useful for slow internet users (just like me in Indonesia, where bandwidth is expensive). The built-in storeurl feature in squid 2.7 can cache, or manipulate the caching of, dynamically hosted content (dynamic content). Example for Facebook's CDN:
http://profile.ak.fbcdn.net/hprofile-ak-prn1/s160x160/572710_100002038935636...
The hosted image/content is identical and statically hosted on different Facebook CDN servers - Michael's pic :) Once squid has cached one of these pictures, all requests for the same picture from hprofile-ak-prn1, hprofile-ak-prn2, hprofile-ak-prn3 ... hprofile-ak-prnX will result in cache hits - squid does not need to fetch the same content from different CDN URLs, since it is already in the cache and the request gets rewritten by storeurl. All content from Facebook such as javascript, css, images, even sound and videos will have a very high chance of getting hits from squid.
This method works on almost all websites serving dynamic content to their visitors: Youtube videos (all resolutions), blogger.com content, online game patch files, google maps, ads, imeem, etc. This is something that you cannot do with squid 3.x.
Configuration and storeurl example to cache Youtube and some CDNs (works for me until now - please read the comments too): http://tumbleweed.org.za/2009/02/18/fun-squid-and-cdns
This one is written in perl; fbcdn seems not to work here: http://code.google.com/p/tempat-sampah/source/browse/storeurl.pl?r=17
Development of the same feature for squid 3 is on the way: http://wiki.squid-cache.org/Features/StoreID
Another approach to make it work on squid 3 is using ICAP - I'm not familiar with this one since I never used it. You can see some references about using ICAP to cache dynamic content here (to me it seems difficult to do): http://www.squid-cache.org/mail-archive/squid-users/201206/0074.html
All objects will be cached or purged following the refresh_pattern directives.
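For illustration, here is a minimal sketch of what such a storeurl helper looks like, assuming squid 2.7's one-request-per-line helper protocol; the hostname pattern and the .squid.internal store domain follow the examples linked above but are illustrative, not a production rule set:

    #!/usr/bin/perl
    # Minimal storeurl_rewrite helper sketch for squid 2.7.
    # squid writes one request per line to stdin ("URL ip/fqdn ident method ...")
    # and expects one reply line: the URL to use as the cache key.
    $| = 1;                      # unbuffered - squid waits for each reply

    while (<STDIN>) {
        my ($url) = split;       # first whitespace-separated field is the URL

        # Collapse the numbered Facebook CDN hosts (hprofile-ak-prn1, -prn2, ...)
        # onto one canonical fake host, so identical objects share one cache key.
        if ($url =~ m{^http://hprofile-ak-[a-z]+\d+\.fbcdn\.net/(.*)$}) {
            print "http://fbcdn.net.squid.internal/$1\n";
        } else {
            print "$url\n";      # no match: keep the original URL as cache key
        }
    }

Wired up with storeurl_rewrite_program /path/to/storeurl.pl (plus a storeurl_access rule) in squid.conf, every host variant of the same object then hits the single cached copy.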
So how is your current situation? It would be great if you could give some details about your current system, your users, what you use the update accelerator for, the kinds of files you currently store, and the space they currently use.
I use IPFire to serve office and wireless internet clients (3 mbps ADSL shared with 20-60 clients; you can expect torrents and other bandwidth-hungry users. duh!). It just works like a charm. The update accelerator also comes in handy - it helps to boost some antivirus and OS updates. I added the extra file extension .vpx for Avast 7.x updates - it's used by most of my wireless users. I use a 1 TB harddisk for the proxy cache and update accelerator.
What could be improved or changed in an upcoming version?
Dynamic content caching inside IPFire.
What bothers you at the moment?
Bandwidth limitation is our biggest problem - not only for me, but for most internet users in Indonesia.
Sorry for the late response, and thanks for your kind attention. Terima kasih (thank you - in Bahasa Indonesia).
- Fajar
Hello,
On Wed, 2013-03-06 at 16:57 +0800, Fajar Ramadhan wrote:
Hello there, replying inline
Any other ideas?
Hyper Cache
That's a possibility. I didn't know that anyone still uses the word hyper :D
Michael from IPFire.org told me that you may have some requirements or ideas for an improved update accelerator
Firstly, this idea is not part of the update accelerator thingie.
Well, we are thinking about a rewrite, so every idea is welcome. Nobody promises that it will be implemented, but while we are searching for the real purpose of the update accelerator, feel free to write down anything on your mind that you think is worth considering.
because we plan to extend the current version (at this point it looks like a complete rewrite o_O)
complete rewrite, maybe :)
?
My idea basically comes from squid 2.7's ability to cache dynamic content using the built-in storeurl feature. http://www.squid-cache.org/Doc/config/storeurl_rewrite_program/
As we are looking into the (far) future, we cannot possibly stick to an old version of squid. Even version 3.1, which currently runs in IPFire 2, is "old" right now.
Maybe it is also a good idea to design this without assuming squid as the default thing to work with. It should be possible to drop squid and use another proxy server - although I really don't have plans for that right now, because squid is the best proxy server one can have.
The wiki has an example of how to use storeurl: http://wiki.squid-cache.org/Features/StoreUrlRewrite
We already know that squid 2.7 is obsolete - but this feature is extremely useful for slow internet users (just like me in Indonesia, where bandwidth is expensive). The built-in storeurl feature in squid 2.7 can cache, or manipulate the caching of, dynamically hosted content (dynamic content). Example for Facebook's CDN:
It is interesting that this has not been ported to squid 3.x. Apparently, the reason is that the implementation was poorly written, so people thought about replacing it entirely. It also looks like there are not many users of this feature.
Once squid has cached one of these pictures, all requests for the same picture from hprofile-ak-prn1, hprofile-ak-prn2, hprofile-ak-prn3 ... hprofile-ak-prnX will result in cache hits - squid does not need to fetch the same content from different CDN URLs, since it is already in the cache and the request gets rewritten by storeurl. All content from Facebook such as javascript, css, images, even sound and videos will have a very high chance of getting hits from squid.
Looking at the user data you provided further below, the important stuff to cache is big files. That is not only video and all sorts of downloads: nowadays the javascript code of sites like Facebook is one or two megabytes in size*.
* Didn't check. Read this somewhere, some time ago.
What I get from this is that we should design the rewrite to literally cache anything.
A technical question from me: why can't we use squid's internal cache for this, instead of coding our own caching proxy that is then queried by the real caching proxy? I think even with a very fast implementation of our own, squid will always be much faster.
This method works on almost all websites serving dynamic content to their visitors: Youtube videos (all resolutions), blogger.com content, online game patch files, google maps, ads, imeem, etc. This is something that you cannot do with squid 3.x.
This cannot be done with squid 3 AT THE MOMENT.
Another approach to make it work on squid 3 is using ICAP - I'm not familiar with this one since I never used it. You can see some references about using ICAP to cache dynamic content here (to me it seems difficult to do): http://www.squid-cache.org/mail-archive/squid-users/201206/0074.html
As pointed out earlier, I like ICAP. The protocol has a lot of advantages and makes us independent from squid (not in order to replace it, but so we are not dependent on a certain version - they all talk ICAP).
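For reference, the squid side of an ICAP hookup is only a few lines of configuration; a minimal sketch using squid 3.x directives, where the service name, address, port and URI are illustrative assumptions and the actual rewriting/caching logic would live in the ICAP server itself:

    # squid.conf sketch - attach a response-modification ICAP service.
    # Service name, address, port and URI are illustrative assumptions.
    icap_enable on
    icap_service dyncache respmod_precache icap://127.0.0.1:1344/respmod
    adaptation_access dyncache allow all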
Can someone find out whether somebody has already implemented this kind of thing?
Terima kasih,
-Michael
Hello,
let me introduce myself: I am the system admin for about 160-180 people, with three squids (including 2x ipfire) in my network and many more squids in the parent company's connected intranet. We have three connections to the internet - two separate lines (each one secured by an ipfire) and another route to the internet over the intranet:
- 1x ADSL 2MBit
- 1x ADSL 16/1MBit
- 1x internet via intranet, 8/8 MBit (but shared with business-critical services)
So as you might expect, it is hard to decide which request goes where and which content should be cached where. I agree with Fajar's intention! We need much more dynamic content caching capability.
Please check this: http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
Here you can read about Store_Url_Rewrite:
<Quote>
Pros:
- simple to implement.
Cons:
- works only with the squid2 tree.
- The check is done based only on the requested URL; in the case of a 300 status code response the URL will be cached and can cause an endless loop.
- There is no way to interact with the cached key in any of squid's cache interfaces such as ICP/HTCP/Cache Manager (http://wiki.squid-cache.org/Features/CacheManager); the resource is a GHOST. (I wrote an ICP client and was working on an HTCP switch/hub to monitor and control live cache objects.)
- To solve the 300 status code problem a specific patch was proposed but wasn't integrated into squid.
- The 300 status code problem can be solved by ICAP RESPMOD rewriting.
</Quote>
It is effectively deprecated and listed under "Old methods".
I think the way to go is a purpose-written add-on (helper) for squid which does the needed work. Here are some points to start reading: http://wiki.squid-cache.org/Features/AddonHelpers#HTTP_Redirection http://www.squid-cache.org/Doc/config/url_rewrite_program/
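To make that concrete, such a helper is attached with a couple of squid.conf directives; a minimal sketch, where the helper path, child count and acl are illustrative assumptions:

    # squid.conf sketch - attach an external URL-rewrite helper.
    # Helper path, child count and acl are illustrative assumptions.
    url_rewrite_program /usr/local/bin/rewrite-helper.pl
    url_rewrite_children 10
    # only pass requests for hosts we actually want to canonicalize
    acl rewrite_hosts dstdomain .fbcdn.net
    url_rewrite_access allow rewrite_hosts
    url_rewrite_access deny all

One caveat: url_rewrite_program changes the URL squid actually fetches, while squid 2.7's storeurl_rewrite_program only changed the cache key - that difference is exactly why a plain rewrite helper does not reproduce the old behaviour on squid 3.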
Okay, that was a lot of text without providing any actual work, but squid is not easy and should be planned wisely. You have to care about a lot of things like privacy, bandwidth, filesystem, and cache methods, and of course about "how" your users use the web, before you can optimize it.
In the company I work for, it is as follows: most traffic is secured (HTTPS). Just think of hosting providers like Dropbox, Google Drive, and so on. There are probably files which are needed by many people and downloaded by each of them, and you can't legally cache them. That's a shame.
My number one source of traffic per week is dropbox, but I can't cache anything at the moment... A way of caching is needed which breaks (!) the privacy barrier for big files only (maybe >200KB), then stores each file once and delivers it per request, no matter the URL.
Well, I just wanted to say we need to improve the caching itself, and hopefully I can be of some help with this.
Best Regards
Jan
Hi,
It is great that you are taking part in the discussion.
On Wed, 2013-03-06 at 20:13 +0100, Jan Behrens wrote:
I agree with Fajar's intention! We need much more dynamic content caching capability.
No, I think that is exactly the wrong way. Dynamic content should not be cached because it is _dynamic_. It's very unlikely that someone else will get the same response to a request.
The only thing that makes sense is to cache _static_ content like avatars, videos, and (big) pictures. But actually that is the proxy's job, which apparently does not work very well for some things.
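(For the static case, squid's refresh_pattern machinery - which Fajar mentioned above - already allows fairly aggressive caching of big objects; a sketch where the file-extension pattern and lifetimes are illustrative assumptions:)

    # squid.conf sketch - cache big static downloads aggressively.
    # File-extension pattern and lifetimes (in minutes) are illustrative.
    refresh_pattern -i \.(mp4|flv|exe|zip|iso)$ 10080 90% 43200 override-expire ignore-no-store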
I think the way to go is a purpose-written add-on (helper) for squid which does the needed work. Here are some points to start reading: http://wiki.squid-cache.org/Features/AddonHelpers#HTTP_Redirection
What work? Re-implementing an extra cache is not a good idea. At least not for small files.
In the company I work for, it is as follows: most traffic is secured (HTTPS). Just think of hosting providers like Dropbox, Google Drive, and so on. There are probably files which are needed by many people and downloaded by each of them, and you can't legally cache them. That's a shame.
Files transferred over HTTPS cannot be cached by the proxy. This is not for legal reasons; it is simply technically impossible.