On Mon, 2018-04-16 at 17:25 +0200, Peter Müller wrote:
Hello,
[...]
(a) Security
Whenever you are updating an application or an entire operating system, security is the most important aspect: an attacker must not be able to manipulate update packages, fake information about the current patchlevel, or anything similar.
In the past, we saw several incidents here - perhaps the most famous one was "Flame" (also known as "Flamer" or "Skywiper"), a sophisticated malware detected in Q1/2012 which spread via Windows Update with a valid signature - and it turned out that strong cryptography is a very good way to become more robust here.
In IPFire 2.x, we recently switched from SHA1 to SHA512 for the Pakfire signatures, and plan to replace the signing key (currently a 1024-bit DSA key) with a more secure one in Core Update 121.
So far, so good.
Keys in Pakfire 3 are usually 4096-bit RSA keys, and we sign and hash with SHA512 only.
That is practically the best we can do right now.
I agree. ECC might become relevant here some day, too (RSA does not scale well), but first things first.
I think we could technically use ECC here since we control all of the stack, but I think RSA could still have better compatibility.
Speaking about IPFire 3.x, we plan to download updates over HTTPS only (see below for why). However, there are still a few questions left:
We probably need to encourage some more people to move their mirrors to HTTPS, too, before we can go large. But I would prefer HTTPS only with fewer mirrors over more mirrors with HTTP only.
However, this is only for privacy and not for security.
I agree.
(i) Should we sign the mirror list? In 3.x, using a custom mirror (e.g. in a company network) will be possible. If not specified, we use the public mirror infrastructure; a list of all servers and paths will be published, as it already is today.
The list is a bit more complex than that, but essentially serves the same purpose:
https://pakfire.ipfire.org/distro/ipfire3/repo/stable/mirrorlist?arch=x86_64
This is what it looks like right now.
It is not required at all to sign this list for the integrity of the package management system. The packages have a signature, and it does not matter if a package was downloaded from an untrustworthy source, since the signature is validated and either matches or it does not.
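To make that concrete, here is a rough verification sketch in Python (assuming a detached RSA/SHA512 signature and the "cryptography" package; function names and the wire format are made up for illustration, not Pakfire's actual API):

    # Sketch only: verify a downloaded package against a detached
    # signature, no matter which mirror served it. Assumes 4096-bit RSA
    # with PKCS#1 v1.5 padding and SHA512; Pakfire's real format may differ.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_package(package_bytes, signature, pubkey_pem):
        pubkey = serialization.load_pem_public_key(pubkey_pem)
        try:
            pubkey.verify(signature, package_bytes,
                          padding.PKCS1v15(), hashes.SHA512())
            return True
        except InvalidSignature:
            # The mirror is untrusted by design: a bad signature just
            # means we throw the download away and try the next mirror.
            return False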
However, for the privacy argument, I can understand that there are some arguments for signing it so that no man in the middle can add other mirrors and gather information from any downloading clients.
The mirror list, however, is downloaded over HTTPS, and therefore we have transport security. TLS can of course be man-in-the-middled.
Generally, I would like to allow users to download a package from a source that we do not know or verify. Debian makes a far stronger point towards that: they even ban HTTPS. They want bigger organizations to use proxy servers that cache data, and they want the opportunity to redirect clients back to any self-hosted mirrors. That I regard as completely out of scope for us, since we do not create anywhere near the traffic that Debian creates (because of both the size and the number of users of our distribution). I would also like to emphasize that we consider security first and bandwidth use second.
I consider the likelihood that an attacker inserts malicious mirrors here very small. A signature on the list would also only show that we have seen the same list that a client has downloaded.
When we add a mirror to our list, we do not conduct any form of audit, so it is even possible that some of the mirrors are compromised or configured in a fashion we would not prefer. That - by design - is not a problem for the security of Pakfire. But it is possible that people just swap the files on the servers. That is an attack vector we cannot remove unless we host all mirrors ourselves and never make any mistakes. Not going to happen.
Another point I see here: an attacker running an evil mirror might deny the existence of new updates by simply not publishing them.
Yes, this is an attack vector and an easy one.
We have a timestamp in the repository metadata, which is downloaded first. It also has a hash of the latest version of the package database. The client will walk along all mirrors until it can download it. The last place will be the base mirror, which will have it.
https://pakfire.ipfire.org/repositories/ipfire3/stable/x86_64/repodata/repom...
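A sketch of that walk, under the assumptions above (URL layout and names are only illustrative; the expected hash comes from the repository metadata):

    # Sketch: walk the mirror list until a copy of the package database
    # matching the hash from the repository metadata can be downloaded.
    # The base mirror is tried last as the guaranteed fallback.
    import hashlib
    import urllib.request

    BASE = "https://pakfire.ipfire.org/repositories/ipfire3/stable/x86_64"

    def fetch_database(mirrors, path, expected_sha512):
        for mirror in mirrors + [BASE]:
            try:
                with urllib.request.urlopen(mirror + "/" + path,
                                            timeout=10) as f:
                    data = f.read()
            except OSError:
                continue  # mirror down or unreachable: just move on
            if hashlib.sha512(data).hexdigest() == expected_sha512:
                return data
            # Stale or tampered copy: silently skip this mirror, too.
        raise RuntimeError("no mirror served a matching database")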
However, the repository metadata is not signed (as it would be in DNS), but I would argue that it should be.
It is kind of undefined what happens when no repository data can be downloaded at all, or none within an interval of about a week.
Of course, we might detect that sooner or later via a monitoring tool, but in combination with an unsigned mirror list this point becomes more relevant.
Monitoring is good. It ensures the quality of the mirroring. But the system itself needs to be resilient against this sort of attack.
Should we publish the current update state (called "Core Update" in 2.x, not sure if it exists in 3.x) via DNS, too? That way, we could avoid pings to the mirrors, so installations only need to connect in case an update has been announced.
They would only download the metadata from the main service, and there would be no need to re-download the database, which is large. We have to assume that people have a slow connection and that bandwidth is expensive.
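To illustrate the DNS idea, a client could check a TXT record first and only fetch the metadata when the version has changed. The record name and format here are entirely made up:

    # Sketch: publish the latest database version in a TXT record, so
    # clients only contact the mirrors when something actually changed.
    import dns.resolver  # pip install dnspython (>= 2.0 for resolve())

    def latest_update_state():
        answer = dns.resolver.resolve("_pakfire.ipfire.org", "TXT")
        # e.g. a single TXT string like "db-version=20180416"
        return answer[0].strings[0].decode()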
Pakfire 2 has the mirror list being distributed over the mirrors. Therefore it *is* signed.
Pakfire 3 has a different approach. A central service is creating that list on demand and tries to *optimise* it for each client. That means putting mirrors that are closer or have a bigger pipe to the top of the list. Not sure how good our algorithm is right now, but we can change it on the server-side at any time and changes on the list will propagate quicker than with Pakfire 2.
There are two points on which I have a different opinion: (a) If I got it right, every client needs to connect to this central server sometimes, which I consider quite bad for various reasons (privacy, missing redundancy, etc.). If we distributed the mirror list, we would only need to connect the first time to learn which mirrors are out there.
A decentralised system is better, but I do not see how we can achieve this. A distributed list could of course not be signed.
After that, a client can use a cached list and fetch updates from any mirror. In case we have a system at the other end of the world, we also avoid connectivity issues, as we currently observe them in connection with mirrors in Ecuador.
A client can use a cached list now. The list is only refreshed once a day (I think). Updates can then be fetched from any mirror as long as the repository data is recent.
(b) It might be a postmaster disease, but I was never a fan of moving knowledge from client to server (my favorite example here is MX records, which work much better than implementing fail-over and load balancing on the server side).
An individual list for every client is very hard to debug, since it becomes difficult to reproduce a connectivity scenario if you do not know which servers the client saw. Second, we have a server-side bottleneck here (signing!) and need an always-online key if we decide to sign that list anyway.
We do not really care about any connectivity issues. There might be many reasons for that and I do not want to debug any mirror issues. The client just needs to move on to the next one.
I have not taken a look at the algorithm yet, but the idea is to prioritise mirror servers located near the client, assuming that geographic distance correlates with network distance today (not sure if that is correct anyway, but it is definitely better than in the 90s).
It puts everything in the same country at the top and all the rest at the bottom.
It correlates, but that is it. We should have a list of which countries are near one another. It would make sense to group them together by continent, etc. But that is for somewhere else.
Basically, the client has no way to measure "distance" or "speed", and I do not think it is right to implement this in the client. Just a GeoIP lookup requires resolving DNS for all mirrors and then performing the database lookup. That takes a long time, and I do not see why this is much better than the server-side approach.
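For illustration, the server-side ordering described above boils down to something like this (a sketch, not the actual algorithm):

    # Sketch: mirrors in the client's country first, the rest after.
    # The real algorithm runs on the server and can be changed any time.
    def order_mirrors(mirrors, client_country):
        # mirrors: list of (url, country_code) pairs
        same = [m for m in mirrors if m[1] == client_country]
        rest = [m for m in mirrors if m[1] != client_country]
        return same + rest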
The only problem here is to determine which public IP a client has. But there are ways to work around this, and in the end, we will probably solve most of the issues (especially dealing with signature expiry times) you mentioned.
Determining the public IP is a huge problem. See ddns.
Any thoughts? :-)
Yeah, you didn't convince me by assuring me that there will be a solution. This can be implemented. But is it worth the work, and a much more complex system, to solve a problem only half-way?
:)
Pakfire 2 also has only one key that is used to sign everything. I do not intend to go down the path of explaining why that is a bad idea, but Pakfire 3 is not doing this any more. In fact, packages can have multiple signatures.
That leads me to the question of which key the list should be signed with. We would need to sign maybe up to one hundred lists per second, since we generate them live. We could simplify the proximity algorithm so that each country only gets one list, or something similar, and then deliver that list from cache.
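A sketch of that simplification: generate and sign one list per country and serve it from cache, so the signer runs once per country instead of up to a hundred times per second (render_mirrorlist and sign_blob are hypothetical helpers):

    # Sketch: cache one signed list per country. A real cache would also
    # expire entries so that list changes propagate within a day or so.
    from functools import lru_cache

    @lru_cache(maxsize=512)
    def signed_mirrorlist(country_code):
        body = render_mirrorlist(country_code)  # hypothetical
        return body, sign_blob(body)            # hypothetical signing backend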
See above, I do not consider this being necessary.
I do not think that the main key of the repository is a good idea. Potentially we should have an extra key just for the mirror lists on the server.
Either way, I agree here.
We would also need to let the signature expire so that mirrors that are found out to be compromised are *actually* removed. At the moment the client keeps using the mirror list until it can download a new one. What would happen if the download is not possible but the signature on a previous list has expired?
Since this is a scenario which might happen any time, I'd consider falling back to the main mirror the best alternative.
What if that is compromised? What would be the contingency plan there?
We would also make the entire package management system very prone to clock issues. If you are five minutes off, or an hour, the list could have expired and you cannot download any packages any more or you would always fall back to the main mirror.
Another problem solved by a more intelligent client. :-) :-) :-)
How? Please provide detail. Pakfire should not set the system clock. Ever. That is totally out of scope.
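To make the clock problem concrete: whatever grace window the client applies, clock skew shifts it, so it only softens the problem. A sketch with an assumed six-hour window:

    # Sketch: check a signed list's validity against the local clock.
    # If the clock is off by more than GRACE, valid lists get rejected
    # (or expired ones accepted); Pakfire cannot fix the clock itself.
    import time

    GRACE = 6 * 3600  # assumption: tolerate six hours of skew

    def list_is_usable(signed_at, expires_at):
        now = time.time()
        if now + GRACE < signed_at:
            return False  # list claims to come from our future
        return now <= expires_at + GRACE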
In my opinion, we should sign that list, too, to prevent an attacker from inserting his/her mirror silently. On the other hand, packages are still signed, so a manipulation here would not be possible (and we do not have to trust our mirrors), but an attacker might still gather some metadata.
So, to bring this to a conclusion: what I want to say here is that I do not have a problem with the list being signed. I just have a problem with all the new problems being created. If you can give me answers to the questions above, and we can come up with an approach that improves security and privacy and also does not make bootstrapping a new system a pain in the rear end, then I am up for it.
But it will by design be a weak signature. We could probably not put the key into an HSM, etc.
In case we do not use individual mirror lists, using a key baked into an HSM would be possible here.
Would bring us back to the signers again. It is hard to do this in a VM.
[The mirror list can be viewed at https://mirrors.ipfire.org/, if anyone is interested.]
Pakfire 3 has its mirrors here: https://pakfire.ipfire.org/mirrors
(ii) Should we introduce signers? A package built for IPFire 3.x will be signed at the builder using a custom key for each machine. Since malicious activity might take place during the build, the key might become compromised.
Some Linux distributions are using dedicated signers, which only sign data but never unpack or execute it. That way, we could also move the signing keys to an HSM (example: https://www.nitrokey.com/) and run the server at a secure location (not in a public data centre).
I am in favour of this.
This is just very hard for us to do. Can we bring the entire build service back to work again and then add this?
It is not very straightforward, and since we won't have the builders and the signers in the same DC, we would need a way to either transfer the package securely or do some remote signing. Neither sounds like a good idea.
Assuming both builder and signer have good connectivity, transferring a package securely sounds good. To avoid MITM attacks, a sort of "builder signature" might be useful - in the end, a package then has two or three signatures:
First, it is signed by the builder, to prove that it was built on that machine (in case a package turns out to be compromised, this makes tracing much easier) and that it was transferred correctly to the signer. Second, the signer adds its signature, which is assumed to be trusted by Pakfire clients here. If not, we need a sort of "master key", too, but I thought that's what we wanted to avoid here.
The signature of the builder is not trustworthy. That is precisely why we need a signer. The builder is executing untrusted code and can therefore be easily compromised.
Master keys are bad.
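Under that model, the client-side trust decision might look like this sketch (names are made up): each signature is validated cryptographically, but only the signer's keys count towards trust; the builder's signature is kept purely for tracing.

    # Sketch: a package may carry several signatures (builder + signer),
    # but clients only trust packages with a valid signature from a
    # known signer key. The builder signature is informational only.
    def package_is_trusted(signatures, trusted_signer_keys):
        # signatures: iterable of (key_id, is_valid) after crypto checks
        return any(key_id in trusted_signer_keys and is_valid
                   for key_id, is_valid in signatures)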
(b) Privacy
Fetching updates typically leaks a lot of information (such as your current patch level, system architecture, or IP address). By using HTTPS only, we avoid information leaks to eavesdroppers, which I consider a security benefit, too.
However, a mirror operator still has access to that information. Perhaps the IP address is the most critical one, since it allows tracing a system back to a city/country, or even to an organisation.
We hosted a ClamAV mirror once, and it was very interesting to see this.
Also, many mirrors seem to expose their usage statistics through Webalizer, so this will indeed be a public record.
Because of that, I do consider mirrors to be somewhat critical, and would like to see the list signed in 3.x, too.
As stated above, I do not think that this gets rid of the problem that you are describing here.
(i) Should we introduce mirror servers in the Tor network? One way to solve this problem is to download updates via a proxy, or an anonymisation network. In most cases, Tor fits the bill.
For best privacy, some mirror servers could be operated as so-called "hidden services", so traffic won't even leave the Tor network and pass some exit nodes. (Debian runs several services that way, including package mirrors: https://onion.debian.org/ .)
Since Tor is considered bad traffic in some corporate networks (or even states), this technique should be disabled by default.
What are your opinions here?
I have never hosted a hidden service on Tor. I do not see a problem with that. It might just be that only a very tiny number of people would use this, making it a lot of work with only a few people benefiting from it.
Well, setting up a Tor mirror server is not very hard (_securing_ it is the hard task here :-) ), but I am not sure how much development effort this will be.
Tell me what it needs.
What does it need so that Pakfire would be able to connect to the Tor network? How would this look from a user's perspective? Where is this being configured? How do we send mirror lists or repository information?
(i) You can connect to a locally running Tor daemon (which is probably what we have on IPFire systems) via SOCKS. To provide an HTTP proxy, some additional software is needed (Polipo; see here for a configuration example: https://www.marcus-povey.co.uk/2016/03/24/using-tor-as-a-http-proxy/).
I was hoping that there was something built-in available. Until this day I do not understand why Tor does not implement an HTTP proxy.
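For what it is worth, a client that speaks SOCKS itself would not need the HTTP proxy at all. A sketch, assuming Tor's default SOCKS port 9050 and the requests library with PySocks installed (requests[socks]):

    # Sketch: fetch directly over the local Tor daemon's SOCKS port,
    # skipping the extra HTTP proxy. "socks5h" makes Tor resolve DNS,
    # so hostname lookups do not leak outside the Tor circuit.
    import requests

    TOR_PROXY = {"http":  "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    r = requests.get("https://pakfire.ipfire.org/mirrors",
                     proxies=TOR_PROXY, timeout=60)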
(ii) What does "user's perspective" mean here? Of course, transferring files over Tor is slower, but that does not really matter, since updates are not that time-critical.
What steps the user will have to do. Setting up Tor is one thing; installing another service is another. Under those circumstances, it looks like we don't need to change a thing in Pakfire, since Pakfire can handle an HTTP proxy. But it wouldn't be a switch in Pakfire.
Then, there will be DNS traffic.
(iii) /etc/tor/torrc (and the Pakfire configuration, which I do not know yet). (iv) As usual, it does not make any difference whether a mirror is accessed via Tor or in plaintext.
Under those circumstances, is it even worth hosting a hidden service? Why not access the other mirrors?
A good example might be apt-transport-tor (https://packages.debian.org/stretch/apt-transport-tor); not sure how well it fits on IPFire.
(ii) Reducing update connections to anybody else
Some resources (GeoIP database, IDS rulesets, proxy blacklists) are currently not fetched via the IPFire mirrors, causing some of the problems mentioned above.
For example, to fetch the GeoIP database, all systems sooner or later connect to "geolite.maxmind.com", so we can assume they see a lot of IP addresses IPFire systems are located behind. :-\ Michael and I are currently working on a replacement for this, called "libloc", but that is a different topic.
This is a huge problem for me. We cannot rely on any third parties any more. I guess the reports in the media over the last days and weeks have proven that there is too much of a conflict of interest. There are no free services from an organization that is trying to make billions of dollars.
Since it is very hard to get consent from the IPFire users on every single one of those, we should just get everything from one entity only.
ACK.
Pushing all these resources into packages (if they are free, of course) and delivering them over our own mirrors would reduce some traffic to third-party servers here. For libloc, we plan to do so.
If by package, you are NOT referring to a package in the sense of a pakfire package, then I agree.
Should we do this for other resources such as rulesets and blacklists, too?
Ideally yes, but realistically we cannot reinvent everything ourselves. I am personally involved in too many of these side-projects already, so there is only little time for the main thing. So I would rather consider that we work together with the blacklist people, or just leave it for now. I guess that works for the blacklists because they are opt-in: people have to pick one, and it is obvious that something is being downloaded. However, it is not obvious what the dangers are. The GeoIP database, however, is neither opt-in nor opt-out.
Since blocklists do not eat up much disk space, I'd say we host everything ourselves that we can (Emerging Threats IDS signatures, or Spamhaus DROP if we want to implement that sometime, ...).
We wouldn't have the license to do that.
But we probably need to get in touch with the maintainers first.
Looking forward to read your comments.
Sorry this took a little while.
No problem. :-)
So, this discussion is getting longer and longer. Let's please try to keep it on track and high level. If we have certain decisions coming out of it, then we can split it up and discuss things more in detail. Just want to make sure it doesn't take me an hour to reply to these emails...
Best regards, Peter Müller
-Michael