Amazon publishes information regarding some of their IP networks primarily used for AWS cloud services in a machine-readable format. To improve libloc lookup results for these, we have little choice other than importing and parsing them.
Unfortunately, there seems to be no machine-readable list of the locations of their data centers or availability zones available. If there _is_ any, please let the author know.
Fixes: #12594
Signed-off-by: Peter Müller peter.mueller@ipfire.org
---
 src/python/location-importer.in | 110 ++++++++++++++++++++++++++++++++
 1 file changed, 110 insertions(+)

diff --git a/src/python/location-importer.in b/src/python/location-importer.in
index 1e08458..5be1d61 100644
--- a/src/python/location-importer.in
+++ b/src/python/location-importer.in
@@ -19,6 +19,7 @@
 import argparse
 import ipaddress
+import json
 import logging
 import math
 import re
@@ -931,6 +932,10 @@ class CLI(object):
 				TRUNCATE TABLE network_overrides;
 			""")
 
+		# Update overrides for various cloud providers big enough to publish their own IP
+		# network allocation lists in a machine-readable format...
+		self._update_overrides_for_aws()
+
 		for file in ns.files:
 			log.info("Reading %s..." % file)
 
@@ -998,6 +1003,111 @@ class CLI(object):
 			else:
 				log.warning("Unsupported type: %s" % type)
 
+	def _update_overrides_for_aws(self):
+		# Download Amazon AWS IP allocation file to create overrides...
+		downloader = location.importer.Downloader()
+
+		try:
+			with downloader.request("https://ip-ranges.amazonaws.com/ip-ranges.json", return_blocks=False) as f:
+				aws_ip_dump = json.load(f.body)
+		except Exception as e:
+			log.error("unable to preprocess Amazon AWS IP ranges: %s" % e)
+			return
+
+		# XXX: Set up a dictionary for mapping a region name to a country. Unfortunately,
+		# there seems to be no machine-readable version available of this other than
+		# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
+		# (worse, it seems to be incomplete :-/ ); https://www.cloudping.cloud/endpoints
+		# was helpful here as well.
+		aws_region_country_map = {
+			"af-south-1": "ZA",
+			"ap-east-1": "HK",
+			"ap-south-1": "IN",
+			"ap-south-2": "IN",
+			"ap-northeast-3": "JP",
+			"ap-northeast-2": "KR",
+			"ap-southeast-1": "SG",
+			"ap-southeast-2": "AU",
+			"ap-southeast-3": "MY",
+			"ap-northeast-1": "JP",
+			"ca-central-1": "CA",
+			"eu-central-1": "DE",
+			"eu-central-2": "CH",
+			"eu-west-1": "IE",
+			"eu-west-2": "GB",
+			"eu-south-1": "IT",
+			"eu-south-2": "ES",
+			"eu-west-3": "FR",
+			"eu-north-1": "SE",
+			"me-south-1": "BH",
+			"sa-east-1": "BR"
+		}
+
+		# Fetch all valid country codes to check parsed networks against...
+		rows = self.db.query("SELECT * FROM countries ORDER BY country_code")
+		validcountries = []
+
+		for row in rows:
+			validcountries.append(row.country_code)
+
+		with self.db.transaction():
+			for snetwork in aws_ip_dump["prefixes"] + aws_ip_dump["ipv6_prefixes"]:
+				try:
+					network = ipaddress.ip_network(snetwork.get("ip_prefix") or snetwork.get("ipv6_prefix"), strict=False)
+				except ValueError:
+					log.warning("Unable to parse line: %s" % snetwork)
+					continue
+
+				# Sanitize parsed networks...
+				if not self._check_parsed_network(network):
+					continue
+
+				# Determine region of this network...
+				region = snetwork["region"]
+				cc = None
+				is_anycast = False
+
+				# Any region name starting with "us-" will get "US" country code assigned straight away...
+				if region.startswith("us-"):
+					cc = "US"
+				elif region.startswith("cn-"):
+					# ... same goes for China ...
+					cc = "CN"
+				elif region == "GLOBAL":
+					# ... funny region name for anycast-like networks ...
+					is_anycast = True
+				elif region in aws_region_country_map:
+					# ... assign looked up country code otherwise ...
+					cc = aws_region_country_map[region]
+				else:
+					# ... and bail out if we are missing something here
+					log.warning("Unable to determine country code for line: %s" % snetwork)
+					continue
+
+				# Skip networks with unknown country codes
+				if not is_anycast and validcountries and cc not in validcountries:
+					log.warning("Skipping Amazon AWS network with bogus country '%s': %s" % \
+						(cc, network))
+					continue
+
+				# Conduct SQL statement...
+				self.db.execute("""
+					INSERT INTO network_overrides(
+						network,
+						country,
+						is_anonymous_proxy,
+						is_satellite_provider,
+						is_anycast
+					) VALUES (%s, %s, %s, %s, %s)
+					ON CONFLICT (network) DO NOTHING""",
+					"%s" % network,
+					cc,
+					None,
+					None,
+					is_anycast,
+				)
+
 	@staticmethod
 	def _parse_bool(block, key):
 		val = block.get(key)
-- 
2.26.2
Hello Peter,
Thanks for this, I guess this would affect quite a few people out there…
However, is it a good idea to use the overrides table for this? Should that not be reserved for the pure overrides?
There is no way to view these changes. Is that something we can live with?
-Michael
Hello Michael,
thanks for your reply.
Frankly, the longer I think about this patch's approach, the more unhappy I become with it:
(a) We are processing the Amazon AWS IP range feed too credulously: it comes from a CDN over an HTTPS connection, without being digitally signed in any way - and at least _I_ don't trust PKI, and should probably finally write that blog post about it I have been planning for quite some time now :-/ . ip-ranges.amazonaws.com is not even DNSSEC-signed, not to mention DANE for their web service.
Worse, my patch lacks additional safeguards. At the moment, the feed's content is only checked for prefixes that are too big or too small, anything not globally routable, and similar oddities. Amazon, however, should never publish any information regarding IP space they do not own - and if they do, we should not process it.
While this does not eliminate the possible attack of somebody tampering with their feed on their server(s), the CDN, or anywhere in between, it would prevent a hostile actor from abusing that feed to arbitrarily spoof the contents of a libloc database generated by us.
Unfortunately, I have no elegant idea for how to do this at the moment. A most basic approach would consist of rejecting any network not announced by ASNs we know to be owned or maintained by Amazon - I am not sure how volatile this list would be.
Only accepting information for networks whose RIR data prove ownership or maintenance by Amazon would be a more thorough approach, though. However, that involves bulk queries to the Whois, as a decent chunk of their IP space is assigned by ARIN. In the case of RIPE et al., we might parse our way through the databases we already have, but this is laborious, and we have no routines for enumerating maintainer data yet.
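To illustrate the ASN-based variant, something roughly like this could do - a sketch only, with a made-up helper name; AWS_ASNS would be a hand-maintained list, and the announcements table is the one filled by "location-importer update-announcements":

	# Hand-maintained list of ASNs we attribute to Amazon (illustrative values;
	# AS16509 and AS14618 are Amazon's best-known ASNs, the full list needs research)
	AWS_ASNS = (16509, 14618)

	def _network_announced_by(self, network, asns):
		# Look for a covering BGP announcement from one of the given ASNs,
		# using PostgreSQL's inet containment operator
		rows = self.db.query("SELECT autnum FROM announcements WHERE network >>= %s",
			"%s" % network)

		return any(row.autnum in asns for row in rows)

	# ... and in _update_overrides_for_aws(), right after _check_parsed_network():
	#	if not self._network_announced_by(network, AWS_ASNS):
	#		continue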
(b) I honestly dislike non-transparent changes here. Since we fill the overrides SQL table on demand every time, retracing the contents of generated location databases will be quite tricky if they did not originate from our own override files.
On the other hand, we do not store the contents of the downloaded RIR databases, either. Simply dumping the Amazon AWS IP range feed into our Git repository would solve the transparency issue, but would result in unnecessary bloat - unless we really need it someday.
Do you have a particular idea about how to solve this issue in mind?
Regarding (a), the RIRs' FTP server FQDNs are at least DNSSEC-signed, but we do not enforce this. While I vaguely remember having seen signatures for the RIPE database, we currently do not validate it, either. Although this would increase complexity and affect performance when generating a database at our end, I would propose to do so whenever possible. Thoughts?
Sorry for this lengthy and not very optimistic answer. If you ask me, you'll always get the worst-case scenario. :-)
After all, we are doing security here...
Thanks, and best regards, Peter Müller
Hello,
On 12 Apr 2021, at 18:48, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your reply.
Frankly, the longer I think about this patch's approach, the more unhappy I become with it:
Oh no. Don’t overthink it :)
(a) We are processing the Amazon AWS IP range feed too credulously: it comes from a CDN over an HTTPS connection, without being digitally signed in any way - and at least _I_ don't trust PKI, and should probably finally write that blog post about it I have been planning for quite some time now :-/ . ip-ranges.amazonaws.com is not even DNSSEC-signed, not to mention DANE for their web service.
There would be no other way for us to authenticate this data. We do exactly the same with data from the RIRs.
Worse, my patch lacks additional safeguards. At the moment, the feed's content is only checked for prefixes that are too big or too small, anything not globally routable, and similar oddities. Amazon, however, should never publish any information regarding IP space they do not own - and if they do, we should not process it.
Do we not automatically filter those out later? Should we apply the same DELETE FROM … statements to the overrides table that we apply to the imported RIR data?
https://git.ipfire.org/?p=location/libloc.git;a=blob;f=src/python/location-i...
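Off the top of my head, something like this could work (untested; the exact thresholds are assumptions, mirroring what we do for the announcements):

	-- Sketch: apply similar sanitation to the overrides table; family() and
	-- masklen() are standard PostgreSQL inet functions
	DELETE FROM network_overrides WHERE family(network) = 4 AND masklen(network) > 24;
	DELETE FROM network_overrides WHERE family(network) = 6 AND masklen(network) > 48;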
While this does not eliminate the possible attack of somebody tampering with their feed on their server(s), the CDN, or anywhere in between, it would prevent a hostile actor from abusing that feed to arbitrarily spoof the contents of a libloc database generated by us.
Unfortunately, I have no elegant idea for how to do this at the moment. A most basic approach would consist of rejecting any network not announced by ASNs we know to be owned or maintained by Amazon - I am not sure how volatile this list would be.
Probably single networks won’t be moved at all, but at the size of AWS I assume that new networks are added very often.
Only accepting information for networks whose RIR data prove ownership or maintenance by Amazon would be a more thorough approach, though. However, that involves bulk queries to the Whois, as a decent chunk of their IP space is assigned by ARIN. In the case of RIPE et al., we might parse our way through the databases we already have, but this is laborious, and we have no routines for enumerating maintainer data yet.
That would be a rather complicated process and I am not sure if it is worth it.
IP address space that has been acquired and is transitioning to AWS might not show up as owned by the right entity/entities and we might reject it. We simply cannot check this automatically, as we cannot check any other IP network being owned by whoever it says.
(b) I honestly dislike non-transparent changes here. Since we fill the overrides SQL table on demand every time, retracing the contents of generated location databases will be quite tricky if they did not originate from our own override files.
I am a little bit unhappy with this as well. The overrides table also takes precedence. That is why I would have expected this in the networks table.
In a way, the RIRs are not transparent to us and we just import their data, do something with it and put it into our database. AWS is just another source of data just like the RIRs.
Although it isn’t perfect, I could live a lot better with this solution.
On the other hand, we do not store the contents of the downloaded RIR databases, either. Simply dumping the Amazon AWS IP range feed into our Git repository would solve the transparency issue, but would result in unnecessary bloat - unless we really need it someday.
Do you have a particular idea about how to solve this issue in mind?
See above.
Regarding (a), the RIRs' FTP server FQDNs are at least DNSSEC-signed, but we do not enforce this. While I vaguely remember having seen signatures for the RIPE database, we currently do not validate it, either. Although this would increase complexity and affect performance when generating a database at our end, I would propose to do so whenever possible. Thoughts?
Yes, we *should* do this, but I currently do not have any free time to work on it. Would be happy to support you on this.
Sorry for this lengthy and not very optimistic answer. If you ask me, you'll always get the worst-case scenario. :-)
After all, we are doing security here...
:)
-Michael
Hello Michael,
thanks for your answer. I guess I'll reply better late than never...
Hello,
On 12 Apr 2021, at 18:48, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your reply.
Frankly, the longer I think about this patch's approach, the more unhappy I become with it:
Oh no. Don’t overthink it :)
Déformation professionnelle. Sorry.
(a) We are processing the Amazon AWS IP range feed too credulously: it comes from a CDN over an HTTPS connection, without being digitally signed in any way - and at least _I_ don't trust PKI, and should probably finally write that blog post about it I have been planning for quite some time now :-/ . ip-ranges.amazonaws.com is not even DNSSEC-signed, not to mention DANE for their web service.
There would be no other way for us to authenticate this data. We do exactly the same with data from the RIRs.
Well, we at least could rely on RIR data being signed. Amazon did not bless us with that.
Worse, my patch lacks additional safeguards. At the moment, the feed's content is only checked for prefixes that are too big or too small, anything not globally routable, and similar oddities. Amazon, however, should never publish any information regarding IP space they do not own - and if they do, we should not process it.
Do we not automatically filter those out later? Should we apply the same DELETE FROM … statements to the overrides table that we apply to the imported RIR data?
https://git.ipfire.org/?p=location/libloc.git;a=blob;f=src/python/location-i...
No, that DELETE FROM statement block covers announcements, not network objects parsed from RIRs.
While this does not eliminate the possible attack of somebody tampering with their feed on their server(s), the CDN, or anywhere in between, it would prevent a hostile actor from abusing that feed to arbitrarily spoof the contents of a libloc database generated by us.
Unfortunately, I have no elegant idea for how to do this at the moment. A most basic approach would consist of rejecting any network not announced by ASNs we know to be owned or maintained by Amazon - I am not sure how volatile this list would be.
Probably single networks won’t be moved at all, but at the size of AWS I assume that new networks are added very often.
Possibly, but hopefully not new Autonomous Systems. Restricting Amazon to those would, however, mean an additional *.txt file, but I am fine with that.
Only accepting information for networks whose RIR data prove ownership or maintenance by Amazon would be a more thorough approach, though. However, that involves bulk queries to the Whois, as a decent chunk of their IP space is assigned by ARIN. In the case of RIPE et al., we might parse our way through the databases we already have, but this is laborious, and we have no routines for enumerating maintainer data yet.
That would be a rather complicated process and I am not sure if it is worth it.
Probably not, thanks to heavy rate limits on their Whois servers.
IP address space that has been acquired and is transitioning to AWS might not show up as owned by the right entity/entities and we might reject it. We simply cannot check this automatically, as we cannot check any other IP network being owned by whoever it says.
(b) I honestly dislike non-transparent changes here. Since we fill the overrides SQL table on demand every time, retracing the contents of generated location databases will be quite tricky if they did not originate from our own override files.
I am a little bit unhappy with this as well. The overrides table also takes precedence. That is why I would have expected this in the networks table.
In a way, the RIRs are not transparent to us and we just import their data, do something with it and put it into our database. AWS is just another source of data just like the RIRs.
Although it isn’t perfect, I could live a lot better with this solution.
Me too. However, I would like to have a "source" column in the networks table then, so we could at least filter those networks out easily, if we want or need to.
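Something along these lines, for example (a sketch only; the column name and the label are of course up for debate):

	-- Sketch: tag every network with the source it came from, so that
	-- feed-derived rows can be told apart from RIR data later on
	ALTER TABLE networks ADD COLUMN source text;

	-- e.g. when importing the Amazon feed:
	--	INSERT INTO networks(network, country, source) VALUES(%s, %s, 'Amazon AWS');

	-- ... which makes filtering them out again trivial:
	SELECT network FROM networks WHERE source = 'Amazon AWS';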
On the other hand, we do not store the contents of the downloaded RIR databases, either. Simply dumping the Amazon AWS IP range feed into our Git repository would solve the transparency issue, but would result in unnecessary bloat - unless we really need it someday.
Do you have a particular idea about how to solve this issue in mind?
See above.
Regarding (a), the RIRs' FTP server FQDNs are at least DNSSEC-signed, but we do not enforce this. While I vaguely remember having seen signatures for the RIPE database, we currently do not validate it, either. Although this would increase complexity and affect performance when generating a database at our end, I would propose to do so whenever possible. Thoughts?
Yes, we *should* do this, but I currently do not have any free time to work on it. Would be happy to support you on this.
I see, this will be the next item on my to do list then...
Thanks, and best regards, Peter Müller
Hello,
On 14 May 2021, at 17:22, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your answer. I guess I'll reply better late than never...
Hello,
On 12 Apr 2021, at 18:48, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your reply.
Frankly, the longer I think about this patch's approach, the more unhappy I become with it:
Oh no. Don’t overthink it :)
Déformation professionnelle. Sorry.
(a) We are processing the Amazon AWS IP range feed too credulously: it comes from a CDN over an HTTPS connection, without being digitally signed in any way - and at least _I_ don't trust PKI, and should probably finally write that blog post about it I have been planning for quite some time now :-/ . ip-ranges.amazonaws.com is not even DNSSEC-signed, not to mention DANE for their web service.
There would be no other way for us to authenticate this data. We do exactly the same with data from the RIRs.
Well, we at least could rely on RIR data being signed. Amazon did not bless us with that.
Again, nobody does. Maybe this is something you should raise at the next RIR meeting :)
Worse, my patch lacks additional safeguards. At the moment, the feed's content is only checked for prefixes that are too big or too small, anything not globally routable, and similar oddities. Amazon, however, should never publish any information regarding IP space they do not own - and if they do, we should not process it.
Do we not automatically filter those out later? Should we apply the same DELETE FROM … statements to the overrides table that we apply to the imported RIR data?
https://git.ipfire.org/?p=location/libloc.git;a=blob;f=src/python/location-i...
No, that DELETE FROM statement block covers announcements, not network objects parsed from RIRs.
I know, and I was suggesting to run it on networks, too.
While this does not eliminate the possible attack of somebody tampering with their feed on their server(s), the CDN, or anywhere in between, it would prevent a hostile actor from abusing that feed to arbitrarily spoof the contents of a libloc database generated by us.
Unfortunately, I have no elegant idea for how to do this at the moment. A most basic approach would consist of rejecting any network not announced by ASNs we know to be owned or maintained by Amazon - I am not sure how volatile this list would be.
Probably single networks won’t be moved at all, but at the size of AWS I assume that new networks are added very often.
Possibly, but hopefully not new Autonomous Systems. Restricting Amazon to those would, however, mean an additional *.txt file, but I am fine with that.
Only accepting information for networks whose RIR data prove ownership or maintenance by Amazon would be a more thorough approach, though. However, that involves bulk queries to the Whois, as a decent chunk of their IP space is assigned by ARIN. In the case of RIPE et al., we might parse our way through the databases we already have, but this is laborious, and we have no routines for enumerating maintainer data yet.
That would be a rather complicated process and I am not sure if it is worth it.
Probably not, thanks to heavy rate limits on their Whois servers.
IP address space that has been acquired and is transitioning to AWS might not show up as owned by the right entity/entities and we might reject it. We simply cannot check this automatically, as we cannot check any other IP network being owned by whoever it says.
(b) I honestly dislike non-transparent changes here. Since we fill the overrides SQL table on demand every time, retracing the contents of generated location databases will be quite tricky if they did not originate from our own override files.
I am a little bit unhappy with this as well. The overrides table also takes precedence. That is why I would have expected this in the networks table.
In a way, the RIRs are not transparent to us and we just import their data, do something with it and put it into our database. AWS is just another source of data just like the RIRs.
Although it isn’t perfect, I could live a lot better with this solution.
Me too. However, I would like to have a "source" column in the networks table then, so we could at least filter those networks out easily, if we want or need to.
Agreed. Sadly we won’t be able to have this in the text dump of the database that we commit to the Git repository, which makes debugging more complicated.
-Michael
Hello Michael, hello *,
before I start coding, I just wanted to share my current idea of importing IP feeds from Amazon AWS in a less insecure way. Comments, etc. are appreciated. :-)
(a) Run "location-importer update-whois" and "location-importer update-announcements", as we did before.
(b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table.
(d) Process a list of Autonomous Systems owned or controlled by Amazon.
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
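In rough (pseudo-)code, steps (c) to (e) could look something like this - just a sketch, with made-up table and variable names, and aws_asns being the list from step (d):

	with self.db.transaction():
		# (c) Stash the parsed feed networks in a temporary table first
		self.db.execute("CREATE TEMPORARY TABLE _feed(network inet, country text) ON COMMIT DROP")

		for network, cc in parsed_feed:
			self.db.execute("INSERT INTO _feed(network, country) VALUES(%s, %s)", "%s" % network, cc)

		# (d) Delete everything not covered by an announcement from one of Amazon's ASNs
		self.db.execute("""
			DELETE FROM _feed WHERE NOT EXISTS (
				SELECT 1 FROM announcements
					WHERE announcements.network >>= _feed.network
						AND announcements.autnum = ANY(%s)
			)""", aws_asns)

		# (e) Anything left over is merged into the overrides table
		self.db.execute("""
			INSERT INTO network_overrides(network, country)
				SELECT network, country FROM _feed
			ON CONFLICT (network) DO NOTHING""")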
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
Speaking of robustness, do we want a "source" column for the overrides table as well? Although it won't appear in the generated database or its .txt dump, it might be worth having, so we still have transparency on 3rd party feeds at this point.
Thanks, and best regards, Peter Müller
Hello,
First, is there a need to constantly rename subjects? I find this more confusing than helpful for keeping track of a conversation on this list.
On 30 May 2021, at 10:15, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael, hello *,
before I start coding, I just wanted to share my current idea of importing IP feeds from Amazon AWS in a less insecure way. Comments, etc. are appreciated. :-)
You already submitted some code before. What happened to that?
(a) Run "location-importer update-whois" and "location-importer update-announcements", as we did before. (b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
Does this need a third command? Why can this not be part of “update-whois”?
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table. (d) Process a list of Autonomous Systems owned or controlled by Amazon.
Where is this list coming from?
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
This is your second step (d).
When you say you are comparing this, what is the authority for this? The BGP feed? Whois?
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
I kind of liked the first patch. It was simple and it worked.
Speaking of robustness, do we want a "source" column for the overrides table as well? Although it won't appear in the generated database or its .txt dump, it might be worth having, so we still have transparency on 3rd party feeds at this point.
I do not think it is worth it, because it is easy to check. If you want it, I wouldn’t object either.
Hello Michael,
thanks for your reply.
Hello,
First, is there a need to constantly rename subjects? I find this more confusing than helpful for keeping track of a conversation on this list.
Personally, I like the idea of changing the subject as soon as the discussion leaves the proposed patch as such, shifting towards a more general issue. That way, I thought it might be easier to differentiate between remarks targeting the _actual_ patch and general discussions.
If you object, I will stop doing that. You are the boss around here... :-)
On 30 May 2021, at 10:15, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael, hello *,
before I start coding, I just wanted to share my current idea of importing IP feeds from Amazon AWS in a less insecure way. Comments, etc. are appreciated. :-)
You already submitted some code before. What happened to that?
It is still available, although I would no longer consider it safe for production.
(a) Run "location-importer update-whois" and "location-importer update-announcements", as we did before. (b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
Does this need a third command? Why can this not be part of “update-whois”?
Because we do not necessarily have the BGP data available at this step. If we want to build in AS-based safeguards, we will have to parse 3rd party feeds after running "location-importer update-announcements".
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table. (d) Process a list of Autonomous Systems owned or controlled by Amazon.
Where is this list coming from?
Something similar to "countries.txt", I guess. It would definitely be something we will have to maintain on our own. A simple .txt file per 3rd party source, containing one ASN per line, would do, in my point of view.
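For example, something like this (the file name is made up, and while AS16509 and AS14618 are Amazon's best-known ASNs, the actual list would need research):

	# asns-amazon.txt - ASNs owned or controlled by Amazon, one per line
	16509
	14618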
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
This is your second step (d).
?
When you say you are comparing this, what is the authority for this? The BGP feed? Whois?
The BGP feed. We cannot rely on RIR data for this job, as they do not reflect reality and we don't have them for ARIN- and LACNIC-maintained space.
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
I kind of liked the first patch. It was simple and it worked.
Indeed. But it allowed Amazon to inject arbitrary data. This is bad enough for RIRs already; I do not want to extend the list of entities able to do this to some profit-oriented companies...
Speaking of robustness, do we want a "source" column for the overrides table as well? Although it won't appear in the generated database or its .txt dump, it might be worth having, so we still have transparency on 3rd party feeds at this point.
I do not think it is worth it, because it is easy to check. If you want it, I wouldn’t object either.
Hm, it might not be that easy in production, since we do not store the raw contents of our IP feeds. Especially if there is a delta, finding out which entry in the overrides table came from which source could be tricky, eventually.
Thanks, and best regards, Peter Müller
Hello,
On 2 Jun 2021, at 22:12, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your reply.
Hello,
First, is there a need to constantly rename subjects? I find this more confusing than helpful for keeping track of a conversation on this list.
Personally, I like the idea of changing the subject as soon as the discussion leaves the proposed patch as such, shifting towards a more general issue. That way, I thought it might be easier to differentiate between remarks targeting the _actual_ patch and general discussions.
If you object, I will stop doing that. You are the boss around here... :-)
Not really :)
I am just trying to find a strategy to deal with the massive amount of emails, and having conversations split is not helpful.
On 30 May 2021, at 10:15, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael, hello *,
before I start coding, I just wanted to share my current idea of importing IP feeds from Amazon AWS in a less insecure way. Comments, etc. are appreciated. :-)
You already submitted some code before. What happened to that?
It is still available, although I would no longer consider it safe for production.
(a) Run "location-importer update-whois" and "location-importer update-announcements", as we did before. (b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
Does this need a third command? Why can this not be part of “update-whois”?
Because we do not necessarily have the BGP data available at this step. If we want to build in AS-based safeguards, we will have to parse 3rd party feeds after running "location-importer update-announcements".
I would say that this is a difficult dependency. We should have the “owner” of that subnet somewhere in the WHOIS data, which is deterministic, while the BGP feed isn’t.
There are “route” objects which should allow you to do what you want to do.
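For reference, a route object looks roughly like this (illustrative values only, using documentation address space and a documentation ASN):

	route:          192.0.2.0/24
	descr:          Example route object
	origin:         AS64496
	mnt-by:         EXAMPLE-MNT
	source:         RIPE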
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table. (d) Process a list of Autonomous Systems owned or controlled by Amazon.
Where is this list coming from?
Something similar to "countries.txt", I guess. It would definitely be something we will have to maintain on our own. A simple .txt file per 3rd party source, containing one ASN per line, would do, in my point of view.
Hmm, okay.
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
This is your second step (d).
?
You mis-numbered them. Never mind.
When you say you are comparing this, what is the authority for this? The BGP feed? Whois?
The BGP feed. We cannot rely on RIR data for this job, as they do not reflect reality and we don't have them for ARIN- and LACNIC-maintained space.
Well, interesting. I would have said the opposite.
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
I kind of liked the first patch. It was simple and it worked.
Indeed. But it allowed Amazon to inject arbitrary data. This is bad enough for RIRs already; I do not want to extend the list of entities able to do this to some profit-oriented companies...
I consider that a different problem. Not necessarily less important, but probably we should not try to fix all problems in only one patch.
-Michael
Hello Michael,
thanks for your reply.
(b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
Does this need a third command? Why can this not be part of “update-whois”?
Because we do not necessarily have the BGP data available at this step. If we want to build in AS-based safeguards, we will have to parse 3rd party feeds after running "location-importer update-announcements".
I would say that this is a difficult dependency. We should have the “owner” of that subnet somewhere in the WHOIS data, which is deterministic, while the BGP feed isn’t.
There are “route” objects which should allow you to do what you want to do.
Unfortunately, we do not have these data for ARIN and LACNIC space - at least not in a manner we can properly use in an automated way. For the rest of the RIRs, I agree.
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table. (d) Process a list of Autonomous Systems owned or controlled by Amazon.
Where is this list coming from?
Something similar to "countries.txt", I guess. It would definitely be something we will have to maintain on our own. A simple .txt file per 3rd party source, containing one ASN per line, would do, in my point of view.
Hmm, okay.
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
This is your second step (d).
?
You mis-numbered them. Never mind.
When you say you are comparing this, what is the authority for this? The BGP feed? Whois?
The BGP feed. We cannot rely on RIR data for this job, as they do not reflect reality and we don't have them for ARIN- and LACNIC-maintained space.
Well, interesting. I would have said the opposite.
Normative power of the factual (a bit phony, I guess): If something is announced via BGP, traffic to it will be routed towards the announcing ASN - things like RPKI not taken into account here - no matter what a RIR database contains for this network.
I don't like saying that, but I guess this is what we have.
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
I kind of liked the first patch. It was simple and it worked.
Indeed. But it allowed Amazon to inject arbitrary data. This is bad enough for RIRs already; I do not want to extend the list of entities able to do this to some profit-oriented companies...
I consider that a different problem. Not necessarily less important, but probably we should not try to fix all problems in only one patch.
All right, I agree. Given the state of the discussion, I propose to work on and submit a two-part patchset: the first one adds a source column to the network_overrides table, so we can at least debug things better on our systems, while the second one imports the Amazon AWS feed in the same way as I did initially.
We would then care about additional safeguards later... *fingers crossed* :-]
Would you agree on that?
Thanks, and best regards, Peter Müller
Hello,
On 5 Jun 2021, at 13:40, Peter Müller peter.mueller@ipfire.org wrote:
Hello Michael,
thanks for your reply.
(b) Introduce something like "location-importer update-3rd-party-feeds", which is a blanket function for updating all the 3rd party feeds we will have some day, as Amazon for sure won't be the only one.
Does this need a third command? Why can this not be part of “update-whois”?
Because we do not necessarily have the BGP data available at this step. If we want to build in AS-based safeguards, we will have to parse 3rd party feeds after running "location-importer update-announcements".
I would say that this is a difficult dependency. We should have the “owner” of that subnet somewhere in the WHOIS data, which is deterministic, while the BGP feed isn’t.
There are “route” objects which should allow you to do what you want to do.
Unfortunately, we do not have these data for ARIN and LACNIC space - at least not in a manner we can properly use in an automated way. For the rest of the RIRs, I agree.
I hate that we have to have a lot of duplicate code to deal with two RIRs.
(c) In case of Amazon, download their feed, parse it and put the results in a temporary table. (d) Process a list of Autonomous Systems owned or controlled by Amazon.
Where is this list coming from?
Something similar to "countries.txt", I guess. It would definitely be something we will have to maintain on our own. A simple .txt file per 3rd party source, containing one ASN per line, would do, in my point of view.
Hmm, okay.
(d) Delete every IP network from this temporary table which is not announced by one of the Autonomous Systems. That way, we limit potential damage by a broken or manipulated Amazon IP feed to their ASNs.
This is your second step (d).
?
You mis-numbered them. Never mind.
When you say you are comparing this, what is the authority for this? The BGP feed? Whois?
The BGP feed. We cannot rely on RIR data for this job, as they do not reflect reality and we don't have them for ARIN- and LACNIC-maintained space.
Well, interesting. I would have said the opposite.
Normative power of the factual (a bit phony, I guess): If something is announced via BGP, traffic to it will be routed towards the announcing ASN - things like RPKI not taken into account here - no matter what a RIR database contains for this network.
I don't like saying that, but I guess this is what we have.
RPKI uses the RIR database. So we would just enforce RPKI for the entire database. I guess it is pretty much the same.
(e) Anything left in the temporary table is safe to go, and will be merged into the overrides table.
Sounds a bit more complicated than my first patch looked, but it is more versatile and robust. :-)
I kind of liked the first patch. It was simple and it worked.
Indeed. But it allowed Amazon to inject arbitrary data. This is bad enough for RIRs already; I do not want to extend the list of entities able to do this to some profit-oriented companies...
I consider that a different problem. Not necessarily less important, but probably we should not try to fix all problems in only one patch.
All right, I agree. Given the state of the discussion, I propose to work on and submit a two-part patchset: the first one adds a source column to the network_overrides table, so we can at least debug things better on our systems, while the second one imports the Amazon AWS feed in the same way as I did initially.
We would then care about additional safeguards later... *fingers crossed* :-]
Would you agree on that?
Okay. Merged.