This is a more detailed article about how libloc works internally. It might be slightly too technical for some readers, but it could still be a fun read if you would like to know more about the challenges and implementation of IPFire Location.
When we started the project, it was immediately clear that the biggest challenge would be packing the data into the database efficiently, so that it consumes as little space as possible and - at the same time - can be read as quickly as possible. This is required to make the library versatile, to enable applications that we are not even aware of yet (because you can never be too fast), and to scale down to the smallest systems that IPFire runs on.
The internet is a big space. Four billion IPv4 addresses are nothing. The IPv6 address space is vastly larger, and so are the addresses themselves: 128 bits are 16 bytes. Storing the full address for the already allocated address space alone would be huge, and as the internet continues to grow at a fast pace, the database would only keep getting bigger.
Storing it is not the only problem: the larger the database is on disk, the longer it takes to search through it. Loading it all into memory is not an option either, and so all of this results in performance problems.
Looking at applications
Many target applications for our library need to be fast. Nobody likes to wait for a website to load, but more importantly, applications like an Intrusion Prevention System need to be able to handle a large number of packets per second. If a source IP address needs to be classified using libloc, this can only take a couple of nanoseconds. A millisecond would already cause a noticeable performance impact, and connections would take too long to establish.
That is why the goal was that the database needed to be searchable in O(1) - or for those who are not familiar with Landau symbols: every search should take the same time, no matter where in the database the object is stored.
This becomes very clear if you imagine the database being organised like a spreadsheet. All networks are listed, and when you search, you go through the spreadsheet line by line until you have found what you are looking for. However, we need the closest match and therefore have to look at all networks on the spreadsheet.
Currently, the database has just under one million entries, which means comparing up to one million subnets to the IP address we are looking up. In Landau notation this would be O(n). Every network that is added makes the search longer, and the algorithm becomes slower and slower the bigger the database gets.
Generic search algorithms very often cannot be optimised any further than this. But in our case, we know more about our data: if we ordered the spreadsheet by network, we would know that we can stop searching once we have passed the first network with a start address larger than the one we are searching for.
That means that most of the time we do not have to search through the whole spreadsheet, but in the worst case we still do, since the element we are looking for might come last.
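As a sketch of that approach - simplified to 32-bit IPv4 addresses, with made-up field names rather than libloc's actual data structures - such an early-exit scan over a sorted list could look like this:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified record: an IPv4 network as a start address
 * and a prefix length, kept sorted by start address. */
struct network {
    uint32_t start;   /* first address of the network (host byte order) */
    unsigned prefix;  /* prefix length, e.g. 24 for a /24 */
};

/* Linear scan over a sorted list: we can stop early once a network's
 * start address is larger than the address we look up, but in the
 * worst case we still visit every entry - O(n). */
const struct network *find_network(const struct network *list, size_t n,
                                   uint32_t addr)
{
    const struct network *best = NULL;

    for (size_t i = 0; i < n; i++) {
        if (list[i].start > addr)
            break; /* sorted: nothing after this can contain addr */

        uint32_t mask = (list[i].prefix == 0)
            ? 0 : ~((uint32_t)0) << (32 - list[i].prefix);
        if ((addr & mask) == list[i].start)
            best = &list[i]; /* keep the most specific match so far */
    }

    return best;
}
```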
IP addresses are just bits. IPv6 addresses are 128 bits long and IPv4 addresses are 32 bits long. If we simply wrote each network down as a string of bits (1010011001... and so on), we could arrange them in a binary tree:
Starting from the root node of the tree, we would "turn left" if the first bit is zero and "turn right" if it is one. If we do that for each bit, we walk down the tree until we reach its end.
This operation - in our case - takes constant time: since there are never more than 128 bits in an IPv6 address, we walk at most 128 steps. That is a lot shorter than searching through one million entries. Brilliant!
We are not only storing IP addresses in the database. We are storing networks; for example: 2001:db8::/48. The number after the slash denotes that only the first 48 bits are relevant; the rest is the host part of the IP address. That means we do not even need to care about anything after the first 48 bits, which is even better, because it makes our tree shorter and therefore smaller.
That also means we can decide how far we want to walk down the tree. A node higher up the tree might hold a larger subnet that was allocated to an internationally operating ISP, which has split this network further and assigned one smaller subnet to each country it is operating in. There is a lot of flexibility.
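The idea can be sketched in a few lines of C. This is a simplified illustration, not libloc's actual implementation: it uses 32-bit IPv4 addresses and heap-allocated nodes, while the real database walks up to 128 bits and stores its tree flat inside the file. A larger network higher up may be refined by a smaller one further down, so the lookup remembers the most specific match seen on the way:

```c
#include <stdint.h>
#include <stdlib.h>

/* One node of a binary tree (trie) keyed on the bits of an address. */
struct node {
    struct node *child[2]; /* 0 = "turn left", 1 = "turn right" */
    const char *country;   /* set on nodes where a network ends */
};

static struct node *new_node(void)
{
    return calloc(1, sizeof(struct node));
}

/* Insert a network: walk only the first `prefix` bits - everything
 * after that is the host part and does not matter. */
void trie_insert(struct node *root, uint32_t net, unsigned prefix,
                 const char *country)
{
    struct node *n = root;
    for (unsigned i = 0; i < prefix; i++) {
        unsigned bit = (net >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = new_node();
        n = n->child[bit];
    }
    n->country = country;
}

/* Look up an address: at most 32 steps, no matter how many networks
 * are stored, keeping the most specific (longest-prefix) match. */
const char *trie_lookup(const struct node *root, uint32_t addr)
{
    const char *best = NULL;
    const struct node *n = root;
    for (unsigned i = 0; i < 32 && n; i++) {
        if (n->country)
            best = n->country;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    if (n && n->country)
        best = n->country;
    return best;
}
```

Because the walk never takes more steps than there are bits in an address, the lookup time is independent of the number of entries.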
Another advantage of binary trees is that they are sorted. If we want to list all networks belonging to a certain country, this becomes relevant.
But how do we know that a network belongs to a certain country? At every leaf, a pointer is stored to an array which carries all the remaining information - or, to continue with the spreadsheet example: the other columns.
In our case they contain the country code (like DE) and the AS number if available. On top of that there are a couple of flags which can be set to mark a network as part of a satellite network, anonymous proxy or anycast.
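To illustrate, such a fixed-size record might look roughly like the following. The field and flag names here are made up for this sketch; the real on-disk layout is defined by libloc itself:

```c
#include <stdint.h>

/* Flag bits marking special kinds of networks (illustrative names). */
#define NETWORK_FLAG_ANONYMOUS_PROXY  (1 << 0)
#define NETWORK_FLAG_SATELLITE        (1 << 1)
#define NETWORK_FLAG_ANYCAST          (1 << 2)

/* One per-network record. Fixed size, so records can be packed one
 * after the other in a flat array and indexed by position. */
struct network_data {
    char     country_code[2]; /* e.g. "DE", not NUL-terminated */
    uint16_t flags;           /* combination of the flag bits above */
    uint32_t asn;             /* AS number, 0 if unknown */
};

/* Example accessor for one of the flags. */
static int network_is_anycast(const struct network_data *n)
{
    return (n->flags & NETWORK_FLAG_ANYCAST) != 0;
}
```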
All this is only a few megabytes for millions of entries. Therefore we have very little overhead and we did not have to compromise and potentially drop any data to make it fit.
The Autonomous Systems themselves are a lot cheaper to store, since we can simply order them by number and write them one after the other. Ordering them allows us to perform binary search on the array to look them up very quickly. They are also searchable by name, which works without an extra index.
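The C standard library's bsearch() does exactly this kind of lookup on a sorted array of fixed-size records. A sketch, again with illustrative field names rather than the real format:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified AS record: fixed size, sorted by number. */
struct as_record {
    uint32_t number;      /* AS number, the sort key */
    uint32_t name_offset; /* offset of the name in the string pool */
};

static int compare_as(const void *key, const void *member)
{
    uint32_t n = *(const uint32_t *)key;
    const struct as_record *as = member;
    if (n < as->number) return -1;
    if (n > as->number) return 1;
    return 0;
}

/* Binary search: O(log n) instead of scanning the whole array. */
const struct as_record *find_as(const struct as_record *list, size_t count,
                                uint32_t number)
{
    return bsearch(&number, list, count, sizeof(*list), compare_as);
}
```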
The String Pool
Since Autonomous Systems come with a name, those need to be stored somewhere. We wanted to avoid any duplication to keep the database as small as possible and therefore decided to build a string pool which stores each string once. That way, we can refer to the place in the pool where a string is stored, and all other objects only store pointers to those locations, which allows us to make all objects fixed-size without wasting a byte.
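A minimal string pool can be sketched like this. It is an illustration of the concept, not libloc's implementation: strings are appended once to a single buffer, and everything else refers to them by offset:

```c
#include <string.h>
#include <stdlib.h>

/* All strings live in one contiguous buffer, NUL-separated. */
struct string_pool {
    char  *data;
    size_t length;
};

/* Return the offset of `s` in the pool, appending it only if it is
 * not there yet. Returns (size_t)-1 on allocation failure. */
size_t pool_add(struct string_pool *pool, const char *s)
{
    size_t len = strlen(s) + 1; /* include the NUL terminator */

    /* Deduplicate: scan the existing strings first. */
    for (size_t off = 0; off < pool->length;
            off += strlen(pool->data + off) + 1)
        if (strcmp(pool->data + off, s) == 0)
            return off;

    char *data = realloc(pool->data, pool->length + len);
    if (!data)
        return (size_t)-1;

    memcpy(data + pool->length, s, len);
    pool->data = data;

    size_t off = pool->length;
    pool->length += len;
    return off;
}

/* Resolve an offset back to the string it refers to. */
const char *pool_get(const struct string_pool *pool, size_t offset)
{
    return pool->data + offset;
}
```

Because every record stores only a small fixed-width offset instead of the string itself, all records stay the same size regardless of how long the names are.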
Since the whole database simply consists of statically sized objects for each object type (networks, autonomous systems, etc.), they can be read and parsed very quickly, which again accelerates the search.
The database is of course built in a way that it can be read and written on all possible operating systems and we follow the general concept to encode it in "network byte order" or big endian.
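Decoding such a field portably is straightforward. For example, a 32-bit big-endian integer can be read and written byte by byte, independent of the host's endianness (a generic sketch, not code from libloc):

```c
#include <stdint.h>

/* Decode a 32-bit big-endian ("network byte order") field. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Encode a 32-bit value as big endian. */
static void write_be32(unsigned char *p, uint32_t v)
{
    p[0] = v >> 24;
    p[1] = v >> 16;
    p[2] = v >> 8;
    p[3] = v;
}
```

Since the byte order is fixed in the file format, a database written on a little-endian x86 machine reads identically on a big-endian one.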
We garnish all of this with a built-in signature. Because the content of this database is used for security-sensitive things - firewall rules - we need to make sure that nobody can send us a forged version of the database with no data in it. That would result in no firewall rules being created and hackers having a free go.
To avoid carrying the signature around separately, we built it into the database file. That makes it a lot easier to handle, and as a user of libloc you do not even have to worry about it.
This was also one of the main reasons to build this whole project and distinguishes us from our competitors.
Finally, this database needs to be updated on a regular basis. Although it is very small, it is still a couple of megabytes, and downloading those a couple of times a month adds up. Also do not forget that we have to serve the data that every IPFire system downloads, which amounts to a lot of terabytes.
The system therefore checks for the latest database version using DNS: a TXT record simply stores the timestamp of the latest version. If that is newer than the current database, the system will try to download it.
To avoid any tampering with that, we use DNSSEC to validate it, and we have some nice tricks to avoid re-downloading any data from outdated mirrors.
That way, only a couple of bytes are exchanged to check whether an update is needed. We can run this check once an hour, which means that any updates will be rolled out to every user very quickly.
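The decision itself is then trivial. Assuming the TXT record carries a Unix timestamp as text (the DNS resolution and DNSSEC validation are left out of this sketch), an update is needed whenever the published timestamp is newer than that of the local database:

```c
#include <stdlib.h>
#include <time.h>

/* Compare the timestamp published in the TXT record (passed here as
 * its text content) with the timestamp of the local database. */
static int update_needed(const char *txt_record, time_t local_timestamp)
{
    time_t published = (time_t)strtoll(txt_record, NULL, 10);
    return published > local_timestamp;
}
```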
Our library is written from scratch and does not depend on any third-party code, with the exception of the standard C library and OpenSSL for the cryptographic functionality. That makes it easily portable and small enough to be integrated into many other projects, too.
There is more detail on how to use all this on either the command line or in your own application on the official website.
We Are Proud
... to have been working on such an interesting project and to have given ourselves the opportunity to pay so much attention to every detail. We believe that our work is vital to the internet community and that we are ready to challenge the status quo and provide a rock-solid, easy-to-use, versatile and fast solution.
I could go on about the many features and little details which we spent so much time on getting just right.
Please support our work on this and other features all around IPFire with a donation. It would not be possible without your help! Head over to www.ipfire.org/donate to donate now.