What is SPFS?
SPFS is a high-performance filesystem with built-in data retention, data reduction, versioning, and many more features. SPFS is designed to be simple to use, giving you the flexibility to choose the backup tool of your preference. After all, why learn how to use an agent when you can select the tool you already have the skills and knowledge to use?
What are the differences between 2.3 and 2.1?
Performance improvement copying smaller files
With SPFS 2.3 we improved the performance when copying files that are smaller than the SPFS filesystem blocksize (see the "BLOCKSIZE nnn" option in spfs.opt).
With SPFS 2.1, each copy has a group leader (a Spectrum Protect object) with one or many members (one or many Spectrum Protect objects) that hold the data. A small file then has to make the calls to create the group leader, and then the calls to create the member. This requires 11 API calls.
For data larger than the blocksize this is of course not an issue, as only the startup needs 11 API calls; after that, only 2 API calls are needed per SPFS block.
With SPFS 2.3 we changed this: if a file is smaller than the SPFS blocksize, we do not create a group leader, but instead send the data directly (a single Spectrum Protect object holds both the metadata and the data of the file). This requires only 5 API calls per file, which improves performance.
It also improves restore performance, as fewer queries are needed and fewer objects have to be requested.
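As a rough sketch of the arithmetic above (the 11, 5, and 2 call counts are from the text; the function names, the exact accounting per block, and the example blocksize are illustrative, not SPFS internals):

```python
import math

BLOCKSIZE = 1 << 20  # illustrative value of "BLOCKSIZE nnn" in spfs.opt

def api_calls_v21(file_size):
    # SPFS 2.1: 11 API calls to set up the group leader and the
    # first member, then 2 more calls per additional SPFS block.
    blocks = max(1, math.ceil(file_size / BLOCKSIZE))
    return 11 + 2 * (blocks - 1)

def api_calls_v23(file_size):
    # SPFS 2.3: a file smaller than the blocksize is sent as a
    # single object (metadata + data) in only 5 API calls.
    if file_size < BLOCKSIZE:
        return 5
    return api_calls_v21(file_size)

print(api_calls_v21(4096))  # 11 calls in SPFS 2.1
print(api_calls_v23(4096))  # 5 calls in SPFS 2.3
```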
Performance improvement copying sparse files
Copying sparse files has also been improved.
With SPFS 2.1, each data segment is an object on the Spectrum Protect server. So if a file has 100 data segments, that file will have a group leader with 100 members (100 data objects + 1 leader object). Performance suffers both during backup and restore, as there are many objects to manage and many API calls to make.
With SPFS 2.3 we store the offset and size in the object's information class, which can hold approximately 80 holes per object. If all holes fit in the object's information class, and the size of the data is smaller than the SPFS blocksize, SPFS stores the file in a single Spectrum Protect object and does not create a group leader object. This improves backup and restore speed, as fewer objects and fewer API calls are needed.
If there are more holes than there is room for, or the size is larger than the SPFS blocksize, SPFS creates a group leader with members holding the data, where each member object's information class holds the offset and size of the data. The objects are named with the starting offset of the data, but the relative data offsets and sizes are described by the object's information class.
Note that there can be multiple objects with the same starting offset under the same group leader object. This can happen when a sparse file has more holes than fit in a single object; SPFS then creates a new object with the same name so that it can continue to describe the data.
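The packing described above can be sketched roughly as follows (the 80-descriptor limit is from the text; the data structures and naming are a simplified illustration, not the actual SPFS object layout):

```python
MAX_DESCRIPTORS = 80  # approximate capacity of one object's information class

def pack_extents(extents, max_per_object=MAX_DESCRIPTORS):
    """Split a sparse file's data extents (offset, size) into groups,
    one group per Spectrum Protect object. Each object here is named
    after the starting offset of its first extent; the relative
    offsets and sizes inside travel in the information class."""
    objects = []
    for i in range(0, len(extents), max_per_object):
        chunk = extents[i:i + max_per_object]
        objects.append({"name": chunk[0][0], "extents": chunk})
    return objects

# 200 small extents, each followed by a hole:
extents = [(off * 4096, 512) for off in range(200)]
objs = pack_extents(extents)
print(len(objs))  # 3 objects (80 + 80 + 40 descriptors)
```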
Overwriting data previously written within a transaction is now supported
With SPFS 2.1, overwriting previously written data was not supported.
This is because Spectrum Protect is a WORM device (Write Once Read Many), so it cannot overwrite data previously written.
To prevent this from happening, SPFS had to keep track of all regions (offset + size) written during a transaction (open -> write, write... write -> close). This could decrease performance, especially for a sparse file, which has more regions to track than a contiguous file.
With SPFS 2.3, this check is no longer needed, as we introduced a way to overwrite previous data within a transaction (open -> write, write... rewind, write -> close); this is now supported.
This uses the same mechanism as copying sparse files, where there can be multiple objects with the same name (start offset) within a group. When they are copied back, SPFS re-assembles them in the right order, making sure that the content is correct using the object information class that describes the relative offset and size.
This was an issue for tools such as pg_dump > dump.file, which rewinds to the beginning and overwrites data. This can now be performed without issues.
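The re-assembly idea can be sketched like this (a minimal illustration, assuming only that later writes shadow earlier ones when replayed in order; not the actual SPFS restore code):

```python
def reassemble(writes, file_size):
    """Replay a transaction's writes (offset, data) in the order they
    were stored; later writes overwrite earlier ones, so a rewind
    followed by an overwrite (as pg_dump does) yields the final
    content of the file."""
    buf = bytearray(file_size)
    for offset, data in writes:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

# open -> write, write... rewind, write -> close
writes = [(0, b"AAAA"), (4, b"BBBB"), (0, b"CCCC")]  # last write rewinds to 0
print(reassemble(writes, 8))  # b'CCCCBBBB'
```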
Performance improvement for lookup and listing of files
With SPFS 2.1, we use a bTree to reduce the number of API calls needed, by searching the in-memory bTree data cache.
When an object's metadata is requested, SPFS scans the bTree from the root: it scans until it finds the object in the root catalog, continues with the first entry in that catalog, scans until it finds the next object, and repeats this until it either finds the object of interest or nothing is found.
Even though this is very fast, SPFS does this many times.
With SPFS 2.3, we introduced a "hot cache", where the last found entries are stored in another in-memory cache.
When an object's metadata is requested, SPFS first tries to find the closest object in the "hot cache". If an exact match is found, no bTree scanning is needed. If SPFS finds nothing, a full bTree scan is needed. If SPFS finds an object that is close, e.g. SPFS is looking for /path/to/dir/file and the "hot cache" holds /path/to/xxx/test.file, then SPFS starts the bTree scan from /path/to to find the metadata object of interest.
This improves lookup, file listing, and data copying.
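The hot-cache lookup can be sketched as a longest-common-prefix search (a simplified illustration using the example paths from the text; the cache structure and return values are assumptions, not SPFS internals):

```python
import os

def hot_cache_lookup(path, hot_cache):
    """Return (entry, scan_root): the cached metadata on an exact hit
    (no bTree scan needed), otherwise the deepest directory shared
    with any cached path, from which a partial bTree scan can start.
    A scan_root of "/" means a full bTree scan is needed."""
    if path in hot_cache:
        return hot_cache[path], None  # exact hit: no bTree scanning
    best = "/"
    for cached in hot_cache:
        common = os.path.commonpath([path, cached])
        if len(common) > len(best):
            best = common
    return None, best  # start the bTree scan from `best`

cache = {"/path/to/xxx/test.file": {"size": 42}}
print(hot_cache_lookup("/path/to/dir/file", cache))       # (None, '/path/to')
print(hot_cache_lookup("/path/to/xxx/test.file", cache))  # ({'size': 42}, None)
```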
lssmart tool added
We added a tool that lists the operating system's mimetype of a file, so that it can be used to create smart option filters.
# lssmart *.c
This shows that the files are text files and C source files.
How much performance improvement?
Copying of files can be 3 times faster
Listing of files can be up to 5 times faster (for a directory with hundreds to thousands of files, it can be more)
Copying of sparse files can be up to 80 times faster (this depends on many things, such as how many holes there are, etc.)