you know, when you compare compression algorithm times and ratios, i can see why people particularly like zstd.
-
on the other hand, for my use case, it looks like lz4 might be a better option. here are some numbers from facebook, who if anything would want zstd to look good.
lz4: ratio 2.10, 450MB/s encode, 2200MB/s decode
zstd: ratio 3.14, 150MB/s encode, 550MB/s decode
-
one aspect of designing for performance is reducing bottlenecks. i mostly handle this with physicist math - rough orders of magnitude. for example, knowing that lz4 decodes at 2.2GB/s and that BLAKE3 hashes at 6.9GB/s, i can tell you that lz4 is going to be the bottleneck first if you're hash-addressing.
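that kind of napkin math can be sketched in a few lines. the GB/s figures below are just the rough ones quoted above, not measurements from any particular machine:

```python
# back-of-the-envelope bottleneck check: a serial pipeline can only go
# as fast as its slowest stage. throughputs in GB/s, rough figures only.
stages = {
    "lz4 decode": 2.2,
    "blake3 hash": 6.9,
}

# the slowest stage is the bottleneck
bottleneck = min(stages, key=stages.get)
print(bottleneck)  # lz4 decode

# if the stages run back-to-back on the same data, the combined
# throughput is the harmonic-style sum: 1 / (1/2.2 + 1/6.9)
combined = 1 / sum(1 / v for v in stages.values())
print(round(combined, 2))  # 1.67
```

the combined number is the useful one: even though hashing is 3x faster, chaining it after decompression still drags you down to ~1.7GB/s.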
-
the other metric that comes up a lot is operations per second - the reciprocal of latency. for example, if an operation takes a nanosecond, you can do it a billion times a second.
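the reciprocal is just arithmetic, but it's worth writing out once because the unit conversions are where people slip:

```python
# latency and throughput are reciprocals: 1 ns/op means 1e9 ops/s.
NS_PER_SECOND = 1_000_000_000

def ops_per_second(ns_per_op: float) -> float:
    """convert a per-operation latency in nanoseconds to ops/sec."""
    return NS_PER_SECOND / ns_per_op

print(int(ops_per_second(1)))    # 1000000000  (1 ns -> a billion/s)
print(int(ops_per_second(100)))  # 10000000    (100 ns -> ten million/s)
```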
so when i've time to design things properly, i'm looking to minimise the potential bottlenecks that are within our control. and in practice the user is going to run it on some junker and it will manage to hit every bottleneck anyway, no doubt.
-
just as an indication of how times change, i used to get benchmark results in *milliseconds* and now i get them in *nanoseconds*
-
@dysfun Obligatory musical number: Pushin' the Speed of Light by Julia Ecklar and Anne Prather