The term “one-trick pony” historically refers to a circus pony that has been taught to perform a single trick. Today, the term refers to something that is limited to one skill or capability.
A good example of this is a packet capture solution that achieves 10Gbps but only under certain conditions; say, a stream of fixed 1500-byte UDP packets. Or fixed 64-byte UDP packets. You get the idea. The point is that a packet capture appliance designed to perform well only under certain fixed conditions is basically a one-trick pony, not a high-performance solution.
This blog entry is a corollary to entry #2. When evaluating the performance of a packet capture solution, it is imperative to conduct a battery of tests that represent traffic of variable and extreme conditions — not just one simple test consisting of a stream of packets at a fixed rate, packet size, and protocol type.
Even if a product can do 8 hours or more of sustained write-to-disk at 10Gb full-duplex, don’t be quick to declare it a high-performance solution. The next core function to evaluate is its ability to continuously record for days. Yes, days. Why is continuous capture so important? Think about this. When the file system fills up, what will you do? Will you take the unit off-line to rebuild the file system? How long will that take? While your system is off-line, you will have no visibility into your network. Can you afford that? And does it even make sense?
It turns out that best practice for continuous capture is to retain packet capture files in first-in-first-out (FIFO) order: older files are purged to make room for new ones. With this strategy, your window of capture is a function of the size of your storage subsystem, and could range from hours to days. However, maintaining a FIFO — i.e., simultaneously writing and deleting files — while sustaining 10Gb rates is no easy feat. In general, file delete transactions are an order of magnitude slower than create, read, and write operations. This is where many packet capture solutions simply degrade over time; they are unable to keep up with the pace. My advice is to make sure your solution is capable of capturing continuously by testing it for days at sustained capture rates.
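To make the FIFO retention idea concrete, here is a minimal sketch of an oldest-first purge pass. The function name, directory layout, and size budget are my own illustrative assumptions, not any vendor’s implementation — and a real capture engine would run this on a background thread so that slow deletes never stall the write path:

```python
from pathlib import Path

def purge_oldest(capture_dir: str, max_total_bytes: int) -> int:
    """Delete the oldest capture files until the total fits the budget.

    Returns the number of files purged. Oldest-first (FIFO) order is
    derived from file modification times.
    """
    files = sorted(Path(capture_dir).glob("*.pcap"),
                   key=lambda p: p.stat().st_mtime)  # oldest first
    total = sum(p.stat().st_size for p in files)
    purged = 0
    while files and total > max_total_bytes:
        oldest = files.pop(0)
        total -= oldest.stat().st_size
        oldest.unlink()          # the expensive operation at 10Gb rates
        purged += 1
    return purged
```

Note that the purge itself is trivial; the hard part the text describes is doing these deletes concurrently with sustained 10Gb writes without falling behind.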
In similar fashion, I find it humorous that some vendors measure “sustained” 10Gb packet capture in minutes rather than hours. I recently read a report from an independent testing lab. The test criterion for validating 10Gbps capture performance was a burst of approximately 500 million fixed UDP packets at 9.85Gbps. So if I extrapolate and assume minimum-size, fixed 64-byte packets, the test lasts about 26 seconds:
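The arithmetic behind that figure is simple payload-only math (wire overhead such as preamble and inter-frame gap is ignored, which is why this is an approximation):

```python
packets = 500e6          # ~500 million fixed UDP packets (from the report)
packet_bits = 64 * 8     # minimum-size 64-byte packets
rate_bps = 9.85e9        # 9.85 Gbps

duration_s = packets * packet_bits / rate_bps
print(round(duration_s))  # ~26 seconds
```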
Either way, the test is not sufficient to claim “sustained” speeds. Based on my experience, to truly measure and test the performance of packet capture at 10Gb, the test should last at least 8 hours, preferably 24 hours. If I compare the 10-minute test administered by the independent test lab to a 24-hour test that we do in our lab, this is how it compares (single-duplex):
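Using the same payload-only arithmetic as above, a rough extrapolation shows just how many more packets a 24-hour single-duplex test at 9.85Gbps involves (my own back-of-envelope figure, not a number from the lab report):

```python
rate_bps = 9.85e9        # 9.85 Gbps
packet_bits = 64 * 8     # minimum-size 64-byte packets
seconds = 24 * 3600      # a full 24-hour test

packets_24h = rate_bps / packet_bits * seconds
print(f"{packets_24h:.2e}")  # on the order of 1.7 trillion packets
```

That is several thousand times more packets than the ~500 million in the burst test.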
Now that’s a test. A big difference in the number of packets. And it is measured in hours instead of minutes. Why is this important? Well, simply put, if a system is not properly designed for efficiency, it will eventually exhaust resources and drop packets. For example, are you aware that writing to a freshly formatted disk (outer cylinders) is more efficient than writing to an almost full disk (inner cylinders)?
A good rule of thumb for exposing potential inefficiencies in a packet capture application is to start with a freshly formatted file system, then stream to disk at wire speed until the file system is 90% full. In the process, you get the benefit of testing other things as well: potential memory leaks, application efficiency, file system buffering, the RAID/HBA controller, and disk caching, to mention a few. When the 90% watermark is reached, stop the test, then count the number of packets dropped. This is the way to measure sustained packet capture performance!
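The stop condition for that rule-of-thumb test can be sketched in a few lines. This is only an illustrative harness fragment under my own assumptions (the mount point and polling strategy are hypothetical); the capture stream itself and the dropped-packet counters would come from your traffic generator and the device under test:

```python
import shutil

def watermark_reached(mount_point: str, watermark: float = 0.90) -> bool:
    """True once the file system under test is `watermark` full.

    Poll this between capture files while streaming at wire speed;
    when it fires, stop the test and compare packets offered by the
    generator against packets actually written to disk.
    """
    usage = shutil.disk_usage(mount_point)
    return (usage.used / usage.total) >= watermark
```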