RadPerf Overload Testing packets/s is much lower than in Performance Testing

Alan DeKok aland at deployingradius.com
Sat Feb 6 15:32:07 CET 2016


On Feb 5, 2016, at 11:58 PM, Jim <jbalatero at visp.net> wrote:
> 
> I tried the performance testing in RadPerf for -p 8 to see the maximum accepted load that a RADIUS server can handle. The result was 203 packets/s. My problem now is in the overload testing, I was not able to reach the packets/s number printed out for -p.

  What does that mean?

  If you use "-p 1000", all that means is that the client sends a 1000 packets.  The server might only be able to respond to 200 a second.  At the end of that second, the other 800 packets will be left in the processing queue

  Using "-p 1000" on the *client* does not magically make the *server* be able to handle 1000 packets/s.

> I tried -n 200 and 500 but got a very low packets/s in my result, it was 34 packets/s. What might be wrong?

  "-n" sends packets at a fixed rate, whether or not the server responds.  That leads to overload situations very quickly.

  I think you're trying to do a load test without a clear picture of how load tests work.

  "-p N" means that the client sends N packets without looking for a reply.  Then, it only sends new packets once it's received replies for old ones.  This means that the client is automatically limited by the speed of the server.  There is a queue of unprocessed packets, but the client never sends packets faster than the server can handle.  This tests how much load the server can accept when things are going well, and the client is cooperative.

  "-n N" means that the client sends N packets per second, every second.  It then tracks which of those packets got a reply.  If you do "-n 500", and the server can only handle 200 packets/s, you're giving the server 300 packets *every second* more than it can handle.  After 10 seconds, you've given it 3000 packets more than it can handle.  The sever is overloaded, and falling down, because the client keeps hammering it.


  The server just accepts packets, and processes them.  If it's given more packets than it can handle, it either puts them on a queue for later processing (a short-term load spike), or it starts throwing away packets (sustained load).
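  You can picture that as a bounded queue.  This is a generic sketch, not how FreeRADIUS is actually structured internally, and the queue limit is invented:

# Generic bounded-queue picture of the server side.  The numbers and the
# queue limit are made up; this is not FreeRADIUS internals.
QUEUE_LIMIT = 1000   # packets the server will hold before discarding
CAPACITY = 200       # packets processed per second

queue = dropped = 0
for offered in (250, 250, 1000, 1000, 1000):   # offered load, per second
    queue += offered
    processed = min(queue, CAPACITY)
    queue -= processed
    if queue > QUEUE_LIMIT:                    # sustained overload: discard
        dropped += queue - QUEUE_LIMIT
        queue = QUEUE_LIMIT
    print(f"offered {offered:4d}/s -> processed {processed}/s, "
          f"queued {queue}, dropped so far {dropped}")

  The short spikes just sit in the queue; the sustained overload fills it, and everything past the limit gets thrown away.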

  -p tests short-term load spikes.

  -n tests sustained load.

  And the results can be non-linear, especially if you're using an SQL database.  "-n 50" might result in higher packets/s than "-n 1000", because at 50 packets/s the server can easily keep up.  At 1000 packets/s, the server can get overloaded, and the packets can start interfering with each other.  Latency in highly contended systems can go up exponentially.  So when there's overload, the server basically falls over, because there's nothing else it can do.
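  If you want a feel for why latency blows up instead of degrading gracefully, the textbook single-queue model (M/M/1, where the average time in the system is 1 / (capacity - arrival rate)) shows the shape of the curve.  It's not a model of your SQL backend, just an illustration of contention:

# Textbook M/M/1 queue: average time in system = 1 / (capacity - rate).
# Not a model of FreeRADIUS or SQL, only the shape of contention.
CAPACITY = 200.0   # packets/s the server can process

for rate in (50, 100, 150, 190, 199):
    latency_ms = 1000.0 / (CAPACITY - rate)
    print(f"offered {rate:3d}/s -> average latency ~{latency_ms:6.1f} ms")

  Going from 50 to 150 packets/s triples the latency; going from 190 to 199 multiplies it by ten.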

  I think you should spend some time looking into offered load versus accepted load.
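  The short version of that distinction: you control the offered load, the server's capacity caps the accepted load, and the difference is what gets queued or thrown away.  In made-up numbers:

# Offered load is what the client sends; accepted load is what the server
# actually processes.  The capacity figure is illustrative.
CAPACITY = 200
for offered in (50, 200, 500, 1000):
    accepted = min(offered, CAPACITY)
    print(f"offered {offered:4d}/s -> accepted {accepted}/s, "
          f"{offered - accepted}/s queued or lost")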

  Right now, you're just randomly trying things, and are surprised by what happens.  Well... if you don't understand how it works, then everything is surprising.

  Perhaps you could explain what you *think* is going on.  What do you think the server should do when it's given double the load it can process?  Why do you think the server should do that?

  Alan DeKok.



