2.2.0 & dhcp: regression
fr at grosbein.net
Fri Jul 12 15:22:32 CEST 2013
On 12.07.2013 19:57, Alan DeKok wrote:
> Eugene Grosbein wrote:
>> The problem is always reproducible and has an obvious "hard limit"
>> correlating or coinciding with the number of open files.
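As a side note on that ceiling: the per-process descriptor limit and the server's current descriptor count can be inspected from a shell. A generic sketch (it assumes the daemon process is named radiusd; the lsof line is commented out because it needs a running server):

```shell
# Per-process open-file limit for the current shell/session:
ulimit -n

# Count descriptors currently held by the server process
# (uncomment on a host where radiusd is running and lsof is installed):
# lsof -p "$(pgrep -o radiusd)" | wc -l
```

If the lockup always begins near the `ulimit -n` value, a descriptor leak or an undersized limit is a likely suspect.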
> I'm not sure what changes from 2.1.12 to 2.2.0 would cause that.
>> I understand. With one exception: we do not have a performance problem,
>> we have a full lockup of all threads, after which not a single request is served.
> All I can suggest is to see doc/bugs. Use gdb to find out where the
> Perl threads are locking.
Thanks, I'll try next week.
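For reference, one common way to do what Alan suggests is to attach gdb to the wedged process and dump a backtrace of every thread; threads stuck in pthread_mutex_lock or inside the Perl interpreter point at the culprit. A sketch that only builds and prints the command (the process name radiusd is an assumption; adjust for your install):

```shell
# Find the server's PID (falls back to a placeholder if not running):
PID="$(pgrep -o radiusd || echo '<pid>')"

# "thread apply all bt" prints a backtrace for each thread in the process:
CMD="gdb -p $PID -batch -ex 'thread apply all bt'"
echo "$CMD"
```

Run the printed command as root while the server is locked up, and compare the backtraces: if all worker threads wait on the same mutex, the holder of that mutex is the thread to examine.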
>> That's another topic. I'd rather not turn this into a deep discussion of our load.
>> In short, there may be bursts of thousands of DHCP requests per second
>> lasting several minutes, and we have enough horsepower to process them in parallel
>> if we have at least a thousand threads in the pool.
> Even 1000's of packets/s shouldn't require 1000's of threads.
> FreeRADIUS can do ~60K packet/s single threaded. The main thing is to
> ensure that the per-packet latency is low.
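Alan's point can be made concrete with Little's law: the average number of threads busy at once is roughly the arrival rate times the per-request latency. The rates and latencies below are illustrative numbers, not measurements from this deployment:

```python
def threads_needed(rate_per_s: float, latency_s: float) -> float:
    """Average number of requests in service at once (Little's law)."""
    return rate_per_s * latency_s

# 1000 DHCP packets/s with a 5 ms database round trip keeps ~5 threads busy:
print(threads_needed(1000, 0.005))   # -> 5.0

# The same burst only needs ~1000 threads if each packet takes a full second:
print(threads_needed(1000, 1.0))     # -> 1000.0
```

So a pool of a thousand threads is only required when per-packet latency is near one second; keeping the stored-procedure call fast shrinks the pool that is actually needed.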
> You may want to convert the Perl logic into a stored procedure in the
> database. That is usually MUCH faster. I've done that for some
> deployments, and managed to get 1000's of DHCP packets/s with only a
> small number of threads.
That has already been done. My Perl code uses a persistent connection
to the MS SQL database and runs a stored procedure there.
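For illustration only (the actual deployment uses Perl against MS SQL, which this chunk does not show): the persistent-connection pattern means the database handle is opened once at worker start-up and reused for every packet, so the per-packet cost is just the one procedure call. A minimal sketch with sqlite3 standing in for the real database:

```python
import sqlite3

class DhcpWorker:
    """Hypothetical worker: one long-lived DB handle, reused per packet."""

    def __init__(self) -> None:
        # Opened once per worker, NOT once per request:
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE leases (mac TEXT PRIMARY KEY, ip TEXT)")

    def handle_request(self, mac: str, ip: str) -> str:
        # Per-packet work is one short statement on the cached handle,
        # analogous to a single stored-procedure call per DHCP request:
        self.db.execute("INSERT OR REPLACE INTO leases VALUES (?, ?)", (mac, ip))
        return self.db.execute(
            "SELECT ip FROM leases WHERE mac = ?", (mac,)).fetchone()[0]

worker = DhcpWorker()
print(worker.handle_request("aa:bb:cc:dd:ee:ff", "10.0.0.42"))  # -> 10.0.0.42
```

With this pattern in place, as the thread says, the remaining question is why the threads lock up rather than how fast each one runs.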