What is the purpose of the default accounting-on/off queries?

Nathan Ward lists+freeradius at daork.net
Sun Oct 10 14:20:10 CEST 2021


> On 11/10/2021, at 1:02 AM, Terry Burton <terry.burton at gmail.com> wrote:
> 
> On Sun, 10 Oct 2021, 12:16 Alan DeKok, <aland at deployingradius.com <mailto:aland at deployingradius.com>> wrote:
> 
>> On Oct 9, 2021, at 10:28 PM, Nathan Ward <lists+freeradius at daork.net>
>> wrote:
>>>> As well as the VRF issue, you can experience this in global context if
>>>> the database connection is configured with a query timeout and the
>>>> bulk session update does not complete in time.
>>> 
>>> It seems very odd to me that you would have an Accounting-On/Off query
>> taking anywhere near what a DB query timeout is. That doesn’t sound like an
>> MX issue - that sounds like some index tuning is required, or perhaps the
>> query is doing a large amount of work that could be handled another way?
>> 
>>  If it's a big GGSN, then updating 100K rows can take time.  Doing that
>> update in an asynchronous transaction would be better, perhaps.
> 
> 
> Indeed.
> 
> You can also maintain a per-NAS epoch table using Accounting On/Off. Then
> adjust routine queries to join against this (or just make a view in which
> acctstoptime = NULL is replaced with the end of the previous epoch unless
> the session started afterwards), allowing you to ignore sessions not
> belonging to the current epoch.
> 
> But it's the kind of thing that only becomes worth doing when needed,
> because of the added schema/query complexity…
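A minimal sketch of the per-NAS epoch idea above, using sqlite3 for brevity. The table and column names (nasepoch, epoch_start, the cut-down radacct) are illustrative, not the shipped FreeRADIUS schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE radacct (
    radacctid      INTEGER PRIMARY KEY,
    nasipaddress   TEXT,
    acctstarttime  INTEGER,
    acctstoptime   INTEGER          -- NULL while the session is open
);
CREATE TABLE nasepoch (
    nasipaddress   TEXT PRIMARY KEY,
    epoch_start    INTEGER          -- time of the last Accounting-On/Off
);
-- Open sessions that predate the NAS's current epoch are presented as
-- closed at the epoch boundary, with no bulk UPDATE required.
CREATE VIEW radacct_current AS
SELECT r.radacctid, r.nasipaddress, r.acctstarttime,
       CASE
         WHEN r.acctstoptime IS NULL
              AND e.epoch_start IS NOT NULL
              AND r.acctstarttime < e.epoch_start
         THEN e.epoch_start             -- session belongs to a dead epoch
         ELSE r.acctstoptime
       END AS acctstoptime
FROM radacct r
LEFT JOIN nasepoch e ON e.nasipaddress = r.nasipaddress;
""")

# One stale open session (pre-reboot) and one current open session.
db.execute("INSERT INTO radacct VALUES (1, '10.0.0.1', 100, NULL)")
db.execute("INSERT INTO radacct VALUES (2, '10.0.0.1', 250, NULL)")
# Accounting-On received at t=200: record the new epoch instead of
# rewriting every open radacct row for that NAS.
db.execute("INSERT OR REPLACE INTO nasepoch VALUES ('10.0.0.1', 200)")

rows = db.execute(
    "SELECT radacctid, acctstoptime FROM radacct_current ORDER BY radacctid"
).fetchall()
print(rows)  # → [(1, 200), (2, None)]
```

Routine queries (or reports) read the view instead of the raw table, so an Accounting-On becomes a single-row upsert rather than a mass update of open sessions.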

Hmm, interesting! I did some testing on a DB just now and can clear 160k sessions in about 3 seconds, from a table of about 2.8M rows. My schema is a little different, but not significantly so for what accounting-on/off is doing. There are updates going on at the same time, maybe 300/s or so (it’s a DB with a mirror of live data - I’m not poking a production DB!).

At some point soon I'll synthesise some data against the shipped schemas, see how they perform, and see whether I can tweak them to be faster. If 100k sessions or so is a problem then it may be worth putting some effort into optimising, as that’s not very big. Perhaps some alternative schemas/queries for high scale, with complexity as the trade-off.
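For reference, the bulk close being discussed boils down to one UPDATE over every open session on the NAS. A rough sketch (again sqlite3; names and the composite index are illustrative, not the shipped schema - the index is the sort of tuning being suggested):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE radacct (
    radacctid          INTEGER PRIMARY KEY,
    nasipaddress       TEXT,
    acctstoptime       INTEGER,     -- NULL while the session is open
    acctterminatecause TEXT
);
-- Lets the bulk close find open sessions per NAS without a full scan.
CREATE INDEX radacct_nas_open ON radacct (nasipaddress, acctstoptime);
""")
db.executemany("INSERT INTO radacct VALUES (?, ?, ?, ?)", [
    (1, '10.0.0.1', None, None),            # open, will be closed
    (2, '10.0.0.1', 500,  'User-Request'),  # already closed
    (3, '10.0.0.2', None, None),            # open, different NAS
])

# Accounting-On/Off from 10.0.0.1 at t=600: close its open sessions.
cur = db.execute("""
UPDATE radacct
   SET acctstoptime = ?, acctterminatecause = 'NAS-Reboot'
 WHERE nasipaddress = ? AND acctstoptime IS NULL
""", (600, '10.0.0.1'))
print(cur.rowcount)  # → 1: only the open session on that NAS is touched
```

With 100k+ open rows per NAS, the cost is dominated by how cheaply the WHERE clause can find the open sessions, which is why an index covering (nasipaddress, acctstoptime) - or the epoch approach above - makes such a difference.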

--
Nathan Ward


