Bug/Enhancement request: Race condition with short-term accounting (FreeRadius 2.1.10)

Alan DeKok aland at deployingradius.com
Tue Aug 28 23:11:57 CEST 2012


Matthias Nagel wrote:
> if two accounting messages for the same session are sent by the authenticator very quickly, the messages may be processed by the radius server in the wrong order. This results in two sessions being accounted instead of one. The second "phantom" session stays open forever, because it never receives any update and/or stop message.

  This is a well-known issue with RADIUS.  Packets may appear in any order.

> Example:
> 
> If a supplicant authenticates and immediately disconnects again, the following steps are executed:
> 
> 1) The authenticator sends an accounting start message
> 2) The authenticator sends an accounting stop message immediately
> 3) The RADIUS server receives the start message and assigns it to thread #1
> 4) The RADIUS server receives the stop message and assigns it to thread #2
> 5) Thread #2 terminates first and the accounting stop message is written to the PostgreSQL database. The SQL UPDATE statement fails, because there is no entry for this session that could be updated, as the start message has not been processed yet. Hence, an INSERT INTO statement is executed as a fail-over measure.
> 6) Thread #1 terminates and an SQL INSERT statement is performed in order to log the start message.

  That doesn't make sense.  If the table indexes are set up correctly,
the SQL insert will fail at step (6).  The module will then try the
update query, which should succeed.
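  In other words, the session key needs a unique index so that a second
INSERT for the same session is rejected. A minimal sketch for
PostgreSQL, assuming acctuniqueid is the per-session key (as in the
stock radacct schema; the index name and key column are illustrative
and depend on your setup):

```sql
-- One row per session: a second INSERT with the same session key
-- violates the index and fails, forcing the fail-over UPDATE instead.
CREATE UNIQUE INDEX radacct_session_idx ON radacct (acctuniqueid);
```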

> The result is that the same session is accounted with two entries in the database. The first entry is complete, that is, it has a start and a stop time. This is the result of step 5. The second entry is incomplete, i.e. it only has a start time. The latter will never be completed, because the stop message has already been processed and acknowledged to the authenticator.

  That is a database consistency issue.  You can't have two rows using
the same keys.

> But I would like to propose the following solution. Instead of assigning incoming requests to the thread pool randomly, first preprocess the request and assign requests that have identical user names (or some other sensible attribute) to the same thread. This way requests that might belong to the same session are processed by the same thread and cannot overtake each other. Requests that can never belong to the same session are still processed concurrently.

  That is not going to happen.  It's a bad fix.

  The correct fix is to use the SQL indexes.
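  That fix can be sketched end to end. The snippet below uses Python's
sqlite3 in place of PostgreSQL, with a cut-down accounting table; the
try-UPDATE/fall-back-to-INSERT (and vice versa) logic mirrors the
fail-over behaviour described above, but the table, column, and
function names here are illustrative, not the stock FreeRADIUS schema
or queries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE radacct (
        acctuniqueid  TEXT NOT NULL UNIQUE,  -- unique index: one row per session
        acctstarttime TEXT,
        acctstoptime  TEXT
    )
""")

def accounting_stop(sid, stop):
    # Stop packet: try the UPDATE first; if no row exists yet
    # (start not processed), INSERT as the fail-over measure.
    cur = con.execute(
        "UPDATE radacct SET acctstoptime = ? WHERE acctuniqueid = ?",
        (stop, sid))
    if cur.rowcount == 0:
        con.execute(
            "INSERT INTO radacct (acctuniqueid, acctstoptime) VALUES (?, ?)",
            (sid, stop))

def accounting_start(sid, start):
    # Start packet: try the INSERT; the unique index makes it fail when
    # the stop already created the row, so fall back to the UPDATE.
    try:
        con.execute(
            "INSERT INTO radacct (acctuniqueid, acctstarttime) VALUES (?, ?)",
            (sid, start))
    except sqlite3.IntegrityError:
        con.execute(
            "UPDATE radacct SET acctstarttime = ? WHERE acctuniqueid = ?",
            (start, sid))

# Packets processed out of order: the stop arrives at the DB first.
accounting_stop("sess-1", "10:00:05")
accounting_start("sess-1", "10:00:00")

rows = con.execute(
    "SELECT acctuniqueid, acctstarttime, acctstoptime FROM radacct").fetchall()
print(rows)  # → [('sess-1', '10:00:00', '10:00:05')]  -- one complete row
```

  Without the UNIQUE constraint, step 6's INSERT would succeed and
leave the second, never-completed "phantom" row that Matthias observed.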

  Alan DeKok.


More information about the Freeradius-Users mailing list