Cannot control attribute ordering via "rlm_perl"

Claude Brown Claude.Brown at vividwireless.com.au
Mon Jan 30 04:30:11 CET 2012


> 
>   I'd normally just put users into SQL.
> 

Yes - this was our default approach.  It made the most sense to us initially.

rlm_sql
- highly dynamic
- needs non-trivial skill to configure correctly for performance
- needs extra h/w to scale to white-hot performance

rlm_files
- not so dynamic (regular reloads are a reasonable trade-off)
- trivial to configure for performance
- unlikely to need extra h/w for white-hot performance

Our custom module
- highly dynamic
- trivial to configure for performance
- unlikely to need extra h/w for white-hot performance

:)


> 
> > FreeRADIUS core is very stable. But MySQL adds instability we have been
> unable to identify or reproduce in our environment.
> 
>   That's odd.  While MySQL isn't perfect, I have successfully used it in
> systems with 100's of transactions/s.  There was a VoIP provider ~8
> years ago using it with ~1K authentications/s.
> 

I too am surprised that we had these stability issues.  We use MySQL for pretty much everything we do - web sites, customer management, inventory, real-time network analytics, usage accounting, etc.  Some of those systems carry significant load - possibly more than RADIUS - and we have had no stability problems with them.


> 
>   MySQL does have concurrency issues.  But if you split it into
> auth/acct, most of those go away.  i.e. use one SQL module for
> authentication queries.  Use a *different* one for accounting inserts.
> 

Yes, we did this.  We have two authentication servers and two accounting servers, with hash-based load-balancing proxies at the front.
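
For anyone wanting to replicate the front-end, a hash-based pool can be sketched in proxy.conf (FreeRADIUS 2.x syntax); the server names here are hypothetical, and keyed-balance hashes on the Load-Balance-Key attribute so that requests with the same key always reach the same home server:

```
home_server_pool auth_pool {
	# keyed-balance: hash on Load-Balance-Key (set it in pre-proxy,
	# e.g. from User-Name) so a given user always hits the same server.
	type = keyed-balance
	home_server = auth1
	home_server = auth2
}
```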

This gave us a huge performance boost - and met our performance goals. But the stability issue just never went away.

>   If you also use the decoupled-accounting method (see
> raddb/sites-available), MySQL gets even faster.  Having only one process
> doing inserts can speed up MySQL by 3-4x.

Yes, we did this too.  It also helped, but it was still getting too far behind in some situations, so we've replaced it with batch-loading from flat files.
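
As a sketch of what batch-loading can look like with MySQL - the file path and column list here are hypothetical and depend entirely on how the batch files are written out:

```sql
-- Bulk-load one batch file into the standard radacct table.
-- Path and column list are illustrative only.
LOAD DATA INFILE '/var/log/radius/acct-batch.csv'
INTO TABLE radacct
FIELDS TERMINATED BY ','
(acctsessionid, username, nasipaddress, acctstarttime, acctstoptime,
 acctinputoctets, acctoutputoctets);
```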

> 
>   Using a database WILL be faster than reading the file system.
> 

Nah :)

Ignoring the effort needed to configure the DB correctly, most DBMSs still run as a separate server process.  That requires IPC (probably over a network) and OS context switches between processes.
 
In contrast, fopen, fgets, and fclose only require a switch into kernel mode and back.  If the block-buffer cache has the data (it usually will), the process may not even enter a wait state.

>
>   You can do the same kind of thing with SQL.  Simply create a table,
> and do:
> 
>    update request {
>       My-Magic-Attr = "%{sql: SELECT .. from ..}"
>    }
> 

This is a very cool idea - I wish we had tried it!  It would have let us move the rlm_sql processing into post-auth, and that might have made a huge difference.
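
For the archives, a sketch of what that might look like in the post-auth section - the table and column names here are made up, not a real schema:

```
post-auth {
	update reply {
		# Hypothetical table/columns; the %{sql:...} xlat runs the
		# query through the configured sql module instance.
		Session-Timeout = "%{sql: SELECT session_timeout FROM user_limits WHERE username = '%{SQL-User-Name}'}"
	}
}
```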


> 
>   That's just weird.  SQL should be fine, *if* you design the system
> carefully.  That's the key.
> 

Yes. I contrast that with a trivial "C" module and some local files.

> 
>   It's not a race condition, it's lock contention.
> 

I don't *know* if it was a race condition, but I know it wasn't lock contention. As the threads gradually got lost we would look in the database to find corresponding stalled queries. None would be present.

> 
>   If it works for you...
> 
>   But it's really just a re-implementation of a simple SQL table.

Functionally, yes.  The benefits are more in simplicity of configuration and performance per server-dollar - and, for us, it sidesteps the stability issue.

Thanks for all your help.

Cheers,

Claude.






More information about the Freeradius-Users mailing list