FreeRADIUS Accounting Logging to Two Separate Locations Simultaneously

Chris Decker csd126 at psu.edu
Fri Sep 6 03:41:26 CEST 2013


Arran - Ignore my "What would happen to the FreeRADIUS processes…" question - I meant to delete that before sending my message.


On Sep 5, 2013, at 9:34 PM, Chris Decker <csd126 at psu.edu> wrote:

> Arran,
> 
> Thank you for taking the time to so clearly lay things out - it seems like rlm_replicate will do exactly what we want!
> 
> I'm going to look into using redis, as it is supported by logstash out-of-the-box and I'm guessing I'll get the benefit of 'guaranteed delivery'.  What would happen to the FreeRADIUS processes should my client be unable to connect back to the redis 'server' (for whatever reason) for an extended period of time?  Also, should I be nervous about using the redis module in production given the 'Experimental' redis module description in the 2.1.1 changelog?
> 
> 
> 
> 
> Thanks,
> Chris
> 
> 
> P.s. My apologies for replying via the digest - you replied before I had time to switch off of digests.
> 
> 
> 
>> Date: Thu, 5 Sep 2013 19:11:35 +0100
>> From: Arran Cudbard-Bell <a.cudbardb at freeradius.org>
>> To: FreeRadius users mailing list
>> 	<freeradius-users at lists.freeradius.org>
>> Subject: Re: FreeRADIUS Accounting Logging to Two Separate Locations
>> 	Simultaneously
>> Message-ID: <E1C61C30-B39E-4D42-9532-1B113DBC29DB at freeradius.org>
>> Content-Type: text/plain; charset=us-ascii
>> 
>> 
>> On 5 Sep 2013, at 18:29, Chris Decker <csd126 at psu.edu> wrote:
>> 
>>> All,
>>> 
>>> I could use some help in understanding my options for the following scenario:
>>> In our environment, FreeRADIUS currently writes its Accounting logs to the local drive - one file per authorized client.  In addition to the local logging, the Security group wants the Accounting logs sent to their logging cluster (in real-time) so they can put them in their elasticsearch database and respond to incidents.
>> 
>> Well, you don't want the main log file from the daemon, which makes this easier - that file can only go to one place.
>> 
>> There are four types of modules you could use for this:
>> 	- linelog
>> 	- detail
>> 	- replicate
>> 	- the db modules (ldap, sql, redis)
>> 
>> Linelog can log to files or syslog; you construct the format lines using static text and attributes.
>> Detail can only log to files; it just dumps the contents of an attribute list to a file.
>> Replicate fires and forgets a copy of the Accounting-Request to a remote server.
>> The DB modules just log to a table.
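>> As a rough sketch of what a linelog instance looks like (the filename and format string here are illustrative, not from a stock config):
>> 
>>     # modules section of radiusd.conf
>>     linelog {
>>         filename = ${logdir}/linelog-accounting
>>         format = "%t : %{Acct-Status-Type} : %{User-Name} : %{Acct-Session-Id}"
>>     }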
>> 
>> You can list any combination of those modules in the accounting section of the server to write to multiple destinations.
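>> For example (assuming you've defined module instances with these names):
>> 
>>     accounting {
>>         detail
>>         linelog
>>         sql
>>     }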
>> 
>> It's generally sensible to log one copy of the accounting packets to disk on the box where they were received; most people use the detail module for this.
>> 
>> For the other consumers, if they want off-box logging and don't want syslog, forward them a copy of the packet using rlm_replicate.  This copies the incoming packet to another destination.  It doesn't block, and doesn't wait for a response, meaning it will be affected by packet loss.  But that shouldn't be an issue on a campus network if you set the QoS priorities correctly, and hey, at least no congestive failure.
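>> A minimal sketch of the replicate setup (the realm name, address, and secret are placeholders you'd substitute for your own):
>> 
>>     # proxy.conf
>>     home_server security_log {
>>         type = acct
>>         ipaddr = 192.0.2.10
>>         port = 1813
>>         secret = changeme
>>     }
>>     home_server_pool security_pool {
>>         type = fail-over
>>         home_server = security_log
>>     }
>>     realm security.example.org {
>>         acct_pool = security_pool
>>     }
>> 
>>     # accounting section of the default site
>>     accounting {
>>         detail
>>         update control {
>>             Replicate-To-Realm = "security.example.org"
>>         }
>>         replicate
>>     }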
>> 
>> For consuming those packets at the other end, you can use another instance of FreeRADIUS (and configure it to not respond), or radsniff can be used to pick them off the wire with libpcap and output them in something very similar to detail format.
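>> Something along these lines (the interface name and secret are placeholders):
>> 
>>     radsniff -i eth0 -s changeme "dst port 1813"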
>> 
>> I've adopted radsniff as a bit of a pet project until FreeRADIUS 3.0.0 is released (we're currently in feature freeze, so I needed something to hack on).  So if you want additional features, like outputting packet 'signatures' to syslog, and are willing to test the code, then I'd be happy to add them in.
>> 
>>> My question: What is the best way to make both the Ops and Security groups happy given the below limitations:
>>> - The Security group does not want to pull the logs from MySQL, as they want to use logstash/elasticsearch and this would just complicate things.
>> 
>> Yeah and who wants to manage SQL tables with millions of rows, eww.
>> 
>>> - The Ops group wants to avoid syslog because they fear syslog could block, causing their production FreeRADIUS servers to eventually stop responding to requests.
>> 
>> 
>> Ok.
>> 
>>> The options we are exploring, in order of preference:
>>> 1. "Robust Accounting" - the Ops team believes there is a way to have the logs written to two locations simultaneously - locally and remotely, and if the remote connection is lost it does not impact operations.  Is this possible?  Does anyone have a sample config they could share?
>> 
>> Um, that's a pretty basic feature of the server: just list multiple modules in the accounting section.
>> 
>>> 2. Re-configure FreeRADIUS to write to one giant log-file, rotated hourly.  A script would then essentially 'tail -f' the log file and stream the logs to the Security group (and would handle the hourly filename changes obviously).
>> 
>> Sure. Unlike core logging, modules will re-open the file handle each time they write an entry.  This is nice because you can just move the files out of the way at rotate time, and not so nice because it's slow.  Whether that's OK depends on your load.
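>> If you go that route, you can get the hourly filename change for free by putting the date and hour in the detail filename (the path is illustrative):
>> 
>>     detail {
>>         detailfile = ${radacctdir}/detail-%Y%m%d-%H
>>     }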
>> 
>>> 3. Re-configure FreeRADIUS to log to syslog, and have syslog write to a local file AND send remotely to the Security group.  The Ops group wants to avoid syslog if at all possible.
>> 
>> Ok.
>> 
>>> 4. Re-configure FreeRADIUS to also log to MySQL.  The Security group would then have to figure out a way to pull the data out in near-real time and insert it into their own database, which they would like to avoid.
>>> 
>> 
>> Nah...
>> 
>> Replicate the packet stream, let them do whatever they want with it.  That's usually the easiest way to solve these sorts of issues.
>> 
>> -Arran
>> 
>> Arran Cudbard-Bell <a.cudbardb at freeradius.org>
>> FreeRADIUS Development Team
> 


