buffered_sql problem

Ricardo Larrañaga ricardo.larranaga at gmail.com
Wed Aug 12 20:37:44 CEST 2015


Hmm... I am not sure that is the same problem.
In my case, it is clear that the session was recorded to the SQL database
before the Accounting-Start got there, and since the detail file is processed
sequentially, I am not sure how a delay could cause the issue.

I guess the main question for me would be:

Should the server be able to recover from a duplicate session packet on a
buffered_sql server, or not?

Right now, it looks like when that happens, the server is not able to process
any more packets from that detail file until it is restarted.
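
(For reference, this is roughly how I assume Alan's workaround from further
down the thread would be applied to the sql call in the accounting section of
the buffered-sql virtual server. The instance name "sql" and the priority
values are illustrative, not taken from my actual config.)

 accounting {
         # Alan's suggested overrides for the sql module's return codes
         sql {
                 invalid = 2
                 fail = 2
         }
         # If the insert failed (e.g. a duplicate record) or did nothing,
         # return 'ok' so the detail reader moves on to the next packet.
         if (fail || noop || invalid) {
                 ok
         }
 }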

Does anyone have any ideas?
Thanks.
Regards

On Wed, Aug 12, 2015 at 2:41 PM, Jorge Pereira <jpereiran at gmail.com> wrote:

> Hi,
>
>   I have a similar problem in my current setup. My approach was to
> replicate the 'detail' file to another directory,
> as in the example below.
>
> -----
> root at lab:/etc/freeradius# grep -v -E "^(.*#|$)"
> mods-enabled/detail-singlebucket
> detail detail-singlebucket {
>         filename = ${radacctdir}/bucket/detail
>         permissions = 0600
>         header = "%t"
>         locking = yes
> }
> root at lab:/etc/freeradius#
> -----
>
> That module is called in the 'accounting' section of my
> $raddb/sites-enabled/accounting, and I have another virtual server,
> described below, that listens on and processes that 'detail' file.
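>
> Roughly like this (the sql call is just a placeholder for whatever else the
> default accounting section already invokes):
>
> -----
> accounting {
>         # normal accounting into 'radiusaccounting'
>         sql
>         # additionally write every accounting packet to the bucket detail file
>         detail-singlebucket
> }
> -----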
>
> -----
> root at lab:/etc/freeradius# grep -v -E "^(.*#|$)"
> sites-available/buffered-sql-accounting
> server buffered-sql-accouting {
>         listen {
>                 type = detail
>                 filename = "${radacctdir}/bucket/detail"
>                 load_factor = 10
>                 poll_interval = 5
>                 retry_interval = 30
>         }
>         preacct {
>
>         }
>         accounting {
>                 # this is a copy of sql/accounting, but inserting into the
> table 'radiusaccounting_buffered' (same schema as radiusaccounting)
>                 -sql_accounting-tb_buffered
>         }
> }
> root at lab:/etc/freeradius#
> -----
>
> The problem is that the accounting records from the 'buffered' path have a
> big delay, as you can see by comparing the two queries below.
>
> ------
> mysql> select acctsessionid,acctstarttime from radiusaccounting order by
> radacctid DESC LIMIT 10;
> +------------------------+---------------------+
> | acctsessionid          | acctstarttime       |
> +------------------------+---------------------+
> | 166B5A00045CE955CB50BD | 2015-08-12 16:57:19 |
> | 166B5A00045CDB55CB50AE | 2015-08-12 16:57:03 |
> | 166B5A00045BAC55CB4F82 | 2015-08-12 16:52:02 |
> | 166B5A000459C855CB4DEA | 2015-08-12 16:45:14 |
> | 166B5A0004556755CB49B9 | 2015-08-12 16:27:21 |
> | 166B5A000453FE55CB4859 | 2015-08-12 16:21:33 |
> | 166B5A000453E655CB4843 | 2015-08-12 16:21:07 |
> | 166B5A0004537F55CB47C5 | 2015-08-12 16:19:01 |
> | 166B5A000452D555CB472D | 2015-08-12 16:16:30 |
> | 166B5A0004515855CB45A1 | 2015-08-12 16:09:53 |
> +------------------------+---------------------+
> 10 rows in set (0.08 sec)
>
> mysql> select acctsessionid,acctstarttime from radiusaccounting_buffered
> order by radacctid DESC LIMIT 10;
> +------------------------+---------------------+
> | acctsessionid          | acctstarttime       |
> +------------------------+---------------------+
> | 166B5A0004481455CB3D4C | 2015-08-12 15:34:21 |
> | 166B5A0004462155CB3BAB | 2015-08-12 15:27:23 |
> | 166B5A0004420355CB37FD | 2015-08-12 15:11:43 |
> | 166B5A000440F755CB3705 | 2015-08-12 15:07:33 |
> | 166B5A00043F4C55CB3571 | 2015-08-12 15:00:50 |
> | 166B5A00043E8555CB34D5 | 2015-08-12 14:58:13 |
> | 166B5A00043D9155CB33E6 | 2015-08-12 14:54:15 |
> | 166B5A00043B6955CB31C3 | 2015-08-12 14:45:07 |
> | 166B5A0004389F55CB2F46 | 2015-08-12 14:34:30 |
> | 166B5A0004385D55CB2EFE | 2015-08-12 14:33:18 |
> +------------------------+---------------------+
> 10 rows in set (0.01 sec)
>
> mysql>
> ------
>
> I believe that the 'buffered' scheme is much slower than the default approach.
> Does anyone have suggestions?
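>
> One thing I may still try is tuning the detail listener itself; the value
> below is only illustrative, not something I have verified:
>
> -----
> listen {
>         type = detail
>         filename = "${radacctdir}/bucket/detail"
>         # replay the detail file more aggressively than the 10 configured above
>         load_factor = 50
>         poll_interval = 5
>         retry_interval = 30
> }
> -----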
>
>
> --
> Jorge Pereira
>
> On Wed, Aug 12, 2015 at 10:02 AM, Ricardo Larrañaga <
> ricardo.larranaga at gmail.com> wrote:
>
> > Hey Alan:
> > I was wondering if you could help me out a little bit more with this
> > issue.
> > The problem repeated itself last week.
> >
> > I tracked it to a session that only lasted 1 second. For this session I
> > had a complete accounting record in my database when a start packet came
> > in. This is the first packet in my detail file, and after it, the detail
> > file started increasing in size, and my database did not receive any more
> > accounting stops.
> >
> > I remember you mentioning that when there are any "nasty" packets in the
> > detail file, the server gets stuck and you need to edit the file manually
> > to remove the packets.
> > Last time I thought that wasn't the case, because when I increased the
> > load factor in my buffered_sql server and restarted it, the detail file
> > got processed completely.
> >
> > But since this happened again, I guess the server does get stuck.
> > What I do not get is how just restarting the server solves the issue
> > (that also solved it this time). Well, I also increased the load
> > factor, but I would not expect that to have any impact on the server logic.
> >
> > Is this expected behavior? Shouldn't the server be able to process the
> > packet without a restart, since it clearly can process it when you
> > restart it?
> >
> > I am just looking for a way to deal with this issue without manual
> > intervention.
> >
> > Thanks a lot!
> > Regards
> >
> >
> > On Mon, Aug 3, 2015 at 12:40 PM, Ricardo Larrañaga <
> > ricardo.larranaga at gmail.com> wrote:
> >
> > > Hi Alan:
> > > Actually, I was about to answer my own question.
> > > So, it looks like the server keeps processing the file, even if a
> > > packet had a failed query.
> > > Somehow, my server started getting delayed processing packets from the
> > > detail file (I need to investigate why).
> > > So what I did just now was increase the load parameter on buffered_sql,
> > > and after a period of high CPU load, the server caught up, deleted the
> > > file and showed me sessions from today.
> > > What version are you using? I don't really have anything configured to
> > > nullify the duplicated records, so I am wondering now whether continuing
> > > to process the file even if the insert fails is a new feature?
> > > Thanks.
> > > Regards
> > >
> > >
> > >
> > > On Mon, Aug 3, 2015 at 12:29 PM, <A.L.M.Buxey at lboro.ac.uk> wrote:
> > >
> > >> Hi,
> > >>
> > >> > that is already in the db, so those fail, but my question is, will the
> > >> > server keep trying to insert that record and not try any others in the
> > >> > detail.work file?
> > >>
> > >> Yep. Any nastiness and it gets stuck... and won't proceed. You need
> > >> to nullify it: either stop the server and edit the .work file to remove
> > >> the dodgy record, or configure the server to walk over it... probably
> > >> something hacky like this:
> > >>
> > >>
> > >>  sql {
> > >>          invalid = 2
> > >>          fail = 2
> > >>  }
> > >>  if (fail || noop || invalid) {
> > >>          ok
> > >>  }
> > >>
> > >> alan
> > >
> > >
> > >
> >
>

