buffered_sql problem
Ricardo Larrañaga
ricardo.larranaga at gmail.com
Wed Aug 12 15:02:16 CEST 2015
Hey Alan:
I was wondering if you could help me out a little bit more with this issue.
The problem repeated itself last week.
I tracked it to a session that only lasted 1 second. For this session I had
a complete accounting record in my database when a start packet came in.
That start is the first packet in my detail file, and after it the detail
file kept growing in size and my database did not receive any more
accounting stops.
I remember you mentioning that when there are any "nasty" packets in the
detail file, the server gets stuck and you need to edit the file manually
to remove those packets.
Last time I thought that wasn't the case, because when I increased the load
factor on my buffered-sql server and restarted the server, the detail file
got processed completely.
But since this happened again, I guess the server does get stuck.
What I do not get is why just restarting the server solves the issue (that
is what solved it this time as well). I also increased the load factor, but
I would not expect that to have any impact on the server logic.
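For context, the relevant part of my buffered-sql listener looks roughly
like this (I am quoting from memory, so the path and the values here are
approximate, not my exact config):

listen {
        type = detail

        # Detail file(s) written by the main accounting server
        filename = "${radacctdir}/detail-*"

        # How aggressively the detail reader replays packets; higher is
        # faster. This is the "load factor" I increased.
        load_factor = 50
}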
Is this expected behavior? Shouldn't the server be able to process the
packet without a restart, since it clearly can process it once the server
is restarted?
I am just looking for a way to deal with this issue without manual
intervention.
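In case it is useful, what I am planning to try next is your earlier
suggestion (quoted below) of walking over the bad record, placed around the
sql call in the accounting section of the buffered-sql virtual server.
Something roughly like this (untested, and the exact section layout is my
assumption):

accounting {
        sql {
                # Downgrade "invalid" and "fail" from the duplicate insert
                # so they do not stop the detail reader.
                invalid = 2
                fail = 2
        }
        if (fail || noop || invalid) {
                # Force an overall "ok" so the reader moves on to the
                # next packet in the .work file.
                ok
        }
}

That way a duplicate record that is already in the database should not
block everything queued behind it.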
Thanks a lot!
Regards
On Mon, Aug 3, 2015 at 12:40 PM, Ricardo Larrañaga <
ricardo.larranaga at gmail.com> wrote:
> Hi Alan:
> Actually, I was about to answer my own question.
> So it looks like the server keeps processing the file even if a packet's
> query failed.
> Somehow, my server started falling behind on processing packets from the
> detail file (I need to investigate why).
> So what I did just now was increase the load parameter on buffered-sql,
> and after a period of high CPU load the server caught up, deleted the
> file, and showed me sessions from today.
> What version are you using? I don't really have anything configured to
> nullify the duplicated records, so I am wondering now whether continuing
> to process the file even if the insert fails is a new feature.
> Thanks.
> Regards
>
>
>
> On Mon, Aug 3, 2015 at 12:29 PM, <A.L.M.Buxey at lboro.ac.uk> wrote:
>
>> Hi,
>>
>> > that is already in the db, so those fail, but my question is, will the
>> > server keep trying to insert that record and not try any others in the
>> > detail.work file?
>>
>> yep. Any nastiness and it gets stuck... and won't proceed. You need to
>> nullify it... either stop the server and edit the .work file to remove
>> the dodgy record, or configure the server to walk over it... probably
>> something hacky like this:
>>
>>
>> sql {
>>     invalid = 2
>>     fail = 2
>> }
>> if (fail || noop || invalid) {
>>     ok
>> }
>>
>> alan
>> -
>> List info/subscribe/unsubscribe? See
>> http://www.freeradius.org/list/users.html
>
>
>