Segfault at src/lib/misc.c:1193 in 3.0.4 (3.0.11 looks very similar)

Mike Ely me at mikeely.org
Mon Sep 26 22:13:13 CEST 2016


The configs are complex and have had many hands on them over a long
period of time. I am attempting to migrate these configs (some of them
written by people who are no longer at my workplace) from a working
2.x system that proxies numerous devices. To get the migrated configs
to work, I had to move the user and group stanzas into a security{}
block and remove calls to the no-longer-used rlm_acct_unique module,
as sketched below.
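
For reference, this is the shape of that change (a minimal sketch with
placeholder values; the real radiusd.conf is redacted). In 2.x the
directives sat at the top level of radiusd.conf:

user = radiusd
group = radiusd

while 3.0.x expects them inside a security{} block:

security {
	user = radiusd
	group = radiusd
}

The rlm_acct_unique module is gone in 3.0.x; as far as I can tell its
Acct-Unique-Session-Id behaviour moved into the acct_unique policy, so
the old "acct_unique" references in preacct had to go.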

The configurations would require a fair amount of redaction to share
publicly, which is why I asked the more general question of where to
look within them. Insulting anyone was nowhere in my mind; I'm just
trying to figure out an unexpected crash.

Here's the backtrace:
#0  fr_ipaddr_cmp (a=0x20, b=0x7f3965a30510) at src/lib/misc.c:1193
#1  0x00007f3967740426 in check_for_realm (instance=<optimized out>, request=0x7f396f3cda40, returnrealm=0x7f3965a30580)
    at src/modules/rlm_realm/rlm_realm.c:276
#2  0x00007f396774052d in mod_preacct (instance=<optimized out>, request=0x7f396f3cda40)
    at src/modules/rlm_realm/rlm_realm.c:399
#3  0x00007f396d5f3109 in call_modsingle (request=0x7f396f3cda40, sp=0x7f396f3b3720, component=RLM_COMPONENT_PREACCT)
    at src/main/modcall.c:318
#4  modcall_recurse (request=0x7f396f3cda40, component=RLM_COMPONENT_PREACCT, depth=1, entry=entry@entry=0x7f3965a309c8)
    at src/main/modcall.c:587
#5  0x00007f396d5f2a70 in modcall_child (request=<optimized out>, component=<optimized out>, depth=<optimized out>,
    entry=0x7f3965a309b0, c=<optimized out>, result=0x7f3965a30824) at src/main/modcall.c:420
#6  0x00007f396d5f2c67 in modcall_recurse (request=request@entry=0x7f396f3cda40,
    component=component@entry=RLM_COMPONENT_PREACCT, depth=depth@entry=0, entry=entry@entry=0x7f3965a309b0)
    at src/main/modcall.c:798
#7  0x00007f396d5f3e90 in modcall (component=component@entry=RLM_COMPONENT_PREACCT, c=c@entry=0x7f396f3b2da0,
    request=request@entry=0x7f396f3cda40) at src/main/modcall.c:1138
#8  0x00007f396d5f12bf in indexed_modcall (comp=comp@entry=RLM_COMPONENT_PREACCT, idx=idx@entry=0, request=0x7f396f3cda40)
    at src/main/modules.c:909
#9  0x00007f396d5f1e3f in module_preacct (request=<optimized out>) at src/main/modules.c:1867
#10 0x00007f396d5e1c65 in rad_accounting (request=0x7f396f3cda40) at src/main/acct.c:56
#11 0x00007f396d601677 in request_running (request=0x7f396f3cda40, action=<optimized out>) at src/main/process.c:1498
#12 0x00007f396d5fb617 in request_handler_thread (arg=0x7f396f3b80f0) at src/main/threads.c:678
#13 0x00007f396b82adc5 in start_thread (arg=0x7f3965a31700) at pthread_create.c:308
#14 0x00007f396b0daced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
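
A fault address as small as 0x20 usually means a structure member was
reached through a NULL base pointer: &null_ptr->member evaluates to
the member's offset, and the callee then faults on the first
dereference. A minimal illustration (the struct below is invented, not
the real FreeRADIUS home server layout):

#include <stddef.h>
#include <stdio.h>

/* Invented layout: a 32-byte field first, so the next member
 * lands at offset 0x20, matching the bogus "a" above. */
struct server_like {
	char name[32];		/* occupies offsets 0x00..0x1f */
	int  ipaddr;		/* lands at offset 0x20 */
};

int main(void)
{
	printf("offsetof(ipaddr) = %#zx\n",
	       offsetof(struct server_like, ipaddr));	/* prints 0x20 */
	/* Passing &server->ipaddr while server == NULL hands the
	 * callee the bogus pointer 0x20, which faults on the first
	 * read, exactly like fr_ipaddr_cmp (a=0x20, ...) above. */
	return 0;
}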

From the bt full output, this frame was interesting:
#1  0x00007f3967740426 in check_for_realm (instance=<optimized out>, request=0x7f396f3cda40, returnrealm=0x7f3965a30580)
    at src/modules/rlm_realm/rlm_realm.c:276
        i = 1
        my_ipaddr = {af = 2, ipaddr = {ip4addr = {s_addr = 167775328}, ip6addr = {__in6_u = {
              __u6_addr8 = "`\f\000\064\000\376:o9\177\000\000-402",
              __u6_addr16 = {3168, 13312, 65024, 28474, 32569, 0, 13357, 12848},
              __u6_addr32 = {167775328, 1866137088, 32569, 167775328}}}}, prefix = 54 '6', scope = 1664759088}
        username = 0x0
        vp = <optimized out>
        realm = 0x7f396dfcf890
        namebuf = <optimized out>
        realmname = <optimized out>
        ptr = <optimized out>
        returnrealm = 0x7f3965a30580
        request = 0x7f396f3cda40
        instance = <optimized out>
        inst = <optimized out>
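
Those locals fit the NULL-pointer reading: username = 0x0 (the packet
apparently carried no User-Name), af = 2 is AF_INET, and i = 1
suggests the crash came on the second entry of whatever server list
check_for_realm() was walking (presumably the realm's acct pool, given
the preacct frames above). That would make frame #0's a=0x20 the
result of computing &servers[1]->ipaddr from a NULL servers[1]. If
that is what is happening, a guard of the following shape would avoid
the crash (a sketch only: the types are stand-ins, not the real
FreeRADIUS structures, and the real question remains why a pool slot
is NULL at all):

#include <stdio.h>

/* Stand-in types, NOT the real FreeRADIUS home_server/pool layout. */
typedef struct {
	int          af;	/* address family; 2 == AF_INET */
	unsigned int v4;	/* IPv4 address (host order in this sketch) */
} sketch_ipaddr_t;

typedef struct {
	char            name[32];
	sketch_ipaddr_t ipaddr;
} sketch_server_t;

typedef struct {
	int              num_servers;
	sketch_server_t *servers[8];
} sketch_pool_t;

/* Same shape as the comparison that faulted at misc.c:1193. */
static int sketch_ipaddr_cmp(sketch_ipaddr_t const *a, sketch_ipaddr_t const *b)
{
	if (a->af < b->af) return -1;	/* the line that crashed */
	if (a->af > b->af) return +1;
	return (a->v4 > b->v4) - (a->v4 < b->v4);
}

/* Returns 1 if src matches a configured server.  NULL slots are
 * skipped instead of computing &NULL->ipaddr and handing that to
 * the comparison function. */
static int from_home_server(sketch_pool_t const *pool, sketch_ipaddr_t const *src)
{
	int i;

	for (i = 0; i < pool->num_servers; i++) {
		if (!pool->servers[i]) continue;	/* the missing guard */
		if (sketch_ipaddr_cmp(&pool->servers[i]->ipaddr, src) == 0) return 1;
	}
	return 0;
}

int main(void)
{
	sketch_server_t home1 = { "home1", { 2, 0x0a000c60u } };
	sketch_pool_t   pool  = { 2, { &home1, NULL } };	/* slot 1 NULL, as i = 1 hints */
	sketch_ipaddr_t src   = { 2, 0x0a000c60u };

	printf("from_home_server: %d\n", from_home_server(&pool, &src));
	return 0;
}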

On 09/26/2016 12:53 PM, Alan DeKok wrote:
>
>> On Sep 26, 2016, at 3:45 PM, Mike Ely <me at mikeely.org> wrote:
>>
>> Behavior is similar on two production hosts running CentOS 7.2. RADIUS is set up as proxy only, passes authentication successfully for a time, then abruptly segfaults. I attached gdb to the core file and found this:
>>
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>> Core was generated by `/usr/sbin/radiusd -d /etc/raddb'.
>> Program terminated with signal 11, Segmentation fault.
>> #0  fr_ipaddr_cmp (a=0x20, b=0x7f3965a30510) at src/lib/misc.c:1193
>> 1193            if (a->af < b->af) return -1;
>>
>>
>> It looks like it's somehow getting the memory address for "a" set to 0x20, which is obviously invalid, and then it segfaults.
>
>   The gdb backtrace would be a lot more useful than just one line.
>
>> Where should I look in the configs for where the realm acct_pool servers are set up, and what would I expect to see in a configuration that would let the server run successfully and then crash abruptly like this?
>
>   Is there any particular reason to ask a question like that?  Do you want help, or are you interested in insulting the developers?
>
>   The server should not crash.  In fact, we've managed to do authentication proxying in 3.0.4 and 3.0.11 for millions of packets without a server crash.
>
>   The default configuration doesn't crash.  Any configuration I've tried doesn't crash like this.
>
>   So... what configuration changes did you make?  Can you describe what you did?
>
>> Obviously let me know what other data to provide for debugging this.
>
>   This is documented.
>
> http://wiki.freeradius.org/project/bug-reports
>
>   Alan DeKok.

