Detail reader concurrency
Alan DeKok
aland at deployingradius.com
Mon May 21 13:14:24 CEST 2018
On May 20, 2018, at 2:45 PM, Nathan Ward <lists+freeradius at daork.net> wrote:
> We are using the detail file reader to relay accounting from our “frontend” servers to a number of backend servers which have different roles.
>
> We write every packet to a detail file per backend, then read them in, and send them on.
>
> One of the servers takes around 80ms to respond to a request, regardless of the request rate - if I send 1/s it responds in 80ms. If I send 100/s, it responds in 80ms. I guess you could just imagine that there is 80ms network latency, which is a problem I imagine is somewhat common.
Yes. I've seen 400ms latency *inside* of corporate networks. That's just sad.
> It looks like the detail reader’s behaviour is to do one packet at a time. I can understand why this would be - it’s certainly a lot simpler! Is there a way to configure it to relay multiple requests at once?
No, sorry.
> The protocol supports this of course with multiple IDs (and some BNGs even do multiple source ports), but, not sure if the detail reader (or, wherever this would need to live) does.
Yes, I'm aware that RADIUS does this. The proxy functionality in v3 supports this, and has supported it for 20 years.
The issue is that if the detail reader is limited to one packet at a time, then it will only proxy one packet at a time. With an 80ms round trip per packet, that caps throughput at roughly 12 packets per second, no matter how fast packets arrive.
> One option open to us of course is to attempt to relay the packet first, and only write to the detail file if relaying fails, which would (I think!) mean multiple requests can be in flight at once. Because we’re sending the requests off to multiple backend servers, this means we’ll be waiting for each of them in turn before responding to the original clients, which is something we want to avoid.
Yes. v3 wasn't really designed to do that.
> I have considered a detached subrequest, and relaying to the slow backend server immediately and not using the detail writer unless the backend is broken - but, I don’t want to respond to the client unless the data is either on the backend server, OR, is written to disk on the frontend.
> I have also considered writing to many detail files based on some sort of hash, and having many readers.. but that is not very elegant.
Nope.
TBH, the only real solution here is to use the v4.0.x branch from github. It's still a work in progress, but you can do things like:
parallel {
    redundant {
        radius1
        detail1
    }
    redundant {
        radius2
        detail2
    }
    redundant {
        radius3
        detail3
    }
    ...
}
That configuration spawns three subrequests which run in parallel. Each subrequest then runs its "redundant" section, which first tries to proxy to the home server (e.g. "radius1"). If that fails, it writes to a detail file specifically for that home server (e.g. "detail1").
You will still have to configure each back-end, along with the detail writers and readers. But it *will* work.
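As a rough sketch of what one backend's writer/reader pair could look like, here is v3-style syntax (the module name "detail1" and the directory paths are placeholders, and the exact v4 syntax is still in flux, so treat this only as an illustration):

    # Writer module: failed proxies for "radius1" get queued here
    detail detail1 {
        filename = "${radacctdir}/detail1/detail-%Y%m%d"
    }

    # Reader: picks up the queued packets and re-injects them
    listen {
        type = detail
        filename = "${radacctdir}/detail1/detail-*"
        load_factor = 10
    }

Each backend would get its own such pair, so a failure of one backend only queues packets destined for that backend.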
And v4 also supports more than one packet outstanding at a time. So it should be much better than v3 for this kind of configuration.
Alan DeKok.
More information about the Freeradius-Users mailing list