failover and load balancing
Meyers, Dan
d.meyers at lancaster.ac.uk
Mon Apr 20 10:50:46 CEST 2009
> -----Original Message-----
> From: freeradius-users-
> bounces+d.meyers=lancaster.ac.uk at lists.freeradius.org
> [mailto:freeradius-users-
> bounces+d.meyers=lancaster.ac.uk at lists.freeradius.org] On Behalf Of
> Kanwar Ranbir Sandhu
> Sent: 17 April 2009 21:52
> To: freeradius-users at lists.freeradius.org
> Subject: RE: failover and load balancing
>
<snip>
>
> I also believe you're saying that I could load balance, too. In this
> case, auth and accounting could be done on both machines, and I would
> still have one freeradius server in use (primary), from the NAS' point
> of view.
There are probably many better ways of doing it, but the simplest way to
load balance across multiple FreeRADIUS servers is just to set each
server as 'primary' on an equal number of NASes, i.e. 2 servers = half
your NASes with server A as primary, half with server B as primary. A
NAS will always talk to its primary server if it can possibly manage it.
If all NASes have the same IP for their primary server then you'll have
to start doing funky things external to both the NAS and FreeRADIUS to
load balance nicely. I guess you could proxy from one server to the
other for some requests using unlang rules or similar, but by that point
you might as well just handle it on the server it's already hit.
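For reference, the proxy-pool way of doing it looks roughly like this. This is only a sketch in FreeRADIUS 2.x proxy.conf syntax; the server names, addresses and secret are placeholders, not a tested config:

```
home_server radius2 {
        type = auth+acct
        ipaddr = 192.0.2.2
        port = 1812
        secret = testing123
}

home_server_pool lb_pool {
        # round-robins requests between the two servers
        type = load-balance
        home_server = localhost
        home_server = radius2
}

realm example.com {
        auth_pool = lb_pool
        acct_pool = lb_pool
}
```

But as I say, by the time a request has hit one server you may as well just process it there.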
> In this scenario, don't the mysql databases on each machine have to be
> kept in sync? I've assumed that I would have to present one logical
> database to the freeradius server, even if the database itself is
> running on multiple mysql servers. That's why I mentioned "database
> cluster". I don't know if my assumption is correct.
MySQL has replication built in. You can run one server as the master and
as many others as you want as slaves. Slaves can't be written to, but
can be read from. We're actually using this setup for redundancy in a
system we're currently developing. 2 databases within a single MySQL
process per server (each of which also runs FreeRADIUS). 1 database is
replicated across all the servers, with one server acting as the master.
The other database is unique to each server, not replicated. We have a
script that runs on the master server every 5 seconds, pulls data from
all the 'writable' (i.e. non-replicated) dbs on all the slaves, and
writes it to the master replicated db. All systems read data from their
local copy of the replicated DB, and write to their local non-replicated
DB. It means we can have data that is up to 5 seconds out of date, but
at any one point in time all FreeRADIUS servers have exactly the same
view when they read, so it isn't too much of a problem (for us).
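The merge job is essentially just "drain each server's writable buffer into the replicated db". A minimal sketch of that logic, with sqlite standing in for MySQL and all table/column names invented for illustration:

```python
import sqlite3

def merge_slave_logs(master, slaves):
    """Pull rows from each server's local writable db and append them
    to the master's replicated table, then drain the local buffer."""
    for slave in slaves:
        rows = slave.execute(
            "SELECT username, event FROM local_log").fetchall()
        master.executemany(
            "INSERT INTO replicated_log (username, event) VALUES (?, ?)",
            rows)
        slave.execute("DELETE FROM local_log")  # buffer now drained
    master.commit()

# demo: in-memory databases standing in for the real MySQL instances
master = sqlite3.connect(":memory:")
master.execute("CREATE TABLE replicated_log (username TEXT, event TEXT)")

slaves = [sqlite3.connect(":memory:") for _ in range(2)]
for s in slaves:
    s.execute("CREATE TABLE local_log (username TEXT, event TEXT)")

slaves[0].execute("INSERT INTO local_log VALUES ('alice', 'Start')")
slaves[1].execute("INSERT INTO local_log VALUES ('bob', 'Stop')")

merge_slave_logs(master, slaves)
print(master.execute("SELECT COUNT(*) FROM replicated_log").fetchone()[0])
# -> 2
```

The real script talks to MySQL over the network rather than in-process, but the shape is the same: read, append to master, clear the buffer.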
Please note that we're doing this using rlm_perl and having 2 database
handles per perl thread, one for reads and one for writes. I'm not sure
if you can separate out the read and write databases like this if you're
just using rlm_sql or similar.
If you do far more reads than writes (we're writing a lot of logging
data back, but if we weren't, reads would far outnumber writes) then you
might want to consider the simpler system of reading from the local
database and just always writing back to the master. You do then run
into the issue of the master being a single point of failure for writes,
whereas with our system no data is lost; it's just buffered until the
master comes back online and the script runs again.
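The two-handle split looks something like the following, regardless of whether it's rlm_perl or anything else holding the handles. Again a sketch only, with sqlite in place of MySQL and the table names made up:

```python
import sqlite3

# Two handles, as you'd hold one pair per worker thread:
local_read = sqlite3.connect(":memory:")    # local replicated copy
master_write = sqlite3.connect(":memory:")  # the single writable master

local_read.execute("CREATE TABLE users (username TEXT)")
local_read.execute("INSERT INTO users VALUES ('alice')")
master_write.execute("CREATE TABLE acct_log (username TEXT)")

# Authorization-style lookups go to the local handle...
row = local_read.execute(
    "SELECT username FROM users WHERE username = ?", ("alice",)).fetchone()

# ...while accounting-style writes always go to the master handle.
if row:
    master_write.execute("INSERT INTO acct_log VALUES (?)", row)
    master_write.commit()
```

The failure mode is as described above: if the master is down, writes fail until it comes back, while reads carry on locally.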
Postgres does supposedly have a version in beta for full master-master
replication, but every time we've tried to get it running it's crashed
on us as soon as we tried to actually write any data. Postgres in
general seemed much slower than MySQL for reading the data we needed as
well.
--
Dan Meyers
Network Specialist, Lancaster University
E-Mail: d.meyers at lancaster.ac.uk