Observing a common error in FreeRADIUS with Postgres but no justification for why it happens
Ibrahim Al Mahfooz
ibrahim.nezar at sevennet.net
Sat Sep 2 14:42:42 UTC 2023

> Except the acctuniqueid column is used to index the table. So if you
> delete that column, you don't have indexes. And every select / insert /
> update will require a full table scan.

> There is pretty much zero downside to leaving in columns which you don't
> need. But if you do want to remove columns, don't remove ones which are
> indexed. And don't remove columns which are used as part of the WHERE
> clauses.
>Or maybe "customized" here means something different than the first
paragraph, where it means "deleted columns".

By the way, it's true that my queries are customized, but it's not a big
deal of customization. In fact, I just removed the unused columns, and they
are the ones that have nothing to do with the WHERE clauses or indexes, so
you can consider the original optimized design unchanged.
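
To illustrate, the change was of this kind (the column names here are just
examples from the default schema; the indexed acctuniqueid and every
WHERE-clause column stay untouched):

    -- Example only: drop columns that are neither indexed nor referenced
    -- in any WHERE clause of the accounting queries.
    ALTER TABLE radacct DROP COLUMN connectinfo_start;
    ALTER TABLE radacct DROP COLUMN connectinfo_stop;
    -- acctuniqueid is NOT dropped: it backs the unique index that the
    -- stop/interim queries use to locate the session row.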

> I would recommend not using the same database for FreeRADIUS and for
> report generation. What typically happens is that everything is fine until
> you want to generate a bunch of reports. And then the reporting queries
> use all of the database resources, and FreeRADIUS will get robbed of
> resources.

> Set up DB replication. FreeRADIUS writes to the primary database. That
> replicates to the standby database. And the reports run against the
> standby database.

Yeah, that is in my plan for later. For now I'm spinning up the basic
solution to check performance, and I stumbled on this performance issue
from day 1.
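
When I get to it, I assume something like Postgres logical replication
would do; a minimal sketch (host and role names below are placeholders):

    -- On the primary (requires wal_level = logical in postgresql.conf):
    CREATE PUBLICATION radacct_pub FOR TABLE radacct;

    -- On the reporting replica, after creating a matching radacct table
    -- (logical replication does not copy the DDL):
    CREATE SUBSCRIPTION radacct_sub
        CONNECTION 'host=primary.example.com dbname=radius user=replicator'
        PUBLICATION radacct_pub;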

> That means the database is undersized. i.e. it can't keep up with the
> load.

> Either you need a more powerful database, or you need to make the
> current database more efficient.

Is there any reference article on how to optimize PostgreSQL for
FreeRADIUS, or generic guidelines you believe are useful to start with?
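
For now I'm assuming the generic write-heavy tuning advice applies, and I'd
start from something like this (the values are placeholders that depend
entirely on the host's RAM and disks, not recommendations):

    -- Generic write-heavy starting points; values are guesses, not advice:
    ALTER SYSTEM SET shared_buffers = '4GB';   -- ~25% of RAM; needs a restart
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;
    ALTER SYSTEM SET synchronous_commit = off; -- faster commits, but a crash
                                               -- can lose the last few ms
    SELECT pg_reload_conf();                   -- applies the reloadable ones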

>> The questions to summarize are:
>> 1. Is ~900 TPS and 10ms acceptable DB performance for accounting?

> That is the average case, which isn't relevant. You're looking for the
> performance under load. And if you get errors during peak hours, the
> database is slow, and can't keep up with the load.

True, that's why I will build a pgbench case with the real queries and
measure the latency in ms and the TPS.
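
Roughly along these lines; this is only a sketch, the column list is
trimmed, and the helper sequence is my own addition, so it needs adjusting
to the real schema and queries:

    -- accounting.sql: approximate an Acct Start/Stop pair per transaction.
    -- One-time setup: CREATE SEQUENCE IF NOT EXISTS bench_session_seq;
    INSERT INTO radacct (acctsessionid, acctuniqueid, username,
                         nasipaddress, acctstarttime)
    SELECT s::text, md5(s::text), 'user' || (s % 1000), '10.0.0.1', now()
      FROM nextval('bench_session_seq') AS s;
    UPDATE radacct
       SET acctstoptime = now(), acctsessiontime = 60
     WHERE acctuniqueid = md5(currval('bench_session_seq')::text);
    -- Run with e.g.: pgbench -n -f accounting.sql -c 16 -T 60 radius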

> It doesn't matter a lot. It's mainly to catch extreme corner cases. It
> limits how many packets the server will accept before it starts
> complaining that it's overloaded.

> It should be set to at least 256 * the number of NASes.

> i.e. set it based on inputs: the number of NASes. Don't set it based on
> outputs: SQL connections.

Since I have only two NASes, I will set it to 512 (256 * 2) in this case.
Fair enough?

> The database is slow. It can't handle the load at peak traffic.

> My $0.02 is that if you go back to the default schema + indexes, and then
> add a database replica for the report generation, everything will be fine.

> This kind of thing falls under the common practice of "I spent a bunch of
> effort trying to optimize things, and now things are slower". Well, don't
> do that. The default queries are fine. They're already optimized. The
> default schema is fine. It's already optimized.

Again, I haven't made a structural change to the default queries; I just
removed unused columns. The indexes, and the columns involved in the WHERE
clauses, are left intact. If you take a quick look at the custom queries
you can tell they are actually very minimal. The goal of this change is to
reduce the amount of data stored in the table; since those columns won't be
used, why store them?
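
And as a sanity check that the trimmed table still behaves, I can confirm
the stop/update query keeps hitting the acctuniqueid index, for example
(the index name is assumed from the default schema's UNIQUE constraint):

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
        UPDATE radacct SET acctstoptime = now()
         WHERE acctuniqueid = 'some-unique-id';
    ROLLBACK;  -- EXPLAIN ANALYZE executes the UPDATE, so roll it back
    -- Expect an Index Scan (e.g. on radacct_acctuniqueid_key), not a Seq Scan.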