optimize sqlippool scheme
Fajar A. Nugraha
list at fajar.net
Thu Jan 12 14:33:22 CET 2012
On Thu, Jan 12, 2012 at 8:15 PM, Phil Mayers <p.mayers at imperial.ac.uk> wrote:
>> So in short, for now:
>> - revert your changes
>> - try changing the engine to innodb
>> - if it's still too slow, hire a dba, and/or be prepared to implement
>> mysql cluster (or something like clustrix)
>>
>
> - use postgres
>
> ;o)
I avoided postgres like the plague in the past due to vacuum :)
It has autovacuum now, but the bad image remains. To be fair, it's
probably similar to the way many people avoid mysql due to
myisam-related issues :)
>
> In all seriousness, It's worth noting that postgres does have the advantage
> that "select ... for update" uses row-level locking, not table level. So,
> you can allocate IPs without fear of duplication, transactionally.
Really? Did you personally test it, running the select queries manually?
MySQL is also capable of doing so (at least InnoDB and NDB are), but the
number of rows locked depends on how many rows are scanned. So a query like

SELECT ... LIMIT n FOR UPDATE

ends up locking all the scanned rows (the LIMIT clause pretty much doesn't
matter), in effect taking a table lock, while

SELECT ... WHERE id IN (...) FOR UPDATE

ends up locking only those few rows. This is where NDB truly shines: a
large number of concurrent processes holding row locks (around 128 in my
setup) can run in parallel without interfering with each other, as long
as the selected rows don't overlap (this is where randomization comes
in). The way IDs are limited in my setup right now can cause duplicate
IP addresses to be handed out. It should be possible to fix this, but I
haven't had the time, and my current setup is good enough for now.
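The randomized, narrow-lock approach described above could be sketched like this. This is an illustration only; the column names (id, expiry_time) are assumed from the stock sqlippool radippool table, and the pool-size variable is hypothetical:

```sql
-- Pick a random starting id so concurrent allocators tend to
-- target disjoint rows (@pool_size is an assumed session variable).
SET @cand := FLOOR(1 + RAND() * @pool_size);

BEGIN;

-- Lock only a handful of explicitly named rows; with InnoDB/NDB this
-- takes row locks on just these ids, so allocators whose random
-- candidates don't overlap proceed in parallel.
SELECT id, framedipaddress
  FROM radippool
 WHERE id IN (@cand, @cand + 1, @cand + 2)
   AND expiry_time < NOW()
 FOR UPDATE;

-- ...mark one of the returned rows as allocated, then:
COMMIT;
```

As the thread notes, without an extra uniqueness check two allocators whose random ranges do overlap can still race, which is the duplicate-IP possibility mentioned above.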
--
Fajar