SciDB 16.9 multi-server installation


#1

When I did a fresh install of 16.9, I noticed that the config file defines TWO servers that SciDB tries to connect to at startup. I found some info on this matter in the forum topic "server-N in SciDB".

This is swell and all, except that performance (time-wise) has gone way down. Going through my logs while executing a query, I found a gap of about 5 seconds that seems to be spent on some kind of replication work (if I understand correctly).
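For reference, the config I mean is shaped roughly like this (cluster name, hosts, paths, and the instance-count field are just illustrative placeholders, not my actual values; only the two server-N lines and the nonzero redundancy are the point):

    [mydb]
    server-0=127.0.0.1,1        # coordinator server (placeholder host/instance count)
    server-1=192.168.0.2,1      # second server the 16.9 install added (placeholder)
    redundancy=1                # one replica per chunk, matching "redun: 1" in the log below
    base-port=1239
    base-path=/home/scidb/mydb-data
    install_root=/opt/scidb/16.9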

Here’s a snippet of the log:

2017-02-23 14:44:04,204 [0x7efd23c14700] [DEBUG]: Prepare physical plan was sent out
2017-02-23 14:44:04,208 [0x7efd23911700] [INFO ]: Executing query(0.1487861044187447484): store(build(<val:double>[f=0:9999,2000,0, d=0:127,128,0],double(random()%1000)/1000), ACCOUNT_4); from program: 127.0.0.1:37434/home/scidbtrunk/stage/install/bin/iquery -anq store(build(<val:double>[f=0:9999,2000,0, d=0:127,128,0],double(random()%1000)/1000), ACCOUNT_4) ;
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: getOutputDistribution: returning defaultPartitioning =0x7efcf40014f0
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: execute: _schema distribution: 0x7efd0c007eb0, getOutDist(): 1
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: getOutputDistribution: returning defaultPartitioning =0x7efcf4000fa0
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: getOutputDistribution: returning defaultPartitioning =0x7efcf4000fa0
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: execute: returning array with distribution: 0x7efd0c007eb0, getOutDist(): 1
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: PhysicalBuild: getOutputDistribution: returning defaultPartitioning =0x7efcf40014f0
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: syncBarrier: barrierId = 0
2017-02-23 14:44:04,209 [0x7efd23911700] [DEBUG]: Sending barrier to every one and waiting for 3 barrier messages
2017-02-23 14:44:04,211 [0x7efd23911700] [DEBUG]: All barrier messages received - continuing
2017-02-23 14:44:04,211 [0x7efd23911700] [DEBUG]: syncBarrier: returning
2017-02-23 14:44:04,211 [0x7efd23911700] [DEBUG]: syncBarrier: barrierId = 1
2017-02-23 14:44:04,211 [0x7efd23911700] [DEBUG]: Sending barrier to every one and waiting for 3 barrier messages
2017-02-23 14:44:04,214 [0x7efd23911700] [DEBUG]: All barrier messages received - continuing
2017-02-23 14:44:04,214 [0x7efd23911700] [DEBUG]: syncBarrier: returning
2017-02-23 14:44:04,214 [0x7efd23911700] [DEBUG]: DBArray::DBArray ID=62, UAID=18, ps=1, desc=public.ACCOUNT_4@2<val:double> [f=0:9999 (4611686018427387903:-4611686018427387903):0:2000; d=0:127 (4611686018427387903:-4611686018427387903):0:128] ArrayId: 62 UnversionedArrayId: 18 Version: 2 Flags: 0 Distro: dist: hash ps: 1 ctx: redun: 1 off: {} shift: 0 res: [0, 1, 4294967298, 4294967299] <val:double,EmptyTag:indicator NOT NULL>
2017-02-23 14:44:09,716 [0x7efd331dd700] [DEBUG]: handleReplicaChunk: received eof
2017-02-23 14:44:09,831 [0x7efd23c14700] [DEBUG]: handleReplicaChunk: received eof
2017-02-23 14:44:09,851 [0x7efd2370f700] [DEBUG]: handleReplicaChunk: received eof
2017-02-23 14:44:11,216 [0x7efd23911700] [DEBUG]: PhysicalUpdate::updateSchemaBoundaries: schema on coordinator: public.ACCOUNT_4@2<val:double> [f=0:9999 (4611686018427387903:-4611686018427387903):0:2000; d=0:127 (4611686018427387903:-4611686018427387903):0:128] ArrayId: 62 UnversionedArrayId: 18 Version: 2 Flags: 0 Distro: dist: hash ps: 1 ctx: redun: 1 off: {} shift: 0 res: [0, 1, 4294967298, 4294967299] <val:double,EmptyTag:indicator NOT NULL>
2017-02-23 14:44:11,216 [0x7efd23911700] [DEBUG]: Dimension boundaries updated: public.ACCOUNT_4@2<val:double> [f=0:9999 (0:9999):0:2000; d=0:127 (0:127):0:128] ArrayId: 62 UnversionedArrayId: 18 Version: 2 Flags: 0 Distro: dist: hash ps: 1 ctx: redun: 1 off: {} shift: 0 res: [0, 1, 4294967298, 4294967299] <val:double,EmptyTag:indicator NOT NULL>

As you can see, there's a jump of over 5 seconds between 14:44:04,214 and 14:44:09,716, right where the handleReplicaChunk messages appear.

With the same query, 15.12 executed much faster. So my question is: is there a way to return to a single-server cluster on 16.9?


#2

Yes, of course you can go back to a single node:
set redundancy=0 in config.ini,
remove all server-N lines except server-0 and set the desired number of instances,
then re-initialize and restart the cluster (see the sketch below).
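For example, assuming the usual scidb.py workflow (the cluster name "mydb" and config path here are placeholders; substitute your own, and note that re-initializing erases stored arrays):

    scidb.py stopall mydb /opt/scidb/16.9/etc/config.ini     # stop the running cluster
    # edit config.ini: redundancy=0, keep only server-0, remove server-1 and higher
    scidb.py initall mydb /opt/scidb/16.9/etc/config.ini     # re-initialize; this ERASES existing data
    scidb.py startall mydb /opt/scidb/16.9/etc/config.ini    # start the single-server cluster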


#3

Ohh, I've gotta reinitialize! I had already tried modifying the config file before, but I was only doing a stop/start on SciDB; I didn't realize I had to reinit.
I've got it working on a single node now. Thank you for the help.