Metadata and NFS


#1

The user guide says that an NFS folder (in this case /share) is necessary to run SciDB on a cluster (the metadata variable), but my master node is not using it. Am I doing something wrong?

scidb-version=11.12

[cluster_test]
node-0=master,0
node-1=worker,4
db_user=scidb
db_passwd=scidb
install_root=/opt/scidb/11.12
metadata=/share/meta.sql
pluginsdir=/opt/scidb/11.12/lib/scidb/plugins
logconf=/opt/scidb/11.12/share/scidb/log4cxx.properties
base-path=/home/scidb/database
base-port=1239
interface=eth0
no-watchdog=false

One more question: should there be a query coordinator, in this case the master node?


#2

First - your config.ini says it’s 11.12. But that’s about a year old. We’ve had a 12.3 release since then. Are you sure that you’re changing the right config file?
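
(A quick way to sanity-check what is actually installed and which config file you are editing. The paths below are assumptions based on install_root in the posted config, not something stated in this thread:)

# Installed SciDB packages (Debian/Ubuntu; use rpm -qa | grep scidb on RPM systems):
dpkg -l | grep -i scidb

# Config files under the install root(s), assuming the layout implied by install_root:
ls /opt/scidb/*/etc/config.ini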

Second - at the moment (12.3) we’re relying on the NFS mount point to carry the contents of the config.ini onto all the instances. The idea is to allow administrators to change the config in one place and have their change propagate to all instances. NFS is simply the easiest way to do this. We’re changing the way we use NFS in Cheshire (12.10). Watch this space...
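
(As a minimal sketch of the wiring this implies - the host name nfs-host and the export options are illustrative assumptions, not from this thread; the only requirement is that every node sees the same /share under the same path:)

# /etc/exports on the exporting host (hypothetical name: nfs-host):
/share    node-0(rw,sync,no_subtree_check) node-1(rw,sync,no_subtree_check)

# /etc/fstab entry on every SciDB node, so /share appears at the same path everywhere:
nfs-host:/share    /share    nfs    defaults    0    0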

And third - there’s a bit of a general misconception about the purpose of the query coordinator. Architecturally, there’s no difference between the coordinator instance and any other worker instance. We’ve designed SciDB so that when you connect to an instance, the instance you happen to connect to acts as your coordinator. So far, we’ve not met anyone with a sufficiently large number of concurrent connections to justify distributing connections over multiple instances. But the design doesn’t have any “master” instance. (Note that we do, at the moment, require a single PostgreSQL database for the entire installation. We’re going to move away from that once we have the resources.)
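
To make the "no master instance" point concrete: whichever instance you connect to coordinates your queries. A minimal sketch with iquery, using node-0 and the base-port from the config above (the query itself is just an example):

# Connect to the instance on node-0 at the base port; it coordinates this query:
iquery -c node-0 -p 1239 -aq "list('instances')"
# Point -c/-p at any other instance instead and that instance becomes the
# coordinator for your session; no instance is special.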


#3

You should definitely consider moving to a newer version of SciDB: 12.3, or 12.10 (to be released).

A couple of comments on the config.ini:

  • metadata=… should refer to the path name (the same for master and worker instances) in the /opt/scidb area. This is the full path to meta.sql, which by default appears in the installation area as /opt/scidb/X.Y/etc/share/meta.sql (see the sketch after this list).

  • The configuration you are using will set up a cluster of 5 instances (1 master on node-0 by implied default, 0 additional workers on node-0, and 4 worker instances on node-1).

  • Shared file system - all of /opt/scidb should be visible, under the same path name, from all instances (workers and master).
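
Concretely, the first comment above amounts to a one-line change to the posted config.ini (X.Y stands for your installed version, as above):

# Before: meta.sql on a separate NFS share
metadata=/share/meta.sql

# After: the meta.sql that ships in the installation area
metadata=/opt/scidb/X.Y/etc/share/meta.sql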

Hope this helps!