Finding The Right Log File

How to go from a SciDB error message to the scidb.log file of the
instance that raised the error.


In SciDB 19.10 and later, SciDB exception error messages report the
physical id of the instance that raised the exception. Knowing the
instance id can save time when diagnosing problems, but only if you
know how to correctly derive the corresponding data directory name.
That involves looking up the instance id in the system catalog’s
instance table, and using the (server_id, server_instance_id)
found there to form the base-path subdirectory. Naively
transforming instance: s0-i3 into (say) /datapath/0/3 won’t work.

For quick reference, here’s the correct procedure. The rest of this
note explains these steps in detail.

  1. The exception text says (for example) instance: s1-i1 (4294967297).
  2. Dump the catalog instance table and find the entry whose
    instance_id equals the number in parentheses, 4294967297. (The
    script shown below prints this information.)
  3. The server_id (sid) and server_instance_id (siid) from this
    entry form the base path subdirectory where this instance’s data
    and log files are kept. For server_id 1, server_instance_id 3, and
    base path /the/path, the instance data directory will be
    /the/path/1/3/.
Don’t fall into the trap of translating s1-i1 into /the/path/1/1/;
that is almost certainly going to be wrong!

(Perhaps ``data directory id’’ would have been a better name than
``server instance id’’.)
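Splitting the parenthesized number back into its two halves takes only
shell arithmetic. Here is a quick sketch (the id is the one from step
1; nothing here is part of the packaged tooling). Keep in mind that
the resulting i-number is a Postgres sequence number, not the
server_instance_id that names the directory:

```shell
# Decode a physical instance id: server id in the high 32 bits,
# Postgres sequence number in the low 32 bits.
iid=4294967297
sid=$(( iid >> 32 ))
seq=$(( iid & 0xFFFFFFFF ))
printf 's%d-i%d\n' "$sid" "$seq"   # prints s1-i1
```

This reproduces the s1-i1 label from the error message; the catalog
lookup is still needed to learn that the data directory is 1/3.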

Extended Example

An Exception Arises

Here’s an exception raised while running a memory capacity test on my
development cluster. (Long lines wrapped to fit the page.)

2019-10-21 15:16:56,440 ...\
SystemException in file: src/network/MessageHandleJob.cpp function: hand\
  leExecutePhysicalPlan line: 733 instance: s0-i3 (3)
Error description: Not enough memory. Error 'std::bad_alloc' during memo\
  ry allocation.

So this exception was originally raised on instance ``s0-i3 (3)’’.
We’d like to look at the scidb.log entries on that instance from
around the time the exception occurred.

My cluster’s name is leela and its config.ini file contains this
base-path entry:

base-path=/scidb_data/leela

As you may have guessed, the data directory for instance s0-i3 is
not /scidb_data/leela/0/3 (although there is a small chance that
that’s coincidentally the case).

Examine the instance Table

We find the correct data directory by looking at the SciDB system
catalog’s instance table. Here’s a shell script that runs the
Postgres psql(1) command to dump the columns of interest. (The
script is included in the 19.10 scidb-utils package but can be used
on earlier releases.)

#!/bin/sh
psql "$@" <<EOF
SELECT server_id AS sid,           -- short names good
       server_instance_id AS siid, -- long names bad
       (host || ':' || port) AS host_port,
       instance_id,
       ('s' || (instance_id >> 32)
            || '-i'
            || instance_id::bit(32)::integer) AS phys_id
FROM instance;
EOF

Script command line options are passed directly to psql(1). Here’s
the output on my leela cluster:

$ ./ -U leela
 sid | siid |       host_port        | instance_id | phys_id
   0 |    0 | localhost:1239         |           0 | s0-i0
   1 |    3 | |  4294967297 | s1-i1
   1 |    2 | |  4294967298 | s1-i2
   0 |    1 | localhost:1240         |           3 | s0-i3
(4 rows)

Use server_id (sid) and server_instance_id (siid), in that order,
to form the subdirectory portion of the path. So the data directory
for instance s0-i3 (3) is /scidb_data/leela/0/1/. We can go there
to find the instance’s log files, datastore files, and more.
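Once the catalog row is in hand, the directory assembly itself is
trivial; the only thing to get right is the sid-then-siid order. A
throwaway sketch using the values from the row above:

```shell
# Row for instance_id 3 in the instance table: sid=0, siid=1.
base=/scidb_data/leela
sid=0
siid=1
echo "$base/$sid/$siid"   # prints /scidb_data/leela/0/1
```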

And sure enough, in the /scidb_data/leela/0/1/scidb.log file, here’s
the message logged from near the throw:

2019-10-21 15:16:55.605444 [0x7fb58257e700] [ERROR]:\
 ServerMessageHandleJob::handleExecutePhysicalPlan: QueryID = 0.157168539\
 9399202660 encountered the error: SystemException in file: src/network/M\
 essageHandleJob.cpp function: handleExecutePhysicalPlan line: 733 instan\
 ce: s0-i3 (3)
Error description: Not enough memory. Error 'std::bad_alloc' during memor\
 y allocation.

The Details

Why is this indirect mapping necessary?

The SciDB cluster elasticity code uses a single unique 64-bit number
to identify a physical instance over the entire lifetime of the
cluster. These physical instance id numbers are distinct from the
id numbers in the config.ini file, to allow for potential future
scenarios in which an instance might be rebuilt from replica chunks.
For example, if the instance whose config.ini ids were
(server_id=2, server_instance_id=5) had physical_id=7 but suddenly
came online with physical_id=12, that might trigger an automatic
rebuild from replica data chunks elsewhere in the cluster. (This
capability has not been implemented.) Hence the need for a mapping,
via the system catalog instance table, from (server_id,
server_instance_id) pairs to unique physical_id numbers.

Since the system catalog is implemented as a Postgres database, SciDB
can use Postgres sequence numbers to achieve physical instance ids
that are unique for the lifetime of the cluster. A physical id is
composed of the config.ini server id (the high 32 bits) and a
sequence number obtained from Postgres (the low 32 bits).
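For the s1-i1 instance from the example output, the composition works
out like this (a shell arithmetic sketch, not SciDB code):

```shell
# server_id 1 in the high 32 bits, sequence number 1 in the low 32.
sid=1
seq=1
echo $(( (sid << 32) | seq ))   # prints 4294967297
```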

The Postgres sequence numbers are not related to the
server_instance_id numbers found in config.ini, but both sets of
numbers are typically small integers, and both are paired with the
same server_id value. That can make for confusion.

The server_instance_ids determine how the init-cluster command will
go about creating and naming the data directories. My leela
cluster’s config.ini file contains server_instance_ids 0 and 1 on
server 0, and 2 and 3 on server 1.

So we can expect to find data directories named:

  • /scidb_data/leela/0/0
  • /scidb_data/leela/0/1
  • /scidb_data/leela/1/2
  • /scidb_data/leela/1/3

Those server_instance_ids 0, 1, 2, and 3, are the same integers that
the Postgres sequence will hand out for composing the first four
physical instance ids. But as we’ll see, they’ll be given out in a
completely different order, and that’s what makes the translation
step necessary.
Here’s the instance table again:

$ ./ -U leela
 sid | siid |       host_port        | instance_id | phys_id
   0 |    0 | localhost:1239         |           0 | s0-i0
   1 |    3 | |  4294967297 | s1-i1
   1 |    2 | |  4294967298 | s1-i2
   0 |    1 | localhost:1240         |           3 | s0-i3
(4 rows)

The phys_id column is the 64-bit physical instance_id as formatted
for error messages, with 32-bit server_id and sequence number broken
out. The ``s’’ part is the sid and the ``i’’ part is the sequence
number. These don’t match the siid column because of the way the
init-cluster command works.

The instance table entries are created by init-cluster
when the cluster is first installed. It reads config.ini and
creates the data directories. Then it looks for the lowest
(server_id, server_instance_id) pair, usually (0,0). It initializes
that instance first, and this first instance is also responsible for
initializing the Postgres catalog.

Finally all other instances are initialized in parallel, to reduce
the setup time for large clusters. Those initializing instances race
to obtain the Postgres sequence number they’ll incorporate into their
physical instance id.

This race explains why the siid column and the ``i’’ portion of the
phys_id column do not match, except here in the case of s1-i2, which
matches only by coincidence.

The mapping step between instance id and data directory name is not
ideal, but will remain necessary in the near term.