Compressed chunk handling is broken


I am having trouble with compressed chunks in distributed arrays. I am able to create and write to them fine, but I am unable to read from them.

Looking at the source, it seems that loadChunk() invokes the decompression functions, which in turn call DBChunk::getData(). That function decides the chunk data is not yet loaded, so it calls loadChunk() again, which promptly blocks because the chunk is already in use (by itself). And so the dining philosopher, eating alone, starves.

I tried to fix this by adding a new setData() function that returns the reference without trying to load the data. This mostly works, but I am not sure it is completely correct, because now sparse chunks that contain no elements segfault the program in the SparseChunkIterator constructor: after the following code executes, nNonDefaultElems is not 0 as it should be; instead it contains some large, arbitrary number.

// From the SparseChunkIterator constructor:
buf = (char*)dataChunk->getData();
SparseChunkHeader* hdr = (SparseChunkHeader*)buf;
allocated = dataChunk->getSize();
used = hdr->used;
nNonDefaultElems = hdr->nElems;   // garbage when the chunk was never written

Getting compression working is not critical right now, but it will become so as our data continues to grow.


Thanks for this.

I have created SciDB Ticket #1194 to address the issue. You can track the shenanigans here:


Since the ticket allows me to track the shenanigans but not participate, in response to the query posted there: I am using the source from scidb-

It’s nice to hear that this issue is fixed, and we can expect it in the next release. Will that be version 11.10 (i.e. soon)?


Yes, this will be fixed in 11.10.