I am developing a UDO in which every instance performs a calculation on its local data and then sends its result to the coordinator, which combines the partial results into a final value that is then sent back to the other instances. To achieve this, I have been using BufSend and BufReceive, and while they work marvellously for small inputs, once the input grows large enough I start getting the following error when I use my operator:
SystemException in file: src/network/MessageHandleJob.cpp function: handleExecutePhysicalPlan line: 475
Error id: scidb::SCIDB_SE_NO_MEMORY::SCIDB_LE_MEMORY_ALLOCATION_ERROR
Error description: Not enough memory. Error 'std::bad_alloc' during memory allocation.
After a lot of testing, I am confident the error is caused by BufSend when the buffer it sends grows too large. I have already changed my UDO's algorithm to minimize the amount of data passed between instances (the communication cannot be avoided entirely), and while that lets me handle larger arrays, the operator still crashes once the input array reaches a certain threshold. Is there a way to perform this inter-instance communication without crashing, regardless of the size of the input array (perhaps with functions other than BufSend() and BufReceive() that I may have missed), or are SciDB UDOs simply unable to handle inter-instance data transfers whose size scales with the size of the array?