'input' then 'store', change array chunk size


Hi~ :grinning:
I run input() followed by store() with a chunk size of 256, in Python:

db = connect()
db.input('<val:uint64>[y=0:4080:0:256, x=0:9461:0:256]', upload_data=val_arr).store('test')

But the chunk size of the resulting array was different:

AFL% show(test);
{i} schema
{0} 'test<val:uint64> [y=0:4080:0:390; x=0:9461:0:256]'

If I create the array first and then store or insert into it, the chunk size comes out as specified,
but the query takes much longer (auto-created: 3-4 s, pre-created: 25 s).

Is there a way to store with the chunk size I specify while keeping the ~4 second runtime?


If the SciDB array 'test' already existed with a y chunk size of 390, that would explain (a) why the chunk size does not change and (b) why the query is slower: the data has to be re-partitioned to match the existing schema of 'test' before it can be stored.

If you want the 'test' array to take on the schema of the input() operator, you will want to remove the array 'test' first.

So if you remove 'test' first, the array should get stored as 'test<val:uint64>[y=0:4080:0:256, x=0:9461:0:256]' when your query is run.
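Concretely, in the same AFL shell shown above (assuming the array name from your question), that would be something like:

AFL% remove(test);

Then re-run your Python db.input(...).store('test') query. Since 'test' no longer exists, store() creates it with the 256x256 chunking from input(), and the query should run in the faster ~4 s time because no re-partitioning is needed.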