Not sure what you’re importing, but it shouldn’t take that long. Here’s an R script that builds a sparse 100M x 100M array with 10M random non-empty cells. It takes about 35 seconds on my laptop:

```
library("scidb")
scidbconnect()

# Build a sparse 2D array of num_cells random values at random (x, y)
# positions, then store it as the SciDB array "target".
build_sparse_array <- function(num_cells = 10000000,
                               x_start = 1,
                               x_end = 100000000,
                               y_start = 1,
                               y_end = 100000000,
                               desired_cells_per_chunk = 1000000,
                               chunk_size = 0)
{
  dx <- x_end - x_start + 1
  dy <- y_end - y_start + 1
  if (chunk_size <= 0)
  {
    density <- num_cells / (dx * dy)
    # Relationship: chunk_size * chunk_size * density = desired_cells_per_chunk,
    # assuming a 2D array with a uniform distribution of data.
    chunk_size <- round(sqrt(desired_cells_per_chunk / density), digits = 0)
    if (chunk_size * chunk_size > 2^62)
    {
      print("Calculated chunk size too large; set chunk size manually")
      return(0)
    }
  }
  # Generate num_cells values along a dense 1D array, then attach random
  # x and y coordinates. If dx or dy is very large, check the maximum
  # value returned by random().
  query <- sprintf(
    "apply(
       build(
         <value:double>
         [n=1:%i,1000000,0],
         random()
       ),
       x, random() %% %i + %i,
       y, random() %% %i + %i
     )", num_cells, dx, x_start, dy, y_start)
  # Redimension the 1D result into the target 2D sparse schema.
  query <- sprintf(
    "redimension(
       %s,
       <value:double> [x=%i:%i,%i,0, y=%i:%i,%i,0]
     )", query, x_start, x_end, chunk_size, y_start, y_end, chunk_size)
  query <- sprintf("store( %s, target)", query)
  scidbremove("target", error = invisible)
  t1 <- proc.time()
  iquery(query)
  print(proc.time() - t1)
}
```
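For the default parameters, the chunk-size arithmetic in the function works out as follows. This is a minimal standalone sketch in Python of the same calculation (the variable names mirror the R code; nothing here touches SciDB):

```python
import math

# Default parameters from the R function above.
num_cells = 10_000_000
dx = dy = 100_000_000                # 100M x 100M logical space
desired_cells_per_chunk = 1_000_000

# Fraction of cells that are non-empty.
density = num_cells / (dx * dy)      # 1e-9

# Solve chunk_size^2 * density = desired_cells_per_chunk for chunk_size.
chunk_size = round(math.sqrt(desired_cells_per_chunk / density))
print(chunk_size)                    # roughly 31.6 million per dimension

# Guard against overflowing SciDB's logical coordinate space,
# matching the 2^62 check in the R code.
assert chunk_size * chunk_size <= 2**62
```

So with these defaults each chunk spans about 31.6M logical positions per dimension while holding on the order of 1M actual values.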

Do not use build_sparse if the density is below roughly 10%. build_sparse evaluates its expression for every possible cell, so on very sparse arrays it takes forever; for that reason it is deprecated. Use the kind of pattern you see above instead.
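To see why: a dense evaluation visits every cell of the logical space, not just the occupied ones. A quick back-of-the-envelope check in Python, using the numbers from the example above (my own illustration, not part of any SciDB API):

```python
# Logical space vs. occupied cells for the 100M x 100M example.
dx = dy = 100_000_000
num_cells = 10_000_000

total_cells = dx * dy              # cells a dense evaluation would visit
density = num_cells / total_cells

print(total_cells)                 # 10^16 expression evaluations
print(density)                     # 1e-9, far below the ~10% guideline
```

The apply/build/redimension pattern evaluates only num_cells expressions (10M here) instead of all 10^16 logical positions, which is why it finishes in seconds.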