No. There’s no upper limit. But here’s what I suspect is the problem. When you create a lot of arrays, each with (say) a lot of attributes, you end up adding a helluva lot of rows (a lot of attributes times a lot of arrays) to the PostgreSQL catalogs.
Now, we’re using PostgreSQL at the moment because it’s a stable, reliable catalog manager. But it’s not the long-term solution, so we haven’t spent a lot of time worrying about or tuning our use of it. That said, if you’re seeing increased Postgres CPU time for operations like adding arrays, then I suspect that Postgres’ internal statistics are out of whack: it thinks it’s got fewer rows in its tables than it actually has, so it’s scanning rather than using an index. To fix this, run the Postgres ANALYZE command on the catalog database. That will update Postgres’ statistics about the number of rows in its tables, and it might improve the performance of the SQL we embed within our Catalog Manager.
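For example, from a psql session connected to the SciDB catalog database (I’m calling it scidb here as a placeholder; substitute whatever name your installation uses):

```sql
-- Connect to the catalog database ("scidb" is an assumed name):
--   $ psql -d scidb

-- Refresh the planner's row-count statistics for every table:
ANALYZE;

-- Or, if you want per-table progress output:
ANALYZE VERBOSE;
```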
But there’s one more thing. Why are you creating 5,000 separate timeseries arrays? SciDB is perfectly capable of handling arrays with more than two dimensions; indeed, it’s designed for them. As a thought, why not create a single 3D array containing all 5,000 2D timeseries data sets (see the sketch below)? That will reduce the Postgres dependency to next to nothing, and it certainly won’t slow down your queries.
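To make the idea concrete, here’s a rough sketch in AQL. Since we haven’t seen your schemas yet, all of the names, attribute types, dimension bounds, and chunk sizes below are illustrative guesses:

```sql
-- What you have now: 5,000 separate 2D arrays, one per timeseries.
CREATE ARRAY ts_0001 <val:double> [t=0:*,10000,0, channel=0:99,100,0];
-- ... repeated 4,999 more times, adding catalog rows each time ...

-- The alternative: one 3D array, with the series id as an extra dimension.
-- Chunk length 1 along "series" keeps each timeseries in its own chunks.
CREATE ARRAY ts_all <val:double>
  [series=0:4999,1,0, t=0:*,10000,0, channel=0:99,100,0];
```

One array means a handful of catalog rows instead of thousands, and you can still pull out an individual series with slice() or between() on the series dimension.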
Care to post some schema decls for us to take a look at?