How to improve SciDB query with GPU?


#1

Hi,

I am a student at UNR doing research on parallel databases. I would like to improve the performance of SciDB's query operations using a GPU. Can someone give me some suggestions? I am planning to reprogram some parts of the source code.


#2

Hello,

I asked around concerning GPU-based hardware acceleration, and learned of this paper:

paradigm4.com/wp-content/upl … Report.pdf

The Hardware Acceleration section will be of interest to you. The work was done under the auspices of the Intel Science and Technology Center for Big Data ( istc-bigdata.org ) at MIT, and this paper was also published in the Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. Two of the authors were Intel scientists who re-implemented gemm() and gesvd() [the operators most suitable for off-loading] onto Xeon Phi coprocessors. They found that they could get a 1.7x speedup for some queries, but not in the general case.
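For context, SciDB's gemm() operator computes a dense matrix product in the BLAS sense (alpha*A*B + beta*C), which is exactly the kind of kernel that gets offloaded to an accelerator. Here is a minimal pure-Python sketch of that computation, just to illustrate what the operator does; a real implementation would call an optimized BLAS dgemm rather than loop in interpreted code:

```python
def gemm(A, B, C, alpha=1.0, beta=1.0):
    """Return alpha*A@B + beta*C, with matrices given as lists of lists.

    Illustrative only -- SciDB's gemm() dispatches the same computation
    to (Sca)LAPACK/BLAS routines, which is where accelerator offload
    would happen.
    """
    n, k, m = len(A), len(B), len(B[0])
    assert all(len(row) == k for row in A), "inner dimensions must match"
    return [
        [alpha * sum(A[i][p] * B[p][j] for p in range(k)) + beta * C[i][j]
         for j in range(m)]
        for i in range(n)
    ]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[0.0, 0.0], [0.0, 0.0]]
print(gemm(A, B, C))  # [[19.0, 22.0], [43.0, 50.0]]
```

The takeaway from the paper is that only dense kernels like this one (and the SVD behind gesvd()) are regular enough to benefit from offload; most SciDB operators are not.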

Your best bet is to have a discussion with the authors re. their findings and thoughts on what else might be done.


#3

[quote=“mjl”]Hello,

I asked around concerning GPU-based hardware acceleration, and learned of this paper:

paradigm4.com/wp-content/upl … Report.pdf

The Hardware Acceleration section will be of interest to you. The work was done under the auspices of the Intel Science and Technology Center for Big Data ( istc-bigdata.org ) at MIT, and this paper was also published in the Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. Two of the authors were Intel scientists who re-implemented gemm() and gesvd() [the operators most suitable for off-loading] onto Xeon Phi coprocessors. They found that they could get a 1.7x speedup for some queries, but not in the general case.

Your best bet is to have a discussion with the authors re. their findings and thoughts on what else might be done.[/quote]

Thank you so much. I will take a look at the materials.