Using the GPU through the Parallel Computing Toolbox to speed up matrix inversion
Hello,
In my code I compute:
A = B\c;
B is an invertible 14400x14400 matrix, and c is a vector of length 14400.
Since B is large, this takes a very long time. I am running Windows 7 (64-bit) with 4 GB RAM and 2 cores.
Would I be able to shorten the time by using the GPU through the Parallel Computing Toolbox?
Would I need to write my own parallel matrix-inversion routine, or has this problem come up before with a posted solution?
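For reference, here is a minimal sketch of what a GPU-based attempt could look like, assuming the Parallel Computing Toolbox and a supported CUDA GPU: mldivide (the backslash operator) is overloaded for gpuArray inputs, so no custom parallel inversion code is needed. Note that a 14400x14400 double matrix is about 1.6 GB, so the GPU would need enough memory to hold it.

```matlab
% Sketch only: solve B\c on the GPU using gpuArray (requires
% Parallel Computing Toolbox and a CUDA-capable GPU).
% B and c are the matrix and vector from the question.
Bg = gpuArray(B);   % transfer the 14400x14400 matrix to GPU memory
cg = gpuArray(c);   % transfer the right-hand-side vector
Ag = Bg \ cg;       % mldivide is overloaded for gpuArray inputs
A  = gather(Ag);    % copy the solution back to host memory
```

Whether this is faster than the CPU depends heavily on the GPU's double-precision performance and on the transfer cost of moving B to device memory.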
Thank you.