{"id":4562,"date":"2019-03-18T12:00:45","date_gmt":"2019-03-18T17:00:45","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=4562"},"modified":"2019-03-30T14:27:00","modified_gmt":"2019-03-30T19:27:00","slug":"benchmarking-a-gpu","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2019\/03\/18\/benchmarking-a-gpu\/","title":{"rendered":"Benchmarking a GPU"},"content":{"rendered":"<div class=\"content\">\r\n\r\n<!--introduction-->\r\n\r\nI recently acquired a GPU, a graphics processing unit. It's called a GPU because such processors were originally intended to speed up graphics. But MATLAB uses it to speed up computation. Let's see how the <tt>gpuArray<\/tt> object benchmarks on my machine.\r\n\r\nI have been doing computer benchmarks for years. I like to do profiles where I vary the size of a task and see how the amount of memory required affects performance. I always learn something unexpected when I do these profiles.\r\n\r\nBen Todoroff is on the MathWorks Parallel Processing team. Last year he contributed <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/34080-gpubench\">#34080, gpuBench<\/a> to the MATLAB Central File Exchange. He has been able to compare several different GPUs. I am going to consider the performance of only one GPU, but in more detail.\r\n\r\n<b>Important note.<\/b> This is only about double precision. 
Single precision is another story.\r\n\r\n<!--\/introduction-->\r\n<h3>Contents<\/h3>\r\n<div>\r\n<ul>\r\n \t<li><a href=\"#efd688c7-af77-4efd-90ed-df0170210638\">My Laptop<\/a><\/li>\r\n \t<li><a href=\"#a3722570-d63c-4eff-aad9-46a92dcd1909\">GPU<\/a><\/li>\r\n \t<li><a href=\"#8bae4af0-cc4c-45bd-83fe-f0bcb2544613\">Benchmarking<\/a><\/li>\r\n \t<li><a href=\"#c740d85d-8aed-4deb-84fd-7d1a70c93af4\">Flops<\/a><\/li>\r\n \t<li><a href=\"#7bdf1a9c-a94f-466c-855d-e240782a2396\">CPU, A\\b<\/a><\/li>\r\n \t<li><a href=\"#7f5d72de-865e-40c0-9a42-60e5c3d12ad7\">CPU, A*B<\/a><\/li>\r\n \t<li><a href=\"#6e515cc8-bfe7-444b-9e99-e34b6d121a01\">GPU, A\\b<\/a><\/li>\r\n \t<li><a href=\"#77d25fba-f553-44ba-ada3-b0a95b78ebf5\">GPU, A*B<\/a><\/li>\r\n \t<li><a href=\"#61509744-5da2-48e2-a1da-afe2a8fa11a1\">A\\b, Comparison<\/a><\/li>\r\n \t<li><a href=\"#080ad3be-5261-44f9-802e-065e705c7545\">A*B, Comparison<\/a><\/li>\r\n \t<li><a href=\"#c7998631-1873-4595-8168-e6594b4119e9\">Supercomputers, Then and Now<\/a><\/li>\r\n<\/ul>\r\n<\/div>\r\n<h4>My Laptop<a name=\"efd688c7-af77-4efd-90ed-df0170210638\"><\/a><\/h4>\r\nThe laptop where I do most of my work is a ThinkPad T480s, with an Intel Core i7-8650U processor having a base operating frequency of 1.90 GHz. The CPU has four cores and a maximum Turbo frequency of 4.20 GHz. There is 16GB of main memory. I run the Microsoft Windows 10, 64-bit operating system. The retail price is around $1500.\r\n\r\nMATLAB uses the Intel Math Kernel Library, which is multithreaded. So, it can take advantage of all four cores.\r\n<h4>GPU<a name=\"a3722570-d63c-4eff-aad9-46a92dcd1909\"><\/a><\/h4>\r\nThe GPU is an NVIDIA Titan V operating at 1.455 GHz. It has 12GB of memory, 640 Tensor cores and 5120 CUDA cores. It is housed in a separate Thunderbolt peripheral box which is several times larger than the laptop itself. This box has its own 240W power supply and a couple of fans. 
It retails for $2999.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/titan.jpg\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\n<i>Image credit: NVIDIA<\/i>\r\n<h4>Benchmarking<a name=\"8bae4af0-cc4c-45bd-83fe-f0bcb2544613\"><\/a><\/h4>\r\nThis has been a frustrating project. Trying to measure the speed of modern computers is very difficult. I can bring up the Task Manager performance meter on my machine. I can switch on airplane mode so that I am no longer connected to the Internet. I can shut down MATLAB and close all other windows, except the perf meter itself. I can wait until all transient activity is gone. But I still don't get consistent times.\r\n\r\nHere is a snapshot of the Task Manager.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/perf.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nThe operating frequency varies automatically between the rated base speed of 1.90 GHz and well over 3 GHz. This gets throttled up and down in response to temperature changes and who knows what else. The machine runs faster when I first power it up on a cold morning than it does later in the day.\r\n\r\nNotice the 8 percent utilization by 239 processes and 3026 threads. I know that most of these threads are idle, waiting to wake up and interfere with my timing experiments. All I can do is run things many times and take an average. Or maybe I should use the minimum time, like a sprinter's personal best.\r\n<h4>Flops<a name=\"c740d85d-8aed-4deb-84fd-7d1a70c93af4\"><\/a><\/h4>\r\nI'm going to be measuring <i>gigaflops<\/i>. That's 10^9 flops. What is a flop? The inner loops of most matrix computations are either dot products,\r\n<pre class=\"language-matlab\">s = s + x(i)*y(i)\r\n<\/pre>\r\nor \"daxpys\", double a x plus y,\r\n<pre class=\"language-matlab\">y(i) = a*x(i) + y(i)\r\n<\/pre>\r\nIn either case, that's one multiplication, one addition, and a handful of indexing, load and store operations. 
Taken altogether, that's two floating point operations, or two <i>flops<\/i>. Each time through an inner loop counts as two flops. If we ignore cache and memory effects, the time required for a matrix computation is roughly proportional to the number of flops.\r\n<h4>CPU, A\\b<a name=\"7bdf1a9c-a94f-466c-855d-e240782a2396\"><\/a><\/h4>\r\nHere is a typical benchmark run for timing the iconic <tt>x = A\\b<\/tt>.\r\n<pre>t = zeros(1,30);\r\nm = zeros(1,30);\r\nn = 0:500:15000;\r\ntspan = 1.0;<\/pre>\r\n<pre>while ~get(stop,'value')\r\n   k = randi(30);\r\n   nk = n(k);\r\n   A = randn(nk,nk);\r\n   b = randn(nk,1);\r\n   cnt = 0;\r\n   tok = 0;\r\n   tic\r\n   while tok &lt; tspan\r\n       x = A\\b;\r\n       cnt = cnt + 1;\r\n       tok = toc;\r\n   end\r\n   t(k) = t(k) + tok\/cnt;\r\n   m(k) = m(k) + 1;\r\nend<\/pre>\r\n<pre>t = t.\/m;\r\ngigaflops = 2\/3*n.^3.\/t\/1.e9;<\/pre>\r\nThis randomly varies the size of the matrix between 500-by-500 and 15,000-by-15,000. If it takes less than one second to compute <tt>A\\b<\/tt>, the computation is repeated. It turns out that systems of order 3000 and less need to be repeated. At the two extremes, systems of order 500 need to be repeated over 100 times, while systems of order 15,000 require over 30 seconds.\r\n\r\nIf I run this for more than an hour, each value of <tt>n<\/tt> is encountered several times and the average times settle down.\r\n\r\nAfter adding some annotation to\r\n<pre>plot(n,t,'o')<\/pre>\r\nI get\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/cpu_mldivide_time.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nThe times are increasing like <tt>n^3<\/tt>, as expected. 
For <tt>A\\b<\/tt> the inner loop is executed roughly <tt>2\/3 n^3<\/tt> times, so gigaflops are\r\n<pre>giga = 2\/3*n.^3.\/t\/1.e9<\/pre>\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/cpu_mldivide_giga.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nI conclude that for <tt>x = A\\b<\/tt> my machine reaches about 70 gigaflops for <tt>n = 10000<\/tt> and doesn't speed up much for larger matrices.\r\n<h4>CPU, A*B<a name=\"7f5d72de-865e-40c0-9a42-60e5c3d12ad7\"><\/a><\/h4>\r\nMultiplication of two <tt>n<\/tt> -by- <tt>n<\/tt> matrices requires <tt>2n^3<\/tt> flops, three times as many as solving one linear system.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/cpu_mtimes_time.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/cpu_mtimes_giga.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nSo matrix multiply on the CPU can reach over 80 gigaflops for order as small as 2000.\r\n<h4>GPU, A\\b<a name=\"6e515cc8-bfe7-444b-9e99-e34b6d121a01\"><\/a><\/h4>\r\nNow to the primary motivation for this project, the GPU. First, solving a linear system. Here we reach a significant obstacle -- there is only enough memory on this GPU to handle linear systems of order 21000 or so. For such systems the performance can reach one teraflop (10^12 flops). 
That is nowhere near the speed that could be reached if there were more memory for the GPU.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/gpu_mldivide_time.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/gpu_mldivide_giga.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nThere is a discontinuity, probably a cache size effect, around <tt>n = 8000<\/tt>.\r\n<h4>GPU, A*B<a name=\"77d25fba-f553-44ba-ada3-b0a95b78ebf5\"><\/a><\/h4>\r\nIt is easy to break matrix multiplication into many small, parallel tasks that the GPU can handle.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/gpu_mtimes_time.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/gpu_mtimes_giga.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nA pair of matrices as large as order 14000 can be multiplied in less than a second. Six teraflops is the top speed. We reach that by order 3000.\r\n<h4>A\\b, Comparison<a name=\"61509744-5da2-48e2-a1da-afe2a8fa11a1\"><\/a><\/h4>\r\nHow does the GPU compare to the CPU on <tt>A\\b<\/tt>?\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/mldivide_ratio.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nThe CPU is faster than the GPU for linear systems of order 789 or smaller. The GPU can be up to 15 times faster than the CPU, until it runs out of memory. Of course, this is not counting any data transfer time.\r\n<h4>A*B, Comparison<a name=\"080ad3be-5261-44f9-802e-065e705c7545\"><\/a><\/h4>\r\nThe story is very different for matrix multiplication.\r\n\r\n<img decoding=\"async\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/mtimes_ratio.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nAs a function of matrix order, both the CPU and the GPU reach their top speeds quickly. 
Then it is 80 gigaflops versus 6 teraflops. The GPU is 75 times faster.\r\n<h4>Supercomputers, Then and Now<a name=\"c7998631-1873-4595-8168-e6594b4119e9\"><\/a><\/h4>\r\nOver 40 years ago, at the beginnings of the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/06\/24\/the-linpack-benchmark\/\">LINPACK Benchmark<\/a>, the fastest supercomputer in the world was the newly installed Cray-1 at NCAR, the National Center for Atmospheric Research in Boulder. That machine could solve a 100-by-100 linear system in 50 milliseconds. That's 14 megaflops (10^6 flops), about one-tenth of its top speed of 160 megaflops. At 6 teraflops, the machine on my desk is 37,500 times faster. Both the Cray-1 and the NVIDIA Titan V could profit from more memory.\r\n\r\nOn the other hand, the fastest supercomputer in the world today is <a href=\"https:\/\/www.ornl.gov\/news\/ornl-launches-summit-supercomputer\">Summit<\/a>, at Oak Ridge National Laboratory in Tennessee. Its 200 petaflops (10^15 flops) top speed is roughly 37,500 times faster than my laptop plus GPU. And, Summit has lots of memory.\r\n\r\n<p>\r\nPublished with MATLAB\u00ae R2018b\r\n\r\n<\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/titan.jpg\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction-->\r\n\r\nI recently acquired a GPU, a graphics processing unit. It's called a GPU because such processors were originally intended to speed up graphics. But MATLAB uses it to speed up computation. Let's see how the <tt>gpuArray<\/tt> object benchmarks on my machine.\r\n\r\nI have been doing computer benchmarks for years. I like to do profiles where I vary the size of a task and see how the amount of memory required affects performance. 
I always learn something unexpected when I do these profiles.\r\n\r\nBen Todoroff is on the MathWorks Parallel Processing team. Last year he contributed <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/34080-gpubench\">#34080, gpuBench<\/a> to the MATLAB Central File Exchange. He has been able to compare several different GPUs. I am going to consider the performance of only one GPU, but in more detail.\r\n\r\n<b>Important note.<\/b> This is only about double precision. Single precision is another story.\r\n\r\n<!--\/introduction-->... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/03\/18\/benchmarking-a-gpu\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":4588,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,6,14,26,19],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4562"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=4562"}],"version-history":[{"count":8,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4562\/revisions"}],"predecessor-version":[{"id":4590,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4562\/revisions\/4590"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/4588"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=4562"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=4562"},{"taxonomy":"post
_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=4562"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}