{"id":1137,"date":"2022-03-28T14:19:09","date_gmt":"2022-03-28T12:19:09","guid":{"rendered":"https:\/\/www.beegfs.io\/c\/?p=1137"},"modified":"2025-06-03T23:40:41","modified_gmt":"2025-06-03T21:40:41","slug":"beegfs-now-supports-nvidia-magnum-io-gpu","status":"publish","type":"post","link":"https:\/\/www.beegfs.io\/c\/beegfs-now-supports-nvidia-magnum-io-gpu\/","title":{"rendered":"BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. So, what&#8217;s the big deal?"},"content":{"rendered":"<h4><strong>How did we get here?<\/strong><\/h4>\n<p>Where does BeeGFS need to go to support its users for years to come? What changes are needed to enable the next generation of AI and HPC workloads, not just today, but for the next decade and beyond? These are questions ThinkParQ and NetApp ask ourselves, our mutual customers, and the greater BeeGFS community almost obsessively.<!--more--><\/p>\n<p>We understand that users of parallel file systems and other large-scale storage solutions can\u2019t make decisions lightly. So, we feel a mutual obligation to tread carefully, while still innovating boldly, in a rapidly changing IT landscape where just over a decade ago the cloud was still about as nebulous as the real thing. Luckily, ThinkParQ is no stranger to HPC, and NetApp is no stranger to enterprise storage.<\/p>\n<p>The significant uptick in the adoption of AI and other HPC techniques in enterprises makes us ideal partners to anticipate how HPC storage must evolve to stay relevant and useful. 
Anticipating this trend is a big reason why back in 2019 ThinkParQ partnered with NetApp to deliver joint solutions built around BeeGFS.<\/p>\n<p>Together, our engineering teams have worked to deliver innovation around BeeGFS in several ways, including introducing a<a href=\"https:\/\/www.netapp.com\/blog\/high-availability-beegfs\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u00a0shared-storage HA solution<\/a>,<a href=\"https:\/\/www.netapp.com\/blog\/kubernetes-meet-beegfs\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u00a0bringing support for Kubernetes<\/a>, and collaborating with <a href=\"https:\/\/www.netapp.com\/blog\/ef-series-ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">NVIDIA to certify BeeGFS<\/a> with NVIDIA <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-a100\/\" target=\"_blank\" rel=\"noopener noreferrer\">DGX<\/a><a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-a100\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u00a0A100<\/a> <a href=\"https:\/\/resources.nvidia.com\/en-us-netapp\/\" target=\"_blank\" rel=\"noopener noreferrer\">systems<\/a> for machine learning and deep learning (ML\/DL) workloads at scale. This brings us to today, where we\u2019re excited to share how <a href=\"https:\/\/developer.nvidia.com\/gpudirect-storage\" target=\"_blank\" rel=\"noopener noreferrer\">NVIDIA <\/a><a href=\"https:\/\/developer.nvidia.com\/gpudirect-storage\">Magnum IO <\/a><a href=\"https:\/\/developer.nvidia.com\/gpudirect-storage\">GPUDirect<\/a> Storage (GDS) now supports BeeGFS. 
As part of this, we created a number of enhancements, most notably the addition of multirail support (described in more detail below), along with some smaller improvements to the native cache mode and the BeeGFS client networking stack. These changes have far-reaching benefits for how BeeGFS file systems can be architected going forward, regardless of whether GDS is in use.<\/p>\n<h4><strong>Why is Magnum IO GPUDirect Storage important?<\/strong><\/h4>\n<p>From a storage perspective, ML and DL workloads are not so dissimilar from many HPC workloads. They are characterized by many read operations and, particularly in the case of DL, result in the same data being reread multiple times. However, while traditionally these datasets were processed by CPUs, the types of algorithms and techniques commonly applied in modern ML and DL training are greatly accelerated by GPUs. Of course, the more GPU performance advances along its own \u2018Moore\u2019s law\u2019 curve, the more simply moving mountainous amounts of data to and from the GPU becomes the bottleneck in the overall time to train.<\/p>\n<p>Enter GPUDirect Storage. GDS enables direct memory access (DMA) between GPU memory and local or remote storage behind a NIC. It avoids the need to store and forward data through buffers in CPU memory and instead reads and writes data directly into GPU memory. If this sounds similar to the traditional benefits of remote direct memory access (RDMA), you\u2019re spot on. 
In many ways, GDS extends RDMA capabilities to account for the existence of GPU memory.<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-1127\" src=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage.jpg\" alt=\"\" width=\"1921\" height=\"1081\" srcset=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-200x113.jpg 200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-300x169.jpg 300w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-400x225.jpg 400w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-600x338.jpg 600w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-768x432.jpg 768w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-800x450.jpg 800w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-1024x576.jpg 1024w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-1200x675.jpg 1200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage-1536x864.jpg 1536w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage.jpg 1921w\" sizes=\"(max-width: 1921px) 100vw, 1921px\" \/><\/p>\n<h4><strong>How does Magnum IO GPUDirect Storage work in general?<\/strong><\/h4>\n<p>The traditional way of reading data into a GPU memory buffer involves reading into a CPU memory buffer, usually via the POSIX API, and then calling cudaMemcpy to copy that buffer into GPU memory. 
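<\/p>
<p>In code, that traditional path looks roughly like the following sketch (the file path, buffer handling, and omitted error checks are illustrative; the POSIX calls and cudaMemcpy are the standard APIs involved):<\/p>

```c
/* Sketch of the traditional read path: storage -> CPU bounce buffer -> GPU. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <cuda_runtime.h>

void read_to_gpu(const char *path, void *gpu_buf, size_t size)
{
    void *cpu_buf = malloc(size);   /* bounce buffer in CPU memory */
    int fd = open(path, O_RDONLY);

    read(fd, cpu_buf, size);        /* hop 1: file system -> CPU memory (POSIX) */
    cudaMemcpy(gpu_buf, cpu_buf, size, cudaMemcpyHostToDevice); /* hop 2: CPU -> GPU */

    close(fd);
    free(cpu_buf);
}
```

<p>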
This process increases latency and CPU load and decreases throughput from the file system to GPU memory.<\/p>\n<p>To provide an alternative, lower-overhead data path, GDS gives the DMA engines included in other PCIe devices, such as NICs or NVMe drives, direct access to GPU memory. To make use of that functionality, applications can choose to interface with the cuFile API instead of using POSIX I\/O; cuFile relays the file I\/O request through the nvidia-fs kernel module. This module is responsible for setting up the appropriate memory mappings, providing access to information about these mappings to other kernel drivers like the BeeGFS client module, and passing the I\/O request on to the Linux kernel VFS.<\/p>\n<h4><strong>How does Magnum IO GPUDirect Storage work in BeeGFS?<\/strong><\/h4>\n<p>BeeGFS, like many other file systems, uses the Linux kernel VFS as an abstraction layer to provide a standard interface to applications. Therefore, BeeGFS I\/O functions will be called whenever the VFS receives an I\/O request for a file stored on a mounted BeeGFS file system.<\/p>\n<p>A GDS-enabled BeeGFS module registers with the nvidia-fs module and can then differentiate between traditional POSIX I\/O requests and requests that have been relayed through nvidia-fs. While BeeGFS was capable of RDMA data transfers before the introduction of GDS, the nature of the buffers and the mechanism used to transfer them to the storage servers are quite different between the two.<\/p>\n<p>Traditional POSIX I\/O originates in user space and is usually issued by applications that are not RDMA-aware. Thus, the pages that hold the data to be transferred need to be copied into a kernel-space buffer that can then be sent to the storage server via RDMA send. 
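<\/p>
<p>For comparison, a read through the cuFile path described above can be sketched as follows. The cuFile and CUDA calls are the actual entry points of NVIDIA\u2019s cuFile API; the file path, transfer size, and omitted error handling are hypothetical:<\/p>

```c
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include "cufile.h"

int main(void)
{
    const size_t size = 1 << 20;            /* 1 MiB; illustrative value */

    cuFileDriverOpen();                     /* initialize the GDS driver (nvidia-fs) */

    /* O_DIRECT bypasses the page cache so the DMA can target GPU memory directly */
    int fd = open("/mnt/beegfs/data.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);  /* hand the file descriptor over to cuFile */

    void *gpu_buf = NULL;
    cudaMalloc(&gpu_buf, size);             /* destination buffer lives in GPU memory */
    cuFileBufRegister(gpu_buf, size, 0);    /* pin and map the buffer for DMA */

    /* Data lands directly in gpu_buf; no bounce buffer in CPU memory */
    ssize_t n = cuFileRead(handle, gpu_buf, size,
                           0 /* file offset */, 0 /* buffer offset */);

    cuFileBufDeregister(gpu_buf);
    cuFileHandleDeregister(handle);
    cudaFree(gpu_buf);
    close(fd);
    cuFileDriverClose();
    return n < 0;
}
```

<p>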
If a user-space application wants to read data from BeeGFS and process it on a GPU, the data needs to be copied from an RDMA buffer in kernel space to a user-space buffer and then copied to GPU memory again.<\/p>\n<p>For GDS, on the other hand, we want the storage servers to be able to directly read from or write to a buffer in GPU memory that is exposed via the DMA engine on the network interface. To make that possible, we added RDMA read and write functionality to the client- and server-side code. RDMA transfers can now be set up with a client-to-server message that contains the necessary information, such as addresses and keys for the memory regions to be accessed by the storage server. After that setup is complete, the storage server can read from and write to those memory regions without any CPU involvement on the BeeGFS client side. Because the exposed buffers are located in GPU memory, applications that run on the GPU can access the data immediately before or after it is transferred. No copies on the client side are needed.<\/p>\n<p>One more important prerequisite to achieving the best possible performance with GDS in BeeGFS was support for multirail networking. This means that the BeeGFS client can now be configured to use multiple RDMA-capable network interfaces on the same network and can automatically balance traffic across those interfaces, based on how many connections are available and already active on each interface. The core multirail functionality is independent of the GDS implementation and nvidia-fs, so even clients that don&#8217;t use GDS can now benefit from using multiple RDMA-enabled network interfaces. 
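<\/p>
<p>For illustration, enabling multirail on a client might look along these lines. The option name and file layout shown here are an assumption based on the BeeGFS 7.3-era client configuration; please consult the release documentation for your version:<\/p>

```
# /etc/beegfs/beegfs-client.conf  (sketch; option name assumed)
# Point the client at a list of RDMA-capable interfaces to balance traffic across.
connRDMAInterfacesFile = /etc/beegfs/rdma-interfaces.conf

# /etc/beegfs/rdma-interfaces.conf: one RDMA-capable interface per line
ib0
ib1
```

<p>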
And when GDS is enabled, the BeeGFS client can call a function in nvidia-fs to determine which network device will provide the fastest path to a specific GPU, taking into account the PCIe topology in the node.<\/p>\n<h4><strong>You keep saying it makes things \u201cfaster.\u201d How much faster?<\/strong><\/h4>\n<p>The benchmarking results for this blog post were collected using two NVIDIA DGX A100 systems connected through an NVIDIA Quantum 200Gb\/s InfiniBand switch to a single second-generation NetApp BeeGFS building block. In this building block, two Lenovo SR665 servers were used to run BeeGFS management, metadata, and storage services configured in a Linux HA cluster, and block storage was provided by a pair of NetApp EF600 storage systems:<\/p>\n<p><img decoding=\"async\" class=\"size-full wp-image-1106 aligncenter\" src=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2.png\" alt=\"\" width=\"3134\" height=\"1050\" srcset=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-200x67.png 200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-300x101.png 300w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-400x134.png 400w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-600x201.png 600w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-768x257.png 768w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-800x268.png 800w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-1024x343.png 1024w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-1200x402.png 1200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2-1536x515.png 1536w, 
https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Config-v2.png 3134w\" sizes=\"(max-width: 3134px) 100vw, 3134px\" \/><\/p>\n<p>For this post, we intentionally focused on the relative performance boost GDS provides rather than showcasing the very large numbers attainable only with a large BeeGFS file system or many GPU servers. Note that this configuration also did not take advantage of the new multirail support in BeeGFS, which would have simplified the deployment by reducing the number of required IPoIB subnets to just one.<\/p>\n<p>A picture is worth a thousand words, so we\u2019ll leave you with this image showing read performance at various GPU counts and I\/O sizes without GDS (blue bars) and with GDS (orange bars):<\/p>\n<p><img decoding=\"async\" class=\"size-full wp-image-1107 aligncenter\" src=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2.png\" alt=\"\" width=\"3144\" height=\"936\" srcset=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-200x60.png 200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-300x89.png 300w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-400x119.png 400w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-600x179.png 600w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-768x229.png 768w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-800x238.png 800w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-1024x305.png 1024w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-1200x357.png 1200w, https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2-1536x457.png 1536w, 
https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/GDS-Blog-Post-Perf-Chart-v2.png 3144w\" sizes=\"(max-width: 3144px) 100vw, 3144px\" \/><\/p>\n<p>Okay, we have to make one comment: just think of how much less storage you\u2019ll have to buy if you can get roughly double the performance from the same amount of hardware!<\/p>\n<h4><strong>So what\u2019s next?<\/strong><\/h4>\n<p>If you couldn\u2019t tell from the beginning of this post, we\u2019re just getting warmed up. GDS is just one of the enhancements planned for BeeGFS as it evolves to meet the needs of supersized environments like the NVIDIA DGX platform. Together with ARM support and enhancements to our caching capabilities and utilization, these improvements will deliver better performance for AI, machine learning, and other HPC workloads.<\/p>\n<p><em>Authors:<\/em><\/p>\n<p>Philipp Falk, Head of Engineering, ThinkParQ<\/p>\n<p>Joe McCormick, Software Engineer, NetApp<\/p>\n<p>We value your feedback and comments; please comment below <a href=\"https:\/\/groups.google.com\/g\/fhgfs-user\" target=\"_blank\" rel=\"noopener noreferrer\">or on the BeeGFS User Forum.<\/a><\/p>\n<p>18th March 2022<\/p>\n","protected":false},"excerpt":{"rendered":"<p>How did we get here? Where does BeeGFS need to go to support its users for years to come? What changes are needed to enable the next generation of AI and HPC workloads, not just today, but for the next decade and beyond? 
These are questions ThinkParQ and NetApp <a href=\"https:\/\/www.beegfs.io\/c\/beegfs-now-supports-nvidia-magnum-io-gpu\/\"> <span>Read More<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"image","meta":{"_acf_changed":false,"footnotes":""},"categories":[6],"tags":[],"class_list":["post-1137","post","type-post","status-publish","format-image","hentry","category-blog","post_format-post-format-image"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. So, what&#039;s the big deal? - BeeGFS - The Leading Parallel Cluster File System<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.beegfs.io\/c\/beegfs-now-supports-nvidia-magnum-io-gpu\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. So, what&#039;s the big deal? - BeeGFS - The Leading Parallel Cluster File System\" \/>\n<meta property=\"og:description\" content=\"How did we get here? Where does BeeGFS need to go to support its users for years to come? What changes are needed to enable the next generation of AI and HPC workloads, not just today, but for the next decade and beyond? 
These are questions ThinkParQ and NetApp Read More\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.beegfs.io\/c\/beegfs-now-supports-nvidia-magnum-io-gpu\/\" \/>\n<meta property=\"og:site_name\" content=\"BeeGFS - The Leading Parallel Cluster File System\" \/>\n<meta property=\"article:published_time\" content=\"2022-03-28T12:19:09+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-03T21:40:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.beegfs.io\/c\/wp-content\/uploads\/2022\/03\/Networking-Magnum-IO-GDS-Storage.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1921\" \/>\n\t<meta property=\"og:image:height\" content=\"1081\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Troy Patterson\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Troy Patterson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/\"},\"author\":{\"name\":\"Troy Patterson\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/#\\\/schema\\\/person\\\/889fafb6e064ad194bf6b995f2e5147f\"},\"headline\":\"BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. 
So, what&#8217;s the big deal?\",\"datePublished\":\"2022-03-28T12:19:09+00:00\",\"dateModified\":\"2025-06-03T21:40:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/\"},\"wordCount\":1474,\"commentCount\":1,\"image\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/wp-content\\\/uploads\\\/2022\\\/03\\\/Networking-Magnum-IO-GDS-Storage.jpg\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/\",\"url\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/\",\"name\":\"BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. So, what's the big deal? 
- BeeGFS - The Leading Parallel Cluster File System\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/wp-content\\\/uploads\\\/2022\\\/03\\\/Networking-Magnum-IO-GDS-Storage.jpg\",\"datePublished\":\"2022-03-28T12:19:09+00:00\",\"dateModified\":\"2025-06-03T21:40:41+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/#\\\/schema\\\/person\\\/889fafb6e064ad194bf6b995f2e5147f\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/wp-content\\\/uploads\\\/2022\\\/03\\\/Networking-Magnum-IO-GDS-Storage.jpg\",\"contentUrl\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/wp-content\\\/uploads\\\/2022\\\/03\\\/Networking-Magnum-IO-GDS-Storage.jpg\",\"width\":1921,\"height\":1081},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/beegfs-now-supports-nvidia-magnum-io-gpu\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"BeeGFS now supports NVIDIA Magnum IO GPUDirect Storage. 
So, what&#8217;s the big deal?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/#website\",\"url\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/\",\"name\":\"BeeGFS - The Leading Parallel Cluster File System\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/#\\\/schema\\\/person\\\/889fafb6e064ad194bf6b995f2e5147f\",\"name\":\"Troy Patterson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3aedb776f814472f0e8914ee35ac325890f5c0d2d64f65d2ab44c6377bff6e6a?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3aedb776f814472f0e8914ee35ac325890f5c0d2d64f65d2ab44c6377bff6e6a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3aedb776f814472f0e8914ee35ac325890f5c0d2d64f65d2ab44c6377bff6e6a?s=96&d=mm&r=g\",\"caption\":\"Troy Patterson\"},\"url\":\"https:\\\/\\\/www.beegfs.io\\\/c\\\/author\\\/tpatterson\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/posts\/1137","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/comments?post=1137"}],"version-history":[{"count":11,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/posts\/1137\/revisions"}],"predecessor-version":[{"id":4291,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/posts\/1137\/revisions\/4291"}],"wp:attachment":[{"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/media?parent=1137"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/categories?post=1137"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.beegfs.io\/c\/wp-json\/wp\/v2\/tags?post=1137"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}