Storage Unpacked Podcast
41 minutes | Jun 11, 2021
#205 – 25 Years of Commvault & the Power of And (Sponsored)
In this week's episode, Chris and Martin have a great conversation with David Ngo, CTO for Metallic at Commvault, and Ranga Rajagopalan, VP Product Management for Commvault and Metallic. Commvault is celebrating 25 years in data protection, while Metallic, a SaaS data protection solution from Commvault, has been available for nearly two years. In this discussion, David and Ranga explain the background to Metallic and how customers have quickly adopted the SaaS model. With 25 years of product development, Commvault has a suite of solutions to call upon, including traditional data protection software, appliances and now SaaS. The "Power of And" gives Commvault the capability to offer customers more than just one way to protect their data. These solutions integrate to form a package or framework from which Commvault can address specific data protection needs, such as a SaaS model with on-premises restore capability. Where does the market go next? Data management figures heavily in future customer requirements. We can expect to see Commvault expanding its existing data management capabilities, both in visualisation and for data reuse. For more information on Metallic and to try out the solution, go to https://metallic.io/.

Elapsed Time: 00:41:11

Timeline
00:00:00 – Intros
00:03:30 – Commvault is 25 years old!
00:07:15 – Data Protection is by nature multi-generational
00:08:45 – What is Metallic?
00:11:15 – Delivering a SaaS solution has different technical requirements
00:14:00 – Data Protection and SaaS need careful attention
00:16:00 – SaaS is a "shared service responsibility"
00:18:00 – Where will Commvault & Metallic evolve in the next 25 years?
00:19:45 – The Power of AND provides customers with choice
00:23:15 – Customers need a single, unified view of their data & backups
00:26:00 – Multi-cloud and data mobility demand a single protection solution
00:30:15 – SaaS is a natural backup solution, but where do we go next?
00:34:20 – Commercial models make SaaS more attractive
00:36:00 – Data is the new oil – what is the data equivalent of an oil spill?
00:38:00 – How will Commvault develop tools for ethical data reuse?
00:39:30 – How can end users try Metallic today?
00:40:20 – Wrap Up

Related Podcasts & Blogs
#128 – Reflections on Commvault GO with Glenn Dekhayser
#124 – Initial Thoughts on Commvault Metallic with Chris Mellor
#38 – Talking Dedicated Backup Appliances with Don Foster
The Future of Commvault is Metallic
Commvault Announces Metallic SaaS Data Protection

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #ffd4.
44 minutes | Jun 4, 2021
#204 – Liqid Composable Disaggregated Infrastructure
This week, Chris and Martin talk to Sumit Puri, CEO of Liqid Inc. Liqid has developed a composable infrastructure platform they call CDI or Composable Disaggregated Infrastructure. CDI enables IT organisations to take the building blocks of compute – storage, networking, CPUs, memory and GPUs – and combine them dynamically in ways that address the processing needs of the enterprise. Customers use the technology to enable greater efficiency in the use of hardware and to truly deliver the software-defined data centre. Sumit explains how the Liqid technology works across multiple fabrics that could be PCIe, Ethernet or InfiniBand. Customers have used the technology to fully exploit GPUs and to build the "impossible server" based on configurations not available from server manufacturers. However, there's a whole lot more to this technology that makes it one of the most exciting areas in the data centre today. For more on Liqid (yes, without the "u"), check out https://liqid.com.

Elapsed Time: 00:44:25

Timeline
00:00:00 – Intros
00:01:00 – Who are Liqid?
00:02:00 – How do we define composable?
00:04:20 – What components can Liqid compose?
00:06:20 – Composability needs one or more fabrics
00:07:20 – How do we compose DRAM and memory?
00:09:20 – Data centre design will change with composable
00:11:50 – Server footprint can be optimised with disaggregation
00:12:57 – Have we seen this technology before?
00:15:34 – Let's call it a fabric, not a network!
00:17:30 – Liqid offers an API and "northbound" support
00:24:40 – Disaggregation enables the "impossible server"
00:27:40 – What other use cases can Liqid address?
00:33:00 – Virtualisation and composable create the lights-out data centre
00:34:37 – Honey Badger uses M.2 SSDs
00:40:00 – On-premises infrastructure isn't going away
00:41:00 – FinOps raises its head again!
00:43:00 – Wrap Up

Related Podcasts & Blogs
#119 – Storage Hardware is Back!
Liqid's PCIe Fabric is the Key to Composable Infrastructure
What is Software Composable Infrastructure?
Mainframe – the Original Composable Infrastructure
The Ideal of Composable Infrastructure
53 minutes | May 28, 2021
#203 – Storage and Chia Cryptocurrency Farming
This week, Martin and Chris delve into cryptocurrency mining in a lengthy discussion on Chia with Robert Novak. Chia is a new cryptocurrency technology that differs from previous online currencies in that rewards (or coins) are earned based on the volume of data stored on disk, rather than on raw compute performed. Rob takes the team through the differences between the processes that earn currency, explaining how Proof of Work and Proof of Stake have evolved to Proof of Space and Time with Chia. Rob goes on to explain how he developed the best mining (or farming) rig, based on used server technology and a mix of HDD and SSD. You can find more of Rob's experiences both with Chia and other cryptocurrencies on his blog at https://rsts11.com/crypto/.

Elapsed Time: 00:52:40

Timeline
00:00:00 – Intros
00:04:30 – What is cryptocurrency?
00:07:00 – Crypto mining becomes harder over time
00:08:37 – What are Proof of Stake, Proof of Work and Proof of Space?
00:10:00 – Chia uses Proof of Space, storing data on disk
00:14:30 – How are Chia rewards generated?
00:15:50 – The chance of a reward is relatively random
00:17:50 – Multiple exabytes (eight at the time) already hold plots
00:20:00 – More disk space is allocated to Chia than shipped by some vendors
00:21:20 – What's the right choice of hardware for farming?
00:22:20 – Chia quickly uses up SSD endurance
00:25:00 – Could a memory drive increase performance?
00:30:00 – Chia is causing a large-capacity HDD shortage
00:31:20 – Is Chia as environmentally friendly as we think?
00:34:30 – Tape is impractical for Chia data due to harvesting reporting times
00:35:20 – Could MAID technology help to cut Chia power costs?
00:38:40 – What is the right hardware architecture for Chia?
00:41:24 – How should we protect our Chia plots?
00:44:40 – Old hardware is a good place to start with Chia
00:47:44 – Are people using the public cloud for farming?
00:51:00 – Wrap Up

Related Podcasts & Blogs
Rent Out Your Spare Disk Space with Storj
HDD Capacity Threshold Reaches 20TB

Rob's Bio
Robert Novak brought his broadcast communication degree to Silicon Valley over 25 years ago. Since then, he's been a full-stack sysadmin for companies ranging from startups to the Fortune 100, an early javelin-catcher for production big data environments, and most recently a datacenter technical solutions architect for an enormous network manufacturer headquartered on Tasman Drive in San Jose. He is currently between adventures, working on his home lab and cryptocurrency environments, blogging at rsts11.com and rsts11travel.com, and trying to reconcile the smoker and grill with his need to fit into his clothes when he does go back to work.
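The Proof of Space mechanics discussed in this episode reward farmers roughly in proportion to their share of the network's total plotted capacity, which is why "the chance of a reward is relatively random" but predictable in expectation. A minimal sketch of that expectation (illustrative only, not Chia's actual protocol code; the 4,608 blocks-per-day figure is Chia's published target rate):

```python
# Illustrative sketch of Proof of Space reward expectation (not protocol code).

def win_probability(my_plot_tib: float, netspace_eib: float) -> float:
    """Approximate per-block win probability as a share of total netspace."""
    TIB_PER_EIB = 1024 * 1024  # 1 EiB = 1,048,576 TiB
    return my_plot_tib / (netspace_eib * TIB_PER_EIB)

def expected_days_to_win(my_plot_tib: float, netspace_eib: float,
                         blocks_per_day: int = 4608) -> float:
    """Chia targets 4,608 blocks per day; expected wait is 1 / (p * blocks/day)."""
    p = win_probability(my_plot_tib, netspace_eib)
    return 1 / (p * blocks_per_day)
```

With 100 TiB of plots against an 8 EiB netspace (the figure quoted in the episode), this puts the expected wait at around 18 days per win, which matches Rob's point that small farmers see rewards only sporadically.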
37 minutes | May 25, 2021
#202 – Enterprise Storage Consolidation with Phil Bullinger from Infinidat (Sponsored)
In this week's sponsored episode, Chris and Martin talk to Phil Bullinger, newly appointed CEO at Infinidat. The company has reached its ten-year anniversary, shipping over seven exabytes of capacity in that time. With successful growth in every quarter during 2020, we discuss how Infinidat is helping drive the consolidation story for large enterprises and MSPs looking to optimise their storage systems. The ability to deploy entire storage arrays into customer data centres is both an operational benefit and a financial tool for Infinidat. Enterprises love the set-and-forget approach, while MSPs like to pay as their customer usage grows. This highlights one of the many trends for data storage in the future, where architectures need to fit the financial business model of the customer. For more information on Infinidat, check out the website at www.infinidat.com.

Elapsed Time: 00:37:06

Timeline
00:00:00 – Intros
00:00:30 – Martin is farming Chia
00:02:00 – Phil Bullinger has joined Infinidat as CEO
00:03:34 – 7 exabytes shipped in 10 years
00:06:13 – Infinidat grew every quarter during the pandemic
00:08:40 – Are we into a consolidation "megatrend"?
00:10:00 – Consolidation reduces costs and increases agility
00:12:30 – Modern storage systems have to provide smart management
00:13:50 – Infinidat aims to deploy fully-populated arrays
00:15:00 – Storage systems need to be proactively balanced, not reactive
00:18:00 – Infinidat offers a range of 4 consumption/purchase models
00:20:34 – What are the financial and risk aspects of partial deployments?
00:25:10 – Where is consolidation heading?
00:29:00 – Ransomware needs systems-level protection
00:31:00 – What happens next for Infinidat and enterprise storage?
00:33:00 – The Infinidat architecture works with all storage media
00:36:13 – Wrap Up

Related Podcasts & Blogs
#166 – Infinidat Elastic Storage Pricing with Eran Brown
#141 – Building Storage Systems of the Future
#132 – Accelerating Ransomware Recovery with Eran Brown
#113 – The Expanding Storage Hierarchy with Erik Kaulberg
45 minutes | May 19, 2021
#201 – Introducing Scality ARTESCA (Sponsored)
In this week's episode, Chris and Martin talk to Paul Speciale (Chief Product Officer at Scality) and Chris Tinker (Distinguished Technologist with HPE) about the launch of ARTESCA, a new lightweight, cloud-native object storage solution. HPE and Scality have announced the product jointly, with HPE holding an exclusive six-month agreement to distribute ARTESCA on HPE server hardware. We get the detail on why ARTESCA is different to RING and the types of workload Scality is aiming to support. Since the launch of Scality and RING a decade ago, a lot has changed in technology and the industry. Physical media offers 10x the capacity and solid-state media is affordable at scale. Customers are looking for new solutions that can be deployed in cloud and edge scenarios, with much lower capacity entry points but greater I/O capabilities. ARTESCA is the culmination of previous work on S3 Server and Zenko that addresses the needs of modern unstructured data solutions. Delivery on HPE hardware introduces a range of new options, including all-flash and NVMe.

For more details on ARTESCA, visit https://www.scality.com/products/artesca

Elapsed Time: 00:44:40

Timeline
00:00:00 – Intros
00:01:15 – What is ARTESCA?
00:02:45 – Object storage from a functional perspective is a new requirement
00:03:30 – How have object storage requirements changed, in general?
00:04:50 – Edge is starting to add to requirements for object storage
00:07:00 – Object stores are great for concurrent I/O requests
00:09:40 – Data protection has become a big user of object storage
00:10:20 – New media and capacity increases have changed object storage design
00:11:10 – HPE is optimising hardware solutions to fit the ARTESCA requirements
00:12:30 – Scality followed an evolving development path to reach ARTESCA
00:16:45 – Developers want more dynamic access to object storage buckets
00:20:00 – ARTESCA uses a Scality version of Kubernetes
00:23:40 – ARTESCA introduces a new erasure coding scheme
00:27:00 – HPE assists customers in designing the right hardware solution
00:30:00 – Software-defined storage is at a point where specific hardware is essential
00:33:30 – The management of ARTESCA is based on APIs and GUIs
00:36:00 – How do new customers choose between ARTESCA and RING?
00:38:40 – How do developers consume ARTESCA endpoints?
00:41:00 – What is the licensing model for ARTESCA?
00:43:00 – Call to action – what's the availability?
00:44:00 – Wrap Up

Related Podcasts & Blogs
#189 – The Quiet Success of Software-Defined Storage
#168 – Storage Unicorns
#148 – Unpacking HPE's Storage Strategy
#139 – Storage Predictions for 2020 (Part II)
Scality Broadens Object Storage Adoption with ARTESCA
Understanding HPE's Storage Product Portfolio
Building Data Storage With Containers
46 minutes | May 14, 2021
#200 – Virtualisation, Containers, Serverless and Data
This week, Martin and Chris are joined by Jim Walker, VP of Product Marketing at Cockroach Labs. In a slight departure from a purely storage-based recording, this discussion follows the evolution of application packaging and deployment, from virtualisation to containers and serverless. The logical conclusion of any computing environment is to execute code and gradually, we are abstracting away from the specifics of infrastructure and allowing the code to do the work. How will serverless technologies standardise and implement non-functional requirements such as security and credentials management, data access and, most importantly, portability? Will de-facto standards emerge or do we have to rely on standards bodies? We discuss all this and more.

Elapsed Time: 00:46:05

Timeline
00:00:00 – Intros
00:00:30 – Martin goes retro with CP/M and Wordstar
00:03:00 – The move to serverless is going to change the approach to application deployment
00:05:00 – How is the transition to serverless going?
00:07:10 – How does serverless compare to the mainframe days?
00:10:30 – Are security, storage, networking all sorted in serverless?
00:12:15 – Databases should be serverless, accessed by a SQL-based API
00:15:30 – What is the transition to serverless – are we doing it right?
00:20:30 – De-facto standards – are there any in serverless?
00:24:15 – Do the standards bodies actually work?
00:26:30 – Docker (the company) didn't succeed but did popularise containers
00:31:45 – Do we need to think about data protocols rather than storage protocols?
00:36:00 – Physical factors like latency (and the speed of light) affect distributed applications
00:38:45 – Why don't we abstract data into unified service tiers?
00:41:00 – APIs still need a lot of work to make them generically consumable
00:44:00 – Billing and FinOps become important with serverless
00:45:00 – Wrap Up

Related Podcasts & Blogs
#182 – FaunaDB – Client Serverless Computing
#192 – Storage & Kubernetes with Nigel Poulton
Amazon S3 Object Lambda provides dynamic access to data
51 minutes | Apr 30, 2021
#199 – Quantifying Data Storage Innovation
This week, Chris and Martin discuss the level of innovation seen in the data storage industry over the last decade. This topic is one that comes up repeatedly as we review the market, startups and incumbents. In reality, of course, there's been innovation, whether the "micro-innovations" that move the storage media industry along, or more significant changes like NVMe and persistent memory. What happens when you look at innovation by market spending or VC investments? Is there an alignment of innovation with the development of patentable technology? Finally, the discussion concludes with areas where innovation is missing, including the perennial favourite, storage management tools.

Elapsed Time: 00:51:27

Timeline
00:00:00 – Intros
00:01:15 – What does innovation mean in data storage?
00:02:20 – What is innovation?
00:04:40 – Explaining innovation with Dyson vacuum cleaners
00:06:50 – Has most storage innovation occurred in media?
00:10:15 – What innovation have we seen in storage solutions?
00:12:10 – NVMe is a standards-based innovation
00:15:20 – Persistent Memory is an innovation waiting to gain mass adoption
00:18:30 – Is open-source storage an innovation?
00:21:25 – SmartNICs and Computational Storage are new innovations
00:24:50 – Can we use market share to determine innovation?
00:26:20 – Whatever happened to ATMOS?
00:31:00 – Could innovation be tracked by VC investments?
00:35:00 – Innovation is needed in the cloud for storage APIs
00:35:50 – Container-attached storage – a good investment?
00:38:35 – Is innovation aligned with patentable technology?
00:39:30 – Public cloud storage innovation is missing – or perhaps obfuscated?
00:40:40 – Doing block storage is hard!
00:42:45 – What unrealised storage innovations exist? Storage Management?
00:47:05 – Data Security still needs some work to become more "DRM-like"
00:49:30 – Wrap Up

Related Podcasts & Blogs
#141 – Building Storage Systems of the Future
#168 – Storage Unicorns
#170 – The End of Pure Play Storage Companies
Exploiting Technology Inflection Points
37 minutes | Apr 16, 2021
#198 – Software-Only Storage Vendors
With the news that VAST Data is moving to a software-only model, this week Chris and Martin debate the merits of the move, for both the vendor and customers. Does this transition for VAST represent a new move in the industry, or can we point to other companies that have already made the jump? What does a disaggregated commercial model mean for customers, and how should IT teams and CIOs view the focus on software-only? We discuss all this and more in this week's podcast.

Elapsed Time: 00:36:43

Timeline
00:00:00 – Intros
00:00:02 – Unlock time and it snows!
00:01:50 – What is VAST Data's Gemini offering?
00:03:55 – The Gemini Man – terrible film and TV series
00:05:30 – Nasuni, Nutanix, Pure Storage and NetApp have all done software separation
00:06:50 – Software/hardware disaggregation is like having an ELA
00:08:40 – Customer buying power may offer benefits to large enterprises
00:10:45 – Many vendors never build their own hardware support organisation
00:11:20 – Going software-only is a public cloud positioning move
00:12:40 – Storage software vendors are offering per-hour usage models
00:15:00 – Is VAST Data simply doing financial engineering?
00:20:40 – Could hardware innovation stall with a focus on software?
00:23:00 – External "controller-based" storage is stagnant – moving to cloud?
00:25:40 – How does the software-only move affect customer decisions?
00:28:30 – Could the software model result in customers paying for non-supported solutions?
00:29:40 – More clarity is needed to understand new purchasing models
00:34:05 – Could hardware/software disaggregation result in increased software costs?
00:36:00 – Wrap Up

Related Podcasts & Blogs
#105 – Introduction to VAST Data Part I
#106 – Introduction to VAST Data Part II
#170 – The End of Pure Play Storage Companies
VAST Data goes software-only – what does this mean for customers?
44 minutes | Mar 26, 2021
#197 – Prioritising Disaster Recovery Planning
The ongoing saga at OVH highlights the need for solid disaster recovery plans that reflect the risk and impact of systems failure on the business. In this episode, Chris and Martin dig into the issues at OVH and how they represent a wake-up call for businesses in general. High-profile failures have occurred over many years, so does DR get the investment and respect it deserves? With so many more businesses operating with a 100% dependency on technology, you would expect so. Why does DR continue to be an afterthought, and how has the public cloud distorted the perception of who is responsible for systems, applications and data recovery?

Elapsed Time: 00:44:21

Timeline
00:00:00 – Intros
00:01:00 – Martin walks 4 million steps in a year
00:02:30 – The ongoing OVH saga
00:04:00 – Other disasters – 9/11 & Buncefield
00:06:30 – Disaster experiences!
00:09:15 – A disaster doesn't mean losing the entire data centre
00:12:15 – Businesses need to understand the level of risk
00:12:45 – Risk translates to impact and risk assessment
00:13:57 – How should a plan be developed?
00:15:00 – The business owner needs to determine the value of the application
00:16:40 – Test, test, test – within reason!
00:19:00 – Data centre "flipping" is a complex process
00:20:00 – If DR is an afterthought then you have a problem
00:20:45 – Nobody likes doing documentation
00:22:30 – How can software changes break systems?
00:25:15 – Many storage upgrades had no fallback position
00:27:55 – How much does cost affect decisions to do DR?
00:29:30 – War gaming disaster recovery!
00:31:45 – How is the perception of technology warping the need for DR?
00:34:00 – Starting an airline requires regulation – why don't e-businesses have the same?
00:36:45 – How much are non-functional requirements being ignored?
00:38:30 – There are many tools for doing DR – it's never been easier!
00:41:00 – Is disaster recovery just not glamorous enough?
00:43:00 – Wrap Up

Related Podcasts & Blogs
Backup as a Service
Cloud-Native Data Protection
Data Protection in a Multi-Cloud World
Backup is Your Responsibility – Even in Public Cloud
#135 – Introducing Datrium DRaaS Connect with Simon Long
44 minutes | Mar 19, 2021
#196 – Creating a Multi-Cloud Data Strategy
This week, Chris and Martin discuss the issues of building a multi-cloud data storage strategy. The range of options for private, public and hosted data services is huge, with each offering a different set of services implemented in subtly unique ways. How can an enterprise take advantage of these services while keeping data protection, cost control and service availability front and centre? This conversation is incredibly wide-ranging, touching on data centre build and usage, FinOps for cloud, global name spaces, operating system envy and finding that killer app for the public cloud that locks in the user.

Elapsed Time: 00:44:27

Timeline
00:00:00 – Intros
00:00:26 – It's a year since UK lockdown
00:01:30 – What do we mean by multi-cloud storage?
00:03:00 – There are many choices for data placement
00:04:00 – Enterprises won't completely shut down their data centres
00:07:17 – Each cloud service provider has specific strengths
00:09:00 – On-premises is good for data sovereignty issues
00:12:00 – Do we really want to spin up applications in multiple locations?
00:13:50 – How do we address storage cost management in cloud?
00:14:40 – Application access profiles directly affect service costs
00:17:00 – Structured data sharing works best at the application layer
00:18:20 – Global Name Spaces are essential for making data mobile
00:20:48 – The traditional storage management role is dying off
00:24:00 – File and Object are good candidates for merger
00:30:00 – Announcing the FinOps for Storage Accounting podcast!
00:32:20 – Will hyper-scalers ever open up their storage platforms?
00:37:00 – My operating system is better than yours!
00:38:42 – Is there a killer app or technology in any public cloud?
00:41:30 – The first vendor to stitch data together will make big money
00:43:43 – Wrap Up

Related Podcasts & Blogs
#179 – The Myth of Cheap Cloud Storage
#116 – Fixing Gaps in Cloud Storage with Andy Watson
#86 – Eran Brown Discusses Storage, Security & Multi-Cloud
The Myth of Cloud Bursting
49 minutes | Mar 5, 2021
#195 – Fungible Data Processing Units
This week's podcast episode continues the discussion on SmartNICs and DPUs with Fungible, a company that claims to have originally coined the term DPU. Chris and Martin talk with Pradeep Sindhu (CEO and co-founder) and Jai Menon (Chief Scientist) about Fungible's storage cluster and host-based DPU. The Fungible architecture aims to solve the challenges of disaggregation, a topic we first looked at back in September 2017. This discussion highlights some interesting challenges that new technology such as NVMe-oF is introducing into the data centre. As we move to a model of highly parallelised workloads, the interaction between storage and compute is back under the spotlight. Fungible is working on storage products today but claims to be able to disaggregate GPUs and, in the future, potentially DRAM. Interesting times. Find out more at https://www.fungible.com/

Elapsed Time: 00:49:25

Timeline
00:00:00 – Intros
00:01:30 – What problem is Fungible looking to solve?
00:02:45 – The benefits of Moore's Law growth are almost flat
00:03:35 – Modern applications are data-centric
00:05:00 – Fungible is working on disaggregated architectures
00:06:15 – There's storage for compute and storage for – storage!
00:07:45 – What's "hyper-disaggregation"?
00:09:30 – Fungible offers volume-specific characteristics like encryption
00:11:00 – Everything can be disaggregated except DRAM (for now)
00:13:10 – Data I/O has specific requirements including in-order processing
00:14:15 – Fungible can use an Ethernet network at 90% without packet drop
00:15:30 – Data-centric workloads are heavily multiplexed
00:18:45 – Does disaggregation finally deliver a real software-defined data centre?
00:20:00 – Mainframes "reconfigured" overnight for batch workloads
00:21:30 – Even hyperscalers operate in silos
00:24:15 – The Fungible Storage Cluster – SAN 2.0?
00:25:30 – New hero numbers! 15 million IOPS!!
00:30:45 – Volumes are virtual across any or all storage clusters
00:34:30 – Fungible claims better than local performance
00:35:00 – The Storage Cluster gains additional benefits with a host DPU
00:37:00 – Let's not get diverted towards VSAM!
00:39:20 – Disaggregated technologies could deliver truly reconfigurable data centres
00:41:00 – Fibre Channel networks divide their traffic across multiple SANs
00:42:00 – Who is using the Fungible technology?
00:45:00 – Fungible is working on I/O primitives for SQL databases
00:49:00 – Wrap Up

Related Podcasts & Blogs
#194 – ScaleFlux & Computational Storage Devices
#190 – NVIDIA BlueField SmartNICs & DPUs
#180 – SmartNICs – Pliops Storage Processor
#177 – SmartNICs and Project Monterey
#96 – Discussing SmartNICs and Storage with Rob Davis from Mellanox

Pradeep's Bio
Pradeep's career includes founding Juniper Networks, the company widely recognized for inventing and industrializing silicon-based routers – the invention that played a central role in bringing about the Internet age. Over the years he has held several key roles at Juniper, including founding CEO and Chairman, then Vice Chairman and CTO, and now Chief Scientist. Pradeep had a hand in the inception, design and development of virtually every product Juniper shipped from 1996 through 2015. Before founding Juniper, Pradeep worked at the Computer Science Lab at Xerox PARC for 11 years, developing design tools and multiprocessor architectures. During this period he invented the first cache coherency algorithms for packet-switched buses and made fundamental contributions to Sun Microsystems' high-performance multiprocessor servers. Pradeep holds a Bachelor's in Electrical Engineering from the Indian Institute of Technology in Kanpur, as well as a Master's in Electrical Engineering from the University of Hawaii. In addition, Pradeep holds both a Master's and a Doctorate in Computer Science from Carnegie Mellon University. Pradeep is a member of the National Academy of Engineering and holds more than 200 patents.

Jai's Bio
Jai joined Fungible after serving as CTO for multi-billion-dollar Systems businesses (Servers, Storage, Networking) at both IBM and Dell. He was an IBM Fellow, IBM's highest technical honor, and one of the early pioneers who helped create RAID technology. He also led the team that created the industry's first, and still the most successful, storage virtualization product, and his team at IBM also built one of the fastest and earliest parallel file systems in the world. When he left IBM, Jai was CTO for the $20B IBM Systems Group, responsible for guiding 15,000 developers. In 2012, he joined Dell as VP and CTO for Dell Enterprise Solutions Group. In 2013, he became Head of Research and Chief Research Officer for Dell. Jai earned a Doctorate from Ohio State University, holds 53 patents, has published 82 papers, and is a recipient of the IEEE Wallace McDowell Award and the IEEE Reynold B. Johnson Information Systems Award.
44 minutes | Feb 26, 2021
#194 – ScaleFlux & Computational Storage Devices
In this week's episode, Martin and Chris discuss Computational Storage with Tong Zhang, Chief Scientist and co-founder at ScaleFlux. Computational Storage devices add value to traditional NAND by offloading data processes directly onto the storage media. ScaleFlux offers two families of CSDs, including the CSD 2000 series, which implements inline data compression to improve endurance and logical device capacity. In this conversation, Tong covers the benefits of using CSDs as well as some of the challenges of implementation. It's likely we will see CSDs being used for AI/Analytics pre-processing, especially in the public cloud. More information on ScaleFlux can be found at https://www.scaleflux.com/ with further technical details of Computational Storage on the SNIA website.

Elapsed Time: 00:43:39

Timeline
00:00:00 – Intros
00:04:00 – What is the ScaleFlux view of Computational Storage?
00:06:45 – What will the drivers of Computational Storage be?
00:08:40 – Compression can increase endurance and capacity
00:11:00 – The CSD 2000 does inline compression/decompression in FPGA
00:13:05 – Why aren't all vendors doing inline compression?
00:14:45 – Databases make a good use case for CSD
00:17:10 – Compression can be 2:1 or as high as 5:1, depending on data
00:20:15 – SSDs do get hot!
00:21:30 – FPGAs will be replaced by ASICs in future products
00:23:00 – AI/Analytics is a big target for Computational Storage
00:25:00 – Advanced functionality may require APIs
00:27:00 – NVMe will be used as a protocol to push code to CS drives
00:30:20 – Compute and storage are going to have to work closer together
00:35:00 – Computational Storage adoption will be evolutionary
00:38:00 – How will RAID/erasure coding be affected by CS?
00:40:20 – ScaleFlux will support PCIe 4/5 and possibly PLC NAND
00:43:00 – Wrap Up

Related Podcasts & Blogs
#190 – NVIDIA BlueField SmartNICs and DPUs
#180 – SmartNICs – Pliops Storage Processor
#177 – SmartNICs and Project Monterey
#96 – Discussing SmartNICs and Storage with Rob Davis from Mellanox
47 minutes | Feb 19, 2021
#193 – HYCU Protégé Office 365 Backup as a Service
This week, Chris and Martin are talking to a podcast repeat offender, Subbiah Sundaram, VP of Products at HYCU. HYCU has recently announced the availability of Protégé for Microsoft Office 365, delivered as SaaS or Backup as a Service (BaaS). This continues an expansion of the HYCU and Protégé backup offerings that started with Nutanix data protection and has expanded past on-premises virtualisation to encompass the public cloud and now SaaS. The conversation covers a wide range of topics relating to SaaS and data protection, including the way in which services are implemented via APIs provided by SaaS vendors. With unlimited data storage, SaaS backup offers real opportunities for data mining and analysis. To learn more about Protégé for Office 365, follow the link to https://www.hycu.com/tryhycu/. You can find more details on the support for Office 365 in the press release – here.

Elapsed Time: 00:47:15

Timeline
00:00:00 – Intros
00:03:00 – Vendors do not back up your SaaS service (for you)
00:04:45 – Office 365 is a multitude of separate services
00:09:00 – Where does HYCU BaaS reside?
00:10:00 – The charging model for BaaS is different to on-premises offerings
00:11:25 – How does the customer monitor success/failure in SaaS offerings?
00:17:00 – Some businesses are happy with self-restore, others are not!
00:18:30 – What features do SaaS vendors offer to do backup?
00:20:00 – SaaS API users need to rethink how their services are built
00:22:00 – Centralised credentials management is key to delivering SaaS
00:25:50 – How can unlimited storage be justified and what does it mean?
00:29:00 – Can unlimited storage be abused?
00:31:50 – Can unlimited be used as deliberately limited (active deletion)?
00:35:50 – Is BaaS a perfect tool for e-discovery?
00:42:55 – How will HYCU bring the existing non-SaaS service together with SaaS?
00:44:20 – The public cloud is the right place to do backup analytics
00:45:30 – Wrap Up

Related Podcasts & Blogs
#165 – Homogeneous Data Protection with HYCU
#73 – HYCU – Data Protection for Hyper-Converged Infrastructure
HYCU Announces GA of HYCU for Azure
Backup is Your Responsibility – Even in Public Cloud

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #tv7t. The post #193 – HYCU Protégé Office 365 Backup as a Service appeared first on Storage Unpacked Podcast.
53 minutes | Feb 12, 2021
#192 – Storage & Kubernetes with Nigel Poulton
This week, Chris and Martin chat to long-time friend and Kubernetes legend, Nigel Poulton. Nigel is well-known in the industry for producing training courses and books on Kubernetes, although he was once a storage person in a previous life. The aim of this podcast episode is to examine how storage and Kubernetes come together. However, we start by asking Nigel to explain his pivot to containers and now the Kubernetes ecosystem. This discussion touches on some interesting aspects of how persistent storage and Kubernetes should be managed together, which today is via the CSI (Container Storage Interface). Is this plugin a long-term solution for data mobility? We also manage to get an obligatory mainframe reference into the conversation. Sadly we didn’t get through all of our discussion topics in this long-running episode, so Nigel will be back later in the year to continue the conversation. You can find more on Kubernetes over at Nigel’s website: https://nigelpoulton.com/

Elapsed Time: 00:53:08

Timeline
00:00:00 – Intros
00:01:00 – Nigel is looking for things to “read”
00:05:10 – Why did Nigel pivot to Kubernetes?
00:07:50 – Is Kubernetes the future of containers?
00:09:55 – Kubernetes needs to avoid the OpenStack risk
00:13:15 – Are we building permanent or temporary clusters?
00:17:40 – Why did AWS open source EKS?
00:22:30 – Shouldn’t we talk about storage now?
00:25:40 – Networking is just “pass the parcel” (hot potato)
00:28:00 – Customers should be using CSI-supported storage
00:32:00 – Nigel believes data mobility should be an application responsibility
00:35:10 – Obligatory mainframe reference (DFSMS)
00:39:20 – How should autoscaling work for storage and Kubernetes?
00:42:20 – Why is QoS in storage not seen more frequently?
00:49:00 – Nigel is into muscle cars
00:51:00 – Wrap Up

Related Podcasts & Blogs
#53 – Persistent Storage and Kubernetes with Evan Powell
#151 – Introduction to StorageOS v2.0
#145 – Anthos Ready Storage for the Enterprise
#129 – Choices for Persistent Container Storage with Niraj Tolia
Will We Care About Kubernetes in 2025?

Nigel’s Bio
Nigel’s a technology geek, author of three utterly mind-blowing and life-changing books, and creator of weapons-grade Kubernetes training videos (his words, not ours).

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #lx84. The post #192 – Storage & Kubernetes with Nigel Poulton appeared first on Storage Unpacked Podcast.
33 minutes | Feb 5, 2021
#191 – CIO Pandemic Priorities
This week, Chris and Martin are in discussion with Cathy Southwick, CIO at Pure Storage. The topic of conversation is the results of a CIO survey undertaken by Pure, looking back at the challenges of the coronavirus pandemic and how customers have changed their IT strategies. The discussion covers how Pure Storage has adapted to the lack of in-person site visits and the move to remote installation and operations. Digital transformation projects have continued, while customers have adapted their priorities and goals to align with the challenges presented by COVID-19. Public cloud has been an easy target for migrations. Customers are now looking at how best to rebalance workloads between on-premises infrastructure and the public cloud as companies stabilise and re-adjust to a COVID-compliant way of working. You can find Cathy’s LinkedIn article and the results of the survey here – https://www.linkedin.com/pulse/what-we-can-learn-from-cios-covid-19s-impact-cathleen-southwick/

Elapsed Time: 00:32:51

Timeline
00:00:00 – Intros
00:00:30 – Martin gets trolled by Google Maps!
00:03:30 – How have Pure and their customers managed the pandemic?
00:06:00 – How has data centre access been managed?
00:09:00 – Digital transformation projects have continued to be delivered
00:14:00 – Automation is top of the priorities, with security and customer experience
00:16:30 – Employee well-being figured highly
00:17:30 – How have customers adopted the cloud? Tactical or strategic?
00:21:00 – FinOps – financial operations for cloud will be the dream next job
00:24:00 – Will cloud drive the “as a service” models?
00:26:10 – The Portworx acquisition indicates a pivot towards data and data mobility
00:28:35 – EMC – Where Information Lives – describes the future for storage companies
00:30:40 – Wrap Up

Related Podcasts & Blogs
#152 – Post Pandemic Storage Efficiencies
#149 – Coronavirus 2.0
#146 – Coronavirus and Impacts on the Technology Industry
#185 – Pure-as-a-Service 2.0

Cathy’s Bio
Cathy Southwick joined Pure Storage in 2018 as Chief Information Officer. In this role, she leads Pure’s global IT strategy and advances the company’s operations through the delivery of next-generation technology capabilities and systems. Cathy is an accomplished leader with over 20 years of experience defining and executing forward-looking IT strategies. Prior to Pure, Cathy held leadership positions at AT&T, including Vice President, Technology Engineering and Vice President, Cloud Planning & Engineering. During her tenure at AT&T, Cathy led the planning and execution of IT strategies across the core network, IT application modernisation, and the IT cloud. Before joining AT&T, Cathy spent 11 years at Viking Freight System (now owned by FedEx), where she held escalating leadership positions in IT architecture and planning, software development, merger integration, strategic planning, human resources management, procurement, project/portfolio management, and process re-engineering. Cathy holds a bachelor’s degree in Business Administration from Saint Mary’s College and an MBA from the University of Phoenix.

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #30lx. The post #191 – CIO Pandemic Priorities appeared first on Storage Unpacked Podcast.
48 minutes | Jan 29, 2021
#190 – NVIDIA BlueField SmartNICs & DPUs
This week, Chris and Martin chat to Kevin Deierling, SVP of Marketing for Networking Products at NVIDIA. SmartNICs and DPUs (Data Processing Units) are starting to become mainstream as application use cases such as AI and analytics drive a need for greater data throughput and performance. Kevin explains the design and thinking behind BlueField, NVIDIA’s family of DPU products that combine offloaded network, storage and security functionality.

Elapsed Time: 00:47:36

Timeline
00:00:00 – Intros
00:01:30 – We’re not a networking podcast!
00:02:15 – Why will we need DPUs and SmartNICs?
00:04:30 – Von Neumann is diverging
00:06:50 – What is the BlueField architecture?
00:08:45 – A DPU could act as a storage array controller
00:11:30 – Storage DPUs make devices appear local
00:13:00 – DPUs enable efficient bare-metal server deployments
00:15:00 – Storage, networking & security use around 30% of traditional cores
00:16:00 – Does a DPU represent better or worse performance than a CPU?
00:20:15 – NVIDIA DPUs emulate existing devices, reducing application changes
00:25:00 – Where does the outboard management take place?
00:26:00 – DOCA is the application framework for DPUs
00:28:00 – BlueField-2X combines GPU and DPU on the same card
00:31:40 – DPUs enable the real-time nature of data processing
00:35:40 – Mainframe reference!
00:38:00 – Where will initial adoption take place?
00:40:00 – How does the use of SmartNICs affect TCO?
00:45:10 – The future is 1000x improvement with BlueField-4
00:46:00 – Wrap Up

Related Podcasts & Blogs
#96 – Discussing SmartNICs and Storage with Rob Davis from Mellanox
#177 – SmartNICs and Project Monterey
#180 – SmartNICs – Pliops Storage Processor
VMware Project Monterey – First Impressions
Fixing the x86 Problem

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #ky1r. The post #190 – NVIDIA BlueField SmartNICs & DPUs appeared first on Storage Unpacked Podcast.
41 minutes | Jan 22, 2021
#189 – The Quiet Success of Software-Defined Storage
This week, Chris and Martin debate the success of software-defined storage, or SDS. Over the past 10-15 years, storage systems have evolved into a wide range of software solutions, appliances and cloud-based products. What has driven this move to focus entirely on software, and what will the next stage of evolution be? Are we in a position where everything is based on software, in some form? This discussion also touches on consumption models. Has SDS provided the catalyst for vendors to offer better subscription licences and consumption models? Is the writing on the wall for the traditional storage array?

Elapsed Time: 00:40:51

Timeline
00:00:00 – Intros
00:00:55 – Intel H10 recap
00:01:43 – Siri interrupts!
00:03:34 – Has SDS been a quiet success?
00:05:20 – The five phases of SDS evolution
00:07:08 – What compatibility issues exist?
00:07:40 – Phase 2 – bespoke SDS solutions
00:10:14 – Phase 4 – infrastructure abstraction
00:11:50 – Phase 5 – partnered solutions
00:13:00 – Is it possible to define SDS today?
00:14:21 – PowerStore is SDS (but not sold like that)
00:15:20 – Some SDS packaging is a commercial decision
00:17:00 – Many storage solutions will run as virtual & cloud instances
00:19:00 – SDS has been around for many years in different forms
00:21:30 – Is high-end storage (like PowerMax) SDS?
00:23:10 – Where is SDS challenged?
00:26:01 – Is Open Source the biggest challenger to commercial storage?
00:29:39 – Where will SDS go in the future?
00:30:04 – Will SmartNICs affect the development of SDS?
00:32:30 – Do purchasing models need to change for SDS?
00:36:00 – Vendors may need to rewrite their software
00:37:00 – Does SDS need to be more “data aware”?
00:39:00 – Wrap Up

Related Podcasts & Blogs
#156 – Introduction to Hammerspace
#148 – Unpacking HPE’s Storage Strategy
#141 – Building Storage Systems of the Future
Storage Predictions for 2021 and Beyond (Part III – SDS)
Will TCO Drive Software Defined Storage?
ScaleIO Becomes Software Defined on Hardware

Vendors & products referenced in this podcast: Nexenta, OpenFiler, FreeNAS/TrueNAS, VMware, NetApp, Isilon, HPE (Primera & 3PAR), Dell PowerStore, Pure Storage, Ceph, OpenEBS, Rook, Weka, StorageOS, Portworx, Hedvig, Hammerspace, StorONE, Backblaze, Wasabi, AWS S3, LizardFS, Gluster, GPFS, Spectrum Scale, SVC, PowerMax, Hitachi, Nimble Storage, Red Hat, NVIDIA, Komprise, Qumulo

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #ugbm. The post #189 – The Quiet Success of Software-Defined Storage appeared first on Storage Unpacked Podcast.
32 minutes | Jan 15, 2021
#188 – Is Intel Optane Ready for Primetime?
This week, Chris and Martin dig deeper into the adoption of Intel Optane technology in both the consumer and enterprise markets. Optane is a persistent memory technology that offers greater performance and lower latency than NAND flash, but is currently more expensive and comes in smaller-capacity devices. As Optane becomes more prevalent in both markets, how is it being adopted and what needs to change to increase adoption levels? Note: since recording, Chris has tried out the Intel H10 device and the results were disappointing. Read more here.

Elapsed Time: 00:31:49

Timeline
00:00:00 – Intros
00:01:20 – Is Optane poised for data centre and consumer dominance?
00:02:20 – What are the Optane consumption forms?
00:03:38 – How can Optane be used at home?
00:06:10 – Could Optane offer “instant on”?
00:07:05 – How does the H10 manage the Optane/QLC cache?
00:08:10 – Optane offers benefits for home video editing
00:10:22 – Optane is 16x more expensive than QLC
00:10:40 – Could FuzeDrive be a better solution for building a hybrid drive?
00:12:00 – A home PC can offer greater performance than 10-year-old arrays
00:13:45 – Will we use M.2 in the data centre (or U.2)?
00:14:00 – Where is Optane being used in the data centre?
00:16:25 – Operating system support for Optane is already here
00:17:44 – Will Open Source databases be first to exploit Optane?
00:19:28 – Optane is optimised for byte-level addressing
00:22:08 – Optane adoption is still tactical, like SSDs
00:22:50 – How would containers and Optane work?
00:26:40 – Intel doesn’t appear to sell raw Optane chips
00:28:35 – Would Martin consider Optane for home?
00:30:40 – Wrap Up

Vendors mentioned in this podcast: StorONE, VAST Data, Pure Storage, 3PAR/HPE, Vexata, MemVerge, IBM, Western Digital, Infinidat, StorageOS, Portworx.

Related Podcasts & Blogs
Intel H10 Hybrid Optane M.2 SSD is a Disappointment
When Will Optane SSDs Replace NAND Flash?
What is Intel Optane?
FlashArray//X Gets Optane Acceleration with DirectMemory
HPE Demos 3PAR with Intel Optane (3D-XPoint)
#174 – Introduction to Zoned Storage with Phil Bullinger
#171 – Exploiting Persistent Memory with MemVerge
#184 – MCAS – Memory Centric Active Storage

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #kehf. The post #188 – Is Intel Optane Ready for Primetime? appeared first on Storage Unpacked Podcast.
50 minutes | Dec 30, 2020
#187 – End of the Year Show 2020 – Part Two
This week, Martin and the two Chrises conclude their end-of-year discussions. This episode covers the “Turkeys” of 2020, Missing in Action companies and those technologies that should be put into Storage Room 101. These are the products we never want to see darken our doors again. Finally, the team finish with a discussion of what to expect in 2021. And for your delight, here are some Hitachi Mr T videos:
https://www.youtube.com/watch?v=tW1S2tsxVHg
https://www.youtube.com/watch?v=boZyHDJ5qCs

Elapsed Time: 00:49:54

Timeline
00:00:00 – Intros
00:01:00 – What are the Turkeys of the Year?
00:01:40 – Is secondary data re-use a turkey?
00:05:00 – Is Snowflake offering real data analytics?
00:07:50 – We explain ETL processes
00:12:00 – Teradata did analytics 30 years ago
00:12:40 – Internet of Things – another turkey?
00:15:00 – We said goodbye to Stellus, Datrium, Violin
00:16:40 – Storage buyers’ remorse?
00:19:00 – Is Optane aimed at being the AMD killer?
00:24:00 – Missing in action – Formulus Black?
00:26:00 – Whatever happened to Memristors?
00:27:00 – It’s time for Storage Room 101!
00:30:00 – Viruses create holes in floppy disks!
00:32:00 – What is the point of DNA storage?
00:36:50 – Johnny Mnemonic!
00:37:00 – Optical drives – never again!
00:39:20 – Who still uses bare metal recovery?
00:41:10 – What will 2021 look like?
00:43:00 – HAMR disks will appear in 2021
00:46:00 – PLC NAND will appear in 2021
00:48:30 – What won’t we see the end of in 2021?
00:49:00 – Wrap Up

Vendors mentioned in this podcast: Delphix, Actifio, Catalogic, Cohesity, Snowflake, Rubrik, HYCU, IBM, Veritas, Commvault, Stellus Technologies, Datrium, Violin Systems, StorCentric, Vexata, Retrospect, Kaseya, DDN, Tintri, EMC, Mozy, Iomega, NetApp, Storwize, Pure Storage, Intel, AMD, Micron, Formulus Black, InfiniteIO, Hammerspace

Related Podcasts & Blogs
#186 – End of the Year Show 2020 – Part One

The post #187 – End of the Year Show 2020 – Part Two appeared first on Storage Unpacked Podcast.
43 minutes | Dec 11, 2020
#186 – End of the Year Show 2020 – Part One
This week, Martin and the two Chrises look back at 2020 for the highlights and lowlights of the storage industry. What’s been hyped and what’s been a success? Will we be travelling in 2021, and what impact has the lockdown had on 2020 sales? This episode is the first of a two-parter over the next couple of weeks before we close down for the Christmas period.

Elapsed Time: 00:43:10

Timeline
00:00:00 – Intros
00:00:56 – Chris has not been to China
00:01:55 – Chris M is on a health kick!
00:02:48 – Gartner analysts haven’t moved
00:04:05 – Chris M sees all-flash arrays as a disappointment
00:05:20 – Sales has had a tough year
00:07:00 – What about conferences?
00:08:11 – Chris would consider ExCeL – if it wasn’t closed
00:10:15 – Is it the end for influencers and big conferences?
00:12:00 – Just call me Mr Chips!
00:13:00 – The cloud is great – apart from AWS us-east-1!
00:14:30 – What happened with fund raising in 2020?
00:17:30 – How was Zerto funded this year?
00:21:30 – How did the Kasten/Veeam acquisition work?
00:24:50 – Veeam must be an IPO target at some point
00:27:00 – Nasuni could be an IPO target too
00:28:00 – Jensen likes leather jackets and no socks
00:29:00 – Datrium lives on in VMware
00:29:35 – CloudJumper or CloudSweater?
00:31:50 – Whatever happened to Stellus?
00:33:00 – Remember Whiptail?
00:34:00 – What tech has been over-hyped in 2020?
00:41:00 – NVMe Ethernet drives are coming
00:42:40 – Wrap Up

Related Podcasts & Blogs
#146 – Coronavirus and Impacts on the Technology Industry
#149 – Coronavirus 2.0

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #9d7j. The post #186 – End of the Year Show 2020 – Part One appeared first on Storage Unpacked Podcast.