Bigger and better. LTO technology innovation shows no sign of slowing!

Andrew Dodd

Worldwide Marketing Communications Manager at HPE Storage

45 TB (compressed) on an LTO-9 cartridge that you can comfortably hold in the palm of your hand is a lot of data! It means that the HPE StoreEver MSL6480, with its 580 slots, is now capable of holding up to 26.1 PB in a single 42U rack.

Twenty years ago, when HPE’s predecessor HP launched LTO-1, you would have needed 130,500 cartridges for the same amount of capacity – enough to fill a warehouse! That’s a pretty formidable example of LTO innovation and improvement, and one that flies in the face of criticisms that tape is an outdated storage solution that has failed to keep up with data centre storage needs.
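
For the numerically curious, here’s a quick back-of-the-envelope check of those figures – just a sketch in Python, assuming the 45 TB compressed capacity of LTO-9 and the 200 GB compressed capacity of LTO-1:

# Sanity check of the capacity claims above (compressed capacities assumed)
SLOTS = 580      # slots in a fully expanded HPE StoreEver MSL6480
LTO9_TB = 45     # compressed capacity of one LTO-9 cartridge, in TB
LTO1_TB = 0.2    # compressed capacity of one LTO-1 cartridge (200 GB), in TB

print(f"Capacity per rack: {SLOTS * LTO9_TB / 1000} PB")              # 26.1 PB
print(f"LTO-1 cartridges needed: {SLOTS * LTO9_TB / LTO1_TB:,.0f}")   # 130,500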

In addition to this incredible engineering evolution, there have been other achievements along the way, such as Write Once Read Many (WORM) technology to provide data integrity; Linear Tape File System (LTFS) to make tape easy to use; and native AES-256 encryption to enhance data security. All these innovations have kept LTO technology relevant to the challenges faced by modern businesses.

But what’s gone before is far less interesting than what lies ahead!

Tape vs HDD Capacity Growth: Which Holds The Advantage?

It’s generally accepted that data is growing at a 40-50% compound annual growth rate (CAGR), fuelled by new technologies such as AI, 5G and the billions of sensors in the Internet of Things.
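
To put that rate into perspective, a quick compounding calculation (using the midpoint figure of 45%, purely as an illustration) shows how fast volumes multiply over a decade:

\[ V_{10} = V_0 \times 1.45^{10} \approx 41\,V_0 \]

In other words, an organisation could be managing roughly forty times as much data ten years from now as it does today.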

Faced with this sharp increase in data volumes, LTO technology still has plenty of headroom to increase the capacity per cartridge. The LTO roadmap presently extends to 480 TB with LTO-12, and by the end of the decade, it’s entirely feasible that a single LTO data cartridge could hold well over 700 TB. A 550 TB LTO cartridge based on Strontium Ferrite (SrFe) particle technology has already been demonstrated in prototype form, and the Information Storage Industry Consortium (INSIC) technology roadmap projects that cartridges holding 723 TB could be achievable by the early 2030s. This means that tape is well positioned to be a superlatively dense and low-cost storage medium for preserving the vast quantities of data we will see in years to come.


In comparison, according to public statements and projections made by disk manufacturers (e.g. during investor meetings and presentations), the capacity of a single hard disk may only just have breached the 100 TB threshold in that same timeframe.

Organisations that base their archives solely on HDD are likely to be challenged by the slowdown in areal density growth evident in the published roadmaps of leading HDD vendors. 


Areal Density: It’s A Kind Of Magic

Areal density is very important in storage terms because it measures how many bits can be stored per unit area of a magnetic recording surface. This in turn determines the capacity of the medium in question, whether that be tape or disk.

If you want to store more data, you need to make the space occupied by each bit smaller, but scaling down at this microscopic level is challenging, especially when HDDs already have a very high areal density to begin with. An HDD has a much smaller surface area than an LTO tape (a 3.5 inch diameter platter versus a roughly one kilometre length of tape) with which to achieve the same amount of storage capacity. This means the bit cells of a hard disk drive are many times smaller than those on tape media, even when both hold the same amount of data.

As a practical example, current 18 TB hard disk products have an areal density of 1,022 Gb/in². The new 18 TB (native) LTO-9 Barium Ferrite cartridge, however, ‘only’ has an areal density of 12 Gb/in². So the areal density of today’s high-capacity HDDs is already around 85X greater – approaching 100X – than that of a comparable LTO tape.
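
If you enjoy checking the maths, here is a small Python sketch that reproduces that ratio and shows just how much recording surface tape has to play with. The tape dimensions are nominal LTO-9 media values (roughly 1,035 metres long and 12.65 mm wide) used purely as an illustration:

import math

HDD_AD = 1022   # areal density of a current 18 TB HDD, in Gb/in² (from the text)
TAPE_AD = 12    # areal density of an LTO-9 cartridge, in Gb/in² (from the text)
print(f"HDD areal density advantage: {HDD_AD / TAPE_AD:.0f}X")        # ~85X

# Approximate recording surface available to each medium
tape_area = (1035 * 39.37) * (12.65 / 25.4)   # tape surface in in², ~20,300
platter_side = math.pi * (3.5 / 2) ** 2       # one side of a 3.5 in platter, ~9.6 in²
print(f"Tape surface vs one platter side: {tape_area / platter_side:.0f}X")  # ~2,100X

It’s that enormous difference in available surface area that lets tape match disk capacities at a fraction of the areal density.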


This does not mean HDD areal densities are “better”. Instead, it shows that LTO technology still has abundant potential to increase capacity dramatically before it approaches the areal density of current hard disk drives. If a kilometre-long length of tape could achieve a comparable areal density to a 3.5 inch diameter disk platter, it would obviously have a huge capacity advantage.

This is fundamental to the future of archival storage. It should be clear that hard disk manufacturers must find a way to improve the areal density of their solutions if they are to keep pace with the explosion of data. The main reason disk capacity growth has slowed recently is that vendors have been unable to do this. The challenge they have encountered is a constraint of magnetic recording physics known as the ‘superparamagnetic limit’.


As the magnetic particles that coat the hard disk platter become smaller and more tightly packed to increase areal density, they lose their ability to maintain the stable magnetic state that is necessary to record data in the first place. In other words, beyond the superparamagnetic limit, the error rate created by this phenomenon is too high to support reliable data recording.
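
For readers who want the physics in a nutshell, the effect is usually summarised by the Néel–Arrhenius relation – a textbook result, not specific to any vendor’s media:

\[ \tau = \tau_0 \exp\!\left(\frac{K_u V}{k_B T}\right) \]

Here \(\tau\) is the average time before a grain’s magnetisation flips spontaneously, \(\tau_0\) is an ‘attempt time’ of around a nanosecond, \(K_u\) is the grain’s anisotropy energy density, \(V\) is its volume, and \(k_B T\) is the thermal energy. Shrink the grain volume and the exponent collapses, so retention falls off exponentially; as a rule of thumb, \(K_u V / k_B T\) needs to stay in the region of 40-60 or more for recorded bits to remain stable for years.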

As we have just seen in the example above, however, LTO tape areal densities are still far lower than those of disk. This means the capacity of LTO tape can still increase dramatically while remaining well inside the superparamagnetic threshold. It’s this characteristic of tape technology that gives the aforementioned LTO roadmap and INSIC forecasts their credibility; the science is entirely feasible and has already been demonstrated with actual R&D prototypes.

The superparamagnetic limit is too technical a subject to cover in detail in a short blog, so if you want to learn more, check out the Tape Innovation Webinar that I hosted with the LTO Program back in October 2021, where I discuss different approaches to magnetic recording with three of the world’s leading specialists in the field from HPE, IBM and Quantum.

Future Innovations In Tape And Disk

It’s worth mentioning that the new approaches being developed by HDD manufacturers to overcome this problem (e.g. Heat-Assisted and Microwave-Assisted Magnetic Recording – HAMR and MAMR) and the multi-actuator technology required to maintain random IOPS-per-TB performance are both relatively unproven at the kind of scale that will be required, and they will add cost and complexity to production. This doesn’t rule out disk technology, of course, but it seems likely that disk capacity growth will progress more slowly, and there are likely to be some additional adoption challenges and costs.

In comparison, tape’s evolutionary path builds on existing technologies while refining them with new innovations – e.g. new low-friction tape head technology to allow the use of very smooth tape media, and an ultra-narrow read sensor just 29 nanometres wide to read back data written to high-capacity SrFe media.


So the short summary is that businesses will almost certainly either need to deploy ever-greater numbers of hard disks, in ever-expanding data centres of colossal scale, or use ultra-dense tape technology to store the majority of their cold data. And while there are a host of cost/benefit trade-offs that come into making that decision, in my opinion it seems very hard to argue against LTO tape having a key role to play in the decade ahead, regardless of whether your data is stored on-premises or in a public cloud. I don’t see any way around this logic.

For some businesses, the cost of keeping all their data on hard disk might seem like a justifiable return on investment regardless of the absolute cost. For others, ease of fast access might be the primary consideration, although it’s worth noting that this very much depends on use case: for streaming contiguous datasets, tape is actually much faster than the cheap mechanical hard disks it’s often compared against for archiving. 

And in either case, the potential environmental impact of an all-disk storage architecture will also be a factor - not just in terms of power and cooling, but also because of the resources and the amount of space, or land, required to host it. And because so much of this data is infrequently accessed, the durability of tape - up to thirty years - could make it a more practical choice for creating a deep layer of archival information. Unless there is a truly compelling commercial need, it’s arguably an ineffective and inefficient use of resources to keep data on a more expensive storage platform just for the sake of it.

Active Archiving Lets You Navigate Your Data Oceans

This brings me neatly onto active archiving. It’s often said that data allows us to break down boundaries and create new horizons. With the average company now having four petabytes of archive content to manage, even relatively small organisations have huge quantities of information at their fingertips. But with traditional data protection models, business data has too often been divided and walled up in silos, trapping this enormous potential inside closed and incompatible storage platforms.

Active archiving can set your data free by collapsing the barriers that technology erects. But although the purpose of an active archive is to create a universal content store, in reality, technology and financial restrictions still impose boundaries. This is because businesses need to deploy different storage technologies based on other factors such as cost, storage density and security. As a consequence, what appears to the software as a single, virtual namespace may be realised across different physical storage technologies - flash, disk appliance, object storage server or tape - in multiple locations, both on-premises and in the cloud. 

But while these benefits are compelling, one of the drawbacks of deploying tape in an active archive system was always the fact that tape uses traditional file storage hierarchies to store data. This complicated the free movement of information from an object storage tier, such as Scality RING or AWS, to tape within a single, software-defined, active archive system. There always needed to be some kind of gateway and a conversion process to read S3 data into a file storage hierarchy, and vice versa when data was being requested off tape.
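
To make that gateway idea concrete, here is a minimal sketch of what such a conversion layer does in essence: it fetches an object from an S3 bucket and lays it down as a file in a POSIX hierarchy on an LTFS-mounted tape, and does the reverse on recall. The mount point and the mapping of object keys to file paths are hypothetical illustrations, not any particular vendor’s implementation:

import os
import boto3

s3 = boto3.client("s3")
LTFS_MOUNT = "/mnt/ltfs"   # hypothetical mount point of an LTFS-formatted tape

def archive_object(bucket: str, key: str) -> str:
    """Convert an S3 object into a file in the LTFS hierarchy (object -> file)."""
    dest = os.path.join(LTFS_MOUNT, bucket, key)   # the object key becomes a path
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    s3.download_file(bucket, key, dest)
    return dest

def recall_object(bucket: str, key: str) -> None:
    """Read a file back off tape and restore it as an S3 object (file -> object)."""
    src = os.path.join(LTFS_MOUNT, bucket, key)
    s3.upload_file(src, bucket, key)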

But now there is a tremendous amount of innovation centred on solutions that allow you to migrate data natively from any S3 data source, whether that’s in the cloud or on-premises disk-based storage, directly to LTO tape. Examples include HPE’s Data Management Framework and Spectra Logic’s BlackPearl.

Tape For Tomorrow

In conclusion, HPE StoreEver LTO tape innovation is still at the forefront of storage engineering breakthroughs in 2022. Although it’s a very familiar sight in data centres, I expect that it will take at least ten more years before tape might begin to run into challenges of its own caused by areal density and the superparamagnetic threshold. Because of media durability – the subject of a later article in this series – information stored on tape today should be accessible until well into the middle of the current century. And although there is a lot of discussion about alternative recording media, such as holographic discs, DNA storage and femtosecond laser-etched glass, none of these is capable of matching LTO tape technology when it comes to meeting zettabyte demand for archival storage in the short or even medium term.

In the next article, I’ll be looking in more detail at tape’s credentials as a super-low-cost tier for long-term archiving. In the meantime, feel free to give me feedback in the comments here on LinkedIn or by following me on Twitter @tapevine.

Thank you once again for reading!

