The Problem with Big Hard Drives

Ever since hard drives went mainstream in the late 1980s, computer owners have been able to count on one thing: ever-increasing capacity. As a result, what was once little more than a convenient way to reduce disk swapping has become an integral part of daily life, with hard drives today storing not just our programs and documents but our very memories, in the form of photos and videos. The ever-growing number of those files has pushed storage capacity to still more dizzying levels, to the point where most people now only really think about it when they're in danger of filling it up. But there's a reason really large hard drives should be at the forefront of every computer owner's mind: they're about to become unusable.

Okay, perhaps that's a bit hyperbolic, but only a bit. The fact is, we're not just approaching the threshold past which hard drives can't get bigger: we're already there. The advent in 2010 of very large hard drives (those over 2.19 terabytes, or TB, in capacity) has highlighted a problem that's been growing for well over a decade, and whose seeds were planted when the very first hard drives hit the market some 25 or 30 years ago. The good news is that there are some solutions to it; the bad news is, they're little more than workarounds for an issue that needs more than makeshift bandages.

What Is the 2.19TB Barrier?
Back in the days when hard drives' capacities were measured in megabytes at the outside (we're talking the 1950s here, though it was still the case well into the 1980s, when floppy disks' sizes were measured only in kilobytes), a block size of 512 bytes made sense as a way of maximizing available space. But during the boom years of the 1990s, when drives were rapidly getting bigger and performance was starting to strain, the 512-byte block size became more restrictive. So the industry decided to raise the standard block size to 4,096 bytes, or 4KB (a scheme sometimes referred to as "Advanced Format"); 4KB blocks allow more efficient error checking, which means they can devote less space to it, resulting in drives that have higher capacities and encounter fewer errors.
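To see why larger blocks reclaim space, consider the per-sector overhead. The byte counts below (sync/gap and error-correction sizes) are rough illustrative assumptions, not figures from any particular drive's datasheet; the point is that eight 512-byte sectors pay the overhead eight times, while one 4KB sector pays it once.

```python
# Illustrative comparison of format efficiency: legacy 512-byte sectors
# vs. a single 4KB Advanced Format sector. Overhead byte counts are
# assumed for illustration only.

SYNC_AND_GAP = 15   # assumed bytes of sync/address mark/gap per sector
ECC_512 = 50        # assumed ECC bytes protecting a 512-byte sector
ECC_4K = 100        # assumed ECC bytes protecting a 4,096-byte sector

def format_efficiency(data_bytes, overhead_bytes):
    """Fraction of on-disk bytes that hold user data."""
    return data_bytes / (data_bytes + overhead_bytes)

# Eight legacy sectors carry the same 4KB of user data as one
# Advanced Format sector, but incur the overhead eight times over.
legacy = format_efficiency(8 * 512, 8 * (SYNC_AND_GAP + ECC_512))
advanced = format_efficiency(4096, SYNC_AND_GAP + ECC_4K)

print(f"legacy 512B sectors: {legacy:.1%} efficient")
print(f"4KB Advanced Format: {advanced:.1%} efficient")
```

Even with these made-up numbers, the 4KB layout comes out several percentage points more efficient, which is the capacity gain the industry was after.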


It was a good idea, but it didn't gain mainstream adoption until after Windows XP made its big splash in 2001. That OS creates primary disk partitions starting at logical block address 63, one short of being evenly divisible by eight, so its writes straddle the physical boundaries of the 4KB sectors. In other words, the world's most popular operating system for the better part of a decade didn't support 4KB blocks, which put a huge crimp in the industry's efforts to expand the initiative. Luckily, there was a workaround that could both benefit Windows XP users and ease everyone's transition to the larger block size: hard drives would report their blocks as being only 512 bytes in size, so any OS that didn't understand the larger size could simply "see" each 4KB block as eight 512-byte blocks. Windows XP, then, would just need to "align" its partitions properly; that was hardly an insurmountable problem and, indeed, Western Digital has designed alignment software to compensate.
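The alignment rule itself is simple arithmetic: with eight 512-byte logical blocks per 4KB physical sector, a partition is aligned only if its starting address is a multiple of eight. A minimal sketch (the function name here is hypothetical, not from any real tool):

```python
# Under 512-byte emulation, each 4KB physical sector is exposed as
# eight 512-byte logical blocks, so a partition start is "aligned"
# only when its logical block address is a multiple of 8.

LOGICAL_PER_PHYSICAL = 4096 // 512  # 8 logical blocks per physical sector

def is_aligned(start_lba: int) -> bool:
    return start_lba % LOGICAL_PER_PHYSICAL == 0

print(is_aligned(63))    # Windows XP's default partition start -> False
print(is_aligned(2048))  # the 1MB boundary newer tools use -> True
```

Starting at LBA 63 means every 4KB of data from the OS's point of view spans two physical sectors, forcing the drive into read-modify-write cycles; shifting the start to an aligned address (as Western Digital's utility does) eliminates that penalty.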

Unfortunately, even 4KB sectors couldn't guarantee drives of infinite capacity. Because of the limitations of the BIOS and the 32-bit MBR partition scheme established in the earliest years of computing, hard drive capacities had a firm upper ceiling of 2³² logical blocks. If each of those blocks was 512 bytes in size, that meant an MBR drive could store no more than 2,199,023,255,552 bytes, or 2.19TB.
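The ceiling is just the product of those two numbers:

```python
# The MBR limit: a 32-bit logical block address can reach 2**32 blocks,
# and each block reports itself as 512 bytes under 512-byte emulation.

MAX_BLOCKS = 2 ** 32
BLOCK_SIZE = 512

max_bytes = MAX_BLOCKS * BLOCK_SIZE
print(max_bytes)              # 2199023255552
print(max_bytes / 1000 ** 4)  # ~2.199 decimal terabytes
```

Note that the limit comes from the 512-byte addresses the drive reports, not the 4KB sectors underneath, which is why emulation alone can't lift it.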
