
Megabits and Megabytes

I don’t hate telecommunications companies — what I do hate is their inability to standardise what they do.

My latest assignment was to create a test music video of about 60-90 seconds, with the intention of learning video editing software. Alex is currently writing an article on the differences between linear and non-linear editing techniques and how Adobe is one of the best information technology companies that were ever created. Or so he claims.

I shall be ranting about why I get confused all the time about upload speeds and file sizes.

Computers don’t work like us humans. Humans have grown used to thinking in tens. Our counting system is decimal — it moves up one place for every ten new items we have.

Computers, however, think in binary — a two-digit counting system. This works absolutely fine, except that when we started attaching prefixes to certain measurements, it caused absolute chaos.

Take the term ‘megabyte’. The ‘byte’ part makes a lot of sense, since it’s a common, accepted measure of storage. It’s the prefix, ‘mega’, that causes all the problems.

The SI defines the prefix ‘mega’ as meaning a million, or 1000000. Therefore, following SI definition, the term ‘megabyte’ means one million bytes, which is the official standard today. And that’s fine.

Until the old system of defining storage media gets involved. Because computers work on binary systems, storage capacity actually went up in powers of 2 — 2⁴, 2⁸, 2¹⁰ and so on — or, in the form more familiar to IT developers at least, 128, 256, 512, 1024, 2048… and so forth. That was great, but most people didn’t like having to remember all these numbers, so they began to make generalisations, rounding the values to the nearest power of 10 when naming them. Thus, the term ‘megabyte’ actually referred to 1 048 576 bytes, which was close enough to one million — at least, it was only a 4.9% error.
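That growing discrepancy is easy to verify. Here’s a quick Python sketch (the prefix/exponent pairs are just the standard SI and binary definitions, nothing from this post):

```python
# Percentage error between the binary and decimal senses of each prefix.
for name, dec_exp, bin_exp in [("mega", 6, 20), ("giga", 9, 30), ("tera", 12, 40)]:
    decimal = 10 ** dec_exp   # SI meaning of the prefix
    binary = 2 ** bin_exp     # traditional binary meaning
    error = (binary - decimal) / decimal * 100
    print(f"{name}: {error:.1f}% error")
# mega: 4.9% error
# giga: 7.4% error
# tera: 10.0% error
```

Which reproduces the 4.9% figure, and shows exactly how the gap widens with each prefix.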


The increasing error percentage between metric and binary units. Image via Wikipedia.

Unfortunately, as storage capacities grew, the differences became more noticeable. At a gigabyte, the error rose to 7.4%; at a terabyte, 10%. Eventually, technology manufacturers caught on, and started calculating their storage media in powers of 10 instead of 2. So they would market a product as 16GB when its actual formatted capacity was much less — 14.88 GB, to be precise. Thus, for every gigabyte of data consumers bought, they were robbed of 7.4% of it.

Naturally, people weren’t exactly too happy with the idea. The new standard is to keep the old prefixes ‘mega’, ‘giga’, and ‘tera’ for powers of 10, in line with their SI definitions, and to use new prefixes to denote the powers of two. (These new prefixes are ‘mebi’, ‘gibi’, and ‘tebi’ respectively, and my spell check insists they’re not words.)

But that’s not all. Then there’s the difference between a byte and a bit.

A byte of information is actually composed of eight bits. Consumers rarely worried about bits, because they were such small quantities of information — until the advent of the Internet, and especially mobile 3G networks. Telecommunications companies advertise their capacity in bits — but they don’t spell it out, simply expressing their data speeds as, say, 54Mb/s.

This is the evil of information standards. The symbol for a byte is ‘B’, and for a bit, ‘b’. Unless you’re a technical expert, have done your research, or are unusually observant, you’d very easily confuse one for the other. I’ve fallen for this trick many times (I still do), and it’s annoyed me a lot since.

I was uploading my music video to the Internet, and, even with my previous experience and knowing I was probably going to get it wrong, tried to calculate how long it was going to take. The file was 135 MB (which was pretty small, because my first compression attempt put it at 806 MB), and my upload speed was 0.45 Mb/s. Being me, I simply worked out 135 divided by 0.45, and came up with an answer of about five minutes.

Great. Five minutes later, I returned, and found my video hadn’t even reached 5% yet. Then I checked my calculations and mentally whacked myself. To upload one megabyte, eight megabits are needed — meaning the actual upload time would be closer to forty minutes. It wasn’t a serious setback, but it annoyed me when I realised I had fallen for the trap yet again.
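The two calculations, side by side, using the figures from above:

```python
file_mb = 135        # file size in megaBYTES
speed_mbit = 0.45    # upload speed in megaBITS per second

wrong = file_mb / speed_mbit        # mistake: treats MB and Mb as the same
right = file_mb * 8 / speed_mbit    # correct: 8 bits per byte

print(f"naive estimate:  {wrong / 60:.0f} minutes")   # 5 minutes
print(f"actual estimate: {right / 60:.0f} minutes")   # 40 minutes
```

Same file, same connection — an eightfold difference, all from one lowercase letter.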

Comments on: "Megabits and Megabytes" (4)

  1. I think you’re overplaying it a little bit, dude. You’re right that there’s a discrepancy between kilobytes and kibibytes, and between megabits and megabytes. No, the system is not perfect, nor does it line up with our experience with 10, 100, 1000, etc. However, I don’t think that this is “evil,” “robbing,” or a “trap” like you suggest. Come back with a few heavy sources saying the same thing, and then we can talk!

    • Hey!

      I apologise if I go overboard with what I’m writing, as I do tend to get very agitated when I rant. But really, this difference is irritating. I’m paying for my bandwidth and storage space and I’d like every single bit of it. What right do providers have not to give me everything they advertised? Perhaps it’s not ‘robbing’, but it’s still an unfair, misleading practice.


