The turn of the twenty-first century has been described as the beginning of the (or perhaps “an”) Information Age, a moniker that is difficult to dismiss and, if anything, an understatement. Throughout this period, contemporary science has evolved at a swift pace: ever-faster scanning, sensing, recording, and computing technologies have been developed which, in turn, generate data from ever more complex phenomena.

The result is a rapidly growing amount of “information.” Viewed as quantitative collections, these data are colloquially called “Big Data,” a term suggesting a wealth of information – and sometimes disinformation – available for study and archiving.

Where once computer processing and disk storage were relegated to the lowly kilobyte (1024 bytes) and megabyte (1024KB) scales, we have moved past routine gigabyte- (1024MB) and terabyte- (1024GB) scale computing and now collect data on the petabyte (1024TB) and even the exabyte (1024PB) scales.

Operations on the zettabyte scale (1024EB) are becoming more common, and yottabyte- (1024ZB) scale computing looms on the horizon. Indeed, one imagines that the brontobyte (1024YB) and perhaps geopbyte (1024BB) scales are not far off (and may themselves be common by the time you read this).
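Each step up this ladder multiplies the previous scale by 1024 (that is, 2^10), so the sizes grow very quickly. A minimal sketch in Python illustrates the progression; note that “brontobyte” and “geopbyte” are informal names rather than standardized prefixes, and the helper function here is purely illustrative:

```python
# Binary byte prefixes: each unit is 1024x (2^10) the previous one.
# "BB" (brontobyte) and "GeB" (geopbyte) are informal, non-standard names.
PREFIXES = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB", "BB", "GeB"]

def scale_in_bytes(prefix: str) -> int:
    """Return the size of one unit of `prefix` in bytes (binary convention)."""
    return 1024 ** PREFIXES.index(prefix)

for p in PREFIXES:
    print(f"1 {p} = 2^{10 * PREFIXES.index(p)} bytes = {scale_in_bytes(p):,} bytes")
```

Running this makes the explosion concrete: a single petabyte is already 1,125,899,906,842,624 bytes, and each subsequent scale is another thousand-fold (1024-fold) larger.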

Our modern society seems saturated by the “Big Data” produced from these technological advances. In many cases, the lot can appear disorganized and overwhelming – and sometimes it is! – engendering a sort of “quantitative paralysis” among decision makers and analysts.

But we should look more closely: through clever study of the features and latent patterns in the underlying information, we can enhance decision- and policy-making in our rapidly changing society. The key is applying careful and proper analytics to the data.
