The Great Grid Prodigy (and its glitches)

April 28, 2009
By Daniel Porter, Cornell University

A darkened room. Row upon row of computer stacks. When most people think of supercomputers, this is likely the image that is conjured up – but a technology called “grid computing” is giving us a new vision. In fact, many talk about grid computing as if it will be the biggest thing since the World Wide Web (Quocirca, 2005). I disagree. After giving an introduction to grid computing, I will discuss several obstacles that will be difficult, if not impossible, to overcome before grid computing can become as ubiquitous as its proponents hope.

The principle behind grid computing is relatively simple: take the resources of many computers in many places around the world and use them in unison (Biersdorfer, 2007). If you have a computational task that involves a very large number of operations – say, calculating future atmospheric conditions by considering every half square metre of atmosphere – you break that task into thousands of independent calculations, send them off to thousands of computers, and have each return its results. This proves extremely helpful in computing situations that do not require much data transfer, but that use a comparatively large amount of processor time to complete (GridCafe, 2009).
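The split-and-distribute principle described above can be sketched in a few lines of Python. This is an illustration under stated assumptions, not a real grid client: the “workers” here are local threads standing in for remote grid nodes, and `simulate_cell` is a made-up placeholder for one independent calculation.

```python
# A minimal sketch of the scatter/gather pattern behind grid computing.
# The "workers" here are local threads standing in for remote machines;
# in a real grid, each chunk would be shipped across the network.
from concurrent.futures import ThreadPoolExecutor

def simulate_cell(cell_id):
    """Placeholder for one expensive, independent calculation
    (e.g. the forecast for one half square metre of atmosphere)."""
    return cell_id * cell_id  # illustrative arithmetic only

def run_on_grid(task_ids, n_workers=4):
    # Scatter: hand each independent piece to a worker.
    # Gather: collect the results back in the original order.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(simulate_cell, task_ids))

print(run_on_grid(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key property that makes the pattern work is that each piece is independent: no calculation needs another's result, so the pieces can run anywhere, in any order.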

The term grid computing comes from an analogy with the electrical grid the world uses to get electricity. The hope is that someday one will simply be able to “plug in” to an ideal global computing grid, just as one plugs into the electricity grid, and access non-local processing power without being concerned about its origins (GridCafe, 2009). Your computer’s operations would be sent to processors at locations around the world, computed, and returned to you. Users would be charged for how much they use, and could connect anywhere a grid “plug in” is available.

This version of grid computing is obviously not in place yet – most individuals still operate their own computers and central processing units to do their own computing – but many believe this global computing grid is the future of computing (GridCafe, 2009). Many grids have already been established for scientific and business computing (Biersdorfer, 2007), linking both the volunteered home computers of the public and computing centres designed specifically for that purpose. From searching for extraterrestrial life (SETI@Home, 2009), to supporting the massive stream of data from the world’s largest particle accelerator (LHC@Home, 2009), to simulating protein assembly (FOLDING@Home, 2009), computing grids are already at work on some of the world’s most complicated problems.

Needless to say, grid computing has made many impressive contributions to the way our society does computing, but what of the electricity-like global computing grid envisioned to transform computing power into a commercially available commodity? The grid concept has existed nearly as long as the World Wide Web (Quocirca, 2005), so why are we not all plugging into processing power? There are three important reasons why I believe grid technology will not evolve into this vision of computing infrastructure. Perhaps they are artefacts of the way computing is done nowadays, but they pose significant obstacles that must be overcome before grid computing of this magnitude becomes feasible: data transfer speed, how pricing and billing are handled, and the fact that grid computing is simply unlikely to be useful for everyday individuals under current computing norms.

Data transfer speed has been increasing by leaps and bounds over the last ten years. Speeds achieved over fibre optic networks today were unfathomable ten years ago, and demand is still constantly increasing. For every computer application, speed is an issue. When I use my personal computer, the data transfer time (between the input, processing, and output devices) is almost negligible because all the components are in one place. I only notice my computer slow down when it does a lot of processing. In the grid computing model, processing would be faster, as the computational work is distributed across many computers. However, I would now have to wait for data to be sent to and returned from these non-local computers. I would be, in effect, trading time spent waiting for my local processor for time spent waiting for data to transfer back and forth. Even with current advances in fibre optics, the infrastructure that would need to be put in place to make those waiting times comparable seems excessive.
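The trade-off in this paragraph can be made concrete with a back-of-envelope model. All the numbers below are illustrative assumptions, not measurements: a processor executing about 10^9 operations per second, 100 grid workers of the same speed, and a 10 MB/s network link.

```python
def local_time(ops, local_ops_per_s=1e9):
    """Seconds to run `ops` operations on one local processor."""
    return ops / local_ops_per_s

def grid_time(ops, data_bytes, n_workers=100,
              worker_ops_per_s=1e9, link_bytes_per_s=1e7):
    """Seconds to run the same task on a grid: the computation is
    split across many workers, but the data must cross the network."""
    compute = ops / (n_workers * worker_ops_per_s)
    transfer = data_bytes / link_bytes_per_s
    return compute + transfer

# Compute-heavy task (10^12 operations, 1 MB of data): the grid wins.
print(local_time(1e12), grid_time(1e12, 1e6))  # 1000.0 vs 10.1 seconds

# Data-heavy task (10^8 operations, 100 MB of data): the grid loses.
print(local_time(1e8), grid_time(1e8, 1e8))    # 0.1 vs 10.001 seconds
```

Under these assumed figures, the grid only pays off when a task does enormous amounts of computation per byte moved – which is exactly the waiting-time trade described above.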

We can also raise the issue of how customers will be charged. There are some ways in which the analogy of charging for electricity will carry over, but it is not a stretch to imagine that there are many ways in which it will not. Issues such as taxation are easily posed, and are being addressed by the BEinGRID project. For example, taxable income could be generated by the computations done at various components in a grid – how does one account for, or even track, one node, component, or server of a grid in terms of the income it generates (BEinGRID, 2009)? On a more individual financial level, I think the transition from personal computing to grid computing will not be an easy one for people who will go from free (after hardware is purchased) processing on their own computers to billed processing on a grid.

Furthermore, the difficulties for the individual user do not end there. The transition from the current systems of computers to the grid system would add inconveniences on top of monetary concerns: changes to hardware, changes in how computing systems are used and managed, and the need for large-scale infrastructure additions. Is it all worth it? What benefit would the average user gain from these changes? The average user uses their computer to write documents, browse the internet, play video games, listen to music, and communicate. Few of these applications require large amounts of processor time, and for most, today’s processors are more than adequate. Consider the ratio between the amount of processing a task requires – the number of operations – and the amount of data exchanged between the user and the processor. For many scientific computing applications, the number of computations for a given amount of input and output is very high. For the average user, most processes require a great deal of data exchange for a relatively small number of operations. The grid computing model, as it is currently framed, is very efficient for the former kind of use, but because transfer times across networks are very slow compared to those within a single computer, it makes little sense for the latter.
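The ratio argument above can be pushed one step further. Under the same kind of hypothetical figures as before (processors at about 10^9 operations per second, a 10^7 bytes-per-second link, 100 workers), there is a break-even number of operations per byte of data moved, above which shipping work to the grid beats computing locally.

```python
def break_even_ops_per_byte(local_ops_per_s=1e9, link_bytes_per_s=1e7,
                            n_workers=100):
    """The grid beats local computing when
        ops/(N*s) + bytes/r  <  ops/s,
    i.e. when the task performs more than (N/(N-1)) * (s/r)
    operations per byte of data moved across the network."""
    n = n_workers
    return (n / (n - 1)) * local_ops_per_s / link_bytes_per_s

print(break_even_ops_per_byte())  # ~101 operations per byte
```

A protein-folding workload sits far above such a threshold; typing a document or streaming music sits far below it, which is exactly why the grid model suits the scientist and not the average user.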

Without a doubt, grid computing has made significant contributions to the field of scientific computing. Fields requiring a large amount of computation no longer have to rely on expensive and inconvenient local supercomputers. Nevertheless, I believe that the grid computing model will not soon evolve into the global computing grid envisioned by many of its proponents, simply because for the individual user the costs outweigh the benefits. Grid computing as a global model of computing is a nice theoretical idea, but after a closer look, I predict it will not catch on without significant advancements in the areas of data transfer, monetary feasibility, and practicality for the average user.


BEinGRID Consortium, The. 2009. BEinGRID: Business Experiments in GRID. (accessed April 15, 2009).

Biersdorfer, J.D. 2007. Seeking Security in Grid Computing. The New York Times, February 2, 2007.

GridCafe. 2009. What is grid computing? GRID Talk Project. (accessed April 15, 2009).

Pande, Vijay. 2008. FOLDING@Home. Stanford University. (accessed April 15, 2009).

Quocirca. 2005. Grid computing: a real-world solution? The Register. (accessed April 15, 2009).

SETI@Home. 2009. The science of SETI@home. SETI@Home Project. (accessed April 15, 2009).