control rested with the mainframe and with the guardians of that single computer. It was not a user-enabling environment.

          1.2.2 Peer-to-Peer Computing: Sharing Resources

As you can imagine, accessing a client/server system was something of a “hurry up and wait” experience. The server also created a huge bottleneck: all communication between computers had to go through the server first, no matter how inefficient that was.
The obvious need to connect one computer to another without first going through the server led to the development of Peer-to-Peer (P2P) computing. P2P computing defines a network architecture in which each computer has equivalent capabilities and responsibilities. This is in contrast to the traditional client/server network architecture, in which one or more computers are dedicated to serving the others. (This relationship is sometimes characterized as a master/slave relationship, with the central server as the master and the client computer as the slave.)

P2P was an equalizing concept. In the P2P environment, every computer is both a client and a server; there are no masters and slaves. By recognizing all computers on the network as peers, P2P enables the direct exchange of resources and services. There is no need for a central server, because any computer can function in that capacity when called on to do so.
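To make the dual role concrete, here is a minimal Python sketch (not taken from any particular P2P system; the Peer class, port number, and message format are illustrative assumptions). A single process plays both roles: a background thread accepts requests from other peers, while the same process can connect out to any peer directly, with no central server in between.

    import socket
    import threading
    import time

    class Peer:
        """A node that is simultaneously a server and a client."""

        def __init__(self, host="127.0.0.1", port=9000):
            self.host, self.port = host, port

        def serve(self):
            # Server role: accept requests for resources from other peers.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((self.host, self.port))
                srv.listen()
                while True:
                    conn, _addr = srv.accept()
                    with conn:
                        name = conn.recv(1024)               # a peer asks for a resource
                        conn.sendall(b"resource for " + name)

        def request(self, peer_host, peer_port, name):
            # Client role: fetch a resource directly from another peer.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((peer_host, peer_port))
                cli.sendall(name)
                return cli.recv(1024)

    node = Peer(port=9000)
    threading.Thread(target=node.serve, daemon=True).start()  # server role in the background
    time.sleep(0.2)                                           # give the listener a moment to start
    print(node.request("127.0.0.1", 9000, b"file.txt"))       # client role: b'resource for file.txt'

In a real network, many such nodes would run on different machines, each serving and requesting as needed; the point of the sketch is simply that the two roles live in the same program.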

P2P was also a decentralizing concept. Control is decentralized, with all computers functioning as equals. Content is likewise dispersed among the various peer computers; no centralized server is assigned to host the available resources and services.
Perhaps the most notable implementation of P2P computing is the Internet itself. Many of today’s users forget (or never knew) that the Internet was initially conceived, under its original ARPAnet guise, as a peer-to-peer system that would share computing resources across the world. The various ARPAnet sites (and there were not many of them) were connected not as clients and servers, but as equals.
          The P2P nature of the early Internet was best exemplified by the Usenet network. Usenet, which
          was created back in 1979, was a network of computers (accessed via the Internet), each of which
          hosted the entire contents of the network. Messages were propagated between the peer computers;
          users connecting to any single Usenet server had access to all (or substantially all) the messages
          posted to each individual server. Although the user’s connection to the Usenet server was of the
          traditional client/server nature, the relationship between the Usenet servers was definitely P2P
and presaged the cloud computing of today.

That said, not every part of the Internet is P2P in nature. With the development of the World Wide Web came a shift away from P2P and back to the client/server model. On the Web, each Website is served up by a group of computers, and visitors use client software (Web browsers) to access it. Almost all content and control are centralized, and the clients have no autonomy or control in the process.

          1.2.3 Distributed Computing: Providing More Computing Power

          One of the most important subsets of the P2P model is that of distributed computing, where idle
          PCs across a network or across the Internet are tapped to provide computing power for large,
          processor-intensive projects. It is a simple concept, all about cycle sharing between multiple
          computers.

A personal computer, running full-out 24 hours a day, 7 days a week, is capable of a tremendous amount of computing. Most people do not use their computers 24 × 7, however, so a good portion of a computer’s resources goes unused. Distributed computing puts those resources to work.
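As a rough illustration of the idea (the coordinator, the work-unit format, and the idleness check below are all hypothetical placeholders, not any real project’s protocol), such a client amounts to a loop that donates cycles only when the machine would otherwise sit unused:

    import time

    def machine_is_idle():
        # Placeholder: a real client would ask the OS about CPU load and
        # user activity before claiming the machine is idle.
        return True

    def fetch_work_unit():
        # Placeholder: a real client downloads a chunk of a large problem
        # from a central coordinator, e.g. over HTTP.
        return range(1, 1_000_001)

    def process(work_unit):
        # The processor-intensive piece; here just a stand-in computation.
        return sum(x * x for x in work_unit)

    def report_result(result):
        # Placeholder: a real client uploads the result to the coordinator.
        print("result:", result)

    while True:
        if machine_is_idle():
            # Donate cycles: fetch, crunch, and report one unit of work.
            report_result(process(fetch_work_unit()))
        time.sleep(60)   # then check again in a minute

The paragraphs that follow describe how a PC is actually enlisted into such a project.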
          When a computer is enlisted for a distributed computing project, software is installed on the
          machine to run various processing activities during those periods when the PC is typically



