Friday, September 12, 2008

http://bebas.vlsm.org/v06/Kuliah/SistemOperasi/BUKU/SistemOperasi-4.X-1/ch07s09.html
http://www.total.or.id/info.php?kk=Distributed%20Processing

Basic Theory of Data Communication

2.1. Definitions of Data Communication, Telecommunication, and Data Processing
Data communication is the combination of telecommunication techniques and data processing techniques.
• Telecommunication is any activity concerned with conveying information from one point to another;
• Data processing is any activity concerned with the processing of data;
• The combination of these two techniques is known not only as data communication but also as teleprocessing (remote processing);
• In general, data communication can be described as the process of sending information (data), converted into an agreed-upon code, over an electrical or electro-optical medium from one point to another;
• A data communication system is the physical and functional network that allows a user to access a computer for facilities such as running programs, accessing databases, and communicating with other operators, in such a way that all these facilities appear to be at the user's own terminal even though they are physically located elsewhere.

2.2. Guiding Ideas in Data Communication
• Deliver information as quickly as possible with as few errors as possible;
• Integrate all types of communication into one system, namely ISDN (Integrated Services Digital Network);

Figure 2.1. ISDN
2.3. Advantages of Data Communication
a. Data collection and preparation
If an intelligent terminal is used at the point of data collection, the time needed to gather the data can be reduced, which speeds up the overall process (saves time).
b. Data processing
The computer processes incoming data directly from the transmission line (efficiency).
c. Distribution
With a transmission line in place, results can be sent directly to the users who need them.

2.4. Goals of Data Communication
a. To enable the transmission of large volumes of data from one place to another efficiently, economically, and without errors;
b. To enable the use of computer systems and supporting equipment from a distance (remote computer use);
c. To enable computers to be used in either a centralized or a distributed fashion, supporting management control whether centralized or decentralized;
d. To simplify the management and administration of data held in various computer systems;
e. To reduce the time needed for data processing;
f. To obtain data directly from its source (improving reliability);
g. To speed up the dissemination of information.

2.5. Factors to Consider in Data Communication
a. Signalling
Signalling is a procedure or protocol that must be carried out before the transmission of information begins.
b. Transmission
The transmission medium must be efficient and able to serve many kinds of devices. Transmission characteristics include:
- the frequency bandwidth that can be carried
- attenuation
- the power that can be carried
- the time required
c. Numbering
Numbering (addressing) must be unique and follow the recommendations or agreements of the relevant authorities.
d. Routing
Determining the policy for how a connection will be established.
e. Tariffing
Determining the price structure for the services that must be paid for.

2.6. Fields of Application for Data Communication
a. Data Collection
Data can be collected from several locations (remote stations), stored in memory, and processed at certain times.
Examples: inventory and payroll applications.
b. Inquiry and Response
Users can access files or programs directly. Data sent to the computer system can be processed immediately, and the results can be returned at once. When the user conducts a dialogue with the computer, such a system is called interactive.
Examples: banking applications, retail payment systems.

c. Storage and Retrieval
Data previously stored in the computer can be retrieved at any time by the parties concerned.
Examples: message switching and e-mail applications.
d. Time Sharing
A number of users can run their programs concurrently. Each user is given a fixed time slice in which to work, after which the next user gets a turn; a minimal scheduling sketch follows this list. If too much work must be done within one time unit, a roll-in/roll-out facility must be used.
Examples: shared use of a computer system for software development, computation, engineering, word processing, CAD (computer-aided design), and so on.
e. Remote Job Entry
A remote job terminal sends programs or data (text) to be stored on the central computer, where the data is processed. The program is executed in batch mode, that is, processed when its turn comes.
Examples: applications that make use of computer equipment located far away.
f. Real-Time Data Processing and Process Control
The results of processing are required within a time frame that suits the needs of the process itself (real time).
Examples: control of industrial equipment, process control systems, telecommunication systems, and so on.
g. Data Exchange Among Computers
The exchange of data such as programs and files between computer systems. In these applications the volume of data exchanged is large and the time allowed is very short.
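The fixed-quantum turn-taking described under Time Sharing above can be pictured with a toy round-robin loop. This is only a sketch: the Job class, the user names, and the two-unit quantum are invented for the illustration and do not come from any real scheduler.

import java.util.ArrayDeque;
import java.util.Queue;

// A toy round-robin scheduler: each "user job" runs for a fixed time
// quantum, then goes to the back of the queue, as in time sharing.
public class TimeSharingDemo {
    static class Job {
        final String user;
        int remainingUnits; // work left, in abstract time units
        Job(String user, int units) { this.user = user; this.remainingUnits = units; }
    }

    public static void main(String[] args) {
        final int QUANTUM = 2; // fixed time slice per turn
        Queue<Job> ready = new ArrayDeque<>();
        ready.add(new Job("alice", 5));
        ready.add(new Job("bob", 3));
        ready.add(new Job("carol", 4));

        while (!ready.isEmpty()) {
            Job job = ready.poll();
            int slice = Math.min(QUANTUM, job.remainingUnits);
            job.remainingUnits -= slice;
            System.out.printf("%s runs for %d unit(s), %d left%n",
                              job.user, slice, job.remainingUnits);
            if (job.remainingUnits > 0) {
                ready.add(job); // back of the queue: the next user gets a turn
            }
        }
    }
}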

2.7. Basic Components of a Data Communication System
a. Source (transmitter or sender)
The sender or transmitter of the data. Since this discussion centers on computer systems, the transmitter is a computer system. Data communication can also be two-way, in which case the transmitter can act as a receiver as well.
b. Transmission medium
The channel over which the information is carried to its destination. The medium used may be cable, air, light, and so on.
c. Receiver
The device that receives the transmitted information.

Figure 2.2. Basic Components of a Communication System
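The three components can be made concrete with a minimal Java sketch in which a loopback TCP connection stands in for the transmission medium; port 9090 and the message text are arbitrary choices for this illustration.

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

// Source -> medium -> receiver in one JVM: a sender thread (the source)
// writes a message over a loopback TCP connection (the medium), and the
// main thread acts as the receiver.
public class ComponentsDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket receiver = new ServerSocket(9090)) {
            new Thread(() -> {                      // the source (sender)
                try (Socket medium = new Socket("localhost", 9090);
                     PrintWriter out = new PrintWriter(medium.getOutputStream(), true)) {
                    out.println("hello over the medium");
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();

            try (Socket conn = receiver.accept();   // the receiver
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                System.out.println("received: " + in.readLine());
            }
        }
    }
}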
2.8. Electrical Signals
Data communication concerns machine-to-machine communication, such as terminal-to-computer and computer-to-computer. Since these machines work with digital signals, communication is easiest using digital signals.

Reasons for using electrical or electro-optical signals in long-distance communication:
- Practically unlimited range.
- Very high speed (approximately 300,000 km/s).
- Electrical signals are easy to generate.
- Other quantities can easily be converted into electrical signals, and vice versa.
Types of Electrical Signals
a. Analog signals
Signals that behave like waves: continuous, with no abrupt changes between one part of the signal and the next. Much data transmission is carried out with analog signals.
b. Digital signals
Signals that behave like pulses: discontinuous, with abrupt changes between one part of the signal and the next. Computer systems work with these signals.
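A simplified picture of the pulse-like shape of a digital signal: the sketch below maps the bits of one character onto two discrete "voltage" levels, in the style of NRZ-L line coding. The NRZ-L choice and the +/-5 levels are illustrative assumptions, not something the text above prescribes.

// Illustrative only: renders the bits of one character as two discrete
// levels, the pulse-like shape of a digital signal, in contrast to the
// continuous shape of an analog wave.
public class DigitalSignalDemo {
    public static void main(String[] args) {
        char c = 'K';
        for (int bit = 7; bit >= 0; bit--) {
            int value = (c >> bit) & 1;
            int level = (value == 1) ? +5 : -5; // two levels, nothing in between
            System.out.printf("bit %d -> %+d V%n", value, level);
        }
    }
}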

http://apadefinisinya.blogspot.com/2008_05_18_archive.html

Distributed Computing with Java RMI

Friday, August 24th, 2007

by

Iriano Jaya M. S.   (02/157632/PA/09186)
Wijaya Adhi S.      (03/165672/PA/09459)
Wim Permana      (03/165273/PA/09313)

Department of Computer Science
Faculty of Mathematics and Natural Sciences
Universitas Gadjah Mada
Yogyakarta
2007

1. Introduction

1.1. Background

In the beginning, during the 1970s, many developers used the client-server programming model to build applications for a network. With the client-server model, a developer has a clear picture when looking at an application: at a minimum, the developer already knows which application acts as the server and which will be its client.

A server-side application is the application that responds to requests submitted by the client. The client-side application, in turn, is the party that issues requests to the server application.

Unfortunately, this model has a weakness on the abstraction side: the developer must constantly deal with the details of the sockets and the protocol used by both parties, the server and the client application. The problems the developer faces include:
The server and the client must know each other's location (IP address).
The client must know the server's state (whether it has moved or is still in the same place).
What happens when several clients access the server at the same time?
Both the server and the client must know the protocol and the language used by their counterpart.

To address these abstraction problems, a new technology known as RPC (Remote Procedure Call) was introduced in the 1970s. With this technology, a developer can apply the simplicity of ordinary procedure calls to client-server applications. This is the main reason behind the name RPC.

Although RPC gave developers a measure of convenience, developments in object-oriented programming eventually demanded a new technology. A series of technologies duly appeared, among them RMI (Remote Method Invocation), CORBA (Common Object Request Broker Architecture), and SOAP (Simple Object Access Protocol).

At the time of writing, these three technologies have become the new paradigms of distributed computing. RMI, CORBA, and SOAP each have their own strengths and weaknesses, so the existence of RMI will not do away with CORBA or SOAP, and vice versa.

1.2. What is RMI?

RMI, or Remote Method Invocation, is a class library and API (application programming interface) in the Java programming language that allows objects running in one JVM (Java Virtual Machine) to invoke methods of objects located in another JVM on another machine.

Unlike CORBA or DCOM, which can be developed using different programming languages, RMI is built exclusively for objects written in Java.
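To make the definition concrete, here is a minimal RMI sketch: a remote interface, its implementation, registration with an RMI registry, and a client-style lookup and call. The Greeting interface, the "greeting" binding name, and the message are invented for the example; both halves run in one JVM here for brevity, whereas in practice the lookup would happen on a different machine (1099 is RMI's default registry port).

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote contract: every remote method can fail with RemoteException.
interface Greeting extends Remote {
    String greet(String name) throws RemoteException;
}

// The implementation that lives in the server's JVM.
class GreetingImpl extends UnicastRemoteObject implements Greeting {
    GreetingImpl() throws RemoteException { super(); }
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }
}

public class RmiDemo {
    public static void main(String[] args) throws Exception {
        // Server side: create the registry and bind the remote object.
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", new GreetingImpl());

        // Client side (normally another JVM on another machine):
        // look up the stub and invoke the method as if it were local.
        Greeting stub = (Greeting) LocateRegistry.getRegistry("localhost", 1099)
                                                 .lookup("greeting");
        System.out.println(stub.greet("world"));
    }
}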

http://www.wimpermana.web.ugm.ac.id/budi_s/category/semester-8/sistem-terdistribusi/

RPC and REST: Dilemma, Disruption, and Displacement

Toward Integration

Sept./Oct. 2008



Steve Vinoski • Verivue

In the previous four issues, I've explored problems with the remote procedure call (RPC) abstraction and explained how the Representational State Transfer (REST) architectural style is one alternative that can yield a superior approach to building distributed systems. Because RPC is inherently tied to programming language abstractions, my May/June 2008 column also investigated multilingual programming, in which developers choose languages according to how well they actually fit the problem at hand, rather than the typical approach of choosing a popular general-purpose language and bending it to fit the problem. Choosing the right language and teaming it with a network programming style like REST can obviate the need for problematic techniques like RPC, thus letting developers build distributed systems both conveniently and correctly.

Some readers agree with my conjectures and conclusions in the past several columns, and others vehemently oppose them. Although there's really nothing surprising about that, the forces that lead different readers to agree or disagree are quite interesting. To make sense of these forces, we must try to understand how, when, and why different customers adopt different technologies, based on factors that can extend well outside purely technical characteristics. We must also understand how technologies evolve, why certain approaches win out over others even when they appear to be technically disadvantaged, and how we might be able to analyze and even predict how new technologies will perform in the marketplace. Armed with such knowledge and understanding, each of us can even analyze our own tendencies and preferences when it comes to adopting technologies — perhaps gaining a better understanding of why certain approaches appeal to us more than others.
Innovation

Many are familiar with the popular book The Innovator's Dilemma [1], in which author Clayton Christensen provides important insights about the nature of innovation, technological change, and how technology markets work. He gained these insights by studying companies from several disparate industries, including hard-disk manufacturers, businesses involved in making steel, and firms that create and sell mechanical excavators.

With respect to innovation, Christensen explains that there are two kinds of technologies:
Sustaining technologies are essentially improvements to products or approaches that already satisfy customers within a given market. Christensen states that they "improve the performance of established products, along the dimensions of performance that mainstream customers ... have historically valued."
Disruptive technologies are promising approaches that users of the incumbent sustaining technologies in a given market initially perceive as being less capable. Those that are successful eventually evolve to fulfill the needs of customers within that market at a lower cost than the sustaining technologies can deliver — and often with greater capability as well.

The dilemma to which Christensen's book title refers is that the steps that managers must take to ensure their products' success and growth in the marketplace also make it extremely difficult for them to respond to disruptive technological changes that eventually push their products into obsolescence. Consider how a successful product generally evolves:

The product addresses the needs of certain customers within a market. The customers are reasonably happy with the product, but they feel that some added or improved features and capabilities would make it even better.

To keep the customers happy, the product's manager ensures that the product is enhanced with the requested additions and improvements.

The additions and improvements not only help make existing customers happier but also help attract new customers, for whom the cycle begins all over at step 1.

These steps form a loop that repeats throughout a product's life cycle. Although customers have certainly viewed some product versions and releases as poorer than their predecessors — many feel that Windows Vista falls squarely into this category, for example — a competent product manager would never intentionally choose to release a product version that doesn't, at a minimum, meet existing customers' expectations and requirements. The reason, of course, is that the product can't succeed without those customers. In fact, to achieve the growth rates that firms normally seek, products must gradually move "up-market" to be able to command premium prices from the very best customers.
Overshooting opens the door

The dilemma presents itself because existing customers want improvements, not setbacks, but disruptive technologies are initially unable to meet those customers' demands. Product managers have little choice but to avoid disruptive innovations and move forward with sustaining technologies to continually improve their products to meet these customers' demands; by doing so, they're more likely to be able to secure the premium prices they seek. Yet, as Christensen so lucidly explains in his follow-on book, Seeing What's Next [2], catering to higher-end customers can lead to products that overshoot a nontrivial segment of other customers within that market — those who don't want to pay a premium, especially for features and capabilities they don't need.

Managers of successful products generally aren't concerned about this because they view such overshot customers as undesirable compared to their up-market clients. However, this leaves the door open for disruptive products to take root. Overshot customers turn to the less expensive and seemingly less capable disruptive technology because it's "good enough" for them — the initial problems inherent in the newer product simply don't get in their way. This allows the disruptive product to begin the three-step cycle described earlier, and its customers start to drive it to improve. As the disruptive product improves, it appeals to more and more customers, thus driving the incumbent product into smaller market segments in which it can still command the premiums needed to maintain revenue and profit. The manager of the incumbent product is therefore essentially unable to respond to the disruption because doing so would mean lower margins, less profit, and unhappy customers — a dilemma indeed.

Product-adoption rates also figure into the overall equation. As a product matures and its adoption rate increases, its market grows until the product becomes mainstream and then capable of demanding premiums from the best customers. Eventually and inevitably, however, the product's adoption rate starts to decrease, thus beginning a downhill slide that can ultimately end in obsolescence. Graphing the adoption rate reveals a bell curve that's better known as the "technology adoption life cycle" made famous by Geoffrey Moore's book Crossing the Chasm [3]. Depending on the market, these curves can span anywhere from just a few years to many decades; consider the long life cycle of the land-line telephone, for example.

The technology adoption life-cycle curve helps categorize customer types. Those on the rising (left) side of the curve are early adopters of technology who are willing to try something new and look past its perceived initial shortcomings in the hope that it will provide a competitive advantage. The opposite customer type is found on the descending (right) side of the bell curve, where well-vetted, mature products live until they become obsolete. These conservative customers want nothing to do with new, unproven, risky, and potentially buggy technologies and products. They want something solid and well-proven, and they typically complain loudly when the odd problem crops up, no matter how trivial. In the middle, we find the average customers whose balanced risk/reward ratio leads them to favor products and approaches that the early adopters have already proven to work reasonably well. The average customers seek competitive advantage over more conservative adopters, and — at a minimum — they want the products and approaches they use to help them stay even with other similar competitors without incurring too much risk.
RPC sustains, REST disrupts

Applying Christensen's insights about innovation and technological change to the approaches, products, and customers in the enterprise integration space can be illuminating. For example, if we go back over the history of RPC-oriented systems that I covered last time, we see the pattern of sustaining innovations moving systems up-market. Early RPC systems were indeed rudimentary. However, they appealed to overshot customers — developers who didn't have the time, knowledge, or skills required to employ the typical techniques for creating networked applications of the day, which generally involved carefully hand-crafting custom network protocols along with the custom code needed to drive them. Even the earliest, buggiest RPC framework of the time was good enough for the small-scale systems of the day.

Soon, though, customers wanted more, and the march of sustaining innovations began: Sun RPC and Apollo NCS, DCE, Corba, RMI, J2EE, SOAP, and WS-*. These approaches are all relatively similar in form and function, but each was perceived in the market largely as an improvement over what had come before it. Firms that created products based on these technologies moved right along with each change, building their next sustaining products on each as it appeared. Frequently, "new" products were simply adaptation layers for existing products. Customers for these products also tended to follow along with these sustaining innovations. From my own experience, for example, customers using WS-* in this decade were those using Corba in the 1990s, and they refused to even consider using WS-* until it integrated relatively cleanly with their Corba systems, like a good sustaining innovation should.

RESTful HTTP, on the other hand, has all the makings of a disruptive technology to the RPC market. As RPC systems moved up-market and gained capabilities and features over time to continue to satisfy the most demanding customers, they overshot more and more potential users who shunned the complexity and cost of such systems. In RESTful HTTP, which was born in the adjacent market of the World Wide Web and is a sustaining technology there, these overshot users are finding an approach that helps them build solutions that are less expensive, simpler to build, and easier to extend and maintain than what RPC approaches can offer. It's precisely these qualities that make RESTful HTTP a disruptive technology in this context.
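Part of what makes RESTful HTTP "good enough" is how little machinery a client needs: no generated stubs, no interface definition language, just the uniform HTTP interface. A sketch using only the JDK follows; the resource URL and the Accept header are hypothetical, chosen to show the shape of the interaction rather than any real service.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// A RESTful interaction with nothing but the JDK: GET one resource's
// representation through the uniform HTTP interface.
public class RestGetDemo {
    public static void main(String[] args) throws Exception {
        URL resource = new URL("http://example.com/orders/42"); // hypothetical
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml"); // content negotiation
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}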

Not surprisingly, however, users who favor RPC approaches view RESTful HTTP with suspicion, just as Christensen's theories and empirical evidence predict they would. Such users commonly raise arguments along the lines that RESTful HTTP lacks tooling and interface definition languages, that it works for human-driven browser-based systems but is unsuitable for application-to-application integration, or that it can't adequately support distributed transactions. In short, RESTful HTTP doesn't yet appear to be "good enough" for them.
Grading on a curve

The degree to which incumbent RPC users view RESTful HTTP with skepticism depends directly on how far to the right they lie on the technology-adoption life-cycle curve. In fact, many technical arguments and disagreements result not from purely technological differences but from the participants' very different places in the technology-adoption life cycle. With respect to the enterprise integration space, REST proponents tend to inhabit the early adopter side of the curve, whereas RPC supporters hail from the conservative right side. It's no surprise that the RPC vs. REST argument never seems to die down; the participants have completely different risk–reward ratios and value systems, and thus are unable to find common ground. Of course, within any such disagreement, you'll also find the "can't we all just get along" middle-ground folks who point out that both approaches have merits — they, of course, are the pragmatic majority who populate the middle of the bell curve.
Fight or flight

Another hallmark of a disruptive technology is that as it becomes "good enough" for more users within a market, it gradually displaces the incumbent sustaining technology, thereby invoking "fight or flight" reactions from those still using the sustaining approaches. Such reactions are evident in the consolidation of vendors in the SOA/WS-* market, such as Oracle's acquisition of BEA and Progress Software's purchase of IONA Technologies, and in the fact that some WS-* frameworks and toolkits have incorporated RESTful HTTP into parts of their systems.

For example, WSO2 uses Atom [4] and AtomPub [5] (both built on RESTful HTTP) within its registry product (www.wso2.com/products/registry/), which is part of a set of open source products based on SOA and WS-*. Somewhat ironically, the registry uses a RESTful approach to handle the publication and lookup of metadata for non-RESTful RPC-oriented Web services. Christensen refers to this approach as "cramming," in which firms try to capitalize on disruptive technologies by incorporating them into sustaining products; it's not an approach he recommends because "it takes an innovation from a circumstance in which its unique features are valuable to a circumstance in which its unique features are a liability" [2]. In this case, the benefits of REST are hidden behind an RPC-oriented API for accessing the registry, and those benefits disappear completely as soon as an application uses the registry to find a non-RESTful service and starts to use it. WSO2's strategy might also be risky because it could drive customers away from the company's other non-RESTful products. It's not hard to imagine registry users finding the approach appealing and realizing that they can use similar techniques to gradually rid themselves of their own complicated, expensive, and brittle WS-* implementations in favor of RESTful HTTP Web services.

It's also interesting to think about how new RPC systems such as Facebook's Thrift, Google's Protocol Buffers, and Cisco's Etch fit into the picture. From the enterprise RPC market perspective, these are purely sustaining innovations, and so they're quite unlikely to make inroads with existing customers who view them as inferior to existing products and systems they already use. However, these systems might well take root by targeting non-users of RPC technology in adjacent markets. For example, given Cisco's typical target market, Etch might take root in the embedded networking device space, which is a very conservative market that has started to trust RPC only within the past few years. Similarly, Thrift and Protocol Buffers might find users among developers who build the back ends of Web-based systems. Developers in this space, who tend to worry quite a bit about performance and scalability, are generally loath to buy into the complexity and runtime overhead of WS-* approaches, but they'll gladly snap up a lightweight framework from the likes of Google and Facebook, who both make it quite clear that they use their respective frameworks themselves with great success.

Whether RESTful HTTP will continue to displace RPC-oriented systems within the enterprise isn't ultimately just a matter of whether one approach is technically "better" than the other. The technology-adoption life cycle clearly indicates that such evaluations are relative. Technology choice is never black-and-white, and in the big picture, the time we spend arguing for one technology over another based on pure technical merit is, frankly, largely wasted. It ultimately comes down to cost — if RESTful HTTP can indeed yield "good enough" integration solutions that cost less to develop and maintain, it will slowly displace heavier, more costly RPC-oriented approaches in more and more enterprise scenarios. As Christensen, Moore, and others have so clearly explained for us, such changes are inevitable, regardless of any technical arguments sustaining technology fans might try to muster to prevent them.
References
[1] C.M. Christensen, The Innovator's Dilemma, Harvard Business School Press, 1997.
[2] C.M. Christensen, S.D. Anthony, and E.A. Roth, Seeing What's Next, Harvard Business School Press, 2004.
[3] G.A. Moore, Crossing the Chasm, HarperCollins, 1999.
[4] M. Nottingham and R. Sayre, The Atom Syndication Format, IETF RFC 4287, Dec. 2005.
[5] J. Gregorio and B. de hOra, The Atom Publishing Protocol, IETF RFC 5023, Oct. 2007.

Steve Vinoski is a member of the technical staff at Verivue in Westford, Mass. He is a senior member of the IEEE and a member of the ACM. You can read Vinoski's blog at http://steve.vinoski.net/blog/ and reach him at vinoski@ieee.org.

http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2008/09&file=w5tow.xml&xsl=article.xsl&

Distributed Computing

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.

In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.
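A toy illustration of "splitting a program into parts that run simultaneously": the sketch below divides a summation across four worker tasks. Here the workers are threads inside one computer, which is the parallel-computing case; the four-way split and the 1..1000 range are arbitrary. In the distributed case, each part would instead run on a separate machine and exchange results over the network, taking on the latency and failure handling the paragraph above mentions.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Splits the sum of 1..1000 into four parts that run simultaneously.
// With threads this is parallel computing; put each worker on another
// machine and it becomes distributed computing.
public class SplitWorkDemo {
    public static void main(String[] args) throws Exception {
        int parts = 4, n = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        List<Future<Long>> results = new ArrayList<>();
        for (int p = 0; p < parts; p++) {
            final int lo = p * n / parts + 1, hi = (p + 1) * n / parts;
            results.add(pool.submit(() -> {        // one independent part
                long sum = 0;
                for (int i = lo; i <= hi; i++) sum += i;
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> f : results) total += f.get(); // gather partial results
        pool.shutdown();
        System.out.println("total = " + total); // 500500
    }
}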

http://en.wikipedia.org/wiki/Distributed_computing

DISTRIBUTED SYSTEMS

DISTRIBUTED SYSTEMS: CONCEPTS AND DESIGN
Second Edition, 1994
George Coulouris, Jean Dollimore and Tim Kindberg
644 pages
Addison-Wesley Publishing Company
ISBN 0-201-62433-8
CONTENTS
FOREWORD (by Ken Birman, Cornell University)
PREFACE
(Each chapter commences with an outline and concludes with an extensive set of exercises designed to test and consolidate the reader's understanding of the contents).
1 CHARACTERIZATION OF DISTRIBUTED SYSTEMS
1.1 Introduction
1.2 Examples
1.3 Key characteristics
1.4 Historical background
1.5 Summary
2 DESIGN GOALS
2.1 Introduction
2.2 Basic design issues
2.3 User requirements
2.4 Summary
3 NETWORKING AND INTERNETWORKING
3.1 Introduction
3.2 Network technologies
3.3 Protocols
3.4 Technology case studies: Ethernet, Token Ring and ATM
3.5 Protocol case studies: Internet protocols and FLIP
3.6 Summary
4 INTERPROCESS COMMUNICATION
4.1 Introduction
4.2 Building blocks
4.3 Client-server communication
4.4 Group communication
4.5 Case study: interprocess communication in UNIX
4.6 Summary
5 REMOTE PROCEDURE CALLING
5.1 Introduction
5.2 Design issues
5.3 Implementation
5.4 Case studies: Sun and ANSA
5.5 Asynchronous RPC
5.6 Summary
6 DISTRIBUTED OPERATING SYSTEMS
6.1 Introduction
6.2 The kernel
6.3 Processes and threads
6.4 Naming and protection
6.5 Communication and invocation
6.6 Virtual memory
6.7 Summary
7 FILE SERVICE: A MODEL
7.1 Introduction
7.2 File service components
7.3 Design issues
7.4 Interfaces
7.5 Implementation techniques
7.6 Summary
8 FILE SERVICE: CASE STUDIES
8.1 Introduction
8.2 The Sun Network File System
8.3 The Andrew File System
8.4 The Coda File System
8.5 Summary
9 NAME SERVICES
9.1 Introduction
9.2 The SNS - a name service model
9.3 Discussion of the SNS and further design issues
9.4 Case studies: DNS, GNS and X.500
9.5 Summary
10 TIME AND COORDINATION
10.1 Introduction
10.2 Synchronizing physical clocks
10.3 Logical time and logical clocks
10.4 Distributed coordination
10.5 Summary
11 REPLICATION
11.1 Introduction
11.2 Basic architectural model
11.3 Consistency and request ordering
11.4 The gossip architecture
11.5 Process groups and ISIS
11.6 Summary
12 SHARED DATA AND TRANSACTIONS
12.1 Introduction
12.2 Conversations between a client and a server
12.3 Fault tolerance and recovery
12.4 Transactions
12.5 Nested transactions
12.6 Summary
13 CONCURRENCY CONTROL
13.1 Introduction
13.2 Locks
13.3 Optimistic concurrency control
13.4 Timestamp ordering
13.5 Comparison of methods for concurrency control
13.6 Summary
14 DISTRIBUTED TRANSACTIONS
14.1 Introduction
14.2 Simple distributed transactions and nested transactions
14.3 Atomic commit protocols
14.4 Concurrency control in distributed transactions
14.5 Distributed deadlocks
14.6 Transactions with replicated data
14.7 Summary
15 RECOVERY AND FAULT TOLERANCE
15.1 Introduction
15.2 Transaction recovery
15.3 Fault tolerance
15.4 Hierarchical and group masking of faults
15.5 Summary
16 SECURITY
16.1 Introduction
16.2 Cryptography
16.3 Authentication and key distribution
16.4 Case study: Kerberos
16.5 Logics of authentication
16.6 Digital signatures
16.7 Summary
17 DISTRIBUTED SHARED MEMORY
17.1 Introduction
17.2 Design and implementation issues
17.3 Sequential consistency and Ivy
17.4 Release consistency and Munin
17.5 Other consistency models
17.6 Summary
18 DISTRIBUTED OPERATING SYSTEMS: CASE STUDIES
18.1 Introduction
18.2 Mach
18.3 Chorus
18.4 UNIX emulation in Mach and Chorus
18.5 Amoeba
18.6 A comparison of Mach, Amoeba and Chorus
18.7 Clouds
18.8 Firefly RPC
18.9 The Amoeba multicast protocol
REFERENCES
INDEX

http://www.dcs.qmw.ac.uk/research/distrib/dsbook/