Multiprocessor Interconnection Networks: Shared Buses and Crossbar Networks

Interconnection Networks in Multiprocessor Systems

The principal characteristic of a multiprocessor system is the ability of each processor to share access to a common set of main memory modules and peripheral devices. An interconnection structure is therefore needed between the processors and the memory modules (and between the memories and I/O channels, if needed).

• How do we move data between processors?

• Design options:

• Topology (what does the network look like?)

• Routing (which path does a message take?)

• Physical implementation

– Switching, circuit or packet (how is a path set up?)

– Flow control (when does data actually move?)

Shared Buses

A bus network topology is a network architecture in which a set of clients are connected via a shared communications line, called a bus. There are several common instances of the bus architecture, including the bus on the motherboard of most computers and those used in some versions of Ethernet networks.

In this interconnection scheme a common communication path is shared by all the functional units, with each unit using the path for some period of time. It is simple and cost-effective for small-scale multiprocessors, but it is not scalable: bandwidth is limited and the electrical load grows with every device added.

How it works:

Bus networks are the simplest way to connect multiple clients, but may have problems when two clients want to transmit at the same time on the same bus. Thus systems which use bus network architectures normally have some scheme of collision handling or collision avoidance for communication on the bus, quite often using Carrier Sense Multiple Access or the presence of a bus master which controls access to the shared bus resource.
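
To make the bus-master idea concrete, here is a minimal Python sketch; the class name and its round-robin policy are illustrative assumptions, not a description of any particular hardware design. A single arbiter grants the shared bus to at most one requesting device per cycle.

```python
class BusArbiter:
    """Toy model of a bus master: at most one device owns the shared bus per cycle."""

    def __init__(self, num_devices):
        self.num_devices = num_devices
        self.last_granted = -1  # round-robin pointer

    def grant(self, requests):
        """requests: set of device ids asking for the bus this cycle.
        Returns the id granted the bus, or None if nobody requested it."""
        if not requests:
            return None
        # Round-robin: start searching just after the last grant so no device starves.
        for offset in range(1, self.num_devices + 1):
            candidate = (self.last_granted + offset) % self.num_devices
            if candidate in requests:
                self.last_granted = candidate
                return candidate
        return None


# Example: devices 1 and 3 both want the bus; only one is granted per cycle.
arbiter = BusArbiter(num_devices=4)
print(arbiter.grant({1, 3}))  # -> 1
print(arbiter.grant({1, 3}))  # -> 3
```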

A true bus network is passive – the computers on the bus simply listen for a signal; they are not responsible for moving the signal along. However, many active architectures can also be described as a “bus”, as they provide the same logical functions as a passive bus; for example, switched Ethernet can still be regarded as a logical network, if not a physical one. Indeed, the hardware may be abstracted away completely in the case of a software bus.

With the dominance of switched Ethernet over passive Ethernet, passive bus networks are uncommon in wired networks. However, almost all current wireless networks can be viewed as examples of passive bus networks, with radio propagation serving as the shared passive medium.

The bus topology makes it straightforward to add new devices. In this type of network the attached clients are usually called stations or workstations. A bus topology uses a broadcast channel, which means that every attached station can hear every transmission and all stations have equal priority in using the network to transmit data.

The Ethernet bus topology works like a big telephone party line: before any device can send a packet, it must first determine that no other device is sending a packet on the cable. When a device sends its packet out over the bus, every other network card on the bus sees and reads the packet. Ethernet's scheme of having devices communicate as if they were in a chat room is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Sometimes two cards talk (send packets) at the same time. This creates a collision, and the cards themselves arbitrate to decide which one will resend its packet first. All PCs on a bus network share a common wire, which also means they share the data transfer capacity of that wire; in other words, they share its bandwidth.
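
The following Python sketch is a rough, slotted-time caricature of the CSMA/CD idea, not an accurate Ethernet model; the function name, slot granularity, and backoff limits are assumptions made for illustration. Each station transmits only when the bus appears free and its backoff has expired, and colliding stations pick a random backoff before retrying.

```python
import random

def csma_cd(stations, max_slots=1000):
    """Very simplified slotted CSMA/CD: each station has one frame to send.
    After a collision a station waits a random number of slots (binary
    exponential backoff) before trying again. Returns {station: slot sent}."""
    backoff = {s: 0 for s in stations}    # slots left to wait before retrying
    attempts = {s: 0 for s in stations}   # collisions this station has seen
    pending = set(stations)
    sent = {}
    for slot in range(max_slots):
        # Carrier sense: stations whose backoff has expired try to use the bus.
        ready = [s for s in pending if backoff[s] == 0]
        if len(ready) == 1:               # the bus is free for exactly one sender
            sent[ready[0]] = slot
            pending.discard(ready[0])
        elif len(ready) > 1:              # two or more senders: collision detected
            for s in ready:
                attempts[s] += 1
                backoff[s] = random.randint(0, 2 ** min(attempts[s], 10) - 1)
        for s in pending:                 # everyone still waiting counts down
            if backoff[s] > 0:
                backoff[s] -= 1
        if not pending:
            break
    return sent

print(csma_cd(["A", "B", "C"]))           # e.g. {'B': 1, 'A': 3, 'C': 4}
```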

This creates an interesting effect. Ten PCs chatting on a bus each get a much larger share of its total bandwidth than, say, 100 PCs on the same bus (one-tenth versus one-hundredth). The more PCs on a bus, the more likely a communication traffic jam becomes.

Advantages and disadvantages of a bus network

Advantages

* Easy to implement and extend.

* Easy to install.

* Well suited for temporary or small networks that do not require high speeds, since setup is quick.

* Cheaper than other topologies (although this advantage has become less important as devices such as switches have become inexpensive).

* Cost effective; only a single cable is used.

* Easy identification of cable faults.

* Reduced weight due to fewer wires.

Disadvantages

* Limited cable length and number of stations.

* If there is a problem with the cable, the entire network breaks down.

* Maintenance costs may be higher in the long run.

* Performance degrades as computers are added or under heavy traffic, because the bandwidth is shared.

* Proper termination is required at both ends of the cable to prevent signal reflections.

* Significant capacitive load: every bus transaction must drive the signal to the most distant device.

* Works best with a limited number of nodes.

* Commonly has a slower data transfer rate than other topologies.

* Only one packet can be on the bus at any given time.

Crossbar Networks

A crossbar switch network can be used to interconnect the functional units of a multiprocessor: p processors are connected to m memory modules and d I/O devices through a grid of crosspoint switches. Connecting p processors to m memory modules alone requires p × m crosspoint switches, so the switch count grows with the product of the two.

A separate path is available to each memory module: every processor is connected to each memory module through its own crosspoint switch. Hardware complexity obviously increases, but all processors can send memory requests independently and asynchronously.

Each crosspoint switch in a crossbar network can be set open or closed, providing a point-to-point connection path between a source and a destination. In each row of the crossbar mesh, multiple switches can be closed simultaneously, but in each column only one switch can be closed at a time.

A problem arises when multiple requests are destined for the same memory module at the same time. In that case only one of the requests is serviced at a time, since only one switch in each column of the crossbar mesh can be closed.

To resolve the contention for each memory module, each crosspoint switch must be designed with extra hardware. All the necessary switching and conflict-resolution logic is built into the crosspoint switch. An arbitration module makes the selection on the basis of priority, and acknowledge signals indicate the arbitration result to all requesting processors.
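
As a sketch of how the column rule and priority arbitration interact, the Python function below grants each memory module to at most one requesting processor per cycle and tells the losers to retry; its data structures and the fixed lowest-id-wins policy are assumptions for illustration, not a description of any particular machine.

```python
def crossbar_arbitrate(requests):
    """requests: dict mapping processor id -> memory module it wants this cycle.
    Each memory module (column) can serve only one processor, so conflicting
    requests are resolved by a fixed priority: the lowest processor id wins.
    Returns (granted, rejected): granted maps processor -> module, and
    rejected is the set of processors that must retry next cycle."""
    granted = {}
    owner = {}                     # memory module -> processor that won it
    for proc in sorted(requests):  # fixed priority: lower id = higher priority
        module = requests[proc]
        if module not in owner:    # column still free: close this crosspoint
            owner[module] = proc
            granted[proc] = module
    rejected = set(requests) - set(granted)   # these get a negative acknowledge
    return granted, rejected


# Processors 0 and 2 both want memory module M1; only one is serviced this cycle.
granted, rejected = crossbar_arbitrate({0: "M1", 1: "M3", 2: "M1"})
print(granted)   # {0: 'M1', 1: 'M3'}
print(rejected)  # {2}
```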

A multiplexer module multiplexes the data, address, and control signals from the processors. Furthermore, each crosspoint switch requires a large number of connecting lines to accommodate the address, data path, and control signals. Hence the hardware required to implement the switch can become quite large and complex.

The crossbar switch has the potential for the highest bandwidth and system efficiency. The maximum number of simultaneous transfers between processors and memory is limited by the number of memory modules and by the bandwidth and speed of the buses, rather than by the number of available paths. However, because of its complexity and cost, it may not be cost-effective for a large multiprocessor system.

A crossbar switch avoids competition for bandwidth by using N² switches to connect N inputs to N outputs (Figure 1); it is a single-stage network, so S = 1. Although highly nonscalable, crossbar switches are a popular mechanism for connecting small numbers of workstations, typically 20 or fewer. For example, the DEC GIGAswitch can connect up to 22 workstations. While larger crossbars can be constructed (for example, the Fujitsu VPP 500 uses a 224 × 224 crossbar to connect 224 processors), they are very expensive.
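
A quick back-of-the-envelope calculation, using roughly the port counts quoted above, shows how fast the crosspoint count grows and why large crossbars become costly:

```python
# Crosspoint count grows quadratically with the number of ports,
# which is why very large crossbars quickly become impractical.
for ports in (22, 224):                   # roughly the sizes quoted above
    print(ports, "ports ->", ports * ports, "crosspoints")
# 22 ports -> 484 crosspoints
# 224 ports -> 50176 crosspoints
```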

Go to Exercise # 1
