
Introductions to Ethernet and Device Driver Design

Ethernet

Ethernet is the most common LAN (Local Area Network) technology in use today. Ethernet was developed by Xerox in the 1970s and became popular after Digital Equipment Corporation and Intel joined Xerox in developing the Ethernet standard in 1980. Ethernet was officially accepted as IEEE standard 802.3 in 1985. The original Xerox Ethernet operated at 3 Mbps; Ethernet networks of up to 10 Gbps now exist.

Ethernet Cabling

The first Ethernet standard, 10Base-5, ran over thick coaxial cable. A later standard, Ethernet 10Base-2, ran over a much thinner coaxial cable. These two versions of Ethernet were colloquially known as thicknet and thinnet.

Modern Ethernet standards run on UTP (Unshielded Twisted Pair) or fiber-optic cabling.

Ethernet Standard    Cable Specification
10Base-T             Category 3 UTP
100Base-TX           Category 5 UTP
1000Base-T           Category 5e UTP
1000Base-SX          Optical fiber

Ethernet Topologies

Ethernet 10Base-5 and 10Base-2 used a bus topology. Bus topologies were difficult to maintain and troubleshoot.

Modern Ethernet networks use a star topology with an Ethernet hub, switch, or router at the center of the star.

It is still possible to create a two-node Ethernet network in a bus-like topology by connecting the two devices directly with a crossover (sometimes called “null”) Ethernet cable.


Posted by on March 5, 2011 in Topic 5

 

Linkers, Loaders and Libraries

Linkers

In computer science, a linker or link editor is a program that takes one or more objects generated by a compiler and combines them into a single executable program. In IBM mainframe environments such as OS/360 this program is known as a linkage editor.

On Unix variants the term loader is often used as a synonym for linker. Other terminology has been used as well: on SINTRAN III, for example, the process performed by a linker (assembling object files into a program) was called loading (as in loading executable code onto a file). Because this usage blurs the distinction between the compile-time process and the run-time process, this article uses linking for the former and loading for the latter. In some operating systems, however, the same program handles both linking and loading.
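As a small illustration of what the link step resolves, consider two C translation units, one of which references a symbol defined in the other (the file names, symbol names and build commands below are made up for the example):

/* main.c -- references a symbol defined in another translation unit */
#include <stdio.h>

extern int square(int x);          /* resolved by the linker, not the compiler */

int main(void)
{
    printf("%d\n", square(7));     /* prints 49 */
    return 0;
}

/* square.c -- a separate object file providing the definition */
int square(int x) { return x * x; }

/* Build steps (e.g. with gcc):
 *   cc -c main.c       -> main.o   (compile only; 'square' is still undefined)
 *   cc -c square.c     -> square.o
 *   cc main.o square.o -o demo     (the link step resolves 'square')
 */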

 

Posted by on March 5, 2011 in Topic 6

 

System Software Advanced: Assembly language and Assemblers

Assembly language

Assembly language, commonly called assembly or asm, is a human-readable notation for the machine language that a specific computer architecture uses. Machine language, a pattern of bits encoding machine operations, is made readable by replacing the raw values with symbols called mnemonics.

For example, a computer with the appropriate processor will understand this x86/IA-32 machine language:

10110000 01100001

For programmers, however, it is easier to remember the equivalent assembly language representation:

mov al, 061h

which means to move the hexadecimal value 61 (97 decimal) into the processor register named “al”. The mnemonic “mov” is short for “move”, and a comma-separated list of arguments or parameters follows it; this is a typical assembly language statement.
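An assembler’s core job is exactly this mnemonic-to-bit-pattern translation. As a toy sketch (not a real assembler), the C snippet below “assembles” the single instruction form mov al, imm8 by emitting its x86 opcode byte 0xB0 (binary 10110000) followed by the immediate operand:

#include <stdio.h>
#include <stdint.h>

/* Encode "mov al, imm8": opcode byte 0xB0, then the 8-bit immediate. */
static void emit_mov_al_imm8(uint8_t imm, uint8_t out[2])
{
    out[0] = 0xB0;   /* opcode for "mov al, imm8" (binary 10110000) */
    out[1] = imm;    /* the immediate operand                       */
}

int main(void)
{
    uint8_t code[2];
    emit_mov_al_imm8(0x61, code);              /* assemble: mov al, 061h */
    printf("%02X %02X\n", code[0], code[1]);   /* prints: B0 61          */
    return 0;
}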

 

Posted by on March 5, 2011 in Topic 6

 

Peripheral Device Architectures: Caching Disks and Disk Arrays

Peripheral Device Architectures

In computer hardware terms, peripheral devices are devices connected to a computer to extend what it can do; described more precisely, they are optional devices that are not required in principle. In the early days, when computers were very expensive and personal computers were hard to afford, the motherboard, CPU (Central Processing Unit) and memory (RAM (random access memory) and ROM (read only memory)) were considered the main components of the computer, and any other device attached to it was considered a peripheral. By that definition the keyboard, mouse and similar devices were peripherals, although nowadays they are generally not considered peripheral devices.

Common peripheral devices include microphones, cameras, disk drives and scanners. There is a common belief that internal devices such as sound cards are not peripherals because they are installed inside the computer case, but this is incorrect.

 

Posted by on March 5, 2011 in Topic 5

 

Hypercube Networks, Butterfly Network, Shuffle Exchanges and Fault Tolerant Designs

Hypercube Networks

A hypercube network is an n-dimensional network of 2^n nodes (n > 0, an integer), each node sitting at one vertex of the cube and having n links to neighbouring nodes. A number of processors are connected to the nodes by input/output links, so that the nodes and links provide communication paths between the processors. In one network control system for such a network, each node comprises a device for setting 2n different connection patterns corresponding to 2n phase signals, and a switching device that interconnects the links, and the input/output links, according to the connection pattern synchronized with the current phase signal.
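The defining property, that two nodes are linked exactly when their binary labels differ in a single bit, is easy to show in code. The sketch below (plain C, with an arbitrary choice of n = 3) lists every node of a 3-cube together with its n neighbours:

#include <stdio.h>

/* In an n-dimensional hypercube the 2^n nodes are labelled 0 .. 2^n - 1,
   and two nodes are linked exactly when their labels differ in one bit.
   Each node therefore has n neighbours, found by flipping each bit in turn. */
static void print_neighbours(unsigned node, unsigned n)
{
    printf("node %u:", node);
    for (unsigned d = 0; d < n; d++)
        printf(" %u", node ^ (1u << d));   /* flip dimension d */
    printf("\n");
}

int main(void)
{
    unsigned n = 3;                        /* 3-cube: 8 nodes */
    for (unsigned node = 0; node < (1u << n); node++)
        print_neighbours(node, n);
    return 0;
}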

Butterfly Network

In the butterfly network example there are two source nodes (at the top of the network), one knowing a value A and the other a value B. There are two destination nodes (at the bottom), each of which wants to learn both A and B. Each edge can carry only a single value (think of an edge as transmitting one bit per time slot).

If we used only routing, the central edge could carry A or B, but not both. Suppose we send A through the centre; then the left destination would receive A twice and never learn B, while sending B instead poses the same problem for the right destination. We say that routing is insufficient because no routing scheme can deliver both A and B to both destinations simultaneously.
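The standard resolution is network coding: the central edge carries the XOR of the two values, and each destination recovers the value it did not receive directly. A minimal sketch in C (the byte values are arbitrary placeholders):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t A = 0x5A;          /* value known only to the left source  */
    uint8_t B = 0xC3;          /* value known only to the right source */

    /* The single central edge carries A XOR B instead of choosing one. */
    uint8_t coded = A ^ B;

    /* The left destination receives A directly plus the coded value,
       and recovers B by XORing them; the right destination does the same. */
    uint8_t left_B  = A ^ coded;
    uint8_t right_A = B ^ coded;

    printf("left recovers  B = 0x%02X (expected 0x%02X)\n", left_B, B);
    printf("right recovers A = 0x%02X (expected 0x%02X)\n", right_A, A);
    return 0;
}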

 

Posted by on March 5, 2011 in Topic 4

 

Systolic Arrays, Vector Processors and FPGAs (Field-Programmable Gate Arrays)

Systolic arrays

In computer architecture, a systolic array is a pipe-network arrangement of processing units called cells. It is a specialized form of parallel computing in which the cells (i.e. processors) compute data and store it independently of each other.

A systolic array is composed of matrix-like rows of data processing units called cells. Data processing units (DPUs) are similar to central processing units (CPUs), except for the usual lack of a program counter, since operation is transport-triggered, i.e. triggered by the arrival of a data object. Each cell shares information with its neighbours immediately after processing. The systolic array is often rectangular, with data flowing across the array between neighbouring DPUs, often with different data flowing in different directions. The data streams entering and leaving the ports of the array are generated by auto-sequencing memory units (ASMs), each of which includes a data counter. In embedded systems a data stream may also be input from and/or output to an external source.

An example of a systolic algorithm might be designed for matrix multiplication. One matrix is fed in a row at a time from the top of the array and is passed down the array; the other matrix is fed in a column at a time from the left-hand side of the array and passes from left to right. Dummy values are then passed in until each processor has seen one whole row and one whole column. At this point, the result of the multiplication is stored in the array and can be output a row or a column at a time, flowing down or across the array.
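As a rough software sketch of this data flow (not a hardware description; the 3×3 size and matrix contents are arbitrary, and this version streams A's rows in from the left edge and B's columns in from the top, the mirror image of the description above), the following C program simulates the array cycle by cycle:

#include <stdio.h>

#define N 3

int main(void)
{
    int A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    int B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    int C[N][N] = {{0}};

    /* a_reg[i][j] / b_reg[i][j]: the A value (moving east) and the B value
       (moving south) held in cell (i, j) at the end of the current cycle. */
    int a_reg[N][N] = {{0}};
    int b_reg[N][N] = {{0}};

    /* 3N - 2 cycles are enough for every operand to reach the far corner. */
    for (int t = 0; t < 3 * N - 2; t++) {
        /* Sweep from the south-east corner to the north-west corner so each
           cell still reads its neighbours' values from the previous cycle. */
        for (int i = N - 1; i >= 0; i--) {
            for (int j = N - 1; j >= 0; j--) {
                /* West input: the neighbour's register, or the skewed stream
                   of row i of A on the west edge (row i delayed by i cycles). */
                int a_in = (j > 0) ? a_reg[i][j - 1]
                         : (t - i >= 0 && t - i < N) ? A[i][t - i] : 0;
                /* North input: the neighbour's register, or the skewed stream
                   of column j of B on the north edge. */
                int b_in = (i > 0) ? b_reg[i - 1][j]
                         : (t - j >= 0 && t - j < N) ? B[t - j][j] : 0;

                C[i][j] += a_in * b_in;   /* multiply-accumulate in the cell  */
                a_reg[i][j] = a_in;       /* pass the A value east next cycle  */
                b_reg[i][j] = b_in;       /* pass the B value south next cycle */
            }
        }
    }

    /* Print the product; it equals A * B computed the ordinary way. */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%4d", C[i][j]);
        printf("\n");
    }
    return 0;
}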

Systolic arrays are arrays of DPUs connected to a small number of nearest-neighbour DPUs in a mesh-like topology. DPUs perform a sequence of operations on data that flows between them. Because traditional systolic array synthesis methods are based on algebraic algorithms, only uniform arrays with linear pipes can be obtained, so the architecture is the same in all DPUs. The consequence is that only applications with regular data dependencies can be implemented on classical systolic arrays. Like SIMD machines, clocked systolic arrays compute in “lock-step”, with each processor alternating between compute and communicate phases. Systolic arrays with asynchronous handshaking between DPUs are instead called wavefront arrays. One well-known systolic array is Carnegie Mellon University’s iWarp processor, which was manufactured by Intel. An iWarp system has a linear array of processors connected by data buses going in both directions.

 

Posted by on March 5, 2011 in Topic 2

 

SIMD, MIMD and SISD

SIMD (Single Instruction/Multiple Data)

SIMD stands for Single Instruction Multiple Data. It is a way of packing N (usually a power of 2) like operations (e.g. 8 adds) into a single instruction. The data for the instruction operands is packed into registers capable of holding the extra data. The advantage of this format is that for the cost of doing a single instruction, N instructions worth of work are performed. This can translate into very large speedups for parallelizable algorithms.

Both the PowerPC and IA-32 architectures have SIMD vector extensions. On PowerPC the extension is called AltiVec. On IA-32 the vector extensions were introduced gradually, first as the Intel MultiMedia eXtensions (MMX) and later as the Intel Streaming SIMD Extensions (SSE, SSE2, SSE3). Common areas where SIMD yields very large speed improvements include 3-D graphics (Electric Image, games), image processing (Quartz, Photoshop filters), video processing (MPEG, MPEG-2, MPEG-4), theater-quality audio (Dolby AC-3, DTS, MP3) and high-performance scientific calculations. SIMD units are present on all G4, G5 and Pentium 3/4/M class processors.
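As a minimal sketch of the idea using the SSE intrinsics mentioned above (x86 only; the array contents are arbitrary), one SIMD instruction below performs four floating-point additions at once:

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: 128-bit registers holding 4 floats */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* Pack four floats into each 128-bit SSE register. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);

    /* A single instruction performs all four additions in parallel. */
    __m128 vc = _mm_add_ps(va, vb);
    _mm_storeu_ps(c, vc);

    for (int i = 0; i < 4; i++)
        printf("%.1f ", c[i]);   /* 11.0 22.0 33.0 44.0 */
    printf("\n");
    return 0;
}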

Why do we need SIMD?

SIMD offers greater flexibility and opportunities for better performance in video, audio and communications tasks, which are increasingly important in applications. SIMD provides a cornerstone for robust and powerful multimedia capabilities that significantly extend the scalar instruction set.

 

Posted by on March 5, 2011 in Topic 2

 
 