All Assignments' Questions and Answers of CO (Computer Organization) By Everything For Help

Computer Organization
Assignment 1
Q1. Explain the various types of CPU organization.

(i)              Single Accumulator Organization

In this type of CPU organization, the accumulator register is used implicitly for processing all instructions of a program, and the results are stored in the accumulator. The instruction format used by this CPU organization has one address field, so the CPU is also known as a one-address machine.
The main points about Single Accumulator based CPU Organization are:
1.    In this CPU organization, the first ALU operand is always in the accumulator and the second operand is either in a register or in memory.
2.    The accumulator is the implicit destination address, so after data manipulation the result is stored back into the accumulator.
3.    One-address instructions are used in this type of organization.

The instruction format is: Opcode + Address


Opcode indicates the type of operation to be performed.
Two main types of operation are performed in a single accumulator-based CPU organization:
1.    Data transfer operation –
In this type of operation, data is transferred from a source to a destination.
For example: LOAD X, STORE Y
Here LOAD is a memory read operation (data is transferred from memory to the accumulator) and STORE is a memory write operation (data is transferred from the accumulator to memory).


2.    ALU operation –
In this type of operation, an arithmetic operation is performed on the data.
For example: MULT X
where X is the address of the operand. The MULT instruction in this example performs the operation
AC <-- AC * M[X]
where AC is the accumulator and M[X] is the memory word at location X (see the sketch below).
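A minimal Python sketch of how such a one-address machine executes LOAD, MULT and STORE is given below; the memory contents, the symbolic addresses X, Y, Z and the sample program are illustrative assumptions, not part of any specific machine:

# Minimal sketch of a one-address (single accumulator) machine.
# The memory contents and the program below are illustrative assumptions.
memory = {"X": 5, "Y": 0, "Z": 3}     # named memory words for readability
AC = 0                                # the accumulator register

def execute(opcode, address):
    # Execute one one-address instruction; the accumulator is implicit.
    global AC
    if opcode == "LOAD":              # AC <-- M[address]
        AC = memory[address]
    elif opcode == "STORE":           # M[address] <-- AC
        memory[address] = AC
    elif opcode == "MULT":            # AC <-- AC * M[address]
        AC = AC * memory[address]

# Compute Y = X * Z using only one-address instructions.
for opcode, address in [("LOAD", "X"), ("MULT", "Z"), ("STORE", "Y")]:
    execute(opcode, address)
print(memory["Y"])                    # 15

Every instruction names only one address; the accumulator is the implicit second operand and the implicit destination, which is exactly why the instructions stay short.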
Advantages –
·         One of the operands is always held in the accumulator register. This results in short instructions and less memory space.
·         The instruction cycle takes less time because time is saved in fetching the instruction from memory.
Disadvantages –
·         When complex expressions are computed, the program size increases because many short instructions are needed to execute them, so memory usage increases.
·         As the number of instructions in a program increases, the execution time increases.




(ii)     General Register Organization

In this type of organization, the computer uses two or three address fields in its instruction format. Each address field may specify a general-purpose register or a memory word. If many CPU registers are available for heavily used variables and intermediate results, memory references can be avoided much of the time, greatly increasing program execution speed and reducing program size.
For example:
MULT R1, R2, R3
This is an arithmetic multiplication instruction written in assembly language. It uses three address fields: R1, R2 and R3. The meaning of this instruction is:
R1 <-- R2 * R3
This instruction can also be written using only two address fields as:
MULT R1, R2
In this instruction, the destination register is the same as one of the source registers. This means the operation performed is:
R1 <-- R1 * R2
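A short Python sketch contrasting the three-address and two-address forms is given below; the register names and initial values are illustrative assumptions:

# Sketch of three-address vs two-address register instructions.
# Register names and initial contents are illustrative assumptions.
registers = {"R1": 0, "R2": 4, "R3": 6}

def mult3(dest, src1, src2):
    # Three-address form: dest <-- src1 * src2
    registers[dest] = registers[src1] * registers[src2]

def mult2(dest, src):
    # Two-address form: the destination doubles as a source, dest <-- dest * src
    registers[dest] = registers[dest] * registers[src]

mult3("R1", "R2", "R3")               # R1 <-- R2 * R3, so R1 becomes 24
mult2("R1", "R2")                     # R1 <-- R1 * R2, so R1 becomes 96
print(registers["R1"])                # 96

The two-address form saves one address field per instruction at the cost of overwriting one source operand.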
The use of a large number of registers results in short programs with fewer instructions.
Some examples of general register-based CPU organization are the IBM 360 and the PDP-11.



The advantages of General register-based CPU organization –
·         The efficiency of the CPU increases, since a large number of registers is used in this organization.
·         Less memory space is used to store the program, since the instructions are written in a compact way.
The disadvantages of General register-based CPU organization –
·         Care should be taken to avoid unnecessary usage of registers, so compilers need to be more intelligent in this respect.
·         Since a large number of registers is used, extra cost is incurred in this organization.
General register CPU organization is of two types:
1.    Register-memory reference architecture (CPU with fewer registers) – In this organization, source 1 must be in a register, while source 2 can be either in a register or in memory. Two-address instruction formats are compatible with this organization.
2.    Register-register reference architecture (CPU with more registers) – In this organization, ALU operations are performed only on register data, so the operands must be in registers. After manipulation, the result is also placed in a register. Three-address instruction formats are compatible with this organization.



(iii)  Stack Organization
Computers that use stack-based CPU organization are built around a data structure called a stack. The stack is a list of data words accessed using the Last In First Out (LIFO) method. A register called the stack pointer (SP) stores the address of the topmost element of the stack. In this organization, ALU operations are performed on stack data, so both operands must be on the stack; after manipulation, the result is placed back on the stack.
The two main operations performed on the stack operands are push and pop. Both operations are performed from one end only, the top of the stack.

1.    Push –
This operation inserts one operand at the top of the stack and decrements the stack pointer register. The format of the PUSH instruction is:
PUSH
It inserts the data word at the specified address onto the top of the stack. It can be implemented as:
//decrement SP by 1
SP <-- SP - 1

//store the content of the specified memory address
//at the new top of the stack, i.e. at M[SP]
M[SP] <-- (memory address)
2.    Pop –
This operation removes one operand from the top of the stack and increments the stack pointer register. The format of the POP instruction is:
POP
It transfers the data word at the top of the stack to the specified address. It can be implemented as:
//transfer the content of the top of the stack, i.e. M[SP],
//into the specified memory location
(memory address) <-- M[SP]

//increment SP by 1
SP <-- SP + 1
Operation-type instructions do not need an address field in this CPU organization, because the operation is performed on the two operands at the top of the stack. For example:
SUB
This instruction contains only the opcode, with no address field. It pops the top two operands from the stack, subtracts them, and pushes the result back onto the top of the stack, as the sketch below shows.
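The following Python sketch models the conventions described above (PUSH decrements SP, POP increments SP) together with the zero-address SUB; the memory size and the operand values are illustrative assumptions:

# Sketch of stack-organized execution; memory size and operands are assumptions.
memory = [0] * 16
SP = len(memory)                      # stack grows downward from high memory

def push(value):
    global SP
    SP -= 1                           # SP <-- SP - 1
    memory[SP] = value                # M[SP] <-- value

def pop():
    global SP
    value = memory[SP]                # value <-- M[SP]
    SP += 1                           # SP <-- SP + 1
    return value

def sub():
    # Zero-address SUB: pop the top two operands, push their difference.
    b = pop()
    a = pop()
    push(a - b)

push(8)                               # PUSH 8
push(3)                               # PUSH 3
sub()                                 # SUB needs no address field
print(memory[SP])                     # 5, the result left on top of the stack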

The PDP-11, Intel’s 8085 and the HP 3000 are some examples of stack-organized computers.


The advantages of Stack based CPU organization –
·         Efficient computation of complex arithmetic expressions.
·         Execution of instructions is fast because the operand data are stored in consecutive memory locations.
·         The length of the instructions is short as they do not have an address field.


The disadvantages of Stack based CPU organization –
·         The size of the program increases.
Note: Stack-based CPU organization uses zero-address instructions.




Q2. Advantages and disadvantages of the hardwired control unit and the micro-programmed control unit.


Design of Control Unit
The Control Unit is classified into two major categories:
  1. Hardwired Control
  2. Micro-programmed Control
Hardwired Control
The hardwired control organization implements the control logic with gates, flip-flops, decoders, and other digital circuits.






  • A hardwired control unit consists of two decoders, a sequence counter, and a number of logic gates.
  • An instruction fetched from the memory unit is placed in the instruction register (IR).
  • The instruction register is divided into three parts: the I bit (bit 15), the operation code (bits 12 through 14), and the address field (bits 0 through 11).
  • The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
  • The outputs of the decoder are designated by the symbols D0 through D7.
  • Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.
  • The address bits 0 through 11 are applied to the control logic gates.
  • The sequence counter (SC) can count in binary from 0 through 15.
Micro-programmed Control
The micro-programmed control organization is implemented by using a programming approach.

In Micro-programmed Control, the micro-operations are performed by executing a program consisting of micro-instructions.






  • The Control memory address register specifies the address of the micro-instruction.
  • The Control memory is assumed to be a ROM, within which all control information is permanently stored.
  • The control register holds the micro-instruction fetched from the memory.
  • The micro-instruction contains a control word that specifies one or more micro-operations for the data processor.
  • While the micro-operations are being executed, the next address is computed in the next address generator circuit and then transferred into the control address register to read the next micro-instruction.
  • The next address generator is often referred to as a micro-program sequencer, as it determines the address sequence that is read from control memory.
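A conceptual Python sketch of this fetch-and-execute loop for micro-instructions is given below; the control memory contents, field layout and addresses are illustrative assumptions, not the format of any particular machine:

# Conceptual sketch of the micro-programmed control loop.
# Control memory contents and field layout are illustrative assumptions.
control_memory = {                    # address -> (control word, next address)
    0: ("FETCH:   MAR <- PC, read memory", 1),
    1: ("DECODE:  IR <- data register",    2),
    2: ("EXECUTE: issue micro-operations", 0),   # wrap back to FETCH
}

CAR = 0                               # control address register
for _ in range(6):                    # run a few micro-instruction cycles
    control_word, next_address = control_memory[CAR]   # load the control register
    print(control_word)               # the control word drives the data processor
    CAR = next_address                # the next address generator updates CAR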





Advantages of micro-programmed control unit

1. Simplifies the design of the control unit.
2. Cheaper to implement.
3. Less error-prone to implement.


Disadvantage of micro-programmed control unit

1. Slower than a hardwired control unit.


Advantages of hardwired control unit

1. Faster than a micro-programmed control unit.
2. Can be optimized to produce a fast mode of operation.

Disadvantages of hardwired control

1. The instruction set and the control logic are tied together directly by special circuits, which complicates the design.
2. Requires changes in wiring if the design has to be modified.




Assignment 2

Q1. Differentiate between direct mapping, associative mapping and set-associative mapping.

Direct Mapping: -

The simplest way of associating main memory blocks with cache blocks is the direct mapping technique. In this technique, block k of main memory maps into block (k modulo m) of the cache, where m is the total number of blocks in the cache. In the example considered here, the value of m is 128. In the direct mapping technique, a particular block of main memory can be transferred only to the one cache block given by the modulo function.
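A small worked example of the k modulo m rule, using the m = 128 cache blocks mentioned above, is shown below (the sample block numbers are arbitrary):

# Worked example of direct mapping: block k of main memory -> cache block k mod m.
m = 128                               # total number of cache blocks
for k in (0, 127, 128, 300):          # sample main memory block numbers
    print(f"main memory block {k} -> cache block {k % m}")
# Blocks 0 and 128 both map to cache block 0, so they contend for the same
# cache block; this contention is the main drawback of direct mapping.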







Associative Mapping Technique:


In the associative mapping technique, a main memory block can reside in any cache block position. In this case, the main memory address is divided into two groups: the low-order bits identify the location of a word within a block and the high-order bits identify the block.

In the example here, 11 bits are required to identify a main memory block when it is resident in the cache: the high-order 11 bits are used as TAG bits and the low-order 5 bits identify a word within the block. The TAG bits of an address received from the CPU must be compared with the TAG bits of each block of the cache to see whether the desired block is present.
In associative mapping, any block of main memory can go into any block of the cache, so it offers complete flexibility, and a proper replacement policy must be used to evict a block from the cache when the currently accessed block of main memory is not present. It may not be practical to use this complete flexibility because of the searching overhead: the TAG field of the main memory address has to be compared with the TAG field of every cache block. In this example, there are 128 blocks in the cache and the size of the TAG is 11 bits. The whole arrangement of the associative mapping technique is shown in the figure.







Block-Set-Associative Mapping Technique:



This mapping technique is intermediate between the previous two. Blocks of the cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set. Thus the flexibility of associative mapping is reduced from full freedom to a specific set of blocks. This also reduces the searching overhead, because the search is restricted to the blocks of one set instead of all the blocks of the cache. The contention problem of direct mapping is also eased by having a few choices for block placement.
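The placement rule can be sketched in Python as below; the block count and the 2-way associativity are illustrative assumptions rather than values taken from the figure:

# Sketch of 2-way set-associative placement (block counts are assumptions).
num_cache_blocks = 128
ways = 2                              # blocks per set
num_sets = num_cache_blocks // ways   # 64 sets

def candidate_blocks(k):
    # Main memory block k may occupy any block of set (k mod num_sets).
    s = k % num_sets
    return [s * ways + w for w in range(ways)]

print(candidate_blocks(0))            # [0, 1]
print(candidate_blocks(64))           # also [0, 1]: two choices per block,
                                      # instead of 1 (direct) or 128 (associative)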





Q2. Explain various modes of data transfer.


Modes of I/O Data Transfer

Data transfer between the central processing unit and I/O devices can be handled in three modes, which are given below:
1.    Programmed I/O
2.    Interrupt Initiated I/O
3.    Direct Memory Access

Programmed I/O

In programmed I/O, each data item transfer is initiated by an instruction written in the computer program.
Usually the program controls data transfer to and from the CPU and the peripheral. Transferring data under programmed I/O requires constant monitoring of the peripheral by the CPU.
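This constant monitoring amounts to a polling loop, sketched below in Python; the device object is a stand-in assumption, not a real driver interface:

# Sketch of programmed I/O: the CPU busy-waits (polls) on a status flag.
class Device:
    def __init__(self):
        self.ready = False            # status flag set by the device
        self.data = None

dev = Device()
dev.ready, dev.data = True, 0x42      # pretend the device has produced a byte

while not dev.ready:                  # the CPU can do nothing else meanwhile
    pass                              # busy-wait (polling)
word = dev.data                       # transfer one data item
dev.ready = False                     # acknowledge, ready for the next item
print(hex(word))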

Interrupt Initiated I/O

In the programmed I/O method, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process because it keeps the processor busy needlessly.
This problem can be overcome by using interrupt-initiated I/O. Here, when the interface determines that the peripheral is ready for data transfer, it generates an interrupt. After receiving the interrupt signal, the CPU stops the task it is processing, services the I/O transfer, and then returns to its previous task.
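A rough Python sketch of this idea is given below, using a thread as a stand-in for the device interface and an Event as a stand-in for the interrupt signal; these are simulation assumptions only:

# Sketch of interrupt-initiated I/O: the CPU works until the interface signals.
import threading
import time

interrupt = threading.Event()         # plays the role of the interrupt line
buffer = []

def interface():                      # the interface raises the interrupt
    time.sleep(0.1)                   # device becomes ready after some delay
    buffer.append(0x42)
    interrupt.set()

threading.Thread(target=interface).start()

useful_work = 0
while not interrupt.is_set():         # CPU keeps doing useful work meanwhile
    useful_work += 1

print("data:", hex(buffer[0]), "iterations of other work:", useful_work)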

Direct Memory Access

Removing the CPU from the path and letting the peripheral device manage the memory buses directly improves the speed of transfer. This technique is known as DMA.
Here the interface transfers data to and from memory over the memory bus, and a DMA controller manages the transfer of data between the peripherals and the memory unit.
Many hardware systems use DMA, such as disk drive controllers, graphics cards, network cards and sound cards. It is also used for intra-chip data transfer in multi-core processors. In DMA, the CPU initiates the transfer, performs other operations while the transfer is in progress, and receives an interrupt from the DMA controller when the transfer has been completed.



Assignment 3
Q1. What is pipelining? Explain various pipelining techniques.



What is Pipelining?

Pipelining is the process of feeding instructions to the processor through a pipeline. It allows storing and executing instructions in an orderly process and is also known as pipeline processing.
Pipelining is a technique in which multiple instructions are overlapped during execution. The pipeline is divided into stages and these stages are connected with one another to form a pipe-like structure. Instructions enter from one end and exit from the other end.
Pipelining increases the overall instruction throughput.
In a pipelined system, each segment consists of an input register followed by a combinational circuit. The register is used to hold data and the combinational circuit performs operations on it. The output of the combinational circuit is applied to the input register of the next segment.
A pipelined system is like a modern-day assembly line in a factory. For example, in a car manufacturing plant, huge assembly lines are set up with robotic arms that perform a certain task at each point, after which the car moves on to the next arm.

Types of Pipeline

It is divided into 2 categories:
1.    Arithmetic Pipeline
2.    Instruction Pipeline

Arithmetic Pipeline

Arithmetic pipelines are found in most computers. They are used for floating-point operations, multiplication of fixed-point numbers, etc. For example, the inputs to a floating-point adder pipeline are:
X = A*2^a
Y = B*2^b
Here A and B are mantissas (the significant digits of the floating-point numbers), while a and b are the exponents.
Floating-point addition and subtraction are done in 4 parts:
1.    Compare the exponents.
2.    Align the mantissas.
3.    Add or subtract the mantissas.
4.    Normalize and produce the result.
Registers are used for storing the intermediate results between the above operations.
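A worked Python sketch of these four parts is given below; the sample mantissas and exponents are assumptions chosen only to make each stage visible:

# Worked sketch of the four floating point adder stages for X = A*2^a, Y = B*2^b.
A, a = 0.9504, 3                      # X = 0.9504 * 2^3 = 7.6032
B, b = 0.8200, 1                      # Y = 0.8200 * 2^1 = 1.6400

diff = a - b                          # 1. compare the exponents
if diff > 0:                          # 2. align the mantissa of the smaller number
    B, b = B / (2 ** diff), a
else:
    A, a = A / (2 ** -diff), b
mantissa = A + B                      # 3. add (or subtract) the mantissas
while mantissa >= 1.0:                # 4. normalize to produce the result
    mantissa, a = mantissa / 2, a + 1

print(f"result = {mantissa} * 2^{a}") # about 0.5777 * 2^4 = 9.2432 = 7.6032 + 1.64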

Instruction Pipeline

Here a stream of instructions is executed by overlapping the fetch, decode and execute phases of the instruction cycle. This type of technique is used to increase the throughput of the computer system.
An instruction pipeline reads instructions from memory while previous instructions are being executed in other segments of the pipeline. Thus we can execute multiple instructions simultaneously. The pipeline is more efficient if the instruction cycle is divided into segments of equal duration.
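The benefit of equal-length segments can be seen from the standard timing estimate, sketched below; the stage count, instruction count and cycle time are illustrative assumptions:

# Sketch of pipeline timing: k equal stages, one instruction entering per cycle.
k = 3                                 # stages: fetch, decode, execute
n = 10                                # number of instructions
t = 1                                 # time per stage (one clock cycle)

non_pipelined = n * k * t             # each instruction runs all k stages alone
pipelined = (k + (n - 1)) * t         # first result after k cycles, then one per cycle

print(non_pipelined, pipelined)       # 30 versus 12 cycles
print("speedup:", non_pipelined / pipelined)   # 2.5, approaching k for large n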

Pipeline Conflicts

Some factors cause the pipeline to deviate from its normal performance. Some of these factors are given below:

1. Timing Variations

All stages cannot take the same amount of time. This problem generally occurs in instruction processing, where different instructions have different operand requirements and thus different processing times.


2. Data Hazards

When several instructions are in partial execution and they reference the same data, a problem arises. We must ensure that the next instruction does not attempt to access the data before the current instruction has finished with it, because this would lead to incorrect results.

3. Branching

In order to fetch and execute the next instruction, we must know what that instruction is. If the present instruction is a conditional branch whose result determines the next instruction, then the next instruction may not be known until the current one is processed.

4. Interrupts

Interrupts insert unwanted instructions into the instruction stream and thus affect the execution of instructions.


5. Data Dependency

It arises when an instruction depends upon the result of a previous instruction, but that result is not yet available.

Advantages of Pipelining

1.    The cycle time of the processor is reduced.
2.    It increases the throughput of the system.
3.    It makes the system reliable.

Disadvantages of Pipelining

1.    The design of pipelined processor is complex and costly to manufacture.
2.    The instruction latency increases.



Q2. Explain various types of multiprocessor.


Multiprocessor:

A multiprocessor is a computer system with two or more central processing units (CPUs) that share full access to a common RAM. The main objective of using a multiprocessor is to boost the system’s execution speed; other objectives are fault tolerance and application matching.

There are two types of multiprocessors: shared memory multiprocessors and distributed memory multiprocessors. In a shared memory multiprocessor, all the CPUs share a common memory, whereas in a distributed memory multiprocessor, every CPU has its own private memory.






Applications of Multiprocessor –

1.    As a uniprocessor, such as single instruction, single data stream (SISD).
2.    As a multiprocessor, such as single instruction, multiple data stream (SIMD), which is usually used for vector processing.
3.    Multiple series of instructions in a single perspective, such as multiple instruction, single data stream (MISD), which is used to describe hyper-threaded or pipelined processors.
4.    Inside a single system for executing multiple, individual series of instructions in multiple perspectives, such as multiple instruction, multiple data stream (MIMD).



Benefits of using a Multiprocessor –


·         Enhanced performance.
·         Multiple applications.
·         Multi-tasking inside an application.
·         High throughput and responsiveness.
·         Hardware sharing among CPUs.


Types of Multiprocessors

There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors. Details about them are as follows:

1.    Symmetric Multiprocessors

In these types of systems, each processor runs an identical copy of the operating system and the processors communicate with each other. All the processors are in a peer-to-peer relationship, i.e. no master-slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for the Multimax Computer.

2.    Asymmetric Multiprocessors

In asymmetric systems, each processor is given a predefined task. There is a master processor that gives instructions to all the other processors, so an asymmetric multiprocessor system contains a master-slave relationship.
The asymmetric multiprocessor was the only type of multiprocessor available before symmetric multiprocessors were developed. Even now, it is the cheaper option.

 

 

Advantages of Multiprocessor Systems

There are multiple advantages to multiprocessor systems. Some of these are:
More reliable Systems
In a multiprocessor system, even if one processor fails, the system will not halt. This ability to continue working despite hardware failure is known as graceful degradation. For example, if there are 5 processors in a multiprocessor system and one of them fails, the remaining 4 processors keep working. The system only becomes slower and does not grind to a halt.
Enhanced Throughput
If multiple processors work in tandem, the throughput of the system increases, i.e. the number of processes executed per unit of time increases. If there are N processors, the throughput increases by a factor just under N.
More Economic Systems
Multiprocessor systems are cheaper than multiple single-processor systems in the long run because they share data storage, peripheral devices, power supplies, etc. If multiple processes share data, it is better to schedule them on a multiprocessor system with shared data than to have different computer systems each holding a copy of the data.

 

 

Disadvantages of Multiprocessor Systems

There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer systems, they are still quite expensive. It is much cheaper to buy a simple single-processor system than a multiprocessor system.
Complicated Operating System Required
The processors in a multiprocessor system share peripherals, memory, etc., so it is much more complicated to schedule processes and allocate resources to them than in single-processor systems. Hence, a more complex operating system is required in multiprocessor systems.
Large Main Memory Required
All the processors in a multiprocessor system share the memory, so a much larger pool of memory is required than in single-processor systems.




2. Multi-computer:

A multicomputer system is a computer system with multiple processors that are connected together to solve a problem. Each processor has its own memory, which is accessible only by that particular processor, and the processors communicate with each other via an interconnection network.










As a multicomputer is capable of message passing between its processors, a task can be divided among the processors to complete it. Hence, a multicomputer can be used for distributed computing. It is more cost-effective and easier to build a multicomputer than a multiprocessor.


Difference between multiprocessor and Multicomputer:

1.    A multiprocessor is a system with two or more central processing units (CPUs) that is capable of performing multiple tasks, whereas a multicomputer is a system with multiple processors attached via an interconnection network to perform a computation task.
2.    A multiprocessor system is a single computer that operates with multiple CPUs, whereas a multicomputer system is a cluster of computers that operate as a single computer.
3.    Construction of a multicomputer is easier and more cost-effective than that of a multiprocessor.
4.    In a multiprocessor system, programming tends to be easier, whereas in a multicomputer system, programming tends to be more difficult.
5.    A multiprocessor supports parallel computing, whereas a multicomputer supports distributed computing.
