Explain the working of a Hard disk?

A hard disk consists of a number of platters, which are circular discs on which data is recorded. Data is recorded in circular tracks on these platters. Except for the topmost and bottom-most platters, all platters have data recorded on both sides; on the topmost and bottom-most platters, data is recorded on only one side. The spindle rotates the platters, and the head assembly can move linearly. The head assembly places a head over the appropriate track, and the rotation of the disk brings the appropriate sector of that track under the head. The read or write on that sector is then performed.
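The position of a sector is traditionally given as a cylinder/head/sector (CHS) triple. A minimal sketch of converting such an address to a linear sector number, assuming a hypothetical geometry of 16 heads and 63 sectors per track:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Map a cylinder/head/sector address to a linear block address.

    Sectors are numbered from 1 within a track; cylinders and heads from 0.
    """
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Hypothetical geometry: 16 heads, 63 sectors per track.
print(chs_to_lba(0, 0, 1, 16, 63))  # 0    -- the very first sector
print(chs_to_lba(1, 0, 1, 16, 63))  # 1008 -- first sector of cylinder 1
```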

How does CPU execute program instructions?

The CPU executes instructions that reside in primary memory. The various steps are:

1. The address of the instruction, which is in a register called the program counter (PC), is placed in another register called the memory address register (MAR).

2. Memory receives the address and selects the location where the instruction is stored.

3. The CPU sends a read signal; on receiving it, the memory sends the instruction through the data bus to the processor.

4. The instruction sent by the memory is stored inside the CPU in a register called the instruction register (IR).

5. The CPU decodes the instruction in the instruction register to determine what operation is to be performed. If any data is to be fetched from memory as an operand, the CPU does so: the data's address is sent through the MAR to memory, a read signal is given, and the data from memory is placed in the memory data register (MDR).

6. Once the data required for executing the instruction is available, the CPU performs the operation specified by the instruction. The result generated is stored, either in a register or in memory. If the result is to be stored in memory, the address of the memory location is placed in the MAR, the data is placed in the MDR, and the CPU sends a write signal; the data in the MDR is then entered in the selected memory location. If the result is to be stored in a register, this is done internally.

7. Once an instruction is executed, the PC is incremented by the length of the instruction so that the PC contains the address of the next instruction. The CPU can now fetch and execute the next instruction.
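The steps above can be sketched as a toy fetch-decode-execute loop. The instruction set, addresses and data here are invented purely for illustration; real instruction sets are far richer:

```python
# Toy memory: addresses 0-3 hold instructions, 100-102 hold data.
memory = {
    0: ("LOAD", 100),   # ACC <- memory[100]
    1: ("ADD", 101),    # ACC <- ACC + memory[101]
    2: ("STORE", 102),  # memory[102] <- ACC
    3: ("HALT", None),
    100: 7, 101: 5,
}

pc, acc = 0, 0
while True:
    mar = pc                  # step 1: PC -> MAR
    ir = memory[mar]          # steps 2-4: memory read -> IR
    pc += 1                   # step 7: PC now points at the next instruction
    op, addr = ir             # step 5: decode
    if op == "LOAD":
        acc = memory[addr]    # operand fetch via MAR/MDR
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc    # step 6: result written back through MDR
    elif op == "HALT":
        break

print(memory[102])  # 12, i.e. 7 + 5
```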

Discuss briefly the role of secondary storage?

Secondary storage is used to store data and programs permanently; data is not lost when the power goes off. It is also used for backup. Since we cannot type in data and programs every time they are needed, we require a permanent memory from which they can be recalled on demand. Secondary storage serves that purpose.

Explain the difference between primary and secondary computer memory?

Primary memory is the memory from which the CPU fetches instructions and executes them. It usually refers to semiconductor memory. Secondary memories are mass storage devices that are used to store data and programs permanently. Before programs on the hard disk are executed, they are copied to primary memory.

Explain the purpose of following DOS commands: C: DIR MD CD COPY DEL

C: Makes the C drive the current drive.

DIR: Displays a directory listing of all files and directories under the current directory.

MD: Make directory command is used to create a new directory. To create a new directory called ABC, we write MD ABC.

CD: The change directory command is used to change the current working directory. By giving an absolute path name after typing CD, we can move to that directory.

COPY: Copies the contents of one file to another file or to a set of files.

DEL: Deletes a file. DEL F1 will delete file F1 from the current working directory; DOS asks the user for confirmation only for wildcard deletions such as DEL *.*.
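For readers more familiar with scripting, rough Python analogues of these DOS commands behave as follows (run here in a throwaway scratch directory so nothing real is touched; file names are arbitrary examples):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()       # scratch area for the demonstration
os.chdir(root)                  # C:  -- make a drive/directory current
os.mkdir("ABC")                 # MD ABC
os.chdir("ABC")                 # CD ABC
with open("F1", "w") as f:      # create a small file to copy
    f.write("hello")
shutil.copy("F1", "F2")         # COPY F1 F2
print(sorted(os.listdir(".")))  # DIR  -> ['F1', 'F2']
os.remove("F1")                 # DEL F1
print(sorted(os.listdir(".")))  # DIR  -> ['F2']
```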

Explain the stages of compilation for a C Compiler?

Usually programs are stored on a disk. In standard integrated development environments (IDEs), compiling a program only requires clicking the compile button provided by the IDE. Where an IDE is not provided, the compiler may have to be explicitly invoked at the command prompt; for example, on a UNIX system we type cc filename.c at the shell prompt. This causes the compiler to be loaded into primary memory and executed. When the compiler runs, it takes the given C file and translates it, using two or more passes. The following activities are performed by a compiler:

1. Lexical analysis: In this phase the compiler recognizes the basic elements of the program and creates a table of uniform symbols.

2. Syntax analysis: In this process basic syntactic constructs are recognized by compiler and their validity is checked.

3. Interpretation: In this phase, the exact meanings of syntactic units are determined and the compiler creates certain databases required for translation. This includes the creation of tables.

4. Machine-independent optimization: In this phase, optimization of the databases is performed. This results in removing redundancies during the code-generation step.

5. Storage assignment: In this phase storage space is assigned for code, data etc.

6. Assembly and output: In this phase, based on the databases it created earlier, the compiler generates machine code and resolves all symbolic addresses.
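Lexical analysis, the first phase above, can be illustrated with a toy tokenizer. The token classes here are a hypothetical, simplified subset of what a real C lexer recognizes:

```python
import re

# Each token class is a named regular expression; whitespace is skipped.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
PATTERN = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)

def tokenize(source):
    """Split a C-like expression into (token_class, text) pairs."""
    return [(m.lastgroup, m.group())
            for m in re.finditer(PATTERN, source)
            if m.lastgroup != "SKIP"]

print(tokenize("total = count + 42"))
```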

What do you mean by Office Automation?

An office automation system (OAS) is an information system that supports the wide range of business office activities and provides for improved work flow between workers. An OAS helps employees create and share documents that support day-to-day office activities. It consists of computer systems, trained operators and software systems that help in creating and sharing documents. Most office automation systems include databases of employees, activities of the organization and information related to those who interact directly with the organization. Standard formats of forms, applications etc. are stored in computers, and related information is generated at the click of a button. Modern office automation systems usually use computers connected to networks so that the activities of various departments can be co-ordinated. Biometric attendance-marking systems, sophisticated I/O devices and storage media that record every activity automatically are also part of a modern office automation system.

What is an algorithm and flow-chart?

An algorithm is a detailed sequence of simple steps that are needed to solve a problem.

A flow chart is a graphical representation of an algorithm.
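For example, the algorithm "find the largest of a list of numbers" can be written out directly; each statement below would correspond to one box of the equivalent flow chart:

```python
def largest(numbers):
    """Algorithm: assume the first item is the largest, then scan the
    rest, replacing the current best whenever a bigger item is seen."""
    best = numbers[0]          # start: take the first number
    for n in numbers[1:]:      # loop over the remaining numbers
        if n > best:           # decision box: is this number bigger?
            best = n           # process box: remember the new largest
    return best                # stop: output the result

print(largest([3, 9, 4, 1]))  # 9
```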

What is a file system and an i-node?

A file system is a method of storing and organising computer files and their data. It organises the files for storage, organisation, manipulation and retrieval by the computer's operating system.

An i-node is a data structure that contains information about a file. Each file has an associated i-node, which is created at the time the file is created, and is identified by an i-node number. I-nodes store information about files such as user and group ownership, access mode, type of file, and pointers to the file's data blocks.
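On a POSIX system, the i-node fields described above can be inspected from Python through os.stat(); st_ino is the i-node number (the file used here is a throwaway temporary file):

```python
import os
import stat
import tempfile

# Create a scratch file so there is something to inspect.
fd, path = tempfile.mkstemp()
os.close(fd)

info = os.stat(path)
print("i-node number:", info.st_ino)
print("owner uid / group gid:", info.st_uid, "/", info.st_gid)
print("regular file:", stat.S_ISREG(info.st_mode))
print("access mode:", oct(stat.S_IMODE(info.st_mode)))

os.remove(path)
```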

How does an operating system protect memory?

User programs are brought into memory before execution. Memory is partitioned, and each partition has a base address and a size. The base address is stored in a relocation register before execution of the program, the size of the program is checked against the allowable size, and the addresses generated by the program are continuously monitored with the help of a limit register. If an address generated by the program exceeds the limit register contents, a trap is generated and further execution of the program is not allowed. Operating systems also have other protection mechanisms, such as access-rights bits that implement read/write/execute access. By default only the owner of a file may be allowed to execute it; if the owner grants permission to group members and others, they can execute the file as well. This is done by the owner setting the file's access-rights bits.
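A minimal sketch of the base/limit check described above (the partition base and size used here are hypothetical):

```python
class MemoryProtectionError(Exception):
    """Raised in place of the hardware trap described above."""

def translate(logical_addr, base, limit):
    # A logical address is legal only if it is below the partition size
    # (limit register); the physical address is base + logical address.
    if logical_addr >= limit:
        raise MemoryProtectionError("trap: address beyond partition size")
    return base + logical_addr

# Hypothetical partition: loaded at address 4000, 1200 locations long.
print(translate(100, base=4000, limit=1200))   # 4100
try:
    translate(1500, base=4000, limit=1200)
except MemoryProtectionError as err:
    print(err)                                 # the "trap"
```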

List six major steps that one can take in setting up a database for a particular enterprise?

The six major steps involved in setting up a database for an enterprise are:

1. Requirement Analysis
2. Conceptual database design
3. Logical database design
4. Schema Refinement
5. Physical database design
6. Security design

Explain the difference between the logical and physical data independence?

Data independence is the capacity to change the schema at one level of a database system without having to change the schema at the next higher level. Physical data independence refers to the ability to modify the internal schema without having to change the conceptual or external schemas; the application programs remain the same even though the schema at the physical level is modified. Modifications at the physical level are occasionally necessary to improve the performance of the system.

Logical data independence refers to the ability to modify the conceptual schema without having to change external schema or applications programs. The logical data independence ensures that the application programs remain the same. Modifications at the conceptual level are necessary whenever logical structures of the database get modified because of some unavoidable reasons.

List five significant differences between a file processing system and a DBMS.

1). A database management system allows access to many tables at a time. A file management system allows access to a single file at a time. File systems accommodate flat files that have no relation to other files.

2). A database coordinates the physical and logical access to the data. A file processing system only coordinates physical access to the data.

3). A DBMS reduces the amount of data duplication. Files often have redundant or duplicate data items.

4). A DBMS is designed to allow flexibility using queries that gives access to the data. A file processing system only allows pre-determined access to data.

5). A DBMS is designed to co-ordinate and permit multiple users to access data at the same time. A file processing system is much more restrictive in simultaneous data access.

What are linkers and loaders?

A loader is a program that places programs into main memory for execution. The assembler or compiler outputs the machine-language translation of a program to a secondary storage device, and the loader places this machine language into memory. The loader is much smaller than an assembler or compiler, so more memory is available for user programs: at the time of executing user programs, the assembler or compiler need not be in memory.

A typical user program consists of a number of modules, and a function may be called from another function. Until the modules are loaded into main memory, the addresses of the various functions cannot be determined. The purpose of the linker is to link the modules by placing the proper address of each function at its calling points in other functions. After the linker links the modules, they are ready to be executed.
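A toy two-pass link step can make this concrete. The module names, sizes and symbols below are invented for illustration:

```python
# Each "module" exports functions at offsets within itself and calls
# other functions by name; linking assigns load addresses and then
# patches every call with the callee's absolute address.
modules = {
    "main": {"size": 40, "exports": {"main": 0},    "calls": ["helper"]},
    "util": {"size": 30, "exports": {"helper": 10}, "calls": []},
}

# Pass 1: assign load addresses and build the global symbol table.
address, symbols = 0, {}
for name, mod in modules.items():
    mod["load"] = address
    for sym, offset in mod["exports"].items():
        symbols[sym] = address + offset
    address += mod["size"]

# Pass 2: resolve each call to an absolute address.
for mod in modules.values():
    mod["resolved"] = {callee: symbols[callee] for callee in mod["calls"]}

print(symbols)                       # {'main': 0, 'helper': 50}
print(modules["main"]["resolved"])   # {'helper': 50}
```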

What is an Interrupt in computer system?

An interrupt is a signal to the CPU that causes it to suspend its normal sequence of execution, branch to a predetermined memory location and execute the program located there. After executing that program, the CPU returns to the program it was initially executing and resumes it. When an interrupt occurs:

1. The CPU completes the current instruction it is executing.

2. The program counter (PC) contents are saved on a stack.

3. The PC is loaded with the address of the interrupt service subroutine.

4. The interrupt service subroutine is executed.

5. The CPU returns to the main program and resumes its execution by loading into the PC the address that was earlier saved on the stack.
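These steps can be sketched in a few lines; the ISR address and PC value below are arbitrary examples:

```python
# The "CPU" saves the PC on a stack, runs the service routine, then
# restores the PC and resumes the interrupted program.
stack = []
trace = []
pc = 7            # address of the next main-program instruction

def interrupt(isr_address):
    global pc
    stack.append(pc)                      # step 2: save PC on the stack
    pc = isr_address                      # step 3: load PC with ISR address
    trace.append(f"ISR at {pc} runs")     # step 4: execute the service routine
    pc = stack.pop()                      # step 5: restore PC and resume

interrupt(isr_address=200)
print(trace)  # ['ISR at 200 runs']
print(pc)     # 7 -- back where the main program left off
```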

Explain the important aspects in which the Windows operating system enhances the MS-DOS operating system.

The Windows operating system provides the following enhancements over the old MS-DOS operating system:

1. GUI capability: Windows is graphical user interface based system and it is user friendly.
2. It provides single-user multitasking. The user can open a number of windows and, through each, execute a different task on a time-sharing basis.

Windows contains a cache manager that improves the performance of file-based I/O by keeping recently referenced disk data in main memory for quick access, and by deferring disk writes, holding updates in memory for a short time before sending them to the disk. The Windows security reference monitor enforces better security than DOS. Windows also supports threads and symmetric multiprocessing.

What is a time-shared operating system? Explain it briefly.

In a time-sharing operating system, multiple users simultaneously access the system through terminals, with the operating system interleaving the execution of each user program in a short burst or quantum of computation. A time-sharing system gives each user the impression that the computer is dedicated to him, because human response is slow compared to the computer.

The types of events that lead to each state transition for a process, and the possible transitions, are:

1. New: A new process is created to execute a program. This happens when a user submits a job or an interactive user gives a command through a terminal.

2. Ready: The OS will move a process from the new state to the Ready state when it is prepared to take on an additional process.

3. Running: When it is time to select a new process to run, the OS chooses one of the processes in the ready state. This is the job of the scheduler.

4. Exit: The currently running process is terminated by the operating system when it has completed or if it aborts. Termination can be due to many reasons other than completion of program execution, like arithmetic error, I/O failure, protection error, bounds violation etc.

5. Ready: When a running process reaches its allotted time limit, that is, when its allotted quantum of execution is over, it is moved back to the ready list, where it remains until its turn to run arrives again.
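The Ready/Running rotation above is exactly what a round-robin scheduler implements. A sketch, with hypothetical process burst times and quantum:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Time-sharing sketch: each process runs for at most `quantum`
    units, then moves back to the ready queue if it is unfinished."""
    ready = deque(burst_times.items())   # the Ready list
    order = []                           # record of who got the CPU
    while ready:
        name, remaining = ready.popleft()
        order.append(name)               # process enters the Running state
        remaining -= quantum             # its quantum is consumed
        if remaining > 0:
            ready.append((name, remaining))  # back to Ready
        # else: the process moves to the Exit state
    return order

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
```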

An operating system is often defined as a resource manager. Explain which resources of a computer it manages and how it manages them.

An operating system can be considered a resource manager: it is software that manages the resources available in a computer system. An operating system can be viewed as a collection of program modules, each with a distinct function, accomplishing fully or partially the task of managing some computer resource. There are four major resources; the resources and the software that accomplishes the task of managing each are given below:

The memory management module partitions memory into pages and implements concepts like virtual memory, cache memory etc.

The scheduler decides which process is to be scheduled for execution, and for how much time in a time-sharing system.

Some devices are inherently non-sharable. Operating systems manage these types of devices using a technique called spooling.

One of the most important tasks of an operating system is the effective management of information. The modules of the operating system dealing with the management of information are collectively called the file system. The purpose of the file system is to free the programmer from problems related to the allocation of space for his information, as well as from other physical problems such as storage format and I/O access. Information is stored in files, and files are organized in the form of a tree structure in which each internal node is a special file called a directory; directories contain information about other files.

How does TCP ensure reliable transfer of packets?

TCP uses an acknowledgement mechanism to check the safe and sound arrival of data. TCP's error control includes mechanisms for detecting corrupted segments, lost segments and out-of-order segments. TCP achieves this using three tools: checksum, acknowledgement and time-out.

Each segment includes a checksum field, which is used to check for a corrupted segment. TCP uses a 16-bit checksum that is mandatory in every segment. The checksum is computed at the source machine and attached to the segment. The destination machine recomputes the checksum; if the checksum in the segment and the recomputed checksum are the same, the segment is accepted as valid, otherwise it is discarded and treated as lost.

TCP uses acknowledgements to confirm the receipt of segments. When a segment is corrupted, lost or delayed, it is re-transmitted. TCP maintains a re-transmission time-out (RTO) for all outstanding segments; when the timer runs out, the earliest outstanding segment is re-transmitted. The value of the RTO is dynamic and is updated based on the round-trip time (RTT) of segments, where the RTT is the time needed for a segment to reach the destination and for an acknowledgement to be received.
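The checksum described above is the standard Internet one's-complement checksum. A simplified sketch (real TCP also covers a pseudo-header containing the source and destination IP addresses):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum of the kind TCP uses."""
    if len(data) % 2:
        data += b"\x00"                  # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF               # one's complement of the sum

segment = b"hello world"
good = internet_checksum(segment)
print(internet_checksum(b"hello world") == good)  # True: segment intact
print(internet_checksum(b"hellp world") == good)  # False: corruption detected
```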

What are the functions of TCP and IP?

TCP/IP is a protocol suite that defines the exchange of transmissions across the Internet. It is a five-layer protocol suite. TCP is the transport-layer protocol: a connection-oriented, reliable transport protocol. TCP allows the sending process to deliver data as a stream of bytes and allows the receiving process to obtain data as a stream of bytes; it creates an environment in which the two processes seem to be connected by an imaginary tube that carries their data across the Internet. TCP offers full-duplex service, in which data can flow in both directions at the same time. TCP is reliable: it uses an acknowledgement mechanism to check the safe and sound arrival of data. TCP uses flow control, whereby the receiver of the data controls the amount of data the sender may send. TCP also provides error control and congestion control.

The Internetworking Protocol (IP) is the transmission mechanism used by the TCP/IP protocols. It is an unreliable, connectionless protocol used by the internet layer of the five-layer TCP/IP protocol suite. IP transports data in packets called datagrams, each of which is transported separately. Datagrams can travel along different routes and can arrive out of sequence or be duplicated. IP does not keep track of the routes and has no facility for reordering datagrams once they arrive at their destination. IP uses four supporting protocols:

ARP: Address Resolution Protocol
RARP: Reverse Address Resolution Protocol
ICMP: Internet Control Message Protocol
IGMP: Internet Group Management Protocol

ARP is used to associate a logical address with a physical address.

RARP allows a host to discover its Internet address when it knows only its physical address.

ICMP is a mechanism used by hosts and gateways to send notification of datagram problems back to the sender.

IGMP is used to facilitate the simultaneous transmission of a message to a group of recipients.