Tuesday, January 15, 2008
-The major difference between deadlock, starvation, and race is this: in a deadlock, the problem occurs while jobs are being processed; two or more jobs each hold a resource and wait for a resource held by the other, so none of them can ever finish. In starvation, the allocation of resources repeatedly favors other jobs, so one job is prevented from ever executing even though it could. A race occurs before the outcome of the processes is decided: two processes access shared data at the same time, and the result depends on which one happens to run first.
2.Give some "real life" examples (not related to a computer system environment) of deadlock, starvation, and race.
Example of Deadlock: Two people try to buy the single remaining unit of a product at the same time, and neither will give way, so the sale can never be completed.
Example of Starvation: One student borrows a pen from a classmate, but the classmate keeps taking it back, so the borrower never gets to finish writing.
Example of Race: Two guys are courting the same girl; whoever reaches her first wins, so the outcome depends entirely on timing.
3.Select one example of deadlock from exercise 2 and list the four conditions needed for the deadlock.
-Using the two-buyers, one-product example, the four necessary conditions are:
-mutual exclusion: only one buyer can hold the single product at a time;
-resource holding: each buyer holds something (the product, or the cashier's attention) while waiting for what the other holds;
-no preemption: neither buyer will give up what he already holds, especially if both urgently need the product, both are brand conscious, and no alternative product is available;
-circular wait: each buyer waits for something the other holds, so neither can finish the purchase.
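To connect this to code: a minimal sketch (the thread and lock names are my own invention) of two "buyers" who both need the same two resources. If each grabbed one resource and waited for the other, all four conditions would hold; acquiring the resources in one fixed global order breaks circular wait, so both always finish.

```python
import threading

# Two shoppers (threads) each need the last product and the cashier.
# Acquiring both in the same fixed order removes the circular-wait
# condition, so deadlock cannot occur.
product = threading.Lock()
cashier = threading.Lock()

def shopper(name, results):
    # Fixed order: always product first, then cashier.
    with product:
        with cashier:
            results.append(name)

results = []
threads = [threading.Thread(target=shopper, args=(n, results))
           for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # both shoppers completed their purchase
```

Reversing the acquisition order in only one of the two threads would recreate the circular-wait condition and could hang the program.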
4.Suppose the narrow staircase (used as an example in the beginning of this chapter) has become a major source of aggravation. Design an algorithm for using it so that neither deadlock nor starvation is possible.
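One possible answer, sketched as a "take a number" scheme (the class and method names are my own): only one person is on the staircase at a time, which rules out deadlock, and people are served strictly in ticket order, which rules out starvation.

```python
import itertools
import threading

class Staircase:
    """One person at a time, served in strict first-come order."""
    def __init__(self):
        self._ticket = itertools.count()   # next number to hand out
        self._serving = 0                  # number currently allowed in
        self._cond = threading.Condition()

    def enter(self):
        with self._cond:
            my_ticket = next(self._ticket)
            # Wait until the "display" shows my number.
            while self._serving != my_ticket:
                self._cond.wait()

    def leave(self):
        with self._cond:
            self._serving += 1             # admit the next ticket holder
            self._cond.notify_all()

order = []
stairs = Staircase()

def climber(name):
    stairs.enter()
    order.append(name)   # on the staircase alone
    stairs.leave()

threads = [threading.Thread(target=climber, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(order))  # prints 5: every climber eventually got through
```

Because each ticket number is eventually served, no climber can wait forever; because nobody holds the staircase while waiting for anything else, deadlock is impossible.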
5.Figure 5.16 shows a tunnel going through a mountain and two streets parallel to each other, one at each entrance/exit of the tunnel. Traffic lights are located at each end of the tunnel to control the crossflow of traffic through each intersection. Based on the figure, answer the following questions.
a.Can deadlock occur?How can it happen and under what circumstances?
-Deadlock should not occur while the two traffic lights are obeyed, because only one direction holds the tunnel at a time. But if some motorists ignore the lights and enter from both ends at once, deadlock can occur, because there is only one tunnel to drive through and neither line of cars can move forward or back.
b.How can deadlock be detected?
-Deadlock can be detected when traffic backs up bumper to bumper at both ends of the tunnel and no car inside can move in either direction.
c.Give a solution to prevent deadlock but watch out for starvation.
-The solution to prevent deadlock is to keep the traffic lights strictly alternating, so that only one direction is ever green, and to bound the length of each green phase so the other direction is guaranteed its turn. Motorists must obey the lights; the bounded alternation then prevents deadlock without starving either direction.
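The idea of strictly alternating, bounded green phases can be shown in a toy simulation (the car names and the cars-per-green limit are invented): each direction is served every other phase, so neither starves, and only one direction ever occupies the tunnel, so circular wait, and therefore deadlock, cannot arise.

```python
from collections import deque

def run_tunnel(east, west, cars_per_green=3):
    """Alternate the green light, admitting at most cars_per_green
    cars per phase, until both queues are empty."""
    queues = {"east": deque(east), "west": deque(west)}
    passed = []
    side = "east"
    while queues["east"] or queues["west"]:
        for _ in range(cars_per_green):
            if queues[side]:
                passed.append(queues[side].popleft())
        side = "west" if side == "east" else "east"  # bounded phase ends
    return passed

print(run_tunnel(["e1", "e2", "e3", "e4"], ["w1", "w2"]))
# -> ['e1', 'e2', 'e3', 'w1', 'w2', 'e4']
```

Note that e4 waits one phase but is guaranteed to go: the bound on each green phase is what converts "the other side might wait forever" into "the other side waits at most one phase".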
Thursday, December 13, 2007
-There is currently one user, and he is not doing anything, yet the drive shows constant activity. There is adequate disk space, and the machine is not an HTTP, NIS, or NFS server. The machine reaches this state of "thrashing" after several weeks of uptime, with no apparent change in load from the few users who work on it.
2.How does the operating system detect thrashing?
-The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming.
3.Once it detects thrashing, what can the system do to eliminate this problem?
-It can be eliminated by reducing the level of multiprogramming.
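The detect-and-reduce idea from questions 2 and 3 can be illustrated with a toy model (every number in it is invented): when the admitted jobs outnumber the page frames available to them, utilization collapses because every job spends its time waiting on the disk; admitting fewer jobs restores it.

```python
def cpu_utilization(jobs, total_frames, frames_needed=4):
    """Toy model: a job needs frames_needed page frames to run
    without constant page faults."""
    frames_each = total_frames // jobs
    if frames_each >= frames_needed:
        return min(1.0, jobs * 0.2)  # CPU-bound, little paging
    return 0.05                      # thrashing: mostly waiting on disk

before = cpu_utilization(jobs=10, total_frames=20)  # 2 frames each
after = cpu_utilization(jobs=4, total_frames=20)    # 5 frames each
print(before, after)  # utilization jumps once multiprogramming drops
```

The point of the model is the shape of the curve, not the numbers: raising the level of multiprogramming past the memory's capacity makes utilization fall, which is exactly the signal the operating system watches for.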
1.Explain the following:
a.Multiprogramming. Why is it used?
-Multiprogramming is the interleaved execution of two or more programs by a computer, in which the central processing unit executes a few instructions from each program in succession. It is used to keep the CPU busy: while one program waits for I/O, the CPU can execute instructions from another.
b.Internal Fragmentation.How does it occur?
-Internal fragmentation occurs when the file system cannot store a file at a fine enough granularity to use all of the space allocated to it, so the unused remainder of each fixed-size unit is wasted. This coarse granularity is accepted because it allows increased performance or simplicity when implementing the file system. On most computer systems internal fragmentation is not usually considered a problem, because eliminating it would bring only small performance benefits while requiring changes to the file system that could lead to a host of compatibility issues.
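The waste is easy to compute: a request is rounded up to whole fixed-size units, and the unused tail of the last unit is the internal fragmentation (the 4096-byte block size below is just an assumption):

```python
def internal_fragmentation(size_bytes, block_size=4096):
    """Bytes wasted when size_bytes is stored in whole blocks."""
    blocks = -(-size_bytes // block_size)   # ceiling division
    return blocks * block_size - size_bytes

print(internal_fragmentation(10_000))  # 10000 B needs 3 blocks -> 2288 wasted
print(internal_fragmentation(8192))    # exact fit in 2 blocks  -> 0 wasted
```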
c.External Fragmentation. How does it occur?
-External fragmentation occurs in the free space on the hard drive after data has been removed. Ideally, free space would be kept contiguous, so that new data could be written contiguously and without fragmentation. However, when segments of data are removed at arbitrary places on the storage medium, the remaining free space is automatically fragmented. Ideally, a storage medium would shuffle files into that free space and gather the freed space back into one contiguous "heap", but this is time-consuming, as defragmentation software shows when it spends significant amounts of time on a heavily fragmented disk.
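A small sketch of why the scattered holes hurt (the hole sizes are invented, and first-fit is just one common placement policy): a request smaller than the total free space can still fail because no single hole is big enough.

```python
def first_fit(holes, request):
    """Return the index of the first hole that fits, or None."""
    for i, hole in enumerate(holes):
        if hole >= request:
            return i
    return None

holes = [30, 10, 25, 15]      # four scattered holes, in KB
print(sum(holes))             # 80 KB free in total...
print(first_fit(holes, 50))   # ...yet a 50 KB request fails: None
print(first_fit(holes, 20))   # a 20 KB request fits in hole 0
```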
d.Compaction. Why is it needed?
-The term is used in several senses. In a data center, compaction is the reduction or consolidation of hardware to make better use of physical floor space. Although the goal of compaction is to be cost-effective and maximize real estate, increased hardware compaction puts more demands on power consumption and cooling requirements, two major cost elements in maintaining a large data center. In storage area management (SAM), compaction is the automatic removal of expired data from a storage area network (SAN) to condense the existing archive and make room for new data.
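In the memory-management sense of the term, compaction slides the allocated blocks to one end of memory so the scattered holes merge into a single free region; it is needed because otherwise external fragmentation leaves free memory that no single job can use. A toy sketch (the memory layout is invented):

```python
def compact(memory):
    """Slide allocated cells to the front; merge holes at the end.
    Cells are job names; None marks a free cell."""
    used = [cell for cell in memory if cell is not None]
    free = len(memory) - len(used)
    return used + [None] * free

memory = ["A", None, "B", None, None, "C"]
print(compact(memory))  # ['A', 'B', 'C', None, None, None]
```

After compaction the three one-cell holes have become one three-cell hole, so a job needing three contiguous cells can now be loaded; the cost is the time spent moving jobs and relocating their addresses.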
e. Relocation. How often should it be performed?
-Relocation is the moving of a job in memory together with the adjustment of its addresses so that it runs correctly at its new location. It should be performed as seldom as possible, typically only during compaction, when fragmentation has grown to the point that waiting jobs cannot be loaded, because every relocation costs CPU time to move the job and readjust its addresses.
2.Describe the major disadvantages for each of the four memory allocation scheme.
-Each of the four memory allocation schemes presented has its own major disadvantage: the single-user contiguous scheme runs only one job at a time and wastes the rest of storage; fixed partitions hold only one job per partition and waste storage through internal fragmentation; dynamic partitions suffer external fragmentation as jobs finish; and relocatable dynamic partitions must pay the overhead of compaction.
3.Describe the major advantages for each of the memory allocation schemes presented.
-The major advantage they share is that they are easy to manage and implement: each job occupies one contiguous block of memory, so allocation and deallocation are straightforward.
Wednesday, December 12, 2007
VIRTUAL MEMORY IN WINDOWS
Memory paging occurs when the memory required by the processes running on a server exceeds the amount of physical memory installed. Windows, like most other operating systems, employs virtual memory techniques that allow applications to address more memory than is physically available. This is achieved by setting aside a portion of disk for paging. This area, known as the paging file, is used by the operating system to page portions of physical memory contents to disk, freeing up physical memory for the applications that need it at a given time. The combination of the paging file and the physical memory is known as virtual memory, which is managed in Windows by the virtual memory manager (VMM).
Physical memory can be accessed far faster than disk, so every time a server moves data between physical memory and disk it introduces a significant delay. While some degree of paging is normal on servers, excessive, consistent memory paging activity is referred to as thrashing and can have a very debilitating effect on overall system performance. Thus, it is always desirable to minimize paging activity, and ideally servers should be designed with sufficient physical memory to keep paging to an absolute minimum.
The paging file, or pagefile, in Windows is PAGEFILE.SYS. Virtual memory settings are configured via the System control panel. To configure the page file size:
1. Open the System Control Panel.
2. Select the Advanced tab.
3. Within the Performance frame, click the Settings button.
4. Select the Advanced tab.
5. Click the Change button.
The window shown in Figure 1-1 will appear. Windows Server 2003 has several options for configuring the page file that previous versions of Windows did not, including letting the system manage the size of the page file, or having no page file at all.
If you let Windows manage the size, it will create a pagefile of a size equal to physical memory + 1MB. This is the minimum amount of space required to create a memory dump in the event the server encounters a STOP event (blue screen).
Figure 1-1: Virtual memory settings
A pagefile can be created on each individual volume on a server, up to a maximum of sixteen pagefiles and a maximum of 4 GB per pagefile. This allows for a maximum total pagefile size of 64 GB. The total of all pagefiles on all volumes is managed and used by the operating system as one large pagefile. When the pagefile is split between smaller pagefiles on separate volumes, the virtual memory manager optimizes the workload by selecting the least busy disk, based on internal algorithms, each time it needs to write to the pagefile. This ensures the best possible performance for a multiple-volume pagefile. Although it is not best practice, and contrary to what most documentation suggests, it is also possible to create multiple pagefiles on the same volume by placing them in different folders on that volume; this change is carried out by editing the system registry rather than via the standard GUI interface.
Virtual Memory in UNIX
Most corporations have UNIX systems for handling heavy-duty applications. Microsoft Windows 2000 has been rapidly gaining ground because it provides good performance at lower cost, but companies are not going to replace UNIX with Windows 2000, since they have invested too much in their UNIX systems over the years. So many companies are choosing to add Windows 2000 to support departmental functions. Because it is expensive and inefficient to run two separate systems side by side, network and IT managers need to learn how to integrate Windows 2000 with their existing UNIX systems.
Wednesday, November 21, 2007
It discusses the uses of operating systems and gives their definitions.
2. Reasons a regional bank might decide to buy six server computers rather than one supercomputer:
-they are easier to use and maintain
-they can perform well for the bank's workload, and if one server fails the others can keep working