Tutorial Week 5
Questions
- Given that disks can stream data quite fast (1 block in tens of microseconds), why are average access times for a block in milliseconds?
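  As a starting point, the sketch below adds up assumed, purely illustrative figures for seek time, rotational latency, and transfer time for one 512-byte block (the numbers are not from any particular drive):

  ```c
  #include <stdio.h>

  /* Assumed, illustrative figures for a 7200 RPM disk. */
  int main(void)
  {
      double seek_ms     = 8.0;                       /* average seek time              */
      double rotation_ms = 0.5 * 60000.0 / 7200.0;    /* half a revolution, ~4.17 ms    */
      double transfer_ms = 512.0 / 150e6 * 1000.0;    /* 512 B at 150 MB/s, ~0.003 ms   */

      printf("seek %.2f ms + rotation %.2f ms + transfer %.4f ms = %.2f ms\n",
             seek_ms, rotation_ms, transfer_ms,
             seek_ms + rotation_ms + transfer_ms);
      return 0;
  }
  ```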
- Consider a file currently consisting of 100 records of 400 bytes. The filesystem uses fixed blocking, i.e. one 400-byte record is stored per 512-byte block. Assume that the file control block (and, in the case of indexed allocation, the index block) is already in memory. Calculate how many disk I/O operations are required under the contiguous, linked, and indexed (single-level) allocation strategies when, for one record, each of the following conditions holds. In the contiguous-allocation case, assume that there is no room to grow at the beginning of the file, but there is room to grow at the end. Assume that the record information to be added is stored in memory. (A sketch of the per-file metadata for each scheme follows the list below.)
- The record is added at the beginning.
- The record is added in the middle.
- The record is added at the end.
- The record is removed from the beginning.
- The record is removed from the middle.
- The record is removed from the end.
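  For orientation only, here is one possible shape of the per-file metadata under each scheme; the struct and field names are hypothetical and not taken from any particular filesystem.

  ```c
  /* Hypothetical per-file metadata for the three allocation schemes.
   * Names are illustrative only, not from a real filesystem.          */

  /* Contiguous: the FCB records a start block and a length. */
  struct fcb_contiguous {
      unsigned start_block;
      unsigned n_blocks;
  };

  /* Linked: each data block carries a pointer to the next block;
   * the FCB only needs the head (and optionally the tail).       */
  struct fcb_linked {
      unsigned first_block;
      unsigned last_block;
  };

  /* Indexed (single level): the FCB points to one index block that
   * lists the block number of every data block in file order.      */
  struct fcb_indexed {
      unsigned index_block;       /* block holding the array below          */
  };
  struct index_block {
      unsigned data_block[128];   /* 512-byte block / 4-byte entries = 128  */
  };
  ```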
- In the previous example, only 400 bytes of each 512-byte block are used. Is this wasted space due to internal or external fragmentation?
- Old versions of UNIX allowed you to write to directories. Newer ones do not allow even the superuser to write to them. Why? Note that many UNIX variants still allow you to read directories.
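  The restriction is visible from user space: on a modern Linux/POSIX system, opening a directory for writing fails with EISDIR, even as root, while opening it read-only succeeds. A quick, illustrative check:

  ```c
  #include <stdio.h>
  #include <fcntl.h>
  #include <errno.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      /* Opening a directory read-only is allowed... */
      int rd = open("/tmp", O_RDONLY);
      printf("open(\"/tmp\", O_RDONLY) -> %d\n", rd);
      if (rd >= 0)
          close(rd);

      /* ...but opening it for writing is rejected, even for the superuser. */
      int wr = open("/tmp", O_WRONLY);
      if (wr < 0)
          printf("open(\"/tmp\", O_WRONLY) failed: %s\n", strerror(errno));
      return 0;
  }
  ```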
- Given a file that varies in size from 4 KiB to 4 MiB, which of the three allocation schemes (contiguous, linked-list, or i-node based) would be suitable to store such a file? If the file is accessed randomly, how would that influence the suitability of the three schemes?
- Why is there a VFS layer in UNIX?
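  One way to picture what the VFS buys you: each concrete filesystem supplies a table of operations behind a common interface, so the system-call layer never names ext4, NFS, and so on directly. A minimal, hypothetical sketch (not the real Linux structures, which are much richer):

  ```c
  /* Hypothetical VFS-style operations table; real kernels (e.g. Linux's
   * struct file_operations) contain many more entries.                  */
  #include <stddef.h>
  #include <sys/types.h>

  struct vfs_file;    /* opaque per-open-file handle owned by the VFS layer */

  struct vfs_ops {
      int     (*open)(struct vfs_file *f, const char *path, int flags);
      ssize_t (*read)(struct vfs_file *f, void *buf, size_t len, off_t off);
      ssize_t (*write)(struct vfs_file *f, const void *buf, size_t len, off_t off);
      int     (*close)(struct vfs_file *f);
  };

  /* Each filesystem registers its own table; the syscall layer dispatches
   * through the table without knowing which filesystem is underneath.     */
  ```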
- How does the choice of block size affect file system performance? You should consider both sequential and random access.
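  To make the trade-off concrete, the sketch below estimates (with assumed, illustrative figures) the time to read a 1 MiB file when every block costs a full seek plus rotational delay plus transfer, i.e. the blocks are scattered as in random placement; contiguously placed data read sequentially would avoid most of the per-block seeks.

  ```c
  #include <stdio.h>

  int main(void)
  {
      /* Assumed, illustrative figures. */
      const double seek_ms = 8.0, rotation_ms = 4.17;   /* per-I/O overhead */
      const double bandwidth_mb_per_s = 150.0;          /* transfer rate    */
      const double file_kib = 1024.0;                   /* 1 MiB file       */

      for (int block_kib = 1; block_kib <= 64; block_kib *= 2) {
          double n_io        = file_kib / block_kib;    /* I/Os needed      */
          double transfer_ms = (block_kib / 1024.0) / bandwidth_mb_per_s * 1000.0;
          double total_ms    = n_io * (seek_ms + rotation_ms + transfer_ms);
          printf("block %2d KiB: %6.0f I/Os, ~%7.1f ms total\n",
                 block_kib, n_io, total_ms);
      }
      return 0;
  }
  ```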
- Is the open() system call in UNIX essential? What would be the consequence of not having it?
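  To see what is at stake, compare the usual stateful sequence with a hypothetical stateless call (pread_by_name is invented for this sketch and is not a real system call) in which every operation must carry the path and offset, so the kernel would have to resolve the name on each request.

  ```c
  #include <unistd.h>
  #include <fcntl.h>
  #include <sys/types.h>

  /* Usual UNIX style: resolve the path once, then read via a descriptor. */
  static ssize_t read_with_open(const char *path, void *buf, size_t len, off_t off)
  {
      int fd = open(path, O_RDONLY);
      if (fd < 0)
          return -1;
      ssize_t n = pread(fd, buf, len, off);
      close(fd);
      return n;
  }

  /* Hypothetical alternative with no open(): every call names the file
   * and the offset, forcing a path lookup on each request.             */
  ssize_t pread_by_name(const char *path, void *buf, size_t len, off_t off);
  ```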
- Some operating systems provide a rename system call to give a file a new name. How would this differ from the approach of simply copying the file to a new name and then deleting the original file?
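  A sketch of the two approaches follows; the copy variant is deliberately simplified (no error handling, fixed buffer, metadata not preserved) just to highlight that it moves every byte of data, whereas rename(2) only updates directory entries.

  ```c
  #include <stdio.h>
  #include <unistd.h>
  #include <fcntl.h>

  /* Approach 1: rename(2) rewrites directory entries only; no data moves. */
  static int move_by_rename(const char *oldp, const char *newp)
  {
      return rename(oldp, newp);
  }

  /* Approach 2: copy every byte, then unlink the original (simplified:
   * no error handling; owner, permissions, timestamps not preserved).   */
  static int move_by_copy(const char *oldp, const char *newp)
  {
      char buf[4096];
      int in  = open(oldp, O_RDONLY);
      int out = open(newp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      ssize_t n;
      while ((n = read(in, buf, sizeof buf)) > 0)
          write(out, buf, (size_t)n);
      close(in);
      close(out);
      return unlink(oldp);
  }
  ```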
- In both UNIX and Windows, random file access is performed via a special system call that moves the current position in the file, so that a subsequent read or write is performed from the new position. What would be the consequence of not having such a call? How could random access be supported by alternative means?
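  For reference, here is a hedged sketch of the two common ways to express random access in UNIX: an explicit lseek(2) followed by read(2), or the combined pread(2) call that takes the offset directly on every request.

  ```c
  #include <unistd.h>
  #include <sys/types.h>

  /* Random access via the "current position": move it, then read. */
  static ssize_t read_at_with_lseek(int fd, void *buf, size_t len, off_t off)
  {
      if (lseek(fd, off, SEEK_SET) == (off_t)-1)
          return -1;
      return read(fd, buf, len);
  }

  /* Alternative that needs no position-moving call: pass the offset
   * with every request (this is what pread(2) already provides).    */
  static ssize_t read_at_with_pread(int fd, void *buf, size_t len, off_t off)
  {
      return pread(fd, buf, len, off);
  }
  ```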