bug-bash

Re: bug when 'cd ..' to a directory whose parent has been deleted


From: Linda Walsh
Subject: Re: bug when 'cd ..' to a directory whose parent has been deleted
Date: Tue, 09 Feb 2016 13:21:23 -0800
User-agent: Thunderbird



Andreas Schwab wrote:
Linda Walsh <bash@tlinx.org> writes:

You can't open a file handle on a file.  The fd comes from some
                                    ^^^^
OS call associating it with the file on disk (or some other connection).
                                  ^^^^
You have to decide which sentence is true.
---
It depends on how you are defining "file".  If you go by a standard
definition, like Wikipedia's:

"A computer file is a resource for storing information, which is available to a computer program and is usually based on some kind of durable storage".

On Linux and Unix, a fundamental difference from many other OSes is that the data is referenced in a file-system-specific manner, but in such a way that the OS can always return a "handle", or file descriptor, to the file.

Processes that hold the "handle" can read or alter the information on disk unless it has been physically overwritten (which is why, in security-sensitive applications, files are not simply "deleted" but are overwritten with multiple passes of random garbage before the space is returned to the "free-disk-space pool").
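
GNU coreutils' "shred" does roughly that -- a quick sketch (the filename
is just an example):

    $ shred -n 3 -z -u secret.txt   # 3 random passes, a final zero pass, then unlink

(On journaling or copy-on-write filesystems the overwrite may not reach
the original blocks, so treat this only as a sketch of the idea.)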

On Unix, you can have multiple names for the same "data-collection" on disk.
Each is a separate name (a pathname) which points to the same data.  When you
open a pathname, you get back an OS handle to an internal OS structure that
fully describes where and how to access the data on disk.  This is the
"in-memory" version of the disk inode that identifies the data.  So whether
you have multiple pathnames pointing to an inode, or processes holding "file
descriptors" on it, the actual data isn't released until all of those
references are released.  Deleting a pathname only removes that directory
entry pointing to the data; as long as the OS still holds "in-memory"
references to it, the data is not freed.  Of special note -- this isn't true
on Windows, which locks the on-disk data associated with a pathname so it
cannot be removed while some process has a "writable" handle open to it.
That is often what forces the "Windows reboot" solution, so that every
process releases its write access to the data.
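
You can see this from a shell -- a rough sketch (the filename and fd
number are just examples):

    $ echo "some data" > /tmp/demo.txt
    $ exec 3< /tmp/demo.txt        # open fd 3 on the file
    $ rm /tmp/demo.txt             # unlink the only pathname
    $ ls /tmp/demo.txt
    ls: cannot access '/tmp/demo.txt': No such file or directory
    $ cat <&3                      # the data is still readable via the fd
    some data
    $ exec 3<&-                    # closing the last reference frees the data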

The Linux method makes data storage and sharing easier for users -- it even
lets them update executables they own while those executables are in use.
On Windows, a file's content cannot be updated as long as some process holds
a Windows-style file handle open on it in shared-read mode (the mode used to
read an executable from disk).
On Linux, when a program executes, the parts of it that are marked for
execution can be read from disk only as the program touches them, on demand.
Much work has gone into preventing "race" conditions on Linux -- where one
process is able to write to a file's data area on disk while another process
is executing it.  Done with malevolent intent, that can be a security flaw.
All of this matters because long-lived processes can hold executables open
for a long time, either to execute them or to page code in from them.
Note: many modern CPUs protect against this problem by defaulting to
disabling write access to memory areas that are being executed or are
paged in directly from disk.
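
You can see such a reference directly in /proc -- a sketch (the PID and
pathnames here are just examples):

    $ cp /bin/sleep /tmp/mysleep
    $ /tmp/mysleep 300 &
    [1] 12345
    $ rm /tmp/mysleep              # the pathname is gone...
    $ readlink /proc/12345/exe     # ...but the running process still holds the inode
    /tmp/mysleep (deleted)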

That's why I wrote the "deleted_procs" script -- after a SUSE update, old
versions can continue running for weeks or months unless the program is
restarted (zypper has a similar option to find this out: "zypper ps").
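
(Not the actual deleted_procs script, but a rough sketch of the idea,
for anyone curious:)

    # list processes whose executable has been deleted out from under them
    for exe in /proc/[0-9]*/exe; do
        target=$(readlink "$exe" 2>/dev/null) || continue
        case $target in
            *' (deleted)') echo "${exe%/exe} -> $target" ;;
        esac
    done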

Linda.

Note -- this is using the Wikipedia definition of "file".  If you have
some other agenda, and want to create confusion by using different
terminology, all bets are off.





