	The text below describes the locking rules for VFS-related methods.
It is (believed to be) up-to-date. *Please*, if you change anything in
prototypes or locking protocols - update this file. And update the relevant
instances in the tree, don't leave that to maintainers of filesystems/devices/
etc. At the very least, put the list of dubious cases at the end of this file.
Don't turn it into a log - maintainers of out-of-the-tree code are supposed to
be able to use diff(1).
	Thing currently missing here: socket operations. Alexey?

--------------------------- dentry_operations --------------------------
prototypes:
	int (*d_revalidate)(struct dentry *, unsigned int);
	int (*d_weak_revalidate)(struct dentry *, unsigned int);
	int (*d_hash)(const struct dentry *, const struct inode *,
			struct qstr *);
	int (*d_compare)(const struct dentry *, const struct inode *,
			const struct dentry *, const struct inode *,
			unsigned int, const char *, const struct qstr *);
	int (*d_delete)(struct dentry *);
	void (*d_release)(struct dentry *);
	void (*d_prune)(struct dentry *);
	void (*d_iput)(struct dentry *, struct inode *);
	char *(*d_dname)(struct dentry *dentry, char *buffer, int buflen);
	struct vfsmount *(*d_automount)(struct path *path);
	int (*d_manage)(struct dentry *, bool);

locking rules:
			rename_lock	->d_lock	may block	rcu-walk
d_revalidate:		no		no		yes (ref-walk)	maybe
d_weak_revalidate:	no		no		yes		no
d_hash:			no		no		no		maybe
d_compare:		yes		no		no		maybe
d_delete:		no		yes		no		no
d_release:		no		no		yes		no
d_prune:		no		yes		no		no
d_iput:			no		no		yes		no
d_dname:		no		no		no		no
d_automount:		no		no		yes		no
d_manage:		no		no		yes (ref-walk)	maybe

--------------------------- inode_operations ---------------------------
prototypes:
	int (*create) (struct inode *,struct dentry *,umode_t, bool);
	struct dentry * (*lookup) (struct inode *,struct dentry *, unsigned int);
	int (*link) (struct dentry *,struct inode *,struct dentry *);
	int (*unlink) (struct inode *,struct dentry *);
	int (*symlink) (struct inode *,struct dentry *,const char *);
	int (*mkdir) (struct inode *,struct dentry *,umode_t);
	int (*rmdir) (struct inode *,struct dentry *);
	int (*mknod) (struct inode *,struct dentry *,umode_t,dev_t);
	int (*rename) (struct inode *, struct dentry *,
			struct inode *, struct dentry *);
	int (*readlink) (struct dentry *, char __user *,int);
	void * (*follow_link) (struct dentry *, struct nameidata *);
	void (*put_link) (struct dentry *, struct nameidata *, void *);
	void (*truncate) (struct inode *);
	int (*permission) (struct inode *, int, unsigned int);
	int (*get_acl)(struct inode *, int);
	int (*setattr) (struct dentry *, struct iattr *);
	int (*getattr) (struct vfsmount *, struct dentry *, struct kstat *);
	int (*setxattr) (struct dentry *, const char *,const void *,size_t,int);
	ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
	ssize_t (*listxattr) (struct dentry *, char *, size_t);
	int (*removexattr) (struct dentry *, const char *);
	int (*fiemap)(struct inode *, struct fiemap_extent_info *, u64 start, u64 len);
	void (*update_time)(struct inode *, struct timespec *, int);
	int (*atomic_open)(struct inode *, struct dentry *,
				struct file *, unsigned open_flag,
				umode_t create_mode, int *opened);

locking rules:
	all may block
		i_mutex(inode)
lookup:		yes
create:		yes
link:		yes (both)
mknod:		yes
symlink:	yes
mkdir:		yes
unlink:		yes (both)
rmdir:		yes (both)	(see below)
rename:		yes (all)	(see below)
readlink:	no
follow_link:	no
put_link:	no
setattr:	yes
permission:	no (may not block if called in rcu-walk mode)
get_acl:	no
getattr:	no
setxattr:	yes
getxattr:	no
listxattr:	no
removexattr:	yes
fiemap:		no
update_time:	no
atomic_open:	yes

	Additionally, ->rmdir(), ->unlink() and ->rename() have ->i_mutex on
victim.
	cross-directory ->rename() has (per-superblock) ->s_vfs_rename_sem.

See Documentation/filesystems/directory-locking for more detailed discussion
of the locking scheme for directory operations.

--------------------------- super_operations ---------------------------
prototypes:
	struct inode *(*alloc_inode)(struct super_block *sb);
	void (*destroy_inode)(struct inode *);
	void (*dirty_inode) (struct inode *, int flags);
	int (*write_inode) (struct inode *, struct writeback_control *wbc);
	int (*drop_inode) (struct inode *);
	void (*evict_inode) (struct inode *);
	void (*put_super) (struct super_block *);
	int (*sync_fs)(struct super_block *sb, int wait);
	int (*freeze_fs) (struct super_block *);
	int (*unfreeze_fs) (struct super_block *);
	int (*statfs) (struct dentry *, struct kstatfs *);
	int (*remount_fs) (struct super_block *, int *, char *);
	void (*umount_begin) (struct super_block *);
	int (*show_options)(struct seq_file *, struct dentry *);
	ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
	ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
	int (*bdev_try_to_free_page)(struct super_block*, struct page*, gfp_t);

locking rules:
	All may block [not true, see below]
			s_umount
alloc_inode:
destroy_inode:
dirty_inode:
write_inode:
drop_inode:				!!!inode->i_lock!!!
evict_inode:
put_super:		write
sync_fs:		read
freeze_fs:		write
unfreeze_fs:		write
statfs:			maybe(read)	(see below)
remount_fs:		write
umount_begin:		no
show_options:		no		(namespace_sem)
quota_read:		no		(see below)
quota_write:		no		(see below)
bdev_try_to_free_page:	no		(see below)

->statfs() has s_umount (shared) when called by ustat(2) (native or
compat), but that's an accident of bad API; s_umount is used to pin
the superblock down when we only have a dev_t given to us by userland to
identify the superblock.  Everything else (statfs(), fstatfs(), etc.)
doesn't hold it when calling ->statfs() - the superblock is pinned down
by resolving the pathname passed to the syscall.
->quota_read() and ->quota_write() functions are both guaranteed to
be the only ones operating on the quota file by the quota code (via
dqio_sem) (unless an admin really wants to screw something up and
writes to quota files with quotas on).  For other details about locking
see also the dquot_operations section.
->bdev_try_to_free_page is called from the ->releasepage handler of
the block device inode.  See there for more details.

--------------------------- file_system_type ---------------------------
prototypes:
	int (*get_sb) (struct file_system_type *, int,
		       const char *, void *, struct vfsmount *);
	struct dentry *(*mount) (struct file_system_type *, int,
		       const char *, void *);
	void (*kill_sb) (struct super_block *);
locking rules:
		may block
mount		yes
kill_sb		yes

->mount() returns ERR_PTR or the root dentry; its superblock should be locked
on return.
->kill_sb() takes a write-locked superblock, does all shutdown work on it,
unlocks and drops the reference.

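For illustration only (not part of the rules above), a block-device based
filesystem would typically implement ->mount() with the generic mount_bdev()
helper and use kill_block_super() for ->kill_sb; myfs_fill_super() here is a
hypothetical fill-super callback:

	static struct dentry *myfs_mount(struct file_system_type *fs_type,
					 int flags, const char *dev_name,
					 void *data)
	{
		/* mount_bdev() opens the device and calls the fill_super callback */
		return mount_bdev(fs_type, flags, dev_name, data, myfs_fill_super);
	}

	static struct file_system_type myfs_fs_type = {
		.owner		= THIS_MODULE,
		.name		= "myfs",
		.mount		= myfs_mount,
		.kill_sb	= kill_block_super,
		.fs_flags	= FS_REQUIRES_DEV,
	};
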
--------------------------- address_space_operations --------------------------
prototypes:
	int (*writepage)(struct page *page, struct writeback_control *wbc);
	int (*readpage)(struct file *, struct page *);
	int (*sync_page)(struct page *);
	int (*writepages)(struct address_space *, struct writeback_control *);
	int (*set_page_dirty)(struct page *page);
	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);
	int (*write_begin)(struct file *, struct address_space *mapping,
				loff_t pos, unsigned len, unsigned flags,
				struct page **pagep, void **fsdata);
	int (*write_end)(struct file *, struct address_space *mapping,
				loff_t pos, unsigned len, unsigned copied,
				struct page *page, void *fsdata);
	sector_t (*bmap)(struct address_space *, sector_t);
	int (*invalidatepage) (struct page *, unsigned long);
	int (*releasepage) (struct page *, int);
	void (*freepage)(struct page *);
	int (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
			loff_t offset, unsigned long nr_segs);
	int (*get_xip_mem)(struct address_space *, pgoff_t, int, void **,
				unsigned long *);
	int (*migratepage)(struct address_space *, struct page *, struct page *);
	int (*launder_page)(struct page *);
	int (*is_partially_uptodate)(struct page *, read_descriptor_t *, unsigned long);
	int (*error_remove_page)(struct address_space *, struct page *);
	int (*swap_activate)(struct file *);
	int (*swap_deactivate)(struct file *);

locking rules:
	All except set_page_dirty and freepage may block

			PageLocked(page)	i_mutex
writepage:		yes, unlocks (see below)
readpage:		yes, unlocks
sync_page:		maybe
writepages:
set_page_dirty		no
readpages:
write_begin:		locks the page		yes
write_end:		yes, unlocks		yes
bmap:
invalidatepage:		yes
releasepage:		yes
freepage:		yes
direct_IO:
get_xip_mem:		maybe
migratepage:		yes (both)
launder_page:		yes
is_partially_uptodate:	yes
error_remove_page:	yes
swap_activate:		no
swap_deactivate:	no

	->write_begin(), ->write_end(), ->sync_page() and ->readpage()
may be called from the request handler (/dev/loop).

	->readpage() unlocks the page, either synchronously or via I/O
completion.

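As an illustrative sketch (not mandated here), a typical block-based
->readpage() hands the page to a generic helper which submits the read and
unlocks the page on completion; myfs_get_block is a hypothetical get_block_t
callback:

	static int myfs_readpage(struct file *file, struct page *page)
	{
		/* mpage_readpage() submits the I/O and unlocks the page when done */
		return mpage_readpage(page, myfs_get_block);
	}
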
	->readpages() populates the pagecache with the passed pages and starts
I/O against them.  They come unlocked upon I/O completion.

	->writepage() is used for two purposes: for "memory cleansing" and for
"sync".  These are quite different operations and the behaviour may differ
depending upon the mode.

If writepage is called for sync (wbc->sync_mode != WB_SYNC_NONE) then
it *must* start I/O against the page, even if that would involve
blocking on in-progress I/O.

If writepage is called for memory cleansing (sync_mode ==
WB_SYNC_NONE) then its role is to get as much writeout underway as
possible.  So writepage should try to avoid blocking against
currently-in-progress I/O.

If the filesystem is not called for "sync" and it determines that it
would need to block against in-progress I/O to be able to start new I/O
against the page the filesystem should redirty the page with
redirty_page_for_writepage(), then unlock the page and return zero.
This may also be done to avoid internal deadlocks, but rarely.

If the filesystem is called for sync then it must wait on any
in-progress I/O and then start new I/O.

The filesystem should unlock the page synchronously, before returning to the
caller, unless ->writepage() returns the special AOP_WRITEPAGE_ACTIVATE
value.  AOP_WRITEPAGE_ACTIVATE means that the page cannot really be written
out currently, and the VM should stop calling ->writepage() on this page for
some time.  The VM does this by moving the page to the head of the active
list, hence the name.

Unless the filesystem is going to redirty_page_for_writepage(), unlock the page
and return zero, writepage *must* run set_page_writeback() against the page,
followed by unlocking it.  Once set_page_writeback() has been run against the
page, write I/O can be submitted and the write I/O completion handler must run
end_page_writeback() once the I/O is complete.  If no I/O is submitted, the
filesystem must run end_page_writeback() against the page before returning from
writepage.

That is: after 2.5.12, pages which are under writeout are *not* locked.  Note,
if the filesystem needs the page to be locked during writeout, that is ok, too,
the page is allowed to be unlocked at any point in time between the calls to
set_page_writeback() and end_page_writeback().

Note, failure to run either redirty_page_for_writepage() or the combination of
set_page_writeback()/end_page_writeback() on a page submitted to writepage
will leave the page itself marked clean but it will be tagged as dirty in the
radix tree.  This incoherency can lead to all sorts of hard-to-debug problems
in the filesystem like having dirty inodes at umount and losing written data.

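To illustrate the rules above, here is a minimal, hypothetical ->writepage()
sketch with error handling simplified; myfs_io_in_progress() and
myfs_submit_write() stand in for filesystem-specific pieces:

	static int myfs_writepage(struct page *page, struct writeback_control *wbc)
	{
		/* memory cleansing: do not block on I/O already in flight */
		if (wbc->sync_mode == WB_SYNC_NONE && myfs_io_in_progress(page)) {
			redirty_page_for_writepage(wbc, page);
			unlock_page(page);
			return 0;
		}

		set_page_writeback(page);
		unlock_page(page);	/* pages under writeout are not locked */

		if (myfs_submit_write(page) < 0) {
			/* no I/O was submitted, so end writeback ourselves */
			end_page_writeback(page);
			return -EIO;
		}
		/* otherwise the I/O completion handler runs end_page_writeback() */
		return 0;
	}
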
	->sync_page() locking rules are not well-defined - usually it is called
with the page locked, but that is not guaranteed.  Considering the currently
existing instances of this method, ->sync_page() itself doesn't look
well-defined...

	->writepages() is used for periodic writeback and for syscall-initiated
sync operations.  The address_space should start I/O against at least
*nr_to_write pages.  *nr_to_write must be decremented for each page which is
written.  The address_space implementation may write more (or fewer) pages
than *nr_to_write asks for, but it should try to be reasonably close.  If
nr_to_write is NULL, all dirty pages must be written.

writepages should _only_ write pages which are present on
mapping->io_pages.

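One common way to satisfy this (shown here only as a hedged sketch) is to
defer to a generic helper that walks the dirty pages and honours nr_to_write;
myfs_get_block is again a hypothetical get_block_t callback:

	static int myfs_writepages(struct address_space *mapping,
				   struct writeback_control *wbc)
	{
		/* mpage_writepages() iterates dirty pages, decrementing nr_to_write */
		return mpage_writepages(mapping, wbc, myfs_get_block);
	}
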
	->set_page_dirty() is called from various places in the kernel
when the target page is marked as needing writeback.  It may be called
under spinlock (it cannot block) and is sometimes called with the page
not locked.

	->bmap() is currently used by legacy ioctl() (FIBMAP) provided by some
filesystems and by the swapper.  The latter will eventually go away.  Please,
keep it that way and don't breed new callers.

	->invalidatepage() is called when the filesystem must attempt to drop
some or all of the buffers from the page when it is being truncated.  It
returns zero on success.  If ->invalidatepage is zero, the kernel uses
block_invalidatepage() instead.

	->releasepage() is called when the kernel is about to try to drop the
buffers from the page in preparation for freeing it.  It returns zero to
indicate that the buffers are (or may be) freeable.  If ->releasepage is zero,
the kernel assumes that the fs has no private interest in the buffers.

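As a hedged sketch, a buffer-based filesystem with no extra private state can
usually just try to free the attached buffers (note that the in-tree signature
takes a gfp_t mask rather than the plain int shown in the prototype list
above):

	static int myfs_releasepage(struct page *page, gfp_t gfp_mask)
	{
		/* non-zero return tells the VM the buffers are freeable */
		if (PagePrivate(page))
			return try_to_free_buffers(page);
		return 1;
	}
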
	->freepage() is called when the kernel is done dropping the page
from the page cache.

	->launder_page() may be called prior to releasing a page if
it is still found to be dirty.  It returns zero if the page was successfully
cleaned, or an error value if not.  Note that in order to prevent the page
getting mapped back in and redirtied, it needs to be kept locked
across the entire operation.

	->swap_activate will be called with a non-zero argument on
files backing (non block device backed) swapfiles.  A return value
of zero indicates success, in which case this file can be used for
backing swapspace.  The swapspace operations will be proxied to the
address space operations.

	->swap_deactivate() will be called in the sys_swapoff()
path after ->swap_activate() returned success.

----------------------- file_lock_operations ------------------------------
prototypes:
	void (*fl_copy_lock)(struct file_lock *, struct file_lock *);
	void (*fl_release_private)(struct file_lock *);


locking rules:
			file_lock_lock	may block
fl_copy_lock:		yes		no
fl_release_private:	maybe		no

----------------------- lock_manager_operations ---------------------------
prototypes:
	int (*lm_compare_owner)(struct file_lock *, struct file_lock *);
	void (*lm_notify)(struct file_lock *);	/* unblock callback */
	int (*lm_grant)(struct file_lock *, struct file_lock *, int);
	void (*lm_break)(struct file_lock *);	/* break_lease callback */
	int (*lm_change)(struct file_lock **, int);

locking rules:
			file_lock_lock	may block
lm_compare_owner:	yes		no
lm_notify:		yes		no
lm_grant:		no		no
lm_break:		yes		no
lm_change:		yes		no

--------------------------- buffer_head -----------------------------------
prototypes:
	void (*b_end_io)(struct buffer_head *bh, int uptodate);

locking rules:
	called from interrupts.  In other words, extreme care is needed here.
bh is locked, but that is the only guarantee we have here.  Currently only
RAID1, highmem, fs/buffer.c, and fs/ntfs/aops.c provide these.  Block devices
call this method upon I/O completion.

--------------------------- block_device_operations -----------------------
prototypes:
	int (*open) (struct block_device *, fmode_t);
	int (*release) (struct gendisk *, fmode_t);
	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
	int (*direct_access) (struct block_device *, sector_t, void **, unsigned long *);
	int (*media_changed) (struct gendisk *);
	void (*unlock_native_capacity) (struct gendisk *);
	int (*revalidate_disk) (struct gendisk *);
	int (*getgeo)(struct block_device *, struct hd_geometry *);
	void (*swap_slot_free_notify) (struct block_device *, unsigned long);

locking rules:
			bd_mutex
open:			yes
release:		yes
ioctl:			no
compat_ioctl:		no
direct_access:		no
media_changed:		no
unlock_native_capacity: no
revalidate_disk:	no
getgeo:			no
swap_slot_free_notify:	no	(see below)

media_changed, unlock_native_capacity and revalidate_disk are called only from
check_disk_change().

swap_slot_free_notify is called with swap_lock and sometimes the page lock
held.

--------------------------- file_operations -------------------------------
prototypes:
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	ssize_t (*aio_read) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
	ssize_t (*aio_write) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
	int (*readdir) (struct file *, void *, filldir_t);
	unsigned int (*poll) (struct file *, struct poll_table_struct *);
	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
	int (*mmap) (struct file *, struct vm_area_struct *);
	int (*open) (struct inode *, struct file *);
	int (*flush) (struct file *);
	int (*release) (struct inode *, struct file *);
	int (*fsync) (struct file *, loff_t start, loff_t end, int datasync);
	int (*aio_fsync) (struct kiocb *, int datasync);
	int (*fasync) (int, struct file *, int);
	int (*lock) (struct file *, int, struct file_lock *);
	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long,
			loff_t *);
	ssize_t (*writev) (struct file *, const struct iovec *, unsigned long,
			loff_t *);
	ssize_t (*sendfile) (struct file *, loff_t *, size_t, read_actor_t,
			void __user *);
	ssize_t (*sendpage) (struct file *, struct page *, int, size_t,
			loff_t *, int);
	unsigned long (*get_unmapped_area)(struct file *, unsigned long,
			unsigned long, unsigned long, unsigned long);
	int (*check_flags)(int);
	int (*flock) (struct file *, int, struct file_lock *);
	ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *,
			size_t, unsigned int);
	ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *,
			size_t, unsigned int);
	int (*setlease)(struct file *, long, struct file_lock **);
	long (*fallocate)(struct file *, int, loff_t, loff_t);
};

locking rules:
	All may block except for ->setlease.
	No VFS locks held on entry except for ->setlease.

->setlease has the file_list_lock held and must not sleep.

->llseek() locking has moved from llseek to the individual llseek
implementations.  If your fs is not using generic_file_llseek, you
need to acquire and release the appropriate locks in your ->llseek().
For many filesystems, it is probably safe to acquire the inode
mutex or just to use i_size_read() instead.
Note: this does not protect the file->f_pos against concurrent modifications
since this is something userspace has to take care of.

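For example (a sketch only, not taken from any particular filesystem), an
->llseek() that needs the current file size can use i_size_read() without
taking i_mutex:

	static loff_t myfs_llseek(struct file *file, loff_t offset, int whence)
	{
		struct inode *inode = file_inode(file);

		switch (whence) {
		case SEEK_END:
			offset += i_size_read(inode);	/* no i_mutex needed */
			break;
		case SEEK_CUR:
			offset += file->f_pos;
			break;
		}
		if (offset < 0)
			return -EINVAL;
		file->f_pos = offset;	/* f_pos itself is not protected here */
		return offset;
	}
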
->fasync() is responsible for maintaining the FASYNC bit in filp->f_flags.
Most instances call fasync_helper(), which does that maintenance, so it's
not normally something one needs to worry about.  Return values > 0 will be
mapped to zero in the VFS layer.

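A typical instance (sketched here with a hypothetical per-file private
structure) just forwards to fasync_helper():

	static int myfs_fasync(int fd, struct file *file, int on)
	{
		struct myfs_file *priv = file->private_data;	/* hypothetical */

		/* fasync_helper() maintains FASYNC in filp->f_flags for us */
		return fasync_helper(fd, file, on, &priv->fasync_queue);
	}
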
->readdir() and ->ioctl() on directories must be changed.  Ideally we would
move ->readdir() to inode_operations and use a separate method for directory
->ioctl() or kill the latter completely.  One of the problems is that for
anything that resembles union-mount we won't have a struct file for all
components.  And there are other reasons why the current interface is a mess...

->read on directories probably must go away - we should just enforce -EISDIR
in sys_read() and friends.

--------------------------- dquot_operations -------------------------------
prototypes:
	int (*write_dquot) (struct dquot *);
	int (*acquire_dquot) (struct dquot *);
	int (*release_dquot) (struct dquot *);
	int (*mark_dirty) (struct dquot *);
	int (*write_info) (struct super_block *, int);

These operations are intended to be more or less wrapping functions that ensure
proper locking wrt the filesystem and call the generic quota operations.

What filesystems should expect from the generic quota functions:

		FS recursion	Held locks when called
write_dquot:	yes		dqonoff_sem or dqptr_sem
acquire_dquot:	yes		dqonoff_sem or dqptr_sem
release_dquot:	yes		dqonoff_sem or dqptr_sem
mark_dirty:	no		-
write_info:	yes		dqonoff_sem

FS recursion means calling ->quota_read() and ->quota_write() from superblock
operations.

More details about quota locking can be found in fs/quota/dquot.c.

--------------------------- vm_operations_struct -----------------------------
prototypes:
	void (*open)(struct vm_area_struct*);
	void (*close)(struct vm_area_struct*);
	int (*fault)(struct vm_area_struct*, struct vm_fault *);
	int (*page_mkwrite)(struct vm_area_struct *, struct vm_fault *);
	int (*access)(struct vm_area_struct *, unsigned long, void*, int, int);

locking rules:
		mmap_sem	PageLocked(page)
open:		yes
close:		yes
fault:		yes		can return with page locked
page_mkwrite:	yes		can return with page locked
access:		yes

	->fault() is called when a previously not present pte is about
to be faulted in.  The filesystem must find and return the page associated
with the passed in "pgoff" in the vm_fault structure.  If it is possible that
the page may be truncated and/or invalidated, then the filesystem must lock
the page, then ensure it is not already truncated (the page lock will block
subsequent truncate), and then return with VM_FAULT_LOCKED, and the page
locked.  The VM will unlock the page.

	->page_mkwrite() is called when a previously read-only pte is
about to become writeable.  The filesystem again must ensure that there are
no truncate/invalidate races, and then return with the page locked.  If
the page has been truncated, the filesystem should not look up a new page
like the ->fault() handler, but simply return with VM_FAULT_NOPAGE, which
will cause the VM to retry the fault.

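A hedged sketch of a ->page_mkwrite() that follows these rules (the
filesystem-specific dirtying work is omitted):

	static int myfs_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
	{
		struct page *page = vmf->page;

		lock_page(page);
		/* the page may have been truncated while we slept on the lock */
		if (page->mapping != vma->vm_file->f_mapping) {
			unlock_page(page);
			return VM_FAULT_NOPAGE;		/* VM retries the fault */
		}
		/* ... mark buffers/extents dirty as needed ... */
		return VM_FAULT_LOCKED;			/* page stays locked */
	}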
	->access() is called when get_user_pages() fails in
access_process_vm(), typically used to debug a process through
/proc/pid/mem or ptrace.  This function is needed only for
VM_IO | VM_PFNMAP VMAs.

================================================================================
			Dubious stuff

(if you break something or notice that it is broken and do not fix it yourself
- at least put it here)