An earlier patch of mine accidentally introduced a new way for
duplicate directory entries to be returned from readdir(). That
patch fails to decrement the nlupgs counter when breaking out of
the inner for loop in lmv_adjust_dirpages(). This accounting error
causes an extra iteration of the inner for loop when processing the
next cfs page, and a bad ldp_hash_end value is then saved in the
lu_dirpage. To fix this, always decrement the nlupgs counter on
entry into the inner loop.
Note: this bug only affects architectures whose page size is larger
than 4KB, e.g. PowerPC.
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-3182
Lustre-change: http://review.whamcloud.com/6405
Signed-off-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Fan Yong <fan.yong@intel.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Bobi Jam <bobijam@gmail.com>
Signed-off-by: Peng Tao <tao.peng@emc.com>
Signed-off-by: Andreas Dilger <andreas.dilger@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		__u64 hash_end = dp->ldp_hash_end;
		__u32 flags = dp->ldp_flags;
-		for (; nlupgs > 1; nlupgs--) {
+		while (--nlupgs > 0) {
			ent = lu_dirent_start(dp);
			for (end_dirent = ent; ent != NULL;
			     end_dirent = ent, ent = lu_dirent_next(ent));
		kunmap(pages[i]);
	}
+	LASSERTF(nlupgs == 0, "left = %d", nlupgs);
}
#else
#define lmv_adjust_dirpages(pages, ncfspgs, nlupgs) do {} while (0)
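
To see why moving the decrement to loop entry fixes the accounting,
here is a minimal standalone sketch, not Lustre code: the function
names and page counts are hypothetical. The old form
"for (; nlupgs > 1; nlupgs--)" skips its decrement when the body
breaks out at the end of a cfs page, while the new form
"while (--nlupgs > 0)" decrements on entry, so every lu_dirpage
that gets processed is accounted for.

#include <stdio.h>

/* Old form: the decrement runs after the body, so the break at the
 * end of the cfs page skips one decrement. */
static int walk_cfs_page_old(int nlupgs, int per_page)
{
	int slot = 0;

	for (; nlupgs > 1; nlupgs--) {
		/* ... process one lu_dirpage ... */
		if (++slot == per_page)
			break;	/* nlupgs-- is skipped here */
	}
	return nlupgs;
}

/* New form: the decrement happens on entry, so each iteration
 * consumes exactly one lu_dirpage, break or not. */
static int walk_cfs_page_new(int nlupgs, int per_page)
{
	int slot = 0;

	while (--nlupgs > 0) {
		/* ... process one lu_dirpage ... */
		if (++slot == per_page)
			break;	/* counter already decremented on entry */
	}
	return nlupgs;
}

int main(void)
{
	/* Hypothetical numbers: two 64KB cfs pages, each holding
	 * sixteen 4KB lu_dirpages, 32 lu_dirpages in total. */
	printf("old loop leaves nlupgs = %d after the first cfs page\n",
	       walk_cfs_page_old(32, 16));
	printf("new loop leaves nlupgs = %d after the first cfs page\n",
	       walk_cfs_page_new(32, 16));
	return 0;
}

With these made-up numbers the old form returns 17 instead of 16, so
the next cfs page is walked one lu_dirpage too far, which is the
extra iteration and bad ldp_hash_end described above; the new form
returns 16 and the final LASSERTF(nlupgs == 0, ...) can hold.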