mirror of git://git.sv.gnu.org/coreutils.git synced 2026-02-26 17:16:01 +02:00

tests: adjust the new, very expensive rm test to be less expensive

* tests/rm/4-million-entry-dir: Create only 200,000 files, rather
than 4 million.  The latter was overkill, and was too likely to
fail due to inode exhaustion.  Not everyone is using btrfs yet.
Now that this test doesn't take so long, label it as merely
"expensive", rather than "very expensive".  Thanks to
Bernhard Voelker for pointing out the risk of inode exhaustion.
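The log message cites inode exhaustion as the reason for scaling the test down. As a side sketch (my addition, not part of the commit — the `df -Pi` invocation and the 200,000 threshold mirroring the test are my assumptions), one way a script could gauge free-inode headroom before creating that many files:

```shell
#!/bin/sh
# Sketch: estimate free-inode headroom in the current directory's
# filesystem before creating many small files.  df's -i (inode
# listing) and -P (one line per filesystem) are GNU/POSIX options;
# the IFree count is field 4 of the -P output.  200000 matches the
# adjusted test size and is illustrative.
need=200000
ifree=$(df -Pi . 2>/dev/null | awk 'NR==2 {print $4}')
case $ifree in
  ''|*[!0-9]*) verdict="unknown (df -i unavailable or non-numeric)" ;;
  *) if [ "$ifree" -ge "$need" ]; then
       verdict="enough inodes ($ifree free)"
     else
       verdict="too few inodes ($ifree free)"
     fi ;;
esac
echo "inode headroom: $verdict"
```

Note that some filesystems (btrfs among them, as the log message hints) allocate inodes dynamically and may report 0 or a meaningless count here, so a test would treat "unknown" as inconclusive rather than as a failure.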
Author: Jim Meyering
Date:   2011-08-24 10:36:25 +02:00
parent 1f93c96339
commit ebc63d33ea


@@ -1,5 +1,6 @@
 #!/bin/sh
-# in coreutils-8.12, this would have required ~1GB of memory
+# In coreutils-8.12, rm,du,chmod, etc. would use too much memory
+# when processing a directory with many entries (as in > 100,000).
 # Copyright (C) 2011 Free Software Foundation, Inc.
@@ -17,19 +18,21 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 . "${srcdir=.}/init.sh"; path_prepend_ ../src
-print_ver_ rm
+print_ver_ rm du
-very_expensive_
+expensive_
-# Put 4M files in a directory.
+# With many files in a single directory...
 mkdir d && cd d || framework_failure_
-seq 4000000|xargs touch || framework_failure_
+seq 200000|xargs touch || framework_failure_
 cd ..
-# Restricted to 50MB, rm from coreutils-8.12 would fail with a
-# diagnostic like "rm: fts_read failed: Cannot allocate memory".
-ulimit -v 50000
+# Restricted to 40MB, rm from coreutils-8.12 each of these would fail
+# with a diagnostic like "rm: fts_read failed: Cannot allocate memory".
+ulimit -v 40000
+du -sh d || fail=1
+chmod -R 700 d || fail=1
 rm -rf d || fail=1
 Exit $fail
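The test's `ulimit -v 40000` caps virtual memory for the script itself and every command it subsequently runs — which is fine here, since the capped `du`, `chmod`, and `rm` invocations are the last real work the script does. As a sketch (my illustration, not part of this commit), a script that needs unrestricted commands afterwards could confine the cap to a subshell instead:

```shell
#!/bin/sh
# Sketch: ulimit applies to the current shell and its children, so
# setting it inside a subshell (...) confines the cap to the commands
# run there.  40000 KB mirrors the test's limit; ":" stands in for
# the commands under test.
( ulimit -v 40000 2>/dev/null; : ) && scope_ok=yes || scope_ok=no
# Back in the parent shell, the limit is whatever it was before.
outer_limit=$(ulimit -v)
echo "cap applied in subshell: $scope_ok; parent limit: $outer_limit"
```

Lowering `ulimit -v` is one-way within a shell (a non-root process cannot raise its hard limit back), which is why scoping matters when later steps must not run restricted.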