Finding largest files on a filesystem

Finding largest files on a filesystem

on 01.11.2007 20:30:53 by groups.user

Hi

I'm looking for a command or a simple script to find the largest files
in a filesystem,
ordered by size.

Does anyone have any recommendations on a suitable command or script?

Thanks

Re: Finding largest files on a filesystem

on 01.11.2007 20:56:28 by Stephane CHAZELAS

2007-11-1, 12:30(-07), groups.user@gmail.com:
> i'm looking for a command or a simple script to find the largest files
> in a filesystem,
> ordered by size.
>
> Does anyone have any recommendations on a similar command structure or
> script
[...]

With zsh, this gives the top-10:

ls -ld /**/*(DOL[1,10])

(note that it crosses filesystems)
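
For reference, the glob qualifiers used here are: D to include dot
files, OL to order the matches by file size (largest first), and
[1,10] to keep only the first ten. The same idea limited to a single
tree would be, for example (the directory is just an example):

ls -ld /var/log/**/*(DOL[1,10])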

If you have GNU utilities:

find / -xdev -printf '%s:%p\0' |
sort -znr |
tr '\0\n' '\n\0' |
cut -d: -f2- |
head -n 10 |
tr '\0\n' '\n\0' |
xargs -r0 ls -ldU
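
For readers not used to the NUL juggling, here is the same pipeline
again with comments added (nothing changed otherwise; GNU find, sort
and xargs assumed):

find / -xdev -printf '%s:%p\0' |   # print "size:path" records, NUL-terminated
sort -znr |                        # numeric reverse sort of the NUL-delimited records
tr '\0\n' '\n\0' |                 # swap NUL and newline so line-based tools can be used
cut -d: -f2- |                     # strip the leading size field
head -n 10 |                       # keep the ten largest
tr '\0\n' '\n\0' |                 # swap back, restoring any newlines in the names
xargs -r0 ls -ldU                  # list them without re-sorting (ls -U)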

--
Stéphane

Re: Finding largest files on a filesystem

on 01.11.2007 21:26:41 by jellybean stonerfish

On Thu, 01 Nov 2007 12:30:53 -0700, groups.user wrote:

> Hi
>
> i'm looking for a command or a simple script to find the largest files
> in a filesystem,
> ordered by size.
>
> Does anyone have any recommendations on a similar command structure or
> script
>
> Thanks

Use find to execute ls:

find path -type f -exec ls -s {} + | sort -n > filesizes.table


path                directory you wish to scan
-type f             match only regular files
-exec ls -s {} +    execute the command, replacing {} with the file names find finds

ls -s               list sizes along with the file names
| sort -n           pipe the output through sort with the numeric option
> filesizes.table   redirect the output to the file filesizes.table
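
To see only the largest files first, the sort can be reversed and
piped to head, for example (the path and the count are just examples;
-xdev keeps find on a single filesystem):

find /home -xdev -type f -exec ls -s {} + | sort -nr | head -n 20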



stonerfish

Re: Finding largest files on a filesystem

on 02.11.2007 00:09:50 by Jstein

You may ALSO want to look for directories that are holding a lot of
storage (in case there are thousands of smaller files building up and
taking space).

$ du /usr | sort -n -r | head -20
     ^^^^--- substitute whatever filesystem you want here.


#-- For individual files, you might try

find /usr -size +15000 -ls | sort -n -r +0.40 -0.52 | head -20
find /usr -size +15000 -ls | cut -c40- | sort -n -r | head -20

#-- It might be good to target larger files that are not changed in >20 days, and <2 years

find /usr -size +2000 -mtime +20 -mtime -700 -ls | sort -n -r +0.40 -0.52 | head -20
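
The +0.40 -0.52 notation is the old-style sort key syntax (a fixed
character range, where the size column of the -ls output usually
falls). With a sort that only takes -k, a rough equivalent is to sort
on the size field itself (field 7 in GNU find's -ls output; the
column position can differ between implementations):

find /usr -size +15000 -ls | sort -nr -k7,7 | head -20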


Hopefully that helps.
-- Joseph

Re: Finding largest files on a filesystem

on 02.11.2007 12:40:13 by snehansu.chatterjee

On Nov 2, 12:30 am, groups.u...@gmail.com wrote:
> Hi
>
> i'm looking for a command or a simple script to find the largest files
> in a filesystem,
> ordered by size.
>
> Does anyone have any recommendations on a similar command structure or
> script
>
> Thanks

# du -sk * | sort -n
or use sort -nr to sort in reverse (largest first).
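
Note that du -sk * only covers the entries of the current directory
(and misses dot files); to rank every file and directory under a
whole tree, du -a can be used instead, for example (the path and the
count are just examples):

du -ak /var | sort -nr | head -n 20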

Re: Finding largest files on a filesystem

on 02.11.2007 14:56:15 by groups.user

On Nov 1, 4:26 pm, jellybean stonerfish wrote:
> On Thu, 01 Nov 2007 12:30:53 -0700, groups.user wrote:
> > Hi
>
> > i'm looking for a command or a simple script to find the largest files
> > in a filesystem,
> > ordered by size.
>
> > Does anyone have any recommendations on a similar command structure or
> > script
>
> > Thanks
>
> Using find to execute ls.
>
> find path -type f -exec ls -s {} + | sort -n > filesizes.table
>
> path Directory you wish to scan.
> -type f find all regular files
> -exec ls -s {} + execute the command replacing {} with files find finds
>
> ls -s list sizes with filenames
> | sort -n pipe output through sort with numeric option
>
> > filesizes.table Direct output to file
>
> stonerfish


Hi.. Thanks Stonerfish..

So if I want to find the sizes of all files in the root filesystem, would I
execute the following command?

find / -type f -exec ls -s {} + | sort -n > filesizes.table

I'm not sure what you mean by

-exec ls -s {} + execute the command replacing {} with files find finds

Thanks

Re: Finding largest files on a filesystem

on 02.11.2007 16:18:42 by jellybean stonerfish

On Fri, 02 Nov 2007 13:56:15 +0000, groups.user wrote:

> Hi.. Thanks Stonerfish..
>
> So if I want to find all the filesizes in the root filesystem, would I
> execute the following command
>
> find / -type f -exec ls -s {} + | sort -n > filesizes.table
>
> I'm not sure what you mean by
>
> -exec ls -s {} + execute the command replacing {} with files find
> finds

Read the man page for find. As "find" walks the tree and finds files, it will
execute the command after "-exec", replacing {} with the file names.
So if in the folder "fred" you have files "tom", "dick", "harry", then
find fred -type f -exec ls -s {} +
would be like doing the command
ls -s tom dick harry

Also you could add -xdev to the find command to prevent searching other
filesystems.
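
For example, your command with -xdev added, so it stays on the root
filesystem only, would be:

find / -xdev -type f -exec ls -s {} + | sort -n > filesizes.table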

Re: Finding largest files on a filesystem

on 05.12.2007 21:56:26 by Mikhail Teterin

Stephane CHAZELAS wrote:
> If you have GNU utilities:
>
> find / -xdev -printf '%s:%p\0' |
> sort -znr |
> tr '\0\n' '\n\0' |
> cut -d: -f2- |
> head -n 10 |
> tr '\0\n' '\n\0' |
> xargs -r0 ls -ldU

Why not simply use the `-ls' predicate available with GNU as well as on
modern BSD (NetBSD, FreeBSD, OpenBSD, DragonFlyBSD, MacOS-10, BSDOS), and,
at least, Solaris 10?

find /path/to/fs -xdev -type f -ls

The second column is the file's size in Kb:

find /path/to/fs -xdev -type f -ls | awk '{print $2 " " $NF}' | sort -n

A lot simpler (IMO) and certainly more efficient, because each file is only
stat-ed once, rather than twice as in the quoted example (first by find and
then by ls).

Yours,

-mi

Re: Finding largest files on a filesystem

on 06.12.2007 03:21:07 by Allodoxaphobia

On Wed, 05 Dec 2007 15:56:26 -0500, Mikhail Teterin wrote:
> Stephane CHAZELAS wrote:
>> If you have GNU utilities:
>>
>> find / -xdev -printf '%s:%p\0' |
>> sort -znr |
>> tr '\0\n' '\n\0' |
>> cut -d: -f2- |
>> head -n 10 |
>> tr '\0\n' '\n\0' |
>> xargs -r0 ls -ldU
>
> Why not simply use the `-ls' predicate available with GNU as well as on
> modern BSD (NetBSD, FreeBSD, OpenBSD, DragonFlyBSD, MacOS-10, BSDOS), and,
> at least, Solaris 10?
>
> find /path/to/fs -xdev -type f -ls
>
> The second column is the file's size in Kb:
>
> find /path/to/fs -xdev -type f -ls | awk '{print $2 " " $NF}' | sort -n
>
> A lot simpler (IMO) and certainly more efficient, because each file is only
> stat-ed once, rather than twice as in the quoted example (first by find and
> then by ls).

Needs a fixup for filenames with embedded blanks.

Here on linux (Mandrake) it is ala (without the $#^%!*&$!$# embedded
blanks fixup (yet)):

$ find /path/to/fs -xdev -type f -ls | awk '{print $7 " " $NF}' \
| sort -nr \
| head -n 40

Note the proper field position for filesize in `awk`.
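
One possible fixup for the embedded blanks, as a sketch only (it
rebuilds the name from field 11 onward, so runs of spaces inside a
name get collapsed, and newlines in names still break it):

find /path/to/fs -xdev -type f -ls |
  awk '{ name = $11; for (i = 12; i <= NF; i++) name = name " " $i; print $7, name }' |
  sort -nr |
  head -n 40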

Yours, too.
Jonesy
--
Marvin L Jones | jonz | W3DHJ | linux
38.24N 104.55W | @ config.com | Jonesy | OS/2
*** Killfiling google posts:

Re: Finding largest files on a filesystem

on 06.12.2007 08:51:26 by Stephane CHAZELAS

On 6 Dec 2007 02:21:07 GMT, Allodoxaphobia wrote:
> On Wed, 05 Dec 2007 15:56:26 -0500, Mikhail Teterin wrote:
>> Stephane CHAZELAS wrote:
>>> If you have GNU utilities:
>>>
>>> find / -xdev -printf '%s:%p\0' |
>>> sort -znr |
>>> tr '\0\n' '\n\0' |
>>> cut -d: -f2- |
>>> head -n 10 |
>>> tr '\0\n' '\n\0' |
>>> xargs -r0 ls -ldU
>>
>> Why not simply use the `-ls' predicate available with GNU as well as on
>> modern BSD (NetBSD, FreeBSD, OpenBSD, DragonFlyBSD, MacOS-10, BSDOS), and,
>> at least, Solaris 10?
>>
>> find /path/to/fs -xdev -type f -ls
>>
>> The second column is the file's size in Kb:

No, the second column is the disk usage of the file in blocks,
whose size depends on the find implementation and/or the
environment, or one of the words of the filename if that
filename contains blanks or newline characters.

With GNU find:
~$ find a -ls
329231 4 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a
~$ POSIXLY_CORRECT=1 find a -ls
329231 8 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a

That 8-byte file uses 4 kB of disk, or 8 512-byte blocks.
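
With GNU find, -printf can report each of these explicitly, which
avoids the ambiguity; a quick sketch on the same file:

find a -printf '%s bytes, %k kB allocated, %b 512-byte blocks: %p\n'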

>> find /path/to/fs -xdev -type f -ls | awk '{print $2 " " $NF}' | sort -n
>>
>> A lot simpler (IMO) and certainly more efficient, because each file is only
>> stat-ed once, rather than twice as in the quoted example (first by find and
>> then by ls).

ls -ldU was just given as an example application to run on the
result of that search.

> Needs a fixup for filenames with embedded blanks.

Or newline characters.

> Here on linux (Mandrake) it is ala (without the $#^%!*&$!$# embedded
> blanks fixup (yet)):

To fix that, and filenames containing newlines as well, you need to use a NUL
character, or to do a find // and search for that with awk to
determine where each filename starts. Too much trouble, hence
the use of GNU find's -printf.
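
A sketch of that find // approach, for what it's worth (the doubled
slash marks where each filename starts, so blanks in names are
handled; newlines in names still break it, and $7 assumes GNU find's
-ls column layout):

find //path/to/fs -xdev -type f -ls |
  awk '{ match($0, / \/\//); print $7, substr($0, RSTART + 1) }' |
  sort -nr |
  head -n 10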

> $ find /path/to/fs -xdev -type f -ls | awk '{print $7 " " $NF}' \
> | sort -nr \
> | head -n 40
>
> Note the proper field position for filesize in `awk`.
[...]

That's only if none of the previous fields had blanks in them.

--
Stephane

Re: Finding largest files on a filesystem

on 06.12.2007 22:14:10 by Michael Tosch

Stephane Chazelas wrote:
> On 6 Dec 2007 02:21:07 GMT, Allodoxaphobia wrote:
>> On Wed, 05 Dec 2007 15:56:26 -0500, Mikhail Teterin wrote:
>>> Stephane CHAZELAS wrote:
>>>> If you have GNU utilities:
>>>>
>>>> find / -xdev -printf '%s:%p\0' |
>>>> sort -znr |
>>>> tr '\0\n' '\n\0' |
>>>> cut -d: -f2- |
>>>> head -n 10 |
>>>> tr '\0\n' '\n\0' |
>>>> xargs -r0 ls -ldU
>>> Why not simply use the `-ls' predicate available with GNU as well as on
>>> modern BSD (NetBSD, FreeBSD, OpenBSD, DragonFlyBSD, MacOS-10, BSDOS), and,
>>> at least, Solaris 10?
>>>
>>> find /path/to/fs -xdev -type f -ls
>>>
>>> The second column is the file's size in Kb:
>
> No, the second column is the disk usage of the file in block
> whose size depends on the find implementation and/ore the
> environment, or one of the words of the filename if that
> filename contains blanks and newline characters.
>
> With GNU find:
> ~$ find a -ls
> 329231 4 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a
> ~$ POSIXLY_CORRECT=1 find a -ls
> 329231 8 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a
>

Really?

Solaris /usr/bin/find and /usr/xpg4/bin/find print kilobytes,
and so says the man page.
OSF1 /usr/bin/find prints kilobytes, too.
HP-UX 11.23 /usr/bin/find does not have -ls at all.

--
Michael Tosch @ hp : com

Re: Finding largest files on a filesystem

on 07.12.2007 08:32:08 by Stephane CHAZELAS

On Thu, 06 Dec 2007 22:14:10 +0100, Michael Tosch wrote:
[...]
>> No, the second column is the disk usage of the file in block
>> whose size depends on the find implementation and/ore the
>> environment, or one of the words of the filename if that
>> filename contains blanks and newline characters.
>>
>> With GNU find:
>> ~$ find a -ls
>> 329231 4 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a
>> ~$ POSIXLY_CORRECT=1 find a -ls
>> 329231 8 -rw-r--r-- 1 stephane spider 8 Dec 5 18:56 a
>>
>
> Really?
>
> Solaris /usr/bin/find and /usr/xpg4/bin/find prints kilobyte,
> and so sais the man page.
> OSF1 /usr/bin/find prints kilobyte, too.
> HP-UX 11.23 /usr/bin/find does not have -ls at all.

-ls is not a standard option, so you may get anything. The
POSIXLY_CORRECT in GNU find is there to force the block size to be
512 bytes. Otherwise (as in ls -s, du, df...), GNU tools tend to use
1 kB as the block size, as it's more convenient for users; nowadays,
most of the time, it has nothing to do with the actual block size of
the file system or block device anyway.

It's documented in the GNU find manual. Neither the GNU nor the Solaris
manual, however, is clear on whether it's the allocated disk space or
the file size.
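
If portability matters more than exact byte counts, one fallback that
sticks to standard options is to rank by allocated space instead (a
sketch; like du, it reports kilobytes of disk usage rather than file
sizes):

find /path/to/fs -xdev -type f -exec du -k {} + | sort -nr | head -n 20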

--
Stephane