===== Rename upper to lowercase in bash =====
<code>
for x in *.JPG; do y=$(echo $x | tr '[:upper:]' '[:lower:]'); mv "$x" "$y"; done
</code>
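The one-liner above can trip on file names containing spaces; a minimal whitespace-safe sketch of the same loop (assuming bash):
<code>
# Quoted variant: names with spaces survive, already-lowercase files are skipped
for x in *.JPG; do
  y=$(printf '%s\n' "$x" | tr '[:upper:]' '[:lower:]')
  [ "$x" != "$y" ] && mv -- "$x" "$y"
done
</code>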
===== Find duplicate files in Linux =====
Let's say you have a folder with 5000 MP3 files you want to check for duplicates. Or a directory containing thousands of EPUB files, all with different names, but you have a hunch some of them might be duplicates. You can cd your way in the console up to that particular folder and then do a
<code>
find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
</code>
This will output a list of files that are duplicates, according to their MD5 hash signature.
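If the one-liner looks opaque, here is the same pipeline split one stage per line with comments (a sketch, assuming GNU find and coreutils):
<code>
find -not -empty -type f -printf "%s\n" |           # print the size of every non-empty file
  sort -rn |                                        # biggest sizes first
  uniq -d |                                         # keep only sizes that occur more than once
  xargs -I{} -n1 find -type f -size {}c -print0 |   # expand those sizes back into file names
  xargs -0 md5sum |                                 # hash every candidate file
  sort |                                            # group identical hashes together
  uniq -w32 --all-repeated=separate                 # print only groups sharing the 32-char MD5
</code>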
Another way is to install fdupes and do a
<code>
fdupes -r . > duplicates_list.txt
</code>
The -r flag makes fdupes recurse into subdirectories. Check duplicates_list.txt afterwards in a text editor for a list of duplicate files.
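fdupes can also delete the duplicates for you; a sketch, assuming a build with the -d and -N options (check your man page first):
<code>
fdupes -r -d .       # -d: for each duplicate set, prompt which copy to keep
fdupes -r -d -N .    # -N: no prompt, keep the first file of each set and delete the rest
</code>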
+ | |||
+ | ===== Linux - Top 10 CPU-hungry apps ===== | ||
+ | |||
+ | ps -eo pcpu, | ||
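To keep an eye on this continuously, the same pipeline fits inside watch (assuming the procps watch utility is installed):
<code>
watch -n 2 "ps -eo pcpu,pid,user,args | sort -k1 -rn | head -10"   # refresh every 2 seconds
</code>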
+ | |||
+ | ===== Create static mirror of dynamic web site (ex. Wordpress) ===== | ||
+ | |||
+ | |||
+ | wget --mirror -w 2 -p -r -np --html-extension --convert-links -R xmlrpc.php, | ||
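A quick way to verify the result is to serve the mirrored tree locally; a sketch, assuming the example.com placeholder from above and that Python 3 is available:
<code>
cd example.com/                 # wget writes the mirror into a directory named after the host
python3 -m http.server 8080     # browse http://localhost:8080/ to check the static copy
</code>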
+ | |||
+ | ===== Find processes utilizing high memory in human readable format ====== | ||
+ | |||
+ | ps -eo size, |
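If the awk formatting is more than you need, GNU ps can sort by resident memory directly; a simpler sketch:
<code>
ps aux --sort=-rss | head -11   # header line plus the 10 largest resident-memory processes
</code>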