Do you have a tip/hack that's at most three lines long?
The command top is a very useful tool for capturing information about processes running on Linux. Often this information needs to be saved to a file, which can be done with the following command:
top -b -n1 > /tmp/top.txt
This will run top once, write the output to a file, and exit.
The command top can also be run so that it gives multiple reports. To run top 5 times, waiting 5 seconds between each report, the command would be:
top -b -n5 -d5 > /tmp/top.txt
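To capture a single process instead of the whole table, top's -p flag takes a PID. A small variation, assuming the procps top and a running sshd (pidof -s returns a single PID):
top -b -n1 -p $(pidof -s sshd) > /tmp/top_sshd.txt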
Replace spaces with underscores in JPEG file names:
# for i in *.[jJ][pP][gG]; do mv "$i" "$(echo "$i" | sed 's/ /_/g')"; done
The same with the Perl rename utility:
# rename 's/ /_/g' *.[jJ][pP][gG]
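If your rename is the Perl version, its -n flag previews the renames without touching anything, which is worth doing first:
# rename -n 's/ /_/g' *.[jJ][pP][gG]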
Find out what web server software a host runs ($server holds the host name):
# printf "HEAD / HTTP/1.0\r\n\r\n" | nc $server 80 | grep Server
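If curl is installed, the same check without hand-writing the request (curl -I sends a HEAD request):
$ curl -sI "http://$server/" | grep -i '^Server:'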
Split a compressed tar archive into fixed-size chunks as you create it:
tar -czf /dev/stdout $(DIRECTORY_OR_FILE_TO_COMPRESS) | split -d -b $(CHUNK_SIZE_IN_BYTES) - $(FILE_NAME_PREFIX)
Putting them back together again:
cat $(FILE_NAME_PREFIX)* | tar -xzf -
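For example, with hypothetical names (a ./photos directory, 100 MB chunks, prefix photos.tgz.):
tar -czf /dev/stdout ./photos | split -d -b 104857600 - photos.tgz.
cat photos.tgz.* | tar -xzf -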
Make make run up to three parallel jobs by default:
export MAKEFLAGS="-j3"
$ diff -c prog.c.old prog.c > prog.patch
$ patch < prog.patch
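If the patch turns out to be a mistake, the change can be backed out with patch's -R flag:
$ patch -R < prog.patch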
% sudo tcpdump -Atq -s 0 -i en1
-i en1 will display traffic on your AirPort card. Use en0 (or no -i flag at all, on most systems) for the built-in Ethernet. You can add a host expression to limit output to a particular host:
% sudo tcpdump -Atq -s 0 -i en1 host foobar.com
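To save the capture for later analysis instead of printing it, tcpdump's -w flag writes the raw packets to a file (read it back later with tcpdump -r):
% sudo tcpdump -s 0 -i en1 -w /tmp/capture.pcap host foobar.com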
Count how many files processes running as root have open:
# lsof | grep ' root ' | awk '{print $NF}' | sort | wc -l
Of course, if you want to drop the count and show the actual processes, you can run:
# lsof | grep ' root '
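Note that the count above includes duplicate entries; to count distinct files instead, deduplicate before counting:
# lsof | grep ' root ' | awk '{print $NF}' | sort -u | wc -l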
Check whether sshd was compiled with TCP wrappers (libwrap) support:
# ldd `which sshd` | grep -i libwrap
or
# strings `which sshd` | grep -i libwrap
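To turn either check into a yes/no answer in a script, grep's -q flag suppresses output and sets only the exit status:
# ldd `which sshd` | grep -qi libwrap && echo "sshd supports TCP wrappers"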
Two ways to run a command only when two files differ:
Method 1
cmp -s file1 file2 || {
    # do something
}
Method 2
cmp -s file1 file2
if [ $? = 1 ]; then
    # do something
fi
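For example, to refresh a backup copy only when the original has changed (hypothetical paths):
cmp -s /etc/hosts /root/hosts.bak || cp /etc/hosts /root/hosts.bak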
Match IPv4 addresses with:
egrep '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}'
and sort a list of them numerically, octet by octet, with:
sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4
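The two combine nicely. For instance, to extract and count the addresses appearing in a log file (access.log is a hypothetical name; -o prints only the matched text):
egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' access.log | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | uniq -c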
Follow a growing file, showing only lines that contain STRING:
tail -f filename | grep --line-buffered STRING
Serve files on port 8080 to anybody, from the directory where you start this command:
:;while [ $? -eq 0 ];do nc -vlp 8080 -c'(r=read;e=echo;$r a b c;z=$r;while [ ${#z} -gt 2 ];do $r z;done;f=`$e $b|sed 's/[^a-z0-9_.-]//gi'`;h="HTTP/1.0";o="$h 200 OK\r\n";c="Content";if [ -z $f ];then($e $o;ls|(while $r n;do if [ -f "$n" ]; then $e "<a href=\"/$n\">`ls -gh $n` ";fi;done););elif [ -f $f ];then $e "$o$c-Type: `file -ib $f`\n$c-Length: `stat -c%s $f`";$e;cat $f;else $e -e "$h 404 Not Found\n\n404\n";fi)';done
Serve files on port 8000 to anybody, from the directory where you start this command:
Python 2.x:
python -m SimpleHTTPServer
Python 3.x:
python -m http.server
If another port is desired (for example 9000), add the port number to the command:
python -m SimpleHTTPServer 9000
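The Python 3 server takes the port the same way:
python -m http.server 9000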
Do reverse DNS lookups for a whole network range without scanning it (nmap -sL only lists the targets; $1 is the range argument, e.g. in a small script):
nmap -sL $1 2>/dev/null | perl -ne 'print unless /^Host [\d.]+ /' | grep 'not scanned' | cut -d ' ' -f 2,3 | sed -e 's/\(.*\) (\(.*\))/\2 resolves to \1/'
Output:
198.133.219.10 resolves to fed.cisco.com
198.133.219.11 resolves to asp-web-sj-1.cisco.com
198.133.219.12 resolves to asp-web-sj-2.cisco.com
198.133.219.13 resolves to fedtst.cisco.com
198.133.219.14 resolves to www.netimpactstudy.com
198.133.219.15 resolves to deployx-sj.cisco.com
198.133.219.16 resolves to contact-sj1.cisco.com
198.133.219.17 resolves to scc-sj-1.cisco.com
198.133.219.18 resolves to scc-sj-2.cisco.com
198.133.219.19 resolves to scc-sj-3.cisco.com
198.133.219.20 resolves to jmckerna-test.cisco.com
198.133.219.21 resolves to events.cisco.com
198.133.219.22 resolves to bam-prod-1.cisco.com
198.133.219.23 resolves to redirect.cisco.com
198.133.219.25 resolves to www.cisco.com
198.133.219.26 resolves to partners.cisco.com
Show hardware serial numbers from the DMI/SMBIOS tables (run as root):
dmidecode | grep -i serial
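If your dmidecode supports the -s flag, the serial can be queried directly by keyword:
dmidecode -s system-serial-number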
Convert every PDF in a directory to an 800-pixel-wide JPEG:
for file in *.pdf; do convert -verbose -colorspace RGB -resize 800 -interlace none -density 300 -quality 80 "$file" "${file%.pdf}.jpg"; done
Lowercase the names of all .JPG files:
for x in *.JPG; do y=$(echo "$x" | tr '[:upper:]' '[:lower:]'); echo "$y"; mv "$x" "$y"; done
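If the Perl rename is available, the same rename fits in one expression (y/// transliterates characters):
rename 'y/A-Z/a-z/' *.JPG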
Let’s say you have a folder with 5000 MP3 files you want to check for duplicates, or a directory containing thousands of EPUB files, all with different names, some of which you suspect might be duplicates. cd into that folder in a console and run:
find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
This will output a list of files that are duplicates according to their MD5 hash. Another way is to install fdupes and run:
fdupes -r ./folder > duplicates_list.txt
The -r flag makes the search recursive. Check duplicates_list.txt afterwards in a text editor for the list of duplicate files.
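fdupes can also delete for you: -d deletes duplicates and -N keeps the first file of each set without prompting. Use with care, since the deleted copies are gone:
fdupes -rdN ./folder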
Show the ten processes using the most CPU (sort needs -n so that 9.5 does not outrank 10.2):
ps -eo pcpu,pid,args | sort -k 1 -rn | head -10
Mirror a web site for local browsing, waiting 2 seconds between requests and skipping xmlrpc.php and trackback URLs:
wget --mirror -w 2 -p -r -np --html-extension --convert-links -R xmlrpc.php,trackback <URL>
Show the five processes using the most memory, with human-readable sizes:
ps -eo size,pid,user,command | sort -rn | head -5 | awk '{ hr[1024^2]="GB"; hr[1024]="MB"; for (x=1024^3; x>=1024; x/=1024) { if ($1>=x) { printf ("%-6.2f %s ", $1/x, hr[x]); break } } } { printf ("%-6s %-10s ", $2, $3) } { for ( x=4 ; x<=NF ; x++ ) { printf ("%s ",$x) } print ("\n") }'