
Sunday, June 13, 2010

Display Information About File - Stat

In the Unix/Linux world everything is treated as a file: devices, directories and sockets are all just files.

The stat command displays file or file system status.

[Prabhat@Server1 Archive]$ stat 1_16470_587807474.arc
File: `1_16470_587807474.arc'
Size: 208514560 Blocks: 407664 IO Block: 4096 regular file
Device: fd02h/64770d Inode: 17006596 Links: 1
Access: (0640/-rw-r-----) Uid: ( 500/ oracle) Gid: ( 500/ dba)
Access: 2010-06-12 23:28:58.000000000 -0700
Modify: 2010-06-12 23:31:22.000000000 -0700
Change: 2010-06-12 23:31:22.000000000 -0700

Details of Linux Stat Command Output

* File: `1_16470_587807474.arc' – Name of the file.
* Size: 208514560 – File size in bytes.
* Blocks: 407664 – Total number of blocks used by this file.
* IO Block: 4096 – IO block size for this file.
* regular file – Indicates the file type; here it is a regular file. The available file types are:
o regular file (ex: all normal files)
o directory (ex: directories)
o socket (ex: sockets)
o symbolic link (ex: symbolic links)
o block special file (ex: hard disks)
o character special file (ex: terminal device files)

* Device: fd02h/64770d – Device number in hexadecimal and in decimal.
* Inode: 17006596 – Inode number is a unique number for each file which is used for the internal maintenance by the file system.
* Links: 1 – Number of links to the file
* Access: (0640/-rw-r-----) – File permissions, shown in both octal (0640) and symbolic (-rw-r-----) format.
* Uid: ( 500/oracle) – File owner’s user id and user name are displayed.
* Gid: ( 500/dba) – File owner’s group id and group name are displayed.
* Access: 2010-06-12 23:28:58.000000000 -0700 – Last access time of the file.
* Modify: 2010-06-12 23:31:22.000000000 -0700 – Last modification time of the file.
* Change: 2010-06-12 23:31:22.000000000 -0700 – Last change time of the file's inode data.
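
If you only need specific fields, GNU stat also accepts a custom format string (a minimal sketch assuming GNU coreutils; -c may not exist on other UNIX flavours). Here %n is the file name, %s the size in bytes and %a the permissions in octal:

[Prabhat@Server1 Archive]$ stat -c '%n %s %a' 1_16470_587807474.arc
1_16470_587807474.arc 208514560 640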

Stat – Display Information About Directory

You can use the same command to display the information about a directory as shown below.

[Prabhat@Server1 oradata]$ stat Archive
File: `Archive'
Size: 12288 Blocks: 24 IO Block: 4096 directory
Device: fd02h/64770d Inode: 16990209 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 500/ oracle) Gid: ( 500/ dba)
Access: 2009-11-02 23:43:22.000000000 -0800
Modify: 2010-06-13 05:26:54.000000000 -0700
Change: 2010-06-13 05:26:54.000000000 -0700
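
stat can also report on the file system that holds a file or directory rather than the file itself (a minimal sketch; -f is the GNU stat option for file system status and prints block and inode totals for the underlying file system):

[Prabhat@Server1 oradata]$ stat -f Archive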

Wednesday, April 07, 2010

Problem : CTRL+S in Putty

Over the years most of us have developed the habit of pressing CTRL+S every few minutes while working on a document, because we have all lost too much work to stupid mistakes; in the Windows world, CTRL+S means "save your work".
But this habit becomes a problem when working in the Linux world.

If we accidentally press CTRL+S inside a terminal window (in PuTTY), that single keystroke used to mean reconnecting to the Linux server, killing whatever program we were running, and then starting it again.

But here is the solution:

CTRL+S actually does XOFF, which means the terminal will accept key strokes but won’t show the output of anything. It will appear as if your terminal is dead when it’s really just waiting to be turned back on. The fix? Simply press CTRL+Q to turn flow-control on (XON). If you pressed a whole bunch of keys before pressing CTRL+Q, you’ll see the output from those keystrokes.
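
If you would rather that CTRL+S never freezes the terminal at all, you can disable XON/XOFF flow control for the current shell session (a minimal sketch; add the line to ~/.bashrc if you want it in every new session):

$ stty -ixon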

Wednesday, December 23, 2009

Temporarily stop/start a process in Linux

Sometimes we have a requirement that a particular job should stop for a certain period of time and then start again.

Most of us are familiar with the kill command, but here is another feature of kill that can save your life:

#kill -STOP 10067 (where 10067 is process id)

#kill -CONT 10067 (where 10067 is process id)
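
If you do not know the process id, you can look it up by name first (a quick sketch; my_batch_job is a hypothetical process name, and pgrep -f matches against the full command line):

#kill -STOP $(pgrep -f my_batch_job)
#kill -CONT $(pgrep -f my_batch_job)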



Have fun :)

Tuesday, October 27, 2009

nohup : keep commands executing even after you exit from a shell prompt

The nohup command, when added in front of any command, will keep that command or process running even if you shut down your terminal or close your session to the machine.

nohup command-name &

Where,

* command-name : the name of a shell script or command. You can pass arguments to the command or shell script.
* & : nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.

examples:

# nohup mysql -q -uUSER1 -pPASS1 < dump.sql > dump.log 2> error.log &
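
If you do not redirect the output yourself, nohup appends it to a file named nohup.out in the current directory (a minimal sketch; backup.sh is a hypothetical script name):

# nohup ./backup.sh &
# tail -f nohup.out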


source:
http://www.idevelopment.info/data/Unix/General_UNIX/GENERAL_RunningUNIXCommandsImmunetoHangups_nohup.shtml

http://www.cyberciti.biz/tips/nohup-execute-commands-after-you-exit-from-a-shell-prompt.html

Thursday, July 30, 2009

AWK: Pattern Matching and Processing

awk 'pattern {action}' filename

awk reads one line at a time from the file, checks for a pattern match, and performs the action if the line matches the pattern.

* NR is a special awk variable meaning the line number of the current record
* You can use a line number to select a specific line by comparing it to NR (for example: NR == 2)
* You can specify a range of line numbers (for example: NR == 2, NR == 4)
* You can specify a regular expression, to select all lines that match it
* $n are special awk variables meaning the value of the nth field (the field delimiter is space or tab)
* $0 is the entire record
* You can use field values by comparing them to $n (for example: $3 == 65)
* Every line is selected if no pattern is specified

Instructions

* print – print line(s) that match the pattern, or print fields within matching lines; print is the default if no action is specified
* There are many, many instructions, including just about all C statements with similar syntax; they are not covered further in this post


Examples, using the file testfile:
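
The examples below assume testfile holds one car record per line, with make in field 1, model in field 2, year in field 3, mileage in field 4 and price in field 5 (a hypothetical sample, shown only to make the field numbers concrete):

plym   fury     77  73   2500
chevy  malibu   79  60   3000
ford   mustang  65  45  10000
chevy  impala   65  85   1550
honda  accord   81  30   6000
toyota tercel   82  180   750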

awk 'NR == 2, NR == 4' testfile - print the 2nd through 4th lines (default action is to print entire line)

awk '/chevy/' testfile - print only lines matching regular expression, same as grep 'chevy' testfile
awk '{print $3, $1}' testfile - print third and first field of all lines (default pattern matches all lines)

awk '/chevy/ {print $3, $1}' testfile - print third and first field of lines matching regular expression
awk '$3 == 65' testfile - print only lines with a third field value of 65
awk '$5 <= 3000' testfile - print only lines with a fifth field value that is less than or equal to 3000

awk '{print $1}' testfile - print first field of every record
awk '{print $3 $1}' testfile - no comma between fields: values are concatenated with no separator
awk '{print $3, $1}' testfile - the comma inserts the output field separator (variable OFS, default is space)

awk -F, '{print $2}' testfile - specifies that , is input field separator, default is space or tab
awk '$2 ~ /[0-9]/ {print $3, $1}' testfile - searches for reg-exp (a digit) only in the second field

awk '{printf "%-30s%20s\n", $3, $2}' testfile - print 3rd field left-justified in a 30 character field, 2nd field right-justified in a 20 character field, then skip to a new line (required with printf)


awk '$3 <= 23' testfile - prints lines where 3rd field has a value <= 23
awk '$3 <='$var1' {print $3}' testfile - $var1 is a shell variable, not an awk variable, e.g. first execute: var1=23

awk '$3<='$2' {$3++} {print $0}' testfile - if field 3 <= argument 2 then increment field 3, e.g. first execute: set xxx 23


awk '$3 > 1 && $3 < 23' testfile - prints lines where 3rd field is in range 1 to 23

awk '$3 < 2 || $3 > 4' testfile - prints lines where 3rd field is outside of range 2 to 4
awk '$3 < "4"' testfile - double quotes force string comparison
NF is an awk variable meaning # of fields in current record

awk '! (NF == 4)' testfile - lines without 4 fields
NR is an awk variable meaning # of current record
awk 'NR == 2, NR == 7' testfile - range of records from record number 2 to 7

BEGIN is an awk pattern meaning "before first record processed"
awk 'BEGIN {OFS="~"} {print $1, $2}' testfile - print 1st and 2nd field of each record, separated by ~
END is an awk pattern meaning "after last record processed"

awk '{var+=$3} END {print var}' testfile - sum of 3rd fields in all records
awk '{var+=$3} END {print var/NR}' testfile - average of 3rd fields in all records - note that awk handles decimal arithmetic


awk '$5 > var {var=$5} END {print var}' testfile - maximum of 5th fields in all records

sort -rk5 testfile | awk 'NR==1 {var=$5} var==$5 {print $0}' - print all records with maximum 5th field

Simple awk operations involving functions within the command line:


awk '/chevy/' testfile

# Match lines (records) that contain the keyword chevy; note that chevy is treated as a regular expression

awk '{print $3, $1}' testfile

# Pattern not specified - therefore, all lines (records) for fields 3 and 1 are displayed

# Note that the comma (,) between fields inserts the output field separator (i.e. a space)

awk '/chevy/ {print $3, $1}' testfile

# Similar to above, but for chevy

awk '/^h/' testfile


# Match lines in testfile that begin with h

awk '$1 ~ /^h/' testfile ### useful ###

# Match lines whose field #1 begins with h

awk '$1 ~ /h/' testfile


# Match lines whose field #1 contains the letter h

awk '$2 ~ /^[tm]/ {print $3, $2, "$" $5}' testfile

# Match lines whose model name (field 2) begins with t or m, and display field 3 (year), field 2 (model name) and then $ followed by field 5 (price)

--------------------------------------------------------------------------------------------------

Complex awk operations involving functions within the command line:


awk '/chevy/ {print $3, $1}' testfile

# prints 3rd & 1st fields of record containing chevy

awk '$1 ~ /^c/ {print $2, $3}' testfile

# print 2nd & 3rd fields of record with 1st field beginning with c

awk 'NR==2 {print $1, $4}' testfile

# prints 1st & 4th fields of record for record #2

awk 'NR==2, NR==8 {print $2, $3}' testfile

# prints 2nd & 3rd fields of record for records 2 through 8

awk '$3 >= 65 {print $3, $1}' testfile

# prints 3rd & 1st fields of record with 3rd field >= 65

awk '$5 >= "2000" && $5 < "9000" {print $2, $3}' testfile

# prints 2nd & 3rd fields of record within range of 2000 to under 9000

Friday, April 17, 2009

Block IP addresses using IPtables

Block a particular IP address:
#service iptables start
#iptables -I INPUT -s 10.1.24.4 -j DROP

This command will simply drop any packet coming from the address 10.1.24.4

To list the chains:
#iptables -L -n

To make the rule persistent:

#service iptables status
#iptables-save (copy output)
#emacs /etc/sysconfig/iptables (paste output)
#service iptables restart

Make sure the iptables service starts by default, as shown below.
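
On Red Hat style systems this can be done with chkconfig (a minimal sketch; which run levels the service is enabled in depends on your defaults):

#chkconfig iptables on
#chkconfig --list iptables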

Monday, March 30, 2009

Creating bulk users in Linux

Today I have configured NX server.
Now the next task is to create users, and that really is a very time-consuming and boring task.

Usually you would use the useradd command to create a new user or update default new-user information from the command line.

So I explored Linux and searched on Google, and found a few scripts to do this. But later I found one good and easy solution.

Here it is:

Update and create new users in bulk.

The newusers command reads a file of username and clear-text password pairs and uses this information to update a group of existing users or to create new users. Each line is in the same format as the standard password file.

This command is intended to be used in a large system environment where many accounts are updated at a single time (batch mode). Since usernames and passwords are stored in clear text, make sure only root can read/write the file, using the chmod command:
# touch /root/bulk-user-add.txt
# chmod 0600 /root/bulk-user-add.txt


Create a user list as follows. Open file:
# emacs /root/bulk-user-add.txt

Append username and password:
sanjay:mypass99:555:555:Sanjay Singh:/home/Sanjay:/bin/bash
frampton:mypass99n:556:556:Frampton Martin:/home/Frampton:/bin/bash
----
--
---
barun:mypass99:560:560:Barun Ghosh:/home/Barun:/bin/bash

Now create users in batch:
# newusers /root/bulk-user-add.txt
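
To confirm the accounts were created, spot-check a few of the sample usernames from the list above (a quick sketch):

# id sanjay
# grep barun /etc/passwd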

Read man page of newusers for more information.
Maybe I will automate the entire procedure using a PHP script.

Friday, May 02, 2008

Checking and rebuilding a failed Linux software RAID

# cat /proc/mdstat
# mdadm -D /dev/md0
Details: http://sitearticles.com/cms/show/43.html
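
The link above covers the rebuild itself; the usual sequence is to mark the failed member, remove it, and add the replacement back so the array resyncs (a minimal sketch; /dev/md0 and /dev/sdb1 are hypothetical device names, adjust to your layout):

# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb1
# watch cat /proc/mdstat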

ssh commands to log in without the password

# ssh-keygen -t rsa
* This will generate your id_rsa and id_rsa.pub in the .ssh directory in your home directory
* copy the id_rsa.pub to the .ssh directory of the remote host you want to logon to as authorized_keys2
* If you have more than one host from which you want to connect to the remote host, you need to add each local host's id_rsa.pub as a line in the authorized_keys2 file on the remote host. The command for the 1and1 servers is:
# scp .ssh/id_rsa.pub [email protected]:./.ssh/authorized_keys2
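
On systems that ship it, ssh-copy-id does the copy for you and appends the key to the remote authorized keys file instead of overwriting it (a minimal sketch, using the same host as above):

# ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]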

:)

Friday, April 11, 2008

Changing Run Levels

On production environments run level 3 is normally used, and it rarely gets changed.
Runlevel and usage:
  • 0 — Halt
  • 1 — Single-user mode
  • 2 — Not used (user-definable)
  • 3 — Full multi-user mode
  • 4 — Not used (user-definable)
  • 5 — Full multi-user mode (with an X-based login screen)
  • 6 — Reboot
Check the run level using:
#who -r or #runlevel

Change the run level using:
#telinit 5 or
open file
#emacs /etc/inittab
Look for the default runlevel entry, initdefault, which looks as follows:
id:3:initdefault:
Replace run level x with y (where x is the current run level and y is the level you want to set).

Set services per run level using:
#ntsysv or
#chkconfig or
#serviceconf
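
For example, chkconfig can turn a service on or off in specific run levels (a minimal sketch; httpd is just an example service name):

#chkconfig --level 35 httpd on
#chkconfig --list httpd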

Tuesday, April 01, 2008

How To: Transfer your PuTTY settings between computers

Putty stores its settings in the Windows registry. To save a backup of your Putty settings, you'll need to export this registry key to a file.

HKEY_CURRENT_USER\Software\SimonTatham

Steps :
1. Click Start->Run and type "RegEdt32" in the "Open" dialog. Click "Ok"
2. Once RegEdt32 starts, you'll be presented with the Registry Editor window.
3. Press "Ctrl+F" to bring up the Find dialog. Enter the name of the key, "SimonTratham" in the "Find What" field, and make sure only "Keys" is checked in the "Look At" section of the dialog. Finally, click "Find Next"
4. The search may take a while, reminding us that the Windows Registry is a large and mysterious place where dragons be. Let's use these few seconds to reflect on the fact that you should never, ever, never change things in the registry unless you are absolutely, positively, totally, completely, 100% dead sure that you know exactly what you're doing. When the search completes we'll see the key name for which we're looking.
5. Click File->Export. Give your file an appropriate name like "putty.reg" and click "Save"
6. We're done! Save the putty.reg file somewhere safe. The file doesn't contain any passwords or actual SSH key values so, it's relatively safe from prying eyes. Still, it does contain your configuration and that kind of data is a private matter.


Importing Your PuTTY Configuration:
To import, copy putty.reg to the other computer and double-click it (or right-click and choose Merge). Windows will ask you for confirmation that you want to import this set of registry values. We know this file is safe because we created it, but you should never import registry information from an unknown source.
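
If you prefer the command line, the same export and import can be done with reg.exe from a Command Prompt (a minimal sketch; the key path is the one given above):

reg export HKCU\Software\SimonTatham putty.reg
reg import putty.reg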
