Linux Commands - mkdir | rmdir | touch | rm | cp | more |
less | head | tail | cat
mkdir: Create Directory
rmdir: Remove Directory
touch: Create a file
rm: Remove file
cp: Copy file
more: View file content one page at a time
less: View file content (can scroll forward and backward)
head: Display first 10 lines of file
tail: Display last 10 lines of file
mkdir:
A user can create a directory with the mkdir command.
Example:
hadoopguru@hadoop2:~$ mkdir hadoop_test
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hadoop_test
data hive
datanode hive-0.11.0-bin.tar.gz
derby.log mahout-distribution-0.8.tar.gz
flume metastore_db
hadoop-1.2.1 namenode
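mkdir can also create several directories in one command. A small sketch (the extra directory names and the ls -d pattern check below are made up for illustration):
hadoopguru@hadoop2:~$ mkdir hadoop_test1 hadoop_test2
hadoopguru@hadoop2:~$ ls -d hadoop_test*
hadoop_test  hadoop_test1  hadoop_test2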
touch:
Creating a file:
An empty file can be created with the touch command.
Example:
hadoopguru@hadoop2:~$ cd hadoop_test/
hadoopguru@hadoop2:~/hadoop_test$ touch pig.txt
hadoopguru@hadoop2:~/hadoop_test$ touch hive.txt
hadoopguru@hadoop2:~/hadoop_test$ touch mahout
hadoopguru@hadoop2:~/hadoop_test$ ls -l
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 hive.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 mahout
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 pig.txt
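Although touch is handy for creating empty files, its original job is updating timestamps: run on a file that already exists, it leaves the contents alone and only updates the modification time. A small sketch continuing the listing above (the new time shown is illustrative):
hadoopguru@hadoop2:~/hadoop_test$ touch pig.txt
hadoopguru@hadoop2:~/hadoop_test$ ls -l pig.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:20 pig.txt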
1. touch -t :
One can create a new file with a user-defined date and time using touch -t; the timestamp is given as [[CC]YY]MMDDhhmm[.ss], so 201310272350 in the example below means 27 Oct 2013, 23:50.
Example:
hadoopguru@hadoop2:~/hadoop_test$ touch -t 201310272350 testemp1
hadoopguru@hadoop2:~/hadoop_test$ ls -l
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 hive.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 mahout
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 pig.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 27 23:50 testemp
-rw-rw-r-- 1 hadoop hadoop 0 Oct 27 23:50 testemp1
rm:
Removing a file:
A file can be removed with the rm command.
Note: rm does not move files to a trash or recycle bin; a file once removed is gone and cannot easily be recovered, so be careful before removing anything.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1 testemp2
hadoopguru@hadoop2:~/hadoop_test$ rm testemp2
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1
rm -i :
With the -i (interactive) option, rm asks the user for confirmation before removing each file.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1
hadoopguru@hadoop2:~/hadoop_test$ rm -i testemp1
rm: remove regular empty file `testemp1'? y
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp
rm -r :
With the -r (recursive) option, rm can remove any file or directory, including a directory and everything inside it.
Example:
test_dir is a directory, which can be removed with the rm -r command
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt test_dir testemp test_empty
hadoopguru@hadoop2:~/hadoop_test$ rm -r test_dir
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp test_empty
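Unlike rmdir, rm -r also removes directories that are not empty, deleting everything inside them. A hypothetical illustration (demo_dir and its files are invented for the example; exact error wording may differ between versions):
hadoopguru@hadoop2:~/hadoop_test$ mkdir demo_dir
hadoopguru@hadoop2:~/hadoop_test$ touch demo_dir/a.txt demo_dir/b.txt
hadoopguru@hadoop2:~/hadoop_test$ rm demo_dir
rm: cannot remove `demo_dir': Is a directory
hadoopguru@hadoop2:~/hadoop_test$ rm -r demo_dir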
cp:
Copy a file:
The cp command is used to copy a file.
If the target is a directory, the source file is copied into that directory.
Example
1: Copy from file to file
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt testemp Test_file1.txt
mahout Test_dir test_empty Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ cp Test_file1.txt Test_file2.txt
2: Target is a directory
hadoopguru@hadoop2:~/hadoop_test$ cd Test_dir
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ ls
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ cd ..
hadoopguru@hadoop2:~/hadoop_test$ cp Test_file1.txt Test_dir
hadoopguru@hadoop2:~/hadoop_test$ cd Test_dir
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ ls
Test_file1.txt
3: To copy a whole directory, use the cp -r command.
Example:
Case I: If the target directory does not exist, cp -r creates it as a copy of the source directory.
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir2 Test_file1.txt
mahout Test_dir testemp Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ cp -r Test_dir Test_dir1
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1/
Test_file1.txt
Case II: If the target directory already exists, the source directory is copied inside it (note that Test_dir1 ends up inside Test_dir2 below).
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ cp -r Test_dir1 Test_dir2
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_dir1 Test_file2.txt
4: File(s) from one directory to another:
Files can be copied from one directory to another with the same cp command; when copying one or more files, the last argument must be the target directory.
Example:
Case I: Copy single file
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ cp Test_dir1/Test_file1.txt Test_dir2
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file1.txt Test_file2.txt
Case II: Copy multiple files
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ cp hive.txt pig.txt Test_dir1/Test_file1.txt Test_dir2
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
hive.txt pig.txt Test_file1.txt Test_file2.txt
5: Interactive copy:
The interactive option (cp -i) asks the user for confirmation before overwriting an existing file.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt Test_dir Test_dir1 Test_dir2
hadoopguru@hadoop2:~/hadoop_test$ cp -i hive.txt mahout
cp: overwrite `mahout'? no
hadoopguru@hadoop2:~/hadoop_test$
more:
The more command is used to view the contents of a file that is longer than one page, one screenful at a time.
Press the space bar to advance a page, Enter to advance one line, and q to quit.
Example:
hadoop@hadoop2:~$ ls
AboutHadoop.txt                  hadoop-1.2.1
apache-flume-1.4.0-bin.tar.gz.1  hadoop-1.2.1-bin.tar.gz
aveo                             hadoop_test
count.txt                        hive
data                             hive-0.11.0-bin.tar.gz
datanode                         mahout-distribution-0.8.tar.gz
derby.log                        metastore_db
flume                            namenode
hadoop@hadoop2:~$ more AboutHadoop.txt
Hadoop is framework written in Java.
1. Scalable fault tolerant distributed system for large data storage & processing.
2. Designed to solve problems that involve storing, processing & analyzing large data (Terabytes,
petabytes, etc.)
3. Programming Model is based on Google's Map Reduce.
4. Infrastructure based on Google's Big Data & distributed file system.
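The AboutHadoop.txt file above fits on a single screen; for a longer file, more pauses after each screenful and shows a --More-- prompt with the percentage of the file read so far. A rough sketch (the file name and percentage are illustrative, not from the original session):
hadoop@hadoop2:~$ more /etc/services
... first screenful of the file ...
--More--(7%)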
less:
less is another command for viewing file content; unlike more, it also allows scrolling backwards. After viewing the file, press q to quit.
Example:
hadoop@hadoop2:~$ less count.txt
(the contents of count.txt are displayed)
one
two
three
four
five
six
seven
eight
nine
ten
eleven
twelve
count.txt (END)
Type "q:" & <Hit Enter> to come out of this.
hadoop@hadoop2:~$
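A few navigation keys are worth knowing inside less (these are the standard less key bindings):
Space or Page Down : forward one page
b or Page Up       : back one page
/pattern           : search forward for a pattern (n repeats the search)
q                  : quit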
head:
Displays the first ten lines of a file.
Example:
hadoop@hadoop2:~$ head count.txt
one
two
three
four
five
six
seven
eight
nine
ten
tail:
Displays the last ten lines of a file.
Example:
hadoop@hadoop2:~$ tail count.txt
three
four
five
six
seven
eight
nine
ten
eleven
twelve
More Commands:
head -n N or tail -n N : Displays the first or last N lines of a file.
head -c N or tail -c N : Displays the first or last N bytes of a file.
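A minimal sketch of these options, and of cat (which the title lists but the examples above do not show), against the same count.txt file; the output here is reconstructed, not taken from the original session:
hadoop@hadoop2:~$ head -n 3 count.txt
one
two
three
hadoop@hadoop2:~$ tail -n 2 count.txt
eleven
twelve
hadoop@hadoop2:~$ head -c 8 count.txt
one
two
cat count.txt would simply print all twelve lines at once, and cat file1 file2 concatenates two files.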
-----------------------------------------------------------------------------------------------------------------
Linux Commands - pwd | cd | ls | mkdir | rmdir | pushd |
popd | clear
Let's talk about basic Linux commands:
pwd : Print working directory
cd : Change directory
ls : List Directory
mkdir: Make Directory
rmdir: Remove Directory
pushd: pushd adds a directory to the stack and changes to the new current directory
popd : popd removes the top directory from the stack and changes to the one now on top
clear: Clear the screen (and user irritation)
pwd (print working directory):
print working directory (pwd) displays your current working directory.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
cd (Change directory):
The change directory (cd) command is used to change your current working directory.
Example: inside /home/hadoop there is a folder named hadoop-1.2.1
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
1. cd ~ or cd:
The cd command without any target directory takes you to your home directory; cd ~ has the same effect.
Example: our current working directory was /home/hadoop/hadoop-1.2.1
I. hadoopguru@hadoop2:~/hadoop-1.2.1$ cd ~
hadoopguru@hadoop2:~$ pwd
/home/hadoop
II. hadoopguru@hadoop2:/$ pwd
/
hadoopguru@hadoop2:/$ cd
hadoopguru@hadoop2:~$ pwd
/home/hadoop
2. cd .. :
The cd .. command takes you to the parent directory, the one just above your current working directory. Levels can be chained with slashes: cd ../.. goes up to the parent of the parent directory, cd ../../.. goes up three levels, and so on.
Example:
I. hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd ..
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
II. hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd ../..
hadoopguru@hadoop2:~$ pwd
/home/hadoop
III. hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd ../../..
hadoopguru@hadoop2:/home$ pwd
/home
3. cd - :
To go back to the previous working directory, use cd - (it also prints the directory it switches to).
Example:
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd -
/home/hadoop
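Running cd - a second time switches back, so repeated cd - toggles between the two most recent directories. Continuing the session above (this step is not part of the original transcript):
hadoopguru@hadoop2:~$ cd -
/home/hadoop/hadoop-1.2.1/conf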
Slash (/) usage makes a difference.
Possible scenarios:
1. If the user wants to change to a directory under the root directory:
Sol: Starting a path with a slash (/) makes it absolute, so it is always resolved from the root of the file tree.
Example:
I. hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ cd /home
hadoopguru@hadoop2:/home$ pwd
/home
II. hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd /bin
hadoopguru@hadoop2:/bin$ pwd
/bin
2. If the user wants to change to a directory inside the current working directory:
Sol: Omit the leading slash (/) before the directory name.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
3. If the user wants to change to a directory under the root directory, and the current working directory is the root directory:
Sol: Either of the two approaches above works; in this case the leading slash (/) makes no difference.
Example:
I. Without slash(/)
hadoopguru@hadoop2:/$ pwd
/
hadoopguru@hadoop2:/$ cd home
hadoopguru@hadoop2:/home$ pwd
/home
II. With slash (/), same result as above
hadoopguru@hadoop2:/$ pwd
/
hadoopguru@hadoop2:/$ cd /home
hadoopguru@hadoop2:/home$ pwd
/home
ls (List directory contents):
A user can list the contents of a directory with the ls command.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoop-1.2.1
1. ls -a:
To list all files, including hidden files (names that start with a dot), use the -a option with ls (i.e. ls -a).
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls -a
. flume
.. hadoop-1.2.1
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
.bash_history hive-0.11.0-bin.tar.gz
.bash_logout .hivehistory
.bash_profile mahout-distribution-0.8.tar.gz
.bashrc metastore_db
.cache namenode
data .profile
datanode .ssh
derby.log .viminfo
2. ls -l:
To list files with details (permissions, links, owner, group, size, and modification time), use the ls -l command.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls -l
total 261824
-rw-rw-r-- 1 hadoop hadoop 60965956 Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4096 Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4096 Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4096 Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4096 Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 38096663 Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4096 Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 59859572 Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 109137498 Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 23:30 namenode
3. ls -lh or ls -hl or ls -l -h or ls -h -l:
ls -lh shows file sizes in human-readable form (K, M, G). The -l and -h options can be combined or given separately, in either order, with the same result.
Example:
A. ls -lh :
hadoopguru@hadoop2:~$ ls -lh
total 256M
-rw-rw-r-- 1 hadoop hadoop 59M Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4.0K Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4.0K Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4.0K Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 37M Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4.0K Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 58M Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 105M Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 23:30 namenode
B. ls -l -h :
hadoopguru@hadoop2:~$ ls -l -h
total 256M
-rw-rw-r-- 1 hadoop hadoop 59M Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4.0K Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4.0K Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4.0K Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 37M Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4.0K Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 58M Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 105M Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 23:30 namenode
C. ls -hl :
hadoopguru@hadoop2:~$ ls -hl
total 256M
-rw-rw-r-- 1 hadoop hadoop 59M Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4.0K Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4.0K Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4.0K Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 37M Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4.0K Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 58M Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 105M Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 23:30 namenode
D. ls -h -l :
hadoopguru@hadoop2:~$ ls -h -l
total 256M
-rw-rw-r-- 1 hadoop hadoop 59M Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4.0K Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4.0K Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4.0K Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 37M Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4.0K Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 58M Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 105M Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 23:30 namenode
mkdir (Make directory):
A user can create a directory with the mkdir command.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ mkdir aveo_hadoop
hadoopguru@hadoop2:~$ ls -l
total 261828
-rw-rw-r-- 1 hadoop hadoop 60965956 Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 24 01:22 aveo
drwxrwxr-x 2 hadoop hadoop 4096 Oct 27 10:10 aveo_hadoop
drwxr-xr-x 6 hadoop hadoop 4096 Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4096 Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4096 Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4096 Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 38096663 Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4096 Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 59859572 Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 109137498 Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 23:30 namenode
1. mkdir -p:
mkdir -p creates any missing parent directories along the given path.
Example:
hadoopguru@hadoop2:~$ mkdir -p aveo_hadoop/aveo_hadoop1/aveo_hadoop2
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoopguru@hadoop2:~$ cd aveo_hadoop
hadoopguru@hadoop2:~/aveo_hadoop$ ls
aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop$ cd aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ ls
aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ cd aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1/aveo_hadoop2$ pwd
/home/hadoop/aveo_hadoop/aveo_hadoop1/aveo_hadoop2
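For comparison, plain mkdir refuses to create a directory whose parent does not exist; that is exactly the case -p handles. A hypothetical illustration (demo1 does not exist yet; the exact error text may vary between versions):
hadoopguru@hadoop2:~$ mkdir demo1/demo2
mkdir: cannot create directory `demo1/demo2': No such file or directory
hadoopguru@hadoop2:~$ mkdir -p demo1/demo2
hadoopguru@hadoop2:~$ ls demo1
demo2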
rmdir (Remove directory):
An existing directory can be deleted with the rmdir command, but only if the directory is empty.
Example:
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
aveo_hadoop hive-0.11.0-bin.tar.gz
data mahout-distribution-0.8.tar.gz
datanode metastore_db
derby.log mydir
flume namenode
hadoop-1.2.1
hadoopguru@hadoop2:~$ rmdir mydir/
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
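If the directory is not empty, rmdir refuses to remove it. For instance, aveo_hadoop still contains aveo_hadoop1 at this point, so trying to remove it would fail with something like the following (the error text is typical; exact wording may vary):
hadoopguru@hadoop2:~$ rmdir aveo_hadoop
rmdir: failed to remove `aveo_hadoop': Directory not empty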
rmdir -p:
rmdir -p removes the directory at the end of the given path and then each of its parent directories in turn, as long as each one is left empty.
Example:
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoopguru@hadoop2:~$ cd aveo_hadoop
hadoopguru@hadoop2:~/aveo_hadoop$ ls
aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop$ cd aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ ls
aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ cd aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1/aveo_hadoop2$ cd
hadoopguru@hadoop2:~$ rmdir -p aveo_hadoop/aveo_hadoop1/aveo_hadoop2/
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoop-1.2.1
pushd and popd:
Both commands work on a common stack of previously visited directories, and each prints the stack after it runs.
pushd: pushd adds a directory to the stack and changes to that new current directory.
popd: popd removes the top directory from the stack and changes to the directory that is now on top.
Example:
pushd :
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
hadoopguru@hadoop2:~/hadoop-1.2.1$ pushd /bin
/bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/bin$ pushd /lib
/lib /bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/lib$ pushd /hadoop
/hadoop /lib /bin ~/hadoop-1.2.1
popd :
hadoopguru@hadoop2:/hadoop$ popd
/lib /bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/lib$ popd
/bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/bin$ popd
~/hadoop-1.2.1
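Finally, clear (listed at the top of this post) needs no options: it simply clears the terminal screen, and pressing Ctrl+L in bash does the same thing.
hadoopguru@hadoop2:~/hadoop-1.2.1$ clear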