LINUX

This document provides instructions and summaries for various Linux filesystem and disk management tasks. It covers: checking and mounting filesystems with blkid, fstab, lsblk and mount; viewing mounted filesystems via mount, mtab and /proc/mounts; formatting disks and creating filesystems with mkfs; managing swap space with swapon, swapoff and free; checking filesystems with fsck; creating and managing Linux RAID arrays with mdadm; encrypting disks and partitions with cryptsetup; and managing LVM volumes with pvcreate, vgcreate, lvcreate, lvextend and resize2fs.


blkid => check disks UUID

/etc/fstab => contains mount entries: <device (/dev/sda1) or UUID> <mount point> <type>
<options> <dump (used by the dump backup utility)> <pass (order in which fsck checks
filesystems at boot time)>
- You can mount using the UUID, label or device name
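Putting the fields above together, a minimal sketch of an fstab entry that mounts by UUID (the UUID, mount point and options here are placeholders, not from the original notes):

```
# <device>                                <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 /data          ext4    defaults   0       2
```

The pass value 2 means fsck checks this filesystem after the root filesystem (which uses 1).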
lsblk => list block devices and their partitions
mount => display all mounted filesystems
mount -a => causes all filesystems mentioned in fstab to be mounted as indicated,
except for those whose line contains the noauto keyword.
Using UUIDs helps when the disk naming order changes or when disks are added and
removed
- Remote filesystems can also be mounted, e.g. //server/share (Samba), server:/export
(NFS) or user@server:/home/user (SSHFS) onto /mnt/share
- There are 3 ways you can view mounted fs: mount, /etc/mtab, /proc/mounts
- There are virtual fs like cgroup, fuse, proc, selinux and real ones like / /boot /home
/var
- the sync command is used to flush write buffers to disk. Use it before ejecting a
USB drive
- eject command to eject a removable device under software control
- Virtual memory in Linux can be a disk partition (type 82) or a swap file. Managed
with the swapon/swapoff commands
- You can format a swap area with mkswap. A swap file can be created with the dd or
fallocate command and should be owned by root with mode 0600
#fallocate -l 10M /mnt/swap
#mkswap /mnt/swap
#chmod 600 /mnt/swap
#swapon /mnt/swap
- add it to the fstab file to make it persistent
- free command displays available ram and swap
- fdisk to create partitions ; wipefs to erase filesystem signatures

fsck to check a fs, but it should be unmounted first (util-linux)


fsck.xfs to check xfs file systems and fsck.btrfs (xfsprogs and btrfs-progs)
- You can force a fs check by creating a forcefsck file at / . Once the check is
completed the file is deleted
- df -hT displays fs type
#fsck -f /dev/sdb1 or #fsck.ext4 /dev/sdb1
- mkfs -c => checks for bad blocks before fs creation ; -m sets the % of disk space
reserved for root (helps avoid fragmentation)
- tune2fs -c => configure max mount count between forced fs checks ; -i for max
interval between fs checks ; -m % of reserved space
- before you reboot you can use dumpe2fs to print the fs metadata for ext2/3/4
filesystems, so if a forced check is pending you can see the mount count and check
interval before you reboot. #dumpe2fs /dev/sda | less
#mkfs.ext4 -U <UUID> -m 1 <%reserved> /dev/sdb1
#tune2fs -L <label> /dev/sdb1 => setup a label for disk
- debugfs => used to inspect the inner workings of ext filesystems. 'ls -d' can show
deleted files. When we remove files we only delete the link from the metadata
!$ represents the last argument of the previous command
- smartmontools package to monitor physical disks ; #smartctl or smartd (runs in the
background)
#smartctl -i or -H /dev/sdb
#smartd to schedule checks and run in the background

#xfs_info /dev/sdb1 => metadata shown after fs creation


- an fs check doesn't run on boot with xfs
#xfs_repair -L => zeroes the log (use when the log is corrupt) ; -n reports what
would be done without doing it. Some systems have xfs_check. #xfs_repair /dev/sdb1
#xfsdump -l 0 (full backup) or -l 1 (first incremental) -f /var/backup (file to
back up to) /dev/sdb1
#xfsrestore -f /var/backup -L <label> or -S <session ID> /data-xfs/
#xfsrestore -I => inventory or -t to test the backup

#btrfs --version
#btrfsck /dev/sdb1
#btrfs fi show -m => list devices
- copy on write is also available in ZFS. It allows instant backups (snapshots).
It may not work well for large, frequently rewritten files ; you can disable it
per file with the 'NoDataCoW' attribute .
#chattr +C /data-btrfs/nocow
#lsattr -d /data-btrfs/nocow
- a btrfs filesystem is always a logical volume and we can easily extend that volume across multiple disks

#btrfs device add /dev/sdd /data-btrfs/


#btrfs balance start -d (data) -m (metadata) /data-btrfs/ => balance the data
across drives
- #df -h may not be sufficient ; use #btrfs filesystem df /data-btrfs

#apt install autofs


- autofs is used to auto-mount remote shares ; the master map is /etc/auto.master
</corp /etc/auto.corp --timeout=900>
- the shares are defined in the auto.* file .
<pdf -fstype=nfs 192.168.0.53:/pdf>
- once completed you have to restart the service
- autofs is a client-side utility, but you can install it on one server and connect
to it to access the share
--------------------
#showmount -e <ip> => see the exports on NFS server ; cd <ip>
#mkdir /net
#vi /etc/auto.master { /corp /etc/auto.corp --timeout 600 }
#cp auto.misc auto.corp
#vi auto.corp { pdf -fstype=nfs 192.168.0.53:/pdf }
#systemctl restart autofs
- the directory specified does not show in a listing of /corp, but you can still
access it (autofs mounts on demand)
----------------------
- You can create iso files with mkisofs
#mkisofs -V BOOT <label> -J -r -o <output> /tmp/boot.iso /boot
#mount -o loop /tmp/boot.iso /mnt
- you can also add it to the fstab file
{/tmp/boot.iso /mnt iso9660 auto,loop 0 0}
{/tmp/dvd.iso /mnt udf auto,loop 0 0} = for udf, dvd files
- a full disk or partition can be encrypted with LUKS (linux unified key setup)
#cryptsetup -v -y luksFormat /dev/sdb1
#cryptsetup luksOpen /dev/sdb1 secure
#ls -l /dev/mapper/secure
#mkfs.ext4 /dev/mapper/secure
#mount /dev/mapper/secure /secure
- instructions for cryptsetup can be added to the /etc/crypttab file
<mapper name> <device or UUID> <passphrase/keyfile ('none' = prompt)>
secure /dev/sdb1 none
- luksFormat overwrites the data on the disk

- Raid levels : Linear uses partitions of different sizes and the volume is expanded
across all disks ; spare disks are not supported - Raid 0 ; Raid 1 ; Raid 4/5/6
- Partition type : 0xDA Non-FS (recommended) ; 0xFD Raid auto
- You can use sfdisk , it can read partitioning instructions from standard input,
so we can dump the partition table from one disk to another .
#fdisk /dev/sdb
#sfdisk -d /dev/sdb | sfdisk --force /dev/sdc
- /proc/mdstat displays the existing raid devices
#mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1
/dev/sdc1
- you can include a hot spare by adding --spare-devices=1 /dev/sdd1
#lsmod => Show the status of modules in the Linux Kernel
---------
#cat /proc/mdstat
#lsmod | grep raid
---------
- to persist raid config , save the config in /etc/mdadm.conf file
#mdadm --detail --scan >> /etc/mdadm.conf
#mdadm --stop /dev/md0 => stop the raid
#mdadm --assemble --scan => start the raid
--------------------
#mkfs.btrfs -m raid1 -d raid1 /dev/sdd /dev/sde

- In the lsblk output, the major number identifies the device type and the driver to
be used, while the minor number distinguishes the individual device instances
- hdparm (ATA) and sdparm (SCSI) are used to tune disk performance
#hdparm -tT /dev/sda1
#sdparm (--command=eject) /dev/sda
#sdparm --get=WCE /dev/sda => to see if write cache is enabled
- vm.dirty_background_ratio : the % of system memory that can be filled with dirty
pages before they are written to disk by system background processes.
- vm.dirty_ratio : max % of system memory that can be filled with dirty pages
before the writing process itself must write them to disk.
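A hedged sketch of tuning these two settings persistently; the file name and values below are illustrative examples, not recommendations from the original notes:

```
# /etc/sysctl.d/99-dirty.conf -- example values only
vm.dirty_background_ratio = 5
vm.dirty_ratio = 20
```

Apply with #sysctl --system (or #sysctl -p <file>) and verify with #sysctl -a | grep vm.dirty.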
#cat /proc/vmstat |grep nr_dirty
#sysctl -a | grep vm.dirty
- The iSCSI target is the server that shares disks or LVs on the network.

- targetd (service) & targetcli


#targetcli
>backstores/block/ create my_san /dev/sdb
>iscsi/ create iqn.xxxxxx:san1
- if you do not add the acl then the LUN is ro
>cd iqn.xxxxxx:san1/tpg1/
>luns/ create /backstores/block/my_san
>acls/ create iqn.xxxxxx.client:san1
OR
>set attribute demo_write_protect=0
- the iSCSI initiator is the client
#iscsiadm --mode discovery --type sendtargets --portal <ip> --discover
#iscsiadm --mode node iqn.xxx:san1 --portal <ip> --login
- iscsid is the service ; iscsid.conf is the config file
- WWN or WWID (World Wide Name) is a unique 8- or 16-byte number used to identify
storage devices, similar to a MAC address.

- partitions need to be of type 8E


#pvck => check LVM physical volume metadata
---------------------
#pvcreate /dev/sdb1
#vgcreate vg1 /dev/sdb1
#vgscan ; #vgs (more detail)
#lvcreate -n data_lv -L 750m vg1
#mkfs.ext4 /dev/vg1/data_lv
#mount /dev/vg1/data_lv /data
----------------------
#pvcreate /dev/sdc1
#vgextend vg1 /dev/sdc1
#lvextend -L +1000M /dev/vg1/data_lv
#resize2fs /dev/vg1/data_lv
#df -h
----------------------
- VGs are activated by default on creation but can be activated or deactivated
manually
#vgchange -a y vg1
#vgchange -a n vg1
#lvscan / pvscan / vgscan
- snapshot volumes need to be created in the same VG as the origin volume. The
size needs to be big enough to store the data changes during the lifetime of the
snapshot.
-------------------------
#lvcreate -L 200 -s -n backup /dev/vg1/data
#mkdir /mnt/backup
#mount /dev/vg1/backup /mnt/backup
#tar -cf /tmp/backup.tar /mnt/backup
#umount /mnt/backup
#lvremove /dev/vg1/backup

=============
#vmstat => r: total number of processes waiting for CPU time
b: total number of blocked processes, waiting for disk or network IO
swpd : used virtual memory
Free : free virtual memory
Buff : Memory used as buffers (directory contents, permissions)
Cache : Memory used as cache (contents of files)
Si : memory swapped in from disk per second
So : memory swapped out to disk per second
Bi : Blocks in per second
Bo : Blocks out per second
In : Interrupts per second
Cs : Context switches per second
#vmstat -S M => displays memory usage in MB
#vmstat -a => display active/inactive memory
---------------------
#free -m ; sync (flushes buffers to disk)
#bash -c "echo 3 > /proc/sys/vm/drop_caches"
---------------------
ls -R / => populates the buffer cache (reads directory metadata)
#vmstat 5 3
#uptime [1,5,15min]
#who , w , who -l , who -T (if messaging is on/off)
#mesg / mesg y
#tty => which terminal are you in
#netstat -alt => -a all, -l listening, -t TCP ; -x UNIX sockets
#ps -e => all processes running on the system
#pstree
- procps package

CLI MONITORING TOOLS


- limitations of the out-of-the-box tools : real time only ; no historical data ;
reactive
- with sysstat you can use tools like iostat and mpstat, and you can collect
information every 10 minutes to build a realistic picture of performance over time.
- Real time monitoring : vmstat ps top uptime w lsof(-i displays listening ports)
netstat/iptables
#netstat -i => show tx/rx packets and network usage ; -s per-protocol statistics
#iptables -nvL => counters related to firewall rules
#watch -d -n 2 iptables -nvL => rerun every 2 seconds ; -d highlights changes since
the last refresh
- using sysstat you can keep a history of data
- data is collected every 10 min and can be read with sar. sa1 collects data every
10 min and sa2 summarizes daily info . sa1 and sa2 are enabled through cron
- data is stored at /var/log/sysstat/sa<day number> (/var/log/sa on CentOS)
- /etc/sysstat/sysstat and /etc/default/sysstat are the config files
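The cron wiring mentioned above typically looks like the sketch below; the exact sa1/sa2 paths vary by distribution (Debian and RHEL ship different ones), so treat these paths as assumptions:

```
# /etc/cron.d/sysstat (illustrative sketch; sa1/sa2 paths differ per distro)
*/10 * * * *  root  /usr/lib/sysstat/sa1 1 1    # sample counters every 10 minutes
53 23 * * *   root  /usr/lib/sysstat/sa2 -A     # write the daily summary
```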
- Once installed we can use iostat (generates cpu and disk infos) ; mpstat (more
detailed cpu infos) ; pidstat (process id information) ; cifsiostat ( samba share
IO) ; nfsiostat ( nfs export IO) ; sar (collects and displays system activities)
#sar -V = version;
#sar -u = cpu infos ;
#sar -q = load average info;
#sar -q 1 3 = load average 3 times with 1 second interval;
#sar -q -f /var/log/sysstat/sa15 = load average from day 15 of the current month ;
#sar -w processes created per second ;
#sar -n DEV = network interfaces statistics
#sar -b = overall IO activity
#sar -q -s 10:00:00 -e 11:00:00 = load average from 10 to 11 on the current day

COLLECTD
#apt install collectd rrdtool
#yum install -y collectd collectd-rrdtool rrdtool collectd-web httpd
- /etc/collectd.conf is the config file
- on Centos make sure Apache server is listening on ipv4
- set web permissions in collectd.conf in the Apache directory section by adding
"Require ip <ip>" (the network part of the address is enough)

MONITORING WITH NAGIOS


- Nagios can monitor NTP sync, MySQL servers , network nodes ...
#apt install tasksel : to ease LAMP installation
#tasksel OR tasksel install lamp-server
- If you get a disk critical event, it may be because /home/user/.gvfs is not
accessible to the nagios user account .
To solve it go to /etc/nagios-plugins/config/disk.cfg and add <-A -i '.gvfs'>
- You can define hosts with :
define host {
        use        generic-host
        host_name  localhost
        alias      localhost
        address    127.0.0.1
}
- before restarting nagios to apply the changes , you can test the configs with
#nagios3 -v /etc/nagios3/nagios.cfg
- service entries :
define service {
        host_name            tick
        service_description  NTP
        check_command        check_ntp
        use                  generic-service
}

BASIC NETWORK ADMIN
#ifconfig
#ifconfig eth0 <ip>
#ip -6 address show
- You can add an additional ip address on an alias interface with #ifconfig eth0:1
<ip> netmask <mask> broadcast <b_ip>
#ifconfig eth0 <up/down>
#ip neighbor show
#ip link set eth0 <up/down>
- You can use ip to add route , arp entries
- to display route tables you can use #route or ip route. Names are resolved
via /etc/networks
#route -n displays numeric addresses (no name resolution)
#route add default gw <ip>
#route add -net <network> netmask <mask> gw <ip>
#route add -host <ip> reject = block access to a single host
- to make routes persistent , add them to the /etc/network/interfaces file < post-up
route add -net <net> netmask <mask> gw <ip> >
- on Centos it is in /etc/sysconfig/network-scripts/route-eth0
- There are route table flags like U up, G gateway
#arp -a / -n
- The arp command reads its output from /proc/net/arp ; 'ip n s' is short for
'ip neighbor show'
- To add an arp entry use -s and to delete one use -d . To make the config
persistent, add entries to the /etc/ethers file
- static arp entries can help prevent arp poisoning
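A sketch of the /etc/ethers format described above; the MAC address and hostname are placeholders:

```
# /etc/ethers: <MAC address> <hostname or IP>
08:00:27:aa:bb:cc  gateway.example.com
```

Entries in this file can be loaded into the ARP cache with #arp -f /etc/ethers.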
- wireless-tools package
- /etc/wpa_supplicant/wpa_supplicant.conf is the config file :
network={
    ssid="xx"
    psk="xx"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP TKIP
}
#iwlist wlan0 scan | grep ESSID => scan for ssid
#iwconfig wlan0 OR cat /proc/net/wireless => display and set adaptor information
#wpa_cli status => display status infos

ADVANCED NETWORK ADMIN


- tcpdump is used to capture packets .
#tcpdump -c 5 -i eth0 not port 22 => capture 5 packets, excluding port 22
#netstat -a = show all ; -n = numeric ; -r routes ; -l listening ; -t TCP ; -x UNIX
sockets
#lsof -i => all network connections and listening ports (requires privileges)
#lsof -iTCP:22
#lsof -i@<ip> filtering connections from or to the host <ip>
#nmap --iflist = displays local ports and routes
#nmap -sT -sS -sV -A (OS detection) --script http-title,http-enum \
--script-args http-enum.displayall <ip>
#iptables -L (list rules) ; -F (flush)
- order of lookups can be seen in /etc/nsswitch.conf
- dig or host to check name entries, and check resolv.conf along with the ifcfg-
files (PEERDNS=yes).
#dig <name> @<dns_ip>
- if a service supports TCP wrappers then access may be restricted via the
hosts.allow and hosts.deny files
#which <service>
#ldd </path/to/service name> | grep libwrap
< in.tftpd : <ip> > in hosts.allow ; < in.tftpd : ALL > in hosts.deny
- if the client appears in both files then the allow file takes precedence and
access is granted.
- On CentOS check /var/log/audit/audit.log for SeLinux logs
- dmesg is used to examine or control the kernel ring buffer
- the persistent hostname of a system is stored in /etc/hostname (Debian)
#hostname => displays the transient hostname (not persistent)
#hostnamectl => displays system info
#hostnamectl set-hostname <name>
- you need wheel membership with PolicyKit to use hostnamectl without root
privileges
- enp9s0 Ethernet PCI in bus 9 slot 0
- wlp12s0 Wlan PCI in bus 12 slot 0
- the addresses can be verified using lspci . The output is in hex
- to disable consistent network device naming you can add the HWADDR attribute to
the network script file and either rename the file ifcfg-eth0 or configure the
DEVICE name attribute. You can also edit /etc/default/grub
<GRUB_CMDLINE_LINUX="crashkernel=auto biosdevname=0 net.ifnames=0 quiet"> and update
the grub config with #grub2-mkconfig -o /boot/grub2/grub.cfg
#systemctl status NetworkManager.service (network-manager service)
#nmtui / nmcli {device wifi list} ; connection profiles with nmcli

BUILDING C PROGRAM
- Source repos are listed in /etc/apt/sources.list
- add deb and deb-src repos
* deb-src http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free
rpi
#apt-get source nmap : obtaining source packages
- Installing software compilation tools : #apt-get install build-essential OR yum
groupinstall "Development Tools"
#./configure = the configure script creates the Makefile, the instruction set to
compile for your system
#make = reads the Makefile and compiles according to its instructions
#sudo make install
- compile C program : #gcc app.c -o out
- To create a patch, copy the source code to a new version file, edit the new
version file, and use diff to compare the versions and create the patch file.
#diff -u app.c app2.c > app.patch
#patch < app.patch
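The diff/patch flow above can be sketched end to end on throwaway files; the /tmp paths and the one-line "sources" here are stand-ins, not the real app.c from the notes:

```shell
# Create two versions of a file, diff them, then patch the original.
mkdir -p /tmp/patchdemo
printf 'hello v1\n' > /tmp/patchdemo/app.c    # original version
printf 'hello v2\n' > /tmp/patchdemo/app2.c   # edited copy
cd /tmp/patchdemo
diff -u app.c app2.c > app.patch || true      # diff exits 1 when files differ
patch app.c < app.patch                       # apply the patch to the original
grep v2 app.c                                 # app.c now carries the new text
```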

BACKUP
- tar itself does not compress, but you can compress the archive during or after
creation.
- tar is used to create one file from one or more directories
#tar -c to create an archive ; -t list/test contents ; -x expand or restore
#tar -cv(verbose)f(file) <new> <location> --exclude <file>
#gzip <file> to compress ; #gunzip etc.tar.gz to expand (tar -z uses gzip)
#gzip -1 (faster, lighter compression) <file>
#bzip2 <file> to compress ; #bunzip2 etc.tar.bz2 (tar -j uses bzip2)
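An archive-and-restore round trip combining the tar and gzip flags above; the /tmp paths and file names are illustrative:

```shell
# Build a small scratch tree, archive it with gzip, list it, restore it.
mkdir -p /tmp/tardemo/etc /tmp/restore
echo "data" > /tmp/tardemo/etc/app.conf
echo "skip" > /tmp/tardemo/etc/app.log
# -c create, -z gzip on the fly, -f archive file; exclude the log file
tar -czf /tmp/etc.tar.gz --exclude='*.log' -C /tmp/tardemo etc
tar -tzf /tmp/etc.tar.gz                   # -t lists (tests) the contents
tar -xzf /tmp/etc.tar.gz -C /tmp/restore   # -x expands/restores
```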
#rsync -a(archive to maintain permissions)r(recursive)v /home/ /backup to mirror
directories
#rsync -rve ssh /home/ fred@svr1:/backup
- you can configure an rsync server (port 873). To configure it on ubuntu, edit the
file /etc/default/rsync <RSYNC_ENABLE=true>
and /etc/rsyncd.conf
{ [doc]
path = /usr/share/doc
read only = true
}
#service rsync start
#rsync -av server1::doc/ /data (on client)
- if you stop it mid-transfer and start it again, it will continue. By default,
deletions are not synced ; you can use :
#rsync -av --delete server1::doc/ /data (on client)
- dd can be used to image a disk or partition
#dd if=/dev/cdrom of=/tmp/disk.iso => copy content of cdrom to iso file
#dd if=/dev/sda of=/tmp/sda.mbr count=1 bs=512 => copy one block with a block size
of 512 bytes (the MBR)
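The count/bs mechanics above can be tried safely against /dev/zero instead of a real disk; the /tmp path is a placeholder:

```shell
# Copy exactly one 512-byte block (same shape as grabbing an MBR),
# reading from /dev/zero rather than /dev/sda.
dd if=/dev/zero of=/tmp/block.img bs=512 count=1
stat -c %s /tmp/block.img   # file size in bytes: 512
```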
- rewinding tape device: /dev/st*
- non-rewinding tape device : /dev/nst*
- utility to control magnetic tapes : /bin/mt
- Backup suites : bacula, amanda, BackupPC
- Bacula components are : Director tcp 9101, Client TCP 9102, Storage 9103.
- It requires a catalog like MySQL, SQLite or PostgreSQL . #apt-get install
bacula
> mysql -u root -p -e "USE bacula; SHOW TABLES;"
-----------------
#mkdir -p -m 700 /bacula/{restore,backup}
#chown -R bacula.bacula /bacula
- on the client , create /bacula/restore directory and on the storage server ,
create /bacula/backup
------------------
- configure the storage server

#/etc/bacula/bacula-sd.conf
<Archive Device = /bacula/backup>
#bacula-sd -tc /etc/bacula/bacula-sd.conf => to test the config
#service bacula-sd restart
------------------
- configure the Director FileSet

#vi /etc/bacula/bacula-dir.conf
<FileSet {
Name= "Full Set"
Include {
Options {
signature = MD5
}
##Backup targets , you can also exclude files in DIR
File = /etc
File = /home
}
##Configure the Director Job
Job {
Name="RestoreFiles"
Type=restore
Client=bacula-server-fd
FileSet="Full Set"
Storage= File
Pool = Default
Messages= Standard
Where= /bacula/restore
}
#bacula-dir -tc /etc/bacula/bacula-dir.conf =>test the config
#service bacula-director restart
------------------
- To start the bacula console , use bconsole
*label = to create a label , enter new volume name and choose 2 for the File pool
*restore all , select 5 for most recent backup
*done to start the restore
*messages = check if everything was successful
- You can also check the fs if everything was successful
- Backup is created on the storage server : /bacula/backup/<label>
- restore is created on the client : /bacula/restore/

NOTIFYING USERS
- the contents of the file /etc/issue are displayed on physical consoles prior to
login . It supports escape sequences that are interpreted by agetty, getty,
mingetty ...
- In the file, \n displays the nodename as seen from uname -n
- /etc/issue.net is used for remote connections such as SSH but shows after login
and does not support escape characters
- the content of /etc/motd is displayed on physical consoles and pseudo consoles
after login . it is controlled via the pam_motd module .
- Ubuntu-based systems include /etc/update-motd.d, which can contain scripts to
run. Other debian systems include /etc/motd.tail
- in /etc/pam.d/sshd , comment out the noupdate option
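A sketch of an /etc/issue banner using standard agetty escapes (\n nodename, \r OS release, \l terminal line); the wording is an example, not from the original notes:

```
Welcome to \n (kernel \r) on \l

Authorized users only.
```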
- wall is used to send messages to logged-in users. All users can use wall but it is
more often used by root. Console messaging needs to be turned on for messages from
standard users, but not for messages from root . #wall < MessageFile
#mesg n/y
- if you are root, the message is sent whether messaging is on or off
- shutdown can include a message to warn users of the impending disaster. If a time
element is used rather than 'now', logins are disabled 5 minutes before the shutdown
by /etc/nologin
#shutdown -h +6 "Server is being shutdown"

=====================================

#uname -s kernel name


#uname -r kernel release
#uname -v kernel version
- uname output comes from /proc/version
#cat /proc/cmdline => view which options were supplied to the kernel at boot time.
- The kernel doc is located at /usr/share/doc/kernel-doc-2.6.32/Documentation
- Kernel files are at /boot/vmlinux (uncompressed) in earlier versions ; the latest
versions have /boot/vmlinuz, compressed with zlib. May be built as a zImage or, with
higher compression, as a bzImage.
#file /boot/vmlinuz-<xxxxx> : check the kernel Image Type
- the initial ram disk is responsible for loading a temporary root fs during the
linux boot process. This allows the real root fs to be checked and the drivers
needed to access it to be loaded.
- 2 types of initial ram disks : initrd (old) and initramfs (new), a cpio archive.
It is unpacked by the kernel into tmpfs, which becomes the initial root fs
#mv initramfs<xxx> initramfs<xxx>.gz
#gunzip initramfs<xxx>.gz
#cpio -id < ../initramfs...
---------
#mount -o loop initrd.img<xxx> /mnt
- the kernel-devel package can be used on centos to view the source code of the
running kernel
- You can download the kernel on kernel.org
#tar -Jxvf linux-xxxxx -C /usr/src/kernels
#ln -s /usr/src/kernels/linux-xxxx /usr/src/linux
- You need the development tools and the ncurses-devel package to compile the
kernel
#make mrproper ; make menuconfig
#make bzImage : compile the kernel
#make modules : compile all loadable modules
#make modules_install : add the modules in correct directory
#make install : copy kernel to /boot and create init ram ...
#mkinitrd ; dracut
- device drivers are typically kernel modules that are loaded as required into the
kernel. Managed with lsmod and modprobe, with options that can be configured in
/etc/modprobe.d/
#modprobe -l : displays all drivers
#lsmod or cat /proc/modules : List loaded drivers
#modprobe -r(v) sr_mod : Unload a loaded driver. In the background, rmmod is used.
#modprobe sr_mod : load a driver. In the background, insmod is used.
#modinfo sr_mod : view driver information
- If you load a module with options and you want it to start with the options each
time the module loads , you can add it to modprobe.conf file .
#cat /sys/module/<mod_name>/<param>/<param_name>

============================
DNS
#aptitude search "?name(^bind)"
- You can use dpkg or rpm to list the package contents .
#dpkg -L bind9
- main config file /etc/bind/named.conf
- /etc/bind/rndc.key is only readable by the bind user . You can control who has
access to the server using this key
#named -v -V
- /var/cache/bind contains the caching files
- zone files are dns information stored in text files . Bind can auto-create entries
with $GENERATE . /etc/bind/db.local /etc/bind/db.127
$GENERATE 10-254 $ PTR dhcp-$.example.com
- a chroot jail can protect against attacks. In /etc/default/bind9 , use the -t
option to specify the chroot jail
- rndc can be used to control the named service or take remote control of a machine
. #rndc-confgen ; uses tcp 953 .
#sudo -u bind rndc status
#sudo -u bind rndc-confgen
#sudo named-checkconf /etc/bind/named.conf
#sudo named-checkzone localhost /etc/bind/db.local
- the dnsutils package provides tools like dig and nslookup.
- the package is bind but the service is named
- /usr/lib/systemd/system/named.service
- Bind modes : caching-only , forwarding , master (rw) , slave (ro) , listen-on ,
allow-query
- yum install dnsmasq
#groupadd -r dnsmasq && useradd -rg dnsmasq dnsmasq
- some options : domain-needed ; bogus-priv ; no-hosts ; dns-forward-max=100 ;
cache-size=500 ; resolv-file=/etc/resolv.dnsmasq ; no-poll
- djbDNS : secure dns
#nslookup
server x
set type= x
google.com
set debug
#dig +short x
#host x
#named-checkzone example.com db.example
#named-checkconf
#named-checkzone 1.168.192.in-addr.arpa db.1.168.192
#named-checkconf -z : check all zone files
- the bind user has write access to config and zone files
{$GENERATE 100-150 student-$ IN A 10.0.0.$}

- if the server has two interfaces, you can add two A records, and the server will
return the record on your network ; if you are not in its subnet, it will use round
robin.
- do not forget the trailing dot , or the name will be appended to the domain name
to form the FQDN.
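A minimal zone-file fragment illustrating the trailing-dot rule (names and addresses are placeholders):

```
; absolute vs relative names in a zone for example.com
www     IN  A      192.0.2.10
mail    IN  CNAME  mx1.example.com.   ; trailing dot: absolute, nothing appended
ftp     IN  CNAME  www                ; no dot: becomes www.example.com
```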
- You can replace @ with $ORIGIN value
- if a record starts with a space or tab , it inherits the name of the previous
resource record.
#rndc reload example.com : reload only example zone
- by default logs are sent to messages or syslog files
logging {
channel bind_log {
file "bind.log" versions 3 size 10m;
severity warning;
};
category default {
bind_log;
};
};
- severity: critical error warning notice info debug dynamic
- rndc status can display whether query logging is on/off
#rndc querylog
#rndc flush

#dig axfr example.com @<ip>


- slave servers use zone transfers to replicate the zone at the interval specified
in the SOA record . Replication only takes place if the serial has been
incremented
- zone "example.com" IN {
      type slave;
      file "slaves/db.example";  # the bind user creates these files
      masters { <ip>; };
  };
- to force a transfer , #rndc retransfer example.com = on the slave

- zone transfer ACL on the master : allow transfer { <ip>; };


- zone transfer on the slave : allow-transfer { none; }; in named.conf.options
- TSIG can be used for authentication and message integrity.
- haveged package can be installed to generate entropy, random data when generating
keys.
#dnssec-keygen -a HMAC-MD5 -b 128 -n HOST master-slave.example.com
=> create a key pair in /etc/bind (-n takes HOST or ZONE)
- we can create a conf file on the master server that references this key. The
secret comes from the generated key files.
#cat > transfer.conf <<END
key master-slave.example.com. {
algorithm hmac-md5;
secret "key";
}
END
- you can append the include line to the main config .
#echo 'include "/etc/bind/transfer.conf";' >> named.conf
- alter the ACL: allow-transfer { key master-slave.example.com.; };
#tsig-keygen -a hmac-md5 slave-master.tech.local > slave-master.tech.local
- Transfer the transfer.conf file securely to the slave and include it from
named.conf :
include "/etc/transfer.conf";
server <ip> {
    keys { slave-master.tech.local.; };
};
- DNSSEC can sign each resource record with an RRSIG record, to validate the
integrity of the query
#dig +short +dnssec NS co.uk => displays the rrsig
- each zone signed with DNSSEC should be signed by an issuing or parent zone . In
this way we check that the public key is valid and trusted by the parent domain.
#dig +short +dnssec DS co.uk => DS or Delegation Signer record
- in named.conf , add : dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
- create a zone signing key with : #cd /var/cache/bind
#dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE example.com
- create key signing key : #dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 2048 -n ZONE
tech.local
- add public key to the zone :
for k in $(ls K*.key); do
    echo "\$INCLUDE $k" >> db.example
done
- Sign zone : #dnssec-signzone -3 5674 -A -N INCREMENT -o example.com -t db.example

- reference the signed zone : file "db.example.signed"


- Alternatives to Bind: DNSMasq djbDNS PowerDNS
- PowerDNS separates the caching and recursive functions.
#apt install pdns-recursor : uses less resources than bind
#apt install pdns-server mysql-server mysql-client pdns-backend-mysql
#rm /etc/powerdns/pdns.d/pdns.simplebind.conf

DHCP
----------------
dnsmasq.conf
interface=enp0s8
dhcp-range=192.168.10.50, 192.168.10.150, 12h
dhcp-host= <MAC>, server2.example.com, 192.168.10.12, 24h
-----------------
#apt install -y isc-dhcp-server
#vi /etc/dhcp/dhcpd.conf
- global options affect the complete config.
ddns-update-style none;
option domain-name "example.com";
option domain-name-servers 8.8.8.8, 8.8.4.4;
default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
- reservation with :
host server2 {
    hardware ethernet <MAC>;
    fixed-address <ip>;
    option host-name "h1.test.tg";
}
- You can test the config with #dhcpd -t before restarting the service
- in /etc/default/isc-dhcp-server , you can edit the listening interfaces.
- arp cache flags : C complete, M permanent (static)
- logs are stored by default in the syslog file ; you can grep for dhcpd.
#journalctl _PID=12996 => troubleshooting command
- You can restart the networking service on a host to see the lifespan of a
dhcp lease
- you also can read logs with #journalctl -f -u isc-dhcp-server
- The lease file is /var/lib/dhcpd/dhcpd.leases
- when you reserve an IP for a MAC , it creates its own lease file.
- You can configure ipv6 in /etc/network/interfaces file .
iface enp0s8 inet6 static
address FC01::1
netmask 64
#ip -6 a s enp0s8
- You can also use auto-config in the interfaces file
iface enp0s8 inet6 auto
- You can install a radvd server to advertise the network prefix and reduce the
need for a dhcp server.
#vim /etc/sysctl.conf
net.ipv6.conf.all.forwarding=1
#sudo sysctl -p
#apt install radvd
#vim /etc/radvd.conf (/usr/share/doc/radvd/examples):
interface enp0s8
{
AdvSendAdvert on;
prefix fc01::/64
{
AdvAutonomous on;
};
};

- LDAP use cases : Linux Auth; DNS entry storage; corporate white pages
- When installing openLDAP on ubuntu , the Directory Information Tree is created
for you. If your host domain is not correct, neither will be your DIT name
- set the server name as an fqdn to create the DN . Names of entries in LDAP are
always fully qualified and delimited with commas.
- you need to resolve hostnames with dns or hosts file . The entry for 127.0.1.1 in
the hosts file can cause issues so we remove it.
- Time sync is important when using LDAP for user authentication . You can
install the chrony time server
#apt install -y chrony
- ldap-utils package . The auto-config of DIT is in the package slapd.
- ldapsearch -x -LLL -H ldap:/// -b dc=tech,dc=local dn
- if you do not find results , you can reinstall with #dpkg-reconfigure slapd
- vim structure.ldif
----------------
dn: ou=people,dc=example,dc=com #Entry Name
objectClass: organizationalUnit #Attributes
ou: people

dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
-----------------
#ldapadd -W -D cn=admin,dc=example,dc=com -f structure.ldif
- later versions of ldap keep their config in an ldap directory (cn=config),
replacing the /etc/ldap/slapd.conf file.
- you can list directories with ldapsearch or slapcat
#slapcat -b cn=config
#ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=config
- ldap logging (olcLogLevel) : any none conns filter stats
- List current log level : #ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b
cn=config olcLogLevel . You can use journalctl -f -n0 -u slapd from another shell
to see the log results or lack of them
- Modify olcLogLevel : #vim loglevel.ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: any
#ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f loglevel.ldif

==========================
WEB SERVICES
- cfdisk for formatting
- httpd.conf is the config file. You can load modules in it like dir_module for
DirectoryIndex use , to load index.html when a page is not supplied in the URI
<LoadModule dir_module modules/mod_dir.so>
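A sketch of the httpd.conf lines implied above (the module path is distro-dependent, so treat it as an assumption):

```apache
LoadModule dir_module modules/mod_dir.so
<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>
```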
#apachectl -M => display loaded modules
#apachectl configtest => check config syntax
- the DocumentRoot is normally not inside the ServerRoot directory
- LogFormat fields : %h client_hostname %u username %t timestamp %s status_code %b size
- CustomLog is used for access logs
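The common log format matching those fields can be defined like this (the format name and log path are illustrative):

```apache
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common
```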
- You can load more modules to restrict access to user, group or IPs
#apachectl configtest
#pgrep apache
- VHosts will allow for different DocumentRoot settings for sites accessed with
different Host names, IP Address or Ports .
- A Vhost can have its own DocumentRoot, ServerName, ServerAlias, ErrorLog and CustomLog
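A minimal name-based vhost sketch pulling those directives together (hostnames and paths are made up for illustration):

```apache
<VirtualHost *:80>
    ServerName www.site1.example
    ServerAlias site1.example
    DocumentRoot /srv/www/site1
    ErrorLog logs/site1_error.log
    CustomLog logs/site1_access.log combined
</VirtualHost>
```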
- There is a server status page that can be viewed . You can restrict access to
this page .
- You can use status_module and authz_host_module (to restrict access)
<Location /status>
SetHandler server-status
Require ip 127.0.0.1
</Location>
- To authenticate users , you need to load auth_basic_module , authn_file_module and
authz_user_module for the authentication type , provider and authorization . The
authn_core and authz_core modules need to be loaded too
#htpasswd -c /etc/apache2/sales.pwd fred => create the file and add user fred ; -D to delete
a user ; no flag to add or update a user in an existing file ; -v to verify a user's password
- the autoindex module helps list the contents of a directory
- <Require valid-user> or <Require group tech> ; the authz_groupfile module is needed for group checks.
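A sketch tying the auth modules together; the AuthUserFile reuses the sales.pwd file from the htpasswd example above, while the directory and group file paths are assumptions:

```apache
<Directory /var/www/html/sales>
    AuthType Basic
    AuthName "Sales area"
    AuthBasicProvider file
    AuthUserFile /etc/apache2/sales.pwd
    AuthGroupFile /etc/apache2/groups
    Require group tech
</Directory>
```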
- htaccess can restrict access to config files
- Dynamic content can be created with CGI scripts written in Perl or Bash
- the alias_module and cgid_module (script execution) need to be enabled
- Adding a ScriptAlias allows us to have executable content in a central directory.
ScriptAlias "/cgi-bin/" "/srv/cgi-bin/"
<Directory /srv/cgi-bin>
AddHandler cgi-script .sh .pl
Options +ExecCGI
Require all granted
</Directory>
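A minimal CGI script of the kind served from that directory; it is written to /tmp here so the sketch is self-contained, and SERVER_NAME is faked since Apache would normally set it:

```shell
# write a minimal CGI script (a real one would live in /srv/cgi-bin)
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
# CGI responses: headers first, then a blank line, then the body
echo "Content-type: text/plain"
echo ""
echo "Hello from ${SERVER_NAME:-unknown}"
EOF
chmod +x /tmp/hello.sh
# Apache sets SERVER_NAME for CGI; fake it to test the script standalone
SERVER_NAME=www.example.com /tmp/hello.sh
```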
- The .htaccess file provides a way to make config changes on a per-directory basis
without the need to restart the server .
- AllowOverride None => used where htaccess files are not required ; use of these
files slows the server down
- AllowOverride All => where tenants are allowed complete use of htaccess files
- AllowOverride AuthConfig Options FileInfo => where tenants have a restricted set
of options allowed from htaccess files
#apt install php php-apache . The php module does not work with the threaded mpm
(mpm_event_module) so we need to run apache in pre-fork mode : mpm_prefork_module
with "Include conf/extra/php_module.conf"
- You can load php module and associate .php files from the php config file.
LoadModule php7_module ...
AddHandler php7-script .php
# apt install libapache2-mod-php

#openssl genrsa -out server.key 2048 => generate private key


#openssl rsa -noout -text -in server.key
#openssl req -new -key server.key -out server.csr => generate CSR
#openssl req -noout -text -in server.csr
#openssl x509 -req -sha256 -in server.csr -signkey server.key -out server.crt =>
self-sign the csr
#openssl x509 -noout -text -in server.crt
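The same steps can run non-interactively by passing the subject on the command line (the CN and file paths are illustrative; assumes openssl is installed):

```shell
# generate key, CSR and a 1-year self-signed cert without interactive prompts
openssl genrsa -out /tmp/server.key 2048
openssl req -new -key /tmp/server.key -subj "/CN=www.example.com" -out /tmp/server.csr
openssl x509 -req -sha256 -days 365 -in /tmp/server.csr -signkey /tmp/server.key -out /tmp/server.crt
# confirm the subject made it into the certificate
openssl x509 -noout -subject -in /tmp/server.crt
```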
- You can run man on the subcommand like x509 req genrsa
- To see which ciphers our filter allows , you can use #openssl ciphers -V
'HIGH:MEDIUM:!aRSA'
#openssl ciphers -v => displays supported ciphers and ssl versions
#openssl ciphers -V 'HIGH:MEDIUM:!aRSA:!SSLv3:!TLSv1.1:!TLSv1'
- Certificate requests are automated via the ACME protocol . A simple ACME client
is the python script acme_tiny.py
-----------------------------------------------
#git clone git://github.com/diafygi/acme-tiny
#cp acme-tiny/acme_tiny.py /usr/local/bin
#chmod +x /usr/local/bin/acme_tiny.py
-----------------------------------------------
- The acme protocol ensures you control the site you requested a cert for .
Let's Encrypt will put a challenge file on the web site.
----------------------------------
#mkdir -p /var/www/.well-known/acme-challenge
#openssl genrsa -out le.key 2048
#acme_tiny.py --account-key le.key --csr server.csr --acme-dir /var/www/.well-known/acme-challenge/ > server.crt
- The web server needs to present the complete cert chain to the browser. The
browser has the root CA cert but not the issuing intermediate CA's . Download
this and merge it into the server x509 cert .
----------------------------------
#wget http://cert.int-x3.letsencrypt.org/ -O issue.der
#openssl x509 -in issue.der -inform DER -out issue.crt -outform PEM
#cat issue.crt >> server.crt
-----------------------------------
- Redirect http to https <Redirect permanent / https://site.com>
- HSTS tells the browser to only use https , and cert warnings cannot be overridden by
the user. You need to enable the headers module .
<Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains">
- mod_proxy and mod_proxy_balancer modules are used to balance the requests
- You also need lbmethod_byrequests and slotmem_shm modules
- disable the standard proxy <ProxyRequests off>
----------------------------------
<Proxy balancer://webfarm>
BalancerMember http://host1:80
BalancerMember http://host2:80
ProxySet lbmethod=byrequests
</Proxy>
ProxyPass / balancer://webfarm/
ProxyPassReverse / balancer://webfarm/
-----------------------------------
- You can allow the lb manager
<Location /balancer-manager>
SetHandler balancer-manager
Require ip 127.0.0.1
</Location>
ProxyPass "/balancer-manager" ! (don't proxy)
-----------------------------------

#apt install squid


- ACL directives are used to name entities used to control resource access.
They consist of a name , an ACL type and a value
- http_access rules reference ACL names to allow or deny access to resources. The
last rule often denies any non-matched entries
<http_access allow localnet>
<http_access deny all>
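A sketch of squid.conf ACL lines showing the name / type / value shape (the subnet and ports are illustrative):

```squid
acl localnet src 192.168.56.0/24
acl safe_ports port 80 443
http_access deny !safe_ports
http_access allow localnet
http_access deny all
```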
- You can also authenticate users
-------------------------------------
#htpasswd -c /etc/squid/squid.users user1
<auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/squid.users>
<acl ncsa_users proxy_auth REQUIRED>
<http_access allow ncsa_users>
---------------------------------------

- Nginx config is modular underneath the http parent and we can create server
entries. Location entries can be found within server blocks. More than one server
entry is equivalent to vHosts in Apache.
---------------------------------
http {
server {
location / {
root /usr/share/nginx/html;
index index.html index.htm;
allow 127.0.0.1; # location access restriction
allow X.X.X.X;
deny all;
}}}
---------------------------------
#sed -i.bak '/^\s*#/d;/^$/d' nginx.conf => remove commented and empty lines
- NGINX is fast at delivering static content but not dynamic content . We might pass
PHP to apache servers (Reverse Proxy)
-----------------------------------
http {
server {
location /balancer/ {
proxy_pass http://192.168.56.10/;
}}}
-----------------------------------
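The nginx equivalent of the Apache balancer above uses an upstream block (the backend addresses are illustrative):

```nginx
http {
    upstream webfarm {
        server 192.168.56.10:80;
        server 192.168.56.11:80;
    }
    server {
        location / {
            proxy_pass http://webfarm;
        }
    }
}
```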

SPF authorizes sending hosts via DNS ; DKIM signs mails, verified against a DNS public key
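Hedged example of the DNS TXT records behind these mechanisms (the domain, IP, selector and truncated key are illustrative):

```dns
example.com.                 IN TXT "v=spf1 ip4:192.0.2.10 -all"
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0..."
```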

SELECT problem.*, agent.agent_name, service.service_name
FROM problem
JOIN agent ON problem.agent_id = agent.agent_id
JOIN service ON problem.service_id = service.service_id;

- DNSSEC key flags : 256 => ZSK ; 257 => KSK

- recursion
- 127.0.0.53 (systemd-resolved stub listener)
- SOA explained
