Extended Shell Commands (wget, curl, xargs, awk, ln, tail, cat, less, more, find)

This article walks through common Shell and Cmd commands, including automated git operations, installing a local jar with mvn, downloading files with wget, requesting data with curl, passing arguments with xargs, processing logs with awk, editing text with sed, creating links with ln, viewing the tail of a file, and chaining multiple commands. It covers the options and typical use cases of each command to help you understand and apply them.


Shell line continuation: \  (gcb, gcd, gbd and the git_* helpers in the example below are oh-my-zsh git-plugin aliases/functions)

gcb <new_branch> && \
git push -o merge_request.create \
-o merge_request.target=$(git_develop_branch) \
-o merge_request.remove_source_branch \
-o merge_request.merge_when_pipeline_succeeds \
-o merge_request.title="<title>" \
-o merge_request.assign="<user>" \
origin $(git_current_branch)
gcd -m && gbd old_branch_name

Cmd line continuation: ^

mvn install:install-file -Dfile=iot-tcp.jar ^
-DgroupId=com.ccc ^
-DartifactId=iot-jar ^
-Dversion=1.0.0-SNAPSHOT ^
-Dpackaging=jar

1 wget

Description: download files from the terminal.

Usage: wget [options] URL

Options:

  • -b  download in the background
  • -P  download into the given directory
  • -t  maximum number of retries
  • -c  resume a partial download
  • -p  download every resource on the page, including images and video
  • -r  recursive download

Example:

Download an image into /root/static/img/. -P defaults to the current directory; if the given path does not exist, it is created automatically.

wget -P /root/static/img/ http://img.alicdn.com/tfs/TB1.R._t7L0gK0jSZFxXXXWHVXa-2666-1500.png

Output: the downloaded image is saved under /root/static/img/.

2 curl

https://curl.se/docs/manpage.html

Options

  • -L, --location  follow redirects
  • -s (silent) -S (--show-error)  # usually combined: -s silences all output, -S turns error messages back on, so -sS means quiet except for errors
  • -O  # save to a file named after the last segment of the URL, similar to running wget <url>
  • -o filename  # save to a file with a custom name
  • -X POST  # specify the request method
  • -d / -G  # set the request body; -d defaults to a POST, add -G to send the data as a GET query string
  • --data-urlencode  # set request data with automatic URL encoding
  • -H  # set a request header
  • -A  # set the User-Agent
  • -b, --cookie  # set cookies
  • -i  # print the response HTTP headers, then the body
  • -I  # uppercase i: print only the response HTTP headers
  • -v / --trace  # show the entire exchange, for debugging; --trace also dumps the raw binary data
  • -F, --form  # send form data (implies POST)
## case one (POST with a JSON body):
curl -H "Content-Type: application/json" -X POST -d '{ "cash": "123456" }' http://127.0.0.1:8001/payment
## case two (on Windows, mind the code page for non-ASCII text; run: chcp 65001):
curl -H "Content-Type: application/json" -d "{ \"cash\": \"123456\" }" http://127.0.0.1:8001/payment

## case three (-sS: quiet except for errors, -L: follow redirects, -O: save to a file)
curl -sSLO https://dlcdn.apache.org/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz

Uploading files

If you start the data with the letter @, the rest should be a filename to read the data from, or - if you want curl to read the data from stdin. Posting data from a file named ‘foobar’ would thus be done with -d, --data @foobar. When -d, --data is told to read from a file like that, carriage returns, newlines and null bytes are stripped out. If you do not want the @ character to have a special interpretation use --data-raw instead.

curl -d "name=curl" https://example.com
curl -d "name=curl" -d "tool=cmdline" https://example.com
curl -d @filename https://example.com
curl -F profile=@portrait.jpg https://example.com/upload.cgi
curl -F name=John -F shoesize=11 https://example.com/
# send your essay in a text field to the server. Send it as a plain text field, but get the contents for it from a local file
curl -F "story=<hugefile.txt" https://example.com/
# You can also instruct curl what Content-Type to use by using "type=", in a manner similar to
curl -F "web=@index.html;type=text/html" example.com
# You can also explicitly change the name field of a file upload part by setting filename=, like this
curl -F "file=@localfile;filename=nameinpost" example.com
curl -F "file=@\"local,file\";filename=\"name;in;post\"" https://example.com
# or
curl -F 'file=@"local,file";filename="name;in;post"' https://example.com

https://curl.se/docs/manpage.html#-F
This enables uploading of binary files etc. To force the ‘content’ part to be a file, prefix the filename with an @ sign. To just get the content part from a file, prefix the filename with the symbol <. The difference between @ and < is then that @ makes a file get attached in the post as a file upload, while the < makes a text field and just get the contents for that text field from a file.

Using curl on Windows

[Windows] In PowerShell, curl is an alias for Invoke-WebRequest.

This means that when you type curl in PowerShell, what actually runs is the Invoke-WebRequest cmdlet.

In practice, call curl.exe explicitly:

 curl.exe http://127.0.0.1:9876/upload -F file=@"D:\file.png"

On Windows 11, Ctrl+Shift+C copies the selected path wrapped in double quotes; paste it right after the @ sign.

3 xargs (for passing arguments)

The term “xargs” stands for “execute arguments.”

ls |xargs -P10 -I{} git -C {} pull

  • -P  max processes to run in parallel
  • -I  "replace string" or "input placeholder"
  • -d '\t'  input delimiter
  • -p  prompt before running each command
  • -t  print each command, then run it
  • -L 1  max input lines per command (--max-lines)
  • -n 1  max arguments per command (--max-args)
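A quick, self-contained sketch of the batching options above, with printf standing in for a real input source such as ls:

```shell
# -n 1: one argument per echo invocation -> three lines of output
printf 'one two three\n' | xargs -n 1 echo
# -I{}: read one input line at a time and substitute it for {}
# (this is the same pattern as the ls | xargs -P10 -I{} git -C {} pull example)
printf 'a\nb\nc\n' | xargs -I{} echo "item: {}"
```
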

4 awk (for processing tabular and log data)

# Basic usage
awk '{print $1}' file.txt
# With a condition
awk 'NR < 3 {print $1}' file.txt
# With options (-F: field separator)
awk -F ':' '{print $1}' file.txt
# With an if-else condition
awk -F ':' '{if ($1 > "m") print $1; else print "---"}' demo.txt
1. Basic syntax
  • Pattern and action: condition { action }

    • e.g. NR < 3 {print $1}
      • prints the first field of each line whose line number is less than 3
  • Option: field separator -F ':'

    • sets the field separator to a colon (:)
2. Common functions
  • tolower(): convert every character in a string to lowercase
  • length(): return the length of a string
  • substr(string, start, length): return the substring of the given length starting at start
  • sin(x): sine of x (in radians)
  • cos(x): cosine of x (in radians)
  • sqrt(x): square root of x
  • rand(): a random number between 0 and 1
3. Built-in variables
  • $1, $2, ...: the fields of the current record; $1 is the first field, $2 the second, and so on (fields are split on whitespace or on the chosen separator)
  • $0: the whole record, including leading and trailing whitespace
  • NF: number of fields in the current line (Number of Fields)
  • NR: number of records processed so far (Number of Records)
  • FS: input field separator (Field Separator)
  • RS: input record separator (Record Separator); defaults to a newline
  • OFS: output field separator (Output Field Separator); defaults to a space
  • ORS: output record separator (Output Record Separator); defaults to a newline
  • OFMT: output format for floating-point numbers (e.g. number of decimal places)
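A one-liner exercising NR, NF, and OFS together, on made-up input:

```shell
# For each line, print its line number, field count, and first field,
# joined by colons (OFS takes effect when print receives comma-separated items).
printf 'alice 30\nbob 25\n' | awk 'BEGIN{OFS=":"} {print NR, NF, $1}'
# -> 1:2:alice
# -> 2:2:bob
```
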
4. Example: word frequency count

words.txt

the day is sunny the the
the sunny is is
# Read from the file words.txt and output the word frequency list to stdout.
cat words.txt | tr -s ' ' '\n' | sort | uniq -c | sort -nr | awk '{print $2, $1}'

tr = translate; -s = squeeze-repeats; ' ' '\n' = turn spaces into newlines
sort = sort lines of text files; -n = numeric-sort; -r = reverse
uniq = report or omit repeated lines; -c = count; -d = only print duplicate lines, one for each group
awk = gawk is the GNU project's implementation of the AWK language (named for its three authors: Aho, Weinberger, Kernighan); '{print $2, $1}' = print the second field, then the first

5 sed (Stream EDitor; fills a role similar to awk)
 The sed utility reads the specified files, or the standard input if no files are
 specified, modifying the input as specified by a list of commands.  The input is then
 written to the standard output.

 A single command may be specified as the first argument to sed.  Multiple commands
 may be specified by using the -e or -f options.  All commands are applied to the
 input in the order they are specified regardless of their origin.

 The following options are available:

 -E      Interpret regular expressions as extended (modern) regular expressions rather
         than basic regular expressions (BRE's).  The re_format(7) manual page fully
         describes both formats.

 -a      The files listed as parameters for the “w” functions are created (or
         truncated) before any processing begins, by default.  The -a option causes
         sed to delay opening each file until a command containing the related “w”
         function is applied to a line of input.

 -e command
         Append the editing commands specified by the command argument to the list of
         commands.

 -f command_file
         Append the editing commands found in the file command_file to the list of
         commands.  The editing commands should each be listed on a separate line.
         The commands are read from the standard input if command_file is “-”.

 -I extension
         Edit files in-place, saving backups with the specified extension.  If a zero-
         length extension is given, no backup will be saved.  It is not recommended to
         give a zero-length extension when in-place editing files, as you risk
         corruption or partial content in situations where disk space is exhausted,
         etc.

         Note that in-place editing with -I still takes place in a single continuous
         line address space covering all files, although each file preserves its
         individuality instead of forming one output stream.  The line counter is
         never reset between files, address ranges can span file boundaries, and the
         “$” address matches only the last line of the last file.  (See Sed
         Addresses.) That can lead to unexpected results in many cases of in-place
         editing, where using -i is desired.

 -i extension
         Edit files in-place similarly to -I, but treat each file independently from
         other files.  In particular, line numbers in each file start at 1, the “$”
         address matches the last line of the current file, and address ranges are
         limited to the current file.  (See Sed Addresses.) The net result is as
         though each file were edited by a separate sed instance.

 -l      Make output line buffered.

 -n      By default, each line of input is echoed to the standard output after all of
         the commands have been applied to it.  The -n option suppresses this
         behavior.

 -r      Same as -E for compatibility with GNU sed.

 -u      Make output unbuffered.

 The form of a sed command is as follows:

       [address[,address]]function[arguments]

 Whitespace may be inserted before the first address and the function portions of the
 command.

 Normally, sed cyclically copies a line of input, not including its terminating
 newline character, into a pattern space, (unless there is something left after a “D”
 function), applies all of the commands with addresses that select that pattern space,
 copies the pattern space to the standard output, appending a newline, and deletes the
 pattern space.

 Some of the functions use a hold space to save all or part of the pattern space for
 subsequent retrieval.
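The command forms described above can be tried on fabricated input; a few sketches of the substitute function, -n with the p flag, and -E:

```shell
# s///: substitute the first match on each line
printf 'foo one\nbar two\n' | sed 's/one/1/'
# -n suppresses the default echo; the p flag prints only selected lines
printf 'foo\nbar\nfoo\n' | sed -n '/foo/p'
# -E: extended regular expressions, so groups need no backslashes
printf 'abc123\n' | sed -E 's/([a-z]+)([0-9]+)/\2-\1/'   # -> 123-abc
```
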

6 ln (create, modify, and delete links)

  • -s creates a symbolic (soft) link; without it you get a hard link. A soft link is a shortcut and dies with its source; a hard link is another name for the same inode, so the data survives even after the original name is deleted.
    • ln -s sourcefile linkfile
  • -snf  repoint an existing symlink
  • Deleting:
    • unlink linkfile
    • rm linkfile # removes the link itself
    • rm -rf linkfile (do not append a trailing /, or the contents of the linked directory will be deleted)
    • rm -r linkfile # the -r here is meaningless, since the link is a symlink, not a directory
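A sketch of the soft-link lifecycle in a throwaway directory (all filenames here are invented for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo hello > source.txt
ln -s source.txt link.txt     # soft link: a shortcut to source.txt
cat link.txt                  # reads through the link -> hello
echo bye > other.txt
ln -snf other.txt link.txt    # -snf repoints the existing symlink
readlink link.txt             # now prints other.txt
rm link.txt                   # removes only the link; other.txt survives
```
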

7 tail

  • -f  follow: keep printing new lines as they are appended
  • -n  number of lines

tail -fn 10 /var/log/messages


8 Chaining multiple shell commands

  • &&  run the next command only if the previous one succeeded;
  • ||  run the next command only if the previous one failed;
  • ;   run the commands in order, independent of each other;
  • &   put the previous command in the background and run the next one immediately;
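Each operator on one line; true, false, and sleep stand in for real commands:

```shell
true  && echo "A: previous succeeded"    # && gates on success
false || echo "B: previous failed"       # || gates on failure
false ;  echo "C: runs regardless"       # ; ignores the exit status
sleep 1 & echo "D: prints immediately"   # & backgrounds the sleep
wait                                     # reap the background job
```
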

9 cat (for showing small files)

-n  number all output lines
-b, --number-nonblank  number output lines, skipping blank lines
-s, --squeeze-blank  squeeze runs of two or more blank lines into one
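A minimal demonstration of -s and -b on fabricated input:

```shell
# -s: the run of blank lines between a and b collapses into a single blank line
printf 'a\n\n\nb\n' | cat -s
# -b: "a" is numbered 1 and "b" is numbered 2; the blank line stays unnumbered
printf 'a\n\nb\n' | cat -b
```
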

10 more vs. less:

  • more is like cat
  • less is like vim

Both page through a file, but they work differently: more prints to the screen the way echo or cat does, while less takes over the whole screen the way vim does and leaves nothing behind when you quit.


11 more cannot take a line count, but head and tail can (cat-like):

  1. head shows the first N lines of a file. (-c: byte count, -n: line count, -q: never print filename headers)
  2. tail shows the last N lines of a file, or keeps refreshing with new content. (adds -f to follow appended output)
head -n 5 /etc/passwd # can be shortened to head -5 /etc/passwd

tail -f -n 5 /etc/passwd # with multiple options, -n cannot be omitted

12 unzip

  • unzip (always pass -d, otherwise the extracted files are scattered into the current directory)

unzip file.zip -d /location/new_directory

# extract into a given directory
unzip filename.zip -d /path/to/directory
# -o: overwrite existing files without prompting
# -q: quiet mode; do not list files while extracting
# -n: never overwrite existing files; extract only when the target does not exist
# -x: exclude the listed files from extraction
unzip filename.zip -x file_to_exclude.txt
# extract only the listed files
unzip filename.zip file_to_include.txt
unzip filename.zip file1.txt file2.txt *.sh
unzip filename.zip 'directory_name/*'
# list the contents of a ZIP file
unzip -l filename.zip
  • zip
zip filename.zip file1.txt file2.txt
zip filename.zip *.txt
zip -r filename.zip directory_name

13 watch

watch -n 1 -d kubectl get nodes

-n  interval in seconds between updates
-d  differences; highlight the differences between successive updates

14 here document

A here document is a tool created to avoid making temporary files; it serves as an input stream and is typically combined with commands such as cat or ftp.

Here and now, boys.
–Aldous Huxley, Island

A here document is a special-purpose code block. It uses a form of I/O redirection to feed a command list to an interactive program or a command, such as ftp, cat, or the ex text editor.

In computing, a here document (here-document, here-text, heredoc, hereis, here-string or here-script) is a file literal or input stream literal: it is a section of a source code file that is treated as if it were a separate file. The term is also used for a form of multiline string literals that use similar syntax, preserving line breaks and other whitespace (including indentation) in the text.


Here documents originate in the Unix shell,[1] and are found in the Bourne shell since 1979, and most subsequent shells. Here document-style string literals are found in various high-level languages, notably the Perl programming language (syntax inspired by Unix shell) and languages influenced by Perl, such as PHP and Ruby. JavaScript also supports this functionality via template literals, a feature added in its 6th revision (ES6). Other high-level languages such as Python, Julia and Tcl have other facilities for multiline strings.


Here documents can be treated either as files or strings. Some shells treat them as a format string literal, allowing variable substitution and command substitution inside the literal.


Syntax: <<DELIMITER followed by a newline. The delimiter can be any token (EOF by convention). Keep typing input lines; a line consisting of just the delimiter ends the input stream.

cat <<EOF >> /etc/hosts
10.0.0.1 node1
10.0.0.2 node2
10.0.0.3 node3
EOF
  • An alternative ordering of the redirections also exists, which can look confusing at first:
cat >> /etc/hosts <<EOF
10.0.0.1 node1
10.0.0.2 node2
10.0.0.3 node3
EOF
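As noted above, some shells expand variables and command substitutions inside a here document. A minimal sketch of the difference between an unquoted and a quoted delimiter:

```shell
name=world
# Unquoted delimiter: $name is expanded inside the body.
cat <<EOF
hello $name
EOF
# Quoted delimiter: the body is taken literally, $name stays as-is.
cat <<'EOF'
hello $name
EOF
```
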

15 ntpdate

Synchronizes the system clock over NTP (Network Time Protocol).

ntpdate time.windows.com
# or
ntpdate time1.aliyun.com

Another, less common protocol is ICMP (Internet Control Message Protocol), which ping uses. ICMP is a network-layer protocol for passing control messages: whether the network is up, whether a host is reachable, whether a route is usable, and so on. These messages carry no user data, yet they play an important role in delivering it. ICMP is connectionless: there is no connection to establish and no state to maintain, only messages sent and received. It is commonly used for network troubleshooting, management, and control.

16 find

find is a powerful Unix/Linux tool for locating files and directories. It searches the tree recursively and can filter on name, type, size, modification time, and more.

find [path] [options] [tests]
# find directories named hello, discarding error messages (redirected to /dev/null)
find / -type d -name "hello" 2>/dev/null
  1. Path: the directory to search; defaults to the current directory (.) when omitted.
  2. Tests:
    • -name "pattern": match by name (wildcards supported).
    • -type d: match directories.
    • -type f: match regular files.
    • -size +N: files larger than N. (-size +100M)
    • -mtime N: files modified N days ago. (-mtime -7 = within the last 7 days)
    • -user username: files owned by the given user.
  3. Actions:
    • -print: print each match (the default action).
    • -exec command {} \;: run a command on every match; the {} placeholder stands for the current path. For example:
    find . -name "*.txt" -exec cat {} \;
    
    • -delete: delete the matches.
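A self-contained run of the tests and actions above against a scratch tree (the layout is invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/logs"
echo one > "$tmp/a.txt"
echo two > "$tmp/logs/b.txt"
echo bin > "$tmp/c.bin"
find "$tmp" -type f -name '*.txt'           # matches a.txt and logs/b.txt
find "$tmp" -type d -name logs              # matches the logs directory
find "$tmp" -name '*.txt' -exec cat {} \;   # cats the contents of both .txt files
```
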

