news 2026/4/3 4:29:10

Installing ECK in Kubernetes Using YAML

张小明 · Front-end Development Engineer

Kubernetes is currently the most popular container orchestration technology, and more and more applications are being migrated to it. Kubernetes' built-in resource objects such as ReplicaSet, Deployment, and Service already cover the basic needs of stateless applications, including autoscaling and load balancing. Stateful, distributed applications such as Prometheus, Etcd, ZooKeeper, and Elasticsearch, however, usually come with their own modeling conventions. Deploying them demands domain-specific knowledge, and scaling or upgrading them raises questions such as how to keep the service available throughout. The Kubernetes Operator pattern emerged to simplify deploying such stateful, distributed applications.

A Kubernetes Operator is an application-specific controller that extends the Kubernetes API through CRDs (Custom Resource Definitions). It can create, configure, and manage a particular stateful application without requiring you to work directly with low-level Kubernetes resource objects such as Pods, Deployments, and Services.

Elastic Cloud on Kubernetes (ECK) is one such Operator. It manages the components of the Elastic Stack, including Elasticsearch, Kibana, APM Server, and Beats. For example, by defining a single custom resource of kind Elasticsearch, ECK can quickly stand up a complete Elasticsearch cluster for us.

Install the Elastic custom resource definitions with kubectl create

[root@k8s-192-168-1-140 ~]# kubectl create -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml

customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created

[root@k8s-192-168-1-140 ~]#

Install the operator and its RBAC rules with kubectl apply

[root@k8s-192-168-1-140 ~]# kubectl apply -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml

namespace/elastic-system created

serviceaccount/elastic-operator created

secret/elastic-webhook-server-cert created

configmap/elastic-operator created

clusterrole.rbac.authorization.k8s.io/elastic-operator created

clusterrole.rbac.authorization.k8s.io/elastic-operator-view created

clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created

clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created

service/elastic-webhook-server created

statefulset.apps/elastic-operator created

validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created

[root@k8s-192-168-1-140 ~]#

Check that the operator has started

[root@k8s-192-168-1-140 ~]# kubectl get -n elastic-system pods

NAME READY STATUS RESTARTS AGE

elastic-operator-0 1/1 Running 0 8m38s

[root@k8s-192-168-1-140 ~]#

Deploy an Elasticsearch cluster

The operator automatically creates and manages the Kubernetes resources needed to reach the desired state of the Elasticsearch cluster. It may take a few minutes for all resources to be created and for the cluster to become ready.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF
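The `node.store.allow_mmap: false` setting sidesteps Elasticsearch's mmap bootstrap check so the quickstart runs without any host changes. For production, the usual alternative is to keep mmap enabled and raise `vm.max_map_count` on every node instead. One way to do that, sketched here under the assumption that your cluster policy permits privileged init containers, is a `podTemplate` inside the nodeSet:

```yaml
# Sketch: keep mmap enabled by raising vm.max_map_count in a privileged
# init container (assumes privileged containers are allowed in the cluster)
nodeSets:
- name: default
  count: 1
  podTemplate:
    spec:
      initContainers:
      - name: sysctl
        securityContext:
          privileged: true
          runAsUser: 0
        command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
```

With the init container in place, the `node.store.allow_mmap: false` line can be dropped from `config`.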

Storage

By default each node gets a 1Gi volume claim. You can declare the desired capacity (and storage class) in the manifest at creation time:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs-storage
    config:
      node.store.allow_mmap: false
EOF

Check the deployment status

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pod -A

NAMESPACE NAME READY STATUS RESTARTS AGE

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 0/1 Pending 0 2m1s

elastic-system elastic-operator-0 1/1 Running 0 12m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

[root@k8s-192-168-1-140 ~]#

View the logs

[root@k8s-192-168-1-140 ~]# kubectl logs -f quickstart-es-default-0

Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)

[root@k8s-192-168-1-140 ~]#

Install NFS for dynamic provisioning

The Elasticsearch Pod above is stuck in Pending because its volume claim has no StorageClass to bind to; an NFS-backed default StorageClass fixes that.

[root@k8s-192-168-1-140 ~]# yum install nfs-utils -y

[root@k8s-192-168-1-140 ~]# mkdir /nfs

[root@k8s-192-168-1-140 ~]# vim /etc/exports

/nfs *(rw,sync,no_root_squash,no_subtree_check)

[root@k8s-192-168-1-140 ~]# systemctl restart rpcbind

[root@k8s-192-168-1-140 ~]# systemctl restart nfs-server

[root@k8s-192-168-1-140 ~]# systemctl enable rpcbind

[root@k8s-192-168-1-140 ~]# systemctl enable nfs-server

Write the NFS provisioner manifests for Kubernetes

[root@k8s-192-168-1-140 ~]# vim nfs-storage.yaml

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# cat nfs-storage.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.1.140 ## your NFS server address
        - name: NFS_PATH
          value: /nfs/ ## directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.140
          path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the manifest

[root@k8s-192-168-1-140 ~]# kubectl apply -f nfs-storage.yaml

storageclass.storage.k8s.io/nfs-storage created

deployment.apps/nfs-client-provisioner created

serviceaccount/nfs-client-provisioner created

clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created

clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created

role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

[root@k8s-192-168-1-140 ~]#

Check the storage

[root@k8s-192-168-1-140 ~]# kubectl get storageclasses.storage.k8s.io

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE

nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 6h7m

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE

elasticsearch-data-quickstart-es-default-0 Bound pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO nfs-storage <unset> 39s

[root@k8s-192-168-1-140 ~]#
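On the NFS export itself, the provisioner creates one subdirectory per provisioned volume, named from the namespace, PVC name, and PV name (the naming convention documented for nfs-subdir-external-provisioner). A quick sketch of the name it would derive for the claim above:

```shell
# Derive the subdirectory name the NFS provisioner creates under /nfs:
# ${namespace}-${pvcName}-${pvName}
ns=default
pvc=elasticsearch-data-quickstart-es-default-0
pv=pvc-2df832aa-1c54-4af6-8384-5e5c5f167445
echo "${ns}-${pvc}-${pv}"
```

With `archiveOnDelete: "true"`, deleting the PV renames this directory with an `archived-` prefix instead of removing it.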

[root@k8s-192-168-1-140 ~]# kubectl get pv

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE

pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO Delete Bound default/elasticsearch-data-quickstart-es-default-0 nfs-storage <unset> 43s

[root@k8s-192-168-1-140 ~]#

Check the Elasticsearch services

[root@k8s-192-168-1-140 ~]# kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d

nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d

quickstart-es-default ClusterIP None <none> 9200/TCP 74s

quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 75s

quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 75s

quickstart-es-transport ClusterIP None <none> 9300/TCP 75s

[root@k8s-192-168-1-140 ~]#

Retrieve the elastic user's password

[root@k8s-192-168-1-140 ~]# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'

V3VPqwQMURTSg6zFYvVIsH13[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200"

{

"name" : "quickstart-es-default-0",

"cluster_name" : "quickstart",

"cluster_uuid" : "JNqGubnmSeao_LO-JmypHg",

"version" : {

"number" : "9.2.2",

"build_flavor" : "default",

"build_type" : "docker",

"build_hash" : "ed771e6976fac1a085affabd45433234a4babeaf",

"build_date" : "2025-11-27T08:06:51.614397514Z",

"build_snapshot" : false,

"lucene_version" : "10.3.2",

"minimum_wire_compatibility_version" : "8.19.0",

"minimum_index_compatibility_version" : "8.0.0"

},

"tagline" : "You Know, for Search"

}

[root@k8s-192-168-1-140 ~]#

Install Kibana

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

Check the service status and password

[root@k8s-192-168-1-140 ~]# kubectl get kibana

NAME HEALTH NODES VERSION AGE

quickstart red 9.2.2 16s

[root@k8s-192-168-1-140 ~]#

# Retrieve the password

[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

V3VPqwQMURTSg6zFYvVIsH13

[root@k8s-192-168-1-140 ~]#
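Both commands above read the same Secret field: Kubernetes stores Secret values base64-encoded, so the `base64decode` go-template function does the same job as piping the jsonpath output through `base64 --decode`. A local round trip (with a made-up password, not the real cluster password) shows the encoding involved:

```shell
# Round-trip a made-up password the way a Kubernetes Secret stores it
pw='example-password'                 # placeholder, not the real cluster password
enc=$(printf '%s' "$pw" | base64)     # what would appear under .data.elastic
printf '%s' "$enc" | base64 --decode  # what both kubectl pipelines print
```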

[root@k8s-192-168-1-140 ~]# kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d

nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d

quickstart-es-default ClusterIP None <none> 9200/TCP 2m47s

quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 2m48s

quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 2m48s

quickstart-es-transport ClusterIP None <none> 9300/TCP 2m48s

quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 24s

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get service quickstart-kb-http

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 2m39s

[root@k8s-192-168-1-140 ~]#

Enable access

[root@k8s-192-168-1-140 ~]# kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601

Forwarding from 0.0.0.0:5601 -> 5601

# Login URL

https://192.168.1.140:5601/login

Username: elastic

Password: V3VPqwQMURTSg6zFYvVIsH13
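`kubectl port-forward` only lasts as long as the command runs. For longer-lived access, ECK lets you change the type of the generated Service directly in the Kibana spec; a sketch against the same quickstart resource, assuming a NodePort is acceptable in your environment:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      spec:
        type: NodePort  # quickstart-kb-http becomes a NodePort Service
```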

Change the Elasticsearch node count

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
EOF
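When `count` changes, the operator adds or removes nodes one at a time while keeping the cluster available, as the watch output below shows. How aggressively it makes such changes can be tuned with `updateStrategy.changeBudget`; a sketch, with fields as defined in the ECK Elasticsearch spec:

```yaml
spec:
  updateStrategy:
    changeBudget:
      maxSurge: 1        # at most one extra Pod above the target count
      maxUnavailable: 1  # at most one Pod below the target during changes
```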

Check the status

[root@k8s-192-168-1-140 ~]# kubectl get pod -A

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m

elastic-system elastic-operator-0 1/1 Running 0 6h2m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m

elastic-system elastic-operator-0 1/1 Running 0 6h2m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Init:0/2 0 0s

default quickstart-es-default-1 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 0/1 Init:0/2 0 2s

default quickstart-es-default-1 0/1 Init:1/2 0 3s

default quickstart-es-default-1 0/1 PodInitializing 0 4s

default quickstart-es-default-1 0/1 Running 0 5s

default quickstart-es-default-1 1/1 Running 0 26s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Init:0/2 0 0s

default quickstart-es-default-2 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 1/1 Running 0 30s

default quickstart-es-default-2 0/1 Init:0/2 0 2s

default quickstart-es-default-1 1/1 Running 0 30s

default quickstart-es-default-2 0/1 Init:0/2 0 2s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-2 0/1 Init:1/2 0 3s

default quickstart-es-default-2 0/1 PodInitializing 0 4s

default quickstart-es-default-2 0/1 Running 0 5s

default quickstart-es-default-2 1/1 Running 0 33s

^C[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h40m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h53m

default quickstart-es-default-1 1/1 Running 0 2m19s

default quickstart-es-default-2 1/1 Running 0 111s

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h26m

elastic-system elastic-operator-0 1/1 Running 0 6h4m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

Uninstall

# Delete all Elastic resources in all namespaces

kubectl get namespaces --no-headers -o custom-columns=:metadata.name \

| xargs -n1 kubectl delete elastic --all -n

# Delete the operator and the CRDs

kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml

kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
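The first command above lists every namespace name and feeds each one, via `xargs -n1`, as the final `-n` argument of `kubectl delete elastic --all`. The plumbing can be checked offline by substituting `echo` for `kubectl`:

```shell
# Dry run of the namespace fan-out: echo stands in for kubectl,
# using a hypothetical three-namespace cluster
printf 'default\nelastic-system\nkube-system\n' \
  | xargs -n1 echo kubectl delete elastic --all -n
```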
