Configuring Ceph Storage


Ceph storage

CentOS 7.6 host preparation (disable SELinux, stop the firewall)

Hostname    Internal IP       External IP
ceph01      192.168.88.11     172.18.127.11
ceph02      192.168.88.12     172.18.127.12
ceph03      192.168.88.13     172.18.127.13
web2        192.168.88.20     172.18.127.20
[root@ceph01 ~]# ifconfig |grep inet | sed -n '1p;3p'|awk '{print $2}'
172.18.127.11
192.168.88.11
[root@ceph02 ~]#  ifconfig |grep inet | sed -n '1p;3p'|awk '{print $2}'
172.18.127.12
192.168.88.12
[root@ceph03 ~]#  ifconfig |grep inet | sed -n '1p;3p'|awk '{print $2}'
172.18.127.13
192.168.88.13
[root@web2 ~]# ifconfig |grep inet | sed -n '1p;3p'|awk '{print $2}'
172.18.127.20
192.168.88.20
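
The heading above calls for SELinux to be disabled and the firewall to be stopped on every host; the original does not show those commands, so here is a minimal sketch (run on each of the four nodes):

# Sketch, not shown in the original; run on every node
setenforce 0                                                    # disable SELinux for the current boot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # keep it disabled after reboot
systemctl disable firewalld --now                               # stop and disable the firewall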

Each node has a 40 GB SCSI system disk, a 40 GB SCSI disk used as the Ceph data disk, and a 10 GB NVMe disk used as the Ceph journal disk.

[root@ceph03 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   40G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   39G  0 part 
  ├─centos-root 253:0    0 35.1G  0 lvm  /
  └─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
sdb               8:16   0   40G  0 disk 
sr0              11:0    1  4.3G  0 rom  /mnt
nvme0n1         259:0    0   10G  0 disk 

Passwordless SSH login

[root@web2 ~]# ssh-keygen
[root@web2 ~]# ssh-copy-id root@192.168.88.11
[root@web2 ~]# ssh-copy-id root@192.168.88.12
[root@web2 ~]# ssh-copy-id root@192.168.88.13
[root@web2 ~]# ssh-copy-id root@192.168.88.20
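
A quick loop can confirm that passwordless login from web2 works for every node (a hypothetical check, not part of the original steps):

for i in 192.168.88.{11..13} 192.168.88.20; do ssh root@$i hostname; done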

Set the hostnames

[root@ceph01 ~]# hostnamectl set-hostname ceph01.localdomain
[root@ceph01 ~]# bash
[root@ceph02 ~]#  hostnamectl set-hostname ceph02.localdomain
[root@ceph02 ~]# bash
[root@ceph03 ~]# hostnamectl set-hostname ceph03.localdomain
[root@ceph03 ~]# bash
[root@web2 ~]# hostnamectl set-hostname web2.localdomain
[root@web2 ~]# bash
[root@web2 ~]# vim /etc/hosts
[root@web2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.88.11 ceph01 ceph01.localdomain
192.168.88.20 web2 web2.localdomain 
192.168.88.12 ceph02 ceph02.localdomain
192.168.88.13 ceph03 ceph03.localdomain
[root@web2 ~]# vim /etc/hosts
[root@web2 ~]# scp /etc/hosts 192.168.88.11:/etc/hosts
[root@web2 ~]# scp /etc/hosts 192.168.88.12:/etc/hosts
[root@web2 ~]# scp /etc/hosts 192.168.88.13:/etc/hosts
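
The three scp commands above can also be written as a single loop (a convenience sketch, equivalent to what is shown):

for i in 192.168.88.{11..13}; do scp /etc/hosts root@$i:/etc/hosts; done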

Configure NTP time synchronization

[root@web2 ~]# mount /dev/cdrom /mnt/centos/
mount: /dev/sr0 is write-protected, mounting read-only
[root@web2 ~]# cat /etc/yum.repos.d/local.repo 
[Centos]
name=centos
gpgcheck=0
enabled=1
baseurl=file:///mnt/centos
[root@web2 ~]# yum install -y ntp
[root@ceph01 ~]#  yum install -y ntp
[root@ceph02 ~]#  yum install -y ntp
[root@ceph03 ~]#  yum install -y ntp
# The web2 node acts as the time server for the other three nodes
[root@web2 ~]# systemctl start ntpd
[root@web2 ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 162.159.200.123 10.12.3.190      3 u    -   64    1  302.189  -52.158  41.206
 202.112.29.82   .BDS.            1 u    1   64    1   51.094  -16.313  12.004
 78.46.102.180   131.188.3.221    2 u    2   64    1  288.068  -72.716   0.000
 185.209.85.222  89.109.251.24    2 u    1   64    1  150.161   11.939   0.000
# On ceph01, ceph02 and ceph03:
[root@ceph01 ~]# vim /etc/ntp.conf
server web2 iburst
[root@ceph02 ~]# vim /etc/ntp.conf
server web2 iburst
[root@ceph03 ~]# vim /etc/ntp.conf
server web2 iburst
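
The edits above only list the line that is added; on each ceph node the stock CentOS pool servers are normally commented out so that web2 becomes the only time source. A sketch of the intended /etc/ntp.conf change, assuming the default CentOS template:

# /etc/ntp.conf on ceph01/02/03 (relevant part)
#server 0.centos.pool.ntp.org iburst    # comment out the default pool servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server web2 iburst                       # sync only from the web2 node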
[root@ceph01 ~]# systemctl enable ntpd --now
[root@ceph02 ~]# systemctl enable ntpd --now
[root@ceph03 ~]# systemctl enable ntpd --now
[root@ceph01 ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.88.20   202.112.29.82    2 u   30   64    1    0.572    4.934   0.
[root@ceph02 ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.88.20   202.112.29.82    2 u   30   64    1    0.572    4.934   0.
[root@ceph03 ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.88.20   202.112.29.82    2 u   30   64    1    0.572    4.934   0.

Configure the yum repositories

# Run on all four nodes
[root@ceph03 ~]# yum -y install wget
[root@ceph03 ~]# wget -O /etc/yum.repos.d/Centos-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph03 ~]# wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
[root@ceph03 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph_noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
[ceph_x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
[root@web2 ~]# yum clean all && yum makecache
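
Once the three repo files exist on one node, they can be copied to the others with scp instead of repeating the wget/vim steps on each host (a convenience sketch; here the files are assumed to already exist on web2):

for i in 192.168.88.{11..13}; do
  scp /etc/yum.repos.d/{Centos-Base.repo,epel.repo,ceph.repo} root@$i:/etc/yum.repos.d/
done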

Install the Ceph packages

Install the Ceph deployment tool on the deployment node (web2)

[root@web2 ~]# yum install python-setuptools -y
[root@web2 ~]# yum install ceph-deploy -y
[root@web2 ~]# ceph-deploy --version
2.0.1

Install the required packages on the Ceph nodes

[root@web2 ~]# for i in 192.168.88.{11..13}; do ssh root@$i 'yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw ceph-mgr'; done

Deploy the monitor

ceph01 acts as the monitor node. On the deployment node web2, create a working directory; all subsequent commands are run from that directory,

and the configuration files they generate are stored there.

[root@web2 ~]# mkdir my-cluster
[root@web2 ~]# cd my-cluster/
[root@web2 my-cluster]# ceph-deploy new --public-network 172.18.127.0/16 --cluster-network 192.168.88.0/24 ceph01
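
ceph-deploy new writes ceph.conf, ceph.mon.keyring and a log file into the working directory. The generated ceph.conf looks roughly like this (illustrative; the fsid is taken from the cluster id shown later in this article, and the exact key names may differ slightly between ceph-deploy versions):

[global]
fsid = 84f1a8d1-21b8-487b-8b20-e1c979082a75
public_network = 172.18.127.0/16
cluster_network = 192.168.88.0/24
mon_initial_members = ceph01
mon_host = 172.18.127.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx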

Initialize the monitor

[root@web2 my-cluster]# ceph-deploy mon create-initial

Copy the configuration files to the corresponding nodes

[root@web2 my-cluster]# ceph-deploy admin ceph01 ceph02 ceph03

To deploy highly available monitors, ceph02 and ceph03 can also be added to the mon cluster:

[root@web2 my-cluster]# ceph-deploy mon add ceph02
[root@web2 my-cluster]# ceph-deploy mon add ceph03

Check the cluster status

[root@ceph02 ~]# ceph -s
  cluster:
    id:     84f1a8d1-21b8-487b-8b20-e1c979082a75
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 5s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
# A warning is reported; disable insecure global_id reclaim
[root@ceph02 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@ceph02 ~]# ceph -s
  cluster:
    id:     84f1a8d1-21b8-487b-8b20-e1c979082a75
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 3m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Deploy the mgr

ceph01 acts as the mgr node; run the following on the deployment node web2:

[root@web2 my-cluster]# ceph-deploy mgr create ceph01

To deploy highly available mgrs, ceph02 and ceph03 can also be added:

[root@web2 my-cluster]# ceph-deploy mgr create ceph02 ceph03

Check the Ceph status

[root@ceph01 ~]# ceph -s
  cluster:
    id:     84f1a8d1-21b8-487b-8b20-e1c979082a75
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 6m)
    mgr: ceph01(active, since 62s), standbys: ceph03, ceph02
    osd: 0 osds: 0 up, 0 in
 
  task status:
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Deploy the OSDs

OSD plan:

Filestore is used as the storage engine. On each node /dev/sdb is the data disk and /dev/nvme0n1 is the journal disk. First confirm the disk layout on every node, then run the following on the deployment node web2:

Confirm the disks on each node

[root@web2 my-cluster]# ceph-deploy disk list ceph01 ceph02 ceph03

Wipe any existing data and filesystems from the disks on ceph01, ceph02 and ceph03

[root@web2 my-cluster]# ceph-deploy disk zap ceph01 /dev/nvme0n1
[root@web2 my-cluster]# ceph-deploy disk zap ceph02 /dev/nvme0n1
[root@web2 my-cluster]# ceph-deploy disk zap ceph03 /dev/nvme0n1
[root@web2 my-cluster]# ceph-deploy disk zap ceph01 /dev/sdb
[root@web2 my-cluster]# ceph-deploy disk zap ceph02 /dev/sdb
[root@web2 my-cluster]# ceph-deploy disk zap ceph03 /dev/sdb

Add the OSDs

[root@web2 my-cluster]# ceph-deploy osd create --data /dev/sdb --journal /dev/nvme0n1 --filestore ceph01
[root@web2 my-cluster]# ceph-deploy osd create --data /dev/sdb --journal /dev/nvme0n1 --filestore ceph02
[root@web2 my-cluster]# ceph-deploy osd create --data /dev/sdb --journal /dev/nvme0n1 --filestore ceph03
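
After the OSDs are created, ceph-volume on each Ceph node can show how the data disk and the journal device were consumed (an optional check, not shown in the original):

ceph-volume lvm list        # run on ceph01/02/03; lists the OSD backed by /dev/sdb and its journal on /dev/nvme0n1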

Check the OSD status

[root@ceph01 ~]# ceph osd status
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |        host        |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph01.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph02.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph03.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+

Manage Ceph services with systemd

# List all Ceph services
[root@ceph01 ~]# systemctl status ceph\*.service ceph\*.target
# Start all Ceph daemons
[root@ceph01 ~]# systemctl start ceph.target
# Stop all Ceph daemons
[root@ceph01 ~]# systemctl stop ceph.target
# Start all daemons of a given service type
[root@ceph01 ~]# systemctl start ceph-osd.target
[root@ceph01 ~]# systemctl start ceph-mon.target
[root@ceph01 ~]# systemctl start ceph-mds.target
# Stop all daemons of a given service type
[root@ceph01 ~]# systemctl stop ceph-osd.target
[root@ceph01 ~]# systemctl stop ceph-mon.target
[root@ceph01 ~]# systemctl stop ceph-mds.target
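
Individual daemons can also be managed by their instance name rather than through the targets (the instance names below match the ones used in this cluster):

# Manage a single daemon instead of a whole target
systemctl restart ceph-mon@ceph01.service
systemctl restart ceph-osd@0.service
systemctl status ceph-mds@ceph01.service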

Pool management

List the existing pools

[root@ceph01 ~]# ceph osd lspools
[root@ceph01 ~]# ceph osd pool ls

Create a pool

[root@ceph01 ~]# ceph osd pool create test 32 32
pool 'test' created

Rename a pool

[root@ceph01 ~]# ceph osd pool rename test ceph
pool 'test' renamed to 'ceph'
[root@ceph01 ~]# ceph osd pool ls
ceph

View pool attributes

# View the object replica count
[root@ceph01 ~]# ceph osd pool get ceph size
size: 3
# View the number of PGs
[root@ceph01 ~]# ceph osd pool get ceph pg_num
pg_num: 32
# View the number of PGPs, normally less than or equal to pg_num
[root@ceph01 ~]# ceph osd pool get ceph pgp_num
pgp_num: 32
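
The same attributes can be changed with ceph osd pool set, for example (values here are illustrative, not from the original):

ceph osd pool set ceph size 2        # change the replica count
ceph osd pool set ceph pg_num 64     # raise pg_num; pgp_num should be raised to match
ceph osd pool set ceph pgp_num 64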

Delete a pool

# Delete the pool
[root@ceph01 ~]# ceph osd pool rm ceph
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool ceph.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
# The first attempt fails with an error; the pool name must be given twice, followed by --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd pool rm ceph ceph  --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
# It still fails; a config option has to be added to the configuration file
[root@web2 my-cluster]# vim ceph.conf 
[mon]
mon allow pool delete = true
# Push the configuration file to the other nodes; because ceph01-03 already have a config file, --overwrite-conf is required to overwrite it
[root@web2 my-cluster]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
[root@ceph01 ~]# systemctl restart ceph-mon.target
[root@ceph02 ~]# systemctl restart ceph-mon.target
[root@ceph03 ~]# systemctl restart ceph-mon.target
[root@ceph01 ~]# ceph osd pool rm ceph ceph --yes-i-really-really-mean-it
pool 'ceph' removed
The pool is removed successfully.

Status checks

Check the cluster status

[root@ceph01 ~]# ceph -s
  cluster:
    id:     84f1a8d1-21b8-487b-8b20-e1c979082a75
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 90s)
    mgr: ceph01(active, since 17m), standbys: ceph03, ceph02
    osd: 3 osds: 3 up (since 16m), 3 in (since 24m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   322 MiB used, 120 GiB / 120 GiB avail
    pgs:
[root@ceph01 ~]# ceph health
HEALTH_OK
# More detailed view
[root@ceph01 ~]# ceph health detail
HEALTH_OK

Check the OSD status

[root@ceph01 ~]# ceph osd status
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |        host        |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph01.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph02.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph03.localdomain |  107M | 39.8G |    0   |     0   |    0   |     0   | exists,up |
+----+--------------------+-------+-------+--------+---------+--------+---------+-----------+
[root@ceph01 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.11696 root default                            
-3       0.03899     host ceph01                         
 0   hdd 0.03899         osd.0       up  1.00000 1.00000 
-5       0.03899     host ceph02                         
 1   hdd 0.03899         osd.1       up  1.00000 1.00000 
-7       0.03899     host ceph03                         
 2   hdd 0.03899         osd.2       up  1.00000 1.00000

Check the mon status

[root@ceph01 ~]# ceph mon stat
e3: 3 mons at {ceph01=[v2:172.18.127.11:3300/0,v1:172.18.127.11:6789/0],ceph02=[v2:172.18.127.12:3300/0,v1:172.18.127.12:6789/0],ceph03=[v2:172.18.127.13:3300/0,v1:172.18.127.13:6789/0]}, election epoch 40, leader 0 ceph01, quorum 0,1,2 ceph01,ceph02,ceph03
[root@ceph01 ~]# ceph quorum_status
{"election_epoch":40,"quorum":[0,1,2],"quorum_names":["ceph01","ceph02","ceph03"],"quorum_leader_name":"ceph01","quorum_age":185,"monmap":{"epoch":3,"fsid":"84f1a8d1-21b8-487b-8b20-e1c979082a75","modified":"2024-03-13 23:26:55.853908","created":"2024-03-13 23:22:18.695928","min_mon_release":14,"min_mon_release_name":"nautilus","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"ceph01","public_addrs":{"addrvec":[{"type":"v2","addr":"172.18.127.11:3300","nonce":0},{"type":"v1","addr":"172.18.127.11:6789","nonce":0}]},"addr":"172.18.127.11:6789/0","public_addr":"172.18.127.11:6789/0"},{"rank":1,"name":"ceph02","public_addrs":{"addrvec":[{"type":"v2","addr":"172.18.127.12:3300","nonce":0},{"type":"v1","addr":"172.18.127.12:6789","nonce":0}]},"addr":"172.18.127.12:6789/0","public_addr":"172.18.127.12:6789/0"},{"rank":2,"name":"ceph03","public_addrs":{"addrvec":[{"type":"v2","addr":"172.18.127.13:3300","nonce":0},{"type":"v1","addr":"172.18.127.13:6789","nonce":0}]},"addr":"172.18.127.13:6789/0","public_addr":"172.18.127.13:6789/0"}]}}

Assign a Ceph application type to a pool

ceph osd pool application enable ceph <app>
Note: valid values for app are cephfs, rbd and rgw. If a type is not explicitly assigned, the cluster reports HEALTH_WARN, which can be inspected with ceph health detail.
[root@ceph01 ~]# ceph osd pool create ceph 16 16
pool 'ceph' created
[root@ceph01 ~]# ceph osd pool application enable ceph cephfs
enabled application 'cephfs' on pool 'ceph'

Pool quota management

# Quota by object count
[root@ceph01 ~]# ceph osd pool set-quota ceph max_objects 10000
set-quota max_objects = 10000 for pool ceph
# Quota by capacity (in bytes)
[root@ceph01 ~]# ceph osd pool set-quota ceph max_bytes 1048576000
set-quota max_bytes = 1048576000 for pool ceph
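
The current quotas can be checked with get-quota, and a quota is cleared by setting it back to 0 (a small sketch, not shown in the original):

ceph osd pool get-quota ceph                    # show the quotas set above
ceph osd pool set-quota ceph max_objects 0      # 0 removes the object-count quota
ceph osd pool set-quota ceph max_bytes 0        # 0 removes the byte quota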

Pool object access

Upload an object to a pool

[root@ceph01 ~]# echo "test111">test.txt
[root@ceph01 ~]# rados -p ceph put test ./test.txt

List the objects in a pool

[root@ceph01 ~]# rados -p ceph ls
test

Download an object from a pool

[root@ceph02 ~]# rados -p ceph get test test.txt.tmp
[root@ceph02 ~]# ls
anaconda-ks.cfg  test.txt.tmp

Delete an object from a pool

[root@ceph02 ~]# rados -p ceph rm test
[root@ceph02 ~]# rados -p ceph ls

Configure CephFS

Install and enable the MDS

[root@web2 my-cluster]# ceph-deploy mds create ceph01 ceph02 ceph03
[root@ceph01 ~]#  systemctl status ceph-mds*
● ceph-mds@ceph01.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2024-03-14 00:26:37 CST; 20s ago
 Main PID: 32072 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph01.service
           └─32072 /usr/bin/ceph-mds -f --cluster ceph --id ceph01 --setuser ceph --setgroup ceph
Mar 14 00:26:37 ceph01.localdomain systemd[1]: Started Ceph metadata server daemon.
Mar 14 00:26:37 ceph01.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-mds@.service:15] Unkno...ce'
Mar 14 00:26:37 ceph01.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-mds@.service:16] Unkno...ce'
Mar 14 00:26:37 ceph01.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-mds@.service:19] Unkno...ce'
Mar 14 00:26:37 ceph01.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-mds@.service:21] Unkno...ce'
Mar 14 00:26:37 ceph01.localdomain systemd[1]: [/usr/lib/systemd/system/ceph-mds@.service:22] Unkno...ce'
Mar 14 00:26:37 ceph01.localdomain ceph-mds[32072]: starting mds.ceph01 at
Hint: Some lines were ellipsized, use -l to show in full.

Create the pools

# Create a pool named data1 to hold the file data
[root@ceph02 ~]# ceph osd pool create data1 16
pool 'data1' created
# Create a pool named metadata1 to hold the metadata
[root@ceph02 ~]# ceph osd pool create metadata1 16
pool 'metadata1' created
# Create a CephFS filesystem named myfs1, storing data in data1 and metadata in metadata1
[root@ceph02 ~]# ceph fs new myfs1 metadata1 data1
new fs with metadata pool 4 and data pool 3
[root@ceph02 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd       120 GiB     120 GiB     323 MiB      323 MiB          0.26 
    TOTAL     120 GiB     120 GiB     323 MiB      323 MiB          0.26 
 
POOLS:
    POOL          ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    data1          3      16         0 B           0         0 B         0        38 GiB 
    metadata1      4      16     2.2 KiB          22     2.2 KiB        40        38 GiB
# View the created filesystem
[root@ceph02 ~]# ceph fs ls
name: myfs1, metadata pool: metadata1, data pools: [data1 ]
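
Once the filesystem exists, one MDS should become active and the others stand by; this can be confirmed from any node (an optional check, not shown in the original):

ceph mds stat        # expect one active MDS for myfs1 and the other two on standby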

Mount CephFS

# View the username and key used to connect to Ceph
[root@ceph02 ~]# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
	key = AQCqxPFl8v/aORAArTo47oVAzrLSMkGEUXqJDQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[root@web2 my-cluster]# mkdir /ceph
[root@web2 my-cluster]# vim /etc/fstab
172.18.127.11:6789,172.18.127.12:6789,172.18.127.13:6789:/ /ceph  ceph   _netdev,name=admin,secret=AQCqxPFl8v/aORAArTo47oVAzrLSMkGEUXqJDQ==  0  0
[root@web2 my-cluster]# mount -a
[root@web2 my-cluster]# df -Th
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root                                    xfs        36G  2.2G   33G    7% /
devtmpfs                                                   devtmpfs  475M     0  475M    0% /dev
tmpfs                                                      tmpfs     487M     0  487M    0% /dev/shm
tmpfs                                                      tmpfs     487M  7.7M  479M    2% /run
tmpfs                                                      tmpfs     487M     0  487M    0% /sys/fs/cgroup
/dev/sda1                                                  xfs      1014M  146M  869M   15% /boot
tmpfs                                                      tmpfs      98M     0   98M    0% /run/user/0
/dev/sr0                                                   iso9660   4.3G  4.3G     0  100% /mnt
172.18.127.11:6789,172.18.127.12:6789,172.18.127.13:6789:/ ceph       38G     0   38G    0% /ceph
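
Instead of embedding the admin key directly in /etc/fstab, the key can be kept in a root-only file and referenced with the secretfile= mount option (a common alternative to the entry above; the file path is an assumption, and it relies on the mount.ceph helper from ceph-common being present on web2):

# On web2: store the key in a root-only file
mkdir -p /etc/ceph
echo "AQCqxPFl8v/aORAArTo47oVAzrLSMkGEUXqJDQ==" > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
# fstab entry using secretfile instead of secret
#172.18.127.11:6789,172.18.127.12:6789,172.18.127.13:6789:/ /ceph ceph _netdev,name=admin,secretfile=/etc/ceph/admin.secret 0 0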

