Enable NTP time synchronization on all three nodes:
root@wangjm-B550M-K-1:~# timedatectl set-ntp true
root@wangjm-B550M-K-2:~# timedatectl set-ntp true
root@wangjm-B550M-K-3:~# timedatectl set-ntp true
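Optionally, verify on each node that the clock really is being synchronized before bootstrapping, since cephadm checks for an active time-sync unit. A minimal check, assuming a systemd recent enough for timedatectl to print these fields:
root@wangjm-B550M-K-1:~# timedatectl status | grep -E 'System clock synchronized|NTP service'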
Deploy the first node of the Ceph cluster
Install the deployment tool cephadm
root@wangjm-B550M-K-1:~# apt install cephadm
root@wangjm-B550M-K-1:~# cephadm version
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Deploy a new cluster
First, run the following on the first node:
root@wangjm-B550M-K-1:~# cephadm bootstrap --mon-ip 192.168.1.8
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit systemd-timesyncd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Host looks OK
Cluster fsid: 92046bac-05dd-11ef-979f-572db13abde1
Verifying IP 192.168.1.8 port 3300 ...
Verifying IP 192.168.1.8 port 6789 ...
Mon IP `192.168.1.8` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.8` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host wangjm-B550M-K-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://wangjm-B550M-K-1:8443/
User: admin
Password: 22w0czoxwe
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/92046bac-05dd-11ef-979f-572db13abde1/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid 92046bac-05dd-11ef-979f-572db13abde1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
This bootstrap command will:
Create a Monitor and a Manager daemon for the new cluster on the local host.
Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.
Write a copy of the public key to /etc/ceph/ceph.pub.
Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with Ceph daemons.
Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
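At this point the cluster consists of a single host. One way to confirm that the bootstrap host was registered and carries the _admin label (a sketch, run through cephadm shell since ceph-common is not installed yet):
root@wangjm-B550M-K-1:~# cephadm shell -- ceph orch host ls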
Check the Docker containers
root@wangjm-B550M-K-1:~# docker ps -a
CONTAINER ID   IMAGE                                     COMMAND                  CREATED             STATUS             PORTS     NAMES
a0a08a3bae66   quay.io/ceph/ceph-grafana:9.4.7           "/bin/sh -c 'grafana…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-grafana-wangjm-B550M-K-1
1203d725c42f   quay.io/prometheus/alertmanager:v0.25.0   "/bin/alertmanager -…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-alertmanager-wangjm-B550M-K-1
6d1a2197b68e   quay.io/prometheus/prometheus:v2.43.0     "/bin/prometheus --c…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-prometheus-wangjm-B550M-K-1
abac2ea9d975   quay.io/prometheus/node-exporter:v1.5.0   "/bin/node_exporter …"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-node-exporter-wangjm-B550M-K-1
d3fb63b2a534   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-crash-wangjm-B550M-K-1
fcf870693fc9   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-mgr-wangjm-B550M-K-1-uhkxdb
7acca2673cf3   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   About an hour ago   Up About an hour             ceph-92046bac-05dd-11ef-979f-572db13abde1-mon-wangjm-B550M-K-1
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command. There are several ways to do this:
The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed.
cephadm shell
cephadm shell -- ceph -s
You can install the ceph-common package, which contains all of the Ceph commands, including ceph, rbd, and mount.ceph (for mounting CephFS file systems).
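A sketch of that second option on this host, using cephadm's own repo helper (the release name quincy matches the version deployed above; a plain apt install ceph-common also works on Ubuntu):
root@wangjm-B550M-K-1:~# cephadm add-repo --release quincy
root@wangjm-B550M-K-1:~# cephadm install ceph-common
root@wangjm-B550M-K-1:~# ceph -v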
The effect of ceph orch apply is persistent. This means that drives that are added to the system after the ceph orch apply command completes will be automatically found and added to the cluster. It also means that drives that become available (by zapping, for example) after the ceph orch apply command completes will be automatically found and added to the cluster.
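For context, the command that paragraph describes is the Ceph docs' catch-all OSD spec. Listing the candidate devices first is a sensible precaution, since applying the spec claims every empty, available drive as an OSD:
root@wangjm-B550M-K-1:~# ceph orch device ls
root@wangjm-B550M-K-1:~# ceph orch apply osd --all-available-devices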
Configure s3cmd to test S3 access. This assumes an RGW service is already listening on port 8000 and an S3 user with the keys below has been created:
root@wangjm-B550M-K-1:~# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 14VL52S6K82BSIV4XHB4
Secret Key: bN1GVi5aE3lLLZoKDrFoDJ8CbmTnfrQuGkiBqpUt
Default Region [US]: CN
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: ole12138.top:8000
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: wangjm-b1.ole12138.top:8000
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: 14VL52S6K82BSIV4XHB4
Secret Key: bN1GVi5aE3lLLZoKDrFoDJ8CbmTnfrQuGkiBqpUt
Default Region: CN
S3 Endpoint: ole12138.top:8000
DNS-style bucket+hostname:port template for accessing a bucket: wangjm-b1.ole12138.top:8000
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
root@wangjm-B550M-K-1:~# cat /root/.s3cfg
[default]
access_key = 14VL52S6K82BSIV4XHB4
access_token =
add_encoding_exts =
add_headers =
bucket_location = CN
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_max_age = 5
connection_pooling = True
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = ole12138.top:8000
host_bucket = wangjm-b1.ole12138.top:8000
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_copy_chunk_size_mb = 1024
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = bN1GVi5aE3lLLZoKDrFoDJ8CbmTnfrQuGkiBqpUt
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
ssl_client_cert_file =
ssl_client_key_file =
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
root@wangjm-B550M-K-1:~# s3cmd ls
2024-05-05 16:13 s3://wangjm-b1
root@wangjm-B550M-K-1:~# s3cmd put /mnt/tmp1/hello.txt s3://wangjm-b1
upload: '/mnt/tmp1/hello.txt' -> 's3://wangjm-b1/hello.txt' [1 of 1]
6 of 6 100% in 1s 4.39 B/s done
root@wangjm-B550M-K-1:~# s3cmd ls s3://wangjm-b1
2024-05-05 16:55 6 s3://wangjm-b1/hello.txt
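Downloading and deleting objects work the same way; for example (the local target path is arbitrary):
root@wangjm-B550M-K-1:~# s3cmd get s3://wangjm-b1/hello.txt /tmp/hello.txt
root@wangjm-B550M-K-1:~# s3cmd del s3://wangjm-b1/hello.txt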
Create a CephFS
Reference: https://www.cnblogs.com/hukey/p/17828946.html
Reference: https://juejin.cn/post/7155656346048659493
Reference: https://docs.ceph.com/en/latest/cephfs/
Method 1: create the file system and let Ceph allocate its pools automatically
# Create a CephFS volume named (for example) "cephfs":
ceph fs volume create cephfs
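If this succeeds, the orchestrator schedules MDS daemons and creates the metadata and data pools behind the scenes. A quick sanity check (recent Ceph releases name the pools cephfs.<fsname>.meta and cephfs.<fsname>.data):
# Verify the new file system, its MDS state, and its backing pools:
ceph fs ls
ceph fs status cephfs
ceph osd pool ls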
Before mounting CephFS, ensure that the client host (where CephFS has to be mounted and used) has a copy of the Ceph configuration file (i.e. ceph.conf) and a keyring of the CephX user that has permission to access the MDS. Both of these files must already be present on the host where the Ceph MON resides.
Generate a minimal conf file for the client host and place it at a standard location:
# on client host
mkdir -p -m 755 /etc/ceph
ssh {user}@{mon-host} "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
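The upstream docs also set conventional permissions on the generated file:
chmod 644 /etc/ceph/ceph.conf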
Alternatively, you may copy the conf file. But the above method generates a conf with minimal details which is usually sufficient. For more information, see Client Authentication and Bootstrap options.
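The paragraph below refers to the keyring-creation command from the same docs page, which was omitted here; run from the client host, it is:
# on client host
ssh {user}@{mon-host} "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
chmod 600 /etc/ceph/ceph.client.foo.keyring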
In the above command, replace cephfs with the name of your CephFS and foo with the name you want for your CephX user; / is the path within your CephFS to which the client is granted access, and rw grants both read and write permissions. Alternatively, you may copy the Ceph keyring from the MON host to the client host at /etc/ceph, but creating a keyring specific to the client host is better. When creating a CephX keyring/client, using the same client name across multiple machines is perfectly fine.
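With the conf and keyring in place, the file system can then be mounted from the client. A minimal sketch using the kernel client via the mount.ceph helper from ceph-common (the mount point is arbitrary; older kernels need the explicit mon-address syntax instead):
mkdir -p /mnt/cephfs
mount -t ceph foo@.cephfs=/ /mnt/cephfs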