We are experiencing slow OSD heartbeats with Ceph after deploying it via PVE, with the network backed by a 10G Cisco switch with the vPC feature enabled. Running `ceph -s` shows slow OSD heartbeats on both the back and front networks, with the longest latency at 2250.54 ms. We suspect this may be a network issue, but we are unsure how Ceph detects such long latency. We also noticed a large number of UDP packets on port 5405 and the `corosync` process using a significant amount of CPU. Ping latency between the nodes is around 0.1 ms, and flood ping occasionally shows 2% packet loss, though not consistently. So far we have not been able to identify any network traffic issue.

---

I am currently using Ceph for replicated storage, holding many objects across 5 nodes with 3x replication. When I generate ~1000 read requests to a single object, they are all serviced by the same primary OSD. I would like to balance the reads across the replicas, so I submit the read like this:

```c
rados_read_op_read(read_op, offset, outSize, buffer, &bytes_read, &prval);
err = rados_read_op_operate(read_op, pool->ioctx, keyName.c_str(), LIBRADOS_OPERATION_BALANCE_READS);
```

However, this does not seem to balance the reads across the replicas. `ceph-mon` and `ceph-osd` run on Ubuntu 22.04, installed via apt (the ceph, ceph-mds, and ceph-volume packages). I do not see what I am doing wrong in the above code. If I should ask this question somewhere else, please point me in the right direction.
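For context, a minimal, self-contained sketch of the balanced-read call sequence in plain C (the pool name `mypool` and object name `myobject` are placeholders, and this assumes a reachable cluster with a default `ceph.conf`):

```c
#include <stdio.h>
#include <rados/librados.h>

int main(void) {
    rados_t cluster;
    rados_ioctx_t ioctx;
    char buf[128];
    size_t bytes_read = 0;
    int prval = 0;

    /* Connect using the default ceph.conf and keyring. */
    if (rados_create(&cluster, NULL) < 0) return 1;
    if (rados_conf_read_file(cluster, NULL) < 0) return 1;
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "mypool", &ioctx) < 0) return 1; /* placeholder pool */

    /* Stage a 128-byte read at offset 0, then submit it with
       BALANCE_READS so the read may be served by a replica OSD
       instead of always going to the primary. */
    rados_read_op_t op = rados_create_read_op();
    rados_read_op_read(op, 0, sizeof(buf), buf, &bytes_read, &prval);
    int err = rados_read_op_operate(op, ioctx, "myobject",
                                    LIBRADOS_OPERATION_BALANCE_READS);
    rados_release_read_op(op);

    if (err == 0 && prval == 0)
        printf("read %zu bytes\n", bytes_read);

    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return err == 0 ? 0 : 1;
}
```

Note the flag is passed to `rados_read_op_operate`, not to the staged `rados_read_op_read` itself; this sketch cannot run without a live cluster, so treat it as a reference for the call order only.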
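For the slow OSD heartbeat described above: Ceph measures heartbeat ping times between OSDs itself, and recent front/back latencies can be dumped from an OSD's admin socket. A sketch of the diagnostic commands (osd.0 is a placeholder; run on the node hosting that OSD):

```shell
# Dump the heartbeat ping times osd.0 has recorded on its front and
# back networks; the trailing argument is a threshold in milliseconds
# (0 shows all entries rather than only slow ones).
ceph daemon osd.0 dump_osd_network 0

# Health detail also names the OSD pairs with slow heartbeat pings.
ceph health detail
```

This shows where Ceph's 2250.54 ms figure comes from: it is the heartbeat round-trip time tracked per OSD pair, not an ICMP ping, so a clean 0.1 ms ping does not rule out queuing or loss on the paths the heartbeats actually take.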