feat(routes/ceph): add ceph blog #16980

Merged
merged 4 commits into from
Oct 1, 2024
Conversation

pandada8 (Contributor) commented Oct 1, 2024

Involved Issue / 该 PR 相关 Issue

N/A

Example for the Proposed Route(s) / 路由地址示例

/ceph/blog/
/ceph/blog/a11y

New RSS Route Checklist / 新 RSS 路由检查表

  • New Route / 新的路由
  • Anti-bot or rate limit / 反爬/频率限制
    • If yes, does your code account for it? / 如果有, 是否有对应的措施?
  • Date and time / 日期和时间 (see the date-parsing sketch after this list)
    • Parsed / 可以解析
    • Correct time zone / 时区正确
  • New package added / 添加了新的包
  • Puppeteer
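
For the date-and-time items above, here is a minimal, hypothetical sketch of parsing the blog's publication timestamp into a UTC pubDate. It assumes the page exposes an ISO-8601 datetime (e.g. in a <time datetime="..."> attribute); it is illustrative only, not the code merged in this PR:

    // Hypothetical: the ISO string would come from the post's markup,
    // e.g. a <time datetime="..."> attribute (an assumption about the page).
    const published = new Date('2024-09-25T16:00:00Z');
    // RSS pubDate values are rendered in UTC, matching the feed output below.
    console.log(published.toUTCString()); // "Wed, 25 Sep 2024 16:00:00 GMT"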

Note / 说明

Add an RSS feed for the Ceph blog.
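
For illustration, a minimal standalone sketch of a scraper along these lines, using cheerio and the global fetch available in Node 18+. The URL shapes, the CSS selector, and the optional category parameter (covering both /ceph/blog/ and /ceph/blog/a11y) are assumptions about the ceph.io page structure, not the actual route code merged in this PR:

    import * as cheerio from 'cheerio';

    interface FeedItem {
        title: string;
        link: string;
    }

    // Fetch and parse the Ceph blog index (or an assumed category page such as "a11y").
    async function fetchCephBlog(category?: string): Promise<FeedItem[]> {
        const base = 'https://ceph.io';
        const url = category
            ? `${base}/en/news/blog/category/${category}/` // assumed category URL shape
            : `${base}/en/news/blog/`;
        const html = await (await fetch(url)).text();
        const $ = cheerio.load(html);

        // Guess at the page structure: collect links that point at blog posts.
        return $('a[href^="/en/news/blog/"]')
            .toArray()
            .map((el) => {
                const $el = $(el);
                return {
                    title: $el.text().trim(),
                    link: base + ($el.attr('href') ?? ''),
                };
            })
            .filter((item) => item.title.length > 0);
    }

    // Usage: fetchCephBlog('a11y').then((items) => console.log(items.slice(0, 5)));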

@github-actions github-actions bot added Route Auto: Route Test Complete Auto route test has finished on given PR labels Oct 1, 2024
github-actions bot commented Oct 1, 2024

Successfully generated as follows:

http://localhost:1200/ceph/blog/ - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:09:07 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients which are not advancing their request tids which causes a large buildup of session metadata resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size that a (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvments&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per bucket and user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;A POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which lead to an excessive amount of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return a x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommended running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. A distinct issue from the one described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/ Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not effect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fix issue with persistent notifications where the changes to topic param that were modified while persistent notifications were in the queue will be reflected in notifications. So if the user sets up topic with incorrect config (password/ssl) causing failure while delivering the notifications to broker, can now modify the incorrect topic attribute and on retry attempt to delivery the notifications, new configs will be used.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags that allows us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). (This is optional, but recommended.) You can disable the autoscaler for all pools during the upgrade using the noautoscale flag.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; loaded active running &amp;nbsp; Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors hasn&#39;t been upgraded and restarted and/or the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status # ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;aside&gt;&lt;hr class=&quot;hr lg:hidden my-16&quot;&gt;&lt;div class=&quot;grid md-to-lg:grid--cols-2 to-md:grid--gap-14 lg:grid--gap-0&quot;&gt;&lt;div&gt;&lt;h2 class=&quot;h5&quot;&gt;Share this article&lt;/h2&gt;&lt;div class=&quot;social-shares&quot;&gt;&lt;a class=&quot;social-shares__twitter&quot; href=&quot;https://twitter.com/share?url=https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/&amp;amp;text=v19.2.0%20Squid%20released&amp;amp;via=ceph&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg 
xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M36 16.575c-.9.375-1.837.675-2.813.788a4.958 4.958 0 002.175-2.738 9.771 9.771 0 01-3.112 1.2c-.938-.975-2.212-1.575-3.637-1.575a4.905 4.905 0 00-4.913 4.913c0 .375.038.75.113 1.125-4.088-.188-7.726-2.175-10.125-5.138a4.986 4.986 0 00-.676 2.475c0 1.725.863 3.225 2.175 4.087a5.231 5.231 0 01-2.212-.6v.076c0 2.4 1.688 4.387 3.938 4.837a5.043 5.043 0 01-1.313.188c-.3 0-.637-.038-.938-.076a4.935 4.935 0 004.613 3.413 9.962 9.962 0 01-6.112 2.1c-.413 0-.788-.037-1.163-.075 2.175 1.35 4.762 2.175 7.538 2.175 9.075 0 14.024-7.5 14.024-14.025v-.638A10.355 10.355 0 0036 16.575z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Twitter&lt;/span&gt; &lt;/a&gt;&lt;a class=&quot;social-shares__facebook&quot; href=&quot;https://www.facebook.com/sharer.php?u=https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M21.043 36.75V25.7H17.25v-5.1h3.793v-3.562c0-3.88 2.456-5.788 5.917-5.788 1.658 0 3.082.123 3.498.179v4.054l-2.4.001c-1.883 0-2.308.894-2.308 2.207V20.6h5.1l-1.7 5.1h-3.4v11.05h-4.707z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Facebook&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;hr class=&quot;hidden lg:block hr&quot;&gt;&lt;/div&gt;&lt;div&gt;&lt;h2 id=&quot;tags-desc&quot; class=&quot;h5&quot;&gt;Read more articles like this&lt;/h2&gt;&lt;ul class=&quot;list-none p-0&quot; aria-describedby=&quot;tags-desc&quot;&gt;&lt;li class=&quot;mb-3&quot;&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/release/&quot;&gt;release&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/squid/&quot;&gt;squid&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;/aside&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div&gt;&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator adorning this year’s object d’arte!, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN, in Geneva, Switzerland this December, in recognition!&lt;/p&gt;&lt;p&gt;You don’t need to be an artist nor a graphics designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (need to avoid licensing issues). Also, please limit to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s Image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;/div&gt;&lt;aside&gt;&lt;hr class=&quot;hr lg:hidden my-16&quot;&gt;&lt;div class=&quot;grid md-to-lg:grid--cols-2 to-md:grid--gap-14 lg:grid--gap-0&quot;&gt;&lt;div&gt;&lt;h2 class=&quot;h5&quot;&gt;Share this article&lt;/h2&gt;&lt;div class=&quot;social-shares&quot;&gt;&lt;a class=&quot;social-shares__twitter&quot; href=&quot;https://twitter.com/share?url=https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/&amp;amp;text=Cephalocon%202024%20Shirt%20Design%20Competition&amp;amp;via=ceph&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M36 16.575c-.9.375-1.837.675-2.813.788a4.958 4.958 0 002.175-2.738 9.771 9.771 0 01-3.112 1.2c-.938-.975-2.212-1.575-3.637-1.575a4.905 4.905 0 00-4.913 4.913c0 .375.038.75.113 1.125-4.088-.188-7.726-2.175-10.125-5.138a4.986 4.986 0 00-.676 2.475c0 1.725.863 3.225 2.175 4.087a5.231 
5.231 0 01-2.212-.6v.076c0 2.4 1.688 4.387 3.938 4.837a5.043 5.043 0 01-1.313.188c-.3 0-.637-.038-.938-.076a4.935 4.935 0 004.613 3.413 9.962 9.962 0 01-6.112 2.1c-.413 0-.788-.037-1.163-.075 2.175 1.35 4.762 2.175 7.538 2.175 9.075 0 14.024-7.5 14.024-14.025v-.638A10.355 10.355 0 0036 16.575z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Twitter&lt;/span&gt; &lt;/a&gt;&lt;a class=&quot;social-shares__facebook&quot; href=&quot;https://www.facebook.com/sharer.php?u=https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M21.043 36.75V25.7H17.25v-5.1h3.793v-3.562c0-3.88 2.456-5.788 5.917-5.788 1.658 0 3.082.123 3.498.179v4.054l-2.4.001c-1.883 0-2.308.894-2.308 2.207V20.6h5.1l-1.7 5.1h-3.4v11.05h-4.707z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Facebook&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;hr class=&quot;hidden lg:block hr&quot;&gt;&lt;/div&gt;&lt;div&gt;&lt;h2 id=&quot;tags-desc&quot; class=&quot;h5&quot;&gt;Read more articles like this&lt;/h2&gt;&lt;ul class=&quot;list-none p-0&quot; aria-describedby=&quot;tags-desc&quot;&gt;&lt;li class=&quot;mb-3&quot;&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/ceph/&quot;&gt;ceph&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/cephalocon/&quot;&gt;cephalocon&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;/aside&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Wed, 31 Jul 2024 16:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&

...
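For reviewers who want a mental model of the route before reading the diff: below is a minimal, hypothetical sketch of the scraping flow that produces output like the feeds above. It is **not** the code from this PR — the `fetchCephBlogItems` helper, the CSS selector, and the direct use of `ofetch`/`cheerio` are all assumptions for illustration; the real route wraps this kind of logic in RSSHub's route-handler conventions.

```ts
// Hypothetical sketch only -- not the implementation in this PR.
import { load } from 'cheerio'; // HTML parsing, as used across RSSHub routes
import { ofetch } from 'ofetch'; // HTTP client

interface FeedItem {
    title: string;
    link: string;
}

async function fetchCephBlogItems(category?: string): Promise<FeedItem[]> {
    // `/ceph/blog/` maps to the blog index; `/ceph/blog/a11y` to a category page.
    const base = 'https://ceph.io/en/news/blog/';
    const url = category ? `${base}category/${category}/` : base;

    const html = await ofetch<string>(url);
    const $ = load(html);

    // Assumed selector: post links on the index contain a year segment,
    // e.g. /en/news/blog/2024/v19-2-0-squid-released/.
    return $('a[href*="/news/blog/20"]')
        .toArray()
        .map((el) => ({
            title: $(el).text().trim(),
            // Resolve relative hrefs against the site origin.
            link: new URL($(el).attr('href')!, 'https://ceph.io').href,
        }))
        .filter((item) => item.title.length > 0);
}
```

Under those assumptions this would yield links like `https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/`, matching the items in the test output above; full-text extraction and date parsing would then be done per article page.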

github-actions bot commented Oct 1, 2024

http://localhost:1200/ceph/blog/a11y - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog/a11y" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:09:08 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients which are not advancing their request tids which causes a large buildup of session metadata resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size that a (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvments&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per bucket and user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;A POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which lead to an excessive amount of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return a x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommended running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. A distinct issue from the one described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/ Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not effect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fix issue with persistent notifications where the changes to topic param that were modified while persistent notifications were in the queue will be reflected in notifications. So if the user sets up topic with incorrect config (password/ssl) causing failure while delivering the notifications to broker, can now modify the incorrect topic attribute and on retry attempt to delivery the notifications, new configs will be used.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags that allows us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). (This is optional, but recommended.) You can disable the autoscaler for all pools during the upgrade using the noautoscale flag.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; loaded active running &amp;nbsp; Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors hasn&#39;t been upgraded and restarted and/or the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status # ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;aside&gt;&lt;hr class=&quot;hr lg:hidden my-16&quot;&gt;&lt;div class=&quot;grid md-to-lg:grid--cols-2 to-md:grid--gap-14 lg:grid--gap-0&quot;&gt;&lt;div&gt;&lt;h2 class=&quot;h5&quot;&gt;Share this article&lt;/h2&gt;&lt;div class=&quot;social-shares&quot;&gt;&lt;a class=&quot;social-shares__twitter&quot; href=&quot;https://twitter.com/share?url=https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/&amp;amp;text=v19.2.0%20Squid%20released&amp;amp;via=ceph&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg 
xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M36 16.575c-.9.375-1.837.675-2.813.788a4.958 4.958 0 002.175-2.738 9.771 9.771 0 01-3.112 1.2c-.938-.975-2.212-1.575-3.637-1.575a4.905 4.905 0 00-4.913 4.913c0 .375.038.75.113 1.125-4.088-.188-7.726-2.175-10.125-5.138a4.986 4.986 0 00-.676 2.475c0 1.725.863 3.225 2.175 4.087a5.231 5.231 0 01-2.212-.6v.076c0 2.4 1.688 4.387 3.938 4.837a5.043 5.043 0 01-1.313.188c-.3 0-.637-.038-.938-.076a4.935 4.935 0 004.613 3.413 9.962 9.962 0 01-6.112 2.1c-.413 0-.788-.037-1.163-.075 2.175 1.35 4.762 2.175 7.538 2.175 9.075 0 14.024-7.5 14.024-14.025v-.638A10.355 10.355 0 0036 16.575z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Twitter&lt;/span&gt; &lt;/a&gt;&lt;a class=&quot;social-shares__facebook&quot; href=&quot;https://www.facebook.com/sharer.php?u=https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M21.043 36.75V25.7H17.25v-5.1h3.793v-3.562c0-3.88 2.456-5.788 5.917-5.788 1.658 0 3.082.123 3.498.179v4.054l-2.4.001c-1.883 0-2.308.894-2.308 2.207V20.6h5.1l-1.7 5.1h-3.4v11.05h-4.707z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Facebook&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;hr class=&quot;hidden lg:block hr&quot;&gt;&lt;/div&gt;&lt;div&gt;&lt;h2 id=&quot;tags-desc&quot; class=&quot;h5&quot;&gt;Read more articles like this&lt;/h2&gt;&lt;ul class=&quot;list-none p-0&quot; aria-describedby=&quot;tags-desc&quot;&gt;&lt;li class=&quot;mb-3&quot;&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/release/&quot;&gt;release&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/squid/&quot;&gt;squid&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;/aside&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div&gt;&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator adorning this year’s object d’arte!, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN, in Geneva, Switzerland this December, in recognition!&lt;/p&gt;&lt;p&gt;You don’t need to be an artist nor a graphics designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (need to avoid licensing issues). Also, please limit to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s Image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;/div&gt;&lt;aside&gt;&lt;hr class=&quot;hr lg:hidden my-16&quot;&gt;&lt;div class=&quot;grid md-to-lg:grid--cols-2 to-md:grid--gap-14 lg:grid--gap-0&quot;&gt;&lt;div&gt;&lt;h2 class=&quot;h5&quot;&gt;Share this article&lt;/h2&gt;&lt;div class=&quot;social-shares&quot;&gt;&lt;a class=&quot;social-shares__twitter&quot; href=&quot;https://twitter.com/share?url=https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/&amp;amp;text=Cephalocon%202024%20Shirt%20Design%20Competition&amp;amp;via=ceph&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M36 16.575c-.9.375-1.837.675-2.813.788a4.958 4.958 0 002.175-2.738 9.771 9.771 0 01-3.112 1.2c-.938-.975-2.212-1.575-3.637-1.575a4.905 4.905 0 00-4.913 4.913c0 .375.038.75.113 1.125-4.088-.188-7.726-2.175-10.125-5.138a4.986 4.986 0 00-.676 2.475c0 1.725.863 3.225 2.175 4.087a5.231 
5.231 0 01-2.212-.6v.076c0 2.4 1.688 4.387 3.938 4.837a5.043 5.043 0 01-1.313.188c-.3 0-.637-.038-.938-.076a4.935 4.935 0 004.613 3.413 9.962 9.962 0 01-6.112 2.1c-.413 0-.788-.037-1.163-.075 2.175 1.35 4.762 2.175 7.538 2.175 9.075 0 14.024-7.5 14.024-14.025v-.638A10.355 10.355 0 0036 16.575z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Twitter&lt;/span&gt; &lt;/a&gt;&lt;a class=&quot;social-shares__facebook&quot; href=&quot;https://www.facebook.com/sharer.php?u=https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/&quot; rel=&quot;noreferrer noopener&quot; target=&quot;_blank&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; width=&quot;48&quot; height=&quot;48&quot; viewBox=&quot;0 0 48 48&quot; aria-hidden=&quot;true&quot; focusable=&quot;false&quot;&gt;&lt;g fill=&quot;transparent&quot; fill-rule=&quot;evenodd&quot; stroke-width=&quot;2&quot;&gt;&lt;circle cx=&quot;23.925&quot; cy=&quot;23.925&quot; r=&quot;21.75&quot; stroke=&quot;#eb1414&quot; stroke-linecap=&quot;square&quot;&gt;&lt;/circle&gt;&lt;path fill=&quot;#fff&quot; stroke=&quot;#0a0c38&quot; stroke-linejoin=&quot;round&quot; d=&quot;M21.043 36.75V25.7H17.25v-5.1h3.793v-3.562c0-3.88 2.456-5.788 5.917-5.788 1.658 0 3.082.123 3.498.179v4.054l-2.4.001c-1.883 0-2.308.894-2.308 2.207V20.6h5.1l-1.7 5.1h-3.4v11.05h-4.707z&quot;&gt;&lt;/path&gt;&lt;/g&gt;&lt;/svg&gt; &lt;span class=&quot;visually-hidden&quot;&gt;Facebook&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;hr class=&quot;hidden lg:block hr&quot;&gt;&lt;/div&gt;&lt;div&gt;&lt;h2 id=&quot;tags-desc&quot; class=&quot;h5&quot;&gt;Read more articles like this&lt;/h2&gt;&lt;ul class=&quot;list-none p-0&quot; aria-describedby=&quot;tags-desc&quot;&gt;&lt;li class=&quot;mb-3&quot;&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/ceph/&quot;&gt;ceph&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a class=&quot;button button--pill&quot; href=&quot;https://ceph.io/en/news/blog/category/cephalocon/&quot;&gt;cephalocon&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;/aside&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Wed, 31 Jul 2024 16:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull

Resolved review threads (now outdated): lib/routes/ceph/blog.ts (five threads) and lib/routes/ceph/namespace.ts (one thread); a sketch of the typical shape of these two files follows the commit note below.
Co-authored-by: Tony <TonyRL@users.noreply.github.com>
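
For readers unfamiliar with RSSHub's route layout, the two files reviewed above split into a namespace declaration and the route itself. What follows is a minimal sketch of that shape, assuming RSSHub's @/types definitions and its ofetch, cache, and parse-date utilities; the CSS selectors, the optional category parameter, and all other specifics are illustrative assumptions, not the code actually merged in this PR.

// lib/routes/ceph/namespace.ts — namespace declaration (illustrative sketch)
import type { Namespace } from '@/types';

export const namespace: Namespace = {
    name: 'Ceph',
    url: 'ceph.io',
};

// lib/routes/ceph/blog.ts — hedged sketch of a scrape-and-render route;
// the selectors below are guesses at ceph.io's markup, not the merged code.
import { Route } from '@/types';
import cache from '@/utils/cache';
import ofetch from '@/utils/ofetch';
import { parseDate } from '@/utils/parse-date';
import { load } from 'cheerio';

export const route: Route = {
    path: '/blog/:category?',
    example: '/ceph/blog',
    parameters: { category: 'category slug such as release; omit for all posts' },
    name: 'Blog',
    maintainers: ['pandada8'],
    handler: async (ctx) => {
        const category = ctx.req.param('category');
        const baseUrl = 'https://ceph.io';
        const listUrl = category ? `${baseUrl}/en/news/blog/category/${category}/` : `${baseUrl}/en/news/blog/`;

        const $ = load(await ofetch(listUrl));

        // Collect post links from the index page (placeholder selector).
        const list = $('article a')
            .toArray()
            .map((el) => {
                const a = $(el);
                return {
                    title: a.text().trim(),
                    link: new URL(a.attr('href') ?? '', baseUrl).href,
                };
            });

        // Fetch each post once, cache it, and fill in the full article body,
        // which is why the generated <description> above carries the whole post.
        const item = await Promise.all(
            list.map((entry) =>
                cache.tryGet(entry.link, async () => {
                    const $$ = load(await ofetch(entry.link));
                    return {
                        ...entry,
                        description: $$('.richtext').html() ?? '',
                        pubDate: parseDate($$('time').attr('datetime') ?? ''),
                    };
                })
            )
        );

        return { title: 'Ceph Blog', link: listUrl, item };
    },
};

Caching each article keeps repeated feed polls from re-fetching every post, which is the usual RSSHub pattern for full-text feeds like the output shown in the test runs.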

github-actions bot commented Oct 1, 2024

Successfully generated as following:

http://localhost:1200/ceph/blog/ - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:27:55 GMT</lastBuildDate>
    <ttl>5</ttl>
    (The remaining items in this run match the first test output above.)
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator of this year’s objet d’art, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN, in Geneva, Switzerland this December, in recognition?&lt;/p&gt;&lt;p&gt;You don’t need to be an artist or a graphics designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (we need to avoid licensing issues). Also, please limit the design to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth&lt;span&gt;&lt;/span&gt;.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system&amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delay cap flushing if there are ditry caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB&lt;span&gt;&lt;/span&gt;.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer&lt;span&gt;&lt;/span&gt;.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr &lt;span&gt;&lt;/span&gt;.requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth&lt;span&gt;&lt;/span&gt;.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to correct relative 

...
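For context before the second route's output below, here is a minimal sketch of how an RSSHub-style route can turn the ceph.io blog listing into items like those above. This is illustrative only, not the code merged in this PR: it assumes Node 18+ (for the global fetch), the cheerio package, and it uses placeholder CSS selectors and a guessed tag-listing URL pattern for ceph.io.

// Illustrative sketch, not the code from this PR. Assumes Node 18+ (global fetch),
// the cheerio package, and guessed selectors/URL patterns for ceph.io.
import { load } from 'cheerio';

const BASE = 'https://ceph.io';

interface BlogItem {
    title: string;
    link: string;
}

// tag = '' would back /ceph/blog/; tag = 'a11y' would back /ceph/blog/a11y.
// The tag listing path below is an assumption, not a documented ceph.io URL.
async function fetchCephBlog(tag = ''): Promise<BlogItem[]> {
    const listUrl = tag ? `${BASE}/en/news/blog/tag/${tag}/` : `${BASE}/en/news/blog/`;
    const html = await (await fetch(listUrl)).text();
    const $ = load(html);

    // One entry per post link; 'article a' is a placeholder selector.
    return $('article a')
        .toArray()
        .map((el) => ({
            title: $(el).text().trim(),
            link: new URL($(el).attr('href') ?? '', BASE).href,
        }))
        .filter((item) => item.title.length > 0);
}

// Usage: log the items that would back the <item> elements in the feeds shown here.
fetchCephBlog().then((items) => console.log(items.slice(0, 5)));

The merged route's real selectors, date parsing, and full-text extraction would differ; the sketch only shows the shape of such a route: fetch the listing, parse it, and emit title/link pairs that RSSHub serializes into RSS like the dumps in this comment.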

github-actions bot commented Oct 1, 2024

http://localhost:1200/ceph/blog/a11y - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog/a11y" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:27:56 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients which are not advancing their request tids which causes a large buildup of session metadata resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size that a (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvments&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per bucket and user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;A POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which lead to an excessive amount of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return a x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommended running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. A distinct issue from the one described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/ Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not effect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an issue with persistent notifications so that changes made to a topic&#39;s parameters while notifications were still queued are now reflected in those notifications. If a user set up a topic with an incorrect config (password/ssl) that caused delivery to the broker to fail, the incorrect topic attribute can now be modified, and the new config will be used on the retry attempt.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags that allow us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). You can disable the autoscaler for all pools during the upgrade using the noautoscale flag. (This is optional, but recommended.)&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service   loaded active running   Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors haven&#39;t been upgraded and restarted, and/or the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator of the design adorning this year’s objet d’art, and to receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN in Geneva, Switzerland this December, in recognition?&lt;/p&gt;&lt;p&gt;You don’t need to be an artist or a graphic designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (we need to avoid licensing issues). Also, please limit the design to black/white if possible, or at most one additional color, to be budget-friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io//en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible with older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io//en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth&lt;span&gt;&lt;/span&gt;.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system&amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delay cap flushing if there are ditry caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB&lt;span&gt;&lt;/span&gt;.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer&lt;span&gt;&lt;/span&gt;.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr &lt;span&gt;&lt;/span&gt;.requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth&lt;span&gt;&lt;/span&gt;.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to corr

github-actions bot commented Oct 1, 2024

Successfully generated as following:

http://localhost:1200/ceph/blog/ - Success ✔️
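
The pubDate values in the items above (for example, Wed, 25 Sep 2024 16:00:00 GMT) can be spot-checked by consuming the locally generated feed with an ordinary RSS client library. Below is a minimal sketch using the rss-parser npm package, assuming RSSHub is running on localhost:1200 exactly as in the test output; the snippet is illustrative and not part of this PR.

// Parse the locally generated feed and print each item's dates; an
// "Invalid Date" or a pubDate/isoDate mismatch would point at a
// date-parsing or time-zone problem in the route.
import Parser from 'rss-parser';

const parser = new Parser();
const feed = await parser.parseURL('http://localhost:1200/ceph/blog/');

console.log(feed.title); // expected: "Ceph Blog"
for (const item of feed.items) {
    console.log(item.pubDate, item.isoDate, item.title);
}

If every item's isoDate corresponds to the same instant as its pubDate, the route's date handling and time zone are consistent.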
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:55:45 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients which are not advancing their request tids which causes a large buildup of session metadata resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size that a (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvments&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per bucket and user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;A POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which lead to an excessive amount of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return a x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommended running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. A distinct issue from the one described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/ Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not affect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an issue with persistent notifications so that changes made to a topic&#39;s parameters while notifications are still in the queue are now reflected when those notifications are delivered. For example, if a user set up a topic with an incorrect configuration (password/SSL) that caused notification delivery to the broker to fail, the incorrect topic attribute can now be corrected, and the new configuration will be used on the retry attempt.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: In bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags, which allows us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). You can disable the autoscaler for all pools during the upgrade using the noautoscale flag. (This is optional, but recommended.)&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
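        # Optionally, as a pre-flight sanity check (assuming the hosts have registry
        # access), verify that the target image can be pulled before starting:
        # ceph orch upgrade check --image quay.io/ceph/ceph:v19.2.0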
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service   loaded active running   Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
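        # The flag can be confirmed afterwards with: ceph osd dump | grep flags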
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors haven&#39;t been upgraded and restarted and/or that the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
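        # A common practice is to restart one host at a time, waiting until all PGs
        # are active+clean again (check with: ceph -s) before moving to the next host.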
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
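        # The current value can be inspected beforehand with: ceph fs get &amp;lt;fs_name&amp;gt;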
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
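        # The names of the standby daemons are listed in the output of: ceph fs dump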
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
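        # The new setting can be verified with: ceph osd dump | grep require_osd_release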
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
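        # A duration can also be given so the mute expires on its own, for example:
        # ceph health mute OSD_FILESTORE 1w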
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
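        # The current opt-in state and enabled channels can be reviewed afterwards
        # with: ceph telemetry status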
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator of the design adorning this year’s objet d’art, and to receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN in Geneva, Switzerland this December, in recognition?&lt;/p&gt;&lt;p&gt;You don’t need to be an artist or a graphic designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (to avoid licensing issues). Also, please limit it to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s image, for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible with older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: The &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth&lt;span&gt;&lt;/span&gt;.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system&amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delay cap flushing if there are ditry caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB&lt;span&gt;&lt;/span&gt;.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer&lt;span&gt;&lt;/span&gt;.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr &lt;span&gt;&lt;/span&gt;.requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth&lt;span&gt;&lt;/span&gt;.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to correct relative path (&lt;a href=&quot;https:/

...


github-actions bot commented Oct 1, 2024

http://localhost:1200/ceph/blog/a11y - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog/a11y" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 15:55:45 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The MDS now evicts clients which are not advancing their request tids, because this causes a large buildup of session metadata, resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. The &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size to which the (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by the user is the same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using the &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing, which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and the filesystem can be brought online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0, which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default, which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously, one had to use setxattr to restore the default in order to &quot;remove&quot; an attribute. You may now use removexattr to remove it properly. You can also now remove the layout on the root inode, which restores the layout to the default.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (as a percentage as well as in bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If there are more clone jobs than cloner threads, it will print one more progress bar that shows the total progress made by both ongoing and pending clones. Both progress bars are accompanied by messages that show the number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 now that the required Python dependencies are available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found at &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per-bucket and per-user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: A &lt;code&gt;POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using the &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of the POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: The &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which led to an excessive number of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return an x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommend running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. Distinct from the issue described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not affect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an issue with persistent notifications so that changes made to a topic&#39;s parameters while notifications were still queued are reflected when those notifications are delivered. If a user set up a topic with an incorrect configuration (password/SSL) that caused delivery to the broker to fail, the incorrect topic attribute can now be corrected, and the new configuration will be used on the retry attempt.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags that allow us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). You can disable the autoscaler for all pools during the upgrade using the noautoscale flag. (This is optional, but recommended.)&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
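        # optional: preview which daemons the upgrade would touch before starting
        # (upgrade check is a cephadm subcommand; verify availability on your release)
        ceph orch upgrade check --image quay.io/ceph/ceph:v19.2.0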
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
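        # per the note above, ceph versions shows which version each daemon type is running
        ceph versions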
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
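        # -W streams the cluster&#39;s cephadm log channel; interrupt with Ctrl-C when done watching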
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
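        # note: stopping only cancels the orchestration; daemons already restarted on
        # v19.2.0 stay on it, since there is no downgrade back to Quincy or Reef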
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service   loaded active running   Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
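        # optionally also freeze PG autoscaling for the duration, per the note above
        ceph osd set noautoscale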
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
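        # restart one monitor host at a time and let quorum re-form in between
        # (a cautious reading of the step, not an explicit upstream requirement)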
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors haven&#39;t been upgraded and restarted, and/or the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
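        # verify the change took effect (&amp;lt;fs_name&amp;gt; as above)
        ceph fs get &amp;lt;fs_name&amp;gt; | grep max_mds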
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator of the design adorning this year’s objet d’art, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN, in Geneva, Switzerland this December, in recognition!&lt;/p&gt;&lt;p&gt;You don’t need to be an artist or a graphic designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (we need to avoid licensing issues). Also, please limit it to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth&lt;span&gt;&lt;/span&gt;.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system&amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delayed cap flushing if there are dirty caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB&lt;span&gt;&lt;/span&gt;.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer&lt;span&gt;&lt;/span&gt;.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr &lt;span&gt;&lt;/span&gt;.requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth&lt;span&gt;&lt;/span&gt;.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to correct relative path (&lt;a href=

lib/routes/ceph/blog.ts
Contributor

github-actions bot commented Oct 1, 2024

Successfully generated as following:

http://localhost:1200/ceph/blog/ - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 18:46:32 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients which are not advancing their request tids which causes a large buildup of session metadata resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size that a (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than being baked into a custom Grafana image at build time. Official Ceph Grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per-bucket and per-user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: A &lt;code&gt;POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool, irrespective of whether the pool is in use or not. 
Always tag a pool with an application using the &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid the POOL_APP_NOT_ENABLED health warning being reported for that pool. The user can temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: The &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we chose not to condition the fix on a server flag, in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to the &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with the &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for the &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of the &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which led to an excessive number of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, the &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return an x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommend running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. Distinct from the issue described above, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally in the middle of a bucket index transaction, so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues, and users can now expect the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command to produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not affect PutBucketObjectLockConfiguration, where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an issue with persistent notifications so that changes made to a topic&#39;s parameters while notifications are still in the queue are now reflected when those notifications are delivered. If a user set up a topic with an incorrect configuration (password/SSL) that caused deliveries to the broker to fail, the incorrect topic attribute can now be corrected, and the new configuration will be used on the retried delivery attempts.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags, which allow us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). You can disable the autoscaler for all pools during the upgrade using the noautoscale flag. (This is optional, but recommended.)&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what Ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service   loaded active running   Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors have not been upgraded and restarted, and/or that the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
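For reference, the cephadm path described in the item above reduces to a short command sequence. A minimal sketch, assuming a cephadm-managed cluster; every command except the autoscaler flag appears in the notes themselves, and "ceph osd pool set noautoscale" is an assumption about the exact CLI spelling of the noautoscale flag, so verify it against the documentation for your release:

# Optional precautions recommended in the notes
ceph osd pool set noautoscale    # pause PG autoscaling for all pools (flag spelling assumed)
ceph osd set noout               # keep OSDs marked "in" while daemons restart

# Start the fully automated upgrade to Squid
ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0

# Watch progress
ceph orch upgrade status
ceph -W cephadm                  # verbose event stream
ceph versions                    # per-daemon-type version breakdown

# Once every daemon reports 19.2.0, clear the flags
ceph osd unset noout
ceph osd pool unset noautoscale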
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator of the design adorning this year’s objet d’art, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN in Geneva, Switzerland this December, in recognition?&lt;/p&gt;&lt;p&gt;You don’t need to be an artist or a graphic designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (to avoid licensing issues). Also, please limit the design to black/white if possible, or at most one additional color, to be budget-friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s image, for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to the &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5.0.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices.lvm.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph.spec.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph.spec.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delayed cap flush if there are dirty caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr .requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to correct relative path (&lt;a href=&quot;https:/

...
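As a concrete illustration of the rbd children --image-id addition called out in the v18.2.4 notable changes above, a minimal sketch; the pool name and image id are hypothetical placeholders, and the exact flag spellings should be checked against your installed release:

rbd trash ls --long mypool
# Suppose the trash listing shows a parent image with id 5e89a3f0d2c1
rbd children --pool mypool --image-id 5e89a3f0d2c1   # list clones of the trashed image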

Contributor

github-actions bot commented Oct 1, 2024

http://localhost:1200/ceph/blog/a11y - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Ceph Blog</title>
    <link>https://ceph.io/en/news/blog/</link>
    <atom:link href="http://localhost:1200/ceph/blog/a11y" rel="self" type="application/rss+xml"></atom:link>
    <description>Ceph Blog - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Tue, 01 Oct 2024 18:46:33 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>v19.2.0 Squid released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;Squid is the 19th stable release of Ceph.&lt;/p&gt;&lt;p&gt;This is the first stable release of Ceph Squid.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;ATTENTION:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;iSCSI users are advised that the upstream developers of Ceph encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read &lt;a href=&quot;https://tracker.ceph.com/issues/68215&quot;&gt;Tracker Issue 68215&lt;/a&gt; before attempting an upgrade to 19.2.0.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Contents:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#changes&quot;&gt;Major Changes from Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade&quot;&gt;Upgrading from Quincy or Reef&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrade-from-older-release&quot;&gt;Upgrading from pre-Quincy releases (like Pacific)&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#contributors&quot;&gt;Thank You to Our Contributors&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;major-changes-from-reef&quot;&gt;&lt;a id=&quot;changes&quot;&gt;&lt;/a&gt;Major Changes from Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#major-changes-from-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;h3 id=&quot;highlights&quot;&gt;Highlights &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#highlights&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;RADOS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/li&gt;&lt;li&gt;BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/li&gt;&lt;li&gt;Other improvements include more flexible EC configurations, an OpTracker to help debug mgr module issues, and better scrub scheduling.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Dashboard&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Improved navigation layout&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;CephFS&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/li&gt;&lt;li&gt;Manage authorization capabilities for CephFS resources&lt;/li&gt;&lt;li&gt;Helpers on mounting a CephFS volume&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RBD&lt;/p&gt;&lt;ul&gt;&lt;li&gt;diff-iterate can now execute locally, bringing a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;Support for cloning from non-user type snapshots is added.&lt;/li&gt;&lt;li&gt;rbd-wnbd driver has gained the ability to multiplex image mappings.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;RGW&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Crimson/Seastore&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ceph&quot;&gt;Ceph &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#ceph&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;ceph: a new &lt;code&gt;--daemon-output-file&lt;/code&gt; switch is available for &lt;code&gt;ceph tell&lt;/code&gt; commands to dump output to a file local to the daemon. For commands which produce large amounts of output, this avoids a potential spike in memory usage on the daemon, allows for faster streaming writes to a file local to the daemon, and reduces time holding any locks required to execute the command. For analysis, it is necessary to manually retrieve the file from the host running the daemon. Currently, only &lt;code&gt;--format=json|json-pretty&lt;/code&gt; are supported.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;cls_cxx_gather&lt;/code&gt; is marked as deprecated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracing: The blkin tracing feature (see &lt;a href=&quot;https://docs.ceph.com/en/reef/dev/blkin/&quot;&gt;https://docs.ceph.com/en/reef/dev/blkin/&lt;/a&gt;) is now deprecated in favor of Opentracing (&lt;a href=&quot;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&quot;&gt;https://docs.ceph.com/en/reef/dev/developer_guide/jaegertracing/&lt;/a&gt;) and will be removed in a later release.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PG dump: The default output of &lt;code&gt;ceph pg dump --format json&lt;/code&gt; has changed. The default JSON format produces a rather massive output in large clusters and isn&#39;t scalable, so we have removed the &#39;network_ping_times&#39; section from the output. Details in the tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/57460&quot;&gt;https://tracker.ceph.com/issues/57460&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephfs&quot;&gt;CephFS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephfs&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CephFS: it is now possible to pause write I/O and metadata mutations on a tree in the file system using a new suite of subvolume quiesce commands. This is implemented to support crash-consistent snapshots for distributed applications. Please see the relevant section in the documentation on CephFS subvolumes for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS evicts clients that are not advancing their request tids, which causes a large buildup of session metadata, resulting in the MDS going read-only due to the RADOS operation exceeding the size threshold. The &lt;code&gt;mds_session_metadata_threshold&lt;/code&gt; config controls the maximum size to which the (encoded) session metadata can grow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: A new &quot;mds last-seen&quot; command is available for querying the last time an MDS was in the FSMap, subject to a pruning threshold.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the &#39;--fs&#39; argument.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The period specifier &lt;code&gt;m&lt;/code&gt; now implies minutes and the period specifier &lt;code&gt;M&lt;/code&gt; now implies months. 
This has been made consistent with the rest of the system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Running the command &quot;ceph fs authorize&quot; for an existing entity now upgrades the entity&#39;s capabilities instead of printing an error. It can now also change read/write permissions in a capability that the entity already holds. If the capability passed by user is same as one of the capabilities that the entity already holds, idempotency is maintained.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Two FS names can now be swapped, optionally along with their IDs, using &quot;ceph fs swap&quot; command. The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing which would prompt a higher level storage operator (like Rook) to recreate the missing file system. See &lt;a href=&quot;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&quot;&gt;https://docs.ceph.com/en/latest/cephfs/administration/#file-systems&lt;/a&gt; docs for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Before running the command &quot;ceph fs rename&quot;, the filesystem to be renamed must be offline and the config &quot;refuse_client_session&quot; must be set for it. The config &quot;refuse_client_session&quot; can be removed/unset and filesystem can be online after the rename operation is complete.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Disallow delegating preallocated inode ranges to clients. Config &lt;code&gt;mds_client_delegate_inos_pct&lt;/code&gt; defaults to 0 which disables async dirops in the kclient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: MDS log trimming is now driven by a separate thread which tries to trim the log every second (&lt;code&gt;mds_log_trim_upkeep_interval&lt;/code&gt; config). Also, a couple of configs govern how much time the MDS spends in trimming its logs. These configs are &lt;code&gt;mds_log_trim_threshold&lt;/code&gt; and &lt;code&gt;mds_log_trim_decay_rate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Full support for subvolumes and subvolume groups is now available&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: The &lt;code&gt;subvolume snapshot clone&lt;/code&gt; command now depends on the config option &lt;code&gt;snapshot_clone_no_wait&lt;/code&gt; which is used to reject the clone operation when all the cloner threads are busy. This config option is enabled by default which means that if no cloner threads are free, the clone request errors out with EAGAIN. The value of the config option can be fetched by using: &lt;code&gt;ceph config get mgr mgr/volumes/snapshot_clone_no_wait&lt;/code&gt; and it can be disabled by using: &lt;code&gt;ceph config set mgr mgr/volumes/snapshot_clone_no_wait false&lt;/code&gt; for snap_schedule Manager module.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Commands &lt;code&gt;ceph mds fail&lt;/code&gt; and &lt;code&gt;ceph fs fail&lt;/code&gt; now require a confirmation flag when some MDSs exhibit health warning MDS_TRIM or MDS_CACHE_OVERSIZED. This is to prevent accidental MDS failover causing further delays in recovery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: fixes to the implementation of the &lt;code&gt;root_squash&lt;/code&gt; mechanism enabled via cephx &lt;code&gt;mds&lt;/code&gt; caps on a client credential require a new client feature bit, &lt;code&gt;client_mds_auth_caps&lt;/code&gt;. 
Clients using credentials with &lt;code&gt;root_squash&lt;/code&gt; without this feature will trigger the MDS to raise a HEALTH_ERR on the cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this warning and the new feature bit for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: Expanded removexattr support for cephfs virtual extended attributes. Previously one had to use setxattr to restore the default in order to &quot;remove&quot;. You may now properly use removexattr to remove. You can also now remove layout on root inode, which then will restore layout to default layout.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: cephfs-journal-tool is guarded against running on an online file system. The &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset&#39; and &#39;cephfs-journal-tool --rank &amp;lt;fs_name&amp;gt;:&amp;lt;mds_rank&amp;gt; journal reset --force&#39; commands require &#39;--yes-i-really-really-mean-it&#39;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph fs clone status&quot; command will now print statistics about clone progress in terms of how much data has been cloned (in both percentage as well as bytes) and how many files have been cloned.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;CephFS: &quot;ceph status&quot; command will now print a progress bar when cloning is ongoing. If clone jobs are more than the cloner threads, it will print one more progress bar that shows total amount of progress made by both ongoing as well as pending clones. Both progress are accompanied by messages that show number of clone jobs in the respective categories and the amount of progress made by each of them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: The cephfs-shell utility is now packaged for RHEL 9 / CentOS 9 as required python dependencies are now available in EPEL9.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CephFS automatic metadata load (sometimes called &quot;default&quot;) balancer is now disabled by default. The new file system flag &lt;code&gt;balance_automate&lt;/code&gt; can be used to toggle it on or off. It can be enabled or disabled via &lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; balance_automate &amp;lt;bool&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;cephx&quot;&gt;CephX &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#cephx&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;cephx: key rotation is now possible using &lt;code&gt;ceph auth rotate&lt;/code&gt;. 
Previously, this was only possible by deleting and then recreating the key.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;dashboard&quot;&gt;Dashboard &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#dashboard&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Dashboard: Rearranged Navigation Layout: The navigation layout has been reorganized for improved usability and easier access to key features.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: CephFS Improvments&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing CephFS snapshots and clones, as well as snapshot schedule management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Manage authorization capabilities for CephFS resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Helpers on mounting a CephFS volume&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Dashboard: RGW Improvements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Support for managing bucket policies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add/Remove bucket tags&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ACL Management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Several UI/UX Improvements to the bucket form&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;mgr&quot;&gt;MGR &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#mgr&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;MGR/REST: The REST manager module will trim requests based on the &#39;max_requests&#39; option. Without this feature, and in the absence of manual deletion of old requests, the accumulation of requests in the array can lead to Out Of Memory (OOM) issues, resulting in the Manager crashing.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;MGR: An OpTracker to help debug mgr module issues is now available.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;monitoring&quot;&gt;Monitoring &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#monitoring&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Monitoring: Grafana dashboards are now loaded into the container at runtime rather than building a grafana image with the grafana dashboards. Official Ceph grafana images can be found in &lt;a href=&quot;http://quay.io/ceph/grafana&quot;&gt;quay.io/ceph/grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitoring: RGW S3 Analytics: A new Grafana dashboard is now available, enabling you to visualize per bucket and user analytics data, including total GETs, PUTs, Deletes, Copies, and list metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;mon_cluster_log_file_level&lt;/code&gt; and &lt;code&gt;mon_cluster_log_to_syslog_level&lt;/code&gt; options have been removed. Henceforth, users should use the new generic option &lt;code&gt;mon_cluster_log_level&lt;/code&gt; to control the cluster log level verbosity for the cluster log file as well as for all external entities.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rados&quot;&gt;RADOS &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rados&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;A POOL_APP_NOT_ENABLED&lt;/code&gt; health warning will now be reported if the application is not enabled for the pool irrespective of whether the pool is in use or not. 
Always tag a pool with an application using &lt;code&gt;ceph osd pool application enable&lt;/code&gt; command to avoid reporting of POOL_APP_NOT_ENABLED health warning for that pool. The user might temporarily mute this warning using &lt;code&gt;ceph health mute POOL_APP_NOT_ENABLED&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: For bug 62338 (&lt;a href=&quot;https://tracker.ceph.com/issues/62338&quot;&gt;https://tracker.ceph.com/issues/62338&lt;/a&gt;), we did not choose to condition the fix on a server flag in order to simplify backporting. As a result, in rare cases it may be possible for a PG to flip between two acting sets while an upgrade to a version with the fix is in progress. If you observe this behavior, you should be able to work around it by completing the upgrade or by disabling async recovery by setting osd_async_recovery_min_cost to a very large value on all OSDs until the upgrade is complete: &lt;code&gt;ceph config set osd osd_async_recovery_min_cost 1099511627776&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A detailed version of the &lt;code&gt;balancer status&lt;/code&gt; CLI command in the balancer module is now available. Users may run &lt;code&gt;ceph balancer status detail&lt;/code&gt; to see more details about which PGs were updated in the balancer&#39;s last optimization. See &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/balancer/&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/balancer/&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Read balancing may now be managed automatically via the balancer manager module. Users may choose between two new modes: &lt;code&gt;upmap-read&lt;/code&gt;, which offers upmap and read optimization simultaneously, or &lt;code&gt;read&lt;/code&gt;, which may be used to only optimize reads. For more detailed information see &lt;a href=&quot;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&quot;&gt;https://docs.ceph.com/en/latest/rados/operations/read-balancer/#online-optimization&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore has been optimized for better performance in snapshot-intensive workloads.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: BlueStore RocksDB LZ4 compression is now enabled by default to improve average performance and &quot;fast device&quot; space usage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: A new CRUSH rule type, MSR (Multi-Step Retry), allows for more flexible EC configurations.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RADOS: Scrub scheduling behavior has been improved.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;crimson%2Fseastore&quot;&gt;Crimson/Seastore &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#crimson%2Fseastore&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;Crimson&#39;s first tech preview release! Supporting RBD workloads on Replicated pools. 
For more information please visit: &lt;a href=&quot;https://ceph.io/en/news/crimson&quot;&gt;https://ceph.io/en/news/crimson&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rbd&quot;&gt;RBD &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rbd&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The &lt;code&gt;try-netlink&lt;/code&gt; mapping option for rbd-nbd has become the default and is now deprecated. If the NBD netlink interface is not supported by the kernel, then the mapping is retried using the legacy ioctl interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;Image::access_timestamp&lt;/code&gt; and &lt;code&gt;Image::modify_timestamp&lt;/code&gt; Python APIs now return timestamps in UTC.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: Support for cloning from non-user type snapshots is added. This is intended primarily as a building block for cloning new groups from group snapshots created with &lt;code&gt;rbd group snap create&lt;/code&gt; command, but has also been exposed via the new &lt;code&gt;--snap-id&lt;/code&gt; option for &lt;code&gt;rbd clone&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: The output of &lt;code&gt;rbd snap ls --all&lt;/code&gt; command now includes the original type for trashed snapshots.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_CLONE_FORMAT&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;clone_format&lt;/code&gt; optional parameter to &lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;RBD_IMAGE_OPTION_FLATTEN&lt;/code&gt; option has been exposed in Python bindings via &lt;code&gt;flatten&lt;/code&gt; optional parameter to &lt;code&gt;deep_copy&lt;/code&gt; and &lt;code&gt;migration_prepare&lt;/code&gt; methods.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RBD: &lt;code&gt;rbd-wnbd&lt;/code&gt; driver has gained the ability to multiplex image mappings. Previously, each image mapping spawned its own &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon, which lead to an excessive amount of TCP sessions and other resources being consumed, eventually exceeding Windows limits. With this change, a single &lt;code&gt;rbd-wnbd&lt;/code&gt; daemon is spawned per host and most OS resources are shared between image mappings. Additionally, &lt;code&gt;ceph-rbd&lt;/code&gt; service starts much faster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;rgw&quot;&gt;RGW &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#rgw&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;RGW: GetObject and HeadObject requests now return a x-rgw-replicated-at header for replicated objects. 
This timestamp can be compared against the Last-Modified header to determine how long the object took to replicate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted on decryption. A new tool, &lt;code&gt;radosgw-admin bucket resync encrypted multipart&lt;/code&gt;, can be used to identify these original multipart uploads. The &lt;code&gt;LastModified&lt;/code&gt; timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommended running this command against every bucket in every zone after all zones have upgraded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Introducing a new data layout for the Topic metadata associated with S3 Bucket Notifications, where each Topic is stored as a separate RADOS object and the bucket notification configuration is stored in a bucket attribute. This new representation supports multisite replication via metadata sync and can scale to many topics. This is on by default for new deployments, but is not enabled by default on upgrade. Once all radosgws have upgraded (on all zones in a multisite configuration), the &lt;code&gt;notification_v2&lt;/code&gt; zone feature can be enabled to migrate to the new format. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/zone-features&quot;&gt;https://docs.ceph.com/en/squid/radosgw/zone-features&lt;/a&gt; for details. The &quot;v1&quot; format is now considered deprecated and may be removed after 2 major releases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs with the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous &quot;book-keeping&quot; olh entries and plain placeholder entries. In some specific scenarios where clients made concurrent requests referencing the same object key, it was likely that a lot of extra index entries would accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run &lt;code&gt;radosgw-admin bucket check olh&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the extra entries will be safely removed. A distinct issue from the one described thus far, it is also possible that some versioned buckets are maintaining extra unlinked objects that are not listable from the S3/ Swift APIs. These extra objects are typically a result of PUT requests that exited abnormally, in the middle of a bucket index transaction - so the client would not have received a successful response. Bugs in prior releases made these unlinked objects easy to reproduce with any PUT request that was made on a bucket that was actively resharding. Besides the extra space that these hidden, unlinked objects consume, there can be another side effect in certain scenarios, caused by the nature of the failure mode that produced them, where a client of a bucket that was a victim of this bug may find the object associated with the key to be in an inconsistent state. 
To check whether a versioned bucket has unlinked entries, users can now run &lt;code&gt;radosgw-admin bucket check unlinked&lt;/code&gt;. If the &lt;code&gt;--fix&lt;/code&gt; flag is used, the unlinked objects will be safely removed. Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug, and was not previously capable of fixing these inaccuracies. This release resolves those issues and users can now expect that the existing &lt;code&gt;radosgw-admin bucket check&lt;/code&gt; command will produce correct results. We recommend that users with versioned buckets, especially those that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. Existing users can be adopted into new accounts. This process is optional but irreversible. See &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/account&quot;&gt;https://docs.ceph.com/en/squid/radosgw/account&lt;/a&gt; and &lt;a href=&quot;https://docs.ceph.com/en/squid/radosgw/iam&quot;&gt;https://docs.ceph.com/en/squid/radosgw/iam&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: On startup, radosgw and radosgw-admin now validate the &lt;code&gt;rgw_realm&lt;/code&gt; config option. Previously, they would ignore invalid or missing realms and go on to load a zone/zonegroup in a different realm. If startup fails with a &quot;failed to load realm&quot; error, fix or remove the &lt;code&gt;rgw_realm&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The radosgw-admin commands &lt;code&gt;realm create&lt;/code&gt; and &lt;code&gt;realm pull&lt;/code&gt; no longer set the default realm without &lt;code&gt;--default&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fixed an S3 Object Lock bug with PutObjectRetention requests that specify a RetainUntilDate after the year 2106. This date was truncated to 32 bits when stored, so a much earlier date was used for object lock enforcement. This does not effect PutBucketObjectLockConfiguration where a duration is given in Days. The RetainUntilDate encoding is fixed for new PutObjectRetention requests, but cannot repair the dates of existing object locks. Such objects can be identified with a HeadObject request based on the x-amz-object-lock-retain-until-date response header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;S3 &lt;code&gt;Get/HeadObject&lt;/code&gt; now supports the query parameter &lt;code&gt;partNumber&lt;/code&gt; to read a specific part of a completed multipart upload.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: The SNS CreateTopic API now enforces the same topic naming requirements as AWS: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Notification topics are now owned by the user that created them. By default, only the owner can read/write their topics. Topic policy documents are now supported to grant these permissions to other users. Preexisting topics are treated as if they have no owner, and any user can read/write them using the SNS API. If such a topic is recreated with CreateTopic, the issuing user becomes the new owner. 
For backward compatibility, all users still have permission to publish bucket notifications to topics owned by other users. A new configuration parameter, &lt;code&gt;rgw_topic_require_publish_policy&lt;/code&gt;, can be enabled to deny &lt;code&gt;sns:Publish&lt;/code&gt; permissions unless explicitly granted by topic policy.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: Fix issue with persistent notifications where the changes to topic param that were modified while persistent notifications were in the queue will be reflected in notifications. So if the user sets up topic with incorrect config (password/ssl) causing failure while delivering the notifications to broker, can now modify the incorrect topic attribute and on retry attempt to delivery the notifications, new configs will be used.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;RGW: in bucket notifications, the &lt;code&gt;principalId&lt;/code&gt; inside &lt;code&gt;ownerIdentity&lt;/code&gt; now contains the complete user ID, prefixed with the tenant ID.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;telemetry&quot;&gt;Telemetry &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#telemetry&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The &lt;code&gt;basic&lt;/code&gt; channel in telemetry now captures pool flags that allows us to better understand feature adoption, such as Crimson. To opt in to telemetry, run &lt;code&gt;ceph telemetry on&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;upgrading-from-quincy-or-reef&quot;&gt;&lt;a id=&quot;upgrade&quot;&gt;&lt;/a&gt;Upgrading from Quincy or Reef &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-quincy-or-reef&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). (This is optional, but recommended.) You can disable the autoscaler for all pools during the upgrade using the noautoscale flag.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;You can monitor the progress of your upgrade at each stage with the &lt;code&gt;ceph versions&lt;/code&gt; command, which will tell you what ceph version(s) are running for each type of daemon.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;upgrading-cephadm-clusters&quot;&gt;Upgrading cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same process is used to upgrade to future minor releases.&lt;/p&gt;&lt;p&gt;Upgrade progress can be monitored with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade status
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade progress can also be monitored with &lt;code&gt;ceph -s&lt;/code&gt; (which provides a simple progress bar) or more verbosely with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -W cephadm
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The upgrade can be paused or resumed with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade pause # to pause
        ceph orch upgrade resume # to resume
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or canceled with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph orch upgrade stop
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Quincy or Reef.&lt;/p&gt;&lt;h3 id=&quot;upgrading-non-cephadm-clusters&quot;&gt;Upgrading non-cephadm clusters &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-non-cephadm-clusters&quot;&gt;¶&lt;/a&gt;&lt;/h3&gt;&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Squid is automated (see above). For more information, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If your cluster is running Quincy (17.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run following command:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl -l | grep &amp;lt;daemon type&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ systemctl -l | grep mon | grep active
        ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; loaded active running &amp;nbsp; Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;noout&lt;/code&gt; flag for the duration of the upgrade. (Optional, but recommended.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd set noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mon.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once all monitors are up, verify that the monitor upgrade is complete by looking for the &lt;code&gt;squid&lt;/code&gt; string in the mon map. The command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph mon dump | grep min_mon_release
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;should report:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;min_mon_release 19 (squid)
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If it does not, that implies that one or more monitors hasn&#39;t been upgraded and restarted and/or the quorum does not include all monitors.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade &lt;code&gt;ceph-mgr&lt;/code&gt; daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mgr.target
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify the &lt;code&gt;ceph-mgr&lt;/code&gt; daemons are running by checking &lt;code&gt;ceph -s&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph -s
        ...
        services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
        ...
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-osd.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all CephFS MDS daemons. For each CephFS file system,&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Disable standby_replay:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; allow_standby_replay false
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status # ceph fs set &amp;lt;fs_name&amp;gt; max_mds 1
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Wait for the cluster to deactivate any non-zero ranks by periodically checking the status&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Take all standby MDS daemons offline on the appropriate hosts with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl stop ceph-mds@&amp;lt;daemon_name&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Confirm that only one MDS is online and is rank 0 for your FS&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph status
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restart all standby MDS daemons that were taken offline&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl start ceph-mds.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Restore the original value of &lt;code&gt;max_mds&lt;/code&gt; for the volume&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph fs set &amp;lt;fs_name&amp;gt; max_mds &amp;lt;original_max_mds&amp;gt;
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;/ol&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts&lt;/p&gt;&lt;pre&gt;&lt;code&gt;systemctl restart ceph-radosgw.target
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Complete the upgrade by disallowing pre-Squid OSDs and enabling all new Squid-only functionality&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd require-osd-release squid
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you set &lt;code&gt;noout&lt;/code&gt; at the beginning, be sure to clear it with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph osd unset noout
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see &lt;a href=&quot;https://docs.ceph.com/en/squid/cephadm/adoption/&quot;&gt;https://docs.ceph.com/en/squid/cephadm/adoption/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3 id=&quot;post-upgrade&quot;&gt;Post-upgrade &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#post-upgrade&quot;&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Verify the cluster is healthy with &lt;code&gt;ceph health&lt;/code&gt;. If your cluster is running Filestore, and you are upgrading directly from Quincy to Squid, a deprecation warning is expected. This warning can be temporarily muted using the following command&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph health mute OSD_FILESTORE
        &lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider enabling the &lt;a href=&quot;https://docs.ceph.com/en/squid/mgr/telemetry/&quot;&gt;telemetry module&lt;/a&gt; to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry preview-all
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with&lt;/p&gt;&lt;pre&gt;&lt;code&gt;ceph telemetry on
        &lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The public dashboard that aggregates Ceph telemetry can be found at &lt;a href=&quot;https://telemetry-public.ceph.com/&quot;&gt;https://telemetry-public.ceph.com/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2 id=&quot;upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;&lt;a id=&quot;upgrade-from-older-release&quot;&gt;&lt;/a&gt;Upgrading from pre-Quincy releases (like Pacific) &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#upgrading-from-pre-quincy-releases-(like-pacific)&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;You &lt;strong&gt;must&lt;/strong&gt; first upgrade to &lt;a href=&quot;https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/&quot;&gt;Quincy (17.2.z)&lt;/a&gt; or &lt;a href=&quot;https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/&quot;&gt;Reef (18.2.z)&lt;/a&gt; before upgrading to Squid.&lt;/p&gt;&lt;h2 id=&quot;thank-you-to-our-contributors&quot;&gt;&lt;a id=&quot;contributors&quot;&gt;&lt;/a&gt;Thank You to Our Contributors &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#thank-you-to-our-contributors&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.&lt;/p&gt;&lt;p&gt;If you are interested in helping test the next release, Tentacle, please join us at the &lt;a href=&quot;https://ceph-storage.slack.com/archives/C04Q3D7HV1T&quot;&gt;#ceph-at-scale&lt;/a&gt; Slack channel.&lt;/p&gt;&lt;p&gt;The Squid release would not be possible without the contributions of the community:&lt;/p&gt;&lt;p&gt;Aashish Sharma ▪ Abhishek Lekshmanan ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alexander Indenbaum ▪ Alexander Mikhalitsyn ▪ Alexander Proschek ▪ Alex Wojno ▪ Aliaksei Makarau ▪ Alice Zhao ▪ Ali Maredia ▪ Ali Masarwa ▪ Alvin Owyong ▪ Andreas Schwab ▪ Ankush Behl ▪ Anoop C S ▪ Anthony D Atri ▪ Anton Turetckii ▪ Aravind Ramesh ▪ Arjun Sharma ▪ Arun Kumar Mohan ▪ Athos Ribeiro ▪ Avan Thakkar ▪ barakda ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ caisan ▪ Casey Bodley ▪ chentao.2022 ▪ Chen Xu Qiang ▪ Chen Yuanrun ▪ Christian Rohmann ▪ Christian Theune ▪ Christopher Hoffman ▪ Christoph Grüninger ▪ Chunmei Liu ▪ cloudbehl ▪ Cole Mitchell ▪ Conrad Hoffmann ▪ Cory Snyder ▪ cuiming_yewu ▪ Cyril Duval ▪ daegon.yang ▪ daijufang ▪ Daniel Clavijo Coca ▪ Daniel Gryniewicz ▪ Daniel Parkes ▪ Daniel Persson ▪ Dan Mick ▪ Dan van der Ster ▪ David.Hall ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Didier Gazen ▪ Dillon Amburgey ▪ Divyansh Kamboj ▪ Dmitry Kvashnin ▪ Dnyaneshwari ▪ Dongsheng Yang ▪ Doug Whitfield ▪ dpandit ▪ Eduardo Roldan ▪ ericqzhao ▪ Ernesto Puerta ▪ ethanwu ▪ Feng Hualong ▪ Florent Carli ▪ Florian Weimer ▪ Francesco Pantano ▪ Frank Filz ▪ Gabriel Adrian Samfira ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Gilad Sid ▪ Gil Bregman ▪ gitkenan ▪ Gregory O&#39;Neill ▪ Guido Santella ▪ Guillaume Abrioux ▪ gukaifeng ▪ haoyixing ▪ hejindong ▪ Himura Kazuto ▪ hosomn ▪ hualong feng ▪ HuangWei ▪ igomon ▪ Igor Fedotov ▪ Ilsoo Byun ▪ Ilya Dryomov ▪ imtzw ▪ Ionut Balutoiu ▪ ivan ▪ Ivo Almeida ▪ Jaanus Torp ▪ jagombar ▪ Jakob Haufe ▪ James Lakin ▪ Jane Zhu ▪ Javier ▪ Jayanth Reddy ▪ J. 
Eric Ivancich ▪ Jiffin Tony Thottan ▪ Jimyeong Lee ▪ Jinkyu Yi ▪ John Mulligan ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Josh Durgin ▪ Josh Salomon ▪ Josh Soref ▪ Joshua Baergen ▪ jrchyang ▪ Juan Miguel Olmo Martínez ▪ junxiang Mu ▪ Justin Caratzas ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kim Minjong ▪ Konstantin Monakhov ▪ Konstantin Shalygin ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Laura Flores ▪ Lei Cao ▪ Leonid Usov ▪ lichaochao ▪ lightmelodies ▪ limingze ▪ liubingrun ▪ LiuBingrun ▪ liuhong ▪ Liu Miaomiao ▪ liuqinfei ▪ Lorenz Bausch ▪ Lucian Petrut ▪ Luis Domingues ▪ Luís Henriques ▪ luo rixin ▪ Manish M Yathnalli ▪ Marcio Roberto Starke ▪ Marc Singer ▪ Marcus Watts ▪ Mark Kogan ▪ Mark Nelson ▪ Matan Breizman ▪ Mathew Utter ▪ Matt Benjamin ▪ Matthew Booth ▪ Matthew Vernon ▪ mengxiangrui ▪ Mer Xuanyi ▪ Michaela Lang ▪ Michael Fritch ▪ Michael J. Kidd ▪ Michael Schmaltz ▪ Michal Nasiadka ▪ Mike Perez ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mitsumasa KONDO ▪ Mohamed Awnallah ▪ Mohan Sharma ▪ Mohit Agrawal ▪ molpako ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ Naman Munet ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Nico Wang ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Nobuto Murata ▪ Oguzhan Ozmen ▪ Omri Zeneva ▪ Or Friedmann ▪ Orit Wasserman ▪ Or Ozeri ▪ Parth Arora ▪ Patrick Donnelly ▪ Patty8122 ▪ Paul Cuzner ▪ Paulo E. Castro ▪ Paul Reece ▪ PC-Admin ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Pete Zaitcev ▪ Philip de Nier ▪ Philipp Hufnagl ▪ Pierre Riteau ▪ pilem94 ▪ Pinghao Wu ▪ Piotr Parczewski ▪ Ponnuvel Palaniyappan ▪ Prasanna Kumar Kalever ▪ Prashant D ▪ Pritha Srivastava ▪ QinWei ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raimund Sacherer ▪ Ramana Raja ▪ Redouane Kachach ▪ RickyMaRui ▪ Rishabh Dave ▪ rkhudov ▪ Ronen Friedman ▪ Rongqi Sun ▪ Roy Sahar ▪ Sachin Punadikar ▪ Sage Weil ▪ Sainithin Artham ▪ sajibreadd ▪ samarah ▪ Samarah ▪ Samuel Just ▪ Sascha Lucas ▪ sayantani11 ▪ Seena Fallah ▪ Shachar Sharon ▪ Shilpa Jagannath ▪ shimin ▪ ShimTanny ▪ Shreyansh Sancheti ▪ sinashan ▪ Soumya Koduri ▪ sp98 ▪ spdfnet ▪ Sridhar Seshasayee ▪ Sungmin Lee ▪ sunlan ▪ Super User ▪ Suyashd999 ▪ Suyash Dongre ▪ Taha Jahangir ▪ tanchangzhi ▪ Teng Jie ▪ tengjie5 ▪ Teoman Onay ▪ tgfree ▪ Theofilos Mouratidis ▪ Thiago Arrais ▪ Thomas Lamprecht ▪ Tim Serong ▪ Tobias Urdin ▪ tobydarling ▪ Tom Coldrick ▪ TomNewChao ▪ Tongliang Deng ▪ tridao ▪ Vallari Agrawal ▪ Vedansh Bhartia ▪ Venky Shankar ▪ Ville Ojamo ▪ Volker Theile ▪ wanglinke ▪ wangwenjuan ▪ wanwencong ▪ Wei Wang ▪ weixinwei ▪ Xavi Hernandez ▪ Xinyu Huang ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ xuxuehan ▪ Yaarit Hatuka ▪ Yantao xue ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ yite gu ▪ Yonatan Zaken ▪ Yongseok Oh ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ yu.wang ▪ Zac Dover ▪ Zack Cerza ▪ zhangjianwei ▪ Zhang Song ▪ Zhansong Gao ▪ Zhelong Zhao ▪ Zhipeng Li ▪ Zhiwei Huang ▪ 叶海丰 ▪ 胡玮文&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/</guid>
      <pubDate>Wed, 25 Sep 2024 16:00:00 GMT</pubDate>
      <author>Laura Flores</author>
    </item>
    <item>
      <title>Cephalocon 2024 Shirt Design Competition</title>
      <description>&lt;div class=&quot;to-lg:w-full-breakout&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;mb-8 lg:mb-10 xl:mb-12 w-full&quot; loading=&quot;lazy&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/cephalocon-2024-header-1200x500.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;The &lt;strong&gt;Cephalocon Conference&lt;/strong&gt; t-shirt is a perennial favorite and is literally worn as a badge of honor around the world. And the &lt;strong&gt;design&lt;/strong&gt; on the shirt is what makes it so special!&lt;/p&gt;&lt;p&gt;How would you like to be honored as the creator adorning this year’s object d’arte!, and receive a complimentary registration to this year’s &lt;a href=&quot;https://events.linuxfoundation.org/cephalocon/&quot;&gt;event&lt;/a&gt; at CERN, in Geneva, Switzerland this December, in recognition!&lt;/p&gt;&lt;p&gt;You don’t need to be an artist nor a graphics designer, as we are looking for simple conceptual renderings of your design - scan in a hand-drawn image or sketch with your favorite tool. All we ask is that it be original art (need to avoid licensing issues). Also, please limit to black/white if possible, or at most one additional color, to be budget friendly.&lt;/p&gt;&lt;p&gt;To submit your idea for consideration, please email your drawing file (PDF or JPG) to &lt;a href=&quot;mailto:cephalocon24@ceph.io&quot;&gt;cephalocon24@ceph.io&lt;/a&gt;. &lt;strong&gt;All submissions must be received no later than Friday, August 16th&lt;/strong&gt; - so get those creative juices flowing!!&lt;/p&gt;&lt;p&gt;The Conference planning team will review and announce the winner when the Conference Schedule is announced in September.&lt;/p&gt;&lt;p&gt;&lt;em&gt;2023’s Image for reference, in case you need inspiration&lt;/em&gt;&lt;/p&gt;&lt;img align=&quot;left&quot; width=&quot;300&quot; height=&quot;300&quot; src=&quot;https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/images/Ceph-23-TShirt-FNL-Isolated-Back.png&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;/div&gt;</description>
      <link>https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</link>
      <guid isPermaLink="false">https://ceph.io/en/news/blog/2024/cephalocon24-tshirt-contest/</guid>
      <pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
      <author>Anthony Lewitt</author>
    </item>
    <item>
      <title>v18.2.4 Reef released</title>
      <description>&lt;div class=&quot;intro-para richtext&quot;&gt;&lt;p&gt;This is the fourth backport release in the Reef series. We recommend that all users update to this release.&lt;/p&gt;&lt;p&gt;An early build of this release was accidentally exposed and packaged as 18.2.3 by the Debian project in April. That 18.2.3 release should not be used. The official release was re-tagged as v18.2.4 to avoid further confusion.&lt;/p&gt;&lt;p&gt;v18.2.4 container images, now based on CentOS 9, may be incompatible on older kernels (e.g., Ubuntu 18.04) due to differences in thread creation methods. Users upgrading to v18.2.4 container images on older OS versions may encounter crashes during pthread_create. For workarounds, refer to the related tracker. However, we recommend upgrading your OS to avoid this unsupported combination. Related tracker: &lt;a href=&quot;https://tracker.ceph.com/issues/66989&quot;&gt;https://tracker.ceph.com/issues/66989&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;notable-changes&quot;&gt;Notable Changes &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#notable-changes&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;RBD: When diffing against the beginning of time (&lt;code&gt;fromsnapname == NULL&lt;/code&gt;) in fast-diff mode (&lt;code&gt;whole_object == true&lt;/code&gt; with &lt;code&gt;fast-diff&lt;/code&gt; image feature enabled and valid), diff-iterate is now guaranteed to execute locally if exclusive lock is available. This brings a dramatic performance improvement for QEMU live disk synchronization and backup use cases.&lt;/li&gt;&lt;li&gt;RADOS: &lt;code&gt;get_pool_is_selfmanaged_snaps_mode&lt;/code&gt; C++ API has been deprecated due to being prone to false negative results. 
Its safer replacement is &lt;code&gt;pool_is_in_selfmanaged_snaps_mode&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;RBD: The option &lt;code&gt;--image-id&lt;/code&gt; has been added to &lt;code&gt;rbd children&lt;/code&gt; CLI command, so it can be run for images in the trash.&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;changelog&quot;&gt;Changelog &lt;a class=&quot;link-anchor&quot; href=&quot;https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/#changelog&quot;&gt;¶&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;(reef) node-proxy: improve http error handling in fetch_oob_details (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55538&quot;&gt;pr#55538&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;[rgw][lc][rgw_lifecycle_work_time] adjust timing if the configured end time is less than the start time (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54866&quot;&gt;pr#54866&lt;/a&gt;, Oguzhan Ozmen)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;add checking for rgw frontend init (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54844&quot;&gt;pr#54844&lt;/a&gt;, zhipeng li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;admin/doc-requirements: bump Sphinx to 5&lt;span&gt;&lt;/span&gt;.0&lt;span&gt;&lt;/span&gt;.2 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55191&quot;&gt;pr#55191&lt;/a&gt;, Nizamudeen A)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport of fixes for 63678 and 63694 (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55104&quot;&gt;pr#55104&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;backport rook/mgr recent changes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55706&quot;&gt;pr#55706&lt;/a&gt;, Redouane Kachach)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-menv:fix typo in README (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55163&quot;&gt;pr#55163&lt;/a&gt;, yu.wang)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: add missing import (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56259&quot;&gt;pr#56259&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix a bug in _check_generic_reject_reasons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54705&quot;&gt;pr#54705&lt;/a&gt;, Kim Minjong)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Fix migration from WAL to data with no DB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55497&quot;&gt;pr#55497&lt;/a&gt;, Igor Fedotov)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix mpath device support (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53539&quot;&gt;pr#53539&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fix zap_partitions() in devices&lt;span&gt;&lt;/span&gt;.lvm&lt;span&gt;&lt;/span&gt;.zap (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55477&quot;&gt;pr#55477&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: fixes fallback to stat in is_device and is_partition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54629&quot;&gt;pr#54629&lt;/a&gt;, Teoman ONAY)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: update functional testing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56857&quot;&gt;pr#56857&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: use &#39;no workqueue&#39; options with dmcrypt (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55335&quot;&gt;pr#55335&lt;/a&gt;, Guillaume Abrioux)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph-volume: Use safe accessor to get TYPE info (&lt;a 
href=&quot;https://github.com/ceph/ceph/pull/56323&quot;&gt;pr#56323&lt;/a&gt;, Dillon Amburgey)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: add support for openEuler OS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56361&quot;&gt;pr#56361&lt;/a&gt;, liuqinfei)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ceph&lt;span&gt;&lt;/span&gt;.spec&lt;span&gt;&lt;/span&gt;.in: remove command-with-macro line (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57357&quot;&gt;pr#57357&lt;/a&gt;, John Mulligan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm/nvmeof: scrape nvmeof prometheus endpoint (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56108&quot;&gt;pr#56108&lt;/a&gt;, Avan Thakkar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add mount for nvmeof log location (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55819&quot;&gt;pr#55819&lt;/a&gt;, Roy Sahar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: Add nvmeof to autotuner calculation (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56100&quot;&gt;pr#56100&lt;/a&gt;, Paul Cuzner)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: add timemaster to timesync services list (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56307&quot;&gt;pr#56307&lt;/a&gt;, Florent Carli)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: adjust the ingress ha proxy health check interval (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56286&quot;&gt;pr#56286&lt;/a&gt;, Jiffin Tony Thottan)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: create ceph-exporter sock dir if it&#39;s not present (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56102&quot;&gt;pr#56102&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: fix get_version for nvmeof (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56099&quot;&gt;pr#56099&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: improve cephadm pull usage message (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56292&quot;&gt;pr#56292&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: remove restriction for crush device classes (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56106&quot;&gt;pr#56106&lt;/a&gt;, Seena Fallah)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephadm: rm podman-auth&lt;span&gt;&lt;/span&gt;.json if removing last cluster (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56105&quot;&gt;pr#56105&lt;/a&gt;, Adam King)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-shell: remove distutils Version classes because they&#39;re deprecated (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54119&quot;&gt;pr#54119&lt;/a&gt;, Venky Shankar, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cephfs-top: include the missing fields in --dump output (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54520&quot;&gt;pr#54520&lt;/a&gt;, Jos Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client/fuse: handle case of renameat2 with non-zero flags (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55002&quot;&gt;pr#55002&lt;/a&gt;, Leonid Usov, Shachar Sharon)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: append to buffer list to save all result from wildcard command (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53893&quot;&gt;pr#53893&lt;/a&gt;, Rishabh Dave, Jinmyeong Lee, Jimyeong Lee)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: call _getattr() for -ENODATA returned _getvxattr() calls (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54404&quot;&gt;pr#54404&lt;/a&gt;, Jos 
Collin)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: fix leak of file handles (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56122&quot;&gt;pr#56122&lt;/a&gt;, Xavi Hernandez)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: Fix return in removexattr for xattrs from &lt;code&gt;system&amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;.&lt;/code&gt; namespace (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55803&quot;&gt;pr#55803&lt;/a&gt;, Anoop C S)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: queue a delay cap flushing if there are ditry caps/snapcaps (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54466&quot;&gt;pr#54466&lt;/a&gt;, Xiubo Li)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;client: readdir_r_cb: get rstat for dir only if using rbytes for size (&lt;a href=&quot;https://github.com/ceph/ceph/pull/53359&quot;&gt;pr#53359&lt;/a&gt;, Pinghao Wu)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/arrow: don&#39;t treat warnings as errors (&lt;a href=&quot;https://github.com/ceph/ceph/pull/57375&quot;&gt;pr#57375&lt;/a&gt;, Casey Bodley)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake/modules/BuildRocksDB&lt;span&gt;&lt;/span&gt;.cmake: inherit parent&#39;s CMAKE_CXX_FLAGS (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55502&quot;&gt;pr#55502&lt;/a&gt;, Kefu Chai)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;cmake: use or turn off liburing for rocksdb (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54122&quot;&gt;pr#54122&lt;/a&gt;, Casey Bodley, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/options: Set LZ4 compression for bluestore RocksDB (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55197&quot;&gt;pr#55197&lt;/a&gt;, Mark Nelson)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common/weighted_shuffle: don&#39;t feed std::discrete_distribution with all-zero weights (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55153&quot;&gt;pr#55153&lt;/a&gt;, Radosław Zarzyński)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;common: resolve config proxy deadlock using refcounted pointers (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54373&quot;&gt;pr#54373&lt;/a&gt;, Patrick Donnelly)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DaemonServer&lt;span&gt;&lt;/span&gt;.cc: fix config show command for RGW daemons (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55077&quot;&gt;pr#55077&lt;/a&gt;, Aishwarya Mathuria)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add ceph-exporter package (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56541&quot;&gt;pr#56541&lt;/a&gt;, Shinya Hayashi)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;debian: add missing bcrypt to ceph-mgr &lt;span&gt;&lt;/span&gt;.requires to fix resulting package dependencies (&lt;a href=&quot;https://github.com/ceph/ceph/pull/54662&quot;&gt;pr#54662&lt;/a&gt;, Thomas Lamprecht)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst - fix typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55384&quot;&gt;pr#55384&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture&lt;span&gt;&lt;/span&gt;.rst: improve rados definition (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55343&quot;&gt;pr#55343&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: correct typo (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56012&quot;&gt;pr#56012&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: improve some paragraphs (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55399&quot;&gt;pr#55399&lt;/a&gt;, Zac 
Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/architecture: remove pleonasm (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55933&quot;&gt;pr#55933&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm - edit t11ing (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55482&quot;&gt;pr#55482&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm/services: Improve monitoring&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56290&quot;&gt;pr#56290&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: correct nfs config pool name (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55603&quot;&gt;pr#55603&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: improve host-management&lt;span&gt;&lt;/span&gt;.rst (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56111&quot;&gt;pr#56111&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephadm: Improve multiple files (&lt;a href=&quot;https://github.com/ceph/ceph/pull/56130&quot;&gt;pr#56130&lt;/a&gt;, Anthony D&#39;Atri)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs/client-auth&lt;span&gt;&lt;/span&gt;.rst: correct ``fs authorize cephfs1 /dir1 clie… (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55246&quot;&gt;pr#55246&lt;/a&gt;, 叶海丰)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: edit add-remove-mds (&lt;a href=&quot;https://github.com/ceph/ceph/pull/55648&quot;&gt;pr#55648&lt;/a&gt;, Zac Dover)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;doc/cephfs: fix architecture link to correct relative path (&lt;a href=

@TonyRL TonyRL merged commit 0a79a86 into DIYgod:master Oct 1, 2024
27 checks passed
bxb100 added a commit to Lemon-Fork-Repository/RSSHub that referenced this pull request Oct 3, 2024
* origin/cnki:
  chore(deps-dev): bump globals from 15.9.0 to 15.10.0 (DIYgod#16985)
  chore(deps): bump tldts from 6.1.48 to 6.1.49 (DIYgod#16986)
  feat(route): add 中国证券监督管理委员会政府信息公开 (DIYgod#16982)
  feat(route): github discussion add category filter (DIYgod#16983)
  feat(routes/ceph): add ceph blog (DIYgod#16980)
  fix(route): douyin (DIYgod#16981)
  feat: add a route to Bangumi's user favorites list (DIYgod#16971)
  feat(route): add github discussions route (DIYgod#16977)
  chore(deps-dev): bump @typescript-eslint/eslint-plugin from 8.7.0 to 8.8.0 (DIYgod#16976)
  chore(deps-dev): bump @typescript-eslint/parser from 8.7.0 to 8.8.0 (DIYgod#16975)
  fix(route/rss): Manually set RSS version when parsing (DIYgod#16973)
artefaritaKuniklo pushed a commit to artefaritaKuniklo/RSSHub that referenced this pull request Dec 13, 2024
* feat(routes/ceph): add ceph blog

* Apply suggestions from code review

Co-authored-by: Tony <TonyRL@users.noreply.github.com>

* fix(route/ceph): add 1hour cache for index, fix url double slash

* fix(route/ceph): remove unnecessary caching

---------
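
The squashed commit list above ends with two small route fixes: a doubled slash when composing article URLs, and the removal of an extra caching layer. The double-slash bug is a common scraping pitfall: joining a base URL that ends in `/` with an href that starts with `/` produces links like `https://ceph.io/en/news/blog//2024/...`. Below is a minimal TypeScript sketch of that kind of normalization; the `joinUrl` helper and the example values are illustrative assumptions, not the code merged in this PR.

```ts
// Hypothetical helper, not the merged RSSHub code: join a base URL and a
// path while collapsing the "//" that appears when the base ends with a
// slash and the path begins with one.
function joinUrl(base: string, path: string): string {
    return `${base.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

// Both pieces carry a slash; the joined URL has exactly one between them.
const link = joinUrl('https://ceph.io/en/news/blog/', '/2024/v19-2-0-squid-released/');
console.log(link); // https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/
```

As for the caching change, the follow-up commits first add a one-hour cache for the blog index and then remove it as unnecessary, which suggests the route ended up relying on RSSHub's standard response caching rather than a bespoke per-route layer.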