
docs: nvme: fix grammar in nvme-pci-endpoint-target.rst

Notable changes:

 - Use "an NVMe" instead of "a NVMe" throughout the document
 - Fix incorrect phrasing such as "will is discoverable" -> "is
   discoverable"
 - Ensure consistent and proper article usage for clarity.

Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Author: Alok Tiwari, 2025-06-24 21:16:34 -07:00
Committed by: Christoph Hellwig
commit 3be8ad8caa
parent b5cd5f1e50

@@ -6,21 +6,21 @@ NVMe PCI Endpoint Function Target
 :Author: Damien Le Moal <dlemoal@kernel.org>

-The NVMe PCI endpoint function target driver implements a NVMe PCIe controller
-using a NVMe fabrics target controller configured with the PCI transport type.
+The NVMe PCI endpoint function target driver implements an NVMe PCIe controller
+using an NVMe fabrics target controller configured with the PCI transport type.

 Overview
 ========

-The NVMe PCI endpoint function target driver allows exposing a NVMe target
+The NVMe PCI endpoint function target driver allows exposing an NVMe target
 controller over a PCIe link, thus implementing an NVMe PCIe device similar to a
 regular M.2 SSD. The target controller is created in the same manner as when
 using NVMe over fabrics: the controller represents the interface to an NVMe
 subsystem using a port. The port transfer type must be configured to be
 "pci". The subsystem can be configured to have namespaces backed by regular
 files or block devices, or can use NVMe passthrough to expose to the PCI host an
-existing physical NVMe device or a NVMe fabrics host controller (e.g. a NVMe TCP
-host controller).
+existing physical NVMe device or an NVMe fabrics host controller (e.g. a NVMe
+TCP host controller).

 The NVMe PCI endpoint function target driver relies as much as possible on the
 NVMe target core code to parse and execute NVMe commands submitted by the PCIe
@@ -181,10 +181,10 @@ Creating an NVMe endpoint device is a two step process. First, an NVMe target
 subsystem and port must be defined. Second, the NVMe PCI endpoint device must
 be setup and bound to the subsystem and port created.

-Creating a NVMe Subsystem and Port
-----------------------------------
+Creating an NVMe Subsystem and Port
+-----------------------------------

-Details about how to configure a NVMe target subsystem and port are outside the
+Details about how to configure an NVMe target subsystem and port are outside the
 scope of this document. The following only provides a simple example of a port
 and subsystem with a single namespace backed by a null_blk device.
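The null_blk-backed example referred to above can be sketched with standard nvmet configfs operations. This is only an illustration, not a quote from the patched file: it assumes null_blk exposes /dev/nullb0 and reuses the subsystem NQN nvmepf.0.nqn that appears in the next hunk::

   # modprobe null_blk nr_devices=1
   # mkdir /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn
   # echo 1 > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/attr_allow_any_host
   # mkdir /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1
   # echo -n "/dev/nullb0" > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1/device_path
   # echo 1 > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1/enable

The first mkdir creates the subsystem, the second creates namespace 1, device_path points that namespace at the null_blk block device, and the final write enables the namespace. The port created with the "pci" transfer type is then linked to this subsystem with the ln -s command shown below.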
@@ -234,8 +234,8 @@ Finally, create the target port and link it to the subsystem::
   # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
        /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn

-Creating a NVMe PCI Endpoint Device
------------------------------------
+Creating an NVMe PCI Endpoint Device
+------------------------------------

 With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
 device can now be created and enabled. The NVMe PCI endpoint target driver
@@ -303,7 +303,7 @@ device controller::
   nvmet_pci_epf nvmet_pci_epf.0: Enabling controller

-On the host side, the NVMe PCI endpoint function target device will is
+On the host side, the NVMe PCI endpoint function target device is
 discoverable as a PCI device, with the vendor ID and device ID as configured::

   # lspci -n
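As a final aside, lspci -n prints the PCI slot address, the class code, and the vendor:device ID pair, so a successfully enabled endpoint function should appear with the NVMe class code 0108 and whatever vendor and device IDs were configured for the function. A purely hypothetical output line (the IDs 1234:5678 are placeholders, not values from the document) would look like::

   01:00.0 0108: 1234:5678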